Toward the balance of power in science publishing
The current scientific publishing model is toxic and unfair, and eLife is doing something about it.
The current scientific publishing model can be summarized by the following steps:
Submit — Review — Accept/Reject
Let me use my own experience to illustrate the process. Last year, my team and I submitted a manuscript reporting an observation with important implications for the prevention of hip fractures. Our findings were based on the world's longest-running study in the field, which had meticulously tracked over 4000 individuals for 20 years. Our analysis was, according to the reviewers, rigorous and flawless.
Following submission, the manuscript was reviewed by five anonymous reviewers. After three months, we received their feedback and meticulously addressed each point. We then resubmitted the revised manuscript along with our responses. Nevertheless, despite an additional 30-day review period, our manuscript was ultimately declined. The primary reason for the rejection was a seemingly minor point on which we disagreed with one of the reviewers and the editor. Although we believed the rejection was unfair, we decided not to pursue the matter further.
Our experience is a common occurrence in the existing scientific publishing culture. As authors, we do not know the reviewers' expertise or background. Furthermore, we are sometimes coerced into performing additional analyses or experiments under the threat of manuscript rejection. In many cases, these requests are extraneous or do not contribute to the central message. For instance, we have previously received reviewers' requests to perform a stepwise regression, which is widely considered a flawed statistical approach. Ultimately, we authors are frequently at the mercy of powerful editors and the whims of anonymous reviewers.
Problems with peer-review
Despite being a relatively recent addition to the landscape of scientific publishing, peer review has emerged as the standard method for evaluating manuscripts, allocating research funding, and appraising scientists. It is at the heart of science.
Peer review is supposed to filter out flawed science, identify instances of fraud, and ultimately select the best work for publication. Ideally, peer reviewers would be active researchers with significant expertise in the subject matter, holding scholarly ranks and credentials comparable to those of the authors. In reality, that is often not the case. Even junior scientists and PhD students are sometimes tasked with evaluating manuscripts authored by their more experienced colleagues, since the latter are often too preoccupied to conduct the reviews themselves, which raises an ethical issue. The consequence of this practice is that peer review tends to be highly subjective, unreliable, and difficult to reproduce, leading to outcomes little better than a game of chance [2].
Many scientists contend that the current peer review system is broken [1], and to a certain extent, I concur. Any scientist can attest to having received unprofessional reviews that lacked constructive feedback or were unduly harsh and cruel. As a result, the peer review system causes stress and anxiety, as reviewers may act unprofessionally, engage in unproductive arguments, or display biases (including social biases) that degrade the quality of the evaluation.
The anonymous culture of peer review renders the existing publishing model inequitable. This is primarily due to the disproportionate power wielded by journal editors and reviewers, which leaves authors vulnerable and subject to their whims. The model is unfair because reviewers, who often believe that they possess superior knowledge, tend to dismiss the hard-earned expertise of authors, who may have spent decades in their field of study.
Enter the eLife model
eLife, a selective journal, is poised to shift the power dynamics between authors and editors.
In October of last year, eLife made an important announcement declaring that they would be discontinuing their customary accept/reject approach to decision-making. Instead, they will publish every manuscript that their editors deem appropriate for review. Currently, to my knowledge, only 30% of submitted manuscripts are selected for review.
eLife’s new publication process can be summarized as follows:
· Submission. Authors post their manuscript on a preprint server. An eLife editor then decides whether the preprint is sent out for review.
· Review. If the manuscript is selected for review, a consultative process involving the editor and reviewers is initiated, and a publication fee is collected. Authors receive an eLife Assessment along with the reviewers’ comments and recommendations on how to improve the manuscript.
· Reviewed Preprint Publication. The manuscript is published on the eLife website as a ‘Reviewed Preprint’ along with the eLife Assessment and public reviews. The Reviewed Preprint is citable.
· Revision. Authors may choose to revise the manuscript and resubmit. In that case, the revised manuscript is published as a new Reviewed Preprint with updated reviews and an updated assessment.
· Version of Record (VoR). Following the reviews, authors can choose to publish the Reviewed Preprint as a VoR, which will be indexed in PubMed.
In this process, the authors control the publication of their manuscripts. The new model allows authors to decide when to publish the version they want to see as the VoR, rather than the version the editor and reviewers want to see. It makes a lot of sense.
In a FAQ about eLife’s new editorial policy it is stated that: "As far as eLife is concerned, authors can do anything with their paper that they want to. It is their paper, not ours. This includes, but is not limited to, having their work assessed by a traditional journal on the basis of eLife reviews. We expect most authors will not find this necessary or desirable, but we will fully support whatever choice the authors make."
Great!
Moreover, every paper will receive an "eLife Assessment" [3], which is based on two dimensions: significance and strength of evidence. eLife employs a semi-quantitative scale to evaluate the quality of a manuscript (Figure). I wish all other scientific journals would adopt this assessment methodology, since it would simplify the task of funding agencies, as well as hiring and promotion committees.
Naturally, there is a concern that authors may game the system by publishing as many papers as they wish, regardless of reviewers’ comments. Another danger is that reviewers will be reluctant to comment, since their comments may be ignored anyway. Others worry that the change will undermine eLife's prestige as a selective journal.
I remain optimistic that such a scenario will not transpire, as scientists who choose eLife to disseminate their research are conscientious individuals. Furthermore, a triage system (the submission step) remains in place, whereby only a small fraction of submitted preprints are considered for review. Thus, I am confident that eLife will continue to be a fair and discerning journal.
I hope that other journals, regardless of their prestige, will follow eLife's lead and adopt their publishing model. As for myself, I have published my work in eLife and remain supportive of their new approach to publishing.
___
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798
[2] https://experimentalhistory.substack.com/p/the-rise-and-fall-of-peer-review
[3] https://elifesciences.org/inside-elife/db24dd46/elife-s-new-model-what-is-an-elife-assessment