Disrupting the equation ‘journal title = research quality’
Until now, the quality of research and the prestige of researchers have largely been judged by the journal in which they publish their work. The journal eLife is going to disrupt that equation for good.
How would you assess the quality of a scientific paper? The best way is to read the paper.
Now, imagine that you sit on a hiring and promotion committee, and your task is to evaluate applicants for professorial promotion. Each applicant lists hundreds of papers in their CV. Obviously, you don't have time to read all the papers. And even if you could read them all, it may be difficult to assess the body of work, especially if you lack specialist knowledge of the field. You need a quick and inexpensive way to assess a research paper, or an applicant's collection of papers.
A common practice among funding agencies and hiring & promotion committees is to assess the quality of a research work based on the title of the journal in which it is published. As a result, papers published in a general journal such as JAMA, BMJ, New England Journal of Medicine, Lancet, Nature or Science are commonly regarded as more important and of superior quality compared to those appearing in a specialist journal such as (my favorite) Osteoporosis International or Bone. This is attributed, in part, to the fact that general journals tend to have higher impact factors than specialist journals. It is also driven by perceived prestige, with journals like Cell, Nature and Science (collectively 'CNS') being regarded as exclusive venues for high-quality work.
But what is 'quality' anyway? In my view, the notion of 'quality' in research is multifaceted, and can be defined by several methodological factors, including research design, data collection, data analysis, transparency, and reproducibility. It is worth noting that a substantial number of papers featured in CNS or other high-impact journals are marred by poorly designed research and inadequate analysis, which can lead to a lack of reproducibility. The current crisis of irreproducibility in science is partly attributed to the pursuit of publishing in luxury high-impact journals.
The presumed correlation between research quality and journal titles constitutes an ecological fallacy. The fact that papers appear in CNS or high-impact journals does not automatically guarantee their superiority in terms of quality. Indeed, a considerable number of such papers may be useless or even erroneous. Conversely, numerous papers published in lower-impact journals provide robust data with significant implications for clinical practice. In my area of expertise, osteoporosis research, some of the groundbreaking papers on fracture risk assessment were published in journals with impact factors of around 3, and would likely have been desk-rejected by CNS. It is imperative to emphasize that research quality should never be evaluated solely on the basis of journal title or journal impact factor.
The quality of research ought to be assessed at the paper level, not the journal level. I believe that any scientist would agree with me on this statement. However, we as active scientists have done nothing to change the status quo because, admittedly, most of us have benefited from the existing culture. Most of us still use journal titles as proxies for the quality of a research paper. In some institutions, the lack of publications in journals with an impact factor greater than 10 is considered an indication of a scientist's incompetence!
Enter eLife. In October 2022, eLife announced that all manuscripts would undergo a process called the 'eLife Assessment', which evaluates them on two criteria: the significance of the findings and the strength of the evidence.
The significance of the findings is classified on a five-level scale:
A. Landmark: findings with profound implications that are expected to have widespread influence;
B. Fundamental: findings that substantially advance our understanding of major research questions;
C. Important: findings that have theoretical or practical implications beyond a single subfield;
D. Valuable: findings that have theoretical or practical implications for a subfield;
E. Useful: findings that have focused importance and scope.
The strength of evidence is assessed on a six-level scale:
1. Exceptional: exemplary use of existing approaches that establish new standards for a field;
2. Compelling: evidence that features methods, data and analyses more rigorous than the current state-of-the-art;
3. Convincing: appropriate and validated methodology in line with current state-of-the-art;
4. Solid: methods, data and analyses broadly support the claims with only minor weaknesses;
5. Incomplete: main claims are only partially supported;
6. Inadequate: methods, data and analyses do not support the primary claims.
The two evaluation dimensions are independent. So, a technically impressive work that is primarily of interest to a specialized field would be classified as D1, while a groundbreaking idea supported only by preliminary data would receive a classification of A5. Both D1 and A5 works merit publication and readership.
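To make the two-dimensional badge concrete, here is a minimal sketch in Python that expands a badge such as 'D1' into its two labels. The labels are taken from the scales above; the code itself is my own illustration, not anything eLife actually runs.

```python
# Hypothetical sketch of the eLife Assessment badge as a small data model.
# Significance (A-E) and strength of evidence (1-6) are independent axes.

SIGNIFICANCE = {
    "A": "Landmark",
    "B": "Fundamental",
    "C": "Important",
    "D": "Valuable",
    "E": "Useful",
}

STRENGTH = {
    1: "Exceptional",
    2: "Compelling",
    3: "Convincing",
    4: "Solid",
    5: "Incomplete",
    6: "Inadequate",
}

def describe_badge(badge: str) -> str:
    """Expand a badge like 'D1' into '<significance> finding, <strength> evidence'."""
    sig, strength = badge[0], int(badge[1:])
    return f"{SIGNIFICANCE[sig]} finding, {STRENGTH[strength]} evidence"

print(describe_badge("D1"))  # Valuable finding, Exceptional evidence
print(describe_badge("A5"))  # Landmark finding, Incomplete evidence
```

Because the axes are independent, all 30 combinations are in principle possible, which is exactly why a single journal title cannot stand in for this information.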
When perusing a scientific paper, we often ask three questions: (a) how important is the finding, (b) how reliable is it, and (c) what should be done if it can be relied on? The eLife Assessment answers these questions precisely. I really like this system of assessment.
More significantly, from January 2023, eLife will do away with accept/reject decisions and instead publish all manuscripts that are selected for peer review. In other words, if an eLife editor selects a manuscript for review, it will be published either as a "Reviewed Preprint" (RP) or as a "Version of Record" (VoR). The author, not the editor, decides the format of the publication. Whether it is an RP or a VoR, the publication will be accompanied by
• a short eLife Assessment using the two criteria above;
• peer reviewers' comments; and
• authors' response to the comments.
That is a significant change.
The change has generated mixed responses from the scientific community. Some of my colleagues who have published their work in eLife thought the change would spell the end of eLife as a highly selective journal. Others thought that by relinquishing accept/reject decisions, eLife would undermine its own prestige. However, several prominent scientists applauded the change, believing that it would revolutionize the way scientific results are shared.
I endorse the approach taken by eLife, and I believe that the new policy will not diminish the prestige that eLife has enjoyed since its establishment in 2012. It is incorrect to think that eLife will publish every manuscript it receives; it will publish every manuscript that is selected for peer review. How many manuscripts are selected for review? While I am not entirely certain, my understanding is that eLife currently sends only about 30% of submitted manuscripts out for review, with the remaining 70% not being selected. In the new publishing model the bar may be lower, but eLife will continue to be a selective journal.
I strongly support eLife's bold move, because I consider that the culture of equating journal titles with research quality should have been eradicated a long time ago. It is not just a matter of being lazy, but it is fundamentally wrong to rely on journal titles or impact factors as substitutes for research quality when making decisions related to promotion, hiring, fellowships, and grants.
In my country (Australia), when examining applications for high-profile fellowships, the National Health and Medical Research Council or NHMRC (Australia's equivalent of the NIH) currently evaluates each applicant based on only 10 publications over the past 10 years, without taking into account the H-index or impact factors. The review panel assesses the quality and impact of each publication based on its accompanying narrative. Nonetheless, reviewers tend to (privately) give more weight to studies published in prestigious journals like CNS.
If every paper were to receive the kind of two-dimensional badge introduced by eLife, it would make the task of the NHMRC and of hiring & promotion committees much easier. I hope this becomes a reality in the future.