Editorial Peer Reviewers’ Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?

Martin Fenner
April 12, 2010

Peer review is central to how we evaluate science, and therefore to how journal papers, grants, and jobs are awarded. It is done in many different ways and has changed dramatically over the last 25 years. But its purpose remains twofold: to improve the quality of research by providing feedback, and to evaluate that quality. The evaluation serves as a filter both for limited resources (e.g. grants or jobs; publication in a journal is no longer a limited resource) and as a guide that helps other researchers focus on the most relevant work in their field.

It is therefore surprising that relatively little research has been done on peer review itself. Most discussions focus on its shortcomings, and the arguments are often based on personal experience and/or interests. Good research on peer review can help improve the peer review process. Last week such a paper was published in PLoS ONE.

Flickr image by Gideon Burton.

Richard Kravitz and colleagues looked at the recommendations of peer reviewers and how they influenced the editorial decision to publish or reject a paper. The study covered 6213 manuscripts received from 2004 to 2008 at the Journal of General Internal Medicine (JGIM); four of the study's authors are current or former JGIM editors in chief.

Figure 1. Flow chart showing outcome of reviews pertaining to 2264 manuscripts undergoing external peer review at the Journal of General Internal Medicine.

At JGIM, submitted manuscripts were first screened by an editor in chief and a deputy editor. Most manuscripts were rejected at this step; 2264 manuscripts (36%) were sent out for peer review. 2916 reviewers wrote a total of 5581 reviews (1–4 per manuscript), each including comments and a recommendation. Eventually, 43% of these manuscripts were accepted for publication.

Overall, there was agreement among all reviewers for just over half of the manuscripts (54.6%); moreover, editors did not follow these unanimous recommendations for another 10% of manuscripts:

Table 1. Likelihood of Initial Decision to Reject in Relation to Reviewer Agreement.

The inter-reviewer agreement was only slightly higher than would be expected by chance, and it was lower than the agreement between recommendations made by the same reviewer for different manuscripts. In contrast, there was little correlation between editorial decisions on different manuscripts handled by the same editor.
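Agreement "higher than expected by chance" is usually quantified with a chance-corrected statistic such as Cohen's kappa, which subtracts the agreement two raters would reach by labelling independently. A minimal sketch, using hypothetical reviewer recommendations rather than data from the study:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items where both raters match.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each rater labelled independently
    # according to their own marginal label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations: R = reject, A = accept/revise
r1 = ["R", "R", "A", "R", "A", "A", "R", "A"]
r2 = ["R", "A", "A", "R", "A", "R", "R", "R"]
print(round(cohens_kappa(r1, r2), 3))  # → 0.25
```

Here the two reviewers agree on 5 of 8 manuscripts (62.5%), but since half that agreement is expected by chance from their marginal reject rates, kappa is only 0.25 — illustrating why raw agreement percentages overstate reviewer reliability.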

The authors write in the discussion:

If reviewers cannot regularly agree on whether to recommend rejection or further consideration, the marginal contribution of such summative recommendations may be small, and worse, they may distract from reviewers' primary contribution, which is to improve the reporting – and ultimately the performance – of science.

To improve reviewer recommendations, the authors consider using more reviewers per manuscript, providing better training for reviewers, or dropping recommendations altogether and asking reviewers to focus instead on evaluating the strengths and weaknesses of manuscripts. Some journals already use this latter approach.

Several studies have shown that most rejected manuscripts are eventually published somewhere else. One important reason is that publication space in journals is no longer scarce, as it was before electronic publishing became widespread. This means that the ultimate decision on whether something will be published in a peer-reviewed journal rests with the authors, not the editors or reviewers. Reviewers should keep this in mind.

References

Kravitz, R., Franks, P., Feldman, M., Gerrity, M., Byrne, C., & Tierney, W. (2010). Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care? PLoS ONE, 5(4). https://doi.org/10.1371/journal.pone.0010072

Further reading:
* Questioning the Value of Recommendations in Peer Review (Michael Long)
* Scrap peer review and beware of top journals (Richard Smith)
* Peer review: What is it good for? (Cameron Neylon)
* Peer Review VI (Sabine Hossenfelder)
* The value of peer review (me)

Copyright © 2010 Martin Fenner. Distributed under the terms of the Creative Commons Attribution 4.0 License.