Reviewing the peer review process
A new study has found that a small percentage of scientists are undertaking a disproportionate percentage of peer reviews. Kathryn Allen takes a look at the peer review process.
The peer review process, whereby scientific or academic work is assessed by experts in the field and feedback is given, has little evidence to support its success. A recent study, entitled The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise, has attempted to explain why this process may be failing.
The team behind the study, predominantly based at Paris Descartes University, France, used mathematical modelling to estimate the supply of and demand for peer review in biomedical research. They found that, in 2015, the supply of reviewers drastically exceeded demand, yet 69–94% of reviews were carried out by just 20% of researchers. In the same year, of the 63.4 million hours devoted to peer review, 18.9 million were contributed by the top 5% of reviewers. The peer review system therefore has the resources to be sustainable, but the workload is currently distributed disproportionately. The paper warned that this imbalance might cause standards to slip as the busiest reviewers become overworked.
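To put those hours in perspective, a quick back-of-the-envelope calculation using the study's own 2015 figures shows just how concentrated the effort is: the top 5% of reviewers supplied roughly 30% of all review hours.

```python
# Figures reported in the study for 2015
total_hours = 63.4e6   # total hours devoted to peer review
top5_hours = 18.9e6    # hours contributed by the top 5% of reviewers

share = top5_hours / total_hours
print(f"Top 5% of reviewers supplied {share:.0%} of all review hours")
# -> Top 5% of reviewers supplied 30% of all review hours
```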
Explaining the motivation behind the study, one of the paper’s authors, Michail Kovanis, said, ‘I feel that peer review, while being the best quality control system that we currently have in science, needs to be rigorously studied. Our discussions so far concerning its problems have been mostly focused on opinions and rarely on data and evidence. This doesn't happen with anything else in science.’ He added, ‘What we primarily wanted to achieve is to focus the discussion on the data and on the real problems that the system might have. Peer review in general is so central to science that people like to discuss it.’ While Kovanis personally approves of peer review, he described it as ‘a gamble that is based on the good will of all the involved parties and nobody can guarantee that it works as it should.’
Addressing this lack of data, Kovanis said, ‘Evidence-based approaches should first guide us on how we could make the system faster. It does not make sense that it takes so many months from the first submission of papers until they are finally published. Papers rejected and resubmitted are likely to be re-reviewed and evaluated again and again by different researchers. If we shared reviews between journals, we could cut both the time it takes for a paper to be published and the time reviewers devote to evaluating it. We have performed simulations on this approach and the results show that it could cut these times by about 50% and 65% respectively.’
Through these evidence-based approaches, Kovanis hopes to find out whether researchers would benefit from review training, how editors can reach out to previously unused reviewers and how the imbalance of review distribution can be reduced. The issue, according to Kovanis, is access to data on peer review. He believes that, as it ‘is mostly produced by scientists paid with public money, it should be considered a public good.’
Necessary but flawed
In response to this study, Dr Ben Britton, lecturer in the Department of Materials at Imperial College London, UK, expressed concern at the potentially declining quality of peer review as a result of this disproportionate distribution. Britton argues that ‘peer review has been successful in establishing the authenticity and expert voice of scientific literature and is often viewed as a “gold standard” in this light. In practice, the quality of peer review is a combined effort of the editors and expert peer reviewers, and with the growth in the number, type and range of journals there is an increasing pressure on all researchers, where there is no longer enough time or resource to provide an adequate level of expert review. Peer review is certainly not perfect, as every researcher will have a few choice examples of journals where papers with obvious flaws have been let through review.’
While viewing peer review as necessary, Britton went on to express concern that the rise in public peer review ‘could result in a blog-type academic publication culture, and contribute to a rise in the “noise” of poorly communicated and incomplete research articles.’
Both Kovanis and Britton acknowledge the potential for reduced standards of peer review, but while the study promotes the need for more professionals to undertake peer review and spread the workload, Britton emphasises the need for reviewers to be recognised experts, alluding to the China Association for Science and Technology’s announcement in November 2015 that they were investigating a peer review scam in which false peer reviews were used to have papers published.
The opinion that peer review is a necessary, but flawed, process is also shared by Professor Stuart Irvine, Director of the CSER in the College of Engineering, Glyndwr University, UK. ‘The growth in peer reviewed publications, and the pressure on academics to publish, is clearly burdening the system and reviewers. I have noticed that there is far more pre-selection by editors to reduce this burden, particularly with high impact journals. This is understandable but can mean cursory rejection of good papers. As a guest editor, I have found it increasingly difficult to get good reviewers and often don’t get any response to a request to referee [...] However, I would be reluctant to see a shift away from peer review – it is still the best way to ensure quality, integrity and fairness in journal publication,’ he said.
Dr David Jesson, Manager of the Mechanical Testing Facility at the University of Surrey, UK, similarly views peer review as imperfect, but he believes these issues are not only caused by 20% of researchers doing the majority of reviews, but by the type of reviews and the lack of consistency in responses. ‘Currently, most journals review using a single blind system where the authors do not know who the reviewers are [...] Alternatives are being proposed and discussed, including double blind, where the reviewers do not know who the authors are, or non-blind where reviewers are not anonymous. The former is difficult to roll out uniformly. If there are only two or three groups working in a particular area, it is often obvious who the work is by. In the latter case, potential reviewers may be reluctant to act in this capacity if they think there might be a less than positive reception of an honest review. What is sad is the level of politics that arises in some areas. Some authors seem to take criticism personally, some reviewers seem to enjoy being cruel and some people see opportunities for taking pot-shots at perceived competitors. This is, perhaps, only natural in a world with limited and apparently diminishing research funding,’ said Jesson.
Jesson also raised the issue of consistency, attributing it to a lack of training in peer review. ‘I'm working on the corrections requested for a recent paper at the moment. One reviewer thought the paper excellent and has no issues with the underpinning science and what might be called the research imperative that drove the project. The corrections they required were few and focused on a couple of typos and grammatical infelicities. The other reviewer was more dubious, and commented that they preferred a different methodology. They didn't say the paper was publishable but provided three more philosophical issues that they wanted to see addressed.’ Jesson suggested that, while these reviews were helpful, training reviewers might lead to a more consistent response.
‘When done properly, a review is incredibly useful – you get an expert, fresh pair of eyes to check whether you've missed anything important. What researchers must remember is that publication is not a human right and therefore their paper must conform to certain standards. If you don't meet those standards then a paper is going to have a rougher ride through to publication, and reviewers are less likely to be sympathetic, even when the research is brilliant,’ said Jesson.
Professionals in the field echo the flaws highlighted in the study, along with its claim that ‘improvements in peer review will come in response to evidence.’ However, the general consensus is that while peer review is flawed, it is the best system currently available. It is worth noting that peer reviews are usually done for free, which helps explain the lack of training and consistency. The study alludes to various schemes offering incentives for potential reviewers, but also notes that reviewing is widely seen as an ethical contribution to the field.
Dr Ben Britton, CEng CSci FIMMM, is a researcher based in the Department of Materials at Imperial College London, UK.
Professor Stuart Irvine, CPhys FIMMM FInstP, is Director of the Centre for Solar Energy Research (CSER) in the College of Engineering, Glyndwr University, UK.
Dr David Jesson, CEng CSci FIMMM MInstP, runs the Mechanical Testing Facility at the University of Surrey, UK, and is the chair of the West Surrey Materials Society.
Michail Kovanis is a PhD candidate at Paris Descartes University, France; his PhD concerns ‘Modelling the complex system of scientific publication’.