By Ernesto Spinak
Recent and controversial history
The scientific literature has a history of at least 350 years. From the beginning, papers were supervised in one way or another, before publication, by colleagues from the authors’ own academic field. However, the peer review procedures currently used by academic journals are not as old as is often thought; they are much more recent. The journal Nature formally introduced them in 1967. Before that, some papers were reviewed while others were approved directly by the journal editor. Michael Nielsen suggests on his blog1 that “as science became more specialized… editors gradually found it harder to make informed decisions about what was worth publishing”.
Since the 1980s, however, there have been several criticisms of the limitations and biases that the peer review procedures in use, such as single blind, double blind, and others, can introduce. Two of the best-known critics were Chubin and Hackett, then at the Office of Technology Assessment of the United States Congress and the Rensselaer Polytechnic Institute in New York, respectively. They put it this way as far back as 1990, in their book Peerless Science2:
“Peer review is the principle on which the internal governance system of Science has traditionally depended. During the past ten years there has been a great deal of evidence suggesting that that assumption is no longer valid (if it ever was), and that many of the strains in the science-government relationship in the U.S. are traceable either to the assumption itself or to the ways it is implemented”.
Several methods of evaluation have been tried over the last three centuries, particularly from the 1960s onwards, but no procedure that satisfies all parties has been found. Meanwhile, the number of scientific publications keeps growing like a flood, putting additional pressure on the current methods of assessment.
Each year, 1.7 million articles are submitted for evaluation and review prior to publication. A survey3 conducted in 2015 on the time researchers spend reviewing manuscripts estimated that it is in the range of 13 to 20 million person-hours. This is, of course, an approximation, because in some areas only one reviewer plus the editor’s comments are needed, while other journals request three reviewers. Nor does this calculation take into account the fact that an article is frequently sent to several journals in sequence until it is published, and that many others are never published at all, though they also require time for review. All this clearly shows that peer review of academic papers imposes a very large burden on researchers’ available time.
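As a rough sanity check of these figures, the sketch below shows how the 13 to 20 million person-hour range can be reproduced from the 1.7 million annual submissions. The reviewer counts and hours per review are illustrative assumptions, not figures reported by the survey.

```python
# Back-of-the-envelope check of the reviewing burden quoted above.
# The per-review parameters are illustrative assumptions, not survey figures.

submissions_per_year = 1_700_000   # articles submitted for review each year (from the text)
reviewers_per_paper = (2, 3)       # assumed range: some journals use fewer reviewers, others three
hours_per_review = 4               # assumed average effort for a single review

low = submissions_per_year * reviewers_per_paper[0] * hours_per_review
high = submissions_per_year * reviewers_per_paper[1] * hours_per_review

print(f"Estimated effort: {low / 1e6:.1f} to {high / 1e6:.1f} million person-hours per year")
# -> Estimated effort: 13.6 to 20.4 million person-hours per year
```

As the text points out, this simple product ignores rejected papers that are re-reviewed at other journals and papers that are reviewed but never published, so the real burden is likely higher.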
The questions and the debate
For the reasons just discussed, BioMed Central and the technology company Digital Science held a workshop in November 2016 in which a group of editors and researchers was asked to imagine future scenarios for scientific peer review. The results were published in May 2017 under the title What might peer review look like in 2030?4
The central questions on which the debate revolved were:
- Could technology make peer review faster and easier?
- Could the application of artificial intelligence make peerless science possible?
- Could more transparency make the evaluation process more ethical?
- Should there be training in how to conduct reviews?
- How should credit be attributed to referees’ work?
One of the innovative issues considered was how artificial intelligence could be incorporated into the editorial review process. Among the uses considered were mining databases to find suitable referees, detecting plagiarism, structural analysis of documents to spot flawed articles, and analysis of statistical data to find errors or data suspected of being fraudulent. Such support will be increasingly needed, given that human capacity alone cannot cope with the growth of scientific publication and the reviewing it requires.
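The workshop did not specify particular tools, but the first of these uses, mining a database of earlier publications to suggest suitable referees, can be illustrated with a minimal sketch. The example below is hypothetical: the candidate data, the suggest_referees function, and the choice of TF-IDF cosine similarity (via the scikit-learn library) are assumptions made here for illustration, not anything proposed in the report.

```python
# Minimal sketch: rank candidate referees by the textual similarity between
# a submitted abstract and one abstract from each candidate's earlier work.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of candidate reviewers (one prior abstract each).
candidate_abstracts = {
    "Reviewer A": "Open peer review and transparency in scholarly journals.",
    "Reviewer B": "Deep learning methods for protein structure prediction.",
    "Reviewer C": "Bibliometric analysis of retractions in high-profile journals.",
}

def suggest_referees(submission_abstract, candidates, top_n=2):
    """Return the top_n candidates most similar to the submitted abstract."""
    names = list(candidates)
    corpus = [submission_abstract] + [candidates[name] for name in names]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Compare the submission (row 0) against every candidate (rows 1..n).
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)[:top_n]

print(suggest_referees(
    "A study of open peer review models and reviewer transparency.",
    candidate_abstracts,
))
```

In practice, the participants imagined such matching running over large bibliographic databases, alongside plagiarism screening and automated statistical checks, rather than over a toy corpus like this one.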
Preprints
As expected, preprint servers and the different forms of open review were mandatory topics in this debate. For each theme, both the benefits and the associated problems were considered.
Preprint servers are currently at the center of the debate. If most research articles were published as preprints, access to research findings would be available as soon as the article was completed, rather than, as currently happens, only when the article is accepted and published in a journal.
One of the current objections to preprint servers is that if the hundreds of thousands of articles produced in all areas of knowledge were allowed to be published before being reviewed and approved, there would also be an avalanche of errors and retractions that would bring the whole world of research down. This objection stems from the legitimate concern that retractions have been increasing steadily in recent times. It is true that retractions are increasing, but, surprisingly, the main growth in retractions is associated with top-tier journals and megajournals5.
If preprint servers were to achieve elsewhere the dominant role they already have in physics, publishing articles in journals would be attractive only if it added value to the scientific communication process. In other words, it would only be worth publishing in quality journals because, through traditional peer review, they could guarantee reproducibility, reliability, and quality control prior to publication.
Looking at this scenario, it may be that we are transitioning to a world in which many research results would be published just as the researchers wrote them and then undergo peer review for eventual publication (or not), while other, more select research would go through a rigorous review process, including all the controls mentioned above.
Open peer reviews
Finally, the central idea that motivated the expert meeting sponsored by BioMed Central and Digital Science was the analysis of the different options for opening reviews to the public (open peer review). Open peer review also has its own problems, among which the following stand out:
- How to reward scientists so that they volunteer to review, since the task is unpaid and takes time and effort.
- Young researchers are not yet known to journal editors, so their reviews, even if public, will not necessarily be valued.
- Young researchers might feel uncomfortable reviewing the work of more experienced scientists and having their names attached to public reviews.
To encourage scientists to volunteer as reviewers, particularly those who have never reviewed before, it was suggested that peer review training programs be developed, especially courses promoted by renowned organizations.
Opening the doors of peer review to all researchers is not only a goal of journals. Reviewers say that taking part is rewarding because it makes them part of the academic community and keeps them informed about new research and current work. They also said that they would like to be able to help improve the papers they review. In addition, they feel that working as a reviewer could improve their reputation and standing in their academic communities.
A Wiley poll6 of more than 3,000 reviewers suggests that researchers prefer formal recognition of their work to being paid for it. In any case, they consider that the recognition this labor currently receives is inadequate and that it should carry greater weight in their institutions’ evaluation processes.
Among scientists, peer review is seen as a crucial component of trust in the academic literature. Despite technological advances in, for example, automated statistical checks, few reviewers foresee a publication model without peer review prior to publication. So, if peer review is to be sustained in a world of ever-increasing scientific communication, we must ensure that the available pool of peer reviewers is as broad, global, and diverse as the population of scientists publishing papers.
Notes
1. NIELSEN, M. Three myths about scientific peer review [online]. Michael Nielsen’s blog, 2009 [viewed 10 Mar 2017]. Available from: http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/
2. CHUBIN, D. E. and HACKETT, E. J. Peerless Science: Peer Review and U. S. Science Policy [online]. New York: State University of New York Press. 1990. 267 p. SUNY series in Science, Technology, and Society [viewed 24 May 2017]. Available from: http://capr-peerreview.mines.edu/Peerless%20Science.pdf
3. DEVINE, E. and FRASS, W. Peer review in 2015: A global view [online]. Taylor & Francis. 2015 [viewed 24 May 2017]. Available from: http://authorservices.taylorandfrancis.com/custom/uploads/2015/10/Peer-Review-2015-white-paper.pdf
4. SpotOn Report. What might peer review look like in 2030? A report from BioMed Central and Digital Science [online]. BioMed Central and Digital Science. 2017, 24 p. [viewed 24 May 2017]. Available from: http://figshare.com/articles/What_might_peer_review_look_like_in_2030_/4884878
5. Why high-profile journals have more retractions [online]. Nature. 2014 [viewed 24 May 2017]. DOI: 10.1038/nature.2014.15951. Available from: http://www.nature.com/news/why-high-profile-journals-have-more-retractions-1.15951
6. WARNE, V. Rewarding reviewers – sense or sensibility? A Wiley study explained. Learned Publishing [online]. 2016, vol. 29, no. 1, pp. 41-50 [viewed 24 May 2017]. DOI: 10.1002/leap.1002. Available from: http://onlinelibrary.wiley.com/doi/10.1002/leap.1002/full
References
CHUBIN, D. E. and HACKETT, E. J. Peerless Science: Peer Review and U. S. Science Policy [online]. New York: State University of New York Press. 1990. 267 p. SUNY series in Science, Technology, and Society [viewed 24 May 2017]. Available from: http://capr-peerreview.mines.edu/Peerless%20Science.pdf
DEVINE, E. and FRASS, W. Peer review in 2015: A global view [online]. Taylor & Francis. 2015 [viewed 24 May 2017]. Available from: http://authorservices.taylorandfrancis.com/custom/uploads/2015/10/Peer-Review-2015-white-paper.pdf
NIELSEN, M. Three myths about scientific peer review [online]. Michael Nielsen’s blog, 2009 [viewed 10 Mar 2017]. Available from: http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/
SPINAK, E. and PACKER, A. 350 years of scientific publication: from the “Journal des Sçavans” and Philosophical Transactions to SciELO [online]. SciELO in Perspective, 2015 [viewed 24 May 2017]. Available from: http://blog.scielo.org/en/2015/03/05/350-years-of-scientific-publication-from-the-journal-des-scavans-and-philosophical-transactions-to-scielo/
SpotOn Report. What might peer review look like in 2030? A report from BioMed Central and Digital Science [online]. BioMed Central and Digital Science. 2017, 24 p. [viewed 24 May 2017]. Available from: http://figshare.com/articles/What_might_peer_review_look_like_in_2030_/4884878
WARNE, V. Rewarding reviewers – sense or sensibility? A Wiley study explained. Learned Publishing [online]. 2016, vol. 29, no. 1, pp. 41-50 [viewed 24 May 2017]. DOI: 10.1002/leap.1002. Available from: http://onlinelibrary.wiley.com/doi/10.1002/leap.1002/full
Why high-profile journals have more retractions [online]. Nature. 2014 [viewed 24 May 2017]. DOI: 10.1038/nature.2014.15951. Available from: http://www.nature.com/news/why-high-profile-journals-have-more-retractions-1.15951
About Ernesto Spinak
Collaborator on the SciELO program, a Systems Engineer with a Bachelor’s degree in Library Science, a Diploma of Advanced Studies from the Universitat Oberta de Catalunya (Barcelona, Spain), and a Master’s in “Sociedad de la Información” (Information Society) from the same university. He currently runs a consulting company that provides services on information projects to 14 government institutions and universities in Uruguay.
Translated from the original in Spanish by Lilian Nassi-Calò.
Read the comment in Spanish, by Javier Santovenia Diaz:
http://blog.scielo.org/es/2017/07/26/como-sera-el-arbitraje-por-pares-en-el-ano-2030/#comment-41137