By Jan Velterop
“Men only care for science so far as they get a living by it, and that they worship even error when it affords them a subsistence.” — Goethe1
Before addressing the question in the title, let us first have a look at peer review. It has become one of the pillars on which science rests, and rightly so. Scientific research results and theories must be scrutinised and critically appraised before they can be regarded as fit to be added to the scientific consensus at any one time. Publications should always be approached with a certain level of professional scepticism and never be accepted at face value. And, crucially, that holds true irrespective of whether an article has been peer-reviewed before being published. After all, pre-publication peer review just means that a few peers—usually two—have reviewed the article. At best, if the authors are lucky, those peers happen to be true and conscientious experts who actually manage to help improve the article, and not some relatively random researchers who finally responded to the publisher’s invitation to review. If this sounds too cynical, just remind yourself that just about any article eventually gets published in a so-called peer-reviewed journal. Only 1.4% of the 11,962 scholarly peer-reviewed journals monitored by Thomson Reuters (well under half of the estimated 28,000 or so in existence) have an Impact Factor of 10.0 or more, and only 62% have one of 1.0 or more. I mention this to illustrate that merely being a peer-reviewed journal says nothing about having a good reputation or the perceived quality usually associated with one.
The peer review that any of the articles in any of these journals has undergone may have been reasonable, or even good. The problem is that nobody knows for sure for which articles that is the case (with the possible exception of the very few that have undergone a fully open peer review process, in which the reviewers’ reports and authors’ responses are public). This means that close scrutiny of any published paper, with a critical and professionally sceptical attitude, is always called for.
This may not happen sufficiently. Results and conclusions presented in published, peer-reviewed articles may not be questioned often enough; their peer-reviewed status may too often be regarded as an assurance of verity. Of course, this is conjecture that I can only back up with some anecdotal indications I have received personally and informally, but the question remains: wouldn’t reproducibility, the lack of which is pretty widespread if the Nature2 survey is any guide, improve if readers had to regard articles more critically, owing to the absence of formal peer review, and authors knew that?
If this is indeed the situation, publishing primarily as preprints may offer a solution, as preprints are not formally peer-reviewed prior to publication, which should incentivise the serious reader to be careful and critical before accepting the results. Any post-publication review that is visible and comes from disclosed sources (i.e. not anonymous, which is the norm for pre-publication peer review, and part of the problem) would help the reader in that regard.
For the authors, there would be an incentive to make sure, at the very least, that the descriptions of methods published in preprints are such that replication could be successfully attempted, lest they get a reputation for publishing irreproducible results.
1. GOETHE, J.W., et al. Conversations of Goethe with Eckermann and Soret. London: Smith, 1850, p. 271. [viewed 17 October 2016] Digitally available from: http://archive.org/stream/conversationsofg01goetuoft#page/n5/mode/2up
2. BAKER, M. 1,500 scientists lift the lid on reproducibility. Nature. 2016, vol. 533, nº 7604, pp. 452-454. DOI: 10.1038/533452a. Available from: http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970
BOON, S. 21st Century Science Overload. Canadian Science Publishing. [viewed 17 October 2016]. Available from: http://www.cdnsciencepub.com/blog/21st-century-science-overload.aspx
Journal impact factor 2016 Thompson Reuters. ResearchGate. Available from: http://www.researchgate.net/publication/304153883_Journal_impact_factor_2016_Thompson_Reuters
About Jan Velterop
Jan Velterop (born 1949) is a marine geophysicist who became a science publisher in the mid-1970s. He started his publishing career at Elsevier in Amsterdam. In 1990 he became director of a Dutch newspaper, but returned to international science publishing in 1993 at Academic Press in London, where he developed the first country-wide deal that gave all institutes of higher education in the United Kingdom electronic access to all AP journals (an arrangement later known as the Big Deal). He next joined Nature as director, but moved on quickly to help get BioMed Central off the ground. He participated in the Budapest Open Access Initiative. In 2005 he joined Springer, based in the UK, as Director of Open Access. In 2008 he left to help further develop semantic approaches to accelerating scientific discovery. He is an active advocate of BOAI-compliant open access and of the use of microattribution, the hallmark of so-called “nanopublications”. He has published several articles on both topics.