By Jan Velterop
Phil Campbell, Editor-in-Chief of Nature, once said the following1: “If gold open access became the norm for the primary literature, the cost per article could be in excess of $10,000 to publish in highly selective journals such as Nature, Cell or Science.”
I don’t know exactly what his reasoning was, but if it was what I think it was, the figure of $10,000 is probably too low. Let me explain. Scientific journal publishers typically charge – authors in the case of gold open access; librarians in the case of subscriptions – only for content that has actually been published. That means that the cost of their operations (however they calculate it) is carried entirely by the published articles. Those operational costs, however, also include all the costs of being selective, i.e. the work done to reject manuscripts. It follows that more selective journals must generate more revenue per published article than less selective ones.
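The arithmetic behind this can be sketched with a few illustrative numbers. All figures below are assumptions made for the sake of the calculation, not actual publisher data: if every submission incurs roughly the same handling cost, the cost each published article has to carry is that handling cost divided by the acceptance rate.

```python
# Illustrative sketch: cost per published article as a function of a
# journal's acceptance rate. The $1,000 handling cost per submission is
# an assumption for the arithmetic, not a figure from any publisher.

def cost_per_published_article(cost_per_submission, acceptance_rate):
    """Every submission incurs handling cost, but only accepted articles
    generate revenue, so each published article must also carry the cost
    of the rejected manuscripts."""
    return cost_per_submission / acceptance_rate

for rate in (0.50, 0.20, 0.05):
    print(f"acceptance rate {rate:.0%}: "
          f"${cost_per_published_article(1000, rate):,.0f} per published article")
```

On these assumed numbers, a journal accepting half of its submissions needs $2,000 per published article, while one accepting 5% needs $20,000 – the same order of magnitude as the rumored estimate for Science mentioned below.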
This is not easy to see in subscription fees, because subscription revenues depend on the combination of subscription fees and numbers of subscribers. A very selective journal that needs a high amount per published article to cover its costs may also have a large number of subscribers, and can therefore cover those costs with relatively low subscription fees. Article processing charges (APCs), on the other hand, are levied per article, so they must directly reflect the amount needed to cover a publisher’s costs. Generally, then, one would expect higher APCs for more selective journals, and vice versa.
If only it were that simple. In reality, publishers do seem to make the case that APCs should be higher for selective journals than for less selective ones – Phil Campbell’s example above – but I have heard or seen very little explicit reasoning in the opposite direction, namely that APCs should be lower for less selective journals.
For larger publishers, the APCs for open access, particularly in so-called ‘hybrid’ journals, which combine open access articles with paywalled ones, typically range between $3,000 and $5,000 per published article. This is across the board, for all journals in their portfolios, irrespective of their selectivity – except perhaps for one or two truly exceptional journals, with rejection rates of over 90%, for which the APCs, if they offer open access at all, are set at higher levels, such as the amounts Phil Campbell mentioned. By the way, if Nature (or Science, or Cell) had to cover all their costs and current profit levels with APCs, I reckon the amounts would have to be appreciably higher than $10,000. Rumor has it that the estimate for the journal Science is in the $20,000 range.
So, it seems that we cannot have selective journals with reasonable, affordable APCs.
One solution could perhaps be to charge submission fees instead of – or in addition to – publication fees, which could then be substantially lower. One could see submission as similar to sitting an exam, for which you pay an exam fee whether or not you pass – like the test one has to pass to get a driver’s license, for instance. This is unlikely to succeed, though, unless all publishers introduced such submission fees simultaneously. And it is unlikely to happen for another, perhaps more fundamental, reason: a submission fee would oblige a publisher to guarantee proper peer review and to truly justify any rejection. That is something they probably cannot do, or are most uncomfortable doing.
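The effect of such a submission fee can be sketched with the same kind of illustrative arithmetic as before (again, all numbers are assumptions, not publisher data): once the handling cost of every manuscript is paid at submission, the remaining publication fee no longer depends on how selective the journal is.

```python
# Sketch of how a submission fee decouples the publication fee from
# selectivity. The handling and production costs are assumed figures.

def break_even_publication_fee(cost_per_submission, production_cost,
                               acceptance_rate, submission_fee):
    """Publication fee needed to break even, given that every submission
    pays submission_fee toward its own handling cost; whatever handling
    cost is left uncovered must be spread over the accepted fraction."""
    uncovered = cost_per_submission - submission_fee
    return production_cost + uncovered / acceptance_rate

# Without a submission fee, selectivity drives the publication fee up:
print(break_even_publication_fee(1000, 500, 0.50, 0))     # 2500.0
print(break_even_publication_fee(1000, 500, 0.05, 0))     # 20500.0
# With handling fully covered at submission, the publication fee is
# the same regardless of acceptance rate:
print(break_even_publication_fee(1000, 500, 0.50, 1000))  # 500.0
print(break_even_publication_fee(1000, 500, 0.05, 1000))  # 500.0
```

On these assumed numbers, a $1,000 submission fee would let even a 5%-acceptance journal charge the same modest publication fee as an unselective one.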
But we have to ask ourselves why journal selectivity is really necessary beyond filtering out ‘crackpottery’ or fake articles, especially since the space constraints of the print era are no longer relevant in an internet world in which any serious journal is published electronically. The usual answer is: “Quality!” And peer review is almost universally seen as the way to assess ‘quality’. That is interesting, as peer review has many characteristics, none of which can credibly be described as a reliable quality indicator. Some of its characteristics are most discouraging: it is slow, inefficient, unreliable, highly variable, ineffective, arbitrary, and expensive; it undermines scientific skepticism, suffers from confirmation bias, and puts careerism before science, to name a few. I have addressed these one by one in a recently published article2.
There may well be a need for some sort of quality assessment of articles, but that shouldn’t stand in the way of scientific communication, as it currently clearly does. Scientific communication (i.e. publication) on the one hand, and subjective quality assessment on the other, should be seen as separate processes. One way of doing that is to publish an article first on one of the so-called ‘preprint’ platforms, such as arXiv and bioRxiv, and then submit the article to a journal for assessment of ‘quality’ (whatever that means) and ‘importance’ or ‘significance’ (whatever those mean). I am writing this post literally a day after I happened to talk to Harry Kroto (of C60 – buckminsterfullerene – fame3), who told me that he probably wouldn’t have pursued his research in the current environment of scientific funding and publishing, since he earned his Nobel Prize on the basis of what he called ‘playful research’. He said he didn’t for a moment believe true quality and significance can be assessed at the moment of publication. They always emerge afterwards, over time. That’s why it almost always takes decades before a discovery leads to a Nobel Prize.
Given that peer review cannot establish quality and significance, it seems that journal selectivity is rather less important for scientific progress than it is made out to be. That said, peer review can be a great help to authors in preventing unnecessary errors or ambiguities in their articles. In my view, that is the most important value of peer review. Imagine peer review focused primarily, even exclusively, on clarity and on avoiding unnecessary errors. In that case, peer review can just as well be arranged by the authors. They are more likely to know the real experts in their particular sub-field than any publisher does, anyway. And if the reviewers they ask (subject to a few criteria to ensure fairness4) were, after some discussion and iteration, to conclude that any errors and ambiguities had been removed (or were never there in the first place), and openly endorse publication (i.e. the addition of the article in question to the scientific discourse), then the whole process could be done faster and, not unimportantly, orders of magnitude cheaper, as rejections and their costs would be history. The publisher would only have to recoup the technical cost of publishing the articles, without any costs associated with peer review and rejection.
It pleases me to announce that the first article published on this ‘peer review by endorsement’ basis has just been published5.
This method of publishing with ‘peer review by endorsement’ ensures that articles have had some scrutiny before being published. If they can subsequently be critiqued and subjected to ‘post-publication review’ as well, we will have arrived at a situation in which the communication of scientific results is secured, and in which, in a subsequent and separate process, the article’s ‘value’ in terms of significance, quality, and all that, can be assessed, potentially over a period of years. True quality will emerge; the chance of false positives – prematurely calling a paper significant – will be diminished. And careers can then be built on true achievements rather than on spurious impact factor scores.
1. Open access to research is inevitable, says Nature editor-in-chief. The Guardian. 2012. Available from: http://www.theguardian.com/science/2012/jun/08/open-access-research-inevitable-nature-editor
2. VELTEROP, J. Peer review – issues, limitations, and future development. ScienceOpen Research. 2015. DOI: 10.14293/S2199-1006.1.SOR-EDU.AYXIPS.v1 Available from: http://www.scienceopen.com/document/vid/1dcfbe69-c30c-4eaa-a003-948c9700da40
3. Harry Kroto. Wikipedia. Available from: http://en.wikipedia.org/wiki/Harry_Kroto
4. Competing Interests. ScienceOpen.com. Available from: http://about.scienceopen.com/competing-interests/
5. ZHDANOV, R.; et al. A subset of cellular lipids may provide a new dimension of epigenetic regulation through control over the structure and functions of chromatin. ScienceOpen Research. 2015. DOI: 10.14293/S2199-1006.1.SOR-LIFE.AUXYTR.v1 Available from: http://www.scienceopen.com/document/vid/8feb0edb-4724-4f85-9bd4-fd3b0eee2868
About Jan Velterop
Jan Velterop (1949) is a marine geophysicist who became a science publisher in the mid-1970s. He started his publishing career at Elsevier in Amsterdam. In 1990 he became director of a Dutch newspaper, but returned to international science publishing in 1993 at Academic Press in London, where he developed the first country-wide deal that gave all institutes of higher education in the United Kingdom electronic access to all AP journals (later known as the ‘Big Deal’). He next joined Nature as director, but quickly moved on to help get BioMed Central off the ground. He participated in the Budapest Open Access Initiative. In 2005 he joined Springer, based in the UK, as Director of Open Access. In 2008 he left to help further develop semantic approaches to accelerate scientific discovery. He is an active advocate of BOAI-compliant open access and of the use of microattribution, the hallmark of so-called “nanopublications”. He has published several articles on both topics.