By Jan Velterop
Whenever you talk to research scientists about what is important in a journal, especially if you are about to submit a paper, you hear mutterings about 'quality'. They usually mention the 'impact factor' (the ease of counting something, anything, makes it tempting to infer quality from quantity), yet they almost never offer a credible definition of what 'quality' actually means. On the rare occasions that they do, they seem to have 'scientific quality' in mind, by which they apparently mean 'scientific significance'. Yet 'scientific significance' is mostly what journal editors and peer reviewers, arbitrarily and subjectively, deem an article to have. It is not an objective assessment with clear criteria; it cannot be.

Scientific significance typically manifests itself only after a long time, once flaws, irreproducibility and the like have been ruled out (lack of reproducibility is a particularly problematic issue). Sometimes that takes a decade or more. An article may seem most interesting and promising, but its true significance almost certainly cannot be established at the time of publication. Insignificance can sometimes be established at that point, but even there caution is needed: what seems insignificant at first may later turn out to be significant, even highly significant. Such is the nature of science. And even if an article is insignificant on its own, it may, when taken into account in meta-analyses, add information that enhances the accuracy of the overall findings. 'Null results' or 'negative results', too, are often dismissed as insignificant, which is wholly inappropriate.
The widespread preoccupation with ‘quality’ (or, as the authors call it, ‘excellence’) is actually detrimental to science, as argued in a recent most interesting and thought-provoking article1 on Figshare. It is a must-read for any scientist and scholar, and in particular for those who think of themselves as excellent.
There are, however, elements of 'quality' or 'excellence' that can be measured fairly objectively: the clarity and intelligibility of the language used and the absence of ambiguity; the comprehensiveness and detail of a Materials and Methods section; the clarity of illustrations and graphs; the completeness of the reference list, with clear identification of, and links to, the full papers cited, for instance via DOIs; the appropriate application and presentation of statistics; and whether the data presented properly support the conclusions reported. This is, roughly, the approach to 'quality' taken by journals such as PLOS ONE (and a growing number of open access journals that apply the same set of criteria). I would add an article's openness to that list. After all, the ability to be shared and re-used is highly beneficial to science, and worthy of being regarded as a 'quality' criterion.
But then there is another category of quality elements: those related to the quality of a publisher's service. To readers, openness and re-usability are important qualities of a journal's and publisher's service. To authors, the service they receive from journals and publishers is paramount. In open access publishing, the service to authors is particularly important, given that in many cases a fee, the so-called Article Processing Charge (APC), is payable upon acceptance after peer review. Mostly, however, information about a journal's and publisher's service is only available anecdotally. A recent effort to collect such anecdotes systematically (after all, a collection of anecdotes is data), at least for open access journals, is QOAM, the Quality Open Access Market. The initiative aims to crowd-source information on the services open access journals offer and to gather that information in 'score cards'. It uses two types: Base Score cards, which address the information a journal provides on its web site about editorial policies, peer review, governance, and workflow; and Valuation Score cards, which cover the journal's responsiveness, the added value of its peer review, its value for money, and the general satisfaction of authors. The project is still in its early stages, and crowd-sourcing this information is its first priority, but once a reasonably comprehensive database has been built, it should be a good source of guidance for those seeking the journal that best suits their requirements. It would also be a positive counterweight to the rather negatively oriented blacklist of 'predatory journals' maintained by Jeffrey Beall.
But let's get back to quality and excellence. Given that the preoccupation with them is detrimental to science (you're right, I agree with the authors of that article), shouldn't we reassess the role of journals in the whole process of scientific communication altogether? In one of my earlier posts2 on this blog, I tried to make the case that communication should come first, and career advancement later. I have come to the conclusion that the best of both worlds might be realized if all articles were posted on preprint servers first, and only then submitted to journals, in order to obtain the 'ribbons' needed for the author's reputation management. I touched on that in a recent presentation3. Publishing first on an open preprint server has some very significant benefits: it is fast, as sharing one's research results is not delayed by peer review and other journal publication procedures; it enables much wider, open and frank peer review by members of the scientific community in a given discipline, compared to the limited and often anonymous peer review offered by journals; and there is, on the whole (at least where it has been analyzed), very little substantial difference between preprint versions and the eventually formally published versions of articles. Preprints may even satisfy the requirements of the immediate open access that the EU has recently announced4; I think they will.
It really would be the best of both worlds. The fast and free flow of scientific information would be secured via preprints, and publishers would still be able to justify charging for their service of formal publication in their journals. They would just no longer be 'publishers'; their role would not involve the actual dissemination of knowledge. They would be 'purveyors of reputation management badges' (or 'ribbons', as I like to call them). These badges are clearly considered worthwhile by large sections of the scientific community, and in this way the need for 'ribbons' would no longer be an impediment to the free flow of knowledge, but a complement to it.
Notes
1. MOORE, S., et al. Excellence R Us: University Research and the Fetishisation of Excellence. Figshare. 2016. Available from: http://figshare.com/articles/Excellence_R_Us_University_Research_and_the_Fetishisation_of_Excellence/3413821/1
2. VELTEROP, J. Science (which needs communication) first, careers (which need selectivity) later. SciELO in Perspective. [viewed 08 June 2016]. Available from: http://blog.scielo.org/en/2015/10/29/science-which-needs-communication-first-careers-which-need-selectivity-later/
3. VELTEROP, J. Openness is a scientifically and societally relevant part of a published article’s quality. In: ALLEA symposium, Vienna, 2016. Available from: http://figshare.com/articles/Openness_and_Quality_Vienna_April_2016_copy_pptx/3187030 and http://www.allea.org/Content/ALLEA/General%20Assemblies/GA2016/Symposiumpresentations/18042016nachmittag_Velterop.mp3
4. KHOMAMI, N. All scientific papers to be free by 2020 under EU proposals. The Guardian. 2016. Available from: http://www.theguardian.com/science/2016/may/28/eu-ministers-2020-target-free-access-scientific-papers
References
BAKER, M. 1,500 scientists lift the lid on reproducibility. Nature. 2016, vol. 533, nº 7604, pp. 452-454. DOI: 10.1038/533452a. Available from: http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970
KHOMAMI, N. All scientific papers to be free by 2020 under EU proposals. The Guardian. 2016. Available from: http://www.theguardian.com/science/2016/may/28/eu-ministers-2020-target-free-access-scientific-papers
KLEIN, M., et al. Comparing Published Scientific Journal Articles to Their Pre-print Versions. arXiv. 2016. Available from: http://arxiv.org/pdf/1604.05363v1.pdf. In press.
MOORE, S., et al. Excellence R Us: University Research and the Fetishisation of Excellence. Figshare. 2016. Available from: http://figshare.com/articles/Excellence_R_Us_University_Research_and_the_Fetishisation_of_Excellence/3413821/1
VELTEROP, J. Openness is a scientifically and societally relevant part of a published article’s quality. In: ALLEA symposium, Vienna, 2016. Available from: http://figshare.com/articles/Openness_and_Quality_Vienna_April_2016_copy_pptx/3187030 and http://www.allea.org/Content/ALLEA/General%20Assemblies/GA2016/Symposiumpresentations/18042016nachmittag_Velterop.mp3
VELTEROP, J. Science (which needs communication) first, careers (which need selectivity) later. SciELO in Perspective. [viewed 08 June 2016]. Available from: http://blog.scielo.org/en/2015/10/29/science-which-needs-communication-first-careers-which-need-selectivity-later/
External link
QOAM, Quality Open Access Market – <http://www.qoam.eu/oamarket/>
About Jan Velterop
Jan Velterop (1949) is a marine geophysicist who became a science publisher in the mid-1970s. He started his publishing career at Elsevier in Amsterdam. In 1990 he became director of a Dutch newspaper, but returned to international science publishing in 1993 at Academic Press in London, where he developed the first country-wide deal giving all institutes of higher education in the United Kingdom electronic access to all AP journals (an arrangement later known as the 'Big Deal'). He next joined Nature as director, but moved on quickly to help get BioMed Central off the ground. He participated in the Budapest Open Access Initiative. In 2005 he joined Springer, based in the UK, as Director of Open Access. In 2008 he left to help further develop semantic approaches to accelerating scientific discovery. He is an active advocate of BOAI-compliant open access and of the use of microattribution, the hallmark of so-called 'nanopublications', and has published several articles on both topics.