By Jan Velterop
When you speak to academics and ask what matters in the scholarly literature, the answer you hear most often is “quality”. It is why they submit their articles to certain journals and not to others (at least, that is what they say). Yet it seems that nobody can define what ‘quality’ means in the context of scholarly literature. What gets mentioned is ‘journal brand’ or ‘Impact Factor’, or even ‘established journal’. The perception that simply being well known or long established amounts to ‘quality’ is particularly difficult for new journals, or platforms, that operate on a model ensuring full open access from the point of publication (“born open”): being new, they are by definition not long established, and they have not had time to build up brand recognition, let alone a high Impact Factor.
That is a problem, because these new journals and platforms (publication venues without a circumscribed subject ‘scope’, which often prefer not to be known as ‘journals’ at all) can represent a significant increase in the efficiency of published scientific discourse and a substantial decrease in overall costs to the scholarly community.
But is it right that those new initiatives shouldn’t be seen as having enough ‘quality’?
First of all, ‘quality’ is a neutral term, originally just meaning ‘being of a kind’, ‘having a property, characteristic’, which can be positive or negative: “a good quality; a bad quality.” In common usage, however, it almost always means ‘good or high quality’. Given that journal quality is commonly associated with citation scores and with being long established, it is valid to ask whether that association is appropriate.
With regard to citation scores, as expressed in the form of the Impact Factor, there is a growing understanding that it is a “statistically illiterate” and useless, even harmful, measure.
And using longevity as a measure of good quality assumes that only journals of good quality survive for long. Yet there are journals that have been established for a very long time and still have an insignificant Impact Factor, so these two criteria for quality are not exactly in agreement.
But what is quality in this context of the scholarly literature? For PLOS One, for instance, the criteria revolve around sufficient clarity and detail in describing the experiments and the methods employed, conclusions that are supported by the data provided, ethical conduct and adherence to community standards, and the like. ‘Relevance’ and similarly nebulous notions are, sensibly, not in the picture.
I would argue that if scholarly research is to benefit science and scholarship as well as society at large, the PLOS One criteria I mentioned are not enough. Openness itself is a scientifically and societally relevant part of a published article’s quality. PLOS One offers that, of course, and indeed it is one of their requirements that authors assign a true open access license to their articles (which should “not be more restrictive than CC-BY”).
However, openness is rarely mentioned as a quality in its own right, one that accrues kudos to authors. It should be. The benefit of true openness is immeasurable (and difficult to measure, let alone capture in a single simple number), not just for science and scholarship but also for society at large. For science and scholarship, openness means that the literature can be used and analyzed across the board, for instance with content- and data-mining techniques, to get the most out of the published information, massively increasing its usefulness. As for society at large, not only the general educated public would benefit, but also industry (crucially, including small enterprises), policy-makers, educators, and others.
It is time that openness is recognized as a most important element of the quality of a research publication, and that those who judge researchers on their publications (e.g. tenure and promotion committees) take it into account, for the benefit of science and of society as a whole.
BOSMAN, J. Nine reasons why Impact Factors fail and using them may harm science. I&M 2.0. Available from: http://im2punt0.wordpress.com/2013/11/03/nine-reasons-why-impact-factors-fail-and-using-them-may-harm-science/
CC BY 4.0. Creative Commons. Available from: http://creativecommons.org/licenses/by/4.0/
Content License. PLOS One. Available from: http://journals.plos.org/plosone/s/content-license
Criteria for publication. PLOS One. Available from: http://journals.plos.org/plosone/s/criteria-for-publication
CURRY, S. Sick of Impact Factors. Occam’s Typewriter. 2012. Available from: http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/
MAYOR, J. Are scientists nearsighted gamblers? The misleading nature of impact factors. Front. Psychol. 2010, vol. 1. DOI: 10.3389/fpsyg.2010.00215. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2010.00215/full
About Jan Velterop
Jan Velterop (1949) is a marine geophysicist who became a science publisher in the mid-1970s. He started his publishing career at Elsevier in Amsterdam. In 1990 he became director of a Dutch newspaper, but he returned to international science publishing in 1993 at Academic Press in London, where he developed the first country-wide deal that gave all institutes of higher education in the United Kingdom electronic access to all AP journals (later known as the Big Deal). He next joined Nature as director, but moved on quickly to help get BioMed Central off the ground. He participated in the Budapest Open Access Initiative. In 2005 he joined Springer, based in the UK, as Director of Open Access. In 2008 he left to help further develop semantic approaches to accelerate scientific discovery. He is an active advocate of BOAI-compliant open access and of the use of microattribution, the hallmark of so-called “nanopublications”. He has published several articles on both topics.
How to cite this post [ISO 690/2010]: