By Lilian Nassi-Calò
The evaluation of science in general, and of the performance of researchers, journals, institutions and countries in particular, relies on a variety of bibliometric indicators. Most of them are based on citations, although indicators that consider social media sharing have recently gained momentum and credibility among academia and society.
The practice of citing scientific articles is influenced by a number of factors, and it is not possible to establish the direct and unequivocal relationship between citations and scientific merit that would be desirable. In fact, studies indicate that an author’s most cited article is not always, in the author’s own opinion, his or her best work.
Bibliometric indicators represent, however, much more than an indication of the visibility, relevance and impact of articles. A researcher’s entire career profile can be summarized in one or more numerical indicators of the productivity and impact of the research he or she produces. Career promotions, hiring, awards, research grants and reputations may depend on indicators such as the h index or the journals’ Impact Factor, even though this practice is not recommended, given the limitations and biases of these indicators and the recommendations of several international initiatives, such as the Leiden Manifesto1 and the San Francisco Declaration on Research Assessment2.
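To make the h index concrete: it is the largest number h such that a researcher has h papers with at least h citations each. A minimal sketch of the computation, using invented citation counts purely for illustration:

```python
def h_index(citations):
    """Compute the h index: the largest h such that the author has
    h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with six papers
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with at least 3 citations)
```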
It is known that citation metrics vary considerably with the area of knowledge, the age of the publication, the type of document and the coverage of the database where citations were collected. An article by John Ioannidis, of Stanford University, and coauthors, published in PLoS Biology3, evaluates the pros and cons of metrics normalization, as well as the challenges of this bibliometric exercise, which should be undertaken not only by experts in this science, but by all those involved in research assessment and, of course, those who are being assessed. In this post we analyze the aspects of metrics normalization raised in that article.
The influence of the area of knowledge on citation volume seems intuitive, because not all subjects have the same citation opportunities. It is known that articles in the Social Sciences, Engineering and Humanities attract fewer citations and have a longer half-life than those in the Life Sciences. That is why some impact indexes use a three-to-five-year window to collect citations in these areas and a two-year window for Life Sciences articles. On the other hand, Health Sciences journals covering the Medical Sciences in general receive more citations than journals dedicated to specialties. The main challenge lies, however, in defining the subjects for normalization, since they are not isolated entities but cite each other across disciplines. Areas of knowledge can be defined using taxonomies of journal categories or citation analysis. There is no ideal or exact method to define subjects, since the choice depends on the time interval, the database used to accrue citations and the type of articles considered, thereby making normalization a complex task.
The year of publication, or the article’s age, is apparently an easy factor to normalize. Articles published ten years ago have had more time to accumulate citations than articles published only a year ago. Strictly, then, one should only compare articles published in the same year, leaving aside variation in the month of publication. In this regard, the continuous publication of articles, a practice increasingly adopted by online journals, tends to neutralize this factor by exposing articles to citation as soon as they are accepted for publication.
Normalization by document type brings many challenges. It is known that review articles attract, on average, more citations than original articles, and that other types of documents, such as letters to the editor and editorials, typically do not attract citations. However, an editorial of interest may accrue a large number of citations, as did an editorial on corporate social responsibility in the developing world published in the journal International Affairs4. The 2005 paper had received 667 citations according to Google Scholar as of October 14th, 2016. It is also necessary to consider that articles reporting truly innovative ideas, which run up against skeptical peer reviewers, may be better disseminated as non-refereed, citable communications or as preprints subject to post-publication review. The categorization of article types, however, may not be as obvious as it seems. Categorizing articles by type is the journal’s prerogative and, consequently, errors may occur. The term “review” encompasses non-systematic expert reviews, systematic reviews and meta-analyses, among others, which differ widely in credibility, scientific value, contribution to the area and reasons to be cited. Many review articles deserve to be cited, since they have considerable added value; penalizing them relative to original articles in a normalization attempt seems unfair. On the other hand, there are also poor reviews, whose citations are only a measure of how much they harm science.
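The three factors discussed so far (area of knowledge, publication year and document type) are commonly combined by comparing each article with the average of its peers. As a minimal sketch, assuming a hypothetical set of records invented for illustration, a normalized score can be obtained by dividing an article’s citations by the mean citations of articles in the same field, year and document type:

```python
from collections import defaultdict

def normalized_scores(records):
    """Divide each article's citations by the mean citations of its
    (field, year, document type) group, the general idea behind
    field-normalized citation indicators."""
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["field"], rec["year"], rec["doc_type"])].append(rec["citations"])
    scores = []
    for rec in records:
        peers = groups[(rec["field"], rec["year"], rec["doc_type"])]
        mean = sum(peers) / len(peers)
        # A score above 1 means the article is cited more than its peers
        scores.append(rec["citations"] / mean if mean > 0 else 0.0)
    return scores

# Invented records, purely for illustration
records = [
    {"field": "sociology", "year": 2014, "doc_type": "article", "citations": 4},
    {"field": "sociology", "year": 2014, "doc_type": "article", "citations": 12},
    {"field": "medicine", "year": 2014, "doc_type": "review", "citations": 80},
]
print(normalized_scores(records))  # -> [0.5, 1.5, 1.0]
```

In this toy example the medicine review, despite having far more raw citations, scores the same as an average article in its own group, which is precisely what normalization is meant to achieve.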
Citation analysis strongly depends on the bibliographic database used to count citations. There are databases specialized in a particular subject (PubMed, for example) and multidisciplinary databases (Web of Science, Scopus and Google Scholar, for example). Web of Science and Scopus have controls that restrict what is indexed, whereas Google Scholar is more comprehensive; the first two are also less inclusive than the academic search engine in their coverage of the Social Sciences and Humanities. Thus, the h index of a particular author, for example, is usually higher in Google Scholar than in the other databases. In this respect, greater database coverage may seem desirable, but irrelevant material may distort the results and hinder normalization. Beyond database coverage, the newly created preprint repositories (bioRxiv, PsyArXiv, SocArXiv, ChemRxiv), inspired by the pioneering Physics and Mathematics repository arXiv, will possibly compete for citations with journal articles, and it will be necessary to consider them in normalization as well.
Besides the factors discussed above, other variables can be considered in the normalization process. Some impact indicators, such as the SCImago Journal Rank (Scopus) and the Eigenfactor (Web of Science), weight citations received by the prestige of the citing journal. Should citation normalization do the same? Other relevant questions concern how many times the citing article mentions a paper, and whether the citation is favorable or unfavorable. Negative citations are infrequent (a recent article5 estimated their frequency at about 2%) and difficult to identify, but in the Social Sciences they have a certain relevance. There is also controversy over how to deal with self-citations in different areas.
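To illustrate the source-weighting question raised above, here is a toy sketch (journal names and prestige values are invented) of a citation count in which each citation is weighted by the prestige of the citing journal, in the spirit of the SCImago Journal Rank and the Eigenfactor:

```python
# Hypothetical prestige weights for citing journals (invented values)
prestige = {"Journal A": 2.0, "Journal B": 1.0, "Journal C": 0.5}

def weighted_citations(citing_journals, weights):
    """Sum citations weighted by the prestige of each citing journal,
    defaulting to 1.0 for journals without a known weight."""
    return sum(weights.get(journal, 1.0) for journal in citing_journals)

# An article cited once each by three journals counts as 3 raw citations,
# but 3.5 prestige-weighted citations under these assumed weights.
cites = ["Journal A", "Journal B", "Journal C"]
print(len(cites), weighted_citations(cites, prestige))  # -> 3 3.5
```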
The authors of the study under analysis compared two initiatives that aim to normalize citations with respect to specific factors: the Relative Citation Ratio (RCR) method6 and the evaluation system for scientific output used in the university ranking developed by Leiden University, in the Netherlands. Both systems, according to Ioannidis and coauthors, perform corrections for the definition and characterization of the area of knowledge, the age of the publication and the document type. On the other hand, neither system adjusts citations by citing source, place of the citation within the citing source, multiplicity of mentions within the citing source, or context of the citation (supportive versus negative). The authors warn, however, that these adjustments, or their absence, do not necessarily imply greater validity of the citation analysis results, but simply reflect the criteria of the various systems. Some controversial aspects still deserve attention, such as multi-authored articles and the difficulty of giving due credit to each author; some approaches to this type of correction are cited in the article.
The analysis of the factors to consider in normalizing citation metrics may raise more doubts than certainties. It is important to keep in mind that it is in the interest of everyone involved in research assessment, in whatever capacity, to try to understand its meaning and applicability in different situations.
Notes
1. Can we implement the Leiden Manifesto principles in our daily work with research indicators? Leiden Manifesto. Available from: http://www.leidenmanifesto.org/blog
2. Declaration on Research Assessment. Available from: http://am.ascb.org/dora/
3. IOANNIDIS, J.P.A., BOYACK, K. and WOUTERS, P.F. Citation Metrics: A Primer on How (Not) to Normalize. PLoS Biol. 2016, vol. 14, nº 9, e1002542. DOI: 10.1371/journal.pbio.1002542
4. BLOWFIELD, M. and FRYNAS, J.G. Setting new agendas: critical perspectives on Corporate Social Responsibility in the developing world. Int. Aff. 2005, vol. 81, nº 3, pp. 499-513. DOI: 10.1111/j.1468-2346.2005.00465.x
5. BALL, P. Science papers rarely cited in negative ways. Nature. 2015. DOI: 10.1038/nature.2015.18643
6. HUTCHINS, B.I., et al. Relative citation ratio (RCR): a new metric that uses citation rates to measure influence at the article level. PLoS Biol. 2016, vol. 14, nº 9, e1002541. DOI: 10.1371/journal.pbio.1002541
References
BALL, P. Science papers rarely cited in negative ways. Nature. 2015. DOI: 10.1038/nature.2015.18643
BLOWFIELD, M. and FRYNAS, J.G. Setting new agendas: critical perspectives on Corporate Social Responsibility in the developing world. Int. Aff. 2005, vol. 81, nº 3, pp. 499-513. DOI: 10.1111/j.1468-2346.2005.00465.x
Can we implement the Leiden Manifesto principles in our daily work with research indicators? Leiden Manifesto. Available from: http://www.leidenmanifesto.org/blog
Declaration on Research Assessment. Available from: http://am.ascb.org/dora/
HUTCHINS, B.I., et al. Relative citation ratio (RCR): a new metric that uses citation rates to measure influence at the article level. PLoS Biol. 2016, vol. 14, nº 9, e1002541. DOI: 10.1371/journal.pbio.1002541
IOANNIDIS, J.P.A., BOYACK, K. and WOUTERS, P.F. Citation Metrics: A Primer on How (Not) to Normalize. PLoS Biol. 2016, vol. 14, nº 9, e1002542. DOI: 10.1371/journal.pbio.1002542
IOANNIDIS, J.P.A., et al. Bibliometrics: Is your most cited work your best? Nature. 2014, vol. 514, nº 7524, pp. 561-562. DOI: 10.1038/514561a. Available from: http://www.nature.com/news/bibliometrics-is-your-most-cited-work-your-best-1.16217#assess
NASSI-CALÒ, L. Declaration recommends eliminate the use of Impact factor for research evaluation. SciELO in Perspective. [viewed 25 September 2016]. Available from: http://blog.scielo.org/en/2013/07/16/declaration-recommends-eliminate-the-use-of-impact-factor-for-research-evaluation/
NASSI-CALÒ, L. Paper investigates: is your most cited work your best work?. SciELO in Perspective. [viewed 25 September 2016]. Available from: http://blog.scielo.org/en/2014/11/24/paper-investigates-is-your-most-cited-work-your-best-work/
NASSI-CALÒ, L. Study proposes a taxonomy of motives to cite articles in scientific publications. SciELO in Perspective. [viewed 25 September 2016]. Available from: http://blog.scielo.org/en/2014/11/07/study-proposes-a-taxonomy-of-motives-to-cite-articles-in-scientific-publications/
SPINAK, E. What can alternative metrics – or altmetrics – offer us?. SciELO in Perspective. [viewed 25 September 2016]. Available from: http://blog.scielo.org/en/2014/08/07/what-can-alternative-metrics-or-altmetrics-offer-us/
External link
Leiden Ranking – <http://www.leidenranking.com/>
About Lilian Nassi-Calò
Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution, and completed a post-doctorate as an Alexander von Humboldt fellow in Wuerzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and is currently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.
Translated from the original in Portuguese by Lilian Nassi-Calò.