The myopia of bibliometric indicators

By Lilian Nassi-Calò

The use of bibliometric indicators in science evaluation is a ubiquitous practice, despite the absence of an unequivocal relationship between citations and scientific quality, impact or merit. When it comes to innovation – an inherent characteristic of scientific research – the relationship is even more tenuous.

This is the view of researchers at Georgia State University, Atlanta, GA, USA, and at the Department of Managerial Economics, Strategy and Innovation of the University of Leuven, Belgium, expressed in a recent article in Nature1. Paula Stephan, Reinhilde Veugelers and Jian Wang note that members of science evaluation panels in various countries still rely heavily on citation-based bibliometric indicators – such as the Journal Impact Factor, the h-index, and citation counts from Google Scholar – as proxies for the quality and impact of scientific output in hiring and promotion decisions. Initiatives such as the 2012 San Francisco Declaration on Research Assessment2 and the 2015 Leiden Manifesto3, despite their wide repercussion and the support of numerous research institutions and funding agencies worldwide, have in fact changed little in the way science and scientists are evaluated. After all, bibliometric indices provide a simple (in many cases simplistic) and convenient way to evaluate a large number of candidates, proposals or articles.

The limitations of the Journal Impact Factor (JIF) and similar journal-level indicators for assessing individual articles and researchers are well known. According to Stephan, even funding agencies that do not explicitly ask applicants to report the JIF in their publication lists use this indicator, along with citation counts and the h-index, to rank proposals. Researchers themselves contribute to this vicious circle: when asked to identify their most relevant publications, they usually select them on the basis of citation metrics rather than on their true academic significance or on whether they report a particularly innovative finding.
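
For readers less familiar with the indicators in question, the sketch below (in Python, with hypothetical numbers) shows what the two most common ones compute: the two-year JIF and the h-index. The exact universe of "citable items" and the citation counts depend on the database used; the figures here are illustrative only.

```python
def h_index(citation_counts):
    """h-index: the largest h such that at least h papers have h or more citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year JIF: citations received this year to items published in the
    previous two years, divided by the number of citable items from those years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical citation counts for one researcher's papers.
print(h_index([10, 9, 5, 4, 3, 1]))        # -> 4
# Hypothetical journal: 300 citations to 120 citable items.
print(journal_impact_factor(300, 120))     # -> 2.5
```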

The article mentions the large-scale use of citation-based indicators for hiring and career promotion. Besides Italy, the Czech Republic, Flanders (northern Belgium) and China, the authors cite the Qualis program of the Brazilian Ministry of Education, which uses the JIF to define the allocation of research resources – a practice that particularly penalizes Brazilian journals. According to the authors, a notable exception is the United Kingdom’s Research Excellence Framework (REF), which explicitly recommends against using the JIF in evaluations.

Innovation requires time

Scientists are eager to make breakthrough discoveries and, in pursuit of them, are said even to resort to unethical practices and to overstate preliminary results. Stephan and her coworkers, however, believe that the excessive use of bibliometric indicators with short citation windows (2-3 years) may actually discourage the publication of innovative results. To test this hypothesis, the authors analyzed the Web of Science citations of more than 660,000 articles published between 2001 and 2015, categorized as research with a high, moderate or no degree of innovation. As a proxy for the degree of innovation, the researchers examined the reference lists of the articles in search of unusual combinations of cited sources.

From this analysis, the authors concluded that highly innovative articles take longer to be cited than moderately innovative and non-innovative papers. Among the highly innovative articles, two types of behavior were observed: either they became highly cited – citations begin to increase after 3-4 years and keep growing up to 15 years after publication – or they were ignored relative to articles with no degree of innovation. It is important to note, however, that in the first three years after publication the probability of a highly innovative article being among the 1% most cited papers is lower than that of an article with no degree of innovation. This led the authors to conclude that the current system of research evaluation underestimates work that may have a high impact in the long term. It is also worth pointing out that the articles that proved to be of high impact over time were published in journals with lower JIFs. Thus, Stephan and colleagues1 conclude that “the more we bind ourselves to quantitative short-term measures, the less likely we are to reward research with a high potential to shift the frontier – and those who do it”.
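
The Nature article describes the innovation proxy only in general terms – unusual combinations in reference lists. Purely as an illustration of the idea, and not of the authors’ actual method, the sketch below (in Python, on a made-up toy corpus, with a simplified "rare pair" rule chosen here for brevity) flags papers whose reference lists combine sources that are rarely cited together elsewhere in the corpus.

```python
from itertools import combinations
from collections import Counter

def novelty_scores(reference_journals_by_paper, max_prior_cooccurrence=1):
    """Score papers by the share of journal pairs in their reference lists
    that are rarely co-cited across the corpus (hypothetical input format:
    paper id -> set of journals appearing in its reference list)."""
    # Count how often each journal pair is co-cited across the whole corpus.
    pair_counts = Counter()
    for journals in reference_journals_by_paper.values():
        pair_counts.update(combinations(sorted(journals), 2))

    scores = {}
    for paper, journals in reference_journals_by_paper.items():
        pairs = list(combinations(sorted(journals), 2))
        # A pair is "unusual" if it essentially appears nowhere else in the corpus.
        unusual = sum(1 for p in pairs if pair_counts[p] <= max_prior_cooccurrence)
        scores[paper] = unusual / len(pairs) if pairs else 0.0
    return scores

# Toy corpus: paper -> journals cited in its reference list.
corpus = {
    "A": {"Nature", "Cell", "Science"},
    "B": {"Nature", "Cell"},
    "C": {"Nature", "Journal of Ethnomusicology"},  # an atypical combination
}
print(novelty_scores(corpus))  # paper "C" receives the highest score
```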

This observation, however, is not entirely unprecedented. In 2014, an article by John Ioannidis4, also published in Nature, investigated whether, in researchers’ own view, their most cited work was their best work. The paper, the subject of a previous post on this blog5, actually raised more questions than answers, such as the difficulty of identifying an innovative article early on the basis of bibliometric indicators with 2-3 year windows, or when it is cited mostly by articles from other, less related areas. At the time, however, one of the author’s conclusions was the need to resort to indicators other than citation-based metrics to complement the evaluation of science.

Recommendations to the scientific community

To encourage researchers to venture into more innovative areas of science, a shift is needed across the whole scientific community to restrain the indiscriminate use of short-term bibliometric indicators.

Researchers – Refrain from using the JIF and citation-based indicators to guide the choice of research topics and of where to submit articles. Do not include such indicators in CVs or grant proposals.

Funding agencies – Provide multiple ways of evaluating the publications of researchers and institutions. Exclude citation counts and JIF measures from grant proposals, and do not allow reviewers to discuss them. Include experts from other areas on review panels and periodically evaluate the performance of grant applicants using indicators with 5-10 year windows.

Peer reviewers – Evaluate the article or the candidate setting metrics aside, especially short-term ones.

Editors – Ignore metrics when evaluating papers. Propose the use of metrics with longer time frames.

Universities – Adopt as standard practice that evaluation panel members actually read the candidates’ research, not just their bibliometric indicators, as recommended by the UK’s REF. When evaluating candidates, emphasize how the researcher approaches the questions proposed. In this regard, the remark by Antônio Augusto P. Videira, Professor of Philosophy of Science at UFRJ, applies6: “That the use of an indicator makes one author or another eligible because he has published in a journal with a higher IF should be surprising, since more importance is given to where he published than to reading his work”.

The authors of the study believe that “if the academic community really wants to create more objective assessments, all of us – from early-career researchers to the heads of funding agencies – need to use qualitative and quantitative indicators responsibly […] in order to avoid indicators that penalize the types of researcher and project that have the greatest potential to push boundaries”.

Notes

1. STEPHAN, P., VEUGELERS, R. and WANG, J. Reviewers are blinkered by bibliometrics. Nature [online]. 2017, vol. 544, no. 7651, pp. 411-412 [viewed 14 May 2017]. DOI: 10.1038/544411a. Available from: http://www.nature.com/news/reviewers-are-blinkered-by-bibliometrics-1.21877

2. The San Francisco Declaration on Research Assessment (DORA) [online]. San Francisco Declaration on Research Assessment (DORA) [viewed 14 May 2017]. Available from: http://www.ascb.org/dora/

3. HICKS, D., et al. Bibliometrics: The Leiden Manifesto for research metrics. Nature [online]. 2015, vol. 520, no. 7548, pp. 429-431 [viewed 14 May 2017]. DOI: 10.1038/520429a. Available from: http://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351

4. IOANNIDIS, J. P. A., et al. Bibliometrics: Is your most cited work your best? Nature [online]. 2014, vol. 514, no. 7524, pp. 561-562 [viewed 14 May 2017]. DOI: 10.1038/514561a. Available from: http://www.nature.com/news/bibliometrics-is-your-most-cited-work-your-best-1.16217#assess

5. NASSI-CALÒ, L. Paper investigates: is your most cited work your best work? [online]. SciELO in Perspective, 2014 [viewed 14 May 2017]. Available from: http://blog.scielo.org/en/2014/11/24/paper-investigates-is-your-most-cited-work-your-best-work/

6. VIDEIRA, A. A. P. Declaração recomenda eliminar o uso do Fator de Impacto na avaliação de pesquisa [online]. Estudos de CTS – Estudos sociais e conceituais de ciência, tecnologia e sociedade, 2013 [viewed 14 May 2017]. Available from: http://estudosdects.wordpress.com/2013/07/29/declaracao-recomenda-eliminar-o-uso-do-fator-de-impacto-na-avaliacao-de-pesquisa/

References

HICKS, D., et al. Bibliometrics: The Leiden Manifesto for research metrics. Nature [online]. 2015, vol. 520, no. 7548, pp. 429-431 [viewed 14 May 2017]. DOI: 10.1038/520429a. Available from: http://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351

IOANNIDIS, J. P. A., et al. Bibliometrics: Is your most cited work your best? Nature [online]. 2014, vol. 514, no. 7524, pp. 561-562 [viewed 14 May 2017]. DOI: 10.1038/514561a. Available from: http://www.nature.com/news/bibliometrics-is-your-most-cited-work-your-best-1.16217#assess

NASSI-CALÒ, L. Paper investigates: is your most cited work your best work? [online]. SciELO in Perspective, 2014 [viewed 14 May 2017]. Available from: http://blog.scielo.org/en/2014/11/24/paper-investigates-is-your-most-cited-work-your-best-work/

STEPHAN, P., VEUGELERS, R. and WANG, J. Reviewers are blinkered by bibliometrics. Nature [online]. 2017, vol. 544, no. 7651, pp. 411-412 [viewed 14 May 2017]. DOI: 10.1038/544411a. Available from: http://www.nature.com/news/reviewers-are-blinkered-by-bibliometrics-1.21877

The San Francisco Declaration on Research Assessment (DORA) [online]. San Francisco Declaration on Research Assessment (DORA) [viewed 14 May 2017]. Available from: http://www.ascb.org/dora/

VIDEIRA, A. A. P. Declaração recomenda eliminar o uso do Fator de Impacto na avaliação de pesquisa [online]. Estudos de CTS – Estudos sociais e conceituais de ciência, tecnologia e sociedade, 2013 [viewed 14 May 2017]. Available from: http://estudosdects.wordpress.com/2013/07/29/declaracao-recomenda-eliminar-o-uso-do-fator-de-impacto-na-avaliacao-de-pesquisa/

 

About Lilian Nassi-Calò

Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution, and completed a post-doctorate as an Alexander von Humboldt fellow in Wuerzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and is currently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.

 

Translated from the original in Portuguese by Lilian Nassi-Calò.

 

How to cite this post [ISO 690/2010]:

NASSI-CALÒ, L. The myopia of bibliometric indicators [online]. SciELO in Perspective, 2017 [viewed ]. Available from: https://blog.scielo.org/en/2017/06/01/the-myopia-of-bibliometric-indicators/

 
