The San Francisco Declaration on Research Assessment (DORA) from December 2012 makes a critical call against the use of the Impact Factor (IF) in the evaluation of research.
The IF was created in 1975 by Eugene Garfield, founder of the Institute for Scientific Information (ISI). The concept, however, dates back to 1955, when Garfield first proposed the citation index that would become the Science Citation Index (SCI). The indicator was initially created to assist in selecting journals for indexing in the SCI, and it showed that even journals publishing a small number of papers could be selected, provided those papers were highly cited. Thus was born the first, most popular, and most controversial impact ranking of scientific journals ever created. ISI was acquired in 1992 by the Thomson Corporation (later Thomson Reuters), and since then the IF and its source database, the Journal Citation Reports (JCR), have been an integral part of the Web of Knowledge product.
The use of the IF to measure the impact of journals has become universal since its inception. The reasons range from the ease of calculating it, to the fact that for many years it was the only bibliometric indicator available, to its availability for a large number of journals, mainly from developed countries, and its usefulness for ranking and comparing journals. However, as the use of the IF overstepped the scope of journals and became popular in academic circles as a direct and indirect proxy for research quality, used in career promotions, the granting of research funds, the evaluation of post-graduate programs, and the ranking of universities and research institutions, many criticisms regarding its usability and limitations arose. The most common are:
- What is counted in the numerator (citations to all content) is not always counted in the denominator (citable items only);
- It is the citation rate of the articles that determines the journal’s IF, not the other way around; a low correlation has been observed between the number of citations of individual articles and the IF of the journal that published them;
- The indicator favors areas with short citation half-lives (the life sciences and hard sciences) and disadvantages other areas;
- Review articles receive more citations than original articles and some editors tend to prefer the former in their journals;
- There is a clear predominance of English-language journals in the database, despite recent efforts to boost journals that depict locally relevant science through their inclusion in the JCR.
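The first criticism above, the numerator/denominator mismatch, can be illustrated with a small sketch. The two-year formula is the standard JCR definition; all journal counts below are hypothetical:

```python
# Sketch of the standard two-year Journal Impact Factor:
# IF(Y) = citations received in year Y to items published in Y-1 and Y-2,
#         divided by the number of *citable* items published in Y-1 and Y-2.
# All counts below are hypothetical, for illustration only.

def impact_factor(citations_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor for a given JCR year."""
    return citations_prev_two_years / citable_items_prev_two_years

# The mismatch: the numerator counts citations to ALL content, including
# editorials, letters and news items, while the denominator counts only
# "citable items" (research articles and reviews).
citations_to_articles   = 450   # citations to research articles and reviews
citations_to_editorials = 50    # citations to editorials, letters, news
citable_items           = 100   # articles + reviews only

if_reported = impact_factor(citations_to_articles + citations_to_editorials, citable_items)
if_articles_only = impact_factor(citations_to_articles, citable_items)
print(if_reported)       # 5.0
print(if_articles_only)  # 4.5 — the reported IF is inflated by front-matter citations
```

In this hypothetical journal, a tenth of the citations go to non-citable front matter, inflating the reported IF from 4.5 to 5.0.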
The San Francisco Declaration on Research Assessment (DORA) is an initiative of scientists from the American Society for Cell Biology aiming to stop the use of IF for evaluation of scientific research. The Declaration, which has been receiving wide support, recommends that IF should not be used in assessments for research funding, career promotions and hiring of academics. The document was signed by over 150 prominent scientists and 75 academic organizations, including the American Association for the Advancement of Science.
The isolated use of the IF in academic evaluation is highly destructive, according to the signatories of DORA, since it can discourage journals from publishing articles on less cited areas or subjects, besides burdening high-impact journals with often inadequate submissions. However, the most pernicious consequence for science is to hinder the natural progress of research, which, in the search for new approaches, may go through relatively long periods without generating publications. Researchers should be able to “spend” such periods without publications and/or citations without being penalized for it.
It should come as a surprise that the use of an indicator can make an author eligible simply because he or she has published in a high-IF journal, as if knowing where the work was published mattered more than reading the work itself. DORA emphasizes the need to assess research on its own merits and not by the journal in which it has been published.
References
ALBERTS, B. Impact factor distortions. Science, 17 May 2013, vol. 340, nº 6134, p. 787.
BREMBS, B., BUTTON, K. and MUNAFÒ, M. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci., 2013, vol. 7, nº 291. [cited 13 July 2013]. Available from: http://www.frontiersin.org/human_neuroscience/10.3389/fnhum.2013.00291/full
GARFIELD, E. The history and meaning of the journal impact factor. JAMA, 2006, vol. 295, p. 90-93.
HORTON, R. Science: a new generation. Lancet, February 2013, vol. 381, suppl. 1, p. S2-3. Available from: doi:10.1016/S0140-6736(13)60445-6.
Impact Factor Shifting from Journal to Article. Just Publics @ 360. [viewed 13 July 2013]. Available from: http://justpublics365.commons.gc.cuny.edu/2013/07/05/impact-factor-shifting-from-journal-to-article/
LEE, C.H. Journal impact factor and individual article impact. Am J Emerg Med, March 2013, vol. 31, nº 3, p. 624-625.
MURPHY, E.J. Impact factor and science publishing: what impact should it have on selecting journals in which we publish? Lipids, May 2013, vol. 48, nº 5, p. 431-3.
SEGLEN, P.O. Why the impact factor of journals should not be used for evaluating research. BMJ, February 1997, vol. 314, p. 497.
About Lilian Nassi-Calò
Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution, and did post-doctoral work as an Alexander von Humboldt fellow in Würzburg, Germany. After her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and is presently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.
Translated from the original in Portuguese by Lilian Nassi-Calò.