The use of research metrics is diversified in the Leiden Manifesto

By Lilian Nassi-Calò


Researchers gathered at the 19th International Conference on Science and Technology Indicators (STI 2014), held in September 2014 in Leiden, Netherlands, drafted a set of rules – the Leiden Manifesto1 – aiming to advise on the use of metrics in research evaluation and to curb abuses of metrics and indicators. According to the signatories of the document, many of the recommendations are not new to most researchers, but they are often disregarded when research assessment is conducted.

An article published last week in Nature by researchers who are part of the Scientific Committee of STI 20142 presents the ten principles of the Leiden Manifesto to guide the proper use of metrics in research evaluation, so that researchers can hold evaluators to account and evaluators, in turn, can hold their indicators to account.

The following is a summary of the ten principles:

  1. Quantitative evaluation should support qualitative, expert assessment. Quantitative metrics can challenge bias in peer review and facilitate assessment, thereby strengthening it. Reviewers should not, however, give in to the temptation of ceding decision-making to the numbers. Indicators must not replace peer review, and everyone involved remains responsible for their assessments.
  2. Measure performance against the research objectives of the institution, group or researcher. The project objectives should be made clear at the outset, and the indicators used to evaluate performance should relate clearly to those goals. The choice of indicators, and the ways in which they are used, should take the broader socio-economic and cultural contexts into account. No single evaluation model applies to all contexts.
  3. Protect excellence in locally relevant research. In many parts of the world, research excellence is equated with publication in English. Biases are particularly problematic in the social sciences and humanities, where research is more regionally oriented and nationally engaged. Many other fields also have a national or regional dimension – for example, epidemiological studies of specific regions. This pluralism and social relevance tend to be suppressed in favor of articles of interest to the gatekeepers of high-impact English-language journals. Metrics built on high-quality non-English-language literature would serve to identify and recognize excellence in locally or regionally relevant research.
  4. Keep data collection and analytical processes open, transparent and simple. The construction of the databases required for evaluation must follow predetermined, clearly stated rules. Transparency enables the results to be monitored, and simplicity is a virtue in an indicator because it enhances transparency. Simplistic metrics, however, can distort the results.
  5. Allow those evaluated to verify data and analysis. To ensure data quality, researchers included in bibliometric studies should be able to check that their output has been correctly identified, or to subject the data to independent audit. Institutional information systems should implement this verification, and transparency should guide the selection of service providers. It should be borne in mind that accurate, high-quality data take time and money to collect and process.
  6. Account for variation by field in publication and citation practices. Best practice is to select a suite of possible indicators and allow each field to choose among them. Some fields – notably the social sciences and humanities – publish books rather than journal articles, while others, such as computer science, disseminate most of their scientific production at conferences. This diversity in the forms of publication of research results must be taken into account when assessing different fields. Citation rates also vary by field, which calls for field-normalized indicators; the most robust normalization methods are based on percentiles.
  7. Base assessment of individual researchers on a qualitative judgement of their portfolio. It is well known that the h-index increases with a researcher's age, even in the absence of new publications. It also depends on the database from which it is calculated, being considerably higher in Google Scholar than in Web of Science. Reading a researcher's work is far more appropriate than relying on a single number. Even when comparing large numbers of researchers, an approach that considers more information about their specialty, experience, activities and influence is always better.
  8. Avoid misplaced concreteness and false precision. Science and technology indicators are prone to conceptual ambiguity and uncertainty, and they rest on strong assumptions that are not universally accepted. Best practice therefore suggests using multiple indicators to provide a more robust and realistic picture.
  9. Recognize the systemic effects of assessment and indicators. Indicators change the system through the incentives they establish, and these effects should be anticipated. This means that it is always preferable to use a suite of indicators. Relying on a single indicator – such as the number of publications or the total number of citations – can lead to misinterpretation.
  10. Scrutinize indicators regularly and update them. Research missions and the goals of assessment shift, and the research system itself evolves. When the usual metrics become inadequate, new ones emerge. Indicators should therefore be reviewed regularly and, where necessary, modified.
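Principle 6 mentions that the most robust normalization methods are based on percentiles. As an illustration only – the figures and field names below are hypothetical and not part of the Manifesto – a paper's impact can be expressed as its percentile rank within the citation distribution of its own field, so that the same raw count is read differently in fields with different citation cultures:

```python
def citation_percentile(count, field_counts):
    """Percentile rank of a citation count within its field's distribution.

    Uses the mid-rank convention for ties, so a paper tied with others
    is placed at the middle of the tied group.
    """
    below = sum(1 for c in field_counts if c < count)
    ties = sum(1 for c in field_counts if c == count)
    return 100.0 * (below + 0.5 * ties) / len(field_counts)

# Hypothetical field distributions: the same raw count of 9 citations
# means very different things in a high-citation field and a low-citation one.
cell_biology = [0, 2, 5, 9, 14, 20, 31, 47, 80, 150]
mathematics = [0, 0, 1, 1, 2, 3, 4, 6, 9, 15]

print(citation_percentile(9, cell_biology))  # 35.0
print(citation_percentile(9, mathematics))   # 85.0
```

The percentile makes the comparison field-relative: nine citations place a paper below the median in the first (hypothetical) distribution but near the top of the second.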
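Principle 7 refers to the h-index: a researcher has index h if h of their papers have each received at least h citations. A minimal sketch, with invented citation counts, also illustrates the point that the value depends on the database it is computed from:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

# Hypothetical counts for the same researcher in two databases:
# Google Scholar typically indexes more citing documents, so the
# per-paper counts (and hence the h-index) come out higher.
web_of_science = [12, 9, 7, 4, 3, 1]
google_scholar = [21, 15, 11, 8, 7, 6]

print(h_index(web_of_science))  # 4
print(h_index(google_scholar))  # 6
```

The same portfolio yields two different values, which is exactly why the Manifesto advises judging the portfolio itself rather than the number.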

Next steps

According to the authors of the article, research assessment can play an important role in the development of science and its interactions with society, provided that it abides by these ten principles. Research metrics can provide crucial information that would be difficult to gather or understand through individual experience. This quantitative information, however, is primarily an instrument and must not become the goal.

According to the Manifesto, decisions must be based on both qualitative and quantitative evidence, chosen according to the purpose and nature of the research being evaluated. “Decision-making about science must be based on high-quality processes that are informed by the highest quality data.”

Notes

1 HICKS, D., et al. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015, vol. 520, nº 7548, pp. 429–431. DOI: http://dx.doi.org/10.1038/520429a

2 El manifiesto de Leiden sobre indicadores de investigación. Ingenio.upv. 2015. Available from: http://www.ingenio.upv.es/es/manifiesto


External Link

STI 2014 Leiden – <http://sti2014.cwts.nl/>

 

About Lilian Nassi-Calò

Lilian Nassi-Calò studied chemistry at Instituto de Química – USP, holds a doctorate in Biochemistry from the same institution, and completed a post-doctorate as an Alexander von Humboldt fellow in Wuerzburg, Germany. Following her studies, she was a professor and researcher at IQ-USP. She also worked as an industrial chemist and is currently Coordinator of Scientific Communication at BIREME/PAHO/WHO and a collaborator of SciELO.

 

Translated from the original in Portuguese by Lilian Nassi-Calò.

 

How to cite this post [ISO 690/2010]:

NASSI-CALÒ, L. The use of research metrics is diversified in the Leiden Manifesto [online]. SciELO in Perspective, 2015 [viewed ]. Available from: https://blog.scielo.org/en/2015/04/30/the-use-of-research-metrics-is-diversified-in-the-leiden-manifesto-for/

 
