Vincent Larivière holds a doctorate from the School of Information Studies at McGill University. He is an Assistant Professor at the School of Library and Information Science at the Université de Montréal.
At the SciELO 15 Years Conference, Larivière will address the controversial topic of citation-based bibliometric indicators, such as the Impact Factor, which are used to evaluate academic output, particularly in developing countries. His view is that their indiscriminate use is detrimental to topics of regional interest, favoring instead topics of international interest to “high impact” journals. In an interview with SciELO in Perspective, Larivière also observes that scientometrics is a multidisciplinary science, in which a third of the academic output on a given subject is published in journals outside the field of information science.
1. The bibliometric study on the history of Library and Information Studies (LIS) that you recently led shows that LIS has shifted from a professional focus to a more academic one. Could you comment on the wider impact of this change? Is it restricted to the research domain or does it also affect the functions of librarians and information science professionals?
Research in LIS cannot (and should not!) be separated from practice. Over the last century, the function of library and information scientists has changed, shifting away from the traditional functions of cataloging and classification to providing research services, information retrieval, and user training. Our results tend to show that research topics are a reflection of what affects the profession: less research is being done on libraries and cataloging, while more work is being conducted on issues related to access, information-seeking behavior, and data analysis, among others. In other words, the “information science” part of the discipline has an increasingly important role, both in research and in practice.
2. You also highlight the importance given to bibliometric methods as a formal field of LIS. Bibliometric methods have been used intensively in other fields and disciplines, particularly in so-called scientometric studies. How does scientometrics relate to LIS, considering that a high proportion of its articles are authored by researchers from other disciplines and published in non-LIS journals? Has scientometrics become, or is it becoming, a discipline separate from LIS, centered on research output metrics?
Scientometrics has, historically, been at the crossroads of library and information science – from which it draws its tools and methods – and sociology of science, which provided its theoretical framework. So from the start, bibliometrics has been quite ‘interdisciplinary’. With the advent, in the 1980s, of the social constructivist paradigm in sociology of science, which is more interested in micro-level case studies than in the macro-level structure of science, bibliometric methods lost some of their appeal for many sociologists of science (and fewer bibliometric studies appeared in that community’s journals). Information science and science policy journals, on the other hand, have remained the natural home for such studies. I think that bibliometrics/scientometrics is at the heart of information science and is, in fact, one of our few exports: about half of the papers using bibliometric methods are published outside LIS journals, with about a third of these appearing in journals from the medical and natural sciences.
3. Bibliometric methods based on citations led to the creation of the Journal Impact Factor, which became the most popular expression of bibliometric methods in research communities. However, its generalized use in research evaluation and in reward systems has been widely criticized. The DORA declaration summarizes the claims against the use of the Impact Factor in comparing the research output of individuals and institutions. Many alternative indicators have been suggested to measure journal performance, but the use of the Impact Factor remains unaffected, if not expanding. According to the current state of knowledge, for which purposes is the use of the Impact Factor still acceptable?
Despite its numerous flaws – asymmetry between numerator and denominator, incommensurability across disciplines, and inclusion of journal self-citations, among others – the journal Impact Factor still provides a useful indication of the relative position of a journal within a subfield. But one should not over-interpret the differences in the values obtained by the various journals – especially when it comes to the third decimal! The Impact Factor should be viewed as a ‘holistic’ indication of a journal’s position (top-tier, mid-tier, low-tier, etc.). Let us recall that the indicator was first proposed to help librarians choose the periodicals to which they should subscribe, not as a measure for evaluating research and researchers.
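The numerator/denominator asymmetry mentioned above can be illustrated with a short arithmetic sketch. The figures below are invented for illustration only; they follow the standard two-year Impact Factor formula (citations received in year Y to anything published in Y−1 and Y−2, divided by the number of “citable” items – articles and reviews – published in Y−1 and Y−2).

```python
def impact_factor(citations_to_all_items: int, citable_items: int) -> float:
    """Two-year Journal Impact Factor for year Y.

    Numerator: citations received in Y to ALL content published in
    Y-1 and Y-2 (including editorials, letters, news items).
    Denominator: only 'citable' items (articles and reviews) from
    Y-1 and Y-2 -- hence the asymmetry.
    """
    return citations_to_all_items / citable_items

# Hypothetical journal: 100 citable items, which drew 150 citations,
# plus 30 citations to editorials and letters that do not count
# toward the denominator.
citations = 150 + 30   # all citations count in the numerator
citable = 100          # only articles and reviews count here

print(impact_factor(citations, citable))          # 1.8
print(impact_factor(citations - 30, citable))     # 1.5 without the 'free' citations
```

The 30 citations to non-citable items inflate the indicator from 1.5 to 1.8 at no cost to the denominator, which is one reason small differences between journals should not be over-interpreted.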
4. There is also a common criticism regarding the application of the Impact Factor to journals in the social sciences and the humanities (SSH), given the nature of research in those disciplines and the questionable indexing coverage of the Web of Science (WoS) and Scopus, which are used by researchers and evaluation systems. In your view, are journals in the social sciences and humanities destined to lack appropriate indexes, and therefore bibliometric indicators?
Here we need to distinguish countries having English as their main language from all other countries. For the first group – the United States, United Kingdom, etc. – SSH literature is relatively well covered by the current databases (e.g. Web of Science and Scopus), as they mainly index English-language literature. For the second group, these two major databases cover only a small fraction of the SSH literature, as it generally focuses on topics of more local interest and, hence, is published in national journals that are not in English. There is, thus, a need for national databases, such as SciELO in Brazil and Érudit in Québec, to name a few, in order to achieve adequate coverage of SSH research. We also need better coverage of books and book chapters, which account for a significant proportion of the research output in those disciplines.
5. The JCR has produced yearly journal rankings based on the Impact Factor, in which most of the top-ranked journals are published by commercial companies. Do you think this commercial association has contributed to reinforcing the misuse of the Impact Factor, since it has also been used as a marketing and positioning tool?
Definitely! Journals want to improve their Impact Factors, as they know that a higher Impact Factor will likely lead to ‘better’ submissions and, in turn, to more subscribers. They might be more inclined to accept papers on topics that they know will attract more citations. Also, the fact that many journals actually advertise their Impact Factor on their webpage reinforces the legitimacy of the indicator, especially given that most researchers outside LIS have no clue about its actual limitations!
6. In developing and emerging countries, the demand to publish in high-impact journals has been explicitly promoted by national and institutional research evaluation and reward practices. These policies represent a major barrier to national journals competing for high-quality manuscripts. How do you assess the consequences of these policies for national research capacity as a whole?
These policies indeed have several adverse effects. In addition to reducing the competitiveness of national journals, they can also influence researchers’ choice of topics, shifting their work from local topics to international ones, the latter being more likely to be published in a high Impact Factor journal. Hence, local problems related to education, health, or history – which are of crucial importance for a nation – might become understudied.
7. Why is Google Scholar practically ignored as a metric source even though it has a wide coverage of journals in the social sciences and humanities?
Although the coverage of Google Scholar is, for recent papers, much better than that of WoS or Scopus, it still has several limitations when it comes to the compilation of advanced bibliometric indicators. Although institutions and countries are (and should be) the main focus of bibliometric analyses, it is quite difficult, if not impossible, to compile bibliometric data at this level in Google Scholar. On the other hand, data on individual researchers is readily available, even though this level of analysis should, in my opinion, definitely be avoided. It is also impossible to extract datasets from Google Scholar or to query it through an API. Finally, Google Scholar is a moving target: the numbers you obtain often vary from one day to the next.
8. You will participate in the SciELO 15 Years Conference panel Scientometrics – Measurement of Research and Journal Quality. The purpose of the panel is to discuss the strengths and weaknesses of current scientometric analyses and methods for measuring research and journal quality. How do you see the panel’s purpose?
I think that it is very important to recall the various limitations of bibliometric indicators. Although I am very happy to see that bibliometric methods are being used increasingly, I have the feeling that some researchers do not always understand the data they are using. Important decisions are sometimes made on the basis of these indicators, and it is imperative that they rest on the right interpretation of the right indicator.
[Reviewed on 23 August 2013]