The Journal Impact Factor and its discontents: steps toward responsible metrics and better research assessment

Authors

  • J. Roberto F. Arruda Special Advisor to the Scientific Director, São Paulo Research Foundation (FAPESP), Brazil
  • Robin Champieux Scholarly Communication Librarian, Oregon Health & Science University
  • Colleen Cook Trenholme Dean of the McGill University Library
  • Mary Ellen K. Davis Executive Director, Association of College and Research Libraries
  • Richard Gedye Director of Outreach Programmes, International Association of Scientific, Technical & Medical Publishers (STM)
  • Laurie Goodman Editor-in-Chief, GigaScience
  • Neil Jacobs Head of Scholarly Communications Support, UK Joint Information Systems Committee (JISC)
  • David Ross Executive Director, Open Access, Sage Publications
  • Stuart Taylor Publishing Director, The Royal Society, UK

DOI

https://doi.org/10.13021/G88304

Abstract

A small, self-selected discussion group was convened at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016 to consider issues surrounding impact factors, focusing on the uses and misuses of the Journal Impact Factor (JIF), particularly in research assessment. The group's report notes that the widespread use, or perceived use, of the JIF in research assessment processes lends the metric a degree of influence that is not justified on the basis of its validity for those purposes, and that this retards moves toward open scholarship in a number of ways. The report concludes that indicators, including those based on citation counts, can be combined with peer review to inform research assessment, but that the JIF is not one of those indicators. It also concludes that there is already sufficient information about the shortcomings of the JIF, and that actions should instead be pursued to build broad momentum away from its use in research assessment. These actions include practical support for the San Francisco Declaration on Research Assessment (DORA) by research funders, higher education institutions, national academies, publishers and learned societies. They also include the creation of an international "metrics lab" to explore the potential of new indicators, and the wide sharing of information on this topic among stakeholders. Finally, the report acknowledges that the JIF may continue to be used as one indicator of the quality of journals, and makes recommendations on how this use should be improved.
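For context, the JIF discussed throughout the report is conventionally defined as a two-year citation average; a minimal sketch of the standard calculation, as published in the Journal Citation Reports, is:

$$
\mathrm{JIF}_{y} \;=\; \frac{\text{citations received in year } y \text{ by items the journal published in years } y-1 \text{ and } y-2}{\text{number of citable items (articles and reviews) the journal published in years } y-1 \text{ and } y-2}
$$

The numerator counts citations to all of the journal's content, while the denominator counts only "citable items", an asymmetry frequently cited among the metric's shortcomings.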

OSI2016 Workshop Question: Impact Factors

Tracking the metrics of a more open publishing world will be key to selling "open" and encouraging broader adoption of open solutions. Will more openness mean lower impact, though (for whatever reason: less visibility, less readability, less press, etc.)? Why or why not? Perhaps more fundamentally, how useful are impact factors anyway? What are they really tracking, and what do they mean? What are the pros and cons of our current reliance on these measures? Would faculty be satisfied with an alternative system as long as it is recognized as reflecting meaningfully on the quality of their scholarship? What might such an alternative system look like?

Published

2016-04-19

Section

Reports