Introduction

This short report describes the outcomes of a small, self-selected workgroup convened at the first meeting of the Open Scholarship Initiative in Fairfax, Virginia, USA, in April 2016. It is made available as an aid for further discussion, rather than with any claims to being an authoritative text.

Background

The Journal Impact Factor (JIF) is a score based on the ratio of the citations received over a defined period by papers published in a journal, to the number of papers published in that journal over that period; in its standard form, it counts citations in one year to items published in the preceding two years. It is calculated over the dataset provided by the Journal Citation Reports (JCR) (Thomson Reuters). The JIF is widely used, and misused. The factors influencing it, and their implications, have been well documented elsewhere.1
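As a rough illustration of this calculation (a minimal sketch with invented figures, not values for any real journal), the standard two-year JIF for a given year can be computed as follows:

    # Minimal sketch of the standard two-year JIF calculation (Python).
    # All figures are invented for illustration; real values come from the JCR dataset.
    citations_2015_to_2013_2014_items = 1200   # citations received in 2015 by items published in 2013-2014
    citable_items_2013_2014 = 300              # citable items published in the journal in 2013-2014
    jif_2015 = citations_2015_to_2013_2014_items / citable_items_2013_2014
    print(jif_2015)  # 4.0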

How does the existence and use of the JIF affect moves toward open scholarship?

Scholarly communication is a complicated system, with subtle relationships between components and some unexpected feedback loops. As a result, it is rather difficult to pin down a direct causal relationship between the existence and use of the JIF, and moves toward open scholarship. In particular, the relationship between a journal’s JIF (or lack of one) and its perceived prestige can be subtle. There is probably enough evidence, though, to justify the claim that the JIF inhibits openness and that action should be taken to reduce its influence.

The power of the JIF stems largely from its misuse in research assessment and, especially, in funding, recruitment, tenure and promotion processes. There is both a perception and a reality that such processes are influenced by the JIF, and so researchers who are subject to those processes understandably adjust their publishing behaviour accordingly. It would be hard to overstate the power this gives the JIF. So, given the JIF’s influence, what are the effects of its use and misuse? We focus here on those effects related specifically to open scholarship.

The influence of the JIF can retard uptake of open practices. For example, whereas hybrid journals are usually well-established titles that have had time to build an impact factor and so attract good authors, wholly Open Access (OA) journals are often new titles, and therefore not in so strong a position, although there are a few high-profile exceptions.

Such exceptions aside, the JIF in general imposes a high barrier to entry for journals, and since OA is an innovation in journal publishing, that barrier is particularly acute for OA journals. As soon as one moves beyond conventional journal publishing (for example, to models such as F1000 Research2 or preprint repositories), the influence of the JIF is extremely strong and inhibits uptake by authors. Furthermore, the JIF is based on a largely Anglophone dataset (the JCR), which makes it likely that the JIF particularly disadvantages alternative models of scholarly communication outside the “global north.” There are operational implications here, especially where the JIF is used in research assessment, but there are also implications with respect to research culture and values.

Without going into current debates about the functioning of the Article Processing Charge (APC) market, we note that publishers can use a high JIF to justify a high APC level for a journal, despite concerns about whether this is legitimate.

Open scholarship is about more than just OA, however: it also includes sharing research data, methods and software; the preregistration of protocols and clinical trials; better sharing of the outcomes of all research, including replication studies and studies with negative results; and early sharing of information about research outcomes. The power of the JIF acts against all of these aspects, for example by not counting these kinds of research output, or by treating authorship as the sole contribution to a research output. The influence of the JIF can also weaken the position of low-JIF journals, which risk losing authors if they put up perceived barriers to submission such as data-sharing requirements, while strengthening the position of high-JIF journals, which may then prevent early disclosure of research findings for fear of being scooped by the science press.

Another key problem is the distortion of the scholarly record that arises from disproportionately incentivising the publication of papers that are likely to attract many citations early in their life, as opposed to papers that report sound research but are of a type (replication studies, or negative results) unlikely to become “citation stars.” Given the highly skewed distribution of citations within a journal, editors seeking to maximise their JIF are incentivised to look out for such citation stars that will boost the journal’s JIF; the illustration below shows how strongly a few such papers can pull up a mean-based indicator. PLOS ONE and other similar journals, which base acceptance decisions on the soundness of the research method rather than on the outcome, argue that their success is despite, not because of, the power of the JIF.
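As a simple illustration of this skew (a minimal sketch with invented citation counts, not data from any real journal), a mean-based indicator such as the JIF can be pulled far above what a typical paper in the journal achieves by a single highly cited paper:

    # Minimal sketch (Python) of how a few "citation stars" dominate a mean-based indicator.
    # The citation counts are invented purely for illustration.
    citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 150]                # ten papers, one of them a "citation star"
    mean_citations = sum(citations) / len(citations)            # what a JIF-style average reflects
    lower_median = sorted(citations)[len(citations) // 2 - 1]   # closer to what a typical paper achieves
    print(mean_citations)  # 17.1 - dominated by the single star paper
    print(lower_median)    # 2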

Of course, the JIF is unable to measure the impact of research beyond merely the citation of papers by other papers. Public engagement, impact on policy, and the enabling of commercial innovation, for instance, are all beyond the scope of JIF. These are all important aspects of open scholarship that could be highlighted by other indicators, and it is troubling that use of the JIF is seldom supplemented by the use of such indicators.

Fundamentally, many of these problems result from the fact that the JIF is an indicator (albeit imperfect) of the quality of the container (the journal) rather than of the research itself.

Finally, and by no means least importantly, the JIF is not itself open. Neither the dataset nor the algorithm is truly open, which flies in the face of moves toward a more transparent approach to scholarship. There are developments, such as the forthcoming Crossref Event Data service,3 and various other open citation initiatives,4 that might address this problem in due course.

Research assessment and the JIF

As a result of the above and other considerations, our team reached consensus on the following six points:

  1. There is a need to assess research and researchers, to allocate funding and to make decisions about tenure and promotion.
  2. JIF is not appropriate for these purposes.
  3. No single metric would be appropriate for these purposes either.
  4. A number of metrics may be developed that can help inform these decisions (including, but not limited to, “altmetrics”5), in addition to peer review.
  5. Some of these metrics might be based on citation data.
  6. Enough information exists about the issues and shortcomings of the JIF6 to render further significant research on this unnecessary.

Action plans

To improve the current situation, and move toward responsible metrics and better research assessment in support of open scholarship, the workgroup proposes the following actions:

Each intended change below is followed by the specific actions proposed to achieve it.

Intended change 1: The DORA recommendations should be implemented. Specific actions:
  1. Research funders should only provide funding to higher education institutions that have signed DORA and that have published a recruitment, tenure and promotion framework that demonstrates their implementation of the DORA recommendations.
    1. Future OSI workgroups focused on indicators or impact factors should assess the initial response of research funders, especially in the biomedical field, to this proposed action and amend the following actions accordingly.
    2. National academies should gather and present evidence to inform the case for funders to take this action, and should release open invitations to funders to join this conversation via meetings, workshops or other forums.
    3. National academies, senior institutional representative organisations, and research funders should agree on how this action can be implemented to greatest effect and with the least burden in their particular national context.
    4. National academies, learned societies, and institutional representative organisations should work with senior academics in universities to ensure that this action finds support in the academic community.
    5. Supportive funders should recommend this action to their peers, e.g. through the Global Research Council.
    6. OSI workgroups focused on indicators or impact factors should support DORA’s publicity and marketing efforts, including gathering testimonials from those who have signed it, and investigating why others have not.
  2. Funders or institutions that are already implementing the DORA recommendations in their internal evaluation processes should be asked to declare this publicly.7
  3. The meetings recommended above should be used by all stakeholders as an opportunity for discussion of the wider issues associated with metrics, research assessment and open scholarship.
Intended change 2: Disciplines take ownership of the assessment of research in their area, through the development and use of tools, checklists and codes of conduct. Specific actions:
  1. Create templates for universities / disciplines, to facilitate the development of appropriate tenure and promotion frameworks to implement DORA (see 1, above). Relevant learned societies should create discipline-specific outline templates based on DORA and existing evidence on good practice in using evidence in research evaluation. These efforts should be informed iteratively as further evidence becomes available on the potential of indicators, e.g., from the metrics lab (see 3, below). This work should be done in consultation with relevant funders and university representatives; some limited international coordination may be beneficial and practical.
  2. OSI workgroups focused on indicators or impact factors should discuss with learned societies whether author-publishing practices (in particular avoiding reference to the JIF in publishing decisions) should be part of the scope of their codes of practice.
Intended change 3: Create an international metrics lab, learning from prior attempts to do this. This would include: data sources; developers to explore and propose indicators; incentives to participate; and tests for reliability, validity, and acceptability of proposed indicators. Specific actions:
  1. OSI workgroups focused on indicators or impact factors should build a coalition of parties willing to undertake this effort. At a first pass, this coalition might include Force11, Crossref Labs, Association of Research Libraries, Jisc, Snowball Metrics, NISO, COUNTER and other standards bodies, representatives of publishers (e.g., STM futures lab), and funders.
  2. This coalition should identify a trusted organisation to lead the metrics lab initiative or, at least, to coordinate it.
  3. The coalition should define the terms of reference for the metrics lab.
  4. The coalition should identify funding, governance and operational options.
  5. The coalition should commission work to create and maintain a register of open data sources that could underpin useful indicators, e.g. OpenURL, Crossref Event Data.
Intended change 4: Share information about the JIF and other metrics, and their use and misuse. Specific actions: OSI should add a resources page to its website to bring this information together and publicise it. The Metrics Dashboard, a pilot project recently funded by FORCE11 that aims to provide actionable information on research metrics use and misuse, could be leveraged as a data source. Additionally, the page should include the NISO use cases for altmetrics,8 Crossref Event Data,9 the UK Metric Tide report,10 DORA,11 the Leiden Manifesto,12 the NIH Biosketch,13 CRediT,14 etc.

In addition to the above actions, which are specifically about the use of metrics in research assessment (where the JIF is not appropriate), the following actions are proposed to improve how journals are compared. This is a different and entirely separate use case from research assessment, and the JIF may be a useful indicator here.

Intended change 1: Improve the validity of the JIF as one indicator of journal quality. Specific actions:
  1. OSI workgroups focused on indicators or impact factors should draft a list of improvements required to the JIF to improve its validity and openness.
  2. OSI workgroups focused on indicators or impact factors should gather support for this list and present it to the owners of the JIF.
Intended change 2: Investigate whether best practice or standards can be agreed to describe and measure aspects of journal publishing services, e.g. to inform the operation of journal comparison sites. Specific actions:
  1. OSI workgroups focused on indicators or impact factors should identify a willing partner to commission a landscape review and analysis of how journal publishing services for authors are already being compared, the criteria used, the rigour of the assessment, etc.
  2. OSI workgroups focused on indicators or impact factors should identify a willing partner to commission a landscape review and analysis of how journal publishing services for readers (and librarians) are already being compared, the criteria used, the rigour of the assessment, etc.
  3. OSI workgroups focused on indicators or impact factors should consider the findings of these two studies and recommend next steps.

Challenges

Some significant challenges and questions stand in the way of implementing these actions; they are not specific to this workgroup but apply to OSI in general. They include:

  1. How can we continue to engage OSI participants in this activity, to ensure we remain active and effective?
  2. What channels and methods should be used to extend participation so that all stakeholders from around the world are fully represented?
  3. Given limited resources, how should the work that we have proposed be prioritized?

The OSI Impact Factor Workgroup

Workgroup delegates comprised a wide mix of stakeholders, with representatives from Brazil, Canada, the United Kingdom, and the United States:

José Roberto F. Arruda, São Paulo State Foundation (FAPESP), Brazil

Robin Champieux, Scholarly Communication Librarian, Oregon Health and Science University, USA. ORCID: 0000-0001-7023-9832

Dr. Colleen Cook, Trenholme Dean of the McGill University Library, Canada

Mary Ellen K. Davis, Executive Director, Association of College & Research Libraries, USA

Richard Gedye, Director of Outreach Programmes, International Association of Scientific, Technical & Medical Publishers (STM). ORCID: 0000-0003-3047-543X

Laurie Goodman, Editor-in-Chief, GigaScience. ORCID: 0000-0001-9724-5976

Dr Neil Jacobs, Head of Scholarly Communications Support, Jisc, UK. ORCID: 0000-0002-8050-8175

David Ross, Executive Director, Open Access, SAGE Publishing. ORCID: 0000-0001-6339-8413

Dr Stuart Taylor, Publishing Director, The Royal Society, UK. ORCID: 0000-0003-0862-163X

Notes:

  1. For example: the Metric Tide report, as of May 24, 2016: http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html; San Francisco Declaration on Research Assessment (DORA), as of May 24, 2016: http://www.ascb.org/dora/; Leiden Manifesto, as of May 24, 2016: http://www.leidenmanifesto.org/

  2. F1000 Research, as of May 24, 2016: http://f1000.com/

  3. Crossref DOI event data service, as of May 24, 2016: http://eventdata.Crossref.org/

  4. For example, CORE semantometrics experiment, as of May 24, 2016: http://www.slideshare.net/JISC/introducing-the-open-citation-experiment-jisc-digifest2016-58968840; Open Citation Corpus, as of May 24, 2016: https://is4oa.org/services/open-citations-corpus/; CiteSeerX, as of May 24, 2016: http://citeseerx.ist.psu.edu/index;jsessionid=9C3F9DA06548EACB52B7E8D50E9009F2

  5. See NISO altmetrics initiative, as of May 24, 2016: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2

  6. E.g., HEFCE (2015) The Metric Tide, as of May 24, 2016: http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html; and studies such as Kiesslich T, Weineck SB, Koelblinger D (2016) Reasons for Journal Impact Factor Changes: Influence of Changing Source Items. PLoS ONE 11(4): e0154199. doi:10.1371/journal.pone.0154199

  7. Indiana University Bloomington has recently made a strong statement in this direction, as of May 24, 2016: http://inside.indiana.edu/editors-picks/campus-life/2016-05-04-from-thedesk.shtml

  8. NISO altmetric initiative, as of May 24, 2016: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2

  9. Crossref Event Data, as of May 24, 2016: http://eventdata.Crossref.org/

  10. Metric Tide report, as of May 24, 2016: http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html

  11. DORA, as of May 24, 2016: http://www.ascb.org/dora/

  12. Leiden Manifesto, as of May 24, 2016: http://www.leidenmanifesto.org/

  13. Example of NIH Biosketch, as of May 24, 2016: https://grants.nih.gov/grants/funding/2590/biosketchsample.pdf

  14. CASRAI CRediT, as of May 24, 2016: http://casrai.org/credit