Our group started with a number of general observations about the growth and current state of open scholarship:

All of this suggests that existing attitudes and practices within the academy must change, but in order for change to happen we must first identify the barriers, whether real or perceived, to broader or faster uptake. Only once we have identified those barriers can we focus on removing them.

Barriers to openness

We spent much of the first day identifying barriers to more open scholarship. We singled out the following as the key ones:

Flawed incentives

For most scholars, the list of publications on their CV remains central to tenure and promotion, judged either by the impact factors of the journals they publish in or, for monographs, the prestige of the press that publishes their book. These measurements are only loosely linked (by proxy) to the actual quality of individual scholars and their work.

A host of new metrics are becoming available to assess impact at a more granular level, yet these emerging metrics (article citations, views and downloads, altmetrics, etc.) are not widely used in funding or tenure and promotion decisions, despite initiatives like DORA (the Declaration on Research Assessment), which calls for an end to using journal-based metrics, such as journal impact factors, as a surrogate assessment of the quality of individual research articles.

Until the way we measure reputation changes, scholars, and the many stakeholders who determine their career progression, will continue current practices, even if new, more open practices could be shown to extend the influence and application of their work.

Dysfunctional market

The scholarly communications market has been widely described as dysfunctional. Journals are, in economic terms, complements, not substitutes: each journal contains original works that are not available in other journals, which limits market competition. This is evidenced by huge price disparities among subscription journals, even within the same fields, a clear symptom of inefficiency in the market.1

However, the current reputation system continues to encourage researchers to publish in journals with high impact factors regardless of the cost to institutions or ease of access to the content for readers. A more open market with greater transparency around costs and access would create more competition oriented towards the needs of the research community.

Misalignment of funding

This dysfunction within the market in turn leads to a misalignment of funds. Although some progressive work by funders in the UK (such as the Wellcome Trust and RCUK) has begun to address this by requiring grant-holders to publish in a more open manner, much of the allocation of the one trillion dollars invested in research every year is based on subjective measures. Funders are thus challenged to identify the best people and projects to support, and there is limited focus on outcomes as well as on outputs.

Current system too suppressive and slow

A print-based system created a role for publishers as “super filters,” selecting and curating the best and most original content for publication through the peer review process. Such high selectivity was required in the pre-Internet age because of the high costs of packaging, printing, and distribution. Although online publishing brings significant costs of its own that must not be overlooked, the production and dissemination portion of those costs has been dropping. In addition, factors such as the elimination of print and alternative peer review systems (post-publication review, etc.) offer the potential to publish much more material online without a proportional increase in costs.

The value of publishers’ traditional role as pre-publication filters was much debated within our group, and a difference of opinion emerged: the publisher representatives believed that the initial filtering role remains important (particularly for medical information, which needs clear badging of its credibility), while the funder and library representatives were less convinced that this filtering is necessary in a world where research ideas and discoveries can be presented very rapidly on a preprint server and vetted through a managed process of post-publication review (much like the F1000 model).

It was agreed by all that, at the very least, a light pre-publication review is desirable to ensure a certain quality threshold. The group discussed the opportunity for publishers to maintain their role in filtration and curation by reviewing and selecting content from preprint servers for re-presentation in branded journals and resources with a particular editorial or quality focus, much as journals make their selections currently. Publishers could then build business models around the value they add to those selected articles through brand and prestige associations and other value-added services for authors and readers.

Restrictive formats

The research article remains the currency of career progression in STEM and the social sciences; the monograph continues to dominate in the humanities. Both are historic print-based formats. Although progress has been made in terms of online features and functionality, these basic units of scholarly communication remain much the same. Additional outputs such as data, images, infographics, and presentations count little toward a researcher’s funding success and career progression.

The availability of data relating to a funded project is a particular problem. Huge value might be gained for the progress of knowledge through more sharing of data as soon as it is available, but the current system disincentivizes this by rewarding authors instead for “salami publication” (multiple articles based on a single data set over the course of a grant). There is a critical problem of attribution relating to data (e.g., who gathered and analyzed the data?), which in turn leads to a problem of valuation (e.g., how is such work properly recognized?).

There is a vast array of research activities and outputs aside from formal publications that should be better recognized as contributing to scholarship: for instance, data sharing; the development of software, cell lines, and reagents; peer review; blogs, social media, talks, and posters (outreach); training and teaching; preprints and essays; and much more. Incentives will be required for researchers to produce and share their work in a wider range of formats, and in an open manner. Work also needs to be done to define which of these activities and outputs are most effective in driving impact (and what kind of impact). This in itself will help act as an incentive, as long as funders and universities subscribe to the same measures of impact.

Lack of normalization of metadata and taxonomies

A key barrier to more openness in scholarly communications remains the relative silos that exist across the industry. Common standards and technical infrastructure are beginning to emerge, but much work remains to be done here.

Overcoming barriers to openness

Having identified these barriers to openness, the group decided to focus its efforts on the second day on the problem of flawed incentives, seeing it as the key to addressing all of the other barriers. Tackle the incentive structure built into the current system, and the other players in the system (not just scholars but all of the stakeholders) will adjust to fit the new incentives.

This in turn led us to the tenure and promotion system, which everyone agreed lies at the heart of scholarly communications practices. We can’t change researcher behaviors until we change how we reward them. And for this to happen we need another measure to replace impact factors, which reflect neither openness nor impact for a particular researcher and their work. Such a replacement measure needs, at least initially, to be relatively simple, but it must allow fairer discipline-specific comparisons and embrace a wider array of activities and outputs. But how can something as fundamental as the tenure and promotion system be changed? The group came up with the following thoughts on how to move the agenda forward:

  1. We need a better understanding of how the system works now. Specifically, we need a comprehensive study that shows in detail, country by country, how funding, tenure, and promotion decisions are made and the role of research outputs and activities within this decision-making process.

  2. Define an ideal future. Building on the results of this study, a working group should be established to define an alternative system for funding, tenure, and promotion. Such a group must come from (or have the endorsement of) the highest levels (e.g., AAU, RCUK, and others) to ensure that its recommendations are taken seriously. The resulting system must move us beyond the blunt instrument of impact factors and toward an evaluation framework that accounts for the full range of practices we value as a community, including:
    • Open access
    • Peer review
    • Data sharing
    • Normalized metadata & taxonomies
    • Software, cell line, reagent, and tool development
    • Blogs, social media, talks, posters (outreach)
    • Students trained/taught
    • Preprints, monographs, publications, and essays
  3. Most important, any new evaluation system must be transparent! It is not enough to propose new measures of impact; any new evaluation system used in funding, tenure, and promotion decisions should be developed to ensure complete transparency.

OSI Evolving Open Solutions (1) Workgroup

Geoffrey Bilder, Director of Strategic Initiatives, Crossref

Adyam Ghebre, Director of Outreach, Authorea

Melinda Kenneway, Executive Director and Co-Founder, Kudos

Robert Kiley, Head of Digital Services, Wellcome Library

Elizabeth Kirk, Associate Librarian for Information Resources, Dartmouth College

Paul Murphy, Director of Publishing, The RAND Corporation

Joshua Nicholson, CEO and Co-Founder, The Winnower

Peter Potter, Director of Publishing Strategy, Virginia Tech

Mathew Salter, Publisher, American Physical Society

Frank Sander, Director, Max Planck Digital Library, Max Planck Society

Notes: