Responsible Research

In January 2018, on the recommendation of its Research Committee, Keele University signed the San Francisco Declaration on Research Assessment (DORA). DORA is a set of recommendations designed to ensure that “…the quality and impact of scientific outputs…is measured accurately and evaluated wisely.” To date, signatories include seven other Russell Group universities, among them Imperial College London, the University of Manchester and UCL.

Keele University is committed to the critical role that peer review and expert judgement play in the assessment of research, but also recognises the value of quantitative metrics in complementing and supporting decision-making. The move to find new ways to assess quality in research outputs is fully in line with the wishes of research funders, notably the European Commission, the Wellcome Trust and Research England (previously HEFCE).

The University has developed the following set of principles outlining its approach to research assessment and management, including the responsible use of quantitative indicators. These principles draw upon DORA and are designed to encapsulate current good practice and to act as a guide for future activities.

As a responsible employer, Keele is committed to:

  • Being explicit about the criteria used to reach appointment, tenure and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
  • For the purposes of research assessment, considering the value and impact of all research outputs (including datasets and software) in addition to research publications, and considering a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

Researchers should:

  • Make assessments based on scientific content rather than publication metrics, when involved in committees making decisions about funding, appointments, tenure, or promotion.
  • Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
  • Use a range of article metrics and indicators in personal/supporting statements, as evidence of the impact of individual published articles and other research outputs.
  • Challenge research assessment practices that rely inappropriately on journal impact factors, or equivalent quantitative measures of qualitative properties, and promote and teach best practice that focuses on the value and influence of specific research outputs.
  • Encourage researchers towards 'open science' or 'reproducible research', acknowledging this may not have the same relevance/value across all disciplines.
  • Encourage and support interdisciplinarity.
  • Take into account the diverse range of possible research outputs beyond journal articles and recognise that some outputs are impossible to evaluate through standard journal metrics.
  • Be sensitive to factors that may result in legitimate delays in research publication, including personal factors that may have affected the applicant’s record of outputs.

David Amigoni and Claire Ashmore on behalf of Keele University DORA operational group

January 2019

Overview

Keele University has produced a Policy Statement and associated guidance on responsible research assessment including the appropriate use of quantitative research metrics.

This Policy Statement builds on a number of prominent external initiatives in the same area, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics, and The Metric Tide report. The latter urged UK institutions to develop a statement of principles on the use of quantitative indicators in research management and assessment, in which metrics should be considered in terms of:

  • robustness: using the best available data;
  • humility: recognising that quantitative evaluation can complement, but does not replace, expert assessment;
  • transparency: keeping the collection of data and its analysis open to scrutiny;
  • diversity: reflecting a multitude of research and researcher career paths;
  • reflexivity: updating our use of metrics to take account of the effects that such measures have had.

These initiatives and the development of institutional policies are also supported or mandated by research funders and assessment exercises in the UK (e.g., UKRI, the Wellcome Trust, and the REF).

Our aim is to balance the benefits and limitations of bibliometrics and other quantitative indicators, to create a framework for responsible research assessment at Keele, and to suggest ways in which they can be used to deliver the ambitious vision for excellence in research and education embodied in the Keele strategy.

Responsible Use of Metrics

We recognise that Keele is a dynamic and diverse university, and that no single metric or set of metrics can be applied universally across our institution. Many disciplines and/or departments do not use research metrics at all, because they are not appropriate in the context of their field. Keele recognises this and will not seek to impose the use of metrics in these cases.

This Policy Statement is deliberately broad and flexible to take account of the diversity of contexts, and is not intended to provide a comprehensive set of rules. To help put this into practice, we will provide an evolving set of guidance material with more detailed discussion and examples of how these principles could be applied. Keele is committed to valuing research and researchers based on their own merits, not the merits of metrics.

Further, research “excellence” and “quality” are abstract concepts that are difficult to measure directly but are often inferred from metrics. Such superficial use of research metrics in evaluations can be misleading, and assessment can become unethical when metrics take precedence over expert judgement, since the complexities and nuances of research, or of a researcher’s profile, cannot be fully quantified.

When metrics are applied irresponsibly in high-stakes contexts such as hiring, promotion, and funding decisions, they can incentivise undesirable behaviours: for example, chasing publications in journals with a high Journal Impact Factor (JIF) regardless of whether these are the most appropriate venues for publication, or discouraging open research practices such as preprints and data sharing.

Bibliometrics

Bibliometrics is a term describing the quantification of publications and their characteristics. It covers a range of approaches, such as the use of citation data to quantify the influence or impact of scholarly publications. When used in appropriate contexts, bibliometrics can provide valuable insights into aspects of research in some disciplines. However, bibliometrics are sometimes used uncritically, which can be problematic for researchers and for research progress when the context is inappropriate. For example, some bibliometrics have been commandeered for purposes beyond their original design: the JIF was reasonably designed to indicate a journal’s average citation rate over a defined time period, but is often used inappropriately as a proxy for the quality of individual articles.
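
To illustrate why the JIF is a journal-level average rather than an article-level measure, the minimal sketch below (in Python, with hypothetical figures) works through the standard two-year JIF calculation: citations received in a year to items the journal published in the previous two years, divided by the number of citable items published in those two years:

    # Hypothetical example: standard two-year JIF calculation.
    citable_items_2021_2022 = 100  # citable items the journal published in 2021-22
    citations_in_2023 = 300        # citations those items received during 2023

    # JIF for 2023 = citations in 2023 to 2021-22 items,
    # divided by the number of citable 2021-22 items.
    jif_2023 = citations_in_2023 / citable_items_2021_2022  # = 3.0

Because citation distributions are highly skewed, a handful of heavily cited articles typically account for most of the 300 citations in this example, while many articles in the same journal receive few or none; the journal-level average therefore says little about any individual article.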

Other Quantitative Metrics

Other quantitative metrics or indicators may include grant income, the number of postgraduate students or research staff, and so on. As with bibliometrics, these can provide useful information and insights, but can also be misapplied. For example, grant income can reflect the ability to obtain competitive funding, but what is typical will vary considerably across disciplines and specific research questions or methodologies. Moreover, it is better regarded as a research input rather than a research output; substantial grant income that does not lead to substantial knowledge generation (in the form of scientific insights, scholarly publications, impact, etc.) is arguably evidence of poor value for money or inefficiency. This illustrates why quantitative metrics should be used thoughtfully and in combination with other factors, in a discipline-appropriate way.

Principles

Keele is committed to applying the following guiding principles where applicable (e.g., in hiring and promotion decisions): 

  1. The quality, influence, and impact of research are abstract concepts that cannot be measured directly. There is no simple way to measure research quality, and quantitative approaches can only be interpreted as indirect proxies for it.
  2. Different fields have different perspectives of what characterises research quality, and different approaches for determining what constitutes a significant research output (for example, the relative importance of book chapters vs. journal articles). All research outputs must be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes.
  3. Both quantitative and qualitative forms of research assessment have their benefits and limitations. Depending on the context, the value of different approaches must be considered and balanced. This is particularly important when dealing with a range of disciplines with different publication practices and citation norms. In fields where quantitative metrics are neither appropriate nor meaningful, Keele will not impose their use for assessment in that area.
  4. When making qualitative assessments, we should avoid making judgements based on external factors such as the reputation of authors, or of the journal or publisher of the work; the work itself is more important and must be considered on its own merits.
  5. Not all indicators are useful, informative, or will suit all needs; moreover, metrics that are meaningful in some contexts can be misleading or meaningless in others. For example, in some fields or subfields, citation counts may help estimate elements of usage, but in others they are not useful at all.
  6. Avoid applying metrics to individual researchers, particularly those that do not account for individual variation or circumstances. For example, the h-index should not be used to directly compare individuals, because the number of papers and citations differs dramatically among fields and at different points in a career (see the sketch following this list).
  7. Ensure that metrics are applied at the correct scale of the subject of investigation, and do not apply aggregate level metrics to individual subjects, or vice versa (e.g., do not assess the quality of an individual paper based on the JIF of the journal in which it was published).
  8. Quantitative indicators should be selected from those that are widely used and easily understood, to ensure that the process is transparent and that they are applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.
  9. If goals or benchmarks are expressed quantitatively, care should be taken to avoid the metric itself becoming the target of research activity at the expense of research quality itself.
  10. New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the other principles listed here for more traditional metrics. Additionally, consider the sources and methods behind such metrics and whether they are vulnerable to being gamed, manipulated, or fabricated.
  11. Metrics (in particular bibliometrics) are available from a variety of services, with differing levels of coverage, quality, and accuracy, and these aspects should be considered when selecting a source for data or metrics. Where necessary, such as in the evaluation of individual researchers, choose a source that allows records to be verified and curated, to ensure they are comprehensive and accurate, or compare publication lists against data held in Keele’s own systems.
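
To make the point in principle 6 concrete, the minimal sketch below (in Python, with hypothetical citation counts) shows how the h-index is derived purely from ranked citation counts. It follows the standard definition, the largest h such that a researcher has h outputs each cited at least h times, and illustrates why values are not comparable across fields with different citation norms:

    def h_index(citations):
        """Return the largest h such that the researcher has
        h outputs each cited at least h times."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank   # this output still supports a larger h
            else:
                break      # remaining outputs are cited too rarely
        return h

    # Hypothetical citation counts for two equally productive researchers
    # working in fields with very different citation norms:
    high_citation_field = [120, 85, 60, 33, 20, 9, 4]
    low_citation_field = [12, 9, 6, 3, 2, 1, 0]

    print(h_index(high_citation_field))  # 6
    print(h_index(low_citation_field))   # 3

Both researchers have the same number of outputs with a similarly shaped citation profile, yet their h-indices differ by a factor of two simply because of field-level citation volumes. This is exactly the kind of cross-field comparison that principle 6 warns against.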

See Keele’s webpage on Research Integrity & Improvement for more information, resources, and details of the support Keele offers. Contact us at research.integrity@keele.ac.uk with any questions or suggestions.

This statement is licensed under a Creative Commons Attribution 4.0 International Licence. It was developed from the UCL Statement on the Responsible Use of Metrics.