Responsible Research

In January 2018, on the recommendation of its Research Committee, Keele University signed the San Francisco Declaration on Research Assessment (DORA). DORA is a set of recommendations designed to ensure that “…the quality and impact of scientific outputs…is measured accurately and evaluated wisely.” To date, signatories include seven other Russell Group universities, among them Imperial College London, the University of Manchester and UCL.

Keele University is committed to the critical role that peer review and expert judgement play in the assessment of research, but also recognises the value of quantitative metrics in complementing and supporting decision-making. The move to find new ways to assess the quality of research outputs is fully in line with the wishes of research funders, notably the European Commission, the Wellcome Trust and the Office for Students (previously HEFCE).

The University has developed the following set of principles outlining its approach to research assessment and management, including the responsible use of quantitative indicators. These principles draw upon DORA and are designed to encapsulate current good practice and to act as a guide for future activities.

As a responsible employer, Keele is committed to:

  • Being explicit about the criteria used to reach decisions on new appointments, tenure and promotion, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
  • For the purposes of research assessment, considering the value and impact of all research outputs (including datasets and software) in addition to research publications, and considering a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

Researchers should:

  • Make assessments based on scientific content rather than publication metrics, when involved in committees making decisions about funding, appointments, tenure, or promotion.
  • Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
  • Use a range of article metrics and indicators in personal/supporting statements as evidence of the impact of individual published articles and other research outputs.
  • Challenge research assessment practices that rely inappropriately on journal impact factors, or equivalent quantitative measures of qualitative properties, and promote and teach best practice that focuses on the value and influence of specific research outputs.
  • Encourage researchers towards 'open science' or 'reproducible research', acknowledging this may not have the same relevance/value across all disciplines.
  • Encourage and support interdisciplinarity.
  • Take into account the diverse range of possible research outputs beyond journal articles and recognise that some outputs are impossible to evaluate through standard journal metrics.
  • Be sensitive to factors that may result in legitimate delays in research publication, including personal factors that may have affected the applicant’s record of outputs.

David Amigoni and Claire Ashmore on behalf of Keele University DORA operational group

January 2019


Keele University has produced a Policy Statement and associated guidance on responsible research assessment including the appropriate use of quantitative research metrics.

This Policy Statement builds on a number of prominent external initiatives in the same area, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics and The Metric Tide report. The latter urged UK institutions to develop a statement of principles on the use of quantitative indicators in research management and assessment, where metrics should be considered in terms of: robustness (using the best available data); humility (recognising that quantitative evaluation can complement, but does not replace, expert assessment); transparency (keeping the collection of data and its analysis open to scrutiny); diversity (reflecting a multitude of research and researcher career paths); and reflexivity (updating our use of metrics to take account of the effects that such measures have had). These initiatives and the development of institutional policies are also supported or mandated by UK research funders and assessment exercises (e.g., UKRI, the Wellcome Trust, the REF).

Our aim is to balance the benefits and limitations of quantitative indicators such as bibliometrics, to create a framework for responsible research assessment at Keele, and to suggest ways in which such indicators can be used to deliver the ambitious vision for excellence in research and education embodied in the Keele strategy.

Responsible Use of Metrics

We recognise that Keele is a dynamic and diverse university, and no metric or set of metrics could universally be applied across our institution. Many disciplines and/or departments do not use research metrics in any way, because they are not appropriate in the context of their field. Keele recognises this and will not seek to impose the use of metrics in these cases.

This Policy Statement is deliberately broad and flexible to take account of the diversity of contexts, and is not intended to provide a comprehensive set of rules. To help put this into practice, we will provide an evolving set of guidance material with more detailed discussion and examples of how these principles could be applied. Keele is committed to valuing research and researchers based on their own merits, not the merits of metrics.

Further, research “excellence” and “quality” are abstract concepts that are difficult to measure directly but are often inferred from metrics. Superficial use of research metrics in research evaluations can therefore be misleading. Inaccurate assessment of research can become unethical when metrics take precedence over expert judgement, because the complexities and nuances of research, or of a researcher’s profile, cannot be fully quantified.

When applied in the wrong contexts—such as hiring, promotion, and funding decisions—irresponsible metric use can incentivise undesirable behaviours, such as chasing publications in journals with a high Journal Impact Factor (JIF) regardless of whether this is the most appropriate venue for publication, or discouraging the use of open research approaches such as preprints and/or data sharing.


Bibliometrics

Bibliometrics is a term describing the quantification of publications and their characteristics. It covers a range of approaches, such as the use of citation data to quantify the influence or impact of scholarly publications. When used in appropriate contexts, bibliometrics can provide valuable insights into aspects of research in some disciplines. However, bibliometrics are sometimes used uncritically, which can be problematic for researchers and research progress. For example, some bibliometrics have been commandeered for purposes beyond their original design: the JIF was originally developed to indicate a journal’s average citations per article over a defined time period, but is often used inappropriately as a proxy for the quality of individual articles.
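To make this concrete, the sketch below (using entirely hypothetical citation counts, not real journal data) shows how a JIF-style average can be dominated by a single highly cited article, and so say little about a typical article in the journal:

```python
# Sketch with hypothetical citation counts (not real journal data):
# a JIF-style mean can be dominated by one highly cited article.
from statistics import mean, median

# Citations received by ten articles in a hypothetical journal
citations = [0, 0, 1, 1, 2, 2, 3, 4, 8, 59]

jif_style_average = mean(citations)  # = 8, pulled up by the one outlier
typical_article = median(citations)  # = 2, closer to a "typical" article

print(jif_style_average, typical_article)
```

Here the mean (the quantity a JIF-like metric reports) is four times the median, so judging any individual article by the journal-level figure would be misleading.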

Other Quantitative Metrics

Other quantitative metrics or indicators may include grant income, numbers of postgraduate students or research staff, and so on. As with bibliometrics, these can provide useful information and insights, but they can also be misapplied. For example, grant income can reflect the ability to obtain competitive funding, but what is typical will vary considerably across disciplines and specific research questions or methodologies. Moreover, it is better regarded as a research input rather than a research output; substantial grant income that does not lead to substantial knowledge generation (in the form of scientific insights, scholarly publications, impact, etc.) is arguably evidence of poor value for money or inefficiency. This illustrates why quantitative metrics should be used thoughtfully, in combination with other factors, and in a discipline-appropriate way.


Guiding Principles

Keele is committed to applying the following guiding principles where applicable (e.g., in hiring and promotion decisions):

  1. Quality, influence, and impact of research are abstract concepts that cannot be measured directly. There is no simple way to measure research quality, and quantitative approaches can only be interpreted as indirect proxies for it.
  2. Different fields have different perspectives of what characterises research quality, and different approaches for determining what constitutes a significant research output (for example, the relative importance of book chapters vs. journal articles). All research outputs must be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes.
  3. Both quantitative and qualitative forms of research assessment have their benefits and limitations. Depending on the context, the value of different approaches must be considered and balanced. This is particularly important when dealing with a range of disciplines with different publication practices and citation norms. In fields where quantitative metrics are neither appropriate nor meaningful, Keele will not impose their use for assessment in that area.
  4. When making qualitative assessments, we should avoid making judgements based on external factors such as the reputation of authors, or of the journal or publisher of the work; the work itself is more important and must be considered on its own merits.
  5. Not all indicators are useful, informative, or will suit all needs; moreover, metrics that are meaningful in some contexts can be misleading or meaningless in others. For example, in some fields or subfields, citation counts may help estimate elements of usage, but in others they are not useful at all.
  6. Avoid applying metrics to individual researchers, particularly those that do not account for individual variation or circumstances. For example, the h-index should not be used to directly compare individuals, because the number of papers and citations differs dramatically among fields and at different points in a career.
  7. Ensure that metrics are applied at the correct scale of the subject of investigation, and do not apply aggregate level metrics to individual subjects, or vice versa (e.g., do not assess the quality of an individual paper based on the JIF of the journal in which it was published).
  8. Quantitative indicators should be selected from those that are widely used and easily understood to ensure that the process is transparent and they are being applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.
  9. If goals or benchmarks are expressed quantitatively, care should be taken to avoid the metric itself becoming the target of research activity at the expense of research quality itself.
  10. New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the other principles listed here for more traditional metrics. Additionally, consider the sources and methods behind such metrics and whether they are vulnerable to being gamed, manipulated, or fabricated.
  11. Metrics (in particular bibliometrics) are available from a variety of services, with differing levels of coverage, quality, and accuracy, and these aspects should be considered when selecting a source for data or metrics. Where necessary, such as in the evaluation of individual researchers, choose a source that allows records to be verified and curated to ensure records are comprehensive and accurate, or compare publication lists against data from the Keele systems.
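As an illustration of principle 6, the following sketch (all citation counts are hypothetical) computes an h-index and shows how strongly it depends on field citation norms rather than on any difference in quality:

```python
# Hypothetical sketch of the h-index calculation, illustrating why it tracks
# field- and career-dependent citation volumes and should not be used to
# compare individuals directly (principle 6).

def h_index(citations):
    """A researcher has index h if h of their papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Two hypothetical researchers of comparable standing in different fields:
high_citation_field = [50, 40, 33, 28, 20, 15, 12, 9, 7, 5]
low_citation_field = [9, 7, 5, 4, 3, 2, 1]

print(h_index(high_citation_field))  # 8
print(h_index(low_citation_field))   # 4
```

The gap (8 vs. 4) here reflects only citation norms, which is why principles 6 and 7 caution against cross-field or individual-level comparisons based on such figures.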

See Keele’s webpage on Research Integrity & Improvement for more information, resources, and details of the support Keele offers. Contact us with any questions or suggestions.

This statement is licensed under a Creative Commons Attribution 4.0 International Licence. It was developed from the UCL Statement on the Responsible Use of Metrics.

Keele University Statement on Transparency in Research

Approved at University Research Committee (May 2020)

Keele University is committed to transparency and rigour in research across all disciplines, and to continue to improve the ways in which we conduct research. This is part of our broader efforts to enhance the quality of research practice, as outlined in related Keele University positions:

  • Keele’s Research Strategy emphasises the highest standards of rigour and integrity in research, including a commitment to open research and ensuring open access to research outputs.
  • Keele’s Statement on Research Integrity sets out our commitment to the highest standards of integrity in all aspects of research, as well as how we meet our commitment to the UUK Concordat to Support Research Integrity.

Expectations of Keele Researchers

Approaches to Transparency

We recognise that actions that support transparency in research and scholarship vary considerably across disciplines and methodologies. Therefore, we expect researchers to pursue transparency through the most effective and appropriate means, according to the nature of their research. In addition to pursuing transparency more broadly, there remains the expectation that all potential conflicts of interest should be declared, in line with Keele’s policy.

Open Research

Making research “open” is a core part of research transparency, and engagement with open research practices is rewarded in promotion decisions. We recognise that there is significant variation across disciplines, which influences how appropriate open research practices may be. With this in mind—as far as is possible and appropriate—we expect researchers to:

  • make their research methods, software, raw data, and outputs open and available at the earliest possible point in the research process;
  • describe their data according to FAIR data principles, which ensures data are Findable, Accessible, Interoperable, and Reusable;
  • deposit their outputs in open access repositories:
    • publications in repositories such as Symplectic (for final post-peer review author copies of manuscripts) and preprint servers (for pre-peer review and also for final post-peer review author copies of manuscripts); 
    • research data in repositories such as the Keele Data Repository. Where subject-specific repositories are used, we recommend repositories that meet Nature Scientific Data’s trusted repository criteria, such as those on its list of recommended repositories;
    • software in suitable repositories, such as GitHub.

We note that exceptions exist where research data should not or cannot be shared, and we therefore recognise the principle of “as open as possible; as closed as necessary”. There are many examples where data should not be open. For instance:

  • researchers may be allowed access to private archives on the condition that the records accessed are not made open;
  • data pertaining to research participants should only be shared when this is in line with ethics and privacy policies associated with the research, consent has been obtained in line with guidelines, and the data can be fully anonymised;
  • in some cases research participants may have agreed to certain data—such as merged data—being shared but not individual data, such as transcripts;
  • the data could be misused by others with the intention of causing harm;
  • it may not be possible to share raw data in full for practical reasons, such as the size of the data. Data should be shared at a level of granularity that is feasible, while also enabling research methods or results to be reproduced as comprehensively as possible;
  • it may be necessary to delay publication of research outputs and research data to allow for protection of intellectual property, for example through patenting;
  • publication of research data or outputs may breach confidentiality of collaborating parties or require their consent under the terms of a collaboration agreement.


Reproducibility

The reproducibility of both research methods and research results (see Appendix for definitions) is critical to research in most contexts, particularly in the experimental sciences with a quantitative focus. Reproducibility forms part of Keele’s wider commitment to transparency and rigour in all of our research. We recognise that behaviours in support of transparency and rigour vary considerably across disciplines and methodologies, and we therefore encourage our researchers to adopt actions most appropriate to their disciplines.

It may be more useful to refer to transparency or academic rigour in the use of research methods and in the whole research process—from the collection of evidence or thoughts through analysis to final conclusions and the publication of findings.

The reproducibility of research methods is required for research to be replicated (see Appendix for definitions). This is essential in research contexts where findings must be robust and reproducible in order to form a solid foundation on which to build further knowledge. In research contexts where reproducibility is possible and appropriate, we strongly encourage researchers to use measures that support it. These include (but are not limited to):

  • pre-registration of study procedures and analysis plans, and use of registered reports where appropriate;
  • transparent reporting of research in line with recognised community guidelines;
  • disclosure of all tested conditions, analysed measures and results;
  • transparency around statistical methods (including sample size planning and statistical assumptions and pitfalls);
  • use of preprints to facilitate the timely communication of scholarly output;
  • conducting—and valuing in promotion and hiring decisions—replication studies;
  • publication of “null” findings.

Munafò et al. (2017) have produced a summary of initiatives that support reproducibility. For a comprehensive transparency checklist that can be used to improve and document the transparency of research outputs, see Aczel et al. (2020). 

Keele’s Work to Promote Transparency in Research

Keele is committed to supporting transparency in research and to developing approaches to improve the quality of the research we produce. This includes:

  • continuing to support open research and engaging with the necessary cultural change;
  • developing governance processes to enable research outputs to be found, accessed, and reused when open sharing is not appropriate;
  • developing additional training—including in research methods—and considering how to promote transparency in academic teaching at all levels;
  • improving the sharing of knowledge and best practice across Keele through Faculty Champions for Research Integrity.

See Keele’s webpage on Research Integrity & Improvement for more information, resources, and details of the support Keele offers. Contact us with any questions or suggestions.

This statement is licensed under a Creative Commons Attribution 4.0 International Licence. It is based on UCL’s Statement on Transparency in Research, November 2019.


See below for Appendix containing definitions of terms used in this paper.


Appendix — Definitions

This appendix provides definitions of key terms used in this paper.


Transparency

Research is transparent if the methods, analysis and data are reported and disseminated openly, clearly and comprehensively.


Research Integrity

Research has integrity if it has been conducted, analysed, reported, and disseminated honestly and to a high standard, ensuring that the research and its findings can be trusted.

Reproducibility of Results

The findings of a research study are reproducible if the same inferential outcome is obtained in an independent analysis using the original raw data and following the analysis method described in the original study.

Reproducibility of Methods

A research investigation is reproducible if sufficient detail about the methods and data used is provided, so that the study can be independently repeated as it was originally conducted.

Replication Study

A replication study aims to test the reliability of a prior study’s findings. It usually involves repeating the original study using the same methods, but involving different data or a new context, to confirm whether the study’s conclusions are applicable to other circumstances. Alternatively, a replication study may use the original data and context in an effort to reproduce the original study and its results.


Replicability

A research study is replicable if its results can be obtained in an independent study using the same methods as those in the original study, but using different data or a new context.


Robustness

Research findings are robust if they can be consistently produced a) across a range of tests within a research study, and/or b) across different research studies that involve variations in assumptions, variables or procedures.