Because they are quantitative measures, research metrics are limited in what they can assess. Using research metrics responsibly requires acknowledging and understanding those limitations. Below are two key guidelines to consider when using research metrics:
Research metrics should not be relied upon as the only indicators of quality or impact. Numbers are not replacements for actually reading and evaluating research works. They are not substitutes for peer review, where experts in the field or discipline evaluate the quality of particular research works or the quality of researchers' contributions to their disciplines. Research metrics are not always accurate reflections of the impact a work has had on a field of study or within the public sphere.
When using research metrics, it is important to account for differences in publication and citation practices by discipline. For example, humanities citations may accumulate at a much lower rate than citations in disciplines such as physics or medicine. Similarly, historians may value publishing books over publishing articles, and computer scientists may need their conference papers to be counted and valued alongside their journal articles. A ranking or metric that does not account for these discipline-specific publication and citation differences may, at best, not tell the whole story or, at worst, be misleading.
These basic guidelines were drawn from the following two well-known statements outlining responsible research assessment, each of which has been signed or publicly endorsed and adopted by many organizations and individuals. Each statement contains additional, more nuanced guidelines:
A number of groups have created resources for practically applying responsible research assessment principles and guidelines. The INORMS Research Evaluation Group has created a broad framework for responsible research assessment with the acronym of SCOPE.
Image from INORMS Research Evaluation Group (2021): a one-page overview of the five-stage SCOPE Framework.
START with what you value about the entity you are evaluating. Is it the quality of research produced, or something else? Avoid basing your evaluation only on readily available data sources (and therefore the readily available metrics); instead, work out the steps and tools needed to evaluate what you actually value, rather than valuing whatever the available metrics happen to measure.
CONTEXT considerations take into account disciplinary differences and the size of the entity you are evaluating.
OPTIONS for evaluating stresses the importance of using both quantitative and qualitative measures, and warns against using quantities (for example, the number of citations an article receives) as indicators of quality.
PROBE deeply involves asking who your evaluation approach might discriminate against, how it might be gamed, and what the unintended consequences of your evaluation might be.
EVALUATE your evaluation asks you to reflect on your evaluation and determine whether it achieved its aim. In other words, use SCOPE to reflectively evaluate your own research evaluation.
Among discipline-specific frameworks and resources for responsible research assessment, the following resources focused on the social sciences and humanities may be especially relevant to the Department of Languages, Literatures, and Linguistics: