Using ORCID, DOI, and Other Open Identifiers in Research Evaluation

journal contribution
posted on 23.10.2018, 09:04 by Laurel Haak, Alice Meadows, Josh Brown
An evaluator's task is to connect the dots between a program's goals and its outcomes. This can be accomplished through surveys, research, and interviews, and is frequently performed post hoc. Research evaluation is hampered by a lack of data that clearly connect a research program with its outcomes and, in particular, by ambiguity about who has participated in the program and what contributions they have made. Manually making these connections is very labor-intensive, and algorithmic matching introduces errors and assumptions that can distort results. In this paper, we discuss the use of identifiers in research evaluation—for individuals, their contributions, and the organizations that sponsor them and fund their work. Global identifier systems are uniquely positioned to capture global mobility and collaboration. By leveraging connections between local infrastructures and global information resources, evaluators can tap data sources that were previously either unavailable or prohibitively labor-intensive to use. We describe how identifiers, such as ORCID iDs and DOIs, are being embedded in research workflows across science, technology, engineering, arts, and mathematics; how this is affecting data availability for evaluation purposes; and we provide examples of evaluations that are leveraging identifiers. We also discuss the importance of provenance and preservation in establishing confidence in the reliability and trustworthiness of data and relationships, and in the long-term availability of metadata describing objects and their inter-relationships. We conclude with a discussion of opportunities and risks for the use of identifiers in evaluation processes.
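One practical consequence of building evaluations on identifiers is that the identifiers themselves can be checked mechanically before records are linked. As an illustration, the sketch below validates the format and check digit of an ORCID iD, whose final character is an ISO 7064 MOD 11-2 checksum as described in ORCID's support documentation. The function names are ours, and this is a minimal sketch rather than an official ORCID library.

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check digit for the first 15 ORCID digits."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    # A result of 10 is represented by the letter 'X' in the iD.
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Check a 16-character ORCID iD such as '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# The well-known ORCID sandbox iD for "Josiah Carberry" passes:
print(is_valid_orcid("0000-0002-1825-0097"))  # prints "True"
```

A check like this catches transcription errors at data-entry time, which is one reason identifier-based linkage is more reliable than matching on names alone.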

In evaluation studies, we try to understand cause and effect. As research evaluators, our goal is to determine whether programs are effective, what makes them effective, what adjustments would make them more effective, and whether these factors can be applied in other settings. We start off with lofty goals and quickly descend into the muck and mire: the data—or lack thereof. In many cases, programs do not have clearly stated goals. Even when goals are stated, frequently data were not collected to monitor progress or outcomes. From the perspective of a research scientist, this approach is backwards. Researchers start with a hypothesis, develop a study process with specific data collection and controls, and then analyze the data to test whether their hypothesis is supported.

Nevertheless, we soldier on (one approach is described by Lawrence, 2017). Evaluators work with research program managers to develop frameworks to assess effectiveness. These frameworks, usually in the form of logic models, help establish program goals and focus the questions to be addressed in the evaluation. Again, from lofty goals, we have to narrow and winnow our expectations based on the available data (Lane, 2016). Many program evaluations use journal article citations as the sole source of data, because citations are often the only data available. This is because one individual, Eugene Garfield, had the prescience and fortitude to create a publication citation index over 60 years ago (Garfield, 1955). This rich and well-curated index underlies much of the science, technology, engineering, arts, and mathematics (STEAM) research evaluation work, and underpins a number of metrics, indicators, and entire industries. However, by focusing on papers, it overlooks two important components of research: people and organizations. Its almost exclusive use for evaluation purposes has skewed how we think about research: as a factory for pumping out journal articles.

This emphasis on one contribution type has affected academic careers, since promotion and tenure review favor the subset of prolific publishers (the survivors in the “publish or perish” culture). It has also affected the nature of scholarly contributions. Work by Wang et al. (2017) has provided evidence that novel or blue-skies thinking has become less prevalent in the literature because of the emphasis on publication at all costs. Research is—and should be—about so much more than publications or journal articles.

We need to expand how we think about measuring research. We need to include people in our analyses (Zolas et al., 2015). We need to be thoughtful about the language we use to describe research contributions—as Kirsten Bell notes in her recent post on the topic, “outputs” is hardly the innocent bureaucratic synonym for “publications” or “knowledge” it might appear (Bell, 2018). We need to consider more “humanistic” measures of achievement, such as those proposed by the HuMetricsHSS initiative (Long, 2017). And we need to learn from Garfield and have the vision and fortitude to build infrastructure to support how we understand research and, through that, how we as a society support and encourage curiosity and innovation (Haak et al., 2012a).