4 July 2018. Opinion. Nature Magazine. A better measure of research from the global south. Funders Jean Lebel and Robert McLean describe a new tool for judging the value and validity of science that attempts to improve lives.
- Does the current evaluation approach scrutinize and give equal recognition to the local researcher who focuses on specifics and the researcher who generalizes from afar?
- Does the current approach acknowledge that incentives are different for local and foreign researchers, and that those incentives affect research decisions?
- Are we adequately measuring and rewarding research that is locally grounded and globally relevant?
The answer to all of these questions is no.
With the support and leadership of partners across the global south, the IDRC decided to try something different. The result is a practical tool called Research Quality Plus (RQ+).
Ofir, Z., Schwandt, T., Duggan, C., and McLean, R. (2016). RQ+ Research Quality Plus: A Holistic Approach to Evaluating Research. Ottawa: International Development Research Centre (IDRC).
- The tool recognizes that scientific merit is necessary, but not sufficient.
- It acknowledges the crucial role of stakeholders and users in determining whether research is salient and legitimate.
- It focuses attention on how well scientists position their research for use, given the mounting understanding that uptake and influence begin during the research process, not only afterwards.
The RQ+ approach can support the planning, management, and learning processes of a research project, programme, or grant portfolio. A brief overview is given in Research Quality Plus (also available in Spanish).
A full introduction to RQ+ is provided in the IDRC position paper:
Research Quality Plus: A Holistic Approach to Evaluating Research
Peer review is by definition an opinion. Ways of measuring citations — both scholarly and social — tell us about the popularity of published research. They don’t speak directly to its rigour, originality or usefulness. Such metrics tell us little or nothing about how to improve science and its stewardship. This is a challenge for researchers the world over.
The challenge is compounded for researchers in countries in the global south. For instance, the pressure to publish in high-impact journals is a steeper barrier because those journals are predominantly in English and biased towards publishing data from the United States and Western Europe 6. With the exception of an emerging body of Chinese journals, local-language publications are broadly deemed lower tier — even those published in European-origin languages such as Spanish, Portuguese or French.
The metrics problem is further amplified for researchers who work on local challenges. Climate adaptation research is a case in point. Countries in the global south are on the front lines of global warming, where context-appropriate adaptation strategies are crucial. These depend on highly localized data on complex factors such as weather patterns, biodiversity, community perspectives and political appetite. These data can be collected, curated, analysed and published by local researchers. In some cases, it is crucial that the work is done by them. They speak the necessary languages, understand customs and culture, are respected and trusted in communities and can thus access the traditional knowledge required to interpret historical change. This work helps to craft adaptations that make a real difference to people’s lives. But it is also fundamental to high-level meta-research and analysis that is conducted later, far from the affected areas 7.
The IDRC worked with an independent specialist to conduct a statistical meta-analysis using blinded data (see ref. 9 for a review). It aggregated results from our 7 independent evaluations of 170 components from 130 discretely funded research projects in the natural and social sciences, undertaken in Africa, Asia, Latin America, the Caribbean and the Middle East 10.
- Research housed wholly in the global south proved scientifically robust, legitimate, important and well-positioned for use.
- Researchers in the region scored well across each of these criteria (higher, on average, than the northern and north–south-partnered research in our sample). In other words, those most closely linked to a particular problem seem to be well placed to develop a solution. (See Figure S3 in Supplementary Information.)
- This finding challenges assumptions that researchers in the north automatically strengthen the capacity of partners in the south 11.
- There are many positive reasons to support north–south research partnerships, but the data suggest that we must be strategic to optimize their impact.
- Too many funders assume a trade-off: that research efforts in which teams receive training and skills development inevitably yield lower-quality research.
- The meta-analysis found no such trade-off. In fact, we found a significant positive correlation between scientific rigour and capacity strengthening.
- This suggests that research requiring a focus on capacity strengthening need not be avoided out of a desire for excellence. Indeed, it implies that the two can go hand in hand.
- In the fast-paced world of policy and practice, findings need to get to the right people at the right time, and in ways that they can use (see ‘Co-producing climate adaptations in Peru’). We often hear of tension between sample saturation or trial recruitment and the decision-making cycle of policymakers or industry implementers.
- Happily, the meta-analysis found a strong positive correlation between how rigorous research is and how well it is positioned for use.
- This finding builds the case for investing in scientific integrity, in even the most applied and translational programmes.
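The correlation findings above can be illustrated with a small sketch. The scores below are invented for demonstration only (they are not IDRC data, and the RQ+ rubric dimensions are simplified to two); the sketch just shows the kind of calculation a meta-analysis of rubric scores involves: a Pearson correlation between projects' "scientific rigour" scores and their "positioning for use" scores.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rubric scores for six projects (invented for illustration).
rigour         = [6, 7, 5, 8, 4, 7]
positioned_use = [5, 7, 4, 8, 3, 6]

r = pearson(rigour, positioned_use)
print(f"Pearson r = {r:.2f}")  # close to +1 for these invented scores
```

A value of r near +1 would correspond to the "strong positive correlation" the article reports between rigour and positioning for use; real analyses would also test significance and control for study characteristics.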
Co-producing climate adaptations in Peru. The project mapped hotspots across the region that were susceptible to climate change, and convened discussions with farmers and fishers about how they could adapt schedules and techniques to minimize its impact. The team did not rush to publish the research in top-tier Western journals, partly because of the English-language barrier but largely because of the urgency of the problem. The research outputs needed to be immediately understandable and usable, so the team rapidly published its findings in working papers and reports (many of which were collected in a Spanish-language book). These were immediately accessible to those in local government who needed the evidence to steer the response. As such, predominant metrics do not capture the value of this work.
IDRC is planning another retrospective assessment in 2020. It is already exploring how RQ+ can be used for grant selection, for monitoring the progress of individual projects, and for communicating IDRC's organizational objectives to funding partners and applicants.
IDRC encourages other funders and institutions to improve their evaluations in three ways:
- consider research in context;
- accept a multidimensional view of quality;
- and be systematic and empirical about evidence collection and appraisal.
“It’s time science turned its greatest strengths on itself — experiment, appraise, debate and then improve.”
Measuring research impact in Australia (2018, 7 pages)
Andrew Gunn (University of Leeds, United Kingdom) and Michael Mintrom (Monash University)
This paper reviews the policy journey of research impact in Australia from the proposed, but never implemented, Research Quality Framework (RQF) to the National Innovation and Science Agenda (NISA). The analysis highlights the controversial nature of research impact assessment and the political and methodological challenges that have accompanied its implementation.
Source: PAEPARD FEED