Research assessment has become a central topic in conversations about the future of science. Traditionally dominated by quantitative metrics such as Journal Impact Factors and citation counts, it is increasingly being reimagined to reflect a broader set of values – such as openness, collaboration, societal impact, and research integrity. As a result, research assessment reform has become a multifaceted and evolving concept, sitting at the intersection of many academic, cultural, and policy-driven priorities.
At the recent Metascience Conference in London, a panel of experts from across the academic community – representing funders, institutions, publishers and advocacy groups – came together to explore the complexities of reforming how research is evaluated.
This blog post outlines some of the key themes that emerged from the discussion.
In his introduction to the session, James highlighted that research assessment reform is a convening point for many policy initiatives: open science, integrity and ethics, diversity and inclusion, and others. However, this makes the conversation very complex, with a number of tensions and interconnections that are often difficult to navigate.
From her experience as Vice Chair of CoARA, Elizabeth echoed this sentiment. While she stressed the importance of research assessment reform efforts to the success of many related agendas, she acknowledged that the range of separate reform efforts makes it harder to define and measure the “success” of individual initiatives as they all work together to achieve common goals.
Alex followed this by sharing some findings from his recent preprint, exploring lessons learned from the Netherlands, which has one of the world’s most advanced national assessment reform initiatives. He described existing assessment systems as a “wicked problem”, arising as a symptom of many other systemic issues such as outdated career structures, problematic leadership, entrenched research cultures, and competitive funding mechanisms. This means that in order for reform initiatives to be truly effective, they need to address many different challenges simultaneously – something that is far easier in theory than in practice.
Panellists acknowledged that progress has been made in recent years toward more holistic and inclusive evaluation systems. There has been strong mobilisation of support, with growing buy-in from institutions, funders, and publishers, and many policy changes have been introduced aiming to reduce reliance on publication metrics.
However, reforms are by no means ubiquitous or complete – particularly at the researcher level, where much evaluation takes place but culture is more deeply ingrained. Ed shared data from a recent Springer Nature survey of more than 6,600 researchers, which found that most respondents are currently assessed entirely or mostly on metrics (such as the number of papers published). The majority, however, feel that the ideal research assessment would be less reliant on research outputs and would involve an equal balance of quantitative and qualitative measures.
Alex and Kelly both highlighted the difficulty of encouraging behaviour change at all levels of the research ecosystem. Kelly explained that leadership at many institutions have reached out to her after signing DORA, unsure of how to get started with implementing its principles – especially because there is no “one-size-fits-all” approach. She stressed the importance of providing practical resources, for example DORA’s recently published guide to implementing responsible research assessment. Alex added that we also need more effective ways of targeting researchers, specialist research fields and “gatekeepers” with reform messages, to raise awareness among those who are not yet engaged with the conversation.
Given the panel’s strong representation from the open science community, a key focus was how research assessment can better support and incentivise open science practices.
Bregt shared the outcomes from a survey of Science Europe’s member organisations, which found that open science elements are widely included in funding requirements for research projects – however, approaches vary, and can often be focused on more established OS practices such as open access publishing and FAIR data. Zoé emphasised the role that funders could play in shifting norms by embedding broader open science criteria into grant requirements and project evaluations, something ANR is actively pursuing as part of its commitments to DORA and CoARA.
Both Bregt and Zoé noted that there are still open questions about how best to include open science practices in research assessment: Should open science be assessed through dedicated criteria or through narrative descriptions within scientific quality criteria? Should evaluations focus on individuals, institutions, or both? And how can we effectively and collectively implement and monitor these changes?
Questions like the above led to some debate amongst the panellists, underscoring the importance of sustained dialogue and collaboration across the research ecosystem.
Elizabeth stressed the need for “global collective action”, outlining CoARA’s approach to driving reform by building a community united by shared commitments and working towards one ultimate outcome: a research ecosystem that provides the best conditions for the best research to take place. This involves working closely with other research assessment reform initiatives, as well as related movements like open science. Kelly echoed this from DORA’s perspective, noting that sharing learnings is key to avoiding duplication of effort and moving forward more quickly.
From the publisher side, Ed reinforced that Journal Impact Factors should not be used to assess individual researchers and their contributions to research, and expressed that Springer Nature wants to continue collaborating with the broader research community to communicate this position and facilitate change. He also urged that as we move to reform research assessment, we make sure to include the perspectives of stakeholders everywhere, particularly those in the Global South (noting that survey respondents from Asia and Africa were more likely to report that they are assessed on their contribution to solving global challenges and to the national interest, versus those in North America and Europe).
The session closed with reflections on what successful research assessment reform might look like. Each speaker offered a different perspective, but all pointed toward a more thoughtful, values-driven system.
Bregt and Kelly spoke to the importance of collaboration, bringing together different goals to accelerate progress and bridge the gap between policy and practice. With metascience being the theme of the conference, Alex and Zoé added that robust research on research is key to ensuring we are moving in the right direction. And finally, Elizabeth and Ed shared their visions for an ideal future: one where there are no longer any financial incentives to create fake papers, and the research community is measuring what we truly value.
Reform starts with understanding. Download the white paper The State of Research Assessment to learn what researchers really think and how we can move forward together.