What happens when the systems meant to reward research excellence become one of the causes of a loss of trust in science? Does an overreliance on publication metrics mean that researchers feel compelled to target those metrics? These questions shaped one of the discussions during COPE's Publication Integrity Week, an annual initiative that brings together the global research publishing community to highlight issues that define good practice.
As a COPE Trustee Board member, I had the pleasure of moderating the session Boosting Incentives for Ethical Conduct. With three fascinating speakers, we asked ourselves:
Why do incentives matter for trust in science?
Incentives shape behaviour. When institutions offer cash bonuses or base promotions on the number or prestige of publications, the pressure to publish can lead to shortcuts, and sometimes misconduct. This is a reality for many researchers today; my team and I have heard them express these pressures again and again during our research integrity investigations.
In the session, our panel of international experts discussed how the current research assessment systems influence behaviour, the unintended consequences of incentives, and the work underway to rethink how we reward researchers.
What we can learn from the Cobra effect
Achal Agrawal, founder of India Retraction Watch, offered a striking analogy for the use of publication counts as prerequisites for bonuses or promotions: the "Cobra effect". In colonial India, a bounty was introduced to reduce the number of venomous cobras. Initially, the programme worked, but soon people began breeding cobras to claim the rewards. When the programme ended, breeders released their snakes, leaving the region with an even bigger problem.
This tale mirrors what happens in academia when incentives prioritise quantity over quality. Achal shared examples of universities offering direct payment for papers and patents, alongside punitive measures for those who fail to meet unrealistic research goals. Combined with rankings that heavily weight research output, researchers face immense pressure to publish, sometimes at any cost.
These pressures, he argued, are driving misconduct and inflating retraction rates worldwide.
Achal warned: “When you reward papers, people start gaming the system.”

“Incentives are powerful,” he added. “If we don’t monitor and evolve them, they will lead to unintended consequences.”
Achal closed with a call to action, and many of his recommendations align closely with our work to promote research integrity, transparency and accountability.
Through Springer Nature’s India Research Tour, in collaboration with the Ministry of Education, Government of India, we delivered workshops and training across institutions to promote ethical, transparent and inclusive research practices and to strengthen research integrity awareness.
We also collaborate with organisations such as ORCID, DORA, STM and COPE to develop shared solutions and resources, including guidance for publishers on how to clearly communicate retractions, which are a neutral mechanism for correcting the scholarly literature. These collaborations help create the interconnected systems required for incentives that promote integrity and good practice.
Moving beyond metrics
Sanli Faez, national manager of the Dutch Recognition and Rewards programme, challenged the reliance on traditional metrics, which tend to focus on counting papers and citations while ignoring the many different contributions that drive discovery and advance knowledge.
“What we aim for is concrete benefit to society and reliable science. But that’s not what we’re rewarding.”
Drawing on Bruno Latour’s work, Sanli explained that researchers’ day-to-day incentives are social rather than philosophical: “What scientists want is credit, from peers, because credit unlocks resources to keep doing the work.”
This is the traditional credit cycle: initial recognition leads to resources, which enable experiments and data collection, which turn into publications and presentations, generating more recognition and resources. However, this model does not reflect the reality of modern science, with its growth of multidisciplinary work and blurred discipline boundaries.
Sanli introduced a compelling idea: moving away from the traditional credit cycle to a credibility network, a cloud of contributions where value is distributed across people, teams and institutions.
He added, “We now conduct research in a more diverse and generally collaborative way, through countless micro-contributions: datasets, code snippets, preprints, peer reviews, blog posts, mentorship and collaborative discussions.”
These smaller, interconnected outputs reflect the reality of interdisciplinary research and open science. However, the current reward systems struggle to aggregate and meaningfully reward these contributions.
The Netherlands is considering bold steps: removing H-index reporting from grant proposals and attempting to measure the quality of the different types of academic work across disciplines. Sanli said, “This moves us away from marketing or promoting the individual researcher and towards honouring the collective research output – the cloud of micro-contributions, advancing discovery”.
At the institutional level, the Dutch Recognition and Rewards programme is creating room for diverse talent profiles, valuing the contributions of teachers, researchers, managers, data scientists and more, and rewarding team science and cross-disciplinary collaboration. The Dutch Recognition and Rewards culture barometer, which surveyed over 8,000 researchers, gives insight into the success of the programme and how it has been recognised, shared and experienced in the workplace.
These approaches resonate with my experience in integrity and with the position Springer Nature has taken. As a signatory of the Declaration on Research Assessment (DORA), we have long advocated for a more balanced approach to research assessment, one that rewards integrity, openness and collaboration. Our recent white paper on the state of research assessment reflects this commitment, drawing on insights from over 6,600 researchers to understand their experiences of how their contributions are evaluated.
We also continue to support the wider researcher community with free-to-access training that promotes good research practices, from the fundamentals of research integrity to conducting peer review. By working collaboratively across the ecosystem, we aim to shape systems that reflect the realities of modern science and strengthen trust in research.
Recognising micro-contributions means building systems that can capture and credit them. It’s not just about fairness; it’s about creating incentives that align with the realities of modern research.
Redesigning academic rewards
Caitlin Schleicher introduced the MA³ Challenge, a $1.5 million initiative to help institutions rethink hiring, promotion and tenure systems, with the goal of rewarding bold strategies that foster a culture of openness, collaboration and transparency. She highlighted Stanford’s School of Medicine as a case study: faculty can now include an optional CV addendum showcasing open practices such as data sharing and reproducibility, while impact factors are explicitly excluded.
“Show us your open data practices, rigor and reproducibility, not your H-index,” Caitlin emphasised.
Key points of agreement
All the speakers made compelling cases and, unsurprisingly, converged on several key points.
What struck me most was the sense of urgency and optimism. Reforming incentives is complex, but the examples our panellists shared show that progress is possible when institutions, funders, publishers and other stakeholders across the sector work together. To build trust in research, we need systemic reforms that make ethical conduct the rational choice. At Springer Nature, we are committed to supporting this shift through advocacy, resources and training that empower researchers to conduct research with integrity. Learn more about our work here.