Researchers embrace AI. The disclosure gap is the challenge.

By: Neha Trivedi, Cristiano Matricardi and Markus Kaindl, Tue Apr 21 2026

As AI becomes more embedded in day-to-day research practice, the question for the scholarly community is no longer whether it will be used, but how it should be governed. To better understand how researchers are navigating this shift, Springer Nature and TBI Communications surveyed over 1,000 researchers last year. The results offer useful insights into where confidence in AI use is growing, where it is lacking, and what role publishers can play in supporting responsible use.




AI has moved from a topic of debate to a tool of daily practice for most researchers. Our survey found that 69% of respondents already use AI tools frequently or occasionally in their research or publishing workflow.  

Only a small minority now expect not to use AI at all in the future, with researchers in mathematics more likely than those in other fields to say they do not plan to use AI.

Usage also varies by career stage. Early-career researchers are the most engaged. PhD students (87%) and early-career academics (82%) are the most likely to follow, recommend, or submit to an AI-integrated journal. Senior academics remain more cautious in their adoption and use. 



Fig. 1: How often researchers use AI in their research or publishing process (n = 1,017) 

Translation and summarisation were the most common use cases for AI, followed closely by manuscript writing and editing, and by data analysis and modelling. Notably, AI is no longer limited to administrative tasks: it now extends across most core research activities, from study design to data analysis.

 


Fig. 2: Ways researchers have previously used AI in research or publishing (n = 825) 


Researchers also see potential for AI in peer review. In our survey, 59% said AI could speed up feedback, and 41% expected it to improve the clarity of reviewer comments.  

Comfort with AI in early-stage tasks, for example reviewer matching and initial manuscript screening, is notably high. There is less support, however, for AI generating substantive analytical content without human oversight. 

There is also a notable pattern: humanities and social science researchers showed greater enthusiasm for AI-assisted publishing than many STEM fields, a finding that may reflect different disciplinary needs and workflows.

AI adoption is increasing but transparency over use is lagging 

Despite growing adoption, one third of respondents told us that they have never disclosed their AI use when submitting or publishing. Patterns again vary by career stage and discipline: 

  • Senior academics were more likely than other career stages to always disclose their use of AI (47% vs. a 39% average).

  • Respondents in STEM were less concerned with disclosing their use of AI (37% sometimes and 34% always disclose).

  • Respondents in medicine were particularly strict on disclosure (only 9% never disclose, and 70% always do).

Other surveys have identified lack of transparency as one of the biggest barriers to AI use, and our data point to a shared gap in expectations and guidance that needs to be addressed collaboratively with the research community.

Fig. 3 © Springer Nature


Where we are now 

These findings reflect a wider sector challenge: AI adoption is scaling faster than the frameworks designed to govern it. Across publishing, the response has been gathering pace, with publishers deploying a combination of tools, policies, and shared standards. Alongside investments in detection technologies and updated author guidelines, industry bodies are also stepping up. In September 2025, STM introduced recommendations for classifying and labelling AI use in research outputs. This work has since expanded into a broader consultation on the responsible use of scholarly content in generative AI (March 2026), opening dialogue across publishers, researchers, and technology providers. In parallel, the World Conference on Research Integrity in Vancouver (May 2026) has established a dedicated track to advance a global standard for AI disclosure in research.

At Springer Nature, we have been working to build that infrastructure across our portfolio. We have introduced AI-assisted tools at multiple stages of the publishing process, from manuscript preparation and peer review administration through to author support services, with a clear principle: AI should augment human editorial judgement, not replace it. This work is underpinned by our five AI principles covering dignity, fairness, transparency, accountability and privacy, which inform our author guidance and AI policies across the portfolio. 

Our full approach is set out on our AI hub. 

We know this is work in progress. The survey data are useful precisely because they tell us where the gaps remain, and where researchers need publishers to do more. 


What comes next 

One practical response to these findings is the new Cureus Journal of AI-Augmented Research. When surveyed, 61% of researchers said they would consider submitting to an AI-integrated journal, and 75% would follow it for updates. Researchers were also clear about what makes a venue credible: indexation in major databases (58%), a recognised editorial board, transparent AI policies, and human oversight of peer review (29%). These expectations are widely shared. They reflect a consistent message: AI is welcome in scholarly publishing, but only under conditions of transparency, human control, and clear accountability. These signals have a direct bearing on how publishers design journal offerings going forward. 

 


Fig. 4: What would give researchers confidence in a credible, high-quality journal (n = 714) 

This new publishing venue offers a ‘sandbox’ for AI research: a space for researchers to explore models of AI-supported peer review, reproducibility, and open science, alongside publishing AI-augmented, peer-reviewed research, with human editorial oversight at every stage.

Researchers are actively shaping how AI fits into their work. At Springer Nature, we see our role as working alongside the research community, offering clear standards, practical author guidance, and processes that are transparent about where AI is used and how.  

The aim of this journal is not to provide a definitive model, but to test and refine approaches collaboratively in the open. We’re excited about this journey and plan to share what we learn along the way. 



Neha Trivedi, Publishing Director, Springer Nature

As Publishing Director, Neha leads the strategic growth of the open access journals portfolio, driving high‑quality, accessible publishing across diverse scientific disciplines. She oversees journal development, supports global partnerships, and helps deliver scalable, cost‑effective publishing solutions that meet researchers’ needs worldwide. Previously, as Head of Scientific Content Curation, she led a team of 100+ writers producing authoritative scientific content. She is passionate about expanding access to trusted research and advancing global scholarship.


Cristiano Matricardi, Head of Content Innovation, Springer Nature
Cristiano holds a PhD in Materials Science, with a strong foundation in physics and a passion for science communication. He specialises in translating complex scientific and technological concepts into clear, compelling narratives for diverse audiences, from academic experts to the wider public. With over four years’ experience in editorial and content strategy, he works closely with internal teams and external partners to develop and implement AI‑enabled approaches that deliver practical value for research and publishing.


Markus Kaindl, Director, Content Innovation, Springer Nature

Markus is the Director of Content Innovation at Springer Nature, where he leads efforts in content innovation and research publishing. His expertise spans computational linguistics, data mining, and natural language processing. He has been instrumental in developing the SN Insights internal analytics solution and the SN SciGraph Linked Open Data knowledge graph, has led significant metadata migration projects, and has a strong background in semantic enrichment, including Named Entity Recognition (NER) and automatic document classification. His work at Springer Nature has been pivotal in fostering a data-driven approach to content management and has contributed to the development of the Springer Nature Hack Day series, which encourages developers to build on the publisher's data.
