AI in research and publishing: What institutions need to know from KRAF 2025

The Link
By: Soon Kim, Tue Feb 24 2026

Artificial intelligence (AI) is now a central part of the research landscape, actively shaping how studies are written, reviewed and shared. For research institutions, this evolution opens up exciting opportunities to boost efficiency, advance discovery, and offer researchers powerful new forms of support. It also introduces important areas for consideration, including policy, ethics, integrity and training. With thoughtful planning, institutions can make the most of AI’s strengths, reinforce trust in science, and empower researchers to thrive.

If you’re shaping AI strategies for your institution, you’re facing both an enormous opportunity and a fast-moving set of challenges. That’s why global experts from academia, publishing and research policy came together at Springer Nature’s Korea Research Advisory Forum (KRAF) 2025, to share practical insights on how AI can be integrated into scholarly communication responsibly and effectively. This blog turns those insights into clear guidance for research institutions, helping you confidently shape AI strategies that support your researchers, strengthen trust and prepare your organisation for the future.

What AI means for institutional integrity

We all know AI is speeding up research workflows, from literature searches to drafting manuscripts, and that speed isn’t just a challenge; it’s a major opportunity. Opening the discussion at KRAF 2025, Nick Campbell, Springer Nature’s Vice President for Academic Affairs, reminded everyone that AI isn’t new to publishing, though we’ve never seen it at this scale or level of sophistication before. “We’ve been thinking about AI since the 1960s,” he noted, pointing out that the company’s first experiment, a book generated by an early summarisation tool in 2019, predates today’s large language models. Now, those models can provide broader support, for example by helping researchers summarise literature, structure arguments, or polish text, although it’s important to note that responsibility and final control must remain entirely with the author.

With AI advancing so quickly and its benefits and risks becoming more apparent, the rise in AI-generated submissions is prompting a wave of new editorial tools, policies and detection methods. Rather than overwhelming editors, this shift can empower them with better systems for identifying low-quality or fabricated work, ultimately raising the bar for what gets published. Campbell emphasised that the goal isn’t to limit AI’s potential, but to use it responsibly. Springer Nature’s stance focuses on detecting misuse, reinforcing trust, and ensuring accountability throughout the research process. “AI is an assistant; humans make the decisions,” he said, describing a future in which AI continues to enhance human expertise without replacing it, and in which the industry can build more rigorous and efficient publishing workflows than ever before.

But what does this mean for institutions? It’s a chance for institutions to lead. As highlighted in our Perspectives on AI in scholarly communications report, providing researchers with clear, practical guidance on when and how to use AI responsibly helps strengthen research integrity, build confidence across your organisation, and support better collaboration with publishers. By setting thoughtful AI policies now, institutions can position themselves as proactive leaders shaping a more efficient, collaborative and trusted research future.

How editors are responding and what institutions can learn

Editors from Korean society journals have observed a noticeable rise in AI‑supported submissions, reflecting how quickly generative tools are being adopted across the research community. Many of these papers draw on readily available open datasets, creating new challenges for editorial teams as they assess quality and rigour. Institutions can help ease this pressure by updating internal review processes and giving researchers the training they need. It’s also worth exploring publisher tools, like AI‑powered integrity checks, to catch problems early and prevent poor‑quality submissions from slipping through the net.

The evolving needs of researchers in a changing landscape

Laura Schmid, Editor at Nature Communications, shared some revealing survey results that highlight just how quickly researcher behaviour is shifting. Almost every researcher who has experimented with AI reports finding it genuinely helpful, whether for speeding up literature searches, supporting coding tasks, or helping with early-stage writing and structuring ideas. And even those who haven’t taken the leap yet recognise its growing relevance; many expect AI to become a routine part of research workflows in the near future. Enthusiasm for AI is accompanied by a growing awareness of key considerations, including bias in training data that can become embedded in models and reflected in their outputs, as well as concerns around data quality, accuracy, and environmental impact. These insights point to a research community that approaches AI with curiosity and optimism, embracing its potential while maintaining a strong commitment to responsible use.

“Researchers find AI useful, but integrity and sustainability must lead.” - Laura Schmid, Editor at Nature Communications

Experts at KRAF highlighted that effective and responsible AI use is strengthened by context. As Prof. JK Seong of Seoul National University explained, biological and environmental variability adds important nuance to scientific findings, and when AI tools are designed to reflect this complexity, they can deepen insights and support strong reproducibility.

These perspectives show that as researchers become more aware of both the benefits and the complexities of AI, from embedded bias to the need for contextualised models, institutions have a key role in helping them use these tools responsibly and in ways that strengthen scientific integrity and open new avenues of discovery.

Envisioning interactive and open research

The forum concluded by highlighting Springer Nature’s vision for more open and interactive research, including the possibility for readers to dynamically interrogate research data, such as filtering results by specific conditions.

“AI can help us move toward more open, reproducible, and interactive research, but integrity must lead the way.” - Nick Campbell, Vice President, Academic Affairs at Springer Nature

Institutions have a meaningful opportunity to shape the future of open science by adopting publisher tools that bring transparency into everyday workflows. This commitment helps strengthen trust and keeps research advancing with clarity and momentum.

The bigger picture of building trust in a digital age

Researchers, publishers and institutions all share a commitment to maintaining strong confidence in scholarly communication, which calls for embedding human‑centred values such as fairness, accountability and transparency throughout both the research process and the implementation of AI. When institutions champion these principles, they strengthen research integrity and open the door to new possibilities for discovery, collaboration and innovation.

As institutions shape their AI strategies, they have a meaningful opportunity to guide the research community toward responsible and confident adoption. Springer Nature’s AI & Integrity hubs offer practical frameworks, tools and shared insights to support this work. The Link gives institutions access to peer insights that help refine and improve research workflows.

To support this evolution, institutions may wish to focus on the following areas:

  • Develop clear, institution-wide AI policies covering authorship, disclosure, peer review, and data governance.
  • Provide practical training and workshops to build researchers’ confidence in responsible AI use, transparent reporting, and critical evaluation.
  • Equip researchers with guidance on ethical standards, including quality, bias mitigation, reproducibility, and best practices for AI-assisted research.
  • Promote transparency by encouraging detailed reporting and clear disclosure of AI-assisted writing, analysis, and workflow steps.
  • Collaborate closely with publishers to stay informed about emerging integrity risks, expectations, and evolving AI policies.
  • Use the outputs from publisher screening and integrity checks, such as plagiarism reports or data‑availability assessments, to help strengthen internal review processes.
  • Champion context-aware, scientifically grounded AI tools that support robust research design and reflect real-world complexity.
  • Advance open science practices by embedding transparency and reproducibility into everyday research workflows.

For institutions looking to evolve their AI thinking with confidence, The Link and the AI & Integrity hubs provide supportive places to gather insights, hear from peers and deepen understanding at your own pace.

About the Korea Research Advisory Forum (KRAF)

Meeting the evolving expectations of the research community requires ongoing collaboration with institutional leaders. Following the launch of the US Research Advisory Council in 2021, the collaborative model expanded in 2024 with the Korea Research Advisory Forum, uniting experts from across academia to share perspectives and shape future priorities. The forum members are listed below (in alphabetical order):

  • Changmo Sung, Director, Mission PM Center, Korea ARPA-H
  • Chulhong Kim, Professor, Pohang University of Science and Technology
  • Heisook Lee, President, GISTeR
  • Je Kyung Seong, Professor, Seoul National University
  • Jooyoung Park, Associate Professor, Seoul National University
  • Mijin Yun, Professor, Yonsei University College of Medicine
  • Sang Yup Lee, Professor, Korea Advanced Institute of Science and Technology
  • Sun Huh, Professor, Hallym University 
  • William Jo, Professor, Ewha Womans University
  • Woojung Jang, CEO, AI Star
  • Wooyoung Shim, Professor, Yonsei University



Author: Soon Kim

Soon Kim is a Senior Strategic Partnership Manager at the Nature Portfolio, where she leads strategic collaborations that strengthen the scientific community. In this role, she leverages her publishing expertise to build meaningful partnerships with academic and corporate stakeholders and support the advancement of research across Korea.