In an era of rapid technological change and information overload, trust in science and media is facing unprecedented challenges. AI brings powerful tools for discovery and communication, but it can also be used by some for distortion. Responsible communication means preserving context, verifying claims before amplification, and being transparent about limitations. It is a shared responsibility across publishers, journalists, and readers.
At this year’s Frankfurt Book Fair, Joyce Lorigan, Group Head of Corporate Affairs at Springer Nature, and Daniel Lingenhoehl, Editor-in-Chief at Spektrum der Wissenschaft, shared their perspectives on combating false narratives and maintaining public trust in science communication. Below is a summary of the key points from the session.
Thank you both for joining us today. To begin, can you tell us a bit about how the rise of social media and the use of AI tools have affected the spread of misinformation over the past few years?
Joyce: Perhaps some context first: nearly two-thirds of people worldwide (63%) say they struggle to distinguish trustworthy media from deceptive sources (Edelman Trust Barometer 2025). I also heard earlier this week at the Book Fair that in the US, 60% feel science is influenced by government, corporations or the self-interest of scientists. Trust in the information people are reading is under immense pressure. Deepfakes and misinformation can now circulate at lightning speed, making it harder for people to verify facts. The sheer volume of content and the sophistication of AI-driven tools mean misinformation can look extremely convincing and spread widely before it is challenged.
Daniel: Fake news isn’t new, but as Joyce pointed out, the scale and speed are. Social media combined with AI has made it much easier to spread misinformation. While 60% of Germans think they can spot fake news, 80% don’t check whether posts are true. We’ve even seen very sophisticated attacks, such as the “Doppelgänger” campaign, in which entire websites, including those of Spektrum and Spiegel, were faked to spread misinformation. Luckily, it was taken down and its reach was negligible, but it is concerning nonetheless.
This is a fast-changing landscape, as you say. So, from your experience, what can media outlets and publishers do to counter misinformation?
Joyce: At Springer Nature, we invest significantly in talented people and sophisticated AI tools to detect and stop integrity issues in research. Since 2021, we have invested over 650 million euros in various technologies, including those focused on research integrity, and with the help of tools like these we can detect problematic submissions more quickly and accurately. We are committed to keeping the number of integrity breaches as low as possible. This is made more difficult by fraudulent actors in the system trying to manipulate research for their own gain. They too have advanced AI tools, so we always have to stay one step ahead. This means constant investment, and also sharing our learnings, where we are able, with our publishing peers, for example via our collaboration with the STM Integrity Hub, to address these challenges together. Most recently, Springer Nature donated a unique AI tool that identifies problematic text to the publishing community through this collaboration. Beyond safeguarding integrity, we work to make science accessible. We organise policy events such as our event series Science on the Spree, host regular media briefings where researchers can explain findings to policymakers, the media and the general public, and work collaboratively with organisations such as the Science Media Centre. These efforts help ensure responsible science communication through clear, transparent engagement.
Daniel: Media education must start in schools. As media, we need to be where readers are, build trust, and explain the facts repeatedly: how science works, where uncertainties lie, and why questioning is part of progress. Many people don’t know that scientific debate about specific aspects of climate change or vaccines does not mean that the underlying science is wrong. Similarly, we have to demystify AI: how it works, its benefits, biases, and risks. Enlightened people are harder to mislead, so constant engagement and transparency are key.
You touched on involving the community, which raises the point of responsible communication and the role you play in ensuring accuracy. How do you each address this, and how do you engage with your communities to support responsible communication?
Joyce: As a publisher, we are the digital custodian of scientific research dating back to the 17th century. We hold a huge library that is constantly updated to reflect the most recent editorial comments. This is a huge responsibility that we don’t take lightly. To keep pace with the continuous growth in articles and to maintain high quality in publishing, AI offers a huge opportunity: to help editors find peer reviewers, streamline workflows and make research more discoverable, for example. All of this is done with a human at the helm. Our colleagues are highly skilled, highly educated and passionate, purpose-driven individuals, and we are lucky to have them. They are united in the desire to accelerate discovery and help find solutions to the world’s biggest problems. They are also deeply connected to their communities, and this extends to the online world. We have, for example, over 40 discipline-focused Research Communities that provide a platform for researchers and research-interested communities around the world to connect, generate discussion and explore the research findings that matter to them. Circling back to how AI helps us in our work: we continue to invest in a number of AI tools that help advance discovery and protect the integrity of and trust in research, underscoring our commitment to rigour and excellence. We are also about to pilot AI-driven summaries that can be added to the top of a piece of research. These are AI-generated but signed off by the author, and they are designed to be easily understood so that the research is more accessible to others.
Daniel: Science journalism continues to be highly trusted, in part because of its close-knit, expert-driven community and the quality of its journalists. One of Spektrum’s unique values is scientists writing about what they know best, which reinforces credibility. Most of our editors have a degree in the natural sciences and a profound knowledge of the subjects they report on, and many Nobel laureates have written for us. On top of that, both Spektrum and Springer Nature are active on social media, a vital tool for engagement and community-building, but one that also requires careful oversight. To build trust among readers, Spektrum regularly invites subscribers to visit us, so that people can talk to editors directly. We go to events and talk about science, the media and society. And we promote our people with specific expertise, for example on climate change, AI or health issues, as trusted sources. We are not an anonymous mass, but real people with profound knowledge. At the end of the day, however, we are people and mistakes happen. To build trust, we not only correct these but make the process transparent. At Spektrum we do this through remarks beneath an affected article. Because of the great community we have, readers detect mistakes or misleading interpretations very quickly, which allows us to react quickly too.
Thank you both for your insights - any final thoughts or comments?
Joyce: Transparency is key in science communication in the age of AI across publishers and the media. In today’s fast-moving digital environment, the relationship between science and journalism is more crucial than ever. Essentially, it’s on all of us to collaborate as publishers, journalists, scientists, and readers to maintain a healthy and trustworthy information ecosystem.
Daniel: I couldn’t agree more. I have great faith and hope in the active role of readers and communities. Challenge your own reading habits and thought processes: check trusted sources before commenting, engaging or sharing. Challenge misinformation when you see it, and always be careful with viral content.
Ensuring this level of transparency, especially when AI tools are involved, remains a central challenge. These important topics of Trust and Science Integrity in the Age of AI were further discussed at Falling Walls by Alice Henchley, Director of Communications, Integrity, Ethics and Editorial Policy at Springer Nature, and Chris Graf, Director of Research Integrity at Springer Nature. Find out more about these panels here: AI Age & Trust: Ethics and Perspectives for Scicomm and Science Integrity in the Age of Artificial Intelligence.