Research Skills Blog - IFIS

Trust in Science in the Age of AI: Reflections from the Frankfurt Book Fair

Written by Katy Askew | 04-Feb-2026 09:00:00

Artificial intelligence and trust are among the most talked-about issues shaping the information and publishing industries today, and at their intersection lie some of the sector's most pressing challenges.

A discussion at the 2025 Frankfurt Book Fair Innovation Stage brought these issues into sharp focus through a conversation between Joyce Lorigan, Group Head of Corporate Affairs at Springer Nature, and Daniel Lingenhöhl, Editor in Chief of Spektrum der Wissenschaft. Together, they examined how misinformation, the rise of AI-generated content, and declining scientific literacy are eroding public confidence — and what publishers, educators, and researchers can do to rebuild it.

A World Struggling to Tell What’s True

Lorigan highlighted a striking global reality: “Two-thirds of people globally struggle to tell what is trustworthy.” In the United States, six in ten consumers believe scientific information is influenced by governments or corporations, she continued.

Social media and AI have accelerated this uncertainty, making it increasingly difficult to discern what can be trusted. Lingenhöhl observed that while misinformation is not a new phenomenon, “what’s new is the volume.” The reach of social media and generative AI has created a flood of falsehoods, overwhelming the public’s ability to critically evaluate content. He cited research showing that “80% of Germans don’t care if social media posts are true,” and described how even established outlets such as Spektrum der Wissenschaft have been imitated by malicious actors. 

This expanding ecosystem of manipulation underscores the need for credible, verifiable sources of scientific information. 

Education as the Antidote 

Both speakers emphasised that education and scientific literacy are central to restoring trust. 
Lingenhöhl noted, “We need to teach young people what they can trust, where they can get real facts, trusted science, trusted information.” Yet, he added, scientific literacy remains underdeveloped, leading to confusion even on topics where “the science is settled,” such as climate change or vaccination. Misunderstandings persist because disagreements on specifics are mistaken for uncertainty in the broader consensus.

The same dynamic applies to AI, now used by billions yet poorly understood. Bias, hallucination, and overconfidence in algorithmic outputs all contribute to an information landscape that demands new forms of literacy. 

Supporting that literacy is an important part of IFIS’s educational mission. Through resources such as the AI in Academic Research chapter of our Literature Searching guide, IFIS helps students and early-career researchers develop informed, responsible approaches to using AI in academic work, balancing curiosity and convenience with critical thinking. 

For a quick overview of when and how to use AI when researching, take a look at our article, AI as Your Ally: Ethical and Effective Ways to Use AI in Academic Research and Writing.

Safeguarding the Integrity of Research 

Misinformation also infiltrates the research ecosystem itself. Lorigan warned that “for publishers, our core job is making sure research can be trusted. We have really bad actors in the system that put fake research out there… It’s getting more difficult to spot and we need to continue to invest in this.” 

The rise of predatory publishing has compounded this challenge. IFIS maintains a clear position on this issue, refusing to index deceptive or non-peer-reviewed sources. FSTA® — IFIS’s flagship database — curates only verified, trustworthy research, offering an island of trust amid the internet’s sea of misinformation.

By maintaining rigorous editorial standards, IFIS helps ensure that researchers, educators, and industry innovators can rely on the scientific record. 

AI as a Force for Accessibility and Quality

Despite the risks, AI also offers opportunities to improve scientific communication and workflow integrity. Lorigan noted that “the research itself is very difficult to understand… This is where AI can be a positive.” Springer Nature, for instance, is exploring the potential of AI-generated research summaries. AI-driven tools can help make complex research more accessible, support peer review, and strengthen fact-checking, she suggested.

Used ethically and transparently, AI has the potential to enhance quality rather than erode it, a balance that aligns with IFIS’s commitment to combining human expertise with responsible technological innovation.

Rebuilding Trust Through Integrity and Literacy 

The discussion at Frankfurt Book Fair, an important annual event in the publishing industry’s calendar, underscored a fundamental truth: trust in science must be earned and maintained. It relies on rigorous editorial practices, transparency in communication, and investment in education at every level. As misinformation proliferates, organisations like IFIS — independent, publisher-neutral, and committed to research integrity — play a vital role in maintaining the credibility of scientific knowledge.

Through trusted curation, educational support, and clear ethical standards, IFIS continues to reinforce the foundations of reliable science in an uncertain digital age.

 

Photo by Igor Omilaev on Unsplash