
According to a recent study by the European Broadcasting Union, 45 percent of AI responses to news questions contain at least one significant error: information is often outdated, or the answers contain other inaccuracies. More and more media organizations and publishers are therefore demanding stricter regulation and quality-control mechanisms. What falls by the wayside: education.
Bad source: AI can’t do news
- Previous analyses suggest that 30 to 40 percent of AI answers contain serious errors. These include gross factual mistakes, incorrect or missing sources, and missing context that renders answers incomprehensible. A new study by the European Broadcasting Union (EBU) even concludes that four out of five AI responses to news topics contain inaccuracies, albeit minor ones.
- Twenty-two public media organizations from 18 countries took part in the EBU study, examining over 3,000 AI responses from ChatGPT, Copilot, Gemini and Perplexity. The key criteria: accuracy, sourcing, the distinction between opinion and fact, and the provision of context. From Germany, ARD and ZDF participated, under the auspices of the BBC.
- The key results of the analysis: regardless of language or region, 45 percent of all AI responses to news content had at least one significant deficiency. 31 percent contained misleading, incorrect or missing sources. 20 percent contained demonstrably false statements of fact, among them fabricated quotes and outdated information.
AI reproduces errors
Chatbots give the impression that AI can write, talk and research. But it does not understand its own answers: artificial intelligence merely imitates and feigns knowledge. What sounds convincingly plausible is in reality statistics, built on patterns and probabilities.
The problem: AI therefore not only reproduces errors, it can also be manipulated or steered. With the same apparent elegance with which chatbots present facts, they produce false information. Supposed truths emerge from probabilities.
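To make that mechanism concrete, here is a minimal sketch in Python. It is a toy example, not how any real chatbot is implemented: the prompt, the candidate words and their probabilities are all invented for illustration. It shows the core principle the paragraph describes: a language model picks its next word by sampling from a learned probability distribution, so a continuation wins because it is likely, not because it is true.

```python
import random

# Hypothetical next-word distribution for a toy prompt.
# The values are made up; in a real model they come from
# patterns in the training data, not from fact-checking.
next_word_probs = {
    "2021": 0.50,  # most frequent in training text, but possibly outdated
    "2023": 0.30,
    "2025": 0.15,  # the factually correct answer may be among the least likely
    "1999": 0.05,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Draw one continuation, weighted by probability - no verification step."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The most recent study on AI and news was published in"
print(prompt, sample_next_word(next_word_probs))
```

Whichever year the sketch prints, the resulting sentence reads equally fluent and confident. That is exactly how outdated dates or fabricated details can slip into AI-generated news answers.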
To ensure that reality does not fall by the wayside, large corporations must, on the one hand, be held accountable. On the other hand, one thing is needed above all: education. Because the biggest risk is not necessarily AI itself, but the human convenience of not questioning it.
Voices
- Peter Archer, BBC Programme Director for Generative AI, said in a statement: “We’re excited about AI. But people need to be able to trust what they read, see and watch. Despite some improvements, it’s clear that there are still significant problems. We want these tools to be successful and are open to working with AI companies to bring value to society at large.”
- Katja Wildermuth, director general of Bavarian Radio, sees a great danger in AI regardless of the study results: “Such AI-generated summaries are convenient – and at the same time very dangerous. If AI systems determine what is visible, what we consider important and right, then it is about power in the information space. Politics gives the few large corporations too much unregulated leeway.”
- ZDF director Norbert Himmler explained: “The study also proves the importance of public information offerings. There, people can find reliable information and journalistic classification that AI tools alone cannot provide. The study also underlines the need to continuously check the quality of AI-generated content.”
AI news integrity toolkit
Regardless of the error rate, an inconvenient truth remains: AI chatbots don’t understand what they say, they only simulate answers. The pitfall: news often deals with the unexpected or the unknown, which cannot be computed from past patterns.
A race for interpretive authority over AI has long since broken out: politicians are calling for control, media companies are insisting on responsibility, and the tech companies are demanding freedom to innovate, with society as spectator and guinea pig.
The EBU has therefore developed a News Integrity in AI Assistants Toolkit to educate users and to help companies improve their AI models. However, this can only be a beginning.
Also interesting:
- Vandalism against charging stations – an ideologically motivated act
- New e-car funding – just not for everyone
- Chat control: Germany torpedoes EU plans
- Controversial AI mode: Google becomes a chatbot