Chatbots: Why you can’t trust AI

The article "Chatbots: Why you can't trust AI" first appeared at the online magazine Basic Thinking. You can start each morning off right with our newsletter update.


The field of artificial intelligence has advanced at enormous speed in recent years, and AI chatbots are becoming increasingly popular. But can you really trust these bots?

Whether in the browser, via app, or even in the messenger WhatsApp: contact with AI chatbots is now possible almost anywhere. Meta, the Facebook parent company, has integrated an AI into its apps with Meta AI. In its search, Google, among others, relies on the new "AI Overview" feature, which provides AI-based summaries of search results.

Artificial intelligence is now used in almost all areas of everyday life. A survey from May 2025 also showed that trust in AI-based search tools rose sharply. In 2024, only 58 percent of respondents trusted AI chatbots, and just 47 percent trusted AI search engines.

Just a year later, the figure in both categories stood at 79 percent. AI results thus still trail conventional search engines such as Google and Bing, which reached 90 percent trust in 2025. Nevertheless, there is a clear upward trend in trust in AI-based search tools.

Can you trust the results of AI chatbots?

However, this trust in supposedly omniscient AI tools can also have negative effects, because AI chatbots can be influenced in various ways and can, among other things, output distorted content.

The results of AI tools can be skewed in several ways. One of the best-known problems in this context is bias: systematic distortion in AI systems.

This kind of bias in AI systems can arise, among other things, from prejudiced training data. Incorrect or incomplete data sets used during training can also lead to distorted results.
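How skewed training data propagates into a model's output can be illustrated with a minimal sketch. The data set, group names, and the naive frequency model below are purely hypothetical; real systems are far more complex, but the mechanism is the same: the model reproduces whatever imbalance its training data contains.

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: examples from
# "group_A" are almost always labeled "approve", examples from
# "group_B" almost always "deny". The labels encode a prejudice,
# not a real signal.
training_data = [("group_A", "approve")] * 90 + [("group_B", "deny")] * 10

def naive_model(group):
    # Predict the most common label seen for this group during
    # training -- a caricature of what statistical learning does.
    labels = [label for g, label in training_data if g == group]
    return Counter(labels).most_common(1)[0][0]

print(naive_model("group_A"))  # approve
print(naive_model("group_B"))  # deny -- the model mirrors the skew in its data
```

No algorithm here is "wrong" in a technical sense; the distortion comes entirely from the data the model was trained on, which is exactly why biased or incomplete training sets are so problematic.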


In addition, phenomena such as the Clever Hans effect can occur, in which AI models deliver plausible-looking results without actually "understanding" them. The way AI tools process text as so-called tokens can also mean they fail at supposedly simple tasks, such as counting the letter "r" in the word "strawberry".
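Why tokens make letter-counting hard can be sketched briefly. A language model does not see a word character by character; a subword tokenizer first splits it into chunks. The split shown below is purely illustrative, not the output of any real tokenizer:

```python
# Hypothetical subword split -- real tokenizers (e.g. BPE) produce
# splits like this, though the exact chunks vary by model.
hypothetical_tokens = ["str", "aw", "berry"]

# The model reasons over token IDs, so a per-letter fact such as
# "how many r's are in this word?" is not directly visible to it.
word = "".join(hypothetical_tokens)

# Plain string processing, by contrast, sees every character and
# gets the answer trivially right.
print(word.count("r"))  # 3
```

The task is trivial for a program that operates on characters, but a model whose input is a sequence of opaque token IDs has to infer the letter composition indirectly, which is where such errors come from.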

Study finds distortions

But can AI chatbots actually mislead people with their answers? The BBC examined this question using its own news content.

The British Broadcasting Corporation, the United Kingdom's public service broadcaster, came to the conclusion that the AI assistants reproduced "considerable inaccuracies and distorted content from the BBC".

The BBC tested four publicly accessible AI assistants: ChatGPT from OpenAI, Copilot from Microsoft, Gemini from Google, and Perplexity. They received unrestricted access to the BBC website for the duration of the study.

The AI answers were checked by BBC journalists, all of whom were experts on the topics in question, against criteria such as accuracy, impartiality, and the representation of BBC content.

The evaluation showed that more than half of the answers (51 percent) to questions about the news were rated as significantly problematic in some form. In addition, 19 percent of the answers that cited BBC content contained factual errors, such as incorrect statements, numbers, or dates.

Direct quotes from BBC articles also contained errors in 13 percent of cases: they had either been altered from the original source or did not appear at all in the cited article.

Also interesting:

  • Artificial intelligence: Why AI scares many people
  • Study: How artificial intelligence can become more sustainable
  • Connect your ChatGPT account to WhatsApp: here's how
  • The best AI models in terms of data protection, and what you save




As a Tech Industry expert, I understand the potential benefits of chatbots in terms of efficiency and convenience. However, it is important to recognize the limitations and potential risks associated with relying on AI for customer interactions.

One of the main reasons why you can’t always trust AI, particularly in the form of chatbots, is the lack of human empathy and understanding. While AI can be programmed to follow certain rules and guidelines, it lacks the ability to truly understand complex emotions and nuances in communication. This can lead to misunderstandings, frustration, and ultimately, a negative customer experience.

Additionally, AI is only as good as the data it is trained on. If the data is biased, incomplete, or inaccurate, the chatbot’s responses may be unreliable and potentially harmful. There have been numerous instances where AI chatbots have made inappropriate or offensive comments, highlighting the importance of human oversight and intervention.

Furthermore, AI technology is constantly evolving and improving, which means that chatbots may not always provide the most up-to-date or accurate information. This can be particularly problematic in industries where accuracy and timeliness are crucial, such as healthcare or finance.

In conclusion, while chatbots can be a valuable tool for businesses, it is important to recognize their limitations. Human oversight and intervention are essential to ensure that AI is used responsibly and ethically. Blind trust in AI can have negative consequences, so weigh the potential risks before fully relying on chatbots for customer interactions.
