Blindly trusting artificial intelligence is a mistake. A new study proves this by showing how often AI chatbots provide incorrect information when it comes to news.
AI models can be a useful tool in many areas. What they evidently cannot do is process news objectively and correctly. A current study shows not only that almost all well-known models play fast and loose with the facts, but also that fake news is appearing at an inflationary rate. Despite regular updates from the developers, accuracy and reliability continue to decline.
AI chatbots in the fake news ranking
On average, 35 percent of the answers from the examined AIs contained misinformation. A year earlier, that figure was 18 percent, so the rate has almost doubled. Why many AI models are becoming increasingly unreliable remains a matter of speculation. Suspected causes include not only technical weaknesses but also targeted propaganda campaigns.
Other areas look better. While the chatbots still refused to answer almost a third of inquiries in 2024, that figure is now 0 percent. Incorrect user inputs were corrected in 65 percent of cases (in 2024 it was 51 percent). These developments show that the AIs have become less cautious, but that is exactly what leads to problems and hallucinated fake news.
The data comes from a report by Newsguard, a company specializing in the evaluation of news sources. In the following ranking, we show which AI chatbots most commonly spread false information according to the study.
10th place: Claude
In this case, last place is the best. Claude from Anthropic is the safest chatbot among the models examined in the study. With an error rate of only 10 percent, it is far more reliable than most competitors. Nevertheless, it must be said that not a single AI is a truly reliable news source.
As a tech industry expert, I am deeply concerned about the negative impact that AI chatbots spreading fake news can have on society. These chatbots can disseminate misinformation on a massive scale, potentially influencing public opinion, shaping political discourse, and even inciting conflict.
It is crucial for tech companies to take responsibility for the content that their AI chatbots are spreading and put in place robust measures to detect and prevent the dissemination of fake news. This includes implementing strict guidelines for content moderation, employing advanced algorithms to identify and flag fake news, and collaborating with fact-checking organizations to verify the accuracy of information.
Furthermore, it is essential for users to be vigilant and discerning when interacting with AI chatbots, questioning the sources of information and verifying the accuracy of the content they receive. Education and media literacy are also key in combating the spread of fake news and ensuring that individuals are equipped to critically evaluate the information they consume online.
Ultimately, the tech industry must prioritize ethical considerations and the integrity of information shared through AI chatbots to prevent the harmful consequences of fake news for society. Failure to do so could have far-reaching implications for democracy, public discourse, and trust in technology.