Source chaos: ChatGPT cites fake news from Grokipedia

The article Source chaos: ChatGPT cites fake news from Grokipedia first appeared in the online magazine BASIC thinking.


According to an investigation by the Guardian, ChatGPT quotes and paraphrases Elon Musk’s controversial AI encyclopedia Grokipedia. The supposed Wikipedia alternative has repeatedly been shown to spread false and manipulative content on sensitive topics such as the Holocaust, political conflicts, and homosexuality. A commentary.

ChatGPT quotes Grokipedia

  • Elon Musk has long claimed that Wikipedia is not objective and follows a left-wing political orientation. With Grokipedia he wants to establish an alternative. The reality: studies have found both left-leaning and right-leaning content on Wikipedia. Both result from the encyclopedia’s open principle, which lets the community create content but also monitor it.
  • Grokipedia follows no such open principle. Musk’s encyclopedia has been criticized for uncritically presenting right-wing populist narratives as fact. Unlike Wikipedia, Grokipedia has no community that can create or review content. All entries come from an AI, i.e. a pre-programmed algorithm, and that algorithm is decidedly one-sided.
  • Tests by the Guardian have shown that ChatGPT has recently started citing Grokipedia as a source. Among them: questions about political structures in Iran, same-sex marriage, and Holocaust deniers. Across more than a dozen such questions, ChatGPT drew on Grokipedia nine times. The problem: it has already been proven several times that the platform reproduces content that is opinionated, misleading, or has long been refuted.

ChatGPT is not a reliable source of information

The Guardian’s research is another prime example of why ChatGPT is not a reliable source of information. When it comes to the “quality” of the information, citing Grokipedia is a bit like copying from the worst student in the class, the one who steals his classmates’ recess money after the bell.


However, the problem goes beyond Grokipedia, because other AI chatbots are not immune either. The reason lies in how the algorithms work: they make decisions based on patterns and probabilities. Specifically, this means that ChatGPT and the like can even be “outwitted” and deliberately seeded with false information simply by repeating the falsehood often enough.

A sad example: according to the Guardian, many chatbots are already spreading Russian disinformation, including the claim that the US is developing biological weapons in Ukraine. A completely different topic, but a good example of why sentiment often outweighs facts:

An editor of a right-wing populist German news magazine recently wrote in a post on X (formerly Twitter): “Six days of #power failure also means six days without e-mobility. The wrong horses are standing still.” That gas pumps for combustion engines don’t work without electricity either? Conveniently ignored.

Whether intentional or not, the problem is that more and more people are abandoning critical thinking. AI is already making a negative contribution to this, especially because once nonsense is spread, it sticks with many people. Subsequent corrections fly under the radar while the nonsense spreads like wildfire.

Voices

  • An OpenAI spokesperson told the Guardian that the model’s web search “aims to draw from a variety of publicly available sources and viewpoints.” He added: “We apply security filters to reduce the risk of showing links with high potential for harm. ChatGPT uses citations to clearly show which sources an answer comes from.”
  • Disinformation researcher Nina Jankowicz conceded that it was probably not Elon Musk’s intention to influence ChatGPT. Still: the Grokipedia entries she and her colleagues examined relied “at best on unreliable sources, at worst on poorly researched and intentional misinformation.” “Most people won’t do the work necessary to find out where the truth actually lies.”
  • The Guardian also asked xAI, the company behind Grokipedia, for a statement on the AI encyclopedia’s proven false information, giving the company the opportunity to comment on the allegations. The answer, as trite as it is self-revealing: “Traditional media lies.”

AI models can be influenced

Experts view the development critically. Many have long warned about so-called LLM grooming, in which large amounts of misleading content are deliberately placed online to influence AI models.

What is particularly problematic: sources like Grokipedia can gain additional credibility as a result. And once misinformation has been fed in, it is almost impossible to stop.

Specifically, this means: Anyone who blindly trusts ChatGPT & Co. is surfing a wave of half-truths that will drown out subsequent corrections.

Developers, media, and politicians alike therefore share responsibility: not just to regulate, but to promote media literacy. Otherwise, so-called artificial intelligence will produce more and more real stupidity.



