Researcher uses a trick to drive out hallucinations in AI

Hallucinations are not uncommon in AI models. But a researcher has now found an approach to wean large language models off exactly this behavior. It could tie AI much more closely to verifiable facts and thus produce more reliable answers.

Hallucinations are a well-known problem when using AI models. The phenomenon refers to cases in which language models invent information or misrepresent facts.

So they give wrong answers that, delivered with great self-confidence, are formulated in an absolutely convincing way. This happens because AI models have no real knowledge; they simply calculate probabilities and compose their answers from them.
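To make that concrete, here is a purely illustrative Python sketch, not taken from the article, with a made-up vocabulary and hypothetical scores: a language model picks each next word by turning scores into probabilities and sampling, and whether the resulting statement is true never enters the calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and hypothetical raw scores (logits) for the next token.
# In a real language model these scores come from a neural network.
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]
logits = np.array([3.2, 1.1, 0.4, 0.2])

# Softmax: turn the scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The model "answers" by sampling from that distribution. Nothing in
# this step checks whether the sampled answer is factually correct.
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```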

This becomes particularly problematic when such invented content is adopted unchecked in sensitive areas such as medicine, law or news. Such errors can spread misinformation and undermine trust in AI-powered systems.

A University of Arizona researcher has now developed a way around exactly this problem. His technique helps AI systems recognize when their predictions may be unreliable.

Can this approach drive out AI hallucinations?

The reason AI models hallucinate is usually not even a lack of knowledge. Researchers at the Technion (Israel Institute of Technology) determined that the systems often encode the correct answer internally but still output an incorrect one.

Peter Behroozi, associate professor at the University of Arizona's Steward Observatory, has now found a solution. He developed a method that adapts so-called ray tracing, a rendering technique from computer graphics used, for example, to create realistic lighting in animated films.

With the help of ray tracing, Behroozi was able to explore the complex mathematical spaces in which AI models operate. “Current AI models suffer from incorrect but confident results,” explains the astronomer. “There are many examples of neural networks ‘hallucinating’ or inventing non-existent facts, research papers and books to support their false conclusions.”

This leads to real human suffering, says Behroozi, citing examples such as incorrect medical diagnoses, wrongly rejected rental applications and failed facial recognition.

Byproduct of research into galaxy formation

Behroozi actually researches the formation of galaxies. The discovery that helps minimize AI hallucinations was sparked by a computational physics homework assignment that a student brought to his office hours.

That is how the researcher came to think of ray tracing. “Instead of doing this in three dimensions, I figured out how to do it in a billion dimensions,” Behroozi explains.

He relies on Bayesian sampling for his newly developed method. This is a statistical procedure that continually updates probabilities based on new data.

Instead of relying on a single model’s prediction, Bayesian sampling trains thousands of different models on the same data using a special mathematical approach that allows them to explore the variety of possible answers.
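The article does not include Behroozi's code, so the snippet below is only a minimal stand-in for the idea: instead of true Bayesian sampling of the parameter space, it trains a small ensemble of scikit-learn neural networks on the same data with different random seeds, then compares their answers at a familiar input and at one far outside the training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Toy training data: the models only ever see inputs between 0 and 3.
X_train = rng.uniform(0, 3, size=(200, 1))
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, 200)

# Rough proxy for Bayesian sampling: many models, same data,
# different random initializations.
models = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                 random_state=seed).fit(X_train, y_train)
    for seed in range(20)
]

# Consult the whole "range of experts" at a familiar point (1.5)
# and at one far outside the training range (8.0).
X_query = np.array([[1.5], [8.0]])
preds = np.stack([m.predict(X_query) for m in models])

for x, mean, spread in zip(X_query.ravel(), preds.mean(0), preds.std(0)):
    print(f"x={x:.1f}  mean prediction={mean:+.2f}  disagreement={spread:.2f}")
```

On the familiar input the twenty models agree closely; on the unfamiliar one their answers fan out, which is exactly the warning signal Behroozi describes next.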

According to Behroozi, it is as if not a single expert were consulted but “the entire range of experts”. On topics these experts are not familiar with, you get a wide spread of answers, from which it follows that “one should not trust the results”.

Behroozi’s method would allow these systems to recognize when they are uncertain. In essence, it gives them the ability to know when they don’t know something.
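Continuing the sketch above, that self-awareness can be modeled as a simple decision rule, answering only when the “experts” agree. The threshold is a hypothetical value for illustration, not Behroozi's published criterion.

```python
import numpy as np

def answer_or_abstain(predictions, max_spread=0.15):
    """Return the ensemble's answer, or None ("I don't know")
    if the per-model predictions disagree too much.
    max_spread is a hypothetical tolerance; tune it per application."""
    predictions = np.asarray(predictions, dtype=float)
    if predictions.std() > max_spread:
        return None  # flag the answer as untrustworthy
    return float(predictions.mean())

print(answer_or_abstain([0.99, 1.01, 1.00, 0.98]))  # agreement -> an answer
print(answer_or_abstain([0.2, 1.6, -0.7, 0.9]))     # disagreement -> None
```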
