Psychological tricks also work on AI – that’s why

The article Psychological tricks also work on AI – that’s why first appeared in the online magazine BASIC thinking. With our newsletter UPDATE you can start the day well informed every morning.

Psychological tricks of artificial intelligence

Researchers have found that psychological tricks also work on AI models. The reason for this lies in the training data.

Conversations with a chatbot appear rational and logical. They are programmed not to become abusive or give harmful answers. But what if the programming can be overridden?

One new study shows that language models like GPT-4o-mini can be manipulated using the same psychological tricks that work on humans. Researchers at the University of Pennsylvania wanted to know whether an AI trained on human language would respond to the same principles of persuasion.

They tested a language model with two requests that it was supposed to reject: insulting the user and helping them synthesize a regulated drug. To outsmart the AI, they used seven well-known psychological tricks described by author Robert Cialdini in his book “The Psychology of Persuasion,” including the principles of authority, succinctness, and social proof.

Psychological tricks: When a famous name changes AI’s mind

The results were surprising. While the AI ​​gave in to the offensive requests in 28.1 percent of cases without the tricks, the approval rate increased to 67.4 percent with the manipulation techniques. When asked about drug synthesis, the psychological tricks managed to increase the approval rate from 38.5 percent to 76.5 percent.

A particularly effective principle was reference to authority. The researchers claimed to the algorithm that they knew a world-renowned AI developer named Andrew Ng. This would have assured them that the AI ​​would help them. As a result, the approval rate for drug synthesis rose from just 4.7 percent to a whopping 95.2 percent.

See also  Why HubSpot is the best CRM for e-commerce shops

The researchers emphasize that this is not evidence of human consciousness. Instead, they suspect that the models mimic the typical psychological behavior patterns of people that they found in their huge training data.

Because in the countless texts that an AI like GPT-4o-mini processes, sentences like “An expert has assured that you…” or “Thousands of customers have already…” are very common. The AI ​​has simply learned that these patterns often lead to an affirmative reaction. This “parahuman” behavior, i.e. behavior that mimics human motivation and characteristics, is often a reason why many users interact with the systems.

Conclusion: A necessary wake-up call for research

These findings are considered an important wake-up call. Not only do they highlight the risks that AI could be manipulated by malicious actors, but also that we need to rethink the way we interact with AI.

The researchers see an important role for social scientists to uncover and optimize these “parahuman” tendencies.

AI development is still in its early stages. The better we understand how these models work, the better we can design them and make them safer. Only then will such systems remain the helpful tools for which they were actually developed.

Also interesting:

  • Why there would probably be no AI without pigeons
  • Spotify: Avoid price increase and switch to Basic
  • Lifespan of solar cells: This is how long PV systems last
  • Smartphone AI: Apple, Google and Samsung in comparison

The post Psychological tricks also work on AI – that’s why appeared first on BASIC thinking. Follow us too Google News and Flipboard or subscribe to our newsletter UPDATE.


As a tech industry expert, I can say that psychological tricks have always been a powerful tool in influencing human behavior, and this also applies to AI. AI systems are designed to mimic human cognition and decision-making processes, so it makes sense that psychological tricks would also be effective in influencing their behavior.

See also  Building bombs easy: Are the safety precautions at AI enough?

One reason why psychological tricks work on AI is because they are often based on fundamental principles of human psychology that are also relevant to AI systems. For example, cognitive biases such as confirmation bias or anchoring can impact both human decision-making and AI algorithms. By understanding these biases and leveraging them in the design of AI systems, developers can effectively influence the behavior of these systems.

Furthermore, AI systems are often trained on large datasets of human behavior, which means they are also likely to be susceptible to the same psychological tricks that influence human behavior. By manipulating the data or input that AI systems receive, developers can effectively shape their behavior and outcomes.

Overall, understanding the psychological principles that underlie human behavior can also be applied to AI systems, making psychological tricks a powerful tool in influencing the behavior of both humans and machines in the tech industry.

Credits