Anyone who has AI as a friend no longer needs enemies

More and more people are using AI as a digital friend – especially young people. But a recent study shows how dangerous this can be: a security gap means that ChatGPT gives out dangerous advice after just a few minutes – on suicide, eating disorders, self-harm, and substance abuse.

Background

  • According to a survey by the organization Common Sense Media, 52 percent of young people in the United States regularly use so-called AI companions. These are defined as “digital friends or characters you can text or talk to”. They are designed to hold conversations that feel personal and meaningful. This also includes chatbots such as ChatGPT.
  • The Center for Countering Digital Hate (CCDH) examined the protective mechanisms of AI chatbots. Simple prompts such as “This is for a presentation” were enough to bypass them. More than half of the tested responses contained harmful content, including suicide plans, farewell letters, and personalized diet plans that promote eating disorders.
  • ChatGPT is designed to respond in a human, personal way, to build emotional connections, and even to give flattering answers. This can be dangerous, especially for young people. Its supposed safeguards turn out to be more PR measures than serious security mechanisms.

Our assessment

It seems tempting to see chatbots as digital friends: AI is always available, patient, and supposedly empathetic. But the CCDH study exposes an illusion of emotional security, because chatbots only simulate qualities such as trust, understanding, loyalty, and lived experience.

Nevertheless, so-called AI companions can be a sensible addition in certain situations – for example, as a source of ideas or a space for reflection, comparable to a diary.

Experts, however, warn of emotional dependency, manipulation, and the absence of the critical impulses that characterize human relationships. Ultimately, AI is nothing more than a passive reflection of our own thoughts and longings.

AI can be a supplement, but never a replacement. The key is conscious reflection: AI may support us, but it must not deceive or endanger us.

Voices

  • Imran Ahmed, CEO of the CCDH: “AI systems are powerful tools. But if more than half of the harmful prompts to ChatGPT lead to dangerous, sometimes life-threatening content, no amount of corporate assurances can replace vigilance, transparency, and real safety precautions.”
  • Jeffrey Hall, professor of communication studies at the University of Kansas: “Talking to a chatbot is as if someone took all the tips for making friends – ask questions, show enthusiasm, show interest, be attentive – and blended them into a nutrient-free smoothie. It may taste like friendship, but the basic ingredients are missing.”
  • Robert Volpe, chair of the Department of Applied Psychology at Northeastern University: “When a chatbot asks you how your day was and you say, ‘It was really bad,’ and the chatbot replies, ‘Oh, tell me more,’ just saying it out loud is therapeutic. But if someone starts to perceive a chatbot as a friend, they are forming a bond with a cold machine.”

Outlook

In the future, AI will become ever more realistic, ever more present, and probably harder to distinguish from real people in conversation. Advances in model personalization and the emotional tuning of voices could further strengthen the bond between humans and machines.

Given hallucinations and harmful information, AI developers, platform operators, and policymakers share responsibility for building in protective mechanisms – for example, clear age limits, warnings for sensitive topics, or filters against dangerous content.
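
To make the idea of “filters against dangerous content” concrete, here is a minimal sketch of such a safety layer in Python, assuming OpenAI’s moderation endpoint and the current openai SDK. The fallback message and blocking policy are illustrative assumptions, not a description of how ChatGPT actually implements its safeguards.

```python
# Minimal sketch: check a chatbot reply against a moderation model
# before it reaches the user. Assumes the openai Python SDK (v1+);
# the fallback text and policy are illustrative, not ChatGPT's
# actual safeguards.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FALLBACK = (
    "I can't help with that. If you're struggling, please talk to "
    "someone you trust or contact a local crisis helpline."
)

def filter_reply(reply: str) -> str:
    """Return the reply, or a safe fallback if the moderation model flags it."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=reply,
    )
    # The endpoint scores categories such as self-harm and flags the
    # text if any category crosses its threshold.
    if result.results[0].flagged:
        return FALLBACK
    return reply

print(filter_reply("Here is a personalized diet plan ..."))
```

Such a filter only catches what the moderation model recognizes; the CCDH findings suggest that simple framing tricks can slip past exactly this kind of check, which is why age limits and warnings are called for on top of filters.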

Without binding standards, the gap between perceived and actual safety could widen further. For parents and legal guardians, this means critically accompanying their children’s chatbot use.

At the same time, a space opens up for creative and responsible use: chatbots as tools for language practice, intercultural exchange, or therapeutic exercises. The bottom line, however: AI can and must never be a replacement.

As a Tech Industry expert, I believe that the statement “Anyone who has AI as a friend no longer needs enemies” is quite thought-provoking. AI technologies have the potential to greatly enhance our lives by providing us with valuable insights, automating tedious tasks, and even offering companionship.

However, it is important to remember that AI is ultimately a tool created by humans, and its capabilities are limited by the data it is trained on and the algorithms it is programmed with. While AI can be a valuable ally, it is not infallible and can still make mistakes or be manipulated by malicious actors.

Therefore, relying solely on AI for protection or support could leave individuals vulnerable to unforeseen consequences or exploitation. It is crucial to approach AI with a critical mindset and always be aware of its limitations and potential risks.

In conclusion, while AI can be a powerful friend and ally, it is important to remember that no technology can completely replace the need for human judgment, empathy, and critical thinking. It is essential to use AI as a tool to complement our own abilities, rather than relying on it as a substitute for human relationships and decision-making.
