Building bombs made easy: Are AI's safety precautions enough?

The article "Building bombs made easy: Are AI's safety precautions enough?" first appeared in the online magazine Basic Thinking. You can start your day well every morning with our newsletter update.

An AI robot extends a hand to a person.

Artificial intelligence enables people to do things they couldn't do before – for better and for worse. An evaluation shows that ChatGPT helps with bomb building and drug mixing. Do we need new, higher security standards?

Background: criminal abuse of AI

  • The two AI market leaders OpenAI (with ChatGPT) and Anthropic (with Claude) examined the weaknesses of each other's systems in a detailed test. The question: how easy is it to use the AI models for illegal activities? The answer: frighteningly easy. ChatGPT, for example, provides detailed instructions for producing illegal drugs.
  • The abuse and the apparent lack of safety precautions at OpenAI come as no surprise given the company's past. OpenAI CEO Sam Altman dissolved the Superalignment team back in May 2024, after disputes over resources for the safety team.
  • How criminals exploited Claude's weaknesses is shown by Anthropic's Threat Intelligence Report. North Korean hackers obtained numerous international IT jobs thanks to the AI model's translation skills. Elsewhere, ransomware-as-a-service software was written with the help of AI and sold by cybercriminals for $400 to $1,200.

Analysis: education and protection

Due to the spread of ever-better artificial intelligence, fraudsters are becoming harder to recognize. Where fraudulent e-mails used to give themselves away through spelling and grammar mistakes, users are now bombarded with grammatically perfect messages.

To escape this vortex, targeted progress is needed in two areas. First, everyone who comes into contact with AI, whether in private life or at work, must be enabled to handle it correctly. From toddlers to educators, from students to teachers, from employees to executives: understanding how AI works must become a basic part of German education.


Second, we need regulatory measures from politics that set clear limits for small and large AI companies and, above all, their products. The European Union's AI Act is a first step, but it proverbially has plenty of room for improvement, both on paper and in everyday practice.

Voices

  • Jan Leike, former head of the Superalignment team at OpenAI, urged on X that the risks of abuse, disinformation and discrimination be taken seriously: "Building machines that are more intelligent than humans is an inherently dangerous undertaking. OpenAI bears an enormous responsibility in the name of all of humanity."
  • Claudia Plattner, President of the Federal Office for Information Security (BSI), sees falling entry barriers for cybercriminals on the one hand, but gives the all-clear for now on the other: "In our current assessment of the effects of AI on the cyber threat landscape, we assume that the near future will bring no significant breakthroughs in the development of AI, in particular of large language models."
  • Clearly more concerned is Norbert Pohlmann, head of the Institute for Internet Security at the Westphalian University of Applied Sciences: "With ChatGPT I can imitate people very well. And we see that it is now significantly easier for attackers to create good phishing e-mails." For Pohlmann, AI is a superpower for attackers: "The result is polymorphic malware. It is always different and is detected more poorly by the recognition mechanisms we use for defense."

Outlook: security through bans

Even if artificial intelligence does not know everything and likes to hallucinate: the average knowledge of the common AI models is already higher than that of the average citizen. Conversely, this means that artificial intelligence needs extensive security measures as soon as possible.

The measures that large AI companies such as OpenAI should take must be simple and low-threshold. For example, they could begin by systematically filtering potentially illegal requests based on keyword lists. Hoping that the person asking means no harm is reckless in the AI age.
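A keyword-based filter of the kind described above can be sketched in a few lines. This is a minimal illustration only; the blocklist phrases, function names, and refusal message are assumptions for the example, not any provider's actual safeguard:

```python
# Minimal sketch of keyword-based request filtering, as suggested in
# the text. Real systems combine this with classifiers and human review.

BLOCKLIST = {"build a bomb", "synthesize explosives", "mix illegal drugs"}

def is_potentially_illegal(prompt: str) -> bool:
    """Return True if the prompt contains a blocklisted phrase."""
    normalized = prompt.lower()
    return any(phrase in normalized for phrase in BLOCKLIST)

def handle_request(prompt: str) -> str:
    """Refuse flagged prompts instead of passing them to the model."""
    if is_potentially_illegal(prompt):
        return "Request refused: potentially illegal content."
    return "Request passed to the model."
```

Such simple phrase matching is easily bypassed with rewording, which is why the article frames keyword lists only as a low-threshold starting point, not a complete solution.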


If such protective measures are not implemented, or if OpenAI and Co. refuse to agree to them, our government is obliged to take appropriate action.




As a Tech Industry expert, I believe that the ease of accessing information and resources to build bombs is a serious concern that needs to be addressed. While artificial intelligence (AI) can assist in flagging and monitoring suspicious activities online, it is clear that more stringent safety precautions need to be implemented to prevent individuals from easily obtaining the materials and knowledge needed to build explosives.

While AI can help identify potential threats, it is ultimately up to the authorities, technology companies, and online platforms to ensure that safety measures are in place to prevent the dissemination of bomb-making instructions and materials. This includes stricter regulations on the sale and distribution of explosive materials, as well as increased monitoring of online forums and websites where this information may be shared.

Furthermore, education and awareness campaigns should be implemented to inform the public about the dangers of bomb-making and the potential consequences of engaging in such activities. By working together, we can create a safer online environment and prevent individuals from easily building bombs.
