AI in war: Experts warn of dangerous loss of control

The article AI in war: Experts warn of dangerous loss of control appeared first in the online magazine BASIC thinking. With our newsletter UPDATE you can start the day well informed every morning.


A current analysis shows how the use of artificial intelligence is increasingly accelerating military decisions and making human control significantly more difficult. Experts therefore warn of a dangerous dynamic in which autonomous systems could trigger real escalations.

The topic of artificial intelligence has gained significant momentum in recent years and is changing the economy, society and politics at a rapid pace. Its ability to evaluate large amounts of data and support complex decisions opens up new possibilities in many areas.

At the same time, this also raises fundamental questions about control and regulation – especially when AI is used in security-relevant areas. Particularly in the military context, this can lead to a dynamic in which decisions are made ever faster and the opportunities for human intervention are increasingly dwindling.

Experts from the think tank Center for European Politics (cep) warn of exactly this development in a current analysis. According to them, the increasing use of AI in war could result in a dangerous loss of control.

Why military AI leaves little time for human control

According to the cep, AI-supported systems are already being used in current conflicts in the Gaza Strip, Iran and Ukraine, "sometimes without functioning supervision". Human control, the analysis argues, is often little more than an illusion.

The use of AI-supported systems significantly shortens analysis and reaction times in the military environment. What is considered a strategic advantage can also become a problem if decisions are prepared or made automatically under high time pressure.

Above all, this time pressure leaves little room for human control and consideration in individual cases. There is an increasing risk that incorrect data or misleading signals will quickly have far-reaching consequences.


According to the analysis, there is also a lack of reliable experience in dealing with language models and other AI systems in a military context. The cep therefore warns of "incalculable consequences" that could ultimately lead to a "dangerous loss of control".

AI in war: What rules the experts demand

"In many cases, operators have very little time to check an AI proposal," explains Anselm Küsters, study author and cep AI expert. The actors often cannot understand "how the system came to its conclusion or what unintended consequences it could have".

Under these conditions, control quickly turns into dependence, says the researcher. What matters, therefore, is whether human control actually works under operational conditions. To achieve this, the cep calls for binding standards as well as reliable and verifiable procedures.

Common rules are not only ethically necessary but also militarily sensible, as they would reduce erroneous attacks and prevent escalations.

The military use of AI should be based on international rules, for example EU or NATO standards for military AI. Among other things, this requires disclosure obligations and limitations for automated systems. The cep also calls for an obligation to report malfunctions.




As a tech industry expert, I believe that the use of artificial intelligence (AI) in war raises significant ethical and practical concerns. While AI has the potential to transform warfare by increasing efficiency and accuracy and by reducing human casualties, there is also a real risk of losing control over AI systems on the battlefield.


One of the main concerns is the potential for AI systems to make autonomous decisions without human oversight, leading to unintended consequences and escalating conflicts. There is also the risk of AI being hacked or manipulated by adversaries, leading to unpredictable outcomes and potentially catastrophic consequences.

Furthermore, the deployment of AI in warfare raises questions about accountability and responsibility. Who is ultimately responsible for the actions of AI systems in war? And how do we ensure that such systems are programmed with ethical guidelines and adhere to international laws and norms?

It is crucial for governments and policymakers to carefully consider the ethical implications of using AI in war and establish clear guidelines and regulations to ensure the responsible and ethical use of AI technology. Collaborative efforts between tech experts, policymakers, and military officials are essential to mitigate the risks and ensure that AI is used in a way that upholds human rights and international law.
