
Recent Conflict a Red Alert for Global Military AI Safety

Imagine a battlefield where lightning-fast algorithms decide who lives and dies. That’s the chilling vision some experts held as artificial intelligence (AI) began infiltrating warfare. But recent reports, such as those detailing Israel’s alleged use of AI in targeting operations in Gaza and Ukraine’s exploration of autonomous drone technology, show that the danger of unregulated AI in warfare is very real.

Without clear international frameworks, military AI risks spiralling out of control, potentially causing devastating consequences.


AI Reshapes Warfare, But Safety Concerns Loom Large

Nations around the world are integrating AI systems for everything from target identification to autonomous drone operations.

The rapid integration of AI into modern warfare is fundamentally altering the battlefield, reshaping strategies and tactics at an unprecedented pace. While this promises greater efficiency, the lack of globally accepted safety protocols poses a daunting challenge.

Civilian Casualties: Can AI Be Trusted with Targeting?

One pressing concern is the potential for unintended harm caused by AI systems operating without proper safeguards. These systems rely on complex algorithms to interpret vast amounts of data, but they are not foolproof. Instances of misinterpretation or errors in data analysis could lead to devastating consequences, such as civilian casualties or the inadvertent escalation of conflicts.

Recent revelations about the Israeli military’s alleged use of AI to target Palestinians in Gaza illustrate this risk.

A report by +972 magazine and Local Call alleges that Israel employed an AI targeting system, named “Lavender,” to identify tens of thousands of Gazans as potential targets for assassination. Despite a known error rate of approximately 10 percent, the system was utilised extensively during the conflict, marking around 37,000 Palestinians as suspected “Hamas militants.”  

It was also reported that an additional automated system, “Where’s Daddy?”, was then used to track targeted individuals so that bombings could be carried out when they entered their family residences.

The report highlights a shift in the Israeli military’s approach to AI use, particularly since the October 7th attacks.  

Palestinian family amidst the ruins of their family house, in Deir el-Balah, central Gaza Strip, March 2024. (Source: AFP Photo)

“If Lavender decided an individual was a militant in Hamas, they were essentially asked to treat that as an order, with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based,” a source told +972 magazine.

Reports claim that strikes guided by “Lavender” were permitted to cause up to 15-20 civilian casualties for every targeted Hamas operative. This raises serious concerns about the potential for increased civilian harm with AI-driven targeting, especially if the system misidentifies targets.

While the Israel Defense Forces deny using AI to designate military targets, the report suggests a significant reliance on AI systems like Lavender during the conflict.

Autonomous Weapons: Who’s in Charge on the Battlefield?

The deployment of autonomous weapons poses a serious threat to the concept of human control over military actions. Without adequate oversight, these systems could operate independently, potentially violating ethical and legal norms governing armed conflict.

As Ukraine moves to integrate AI into the battlefield through the development of autonomous attack drones, concerns are mounting over the potential ramifications if the AI malfunctions, underscoring the broader ethical and safety questions surrounding the use of AI in military operations.

These concerns are amplified by reports of Ukrainian drones potentially operating without human oversight. The AI-powered drones can reportedly identify 64 different types of Russian ‘military objects’; once a target is identified, the AI can autonomously guide the drone towards it, potentially even launching an attack without human intervention.

Additionally, the lack of clear lines of responsibility, coupled with the speed of AI decision-making, could increase the risk of unintended escalation in conflict. There is also the possibility of hostile actors hacking weaponised AI systems. The ethical and legal questions surrounding autonomous weapons are enormous.

The Need for Global Standards

The rapid integration of AI in warfare demands immediate action. Unregulated AI systems heighten the risk of unintended escalation and civilian harm.

While achieving global consensus will be challenging given differing national security interests, a promising path forward emerges from recent collaborative efforts by the US, Austria, Bahrain, Canada, and Portugal to establish AI safety standards for military applications.

Now, more than ever, governments and international organisations must prioritise collaboration to ensure that AI serves as a force for peace, not destruction. The future of armed conflict depends on our ability to harness the power of AI responsibly.

Author

  • Val Dockrell

    Val Dockrell is a London-based Senior Investigator and Open Source Intelligence (“OSINT”) specialist who has led in-depth investigations in multiple jurisdictions around the world. She also speaks several languages and is a member of the Fraud Women’s Network. Her X (formerly Twitter) handle is @ValDockrell.
