National Security News

Reporting the facts on national security

Double-Edged Sword: Emerging AI Technologies and the National Security Landscape 

A recent panel discussion explored the complex relationship between emerging technologies and national security, with artificial intelligence (AI), autonomous weapons, and cyber warfare at the centre of the conversation. Leading experts examined the potential benefits and the lurking threats these advancements pose.

Cybersecurity emerged as a primary concern. The sophistication of cyberattacks is likely to skyrocket as malicious actors leverage AI for more targeted and disruptive operations. 

The potential for AI to be weaponised for social control and warfare also sparked ethical concerns. “AI can be used by governments to suppress dissent,” warned a panellist, raising the spectre of authoritarian regimes using AI for mass surveillance and manipulation. The development of lethal autonomous weapons further complicates the picture, blurring the lines of responsibility in armed conflict. 

According to Ahmad Antar, a Harvard University instructor specialising in digital transformation and sustainability, AI is a “quintessential example of a dual-use technology.”  

He explained, “On the one hand it adds a lot of benefits to our everyday lives, ranging from a variety of applications from medicine to military. On the other hand, it carries a lot of risks.” 

These risks, Antar said, can be broadly categorised into five key areas: 

  • Cybersecurity: A top concern for CEOs and government leaders, according to Antar, due to the increased sophistication of AI-fuelled cyberattacks. 
  • Freedom and Security: Antar described how AI is being used to prop up dictatorships, a phenomenon known as “digital repression,” in which AI is employed to coerce or oppress citizens. 
  • Defence and Security: “That involves applications or examples like lethal autonomous weapon systems, what’s known as ‘killer robots,’” explained Antar. “Most of these machines have the ability to identify, track, detect, and even take out a target independently, without having a human in the loop.” Antar stressed that deploying such technology prematurely, while these models are still in an experimental phase, could be disastrous. 
  • Information Security: Antar warned of the proliferation of false narratives: “The spread of false narratives, whether with intent like disinformation or without intent like misinformation, is leading to a situation I like to call an infodemic.” 
  • Biosecurity: AI could be used to create recipes for synthetic toxins. Antar offered a hypothetical scenario: “Consider a rogue entity that was able to develop a synthetic pathogen as we’re coming out of a pandemic, one with built-in sophistication like being highly transmissible, easily evasive, or even resistant to common treatment, and package that pathogen and release it in a busy airport like Heathrow. It’s an unfathomable scenario. Well, thanks to AI, it’s plausible right now.” 

The potential downsides of AI, as these examples illustrate, necessitate the development of robust safeguards from environmental, ethical, and societal standpoints. 

A Multifaceted Threat 

The discussion then shifted to the challenge of addressing AI threats, with Marc Owen Jones, Associate Professor of Digital Humanities, offering a critical perspective. “When it comes to digital technology, disinformation, or AI, we need to think in terms of supply chains first,” he emphasised.  

Jones argued that AI is not the root cause, but rather a “force multiplier” that amplifies existing problems. 

Effective solutions to AI challenges, Jones argued, require a context-specific approach: identifying “what is the problem we’re looking at, where is that problem, who are the different nodes within that supply chain or network of information, and how can that be addressed on a level that might involve regulation or it might involve other aspects, for example training.” 

Helen Zhang, Deputy Chief of Staff at the Office of Eric Schmidt, agreed: “All these issues that we’re facing now with artificial intelligence are augmenting the existing problems we have in society.” 

The discussion called for a more interdisciplinary approach to understanding AI’s impact. Zhang noted, “In order to understand how artificial intelligence impacts technologies that then impact our global security, we need to have an understanding of how it impacts different disciplines whether that is in national security, the military or the sciences.” 

Author

  • Val Dockrell is a London-based Senior Investigator and Open Source Intelligence (“OSINT”) specialist who has led in-depth investigations in multiple jurisdictions around the world. She also speaks several languages and is a member of the Fraud Women’s Network. Her X (formerly Twitter) handle is @ValDockrell.
