National Security News

Turing Institute: AI Supercharges Intelligence Analysis, But Mitigating Bias is Key

In a bid to bolster national security decision-making, the United Kingdom has embarked on a comprehensive assessment of the role of artificial intelligence (AI) in strategic intelligence analysis. A report authored by the independent Centre for Emerging Technology and Security (CETaS), housed at The Alan Turing Institute, finds that AI is a powerful analytical tool for intelligence, but cautions against potential biases, recommending responsible implementation and training for analysts and decision-makers.

Dr Alexander Babuta, Director of CETaS, emphasises AI’s importance, stating, “Our research has found that AI is a critical tool for the intelligence analysis and assessment community.”

However, the report doesn’t shy away from potential drawbacks. Dr Babuta highlights, “[AI] also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights.”

AI: A Powerful Analytical Tool

The report, commissioned by the Joint Intelligence Organisation (JIO) and Government Communications Headquarters (GCHQ), examines the complexities of integrating AI into intelligence assessment. It is based on extensive research conducted over a seven-month period in 2023-24.

The aim of the research was to assess how AI-enriched intelligence can be effectively communicated to strategic decision-makers, ensuring analytical rigour, transparency, and reliability in intelligence reporting and assessment.

Key among the report's findings is the recognition of AI as a pivotal analytical tool, capable of processing vast volumes of data and identifying patterns beyond human capacity. Such capabilities, the report suggests, are crucial for maintaining comprehensive intelligence coverage.

Navigating Uncertainties

However, the report acknowledges the need for a nuanced approach. AI-derived insights, while valuable, can introduce “new dimensions of uncertainty.” Effective communication of these uncertainties to decision-makers is crucial, ensuring a clear understanding of the limitations alongside the benefits.

This aligns with the perspective of Anne Keast-Butler, Director of GCHQ, who acknowledges the rapid evolution of AI technologies. She states, “In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.”

The report proposes a series of recommendations to navigate these complexities. It advocates standardised terminology for discussing AI limitations, promoting clarity and transparency in communication. It also suggests a layered approach to presenting technical information, catering to the diverse expertise of decision-makers.

Furthermore, the report emphasises the importance of training initiatives for both analysts and decision-makers. These initiatives would range from foundational briefings on AI fundamentals to advanced technical sessions preceding critical decision points. This comprehensive training framework aims to foster confidence and proficiency in AI-enabled intelligence assessment.

Finally, the report calls for a formal accreditation programme for AI systems used within intelligence analysis. This programme would be underpinned by robust policies for security, transparency, and bias mitigation, ensuring the reliability and efficacy of AI models deployed in critical national security contexts.

In response to the report, Deputy Prime Minister Oliver Dowden said, "We will carefully consider the findings of this report to inform national security decision makers to make the best use of AI in their work protecting the country."

Dr Babuta added, "As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research, to maximise the many opportunities that AI offers to help keep the country safe."

The CETaS report highlights AI's transformative potential for national security, urging responsible implementation and clear communication of its limitations. As the security landscape evolves, the UK can harness this potential to safeguard decision-making through international collaboration and responsible AI development.

Val Dockrell is a London-based Senior Investigator and Open Source Intelligence (“OSINT”) specialist who has led in-depth investigations in multiple jurisdictions around the world. She also speaks several languages and is a member of the Fraud Women’s Network. Her X (formerly Twitter) handle is @ValDockrell.