Lindy Cameron, head of the United Kingdom’s National Cyber Security Centre (NCSC), recently announced the world’s first set of security guidelines developed specifically for Artificial Intelligence (AI), a significant step forward for the field.
What makes the announcement notable is the backing the guidelines have received from agencies representing 17 countries, including the United States.
The guidelines were developed by the NCSC in collaboration with the US Cybersecurity and Infrastructure Security Agency (CISA) and a range of international partners. Cameron emphasised the need for concerted global action, from both governments and industry, to keep pace with AI’s rapid evolution.
She said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.
“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.
“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities.”
Summary of Guidelines: The guidelines are aimed at providers of AI systems, whether those systems are built from scratch or on top of existing tools and services.
The document stresses the importance of securing AI systems so that they perform as intended, remain available, and keep sensitive data safe from unauthorised access, with security addressed throughout the AI system development life cycle.
The release of the guidelines follows the NCSC’s recent review of the most significant threats to national security, which highlighted the risks AI poses to critical infrastructure and electoral processes.
The guidelines state, “AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”
Key Areas of Focus: The guidelines are divided into four key areas, spanning the entire AI system development life cycle.
- Secure Design: Covers risk assessment, threat modelling, and key considerations for system and model design during the design phase.
- Secure Development: Addresses supply chain security, documentation, and effective asset management during development (see the illustrative sketch after this list).
- Secure Deployment: Focuses on protecting infrastructure and models, incident management, and responsible release during deployment.
- Secure Operation and Maintenance: Covers post-deployment actions like logging, monitoring, updates, and information sharing for system integrity.
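To give a concrete flavour of the kind of control these areas describe, below is a minimal, illustrative Python sketch, not taken from the guidelines themselves. It shows two of the themes above in miniature: supply chain integrity (verifying a model artefact’s cryptographic digest against a provider-supplied value before loading it) and operational logging of the outcome. The file path, digest, and function names (MODEL_PATH, EXPECTED_SHA256, load_model_if_trusted) are hypothetical placeholders.

```python
import hashlib
import logging
from pathlib import Path

# Hypothetical values for illustration only; in practice the expected
# digest would come from a signed manifest published by the model provider.
MODEL_PATH = Path("models/classifier-v1.bin")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("model-integrity")


def sha256_of(path: Path) -> str:
    """Compute a file's SHA-256 digest, streaming in chunks to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_model_if_trusted(path: Path, expected_digest: str) -> bytes:
    """Load a model artefact only if its digest matches the trusted value."""
    actual = sha256_of(path)
    if actual != expected_digest:
        # Log and refuse to load: a mismatch may indicate a tampered artefact.
        log.error("Integrity check FAILED for %s (got %s)", path, actual)
        raise RuntimeError(f"Model artefact {path} failed integrity verification")
    log.info("Integrity check passed for %s", path)
    return path.read_bytes()  # hand off to the real deserialiser in practice


if __name__ == "__main__":
    model_bytes = load_model_if_trusted(MODEL_PATH, EXPECTED_SHA256)
```

Failing closed on a mismatch, rather than loading the artefact and merely warning, is in keeping with the ‘secure by default’ posture discussed below.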
Guiding Principles: In line with the NCSC’s secure development and deployment principles, the guidelines take a ‘secure by default’ approach. They also align with globally recognised frameworks such as NIST’s Secure Software Development Framework and the ‘secure by design’ principles published by CISA, the NCSC, and other international cyber agencies.
As AI continues to spread across industries, its secure development and deployment have become paramount concerns. These inaugural NCSC guidelines mark a pivotal moment, steering AI system providers toward a path where innovation and rigorous security go hand in hand.