Today sees the publication of a new guidance document from the NCSC – Guidelines for secure AI system development.

Unless you’ve been living in a cave for the past 12 months, you can’t have failed to notice the explosion of new AI systems. The AI Tools directory lists hundreds of AI systems covering areas such as web design, translation services, audio, chatbots, coding, education, gaming, video editing, business intelligence, and much more.

Users of Microsoft Windows can now have AI directly on their desktop with the roll-out of Microsoft Copilot. Currently, Copilot is an optional tool available to those on the Windows Insider Program, but it will become a permanent feature of Windows in the not-too-distant future.

Even if you haven’t used these tools directly, you can’t have failed to see the news reports about ChatGPT, DALL·E 3, and Midjourney – they have taken the AI stage by storm.

AI systems have the potential to change the world in which we live, and there is much debate about the pros and cons of this new breed of AI tools. The UK has just hosted the AI Safety Summit 2023 at Bletchley Park, where leaders in the field from academia, world governments, industry, and multilateral organisations met to discuss how the world should handle this emerging technology in a safe, proportionate, legal, and moral manner.

World leaders at the UK AI Safety Summit 2023

Summit programme of events

The summit ran for two days, and delegations were invited to participate in the following discussions:

  1. Risks to Global Safety from Frontier AI Misuse
    Discussion of the safety risks posed by recent and next generation frontier AI models, including risks to biosecurity and cybersecurity.
  2. Risks from Unpredictable Advances in Frontier AI Capability
    Discussion of risks from unpredictable ‘leaps’ in frontier AI capability as models are rapidly scaled, emerging forecasting methods, and implications for future AI development, including open-source.
  3. Risks from Loss of Control over Frontier AI
    Discussion of whether and how very advanced AI could in the future lead to loss of human control and oversight, risks this would pose, and tools to monitor and prevent these scenarios.
  4. Risks from the Integration of Frontier AI into Society
    Discussion of risks from the integration of frontier AI into society, including election disruption, bias, impacts on crime and online safety, and exacerbating global inequalities, as well as measures countries are already taking to address these risks.
  5. What should Frontier AI developers do to scale responsibly?
    Multidisciplinary discussion of Responsible Capability Scaling at frontier AI developers including defining risk thresholds, effective model risk assessments, pre-commitments to specific risk mitigations, robust governance and accountability mechanisms, and model development choices.
  6. What should National Policymakers do in relation to the risk and opportunities of AI?
    Multidisciplinary discussion of different policies to manage frontier AI risks in all countries including monitoring, accountability mechanisms, licensing, and approaches to open-source AI models, as well as lessons learned from measures already being taken.
  7. What should the International Community do in relation to the risk and opportunities of AI?
    Multidisciplinary discussion of where international collaboration is most needed to both manage risks and realise opportunities from frontier AI, including areas for international research collaborations.
  8. What should the Scientific Community do in relation to the risk and opportunities of AI?
    Multidisciplinary discussion of the current state of technical solutions for frontier AI safety, the most urgent areas of research, and where promising solutions are emerging.

NCSC guidance

The new guidance has been released in the UK by the NCSC in association with many other international partners, including:

  • National Security Agency (NSA)
  • Federal Bureau of Investigation (FBI)
  • Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • Canadian Centre for Cyber Security (CCCS)
  • New Zealand National Cyber Security Centre (NCSC-NZ)
  • Chile’s Government CSIRT
  • National Cyber and Information Security Agency of the Czech Republic (NUKIB)
  • Information System Authority of Estonia (RIA)
  • National Cyber Security Centre of Estonia (NCSC-EE)
  • French Cybersecurity Agency (ANSSI)
  • Germany’s Federal Office for Information Security (BSI)
  • Israeli National Cyber Directorate (INCD)
  • Italian National Cybersecurity Agency (ACN)
  • Japan’s National center of Incident readiness and Strategy for Cybersecurity (NISC)
  • Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • Nigeria’s National Information Technology Development Agency (NITDA)
  • Norwegian National Cyber Security Centre (NCSC-NO)
  • Poland’s Ministry of Digital Affairs
  • Poland’s NASK National Research Institute (NASK)
  • Republic of Korea National Intelligence Service (NIS)
  • Cyber Security Agency of Singapore (CSA)
  • Alan Turing Institute
  • Amazon
  • Anthropic
  • Databricks
  • Georgetown University’s Center for Security and Emerging Technology
  • Google
  • Google DeepMind
  • Hugging Face
  • IBM
  • Imbue
  • Inflection
  • Microsoft
  • OpenAI
  • Palantir
  • RAND
  • Scale AI
  • Software Engineering Institute at Carnegie Mellon University
  • Stanford Center for AI Safety
  • Stanford Program on Geopolitics, Technology and Governance

The guidance is designed for providers of any systems that use AI, whether those systems have been created from scratch or built on top of tools and services provided by others.

Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

The guidance covers four areas of the AI system development life cycle:

  1. Secure design
    Applicable to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
  2. Secure development
    Applicable to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management (a minimal code sketch illustrating one supply chain control follows this list).
  3. Secure deployment
    Applicable to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.
  4. Secure operation and maintenance
    Applicable to the secure operation and maintenance stage of the AI system development life cycle. It provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.
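To make the supply chain security point under "Secure development" a little more concrete, here is a minimal, hypothetical Python sketch of one such control: verifying a downloaded third-party model artifact against a pinned SHA-256 digest before it is ever deserialised. The file path, digest value, and helper names are illustrative assumptions for this post, not something prescribed by the NCSC guidance itself.

    import hashlib
    from pathlib import Path

    # Hypothetical example: pin the expected SHA-256 digest of a third-party
    # model artifact (e.g. as published by the model provider) at build time.
    EXPECTED_SHA256 = "0123456789abcdef"  # placeholder, not a real digest

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, streaming in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            while chunk := fh.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    def load_model_safely(path: Path) -> bytes:
        """Refuse to load a model artifact whose digest does not match the pinned value."""
        actual = sha256_of(path)
        if actual != EXPECTED_SHA256:
            raise RuntimeError(
                f"Model artifact {path} failed integrity check: "
                f"expected {EXPECTED_SHA256}, got {actual}"
            )
        # Only read (and later deserialise) the artifact once its integrity is confirmed;
        # the actual loading call depends on your framework and is omitted here.
        return path.read_bytes()

    if __name__ == "__main__":
        load_model_safely(Path("models/sentiment-classifier.bin"))  # illustrative path

This is only a sketch of the idea of treating externally sourced models like any other untrusted dependency; in practice the guidance points towards broader measures such as tracking assets, documenting provenance, and managing technical debt across the whole supply chain.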