
What to Expect from the AI Cybersecurity Code of Practice?



In 2025, complex threats such as AI-powered phishing, extortion-focused ransomware, deepfakes, supply chain attacks, and IoT device exploitation keep organisations around the world on alert, making it vital for cyber leaders to recognise and manage these risks to secure operations, data, and trust. Beyond these threats, organisations must contend with threat actors using easily accessible AI tools to amplify their destructive efforts. Staying safe demands vigilance and continuous improvement of cyber defences. Recognising these challenges, governments have begun to develop frameworks to guide the secure development and implementation of AI technologies.


In response, the UK government has created a voluntary Code of Practice for the Cyber Security of AI (the "Code"), which establishes baseline security requirements for AI systems while addressing their cyber security vulnerabilities. Although it lacks the legal weight of the EU AI Act, it is one of the first government-backed security standards for AI, and developers of AI systems are encouraged to adhere to it. The Code will also inform the development of a global standard at the European Telecommunications Standards Institute (ETSI). It covers AI systems broadly, including generative AI.


The Code is not aimed at academics who create and test AI systems solely for research purposes; it is aimed at those who build AI systems for real-world use, such as chatbots based on generative AI models. It sets out cyber security requirements across the AI lifecycle, identifying five phases: design, development, deployment, maintenance, and end-of-life. Its provisions address developers, system operators, data custodians, and end users. Below, we examine its structure, principles, and implications.


This article walks through the principles and implications of the Code for companies in the United Kingdom, with the aim of making AI development and deployment processes more transparent and secure.


How does the Code secure AI-powered decision-making?


In an era where AI increasingly drives critical business decisions, safeguarding these intelligent systems is no longer a peripheral concern; it is a fundamental imperative. The thirteen principles of the Code provide an invaluable framework, guiding organizations towards resilient, trustworthy, and secure AI-powered decision-making capabilities.


Think of these principles not as a checklist, but as guiding lights illuminating the path to responsible AI deployment:


  • Raise awareness of AI security threats and risks (Principle 1)

The journey begins with equipping your staff with a deep understanding of AI-specific security vulnerabilities; well-informed employees are the first line of defense. This proactive education transforms them from potential weak links into vigilant guardians of your AI ecosystem.

  • Design your AI system for security as well as functionality and performance (Principle 2)

Security cannot be an afterthought; it must be designed in alongside functionality and performance, which, while crucial, should never overshadow robust security requirements. This "security by design" approach makes safeguards an integral part of the blueprint rather than a last-minute addition, ensuring that potential vulnerabilities are addressed proactively instead of being patched retroactively.

  • Evaluate the threats and manage the risks to your AI system (Principle 3)

A vital part of this proactive approach is to carry out a full evaluation of potential threats and to manage the associated risks to your AI systems. Comprehensive threat modelling allows organisations to anticipate potential dangers, understand attack vectors, and implement pre-emptive measures to minimise their impact.
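
To make this concrete, here is a minimal Python sketch of how a team might capture threat-modelling output in a simple likelihood-by-impact risk register. The threat names, scores, and 1-5 scales are hypothetical illustrations, not requirements drawn from the Code:

```python
# Illustrative AI risk register: qualitative likelihood x impact scoring.
# Threat names and scores are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str        # e.g. "prompt injection"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        # Classic qualitative risk matrix: likelihood multiplied by impact.
        return self.likelihood * self.impact

threats = [
    Threat("prompt injection", likelihood=4, impact=4),
    Threat("training data poisoning", likelihood=2, impact=5),
    Threat("model inversion / data extraction", likelihood=2, impact=4),
]

# Triage: address the highest-scoring threats first.
for threat in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{threat.name}: risk score {threat.risk_score}")
```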

  • Enable human responsibility for AI systems (Principle 4)

The intelligence of AI should never overshadow the need for human oversight, and the Code emphasizes this role. Developers and system operators must embed and maintain robust human monitoring capabilities, ensuring that people can interpret, intervene in, and course-correct AI decisions, especially in critical scenarios.

  • Identify, track and protect your assets (Principle 5)

To protect your AI assets effectively, it is first necessary to meticulously identify, track, and safeguard them, along with their intricate dependencies. A comprehensive understanding of how the components of your AI ecosystem interconnect is paramount: an up-to-date inventory of assets and their dependencies tells you precisely what needs safeguarding and underpins a holistic security strategy.
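
As a sketch of what such an inventory might look like in practice, the following Python snippet records assets, their owners, and their dependencies, then walks the dependency graph for a given model. All asset names are hypothetical:

```python
# Illustrative AI asset inventory: models, datasets, and prompt templates,
# each with an accountable owner and declared dependencies.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    asset_type: str              # "model", "dataset", "prompt-template", ...
    owner: str                   # accountable team or person
    depends_on: list[str] = field(default_factory=list)

inventory = {
    a.name: a
    for a in [
        AIAsset("customer-support-bot", "model", "ml-platform",
                depends_on=["support-tickets-2024", "system-prompt-v3"]),
        AIAsset("support-tickets-2024", "dataset", "data-governance"),
        AIAsset("system-prompt-v3", "prompt-template", "ml-platform"),
    ]
}

def dependencies_of(name: str) -> list[str]:
    """Walk the dependency graph so we know everything a model relies on."""
    seen, stack = [], list(inventory[name].depends_on)
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.append(dep)
            stack.extend(inventory.get(dep, AIAsset(dep, "?", "?")).depends_on)
    return seen

print(dependencies_of("customer-support-bot"))
```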

  • Secure your infrastructure (Principle 6)

Building a secure foundation for your AI systems necessitates the creation of robust development and testing environments. These secure environments act as a bulwark against unauthorized access and potential compromises, ensuring the integrity of the AI lifecycle from inception to deployment.

  • Secure your supply chain (Principle 7) 

Recognizing the interconnected nature of modern AI development, organizations must meticulously analyze and proactively manage the security risks inherent in the usage of third-party AI components. Maintaining overall system security requires a vigilant approach to the entire supply chain.

  • Document your data, models and prompts (Principle 8) 

Transparency and security are inextricably linked in the realm of AI. Companies must maintain detailed and comprehensive records of the data utilized for training, the AI models generated, and the prompts employed. This meticulous documentation is crucial for ensuring accountability, facilitating auditing, and bolstering overall security posture.
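
A minimal sketch of such a record, assuming a simple JSON structure with hypothetical field names and artefact identifiers, might look like the following; fingerprinting the record lets later audits detect tampering:

```python
# Illustrative documentation record for Principle 8: training data, the model
# artefact produced, and the prompts it runs with. All values are hypothetical.
import hashlib
import json

record = {
    "model": {"name": "support-bot", "version": "1.4.0",
              "base_model": "example-llm-7b"},
    "training_data": [
        # A content hash pins down exactly which data was used.
        {"dataset": "support-tickets-2024", "sha256": "..."},
    ],
    "prompts": [
        {"id": "system-prompt-v3",
         "text": "You are a helpful support assistant..."},
    ],
    "recorded_on": "2025-04-16",
}

# Hash the whole record so audits can verify it hasn't changed.
digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
print(f"documentation digest: {digest}")
```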

  • Conduct appropriate testing and evaluation (Principle 9) 

Before entrusting AI systems and models with critical tasks, rigorous testing and evaluation are paramount. This process must encompass a thorough assessment of potential vulnerabilities, the identification of inherent biases that could lead to unfair outcomes, and a comprehensive evaluation of overall performance to ensure reliability and trustworthiness.
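
As an illustration, here is a minimal Python sketch of a pre-deployment release gate that checks accuracy on both a clean and an adversarial test set. The thresholds and the stand-in model are hypothetical; real acceptance criteria should come from your own risk assessment:

```python
# Illustrative pre-deployment evaluation gate. Thresholds are hypothetical.
def accuracy(predict, labelled_examples):
    correct = sum(1 for x, y in labelled_examples if predict(x) == y)
    return correct / len(labelled_examples)

def release_gate(predict, clean_set, adversarial_set,
                 min_clean=0.95, min_adversarial=0.80):
    clean_acc = accuracy(predict, clean_set)
    adv_acc = accuracy(predict, adversarial_set)  # robustness check
    # Block deployment unless both checks pass.
    assert clean_acc >= min_clean, f"clean accuracy too low: {clean_acc:.2%}"
    assert adv_acc >= min_adversarial, f"adversarial accuracy too low: {adv_acc:.2%}"
    print("release gate passed")

# Usage with a trivial stand-in classifier (predicts x > 0):
release_gate(lambda x: x > 0,
             clean_set=[(1, True), (2, True), (-1, False)],
             adversarial_set=[(0.5, True), (-0.5, False)])
```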

  • Communication and processes associated with End-users and Affected Entities (Principle 10) 

Building trust and fostering understanding necessitates the establishment of clear and transparent communication channels. Organizations should proactively keep end-users and affected parties well-informed about the behaviors of AI systems, any associated risks they may pose, and the implementation of relevant updates.

  • Maintain regular security updates, patches and mitigations (Principle 11) 

The dynamic nature of cybersecurity demands constant vigilance. AI systems must receive regular updates with the latest security patches and mitigation strategies to address newly discovered vulnerabilities and maintain a robust defense against evolving threats.

  • Monitor your system’s behaviour (Principle 12) 

Continuous and vigilant monitoring of AI systems is an indispensable practice for the early detection of any deviations from expected behavior, the identification of potential security problems, and the flagging of any unusual or anomalous activity that could indicate a compromise.
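
One minimal way to implement this, sketched below under assumed metrics and thresholds, is to track a rolling baseline of a behavioural signal (latency, refusal rate, output length) and flag observations that deviate sharply from it:

```python
# Illustrative behaviour monitor: keep a rolling window of a system metric
# and flag values far from the baseline. The three-sigma rule is an
# illustrative choice, not a prescription.
from collections import deque
from statistics import mean, stdev

class BehaviourMonitor:
    def __init__(self, window: int = 100, sigmas: float = 3.0):
        self.values = deque(maxlen=window)
        self.sigmas = sigmas

    def observe(self, value: float) -> bool:
        """Return True if the new observation looks anomalous."""
        anomalous = False
        if len(self.values) >= 10:  # need some history before judging
            mu, sd = mean(self.values), stdev(self.values)
            anomalous = sd > 0 and abs(value - mu) > self.sigmas * sd
        self.values.append(value)
        return anomalous

monitor = BehaviourMonitor()
for latency_ms in [50, 52, 49, 51, 50, 48, 53, 50, 49, 51, 400]:
    if monitor.observe(latency_ms):
        print(f"anomaly flagged: {latency_ms} ms")
```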

  • Ensure proper data and model disposal (Principle 13) 

Responsible data stewardship extends to the secure retirement of AI assets. Organizations must establish and adhere to robust protocols for the secure destruction of outdated data and the proper disposal of retired AI models to prevent unauthorized access and potential misuse.
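
As a simple illustration, the sketch below overwrites a local model or data file before deleting it and logs the action for audit; the file path is hypothetical. Note that overwriting in place is not a reliable erasure method on SSDs or cloud storage, where the platform's cryptographic-erasure or secure-delete facilities should be used instead:

```python
# Illustrative disposal routine for a locally stored artefact: overwrite,
# delete, and log. Not sufficient for SSDs or cloud storage (see note above).
import logging
import os

logging.basicConfig(level=logging.INFO)

def dispose(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:       # overwrite contents with zeros
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
    logging.info("disposed of %s (%d bytes overwritten)", path, size)

# Usage (hypothetical artefact path):
# dispose("models/retired/support-bot-1.3.0.bin")
```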


Adherence to these principles will enable businesses to move beyond merely leveraging the power of AI towards creating AI-powered decision-making systems that are both resilient and trustworthy. This commitment to security safeguards valuable assets and supports regulatory compliance, while also cultivating the trust of users and stakeholders, the bedrock of sustainable AI adoption and innovation.


UK vs EU on AI cybersecurity


While the UK's Code serves as a valuable compass, guiding AI developers and operators towards robust cybersecurity practices through voluntary recommendations, the EU AI Act charts a broader, legally binding course for the governance of AI.


One key distinction lies in their scope. The UK's Code offers a set of security-focused guidelines, primarily aimed at those building and managing AI. In contrast, the EU AI Act establishes a comprehensive regulatory landscape, legally obligating organizations to adhere to stringent requirements based on the risk level of their AI applications, particularly those deemed high-risk.


Their focus also diverges. The UK guidelines place a strong emphasis on the technical fortification of AI, ensuring resilience against the ever-evolving threats of cyberattacks, data manipulation, and system vulnerabilities. While the EU AI Act acknowledges the importance of security, its scope extends further, encompassing critical ethical considerations such as fairness, the mitigation of bias, and the preservation of individual rights.


The weight of regulatory enforcement differs significantly. Adherence to the UK's Code is encouraged as a best practice, fostering a culture of security awareness, whereas the EU AI Act carries the force of law, with substantial penalties for the most serious breaches reaching up to 7% of a company's global annual turnover (or EUR 35 million, whichever is higher), underscoring the seriousness of non-compliance. Furthermore, the EU AI Act introduces a nuanced categorization of AI systems, classifying them into four risk tiers (unacceptable, high, limited, and minimal risk), each carrying its own specific set of regulatory obligations. The UK's approach, while emphasizing overarching cybersecurity principles, does not adopt this risk-based categorization, offering instead a more universally applicable set of security recommendations.


The business impact of these frameworks varies accordingly. Companies operating within the United Kingdom can leverage the Code to proactively enhance their AI security protocols without the immediate threat of legal repercussions. However, organizations conducting business within the European Union must navigate the more rigorous and legally binding requirements of the AI Act, especially when developing or deploying AI systems deemed to carry a high level of risk.


Both the UK's Code and the EU AI Act influence the evolving landscape of AI security and governance. The UK fosters an adaptable, industry-driven model for fortifying AI against cyber threats, while the EU's AI Act establishes a regulatory structure mandating compliance, especially for applications with the potential for greater societal impact. Understanding these distinctions is crucial for businesses navigating this complex and increasingly regulated world of AI.


Action steps for businesses


To proactively navigate the evolving AI regulatory landscape, strengthen your organization against potential legal exposure, and transform uncertainty into a competitive advantage, consider implementing the following action steps:


  • Conduct comprehensive AI system audits

Carry out a targeted, legally defensible audit of all deployed and in-development AI systems against the pertinent regulatory frameworks, including the UK's Code and the EU AI Act, paying specific attention to identifying potential areas of non-compliance and meticulously documenting all findings. Note that adherence to even voluntary codes may be interpreted as establishing industry best practice and may therefore influence liability determinations in future legal disputes.
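
A lightweight sketch of what such audit documentation might capture, using hypothetical system names, provisions, and statuses rather than any official compliance schema:

```python
# Illustrative audit findings log: each AI system is mapped to the framework
# provisions it was assessed against. All values are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class AuditFinding:
    system: str
    framework: str      # e.g. "UK Code of Practice", "EU AI Act"
    provision: str
    status: str         # "compliant" | "gap" | "not-applicable"
    evidence: str       # where the supporting documentation lives

findings = [
    AuditFinding("support-bot", "UK Code of Practice",
                 "Principle 8: document data, models and prompts",
                 "gap", "audits/2025-04/support-bot.md"),
]

for finding in findings:
    if finding.status == "gap":
        print(f"[{date.today()}] remediation needed: "
              f"{finding.system} / {finding.provision}")
```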


  • Establish clear AI Governance frameworks

Formally define, document, and disseminate clear AI governance responsibilities throughout the organizational structure, assigning accountable oversight roles to designated personnel or departments with the requisite authority and expertise. This unequivocal assignment of responsibility prevents compliance gaps, supports diligent adherence to all applicable legal and regulatory obligations, and establishes a clear chain of accountability.


  • Maintain documentation of Due Diligence

Create, maintain, and regularly update comprehensive, auditable documentation covering all security assessments conducted, risk evaluations performed (including threat modeling and vulnerability assessments), and mitigation strategies implemented for each AI system. This record-keeping is not merely a bureaucratic exercise: it provides legally defensible evidence of due diligence and proactive risk management, fostering trust with stakeholders and regulators alike.


  • Develop jurisdiction-specific cross-border compliance strategies

For companies engaged in or contemplating cross-border operations, formulate, document, and implement tailored compliance strategies that explicitly address the legal requirements of all relevant jurisdictions, including the UK's principles-based approach to AI cybersecurity and the EU's more prescriptive, legally binding stipulations. A multi-jurisdictional approach transforms regulatory complexity into a strategic advantage, enabling you to navigate global AI markets with confidence and legal certainty.


  • Evaluate and secure specialised AI insurance coverage

Conduct a thorough review of existing insurance policies to ascertain the scope and limitations of coverage for AI-specific risks, recognizing that traditional cyber insurance policies may contain ambiguities or exclusions for novel AI-related liabilities. Where gaps exist, procure and maintain specialised coverage designed to address these unique risks, providing an essential layer of financial protection against potential legal judgments and financial repercussions arising from AI-related incidents.


  • Establish AI incident response Protocols

Develop, document, and formally implement incident response protocols tailored to AI security breaches and incidents. These protocols should include clearly defined jurisdiction-specific reporting requirements, notification timelines, and established procedures for swift containment, comprehensive remediation, thorough post-incident analysis, and defensible documentation. This preparedness transforms potential crises into opportunities to demonstrate resilience and commitment to security and legal compliance.


  • Implement legislative monitoring and adaptation processes

Establish a proactive, systematic process for continuously monitoring current and emerging AI legislation, as well as forthcoming regulation at both national and international levels. Implement formal procedures for the timely adaptation of internal policies, technical controls, and operational practices, so that compliance with the dynamic legal landscape remains demonstrable. This strategic foresight positions your organization not as a follower, but as an agile, adaptable leader in the age of AI regulation.


By embracing and executing these action steps, with the strategic guidance of our legal team, your organization will not only navigate the complexities of AI regulation but also establish a strong foundation for innovation, trust, and sustained success in the transformative era of AI. This is not merely about compliance; it is about seizing the future with confidence and legal foresight, with our counsel as your trusted strategic partner.


Conclusion


The UK's Code represents a pivotal stride towards establishing clear and impactful AI security benchmarks. By proactively embedding these principles into your operational DNA, your organization will not only fortify the security posture of its AI systems but also cultivate an environment of trust and demonstrable reliability in its AI technologies, a cornerstone of future success.


At Icon.Partners, we are not merely observers of these transformative frameworks; we are active partners in your journey. We are deeply committed to empowering your company to not only meet but exceed these standards. Our bespoke solutions are engineered to align seamlessly with the Code's technical principles for secure AI development and deployment. Together, we can forge a path towards a secure and prosperous AI-driven future.
