The ETSI EN 304 223 standard introduces baseline security requirements for AI that enterprises should integrate into their governance frameworks.
As organisations embed machine learning into their core operations, this European Standard (EN) establishes concrete provisions for securing AI models and systems. It stands as the first globally applicable European Standard for AI cybersecurity, having secured formal approval from National Standards Organisations to strengthen its authority across international markets.
The standard serves as a necessary benchmark alongside the EU AI Act. It addresses the reality that AI systems carry specific risks – such as susceptibility to data poisoning, model obfuscation, and indirect prompt injection – that traditional software security measures often miss. The standard covers everything from deep neural networks and generative AI through to basic predictive systems, explicitly excluding only those used strictly for academic research.
ETSI standard clarifies the chain of accountability for AI security
A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard resolves this by defining three primary technical roles: Developers, System Operators, and Data Custodians.
For many enterprises, these lines blur. A financial services firm that fine-tunes an open-source model for fraud detection counts as both a Developer and a System Operator. This dual status triggers strict obligations, requiring the firm to secure the deployment infrastructure while documenting the provenance of training data and the auditing of the model’s design.
The inclusion of ‘Data Custodians’ as a distinct stakeholder group directly impacts Chief Data and Analytics Officers (CDAOs). These entities control data permissions and integrity, a role that now carries explicit security responsibilities. Custodians must ensure that the intended usage of a system aligns with the sensitivity of the training data, effectively placing a security gatekeeper within the data management workflow.
ETSI’s AI standard makes clear that security cannot be an afterthought bolted on at the deployment stage. During the design phase, organisations must conduct threat modelling that addresses AI-native attacks, such as membership inference and model obfuscation.
One provision requires Developers to limit functionality to reduce the attack surface. For instance, if a system uses a multi-modal model but only requires text processing, the unused modalities (such as image or audio processing) represent a risk that must be managed. This requirement forces technical leaders to rethink the common practice of deploying large, general-purpose foundation models where a smaller, more specialised model would suffice.
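To make this concrete, here is a minimal Python sketch (the function and payload names are hypothetical, not drawn from the standard) of how a deployment layer could refuse any modality the application has not explicitly enabled:

```python
# Hypothetical request gate: only the modalities an application actually
# needs are allowed through to the underlying multi-modal model.
ALLOWED_MODALITIES = {"text"}  # image/audio disabled to shrink the attack surface


def validate_request(payload: dict) -> dict:
    """Reject any input modality that this deployment has not explicitly enabled."""
    blocked = set(payload.keys()) - ALLOWED_MODALITIES
    if blocked:
        raise ValueError(f"Modalities not enabled for this deployment: {sorted(blocked)}")
    return payload


# A text-only request passes; an image payload would be refused.
validate_request({"text": "Summarise this transaction history."})
# validate_request({"text": "...", "image": b"..."})  # raises ValueError
```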
The document also enforces strict asset management. Developers and System Operators must maintain a comprehensive inventory of assets, including interdependencies and connectivity. This supports shadow AI discovery; IT leaders cannot secure models they do not know exist. The standard also requires the creation of specific disaster recovery plans tailored to AI attacks, ensuring that a “known good state” can be restored if a model is compromised.
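The standard does not prescribe an inventory format, but one possible shape for an entry – a sketch only, with hypothetical asset names and paths – might look like this:

```python
# Hypothetical inventory entry covering what the standard asks for:
# interdependencies, connectivity, and a recoverable known-good state.
ai_asset_inventory = [
    {
        "asset_id": "fraud-scoring-v3",
        "type": "fine-tuned model",
        "owner_role": "System Operator",
        "depends_on": ["open-source-base-model-v2", "feature-store"],
        "connects_to": ["payments-api", "case-management-system"],
        "known_good_checkpoint": "s3://models/fraud-scoring/v3.2/weights.safetensors",
        "recovery_plan": "runbooks/ai-incident-restore.md",
    },
]
```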
Supply chain security presents an immediate friction point for enterprises relying on third-party vendors or open-source repositories. The ETSI standard requires that if a System Operator chooses to use AI models or components that are not well documented, they must justify that decision and document the associated security risks.
In practice, procurement teams can no longer accept “black box” solutions. Developers are required to provide cryptographic hashes for model components to verify authenticity. Where training data is sourced publicly (a common practice for Large Language Models), Developers must document the source URL and acquisition timestamp. This audit trail is essential for post-incident investigations, particularly when attempting to establish whether a model was subjected to data poisoning during its training phase.
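As an illustration of that audit trail, the following standard-library Python sketch hashes a model artefact and records the source URL and acquisition timestamp; the file name, URL, and field names are hypothetical rather than mandated by the standard:

```python
import hashlib
from datetime import datetime, timezone


def hash_artifact(path: str) -> str:
    """Compute a SHA-256 digest of a model component for later verification."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def provenance_record(path: str, source_url: str) -> dict:
    """Bundle the audit trail: the artefact hash, where it came from, and when it was acquired."""
    return {
        "artifact": path,
        "sha256": hash_artifact(path),
        "source_url": source_url,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }


# Hypothetical usage:
# record = provenance_record("weights.safetensors", "https://example.org/models/base-v2")
```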
If an enterprise offers an API to external customers, it must apply controls designed to mitigate AI-focused attacks, such as rate limiting to prevent adversaries from reverse-engineering the model or overwhelming defences to inject poisoned data.
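A minimal sketch of that kind of control is shown below as a simple in-memory token bucket per API key; a real deployment would typically rely on the API gateway’s own rate limiting, and the rates here are illustrative assumptions:

```python
import time
from collections import defaultdict

# Hypothetical per-client token bucket: slows bulk querying used for model
# extraction, or for flooding a feedback loop with poisoned data.
RATE = 5.0    # tokens replenished per second
BURST = 20.0  # maximum bucket size

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})


def allow_request(api_key: str) -> bool:
    """Return True if this client still has request budget, False if it should be throttled."""
    bucket = _buckets[api_key]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False
```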
The lifecycle approach extends into the maintenance phase, where the standard treats major updates – such as retraining on new data – as the deployment of a new version. Under the ETSI AI standard, this triggers a requirement for renewed security testing and evaluation.
Continuous monitoring is also formalised. System Operators must analyse logs not only for uptime, but to detect “data drift” or gradual changes in behaviour that could indicate a security breach. This moves AI monitoring from a performance metric to a security discipline.
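As a sketch of what such a check might look like in practice, the example below uses SciPy’s two-sample Kolmogorov–Smirnov test to flag a shift between a reference window of one model input feature and recent traffic; the threshold is an assumption, not a value taken from the standard:

```python
from scipy.stats import ks_2samp


def drift_alert(reference: list[float], recent: list[float], p_threshold: float = 0.01) -> bool:
    """Flag a statistically significant shift in an input feature's distribution.

    A persistent shift may be benign drift, or the first sign of poisoning or abuse,
    so alerts should be routed to security review as well as to the ML team.
    """
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold
```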
The standard also addresses the “End of Life” phase. When a model is decommissioned or transferred, organisations must involve Data Custodians to ensure the secure disposal of data and configuration details. This provision prevents the leakage of sensitive intellectual property or training data through discarded hardware or forgotten cloud instances.
Executive oversight and governance
Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programmes. The standard mandates that training be tailored to specific roles, ensuring that developers understand secure coding for AI while general staff remain aware of threats like social engineering via AI outputs.
“ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems,” said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence.
“At a time when AI is increasingly being integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration, and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.”
Implementing the baselines in ETSI’s AI security standard provides a structure for safer innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate the risks associated with AI adoption while establishing a defensible position for future regulatory audits.
An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues such as deepfakes and disinformation.
See also: Allister Frost: Tackling workforce anxiety for AI integration success

