EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid rising regulations such as the EU AI Act.
CERTAIN, short for "Certification for Ethical and Regulatory Transparency in Artificial Intelligence", will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.
The project is led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, including the St. Pölten University of Applied Sciences (UAS) in Austria. With its official launch in January 2025, CERTAIN could serve as a blueprint for global AI governance.
Driving ethical AI practices in Europe
According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS' Institute of IT Security Research and project manager for CERTAIN, the goal is to address key regulatory and ethical challenges.
"In CERTAIN, we want to develop tools that make AI systems transparent and verifiable in accordance with the requirements of the EU's AI Act. Our goal is to develop practically feasible solutions that help companies to efficiently fulfil regulatory requirements and sustainably strengthen confidence in AI technologies," emphasised Neumaier.
To achieve this, CERTAIN aims to create user-friendly tools and guidelines that simplify even the most complex AI regulations, helping organisations in both the public and private sectors navigate and implement these rules effectively. The overall intent is to provide a bridge between regulation and innovation, empowering businesses to leverage AI responsibly while fostering public trust.
Harmonising standards and improving sustainability
One of CERTAIN's main objectives is to establish consistent standards for data sharing and AI development across Europe. By setting industry-wide norms for interoperability, the project seeks to improve collaboration and efficiency in the use of AI-driven technologies.
The effort to harmonise data practices isn't limited to compliance; it also aims to unlock new opportunities for innovation. CERTAIN's solutions will create open and trustworthy European data spaces, essential components for driving sustainable economic growth.
In line with the EU's Green Deal, CERTAIN places a strong focus on sustainability. AI technologies, while transformative, come with significant environmental challenges, such as high energy consumption and resource-intensive data processing.
CERTAIN will address these issues by promoting energy-efficient AI systems and advocating for eco-friendly methods of data management. This dual approach not only aligns with EU sustainability goals but also ensures that AI development is carried out with the health of the planet in mind.
A collaborative framework to unlock AI innovation
A novel aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS is actively engaging with researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards.
This practice-oriented exchange extends beyond product development. CERTAIN also serves as an authority for informing stakeholders about legal, ethical, and technical issues related to AI and certification. By maintaining open channels of communication, CERTAIN ensures that its outcomes are not only practical but also widely adopted.
CERTAIN is part of the EU's Horizon Europe programme, specifically under Cluster 4: Digital, Industry, and Space.
The project's multidisciplinary and international consortium includes leading academic institutions, industrial giants, and research organisations, making it a powerful collective effort to shape the future of AI in Europe.
In January 2025, representatives from all 20 consortium members met in Osny, France, to kick off their collaborative mission. The two-day meeting set the tone for the project's ambitious agenda, with partners devising strategies for tackling the regulatory, technical, and ethical hurdles of AI.
Ensuring compliance with ethical AI regulations in Europe
As the EU's AI Act edges closer to implementation, guidelines and tools like those developed under CERTAIN will be pivotal.
The Act will impose strict requirements on AI systems, particularly those deemed "high-risk," such as applications in healthcare, transportation, and law enforcement.
While these regulations aim to ensure safety and accountability, they also pose challenges for organisations seeking to comply.
CERTAIN seeks to alleviate these challenges by providing actionable solutions that align with Europe's legal framework while encouraging innovation. By doing so, the project will play a crucial role in positioning Europe as a global leader in ethical AI development.
See also: Endor Labs: AI transparency vs 'open-washing'

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.