This article originally appeared on the Forrester Blog
According to Forrester data, about one in three global organizations identify risk and governance as barriers to generative AI adoption, and those concerns are set to grow now that the EU AI Act is becoming a reality.
With little room left for further changes to the text and the legislative process drawing to a close, we expect the EU to officially pass the Act in the next few months. The legislation is imperfect, but its impact can't be overstated: the EU AI Act will be the first binding legislation on AI.
What Should You Know About It?
Here are my main takeaways on the AI Act:
- The Act is designed to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values, regardless of where they come from. This means that, no matter where your headquarters is located, if your organization is part of an “AI value chain” that uses AI in products and/or services touching the EU market, you must comply with the rules.
- The regulation takes a “risk-based” approach: the rules govern the risks entailed in business use cases rather than the underlying technology itself, and the higher the risk, the stricter the rules. For high-risk systems, there are four critical areas: transparency, IT security, data governance, and risk management. This risk-based approach also extends to high-impact, general-purpose AI models. A minimal sketch of the tiering appears after this list.
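To make the tiering concrete, here is a minimal Python sketch. The four tier names (unacceptable, high, limited, minimal) reflect the risk categories commonly described in the Act; the example use cases and their assignments are hypothetical illustrations, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the Act's risk-based approach: the higher the tier, the stricter the rules."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: transparency, IT security, data governance, risk management"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical use-case-to-tier mapping for illustration only; real
# classification must follow the Act's text and annexes.
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```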
The provisions of the Act will not all come into force at the same time, so organizations must look closely at the enforcement timeline. Generally speaking, the Act will enter into force on the 20th day after publication in the Official Journal of the EU and will become applicable 24 months after entry into force.
But some provisions could be enforced earlier: prohibitions on unacceptable-risk AI systems would apply just six months after entry into force, while other provisions, such as obligations for high-risk AI systems, may have a 36-month window after publication before enforcement. The sketch below shows how these windows translate into concrete dates.
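Here is a small Python sketch of that date arithmetic. The publication date is a placeholder assumption; substitute the actual date of publication in the Official Journal once it is known.

```python
from calendar import monthrange
from datetime import date, timedelta

# Placeholder publication date in the Official Journal of the EU;
# substitute the real date once the Act is published.
publication = date(2024, 7, 12)

# The Act enters into force on the 20th day after publication.
entry_into_force = publication + timedelta(days=20)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day to the month's length."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, monthrange(year, month)[1]))

print("Entry into force:      ", entry_into_force)
print("Prohibitions apply:    ", add_months(entry_into_force, 6))   # unacceptable-risk bans
print("Generally applicable:  ", add_months(entry_into_force, 24))  # most provisions
print("High-risk obligations: ", add_months(publication, 36))       # longer window after publication
```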
What Should You Expect Next?
The Act is a big milestone: it addresses a global challenge in a fast-evolving technological domain that is key to the future of our societies and economies. But there is much more that organizations must prepare for:
- Technical details: With the collaboration of the EU, member states will lead the creation of harmonizing legislation, codes of conduct, and standards, which will contain a great deal of technical detail and have an outsized impact on AI compliance overall. These will also include sector-specific rules, especially for higher-risk sectors. As some member states might prioritize certain aspects of the law over others, we can expect some fragmentation to emerge over time.
- Enforcement: The Act also mandates the creation of a new network of agencies in charge of enforcement. The plan is for this network to mirror what we currently have for the enforcement of privacy legislation: independent authorities in each country and a central body to provide guidance, direction, and support. Despite the strong desire for homogeneity, it’s fair to expect differences in enforcement styles and priorities.
- The AI legislation framework will come to life: The AI Act is the tip of the iceberg when it comes to the EU governing the risks of AI. In the near future, the updated directives on product safety will kick in, and new liability requirements will emerge that will significantly affect the way organizations govern AI and their liability for it. The enforcement of privacy requirements in light of AI will become stricter, too, and IP laws and consumer protection requirements will continue to evolve to better address AI-related use cases.
What Should You Do Now?
Before diving into the details, organizations must compile an inventory of their AI systems. This is a necessary step to get the compliance process started.

Categorizing AI systems and the use cases they support in line with the Act’s risk-based approach is fundamental. That means organizations must start designing their own processes to build, execute, and optimize an approach for classifying AI systems and assessing the risks of the use cases at stake; a minimal sketch of such an inventory follows below. There are resources available to support companies in these endeavors, including in the Act itself, but no official tools or approaches yet. Expect a lot of trial and error before organizations get comfortable with an approach that they believe delivers on the requirements of the Act.
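As a starting point for such an inventory, here is a minimal Python sketch. The schema, field names, and example entries are hypothetical illustrations, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One record in an AI system inventory (illustrative schema, not prescribed by the Act)."""
    name: str
    owner: str                       # accountable team or business unit
    use_cases: list[str] = field(default_factory=list)
    touches_eu_market: bool = False  # in scope for the Act if True
    risk_tier: str = "unclassified"  # e.g., unacceptable / high / limited / minimal
    notes: str = ""

inventory = [
    AISystem("resume-screener", "HR", ["candidate ranking"], True, "high",
             "Needs transparency, data governance, and risk management review"),
    AISystem("support-chatbot", "Customer Care", ["FAQ answers"], True, "limited",
             "Disclose to users that they are interacting with AI"),
]

# Under a risk-based approach, surface the higher-risk systems first.
for system in sorted(inventory, key=lambda s: s.risk_tier != "high"):
    print(f"{system.name} [{system.risk_tier}] owned by {system.owner}")
```

Even a simple schema like this forces the two questions the Act turns on: is the system in scope for the EU market, and which risk tier do its use cases fall into?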
Enza Iannopollo is a Principal Analyst at Forrester.