A Legal Dialogue on the AI Trilogue

admin

The European Union has taken a historic step forward with political agreement on the Artificial Intelligence Act (AI Act or Act), heralding a new era of digital governance.

This landmark legislation is poised to establish the most comprehensive AI regulatory framework to date, with profound implications for Artificial Intelligence (AI) development and deployment within the EU and beyond. We await sight of the final approved text of the AI Act, but the political agreement reached on 8 December 2023 represents a very significant step forward. The following summary is based on our current understanding of the AI Act and is subject to change pending the publication of the final version.

Prohibited AI Systems

The following AI practices will be prohibited outright, with just six months for companies to ensure compliance:

- biometric categorisation systems that use sensitive characteristics (such as political, religious or philosophical beliefs, sexual orientation, or race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and in educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent free will; and
- AI systems that exploit the vulnerabilities of people due to their age, disability, or social or economic situation.

Real-time remote biometric identification in publicly accessible spaces by law enforcement will be permitted only in narrowly defined circumstances, subject to safeguards.

High-Risk AI Systems

The Act identifies two types of high-risk AI system. The first type, identified in Annex II, is where the AI system is a safety component of a product which is subject to specified EU product safety legislation.

The second type comprises the use cases listed in Annex III. These include AI systems used in areas such as:

- biometrics;
- critical infrastructure;
- education and vocational training;
- employment and worker management;
- access to essential private and public services;
- law enforcement;
- migration, asylum and border control; and
- the administration of justice and democratic processes.

High-Risk AI Systems Provider Obligations

High-risk AI systems providers will be subject to several key requirements, including:

- establishing a risk management system across the system's lifecycle;
- data governance measures to ensure that training, validation and testing data are of appropriate quality;
- preparing detailed technical documentation and keeping records (logs);
- providing transparency and instructions for use to downstream users;
- designing systems to allow effective human oversight;
- ensuring appropriate levels of accuracy, robustness and cybersecurity; and
- undergoing conformity assessment and registering the system in the EU database.

High-Risk Systems User Obligations

Under the AI Act, users of high-risk AI systems also have obligations. While the Act primarily targets providers, users are subject to certain rules when utilising high-risk AI applications. Many organisations may erroneously believe that, because they merely use an AI product as a subscriber, the AI Act will not apply to them. This is not the case. Users of high-risk AI systems will be obliged to:

- use the system in accordance with the provider's instructions for use;
- assign human oversight to people with the necessary competence, training and authority;
- monitor the system's operation and inform the provider or distributor of any serious incidents or risks;
- keep the logs automatically generated by the system; and
- in certain cases (for example, public bodies and certain private deployers), carry out a fundamental rights impact assessment before deployment.

Foundation Models and General Purpose AI (GPAI)

GPAI and foundation models must adhere to specific and rigorous standards reflecting their wide-ranging applications and possible effects. This includes comprehensive transparency obligations, the requirement that models posing systemic risks be evaluated and those risks mitigated, and the duty to clearly inform users when they are engaging with generative AI systems. This represents a significant retreat from the broader regime initially proposed by the European Parliament in June 2023. Foundation models will instead be regulated based on compute power: echoing the threshold approach of President Biden's Executive Order, the stricter obligations will apply to models whose training required more than 10^25 floating-point operations (FLOPs) – currently only the very largest of the large language models.
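To make the compute threshold concrete, the sketch below estimates training compute using the widely cited "6 × parameters × training tokens" approximation and checks it against the 10^25 FLOP figure. Both the heuristic and the example model sizes are illustrative assumptions on our part, not anything prescribed by the Act:

```python
# Rough check against the AI Act's systemic-risk compute threshold (10^25 FLOPs).
# The 6*N*D estimate and the example figures below are illustrative assumptions,
# not part of the Act itself.

AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate training compute via the common 6 * N * D rule of thumb."""
    return 6 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(n_parameters, n_tokens) >= AI_ACT_THRESHOLD_FLOPS

# Hypothetical model sizes:
print(presumed_systemic_risk(7e9, 2e12))     # 7B params, 2T tokens -> ~8.4e22 FLOPs, below
print(presumed_systemic_risk(1.8e12, 13e12)) # frontier-scale -> ~1.4e26 FLOPs, above
```

On these assumptions, only training runs well beyond today's mid-sized models cross the line, which is consistent with the Act's intent to capture just the largest systems.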

Penalties and Enforcement: Upholding the AI Act

The AI Act introduces a stringent penalty regime for non-compliance, with fines of up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI violations. Lesser, yet substantial, fines apply to other violations – up to €15 million or 3% for breaches of other obligations, and up to €7.5 million or 1.5% for supplying incorrect information – with more proportionate caps in place to protect small and medium-sized enterprises (SMEs) and start-ups.
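As a rough illustration of how the "whichever is higher" caps operate, the sketch below computes maximum exposure for a given global turnover. The tier figures reflect the December 2023 political agreement and may change in the final text:

```python
# Illustrative calculator for the AI Act's penalty caps.
# Tiers reflect the December 2023 political agreement; the final text may differ.

FINE_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),   # EUR 35M or 7% of global turnover
    "other_obligation":      (15_000_000, 0.03),   # EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.015),   # EUR 7.5M or 1.5%
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed cap or the turnover share."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A company with EUR 2bn global turnover facing a prohibited-practice violation:
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0 (7% exceeds EUR 35M)
```

For smaller companies the fixed cap binds instead, which is why the turnover-based percentage matters most to the largest players.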

Business Impacts and Strategic Shifts

Businesses that are heavily invested in prohibited technologies, such as biometric categorisation and emotion recognition, may need to consider major strategic shifts. Additionally, enhanced transparency requirements might challenge the protection of intellectual property, necessitating a balance between disclosure and maintaining trade secrets.

Companies may also need to invest in higher-quality data and advanced bias management tools, potentially increasing operational costs but enhancing AI systems’ fairness and quality.

The documentation and record-keeping requirements will impose a significant administrative burden, potentially affecting the time to market for new AI products.

Integrating human oversight into high-risk AI systems will require system design and deployment changes, along with potential staff training.

The substantial fines for non-compliance represent a significant financial risk.

Timelines

Implementation periods will commence when the final wording of the text is approved by the EU, which is expected to happen in early 2024. The timelines currently suggested are:

- six months for compliance with the prohibited AI systems provisions;
- 12 months for GPAI and foundation models;
- 24 months for high-risk systems based on Annex III; and
- 48 months for high-risk systems based on Annex II.
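The suggested periods amount to simple date arithmetic. In the sketch below, the entry-into-force date is a purely hypothetical placeholder, since the actual date depends on final approval and publication:

```python
# Compliance deadlines computed from an assumed entry-into-force date.
# The date is a placeholder; the timelines are those currently suggested.

from datetime import date

ENTRY_INTO_FORCE = date(2024, 6, 1)  # hypothetical, for illustration only

TIMELINES_MONTHS = {
    "Prohibited AI systems": 6,
    "GPAI and foundation models": 12,
    "High-risk systems (Annex III)": 24,
    "High-risk systems (Annex II)": 48,
}

def add_months(start: date, months: int) -> date:
    """Add whole calendar months to a date (day-of-month 1 avoids clamping issues)."""
    month_index = start.month - 1 + months
    return date(start.year + month_index // 12, month_index % 12 + 1, start.day)

for obligation, months in TIMELINES_MONTHS.items():
    print(f"{obligation}: {add_months(ENTRY_INTO_FORCE, months)}")
```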

Conclusion

The AI Act sets a new global standard for the ethical development and use of AI technologies. With its comprehensive scope, explicit prohibitions, and strong enforcement mechanisms, the Act not only reshapes the European AI landscape but also signals a shift in the global dialogue on AI governance. As companies prepare for the changes the Act necessitates, and as the EU moves from political agreement to a final text expected in early 2024, the AI Act promises to usher in a future where AI is developed and used with the highest regard for fundamental rights.
