ARTICLE
9 August 2024

The EU Artificial Intelligence Act: What Businesses Should Know

FRKelly

Contributor


The EU's first AI Regulation (Regulation (EU) 2024/1689 on artificial intelligence) entered into force on 1 August 2024. The Act aims to strike a balance between reducing AI-related risks and promoting the use of AI, whilst ensuring the EU remains a global leader in AI innovation and investment. Businesses have three years to comply fully with the legislation, with most key obligations taking effect within the next 24 months.

What it does

The Act focuses heavily on protecting people's health, safety and fundamental rights from the risks associated with AI. It categorises AI systems into four risk categories. This risk-based approach reflects the EU's objective of remaining competitive while regulating AI proportionately: it is designed to ensure that AI is applied safely and that the obligations imposed by the legislation do not deter its safe application.

Who is affected?

The Act applies to creators, distributors and users of AI systems who deploy AI in a professional capacity in the EU, as well as to third-country providers of AI whose AI outputs affect the EU.

What Risk Categories are Involved?

Unacceptable Risk

AI systems falling within the unacceptable-risk category are prohibited. The category includes systems that could pose a threat to the safety, livelihoods or fundamental rights of citizens. Examples given in the Act include:

  • Public or private social scoring systems.
  • AI that manipulates users' decision-making.
  • Predicting criminal behaviour based solely on profiling.
  • Untargeted facial recognition databases.
  • Inferring emotions in workplaces or educational institutions, except for medical or safety reasons.

High Risk

The majority of the Act focuses on high-risk AI systems, which face strict regulations. These include AI used in:

  • Critical infrastructure (e.g. management and operation of critical digital infrastructure).
  • Non-prohibited biometrics (e.g. emotion recognition systems).
  • Education and vocational training (e.g. determining access to training).
  • Employment (e.g. worker management and access to self-employment).
  • Essential private and public services (e.g. assessing eligibility for benefits and services).
  • Law enforcement (e.g. profiling during the detection, investigation or prosecution of criminal offences).
  • Border control (e.g. polygraphs).
  • Health or life insurance (e.g. risk assessments and pricing).
  • Administration of justice and democratic processes (e.g. systems used in researching and interpreting facts).

Limited Risk

This category is smaller than the other three and its obligations are lighter, focusing on transparency. Developers and providers must ensure that end-users are aware that they are interacting with AI. For example, users must be informed that they are engaging with a chatbot or viewing a deep-fake.

Minimal Risk

Systems which do not fall under any of the other three categories are unregulated. This category includes the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters.
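For readers who prefer to see the scheme in code, the four tiers can be summarised as a simple mapping from risk category to regulatory treatment. The Python sketch below is purely illustrative and forms no part of the Act; the category names and one-line treatments paraphrase the descriptions above.

    # Illustrative summary of the Act's four risk tiers (not legal definitions).
    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "permitted, subject to strict obligations (see next section)"
        LIMITED = "permitted, subject to transparency duties"
        MINIMAL = "unregulated under the Act"

    def treatment(category: RiskCategory) -> str:
        # Return the high-level regulatory treatment for a given tier.
        return category.value

    # Example: a chatbot need only disclose that it is AI.
    print(treatment(RiskCategory.LIMITED))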

Obligations of High-Risk AI Providers

High-risk AI is subject to strict regulations before it may be put on the market, including the following obligations:

  • Implement a detailed risk management system and testing throughout the AI system's lifecycle;
  • Ensure high-quality datasets and sound data governance so that the data used by the AI is accurate and reliable;
  • Maintain detailed technical documentation and record keeping to facilitate audits and traceability;
  • Ensure appropriate human oversight to minimise risk;
  • Provide instructions for use and compliance information to downstream deployers;
  • Ensure accuracy, robustness and cybersecurity.

Enforcement and Penalties

The European AI Office was established by the European Commission to oversee the AI Act's enforcement at EU level. At national level, EU Member States must designate their own national competent authorities to enforce the rules in their countries by August 2025. Companies found not to comply with the Act could face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
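To illustrate the penalty arithmetic: because the cap is the higher of the two figures, the 7% cap overtakes the fixed €35 million cap once global annual turnover exceeds €500 million. The sketch below is illustrative only; the turnover figure is hypothetical.

    # Illustrative arithmetic: the maximum fine is the higher of
    # EUR 35 million or 7% of global annual turnover.
    def max_fine_eur(global_annual_turnover_eur: float) -> float:
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # Hypothetical undertaking with EUR 2 billion global annual turnover:
    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 (7% cap applies)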

Conclusion

This legislation will be welcomed by a wide variety of businesses seeking to capture the benefits of AI. The balance the Act strikes between risk mitigation and promotion of AI systems is designed to give developers and users legal certainty while encouraging market uptake of AI. Companies intending to use AI in the EU should familiarise themselves with the Act, understand the risk categories, and comply with all applicable obligations to avoid hefty penalties and leverage AI technologies successfully.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
