EU Artificial Intelligence Regulations Take Effect Next Month

Taft Stettinius & Hollister

On May 21, 2024, European Union Member States voted to endorse the AI Act, which imposes strict transparency requirements on artificial intelligence (AI) systems according to the severity of the risk they pose. The AI Act is the world's first comprehensive regulation of providers of AI systems.

The AI Act takes a risk-based approach to regulation. AI systems are classified into four categories: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. The higher the risk designation, the more restrictive the regulation. For example, "unacceptable" uses, such as using AI to assess the risk that an individual will commit a criminal offense (think Minority Report), are strictly prohibited, while minimal-risk uses, such as an email provider's spam filter, are unregulated.

The summary below outlines how each risk level is categorized and the regulatory requirements for providers of such AI systems.

Unacceptable Risk
  • Use examples: social scoring systems; "manipulative" AI systems or programs; compilation of facial recognition databases; inferring emotions in workplaces or educational institutions; assessing the risk that an individual will commit a criminal offense; exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior.
  • Regulatory requirements: strictly prohibited.
  • AI Act reference: Title II, Article 5.

High Risk
  • Use examples: uses that impact the health, safety, or fundamental rights of natural persons; critical infrastructure; education; employment; migration; democracy and elections; the rule of law; the environment.
  • Regulatory requirements: establish a risk management system; conduct data governance of system inputs and outputs; design the system for record-keeping; provide instructions for use; implement human oversight; design the system to achieve appropriate levels of accuracy, robustness, and security; establish a quality management system to ensure compliance; register the AI system in the EU database established under the AI Act.
  • AI Act reference: Title III, Articles 6 and 8-15.

Limited Risk
  • Use examples: chatbots; shallow-fake and deep-fake generation.
  • Regulatory requirements: ensure end users are on notice that they are interacting with AI.
  • AI Act reference: Recitals 53 and 134-137.

Minimal Risk
  • Use examples: AI-enabled games; spam filters.
  • Regulatory requirements: none (unregulated).
  • AI Act reference: N/A.


General-purpose AI (GPAI) models and systems could fall into any of the four risk categories. GPAI systems that are used as high-risk AI systems, or that are integrated into them, will require providers to take additional steps. For example, providers of GPAI models must:

  • Document training, testing, and evaluation results.
  • Provide downstream providers with the information and documentation they need to understand the capabilities and limitations of integrating the GPAI model into their own AI systems.
  • Maintain an internal policy to ensure compliance with the EU Copyright Directive.
  • Publish a detailed summary of the content used for training.

In addition, providers of GPAI models with systemic risk must track, document, and report incidents and possible corrective measures to the EU AI Office and relevant national competent authorities without "undue delay." Such incidents may include instances where the AI system generates discriminatory results or inadvertently produces manipulative content.

As a result of the AI Act, AI providers looking to offer services in the EU will need to prepare for and satisfy bias testing to identify algorithmic discrimination in their systems. For providers focused on offering products and services in the United States, the requirements set forth in the AI Act offer a strong preview of what can be expected from future federal or state legislation. For example, earlier this month the Colorado General Assembly passed SB24-205, which sets forth consumer protections for AI and follows a similar risk-based approach. The bill was signed into law on May 17, 2024, and takes effect Feb. 1, 2026.

In the coming days, the AI Act will be published in the Official Journal of the European Union and will enter into force 20 days after publication. Although implementation will largely proceed in phases, many provisions will take effect next month.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
