A World First
The European Union's Artificial Intelligence Act (AI Act) is considered to be the world's first comprehensive horizontal legal framework for AI. It provides for EU-wide rules on data quality, transparency, human oversight, and accountability. With challenging requirements, significant extraterritorial effects, and fines of up to 35 million euros or 7% of global annual revenue (whichever is higher), the AI Act will have a profound impact on a significant number of companies conducting business in the European Union.
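As a purely illustrative aid (a hypothetical calculation, not legal advice; the function name is invented), the "whichever is higher" mechanism for the most serious infringements can be sketched as follows:

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    # Theoretical ceiling for the most serious AI Act infringements:
    # the higher of EUR 35 million or 7% of global annual revenue.
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Example: a company with EUR 1 billion in global annual revenue faces a
# ceiling of EUR 70 million, since 7% exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0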
The Time to Prepare Is Now
The AI Act was published in the Official Journal of the European Union on July 12, 2024, as "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence." While the AI Act will generally apply starting on August 2, 2026, the exact milestones are nuanced and complex, with some provisions already applying since February 2, 2025. Several categories of affected actors may need to significantly redesign their products and services, a process that should be initiated as soon as possible. Non-AI companies face similar time constraints, as they will need to understand the technology and establish their own risk thresholds to navigate compliance effectively.
1. Scope and Approach of the AI Act
Material Scope – What Is AI?
AI Systems. The definition of "AI system" in the AI Act is inspired by the OECD definition, which is widely accepted. It focuses on two key characteristics of AI systems: (1) they operate with varying levels of autonomy and (2) they infer from the input they receive how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
Article 3(1) of the AI Act:
"AI system" means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Recital 12 of the AI Act provides additional background regarding the intentions of the legislators with regard to the definition of AI systems:
[This] definition should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling. The term "machine-based" refers to the fact that AI systems run on machines.
The Commission has also adopted guidelines on the definition of AI systems.
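To make the Recital 12 distinction concrete, compare a system whose rules are defined solely by humans with one that derives its decision logic from data. The sketch below is illustrative only; the scenario, figures (in thousands of euros) and threshold are invented:

# Rules defined solely by natural persons to automatically execute
# operations: per Recital 12, not an "AI system".
def rule_based_credit_check(income: float, debts: float) -> bool:
    return income - debts > 20  # fixed, human-authored threshold

# A system that derives its decision rule from data: closer to the
# Article 3(1) notion of inferring how to generate outputs from inputs.
from sklearn.linear_model import LogisticRegression

X = [[50, 10], [30, 25], [80, 5], [20, 18]]  # toy (income, debts) pairs
y = [1, 0, 1, 0]                             # toy repayment outcomes
model = LogisticRegression().fit(X, y)       # the decision rule is learned
prediction = model.predict([[40, 12]])       # inferred, not hand-coded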
General-Purpose AI Models/Generative AI. During the negotiations, a chapter on general-purpose AI models was added to the AI Act. The legislation now differentiates between "general-purpose AI models" (GPAI Models) and a subcategory, "general-purpose AI models with systemic risk", which captures general-purpose AI models with high-impact capabilities.
- AI models are a component of an AI system and are the engines that drive the functionality of AI systems. AI models require the addition of further components, such as a user interface, to become AI systems.
- While the AI Act generally does not subject AI models to legal obligations, it defines "GPAI model" as an AI model that (1) displays significant generality; (2) is capable of competently performing a wide range of tasks; and (3) can be integrated into a variety of downstream systems or applications.
- AI models used for research, development, or prototyping activities before market release are not covered under the AI Act.
Personal Scope – Who Is Subject to the AI Act?
The AI Act identifies and defines the following key players, all of which can be natural or legal persons.
Providers develop or have developed AI systems or GPAI Models with a view to placing them on the market or putting them into service under their own name or trademark, whether for payment or free of charge. The terms "placing on the market" and "putting into service" refer to specific concepts defined in the AI Act:
- Placing on the European Union's market. A company or an individual places an AI system on the market when it first makes it available in the European Union.
- Putting into service in the European Union. A provider puts an AI system into service by supplying such a system for first use directly to a deployer or for its own use within the European Union for the system's intended purpose.
Importers are located or established in the European Union and place on the market AI systems bearing the name or trademark of a natural or legal person established outside the European Union.
Distributors are players in the supply chain, other than the provider or the importer, that make an AI system available on the EU market.
Deployers use AI systems under their authority in the course of their professional activities. In practice, most companies are likely to exceed this very low threshold very quickly.
Territorial Scope – Where Does the AI Act Apply?
The AI Act has significant extraterritorial effects, as it applies to providers who place on the market or put into service AI systems in the EU, irrespective of where those providers are established or located. The AI Act also applies to providers and deployers established or located outside the EU in cases where the output of the system is used in the EU. It likewise applies to deployers who are established or located in the EU. For affected individuals, the AI Act only applies when they are in the EU. There is little clarity or precision regarding distributors.
AI Outside the Scope of the AI Act
The AI Act does not apply to AI specifically developed and put into service for the sole purpose of scientific research and development. Nor does it apply to any research, testing or development activity that occurs before an AI system is placed on the market or put into service; this exemption, however, does not cover testing in real-world conditions. In addition, the AI Act does not apply to systems released under free and open-source licenses, unless such systems qualify as high-risk, prohibited or generative AI. Finally, the AI Act is not applicable to AI systems used solely for military, defence or national security purposes, irrespective of the entity performing those activities.
What Is the EU Approach to AI Regulation?
The AI Act relies on a risk-based approach, which means that different requirements apply in accordance with the level of risk.
Unacceptable risk (see Chapter 3). Certain AI practices are considered to be a clear threat to fundamental rights and are prohibited. The respective list in the AI Act includes AI systems that manipulate human behaviour or exploit individuals' vulnerabilities (e.g., age or disability) with the objective or the effect of distorting their behaviour. Other examples of prohibited AI include certain biometric systems, such as emotion recognition systems in the workplace or real-time categorisation of individuals.
High risk (see Chapters 4 and 5). AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Examples include AI systems used in critical infrastructure, such as energy and transport, in medical devices, and in systems that determine access to educational institutions or jobs.
Limited risk (see Chapter 6). Providers must ensure that AI systems intended to interact directly with natural persons, such as chatbots, are designed and developed in such a way that individuals are informed that they are interacting with an AI system. Similarly, deployers of AI systems that generate or manipulate deepfakes must disclose that the content has been artificially generated or manipulated.
Minimal risk. There are no restrictions on minimal-risk AI systems, such as AI-enabled video games or spam filters. Companies may, however, commit to voluntary codes of conduct.
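Conceptually, the four tiers operate as a decision ladder: a system is screened first against the prohibited practices, then against the high-risk categories, then against the transparency-triggering use cases. The following heavily simplified sketch is illustrative only and not a compliance tool; the boolean inputs stand in for the Act's detailed criteria:

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"      # e.g., Article 5 practices
    HIGH = "strict requirements"     # e.g., Annex III use cases
    LIMITED = "transparency duties"  # e.g., chatbots, deepfakes
    MINIMAL = "no restrictions"      # e.g., spam filters

def classify(is_prohibited: bool, is_high_risk: bool,
             needs_transparency: bool) -> RiskTier:
    # Evaluate the tiers in order of severity; the first match governs.
    if is_prohibited:
        return RiskTier.UNACCEPTABLE
    if is_high_risk:
        return RiskTier.HIGH
    if needs_transparency:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-service chatbot is neither prohibited nor high-risk,
# but must disclose that the user is interacting with an AI system.
print(classify(False, False, True))  # RiskTier.LIMITED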
Relationship With the EU General Data Protection Regulation
EU laws on the protection of personal data, privacy and the confidentiality of communications continue to apply to the processing of personal data in connection with the AI Act. The AI Act affects neither the EU General Data Protection Regulation (GDPR) nor the ePrivacy Directive 2002/58/EC.
2. Critical Milestones on the Road to Full Applicability of the AI Act
Below, we set out the key dates for the various operators, especially providers and deployers, as well as the dates by which the Commission will have to prepare implementing acts, documentation and reports to help the operators ensure compliance with the AI Act.
Entry into force of the AI Act on August 1, 2024 (Article 113). This means that the AI Act became part of the EU legal order on that date. It does not mean that the provisions of the AI Act became applicable on that date.
By November 2, 2024, Member States had to identify the public authorities or bodies that supervise or enforce obligations under EU law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III of the AI Act (Article 77(2)). This has not yet been done in all EU Member States.
Chapters I and II of the AI Act apply from February 2, 2025 (Article 113(a)). These include the Act's general provisions (e.g., geographic scope, definitions) and its provisions on prohibited AI practices.
The general obligation under Article 4 of the AI Act to ensure a sufficient level of AI literacy of staff also applies from that date.
By May 2, 2025, codes of practice for the implementation of general-purpose AI models and related obligations must be ready (Article 56(9)). These codes should support providers in achieving compliance with their duties relating to general-purpose AI models.
From August 2, 2025, Chapter III, Section 4 (Notifying authorities and notified bodies), Chapter V (General-purpose AI models), Chapter VII (Governance), and Chapter XII (Penalties) will apply (except for Article 101, which deals with fines for providers of general-purpose AI models).
- Chapter III, Section 4 deals with notifying authorities and notified bodies, which are essential for the establishment of conformity assessment bodies.
- Chapter V contains the provisions related to general-purpose AI models introduced late in the legislative process; for example, the mandatory notification procedure for the provider (Article 52(1)), documentation requirements (Article 53), and the appointment of an authorised representative (Article 54). Article 55 contains additional responsibilities focusing on the evaluation and mitigation of systemic risk and cyber and infrastructure security.
- Chapter VII sets out the EU's AI-related governance structure, including the AI Office, the European Artificial Intelligence Board, the advisory forum and the scientific panel. On the Member State level, the competent authorities must be appointed by this date (Article 70(2)).
By the same date, the Commission must finalise its guidance to facilitate compliance with the reporting obligations in case of serious incidents (Article 73(7)).
By February 2, 2026, the Commission must issue implementing acts creating a template for high-risk AI providers' post-market monitoring plans, which will serve as the basis for the monitoring system established by Article 72.
Similarly, the Commission must, by this date, provide guidelines for the practical implementation of Article 6 concerning the classification of an AI system as high risk (Article 6(5)).
August 2, 2026 is the default date on which the provisions of the AI Act become applicable (Article 113).
The obligations regarding high-risk AI systems will apply from this date, including those related to risk and quality management systems, diligent data governance, technical documentation, recordkeeping, and transparency and clear user information obligations.
Chapter IV, which addresses operators of AI systems directly interacting with humans, generative AI systems, and emotion recognition or biometric categorisation systems, and introduces disclosure and information obligations, also applies from this date.
By this date, Member States must have implemented rules on penalties and other enforcement measures and notified the Commission about them (Article 99).
Member States must have established at least one AI regulatory sandbox, which must be operational at a national level (Article 57(1)).
August 2, 2027 is the ultimate deadline for compliance with the AI Act, both for AI systems covered by existing EU harmonisation legislation (Article 113(c)) and for providers of general-purpose AI models placed on the market before August 2, 2025, i.e., within 12 months of the Act's entry into force.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.