ARTICLE
19 August 2024

Decoding The EU AI Act: Compliance, Impact And Global Implications

IndusLaw
INDUSLAW is a multi-speciality Indian law firm, advising a wide range of international and domestic clients from Fortune 500 companies to start-ups, and government and regulatory bodies.

1. INTRODUCTION

As artificial intelligence ("AI") continues to evolve and permeate various sectors, the need for robust regulatory frameworks has become paramount. The European Union ("EU") has taken a pioneering step in this direction by enacting the European Union Artificial Intelligence Act ("EU AI Act" or "Act") in March 2024.1 The Act came into force on August 01, 2024.2 This landmark legislation establishes a comprehensive legal framework for the use of AI within the EU and primarily aims to promote a human-centric and trustworthy adoption of AI while safeguarding health, safety and fundamental rights.3

The EU AI Act's scope is broad, applying to inter alia all providers, importers, distributors and deployers of 'AI systems'4 that are marketed or used within the EU, regardless of whether those providers or developers are established in the EU or another country.5 This extraterritorial applicability mandates that companies based outside the EU, including those in India, must comply with the Act should they wish to operate within the EU market. The Act adopts a risk-based approach, categorising AI systems into unacceptable-risk, high-risk, limited-risk and minimal-risk categories. It specifies tailored obligations for each category to ensure ethical, transparent, and safe AI usage.

This article highlights the key compliance requirements under the EU AI Act for various AI systems and examines the significant impact of the Act on companies (including Indian entities providing AI systems in the EU). Additionally, the article discusses the potential global influence of the Act as a model for AI regulation and provides a way forward for businesses to align with this comprehensive regulatory framework.

2. KEY TAKEAWAYS FROM THE EU AI ACT

2.1 COMPLIANCES

Understanding the key compliance requirements under the EU AI Act is essential for companies to ensure that they meet the Act's stringent standards. This section outlines the obligations for the various risk categories of AI systems, including 'General-Purpose AI Models'6 ("GPAI").

a. Unacceptable-risk AI Systems7

Unacceptable-risk AI systems, including inter alia manipulative AI systems, social credit scoring systems, and emotion-recognition systems at workplaces and in educational institutions, are prohibited due to their potential detrimental impact and the risk of their leading to discriminatory practices. Consequently, no compliance requirements have been prescribed for these systems.

b. High-risk AI Systems8

High-risk AI systems identified in areas9 such as critical infrastructure, education and vocational training, employment, workers' management and access to self-employment, law enforcement, administration of justice and democratic processes, and migration, asylum and border control face stringent compliance requirements. The Act stipulates several requirements for different parties, and some of the more crucial ones are briefly outlined below:

i. Providers

'Providers' are natural or legal persons who develop an AI system or a GPAI, or those who have an AI system or GPAI developed with the intention to place it on the EU market or put it into service under their own name or trademark.10 They bear the primary responsibility of ensuring the AI system's compliance with the Act. A few material obligations include:

  • Technical Documentation: Preparing technical documentation that will be relied upon throughout the lifecycle of the AI system to demonstrate compliance with the Act, before it is placed in the market.11
  • Registration: Registering the AI system with the EU database to facilitate monitoring and compliance checks.12
  • Risk Management System: Establishing, maintaining and reviewing a risk management system throughout the lifecycle of the AI system for identifying known and reasonably foreseeable risks, evaluating emerging risks and adopting targeted measures to address these identified risks.13
  • Transparency and Human Oversight: Ensuring transparency to enable users to interpret the AI system's output, capabilities and limitations. Additionally, the design should facilitate human oversight to allow meaningful intervention when necessary.14
  • Quality Management System ("QMS"): Maintaining a QMS to ensure compliance with the Act,15 conducting conformity assessments16 to verify that the AI system meets the requirements before it can be placed on the EU market or put into service, and obtaining the Conformité Européenne ("CE") marking.
  • Data Governance: Conducting data governance to ensure the relevance and accuracy of the training, validation, and testing datasets.17 This would also include observance of compliances stipulated under the EU General Data Protection Regulation ("GDPR") for the purposes of processing personal data.

Some other key compliances outlined in the EU AI Act vis-à-vis providers include (a) enabling automatic recording of events (logs) over the lifetime of the AI system,18 (b) ensuring adherence to appropriate levels of accuracy, robustness and cybersecurity, along with post-deployment monitoring,19 and (c) reporting serious incidents to the relevant authorities, including taking corrective measures and cooperating with them.20

ii. Deployers21

'Deployers' are natural or legal persons, public authorities, agencies or any other bodies who use an AI system under their own authority,22 and they are required to adhere to certain critical obligations under the Act. Similar to providers, deployers must ensure human oversight of AI systems by assigning this responsibility to persons who have the necessary competence, training and authority. Deployers also have incident reporting obligations to the relevant authorities similar to those of providers. In addition, deployers who are employers are responsible for informing workers' representatives and affected workers that they will be subject to the use of a high-risk AI system where such a system is used at a workplace. Another significant obligation is to monitor the operation of the AI system based on the instructions for use provided by the provider.

iii. Importers23

An importer refers to a natural or legal person located or established in the EU who places on the market an AI system bearing the name or trademark of a natural or legal person established in a third country.24 A few material obligations of importers include:

  • Conformity Assessment: Ensuring that the provider has carried out the appropriate conformity assessment procedure and prepared the required technical documentation.
  • CE Marking and Documentation: Verifying that the AI system bears the required CE marking and is accompanied by the necessary documentation and instructions.
  • Cooperation with Authorities: Cooperating with national authorities in any actions taken by them to eliminate risks posed by AI systems that the importer has placed on the market.

c. Limited-risk AI Systems

Limited-risk AI systems pose moderate manipulation risks, primarily associated with a lack of transparency. Accordingly, similar to the position for high-risk AI systems, the Act stipulates transparency-related obligations for the providers and deployers of these AI systems. As part of these obligations, providers and deployers of such AI systems are required to ensure that users are informed when they are interacting with an AI model. Additionally, AI systems that either generate or manipulate image, audio or video content, such as deepfakes, are required to make the necessary disclosures informing final viewers that the content has been artificially generated or manipulated.25 This obligation, however, does not apply to specific use-cases authorised by law to detect, prevent or investigate a criminal offence.26

d. Minimal-risk AI Systems

Minimal-risk AI systems, which include spam filters, inventory management systems, and AI-enabled video games, are not subject to any mandatory obligations under the Act. However, these systems are encouraged to follow voluntary codes of conduct to promote ethical AI use and transparency.27

e. GPAI Models

GPAI models are models trained on vast data sets that can perform a wide range of tasks, excluding those used, before release on the EU market, for research, development and prototyping activities.28 Such models are inter alia required to adhere to transparency obligations, maintain technical documentation, implement policies to ensure compliance with EU laws on copyright and related rights, and publicly share a detailed summary of the content used for training the GPAI model.29 Providers of GPAI models are also required to ensure that users in downstream applications comply with the Act's requirements.30 It is also important to note that providers of GPAI models established in third countries must appoint an authorised representative established in the EU before making their GPAI available on the EU market.31


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
