Analysing the EU's Artificial Intelligence (AI) Act: First Worldwide Rules on AI

Ernst & Young
Contributor

On 21st May 2024, the EU Council approved the pioneering Artificial Intelligence Act, a groundbreaking law designed to harmonize AI regulations. This flagship legislation adopts a 'risk-based' approach, imposing stricter rules on AI systems that pose higher risks to society. As the first law of its kind globally, it has the potential to set an international standard for AI regulation.

The new law aims to promote the development and adoption of safe and trustworthy AI systems across the EU's single market by both private and public entities. Simultaneously, it seeks to ensure the protection of fundamental rights for EU citizens while encouraging investment and innovation in artificial intelligence within Europe.

Overview of the legislation

This Regulation primarily lays down provisions around:

  1. harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;
  2. prohibitions of certain AI practices;
  3. specific requirements for high-risk AI systems and obligations for operators of such systems;
  4. harmonised transparency rules for certain AI systems;
  5. harmonised rules for the placing on the market of general-purpose AI models;
  6. rules on market monitoring, market surveillance, governance and enforcement;
  7. measures to support innovation, with a particular focus on SMEs, including start-ups.

Scope of the regulation

This legislation has a very wide scope of applicability. The Act applies only to areas within the scope of EU law and provides exemptions, such as for systems used exclusively for military and defence purposes, and for research.

The legislation applies to a wide range of stakeholders, including (indicative list):

  1. providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
  2. deployers of AI systems that have their place of establishment or are located within the Union;
  3. importers and distributors of AI systems;
  4. product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  5. authorised representatives of providers which are not established in the Union.

Impact of the legislation on organisations deploying AI systems

The legislation broadly categorizes AI systems into different risk levels—unacceptable, high, limited, and minimal—imposing stringent requirements on higher-risk applications.

Organisations deploying AI systems, particularly those classified as high-risk, will be required to undertake several mandatory activities to ensure compliance. These include:

  • Risk Management: Conducting thorough risk assessments to identify and mitigate potential harms associated with the AI system.
  • Data Governance: Ensuring high-quality datasets for training, validation, and testing to minimize bias and ensure accuracy.
  • Technical Documentation: Maintaining detailed technical documentation that describes the AI system's design, purpose, and performance metrics.
  • Human Oversight: Implementing measures to ensure human intervention is possible, to prevent or correct unintended outcomes.
  • Monitoring and Reporting: Establishing continuous monitoring processes to track the AI system's performance and report significant incidents or malfunctions to relevant authorities.
  • Conformity Assessment: Performing or facilitating independent conformity assessments to verify compliance with the EU AI Act's requirements before the AI system is deployed.

Next steps

The new regulation will apply two years after its entry into force, with certain provisions taking effect earlier or later. This timeframe gives entities in this sector ample opportunity to establish governance mechanisms to meet the new legislative requirements.

A strong starting point for organisations would be to develop a compliance roadmap with clear timelines and milestones. This should be followed by conducting an immediate compliance audit, setting up a compliance task force, and establishing a risk management framework. By taking these steps, companies can not only ensure compliance with the EU AI Act but also build trust with customers, stakeholders, and regulators, positioning themselves as leaders in the responsible and ethical use of AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
