ARTICLE
1 August 2024

The EU's Artificial Intelligence (AI) Act: The First Of Its Kind

Marks & Clerk

At a glance

The EU has introduced new legislation on AI, the EU AI Act, which lays the foundation for the regulation of, and responsible development of, AI across all industries within the EU. The Act was published in the Official Journal of the EU on 12 July 2024 and enters into force on 1 August 2024. While it will be the first legislation of its kind to come into effect globally, Colorado is not far behind, having recently become the first US state to pass comprehensive legislation on the issue.

This article will look at what the EU AI Act says, how it categorises AI systems, what is prohibited under the Act and what is deemed high risk. While the Act will be relevant to many industries, this article will briefly consider some of the implications for Medtech specifically and will also touch on how the Act compares with Colorado's equivalent.

What the EU AI Act says

How does the Act categorise AI systems?

The Act classifies AI according to its risk:

  • Unacceptable risk: Unacceptable risk is prohibited (e.g. manipulative AI and social scoring systems);
  • High risk: Most of the Act addresses high risk AI systems, which are regulated;
  • Limited risk: A smaller section of the Act addresses limited risk AI systems, which will be subject to lighter transparency requirements (e.g. developers/deployers must ensure that end-users are aware they are interacting with AI); and
  • Minimal risk: Minimal risk is unregulated (this includes various AI applications such as video games and spam filters).

What systems are prohibited?

According to the Act (quoting the high-level summary at https://artificialintelligenceact.eu/high-level-summary/), the following types of AI systems are prohibited: those

  • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  • biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
  • social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
  • assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
  • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
  • 'real-time' remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
    • searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;
    • preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
    • identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime, etc.).

What systems are deemed high risk?

As mentioned above, the majority of the Act addresses high risk AI systems. According to the Act, high risk systems are those that both:

  • are used as a safety component of, or are themselves a product covered by, the EU laws listed in Annex I (see below); and
  • are required to undergo a third-party conformity assessment under those Annex I laws;

where Annex I includes medical devices, machinery, recreational watercraft, lifts, toy safety, equipment for potentially explosive atmospheres, cableway installations, personal protective equipment, radio equipment, pressure equipment, appliances burning gaseous fuels, civil aviation security, motor vehicles and their trailers, two- or three-wheel vehicles, agricultural and forestry vehicles, marine equipment, rail systems and aircraft.1

Other systems also considered high risk are listed under the Act's Annex III use cases (see below), except if:

  • the system performs a narrow procedural task;
  • it improves the result of a previously completed human activity;
  • it detects information related to decision-making patterns and is not intended to replace or influence the human assessment without human review; or
  • it only performs a preparatory task to an assessment relevant for the purpose of the Annex III use cases;

where the Annex III use cases relate to: non-banned biometrics; critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential public and private services; law enforcement; migration, asylum and border control management; and the administration of justice and democratic processes.

Finally, any AI systems that profile individuals, i.e. that involve automated processing of personal data to assess any aspect of a person's life (e.g. work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement), are also considered high risk.
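
By way of illustration only, the way these conditions combine can be sketched in code. This is a simplified model, not the legal test itself: all class and flag names below are invented for the example, and each boolean stands in for an assessment that is far more nuanced in practice.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        """Hypothetical, simplified flags standing in for the Act's legal tests."""
        annex_i_safety_component: bool  # safety component of / product under an Annex I law
        third_party_assessment: bool    # third-party conformity assessment required
        annex_iii_use_case: bool        # falls under an Annex III use case
        carve_out_applies: bool         # one of the four Annex III exceptions applies
        profiles_individuals: bool      # automated profiling of natural persons

    def is_high_risk(s: AISystem) -> bool:
        # Route 1: Annex I products that also require third-party conformity assessment.
        if s.annex_i_safety_component and s.third_party_assessment:
            return True
        # Route 2: Annex III use cases, unless one of the exceptions applies...
        if s.annex_iii_use_case and not s.carve_out_applies:
            return True
        # ...but systems that profile individuals are high risk regardless.
        return s.profiles_individuals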

Who is responsible for ensuring compliance?

The majority of obligations fall on providers (developers) of high risk AI systems: those that intend to market, or put into service, a high risk AI system or its output within the EU. Users (deployers) of high risk AI systems, being natural or legal persons that deploy an AI system in a professional capacity (i.e. not affected end users), will also have some obligations.

The Act also details the requirements for any providers of General Purpose AI (GPAI) models and systems, particularly where they present a systemic risk.

What are the requirements for providers of high risk AI systems?

Providers of high risk AI systems must:

  • Establish a risk management system throughout the lifecycle of the AI system;
  • Prepare technical documentation to demonstrate compliance and provide the relevant authority with information to be able to assess that compliance;
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and free of errors;
  • Design the system:
    • for record keeping, so that the system automatically records events that may be relevant for identifying substantial modifications and national-level risks throughout the lifecycle of the AI system (see the sketch after this list);
    • to allow deployers to implement human oversight; and
    • to achieve appropriate levels of accuracy, robustness and cybersecurity;
  • Provide instructions for use to downstream deployers to enable their compliance; and
  • Establish a suitable quality management system (QMS) to ensure compliance.
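
To make the record-keeping requirement above more concrete, the following is a minimal, purely illustrative sketch of automatic event logging. The logger name, file name, event names and fields are all invented for this example, and real conformity involves far more than an append-only log:

    import json
    import logging
    from datetime import datetime, timezone

    # Illustrative audit trail: one structured JSON record per lifecycle event.
    audit = logging.getLogger("ai_system.audit")
    audit.setLevel(logging.INFO)
    audit.addHandler(logging.FileHandler("audit_trail.jsonl"))

    def record_event(event_type: str, **details) -> None:
        """Append a timestamped, structured record of a potentially relevant event."""
        audit.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            **details,
        }))

    # e.g. record a model update that could amount to a 'substantial modification'
    record_event("model_updated", previous_version="1.2.0", new_version="1.3.0")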

How will the Act be implemented?

An AI Office has been established to oversee the implementation of the Act. The office has five units: regulation and compliance; safety; AI innovation and policy coordination; robotics and AI for societal good; and excellence in AI.

What are the relevant timelines?

After its entry into force, the following deadlines will apply:

  • 6 months for prohibited AI systems;
  • 12 months for GPAI;
  • 24 months for high risk AI systems falling under Annex III (as outlined above); and
  • 36 months for high risk AI systems falling under Annex I (as outlined above).
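
For illustration, those offsets translate into calendar dates roughly as follows. Note that the Act itself fixes the precise application dates (for example, 2 August 2027 for Annex I high risk systems); the sketch below only shows the month arithmetic from the entry-into-force date:

    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole months (day of month kept; no overflow handling)."""
        years, month_index = divmod(d.month - 1 + months, 12)
        return date(d.year + years, month_index + 1, d.day)

    ENTRY_INTO_FORCE = date(2024, 8, 1)

    for label, offset in [
        ("Prohibited AI systems", 6),
        ("GPAI", 12),
        ("High risk, Annex III", 24),
        ("High risk, Annex I", 36),
    ]:
        print(label, "->", add_months(ENTRY_INTO_FORCE, offset))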

What are the penalties for non-compliance?

The maximum penalty for non-compliance with the Act's rules on prohibited systems is an administrative fine of up to EUR 35 million or 7% of worldwide annual turnover (whichever is higher). Breaches of various other provisions are subject to a maximum fine of EUR 15 million or 3% of worldwide annual turnover (whichever is higher). For SMEs, the same figures apply, but the applicable penalty is the lower of the two amounts rather than the higher.
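
As a quick arithmetic illustration (a sketch only; actual penalties are determined case by case by the enforcing authorities), the "whichever is higher/lower" logic works out as follows:

    def max_fine_eur(turnover_eur: float, cap_eur: float, pct: float, sme: bool = False) -> float:
        """EUR cap vs. percentage of worldwide annual turnover.
        Non-SMEs face whichever is higher; SMEs whichever is lower."""
        pct_amount = pct * turnover_eur
        return min(cap_eur, pct_amount) if sme else max(cap_eur, pct_amount)

    # Prohibited-practice breach for a company with EUR 1bn worldwide turnover:
    print(max_fine_eur(1e9, 35e6, 0.07))            # 70,000,000 (7% exceeds EUR 35m)
    print(max_fine_eur(1e9, 35e6, 0.07, sme=True))  # 35,000,000 (the lower amount)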

So what does this mean for Medtech?

As indicated above, AI systems for use in medical devices fall under the high risk category. Therefore, for AI-enabled medical technologies that are already regulated under the Medical Device Regulation (MDR) or the In Vitro Diagnostic Medical Devices Regulation (IVDR), the deadline for compliance with the Act is 2 August 2027 (36 months from its entry into force).

As also indicated above, providers of high risk AI systems must establish a QMS to ensure compliance. Manufacturers of medical devices falling under the Act will therefore be required to build an integrated and consistent QMS that covers the AI aspects of the device in addition to the existing measures adopted under the MDR or IVDR. This is on top of the other requirements for providers of high risk AI systems outlined above.

To demonstrate conformity with both the Act and the MDR/IVDR, the Act confirms that only a single combined CE marking and a single declaration of conformity will be required, so that the elements common to both regimes can be handled in one process. However, it's no secret that many other EU regulations overlap with the Medtech space (e.g. the General Data Protection Regulation, the Data Act, the revised Product Liability Directive and the European Health Data Space Regulation (EHDS)), creating further challenges for companies and manufacturers navigating all of the requirements, particularly smaller companies with limited resources. As such, despite the deadline being some time away, manufacturers of such high risk AI systems are being urged to start their compliance journey as soon as possible.2

A brief look at the new AI Act in Colorado3

Outside of the EU, Colorado is the first US state to pass comprehensive legislation on AI. The Colorado AI Act was recently passed and is scheduled to take effect on 1 February 2026.

While both the EU Act and the Colorado AI Act apply a risk-based approach to regulating AI, and both focus on the regulation of high risk AI systems, their categorisation of these systems differs slightly. The EU Act involves a broader classification in this regard, including, for example, categories such as biometrics, the administration of justice and democratic processes, and law enforcement. The EU Act also additionally categorises and regulates prohibited systems, limited risk systems and certain GPAI systems.

With regard to the responsibility for compliance, while both acts impose obligations on developers and deployers of the systems, the EU Act puts more emphasis on its requirements for providers (rather than deployers).

The enforcement penalties also appear quite different, with the EU Act seemingly imposing far more significant monetary penalties for non-compliance. It will be interesting to see what approaches are taken in other territories.

Further information

Annabel Williams is a UK and European Patent Attorney at Marks & Clerk LLP specialising in physics and engineering, with an interest in Medtech. Annabel works closely with the distinguished AI team at Marks & Clerk. If you have any innovations in these (or any other) areas that you wish to discuss with regard to potential IP protection (patents, trade marks, designs, copyright) or related issues, please contact Annabel or your usual Marks & Clerk contact.

Sources of information:

Entire article: https://artificialintelligenceact.eu/high-level-summary/

Footnotes

1. https://artificialintelligenceact.eu/annex/1/

2. 'AI Act Is Officially Published: Implementation Challenges Ahead For Medtech' Medtech Insight, 15 July 2024

3. https://www.flastergreenberg.com/newsroom-articles-understanding-colorado-landmark-ai-legislation-impact-business.html#:~:text=Additionally%2C%20while%20the%20Colorado%20AI,differ%20between%20the%20two%20acts.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
