The EU AI Act Has Been Published! - Considerations For Using Artificial Intelligence

Claeys & Engels

The AI Act was published today in the Official Journal of the European Union. For companies using artificial intelligence in their business processes, the countdown has begun. On 1 August 2024, the AI Act will come into force, and most of the obligations for companies must be complied with within two years. This newsflash sets out the main features and action points for companies.

1) Advance of artificial intelligence in business processes

Artificial intelligence (AI) seems to have become omnipresent in recent times. For example, a software company recently announced that it could switch to a four-day working week thanks, among other things, to efficiency gains from AI. According to data from Eurostat, Belgium is among the frontrunners in the EU, with 14% of respondents indicating that they used AI within their company in 2023, well above the European average of 8%. This was also reflected in the Claeys & Engels HR Beacon, where 14% of respondents likewise indicated that they had already used AI.

2) Purpose of the AI Act

The aim of the AI Act is to create a uniform legal framework within Europe for the development, placing on the market, putting into service and use of AI systems. The AI Act aims to promote the uptake of human-centric, trustworthy AI and to ensure a high level of protection of health, safety and fundamental rights as set out in the Charter of Fundamental Rights of the European Union.

3) Risk-based approach

To achieve that goal, the AI Act adopts a risk-based approach. Thus, applicable rules are tailored to the severity and extent of risks posed by AI systems.

First, the AI Act prohibits certain practices in the field of AI. For instance, it prohibits the use of AI systems that infer the emotions of individuals in the workplace. This prohibition will already enter into force as of February 2025.

Furthermore, a number of AI systems qualify as 'high-risk'. These AI systems must meet a number of mandatory requirements. Many AI systems used for HR purposes will fall under the high-risk category. In a recruitment context, for example, these include AI systems used for recruiting or selecting natural persons, in particular for placing targeted job advertisements, analysing and filtering applications, and assessing candidates.

Finally, there are AI systems that pose only a limited risk, arising mainly from a lack of transparency. This category includes, for example, AI systems that improve the language of previously drafted documents or answer questions (e.g., ChatGPT). In this context, companies will have to make it known to the persons concerned that the content is AI-generated.
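
By way of illustration only, this triage can be recorded as structured data from the outset. The Python sketch below uses invented system names and category assignments to show the shape of such an inventory; the actual legal classification of each system of course requires a case-by-case analysis under the Act.

    # Illustrative sketch only: a simplified triage of AI systems into the
    # AI Act's risk categories. The category assignments are assumptions
    # made for demonstration, not legal conclusions.
    from enum import Enum

    class RiskCategory(Enum):
        PROHIBITED = "prohibited"      # e.g. emotion inference at work
        HIGH_RISK = "high-risk"        # e.g. CV screening, candidate scoring
        LIMITED_RISK = "limited-risk"  # e.g. chatbots, text-polishing tools

    # Hypothetical inventory of systems used in an HR department.
    inventory = {
        "emotion-monitoring tool": RiskCategory.PROHIBITED,
        "CV screening software": RiskCategory.HIGH_RISK,
        "internal chatbot": RiskCategory.LIMITED_RISK,
    }

    for system, category in inventory.items():
        print(f"{system}: {category.value}")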

4) Actors under the AI Act

The AI Act contains different obligations depending on whether an organisation qualifies as a provider, importer, distributor, product manufacturer, authorised representative or so-called deployer. Employers qualify as deployers in most cases.

However, a qualification as provider cannot be excluded. This will be the case, for example, when a company does not merely use an existing AI system but further develops it and makes it available under its own name. The line between deployer and provider may be thin, but the difference in their obligations is not.

5) Obligations of providers and deployers

Most of the obligations under the AI Act apply to providers of AI systems, especially when it comes to high-risk AI systems. In particular, providers must in such cases:

  • put in place risk management and quality management systems throughout the system lifecycle;
  • take data governance measures;
  • provide technical documentation and automatic logging;
  • provide deployers with instructions for use;
  • incorporate human oversight into the design of the system; and
  • guarantee accuracy, robustness and cybersecurity.

Deployers of high-risk AI systems have the following obligations:

  • they must take appropriate technical and organisational measures to ensure that they use the AI systems in accordance with the instructions for use;
  • they must assign human oversight of AI systems to persons with the necessary competence, training and authority, as well as the necessary support;
  • when deployers exercise control over the input data, they must ensure that the input data is relevant and sufficiently representative;
  • deployers must also monitor the operation of the AI system and keep the automatically generated logs for at least six months (a minimal sketch of this retention check follows below).
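
As a minimal sketch of the last point, and assuming each log entry carries a timestamp, the six-month retention requirement boils down to simple date arithmetic. The function and the 183-day approximation of "six months" below are illustrative assumptions, not prescriptions of the Act.

    # Minimal sketch: check that the oldest retained log entry reaches back
    # at least six months. The 183-day figure approximates "six months"
    # and is an assumption for illustration.
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=183)  # rough approximation of six months

    def retention_ok(oldest_entry: datetime, now: datetime) -> bool:
        """True if retained logs cover at least six months from `now`."""
        return now - oldest_entry >= RETENTION

    now = datetime(2026, 8, 2)
    print(retention_ok(datetime(2026, 6, 1), now))  # False: only ~2 months kept
    print(retention_ok(datetime(2026, 1, 2), now))  # True: ~7 months kept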

Before putting a high-risk AI system into service or using it in the workplace, employers will have to inform the workers' representatives and the affected workers that they will be subject to the use of such a system.

In addition, companies must take measures to ensure a sufficient level of AI literacy among those who come into contact with AI systems. This obligation comes into effect as early as February 2025.

6) Enforcement

Supervision of the AI Act will be carried out by different authorities. At the European level, the AI Office and the AI Board will assume this role and will also publish opinions and recommendations. At the national level, one or more national supervisory authorities will oversee enforcement. These can be set up within existing data protection authorities, but this is not required.

Violation of the provisions on prohibited AI practices can lead to fines of up to EUR 35 million or 7% of a company's total worldwide annual turnover, whichever is higher. Other violations can lead to fines of up to EUR 15 million or 3% of annual turnover, and providing incorrect, incomplete or misleading information to authorities can lead to fines of up to EUR 7.5 million or 1% of annual turnover. For SMEs and start-ups, the applicable cap is the lower of the two amounts.
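
To make the arithmetic concrete, the sketch below computes the applicable maximum per tier as the higher of the fixed amount and the turnover percentage. The EUR 600 million turnover figure is invented for illustration.

    # Worked example: the maximum fine per tier is the higher of a fixed
    # amount and a percentage of worldwide annual turnover.
    TIERS = {
        "prohibited practices": (35_000_000, 0.07),
        "other violations": (15_000_000, 0.03),
        "incorrect information": (7_500_000, 0.01),
    }

    def max_fine(tier: str, annual_turnover: float) -> float:
        fixed, pct = TIERS[tier]
        return max(fixed, pct * annual_turnover)

    turnover = 600_000_000  # hypothetical annual turnover in EUR
    for tier in TIERS:
        print(f"{tier}: EUR {max_fine(tier, turnover):,.0f}")
    # prohibited practices: EUR 42,000,000 (7% exceeds the fixed EUR 35M)
    # other violations: EUR 18,000,000
    # incorrect information: EUR 7,500,000 (the fixed amount exceeds 1%)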

7) Gradual application over time

There are already some key dates to note regarding the application of the AI Act for companies acting as providers or deployers. Following its publication on 12 July 2024, the AI Act will come into force on 1 August 2024.

As of February 2025, AI systems with unacceptable risk will be prohibited. In addition, from then on, companies must also take measures to provide an adequate level of AI literacy.

As of August 2026, most obligations for high-risk AI systems must be met, and the transparency obligations will enter into force.

Start preparing today

It is crucial to start preparing for the AI Act in good time. Companies should map out now which AI systems they themselves and their partners use. For each AI system, the company should check how it qualifies under the AI Act, in particular whether it acts as a provider or deployer.

The next step is to identify which risk category each AI system falls into. This exercise is best completed before February 2025, when the rules on prohibited AI systems come into force.

Based on this overview, the company can identify its obligations and draw up an action plan to proceed with timely implementation. Most obligations are effective from August 2026. However, transitional measures apply to certain AI systems.
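
One practical way to keep such an action plan is as simple structured data, sorted by deadline. The sketch below uses invented system names and assessments; the deadlines reflect the timeline set out above.

    # Illustrative compliance inventory: each entry maps an AI system to the
    # company's role, its risk category and the applicable AI Act deadline.
    # System names and assessments are invented for demonstration.
    from datetime import date

    inventory = [
        {"system": "CV screening tool", "role": "deployer",
         "risk": "high-risk", "deadline": date(2026, 8, 2)},
        {"system": "chatbot customised and released under own name",
         "role": "provider", "risk": "limited-risk",
         "deadline": date(2026, 8, 2)},
        {"system": "emotion-monitoring pilot", "role": "deployer",
         "risk": "prohibited", "deadline": date(2025, 2, 2)},
    ]

    # Action plan: handle the earliest deadlines first.
    for entry in sorted(inventory, key=lambda e: e["deadline"]):
        print(f"{entry['deadline']}: {entry['system']} "
              f"({entry['role']}, {entry['risk']})")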

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
