What Does The AI Act Mean For Employers?

By L&E Global (Belgium)

The EU has recently adopted the AI Act, in full the "Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts". It aims to classify and regulate different AI systems and tools in order to identify and restrict harmful applications. Below, we look mainly at how this can impact employers using AI, e.g. to recruit, monitor or evaluate their employees.

1 Risk qualification

It is important to know that the AI Act imposes obligations not only on AI providers, who create the AI tools, but also on deployers, meaning any company that applies an AI tool, including employers who use such applications in an employment context.

The AI Act is, in general, a system of risk assessment for AI. It provides a scale based on the perceived risk of an AI application (a short sketch of this scale follows the list):

  • Unacceptable risk: AI tools falling under this category are prohibited (Chapter II).
  • High risk: these AI tools pose a significant risk to health, safety, or the fundamental rights of persons. In this case, the AI Act imposes several measures and safeguards in order to keep the application safe and under control. (Chapter III)
  • Limited risk: this includes AI systems intended to interact directly with natural persons, as well as AI systems, such as general-purpose AI systems, that generate synthetic audio, image, video or text content, including deep fakes. In this case, there is a transparency obligation toward users. (Chapters IV and V)
  • Minimal risk: these applications do not require any regulation.
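Purely as an illustration, the scale can be pictured as a simple mapping from risk tier to headline consequence; the summaries in the sketch below are our own shorthand, not wording from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers; the summaries are our own shorthand."""
    UNACCEPTABLE = "prohibited outright (Chapter II)"
    HIGH = "allowed, subject to safeguards and controls (Chapter III)"
    LIMITED = "allowed, subject to transparency obligations (Chapters IV and V)"
    MINIMAL = "no specific obligations under the AI Act"

# Example: a CV-screening tool used in recruitment is listed in Annex III
# and would normally sit in the high-risk tier.
print(RiskTier.HIGH.value)
```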

2 Prohibited AI

The prohibited category covers malicious applications, such as purposefully manipulative or deceptive techniques used to distort the behaviour of persons by impairing their ability to make an informed decision, thereby causing them to take a decision they would not otherwise have taken. Also prohibited are systems that exploit vulnerabilities due to age, disability or a specific social or economic situation in a way that causes significant harm, as well as far-reaching surveillance systems used by the authorities. In any case, most employment applications will normally not fall under the prohibited category.

3 Classification as high-risk AI

By contrast, it is easy to imagine HR applications that fall into the high-risk category. This category includes (see Annex III of the AI Act):

  • Biometrics, including remote biometric identification systems (excluding systems whose sole purpose is biometric verification, i.e. confirming that a person is who they claim to be), systems for biometric categorisation using sensitive or protected characteristics, and systems for emotion recognition.
  • Employment, worker management and access to self-employment:
    • AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, analyse and filter job applications, and evaluate candidates.
    • AI systems intended to be used to make decisions affecting the terms of work-related relationships or the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.

It is safe to say that most AI tools that can be helpful for managing employees fall under these categories. Art. 6(3) AI Act allows a derogation from this classification if the system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. This is the case if one or more of the following conditions are fulfilled:

  • The AI system is intended to perform a narrow procedural task;
  • The AI system is intended to improve the result of a previously completed human activity;
  • The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
  • The AI system is intended to perform a preparatory task to an assessment relevant to the purposes listed as high risk.

Notwithstanding these derogations, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons. Of course, one can expect a lot of discussion regarding this catch-all concept of profiling natural persons.
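To make the interplay concrete, the derogation test and the profiling override can be sketched as a small check; the attribute names below are our own hypothetical shorthand for the four conditions, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Hypothetical flags describing an Annex III AI system."""
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_with_human_review: bool
    preparatory_task_only: bool
    profiles_natural_persons: bool

def is_high_risk(system: AnnexIIISystem) -> bool:
    """Art. 6(3) sketch: one fulfilled condition lifts the high-risk
    classification, but profiling of natural persons always restores it."""
    if system.profiles_natural_persons:
        return True  # catch-all: profiling overrides every derogation
    return not any((
        system.narrow_procedural_task,
        system.improves_completed_human_activity,
        system.detects_patterns_with_human_review,
        system.preparatory_task_only,
    ))

# Example: a tool that only converts incoming CVs to a uniform file format
# performs a narrow procedural task and no profiling.
print(is_high_risk(AnnexIIISystem(True, False, False, False, False)))  # False
```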

4 Requirements for high-risk AI systems

Art. 9 requires the establishment of a risk management system. This system includes the preventive identification of all possible risks and the adoption of appropriate and targeted risk management measures designed to address the risks identified. This has consequences not only for the design of the AI tools, but also for the information and training of the persons who operate them. High-risk AI systems also carry specific obligations regarding data governance, especially when personal data is used to train the AI system, as well as record-keeping obligations (to monitor and trace the operation of the system). Providers of high-risk systems must give deployers (and thus employers) the instructions to be followed, so that the risks can be mitigated, and Art. 14 of the AI Act includes the obligation to make human oversight of the operation of the AI systems possible. Art. 26 of the AI Act sets out the specific obligations of deployers:

  • Take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems.
  • Assign human oversight to natural persons who have the necessary competence, training, and authority, as well as the necessary support.
  • To the extent that the deployer exercises control over the input data, ensure that the input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
  • Monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers and authorities (e.g., if they have identified a significant risk, they have to suspend the use of the system).
  • Keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control, for a period of at least six months (see the retention sketch after this list).
  • Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers' representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on the information of workers and their representatives.
  • Where applicable, deployers of high-risk AI systems shall carry out a data protection impact assessment.
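As a minimal sketch of the log-retention rule referenced in the list above (assuming "six months" is approximated as 183 days; the Act only sets a minimum, so retaining logs longer is always permissible):

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # assumption: "six months" ~ 183 days

def may_delete_log(created_at: datetime) -> bool:
    """Return True only once a log entry generated by a high-risk AI system
    has passed the minimum retention period Art. 26 imposes on deployers."""
    return datetime.now(timezone.utc) - created_at >= MIN_RETENTION

# Example: a log written today must still be kept.
print(may_delete_log(datetime.now(timezone.utc)))  # False
```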

5 Enforcement

Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the Regulation may submit a reasoned complaint to the relevant national market surveillance authority. This means that anyone can lodge a complaint, even without being directly affected. The market surveillance authority has broad powers to monitor AI systems and can investigate, decide to suspend or terminate AI systems, and sanction providers and deployers.

Furthermore, any affected person who is subject to a decision taken by the deployer on the basis of the output of a high-risk AI system, and who considers that the decision adversely impacts their health, safety or fundamental rights, has the right to obtain from the deployer (the employer) a clear and meaningful explanation of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.
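To make this explanation right concrete, an employer could document each AI-assisted decision in a structured record. The fields below are a hypothetical sketch, not a format prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    """Hypothetical record of the explanation owed to an affected person."""
    decision: str             # the decision taken, e.g. rejection at screening
    ai_system_role: str       # how the AI output fed into the decision
    main_elements: list[str]  # the principal factors behind the decision
    human_reviewer: str       # who exercised human oversight

explanation = DecisionExplanation(
    decision="application rejected at the screening stage",
    ai_system_role="CV-ranking tool produced a shortlist; a recruiter made the final call",
    main_elements=["missing required certification", "ranked below the shortlist cut-off"],
    human_reviewer="HR recruiter",
)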

The AI Act provides sanctions for violations of the rules on high-risk AI systems, with penalties of up to EUR 15 million or 3% of the company's total worldwide annual turnover, whichever is higher.

6 Entry into force

The AI Act will enter into force 20 days after its publication in the Official Journal of the EU (expected in May 2024). Furthermore, the following deadlines apply (a rough date sketch follows the list):

  • 6 months after entry into force: the prohibition of unacceptable-risk AI practices takes effect.
  • 24 months after entry into force: the AI Act will apply generally and most other obligations, including those for the high-risk systems listed in Annex III, will take effect.
  • 36 months after entry into force: the obligations for high-risk systems under Annex I (AI embedded in products covered by EU harmonisation legislation) will take effect.
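As a rough sketch of how these deadlines stack up (using a hypothetical publication date; the official dates follow from the actual publication in the Official Journal):

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day where the target month is shorter."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    days_in_month = [31, 29 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

publication = date(2024, 5, 21)           # hypothetical publication date
entry = publication + timedelta(days=20)  # entry into force: 20 days later
print("Entry into force:", entry)
print("Prohibitions enforced:", add_months(entry, 6))
print("Act generally applies:", add_months(entry, 24))
print("Remaining high-risk obligations:", add_months(entry, 36))
```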

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
