ARTICLE
29 August 2024

Ireland Update: Impact Of The AI Act On Workplace Practices

Maples Group

The Maples Group is a leading service provider offering clients a comprehensive range of legal services on the laws of the British Virgin Islands, the Cayman Islands, Ireland, Jersey and Luxembourg, and is an independent provider of fiduciary, fund services, regulatory and compliance, and entity formation and management services.

What You Need to Know

Following the EU's regulation on AI (the "AI Act"), which came into force on 1 August 2024, employers should:

  • Identify the types of AI systems used in their workplace.
  • Understand the relevant risk level and the obligations that will apply.
  • Take action to ensure they are prepared for the implementation of the AI Act.

Background

Artificial intelligence ("AI") offers numerous benefits to employers and employees, in particular in the areas of development and training as well as by increasing productivity and efficiency. However, AI also raises the potential for ethical concerns relating to bias, discrimination and data protection.

The AI Act mainly focuses on "high-risk AI" systems. These include AI systems used in connection with the recruitment or selection of candidates or with making decisions affecting employees. It is therefore important that employers are aware of the types of AI systems currently used in the workplace and the risk level associated with those AI systems.

What is an AI System?

An AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

How AI Systems are used in the Workplace

AI systems may be used by HR departments in a wide variety of areas such as:

  • Recruitment: AI is often used to filter job candidates based on CVs and relevant metrics and to match experience with job profiles.
  • Interviews: Some employers use AI to partially automate and assist with interviews.
  • Performance Management: AI can be used to analyse tasks performed by employees, to analyse and consolidate feedback on employees, and to create a customised list of targets and goals that can be shared with them.
  • Development and Training: AI can be used to monitor employee performance and offer suggestions on learning strategies, training and career development.
  • Benefits Management: Many HR departments implement AI-based benefit enrolment platforms which provide employees with personalised benefit suggestions.
  • Safety: AI is often deployed in manufacturing facilities to identify potential hazards.

The AI Act's Key Provisions

Under the AI Act, AI systems are categorised based on the risk they pose, which in turn determines the level of regulation applicable. These categories are (i) prohibited AI, (ii) high-risk AI, (iii) limited risk AI and (iv) minimal risk AI.

Prohibited AI

Prohibited AI includes AI systems that infer the emotions of individuals in the workplace, unless this is for medical or safety reasons. This means that employers cannot use AI to monitor an employee's emotional state during interviews or when performing tasks.

Prohibited AI also includes the use of biometric categorisation systems that categorise individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. It is important, therefore, that such systems are not used in recruitment practices, given the protections afforded to job candidates under the Employment Equality Acts 1998 – 2022 (the "EEA").

High-Risk AI

The AI Act specifically designates the following as high-risk AI:

  • AI systems intended to be used for the recruitment and selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications and to evaluate candidates; and
  • AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

This means that AI systems used in recruitment and performance management will be considered high-risk AI, and employers will have to comply with certain obligations under the AI Act, such as:

  • implementing technical and organisational measures to ensure the high-risk AI is used in accordance with the instructions accompanying the system;
  • informing employees where they will be subject to high-risk AI and if the AI system will be used to make decisions affecting them;
  • where employers exercise control over input data, ensuring relevant and representative input data;
  • keeping records of logs generated by the high-risk AI;
  • ensuring those overseeing the use of the high-risk AI have the necessary skills and training;
  • where relevant, conducting a data protection impact assessment in line with the GDPR; and
  • monitoring the use of the high-risk AI and reporting any serious incidents.

Limited Risk AI

Under the AI Act, limited risk AI systems are those that pose a risk of impersonating a human or of deception. In the workplace, such systems may include chatbots used by HR departments to provide employment relations support.

Limited risk AI is subject to transparency requirements under the AI Act, which means that employers will need to make employees aware that they are interacting with AI and not a human.

How to Prepare for the Implementation of the AI Act

Employers should consider the following:

  • Risk Assessment: Review all AI systems currently in use to determine the appropriate risk category and the obligations that will apply. A gap analysis should also be conducted to identify any areas of non-compliance with those obligations.
  • Updating Policies and Procedures: Update AI, employment and data protection policies to ensure they reflect the obligations imposed upon employers under the AI Act.
  • Training: Organise training on the AI Act for HR departments and individuals who are responsible for oversight of AI systems, as well as training for employees on how AI is used in the context of their employment, highlighting the importance of identifying output that is biased or inappropriate.
  • Communications to Employees and Employee Representatives: Communicate with employees and/or employee representatives on the use of AI in the workplace to ensure a sufficient level of transparency.
  • Vendor Due Diligence: Conduct due diligence on current and new providers of AI systems to ensure that they comply with their obligations under the AI Act.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
