Opinion: Don't Allow AI To Erode Workplace Trust


The key workplace decisions of recruitment, performance, selection, promotion and termination are considered high-risk, and therefore subject to stricter rules, under the new EU AI Act, writes Síobhra Rush, employment law partner at Lewis Silkin.

The act, the world's first comprehensive AI legal framework, has been formally adopted by the Council of the EU, with the key compliance obligations to be staggered over the next two years. The Department of Enterprise, Trade and Employment has sought submissions from interested parties in regard to the implementation of the act.

The general expectation is that the act will become the international default, much like the GDPR, which became a model for many other laws on a global scale.

Bias within AI systems

The potential for AI systems to 'bake in' discrimination and bias is well recognised.

Hiring decisions using AI could therefore result in outcomes that are open to legal challenge.

Detecting and addressing the risk of discriminatory outcomes is a multi-stakeholder issue.

Provider assurances will be a key part of the procurement process and deployers must ensure that input data is representative and relevant.

Many employers will go further, putting in place bias audits and performance testing to mitigate these risks.

Similarly, ensuring that AI-supported decisions can be adequately explained is critical to maintaining trust in AI systems and enabling individuals to effectively contest decisions based on AI profiling.

Proliferating tools

Novel AI tools are proliferating in areas such as recruitment, performance evaluation, and monitoring and surveillance.

The act categorises these common use cases as automatically high risk.

Lower-risk scenarios are those where the AI performs narrow procedural tasks or improves the result of a previously completed human activity.

This breadth reflects the wide range of AI systems already in use as workplace tools, and gives a sense of the act's potential reach.

Each stage of the recruitment process can now be supported by AI: generative AI drafts job descriptions, algorithms determine ad targets, and candidates might interact with a chatbot when submitting their application.

Selection, screening and shortlisting supported by AI systems presents legal and ethical risks. Assessments and even interviews may now have less human input.

Collection and objective analysis of employee data means that AI is already widely used as a performance management tool.

Monitoring technology has the potential to provide a safer workplace (for example, tracking delivery drivers' use of seatbelts and speed) but could also be overly intrusive and erode trust by monitoring keystrokes and work rate.

Rigorous end

Employment AI use cases will very likely fall into the more rigorous end of the act's requirements, but the consequent obligations will hinge on whether the employer is a 'provider' or 'deployer' of the AI system.

Most employers will be considered a deployer, with the developers of the AI system the provider.

Providers have extensive obligations:

  • Maintaining a risk-management system,
  • Ensuring training data meets quality criteria,
  • Providing monitoring logs,
  • Designing tools that can be overseen by humans, and
  • Registering the tool on an EU-wide database.

The requirements for deployers are somewhat less onerous (and less costly) but will still require significant planning:

  • Using AI in accordance with provider instructions,
  • Assigning trained human oversight,
  • Limiting data use to that which is relevant and sufficiently representative,
  • Monitoring, and flagging incidents to provider,
  • Log-keeping,
  • Informing workers' representatives and affected workers that they will be subject to an AI system ahead of use, and
  • Conducting a fundamental rights impact assessment prior to use (if the deployer is providing a public service or operating in banking or insurance).

Data request

Deployers may also be required to comply with a data request for an explanation of the role of AI in a decision which has impacted an affected person's health, safety or fundamental rights.

However, as there is not yet settled practice on what this kind of explanation should look like, compliance may prove difficult.

Employers should be aware that a deployer may be deemed a provider, with more onerous obligations, if it:

  • Substantially modifies an AI system,
  • Modifies its intended purposes, or
  • Puts their name or trademark on it.

Lighter transparency obligations apply to use cases deemed limited risk, for example the requirement that users are informed that they are interacting with an AI tool.

Certain uses are banned outright, such as inferring emotions and biometric categorisation that uses personal physical characteristics to determine race, political views, religion or sexual orientation.

Originally published by Law Society Gazette.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
