Using Generative AI In The Workplace – What Do I Need To Think About?

Generative AI streamlines business processes but poses legal risks in data protection, intellectual property, and employment law. Implementing a comprehensive AI policy ensures compliance and mitigates these risks.

Generative AI is already being utilised within businesses and amongst workforces. It offers opportunities for streamlining processes, improving efficiencies and enhancing client service. However, its use in the workplace also raises legal considerations, and it is important to get ahead of the curve to ensure that your people are using Generative AI sensibly and not putting your business and others at risk. In particular, having an AI policy in place can help to protect your business from the risks AI poses. In the UK, those risks primarily revolve around data protection, intellectual property, and employment law.

This article has been kindly contributed to by specialist Employment lawyer Ellie Hibberd in respect of Generative AI's implications for employment law.

Security & Data

Businesses have certain legal obligations when it comes to controlling and processing personal data. The availability of these tools both inside and outside of the workplace means that it is easy to input names, addresses, telephone numbers and other sensitive personal data into a Generative AI tool.

It is therefore highly probable that personal data will be input by way of "prompts" when using such tools. It is important to conduct a Data Protection Impact Assessment ("DPIA") before using any AI tool in your business. This can help you identify your lawful basis for processing the data you require and put steps in place to mitigate any identified risks.

Businesses are also required to implement appropriate technical security measures to protect personal data, as well as organisational measures such as policies for your people.

If you're considering (or already) using AI in your business, now is a great time to review your business' approach to data protection. Our expert lawyers can help you navigate your obligations and build a framework for compliance.

Intellectual Property and Generative AI

Generative AI has the potential to create new intellectual property rights – or, due to the extensive datasets used to train the tools, infringe existing intellectual property rights.

If an AI system generates content based on copyrighted material such as a photograph, song or book, it could potentially infringe those pre-existing rights. In the UK, copyright law does not currently recognise AI as an author; instead, the person who made the arrangements necessary for the creation of the work is deemed to be the author. The courts have found that this is typically the person writing the prompt, or otherwise instructing the AI tool to create output. Unless a contrary agreement is reached, employers automatically acquire title to most intellectual property rights created by employees in the course of their employment. This means that employers could automatically inherit infringing materials, making the intellectual property impact an important consideration for businesses.

Implementing an AI policy in your workplace can help set out parameters for how your people can and cannot use Generative AI; ensuring that you have appropriate safeguards in place is crucial to manage your risk.

Employment Law and Generative AI

The use of Generative AI in the workplace could have significant implications for employment law, including:

  • Recruitment: Recruitment can be hugely time-consuming and AI could short-cut some of those processes, for example by creating job specifications, writing up questions for interviews or summarising responses. However, because of the training data sets AI tools draw on, care needs to be taken that this does not bring an unintended bias into your processes or exacerbate existing systemic biases. You will need to know and understand how any AI tool you are using has been put together, or you could find yourself with limited ability to defend allegations of discrimination.
  • Pay/bonus decisions: Generative AI could be used to analyse the data on which performance-related pay decisions are based. If that output is not checked, an employer risks being unable to explain the decision reached, and if that decision is ultimately found to be flawed, it could expose the employer to arguments that the relationship of trust and confidence has been breached.
  • Contract and document drafting: An employer is likely to be able to access an apparently suitable employment contract or settlement agreement (for example) from a publicly accessible Generative AI platform. However, there is no guarantee that the template will satisfy all the statutory requirements for these types of documents. This could result in the document not being legally binding, or expose the employer to a claim in the future.
  • Redundancy: Generative AI could be used as part of a redundancy selection process, for example to devise selection criteria or even to score employees. This brings with it the risk of irrational or unfair decision-making, particularly where there is a lack of understanding of how the underlying data sets work and how the output should be used. It will be critical to keep a human in the loop to ensure that any decision is fair.
  • Job creep: It is well-accepted that having a 'human in the loop' is critical with any AI use. That is likely to add an additional duty to someone's role. Take care over simply adding that to a job description or role specification: you could be adding significantly to their workload, and without appropriate support and training you could be putting them in a vulnerable position, which may in turn rebound on you.
  • Offensive / Inappropriate content: AI systems can unintentionally perpetuate or amplify biases present in their training data, leading to the creation of offensive or discriminatory content. Employers must ensure there are appropriate human mechanisms in place to check the output.

It is also worth bearing in mind that an employee who is concerned about how their employer is using AI may 'blow the whistle' on that practice. If doing so is a protected disclosure, they will be protected in law. Where concerns are raised by employees about how AI is being used, it is therefore important to take them seriously, act appropriately and ensure that those employees are not disadvantaged in any way for having raised their concerns.

Above all, keep in mind that an effective and engaged employer/employee relationship is founded on human interaction. Even the best AI tools are no replacement for that and can't apply the ethical and nuanced thinking a well-informed and trained manager can. The more AI is relied upon, the weaker the human relationship becomes and the greater the risks posed to your business.

What next?

While Generative AI offers exciting possibilities, it is essential for businesses to take proactive protective action by putting in place a comprehensive AI policy for their people, and for this policy to be drafted by legal professionals to ensure all bases are covered.

At Stephens Scown, we have a team of specialist lawyers who have expertise in data protection, intellectual property, technology and employment law – the perfect combination for a well-rounded protective AI policy for your business.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
