Artificial Intelligence (AI) has been developing under our noses for decades, but the pace of development has accelerated significantly on the back of a series of recent technological breakthroughs.

From self-driving cars to automated online assistants, AI is never far from the headlines or our daily lives. However, whilst AI offers many benefits for businesses, we are often quick to overlook the risks and unintended negative outcomes it can bring.

Razia Begum, employment lawyer at Taylor Vinters, explores whether the law is prepared for dealing with automated decisions made by machines, and offers advice on how to protect your business from some of the legal ramifications relating to discrimination.

Practising what we preach

As a business headquartered in Cambridge, Taylor Vinters sits at the heart of the UK's technology sector. The law firm works with innovative and entrepreneurially minded people and businesses, creating hubs for start-ups and building platforms for growth. The entrepreneurial culture this has fostered has encouraged us, within our legal business, to incubate an early-stage AI company, giving us first-hand experience of this developing field.

AI is having a growing impact on the way we work and live. It is creating a new workforce, adding value to the skills and capacity of existing resources. It encourages us to rethink the way jobs are carried out, reinforcing the need for human input and, in particular, highlighting the added value of work that cannot be automated. Machine learning is already powering a host of new systems that help consumers and businesses make smarter, quicker decisions. But, as we continue to explore the benefits, it is easy to forget the powerful autonomy this sophisticated technology has been granted to make automated decisions on our behalf.

AI taking centre stage

Although AI is created by humans, its consequences cannot always be controlled. Many forms of the technology learn as they go, refining how they analyse data-sets and making automated decisions accordingly. This has the potential to affect humans both positively and negatively. More specifically, outsourcing decision-making to machines risks unintended discriminatory outcomes. Whilst it is encouraging to see technology firms increasingly embracing the possibilities of machine-learning and algorithms, they should also be mindful of the legal pitfalls – both now and, increasingly, in the future.

In the same way that humans can be unconsciously biased, so too can machine-learning systems and algorithms. This was highlighted in a recent study published by Princeton University researchers, which found that machine-learning systems absorb the biases embedded in the text they are trained on. The study followed shortly after the shutdown of Microsoft's AI-powered chatbot 'Tay', which was unable to recognise when it was making offensive statements. Similarly, New York-based ProPublica found last year that software widely used to assess criminals' likelihood of reoffending wrongly flagged black defendants as future reoffenders at almost twice the rate of white defendants.
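To make the point concrete, the short Python sketch below illustrates the kind of word-association test used in research of this sort: it measures whether an occupation word sits closer to one gendered word than another in an embedding space. The words and vectors here are toy values invented purely for illustration; real studies use embeddings trained on large text corpora.

```python
# A minimal sketch of a word-association bias test. The 3-dimensional
# vectors below are invented for illustration only; real embeddings
# have hundreds of dimensions and are learned from large corpora.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings (assumed values, not from any real model).
embeddings = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.4]),
    "he":       np.array([0.8, 0.2, 0.3]),
    "she":      np.array([0.3, 0.9, 0.4]),
}

for occupation in ("engineer", "nurse"):
    bias = (cosine(embeddings[occupation], embeddings["he"])
            - cosine(embeddings[occupation], embeddings["she"]))
    # A positive score means the occupation sits closer to "he" than "she".
    print(f"{occupation}: gender association score = {bias:+.3f}")
```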

Legal hazards

For businesses developing and implementing AI, care must be taken from the outset to reduce the risk of discrimination, which can occur both directly and indirectly. If an automated decision results in the unfair treatment of individuals based on 'protected characteristics' – including, but not limited to, age, disability, race or sexual orientation – the UK legal system would consider this a form of direct discrimination.

It becomes murkier, however, when looking at indirect discrimination, whereby a provision is applied equally to everyone but puts people with a particular 'protected characteristic' at a disadvantage. Such a practice is discriminatory unless it can be shown that the relevant decision serves a legitimate aim and is a proportionate means of achieving that aim. To complicate matters further, it can also be unlawful to fail to treat a disabled individual differently from a non-disabled person. Businesses developing or using machine-learning systems that make decisions based on data containing information about individuals' disabilities are legally obliged to consider any reasonable adjustments that would place those with a disability on a level playing field with their non-disabled peers.
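One practical way to spot potential indirect discrimination in an automated system is to compare outcome rates across groups, since a rule applied equally to everyone can still disadvantage one group in practice. The sketch below illustrates such an audit; the data is hypothetical, and the 0.8 threshold is borrowed from a US statistical heuristic (the 'four-fifths rule') rather than any UK legal test, so treat it as an illustrative check, not a compliance standard.

```python
# A minimal sketch of an outcome-rate audit for an automated decision
# system. Data and the 0.8 threshold are illustrative assumptions only.
from collections import Counter

# Hypothetical (group, decision) outcomes produced by an automated system.
outcomes = [("A", "hired"), ("A", "hired"), ("A", "rejected"),
            ("B", "hired"), ("B", "rejected"), ("B", "rejected")]

counts = Counter(outcomes)
totals = Counter(group for group, _ in outcomes)

# Selection rate per group, then the ratio of the worst to the best rate.
rates = {g: counts[(g, "hired")] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
print(f"impact ratio: {ratio:.2f}"
      + ("  <- potential indirect discrimination" if ratio < 0.8 else ""))
```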

The prospect of an individual taking legal action because a machine-learning system has discriminated against them is a novel one. It shows where our current law has simply not caught up with the rapid pace of technological change. There is no specific legislation in place to deal with this eventuality and, realistically, none will exist any time soon. Instead, based on current legislation dealing with human discrimination, it seems that both the business that supplied the AI software and the customer or end-user of that technology could be held liable.

The end-user – which may be your business – could be ordered to compensate the individual for financial loss, which is technically uncapped, or for injury to their feelings. In turn, businesses may attempt to recover such costs from the supplier of the software. Discrimination claims are generally costly, complex and time-consuming and, more importantly, cause short- and long-term damage to the reputation of both businesses. These factors would only be exacerbated where machines form part of the chain of causation.

How can businesses protect themselves?

Whilst there is no sure-fire way to eliminate the risk of AI discrimination, there are steps that businesses can and should take to reduce their exposure and protect themselves against any backlash.

Work collaboratively with your client, at the earliest stage possible, to identify any potentially problematic areas. It is worth establishing the reasons for using the software and, if you do proceed with a particular machine-learning platform, identifying possible solutions or ways around the issues.

If required, modify the project brief. You may also want to put policies in place between the AI provider and the end-user in order to reduce exposure to legal and associated commercial risks.

Raise awareness among relevant team members (through internal guidance or training) of potential discrimination risks and how they might manifest. For example, staff should be encouraged to examine the language used in your data-set, avoiding any form of stereotype or words that single out certain groups.
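As a rough illustration of that kind of data-set review, the sketch below screens free-text records against a list of terms that could single out particular groups. The term list and records are placeholders invented for this example; a genuine review would need a far broader list and, ultimately, human judgement.

```python
# A minimal sketch of a first-pass language screen over a data-set.
# The flagged terms and records are assumed examples only.
FLAGGED_TERMS = {"young", "energetic", "native speaker"}

records = [
    "Seeking a young, energetic graduate for a fast-paced team",
    "Experience with statutory accounts preferred",
]

for record in records:
    # Collect any flagged terms appearing in the record (case-insensitive).
    hits = [t for t in FLAGGED_TERMS if t in record.lower()]
    if hits:
        print(f"review needed: {hits!r} in: {record!r}")
```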

Managing the risk may be labour-intensive in the short term, but it is a process worth going through to avoid long-term exposure to legal risks.

During a time of significant technological change, there will always be an element of uncertainty and risk to temper the excitement and opportunity. Until there are concrete ways for those developing and using AI to eliminate risk from the outset, or specific legal procedures in place to deal with issues associated with new technology, businesses must ensure that they are aware of the associated discrimination risks. They should do all they can to protect against and minimise potential damage. With the growing number of media platforms, the threat of a business receiving bad press is high and, in some cases, fatal – particularly for start-ups. In this case especially, prevention is better than cure.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.