Artificial intelligence has rapidly filtered into our lives and is here to stay. For businesses and wider society, the potential benefits are immense. Diseases can be diagnosed and treated with greater accuracy, fraud can be spotted more effectively, productivity can be boosted and human error removed from all sorts of decision-making processes.

But to get the most out of the technology, humans must relinquish some control and allow a machine to learn, correcting and improving its outcomes as it processes ever larger quantities of data.

This is what makes AI a scary thing. It can outperform us at certain tasks, but many people have only a limited understanding of how it actually works or where the boundaries of its capabilities lie.

It is this element of the unknown that is fuelling another AI debate, centred on ethics and governance. Drawing on the discussions from our latest Zebra Project session, here are five burning questions debated on this topic, together with our responses.

1. If a machine makes a decision that harms humans, has unintended adverse consequences for businesses or is simply not in line with what society deems acceptable, who is responsible?

Artificial intelligence is simply a tool that can be used by a business to deliver services to a customer or make decisions that affect an end-user.

If machine learning makes a decision that is unethical, the machine cannot currently be held responsible or be required to compensate the customer or end-user.

Although there have been some discussions at EU level about setting ethical standards or regulating who might be liable in such cases, we have yet to see any detailed proposals.

The starting point for a business is that it should be responsible for any loss caused in providing a service or making a decision (whether or not AI is involved) – in the same way, for example, that a haulage company may be liable to its customer if it fails to make a delivery due to a lorry breaking down.

It is up to a business to limit or exclude liability arising from its use of AI (subject to certain exceptions specified by law, such as liability for death or personal injury). This is certainly an option worth considering when negotiating commercial contracts.

2. Could the developer of the AI ever be made accountable for a bad decision?

Yes. Just as a service provider may be liable to its customer for loss caused by its use of AI, the developer could be liable if that loss was caused by a fault in the AI. In other words, there may be a chain of causation.

Using the example above, if the lorry suffered a mechanical failure caused by faulty manufacturing, the haulage company may be able to hold the manufacturer accountable. But it would not be able to do so if the mechanical failure was caused by the haulage company failing to look after the lorry properly.

So, liability will depend on who is really responsible for setting the rules that underpin any automated decisions. If the service provider misuses the AI and unethical decisions are made, then it must bear responsibility for that. However, if the AI has taught itself to reach unethical decisions based on the way it is configured by the developer, then the developer might be held accountable.

An example of this is an algorithm being used to help a large employer sift through high volumes of graduate job applications and decide who should be invited in for interview.

If the employer applies a blanket rule that requires the technology to reject all applications sent in by female candidates, that is unlawful discrimination instigated by the employer. If, on the other hand, the employer asks the technology to select candidates for interview based solely on non-discriminatory criteria but, over time, the AI teaches itself in such a way that most female candidates are rejected, that fault is more likely to rest with the developer.

In both cases, a disgruntled female candidate will likely pursue the employer. It would then be up to the employer to demonstrate that the algorithm's discriminatory decisions were an unintended consequence of how it had been programmed by the developer to self-teach.
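To make the second scenario more concrete, below is a minimal, purely illustrative sketch in Python. The data is synthetic and the feature names are hypothetical; it simply shows how a screening model can reproduce historical bias even though the protected attribute is never supplied to it, because a seemingly neutral input acts as a proxy.

```python
# Illustrative only: synthetic data and a hypothetical proxy feature, showing how
# a screening model can disadvantage one group without ever "seeing" gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (never shown to the model): 1 = female, 0 = male.
is_female = rng.integers(0, 2, n)

# A seemingly neutral feature that happens to correlate with the protected
# attribute, e.g. years of uninterrupted full-time experience (hypothetical proxy).
experience = rng.normal(5.0, 2.0, n) - 1.5 * is_female

# Historical "invited to interview" labels, reflecting past (skewed) outcomes.
invited = (experience + rng.normal(0.0, 1.0, n) > 4.5).astype(int)

# The model is trained only on the "neutral" feature...
model = LogisticRegression().fit(experience.reshape(-1, 1), invited)
predicted = model.predict(experience.reshape(-1, 1))

# ...yet it reproduces the historical disparity between the two groups.
print("Predicted invite rate (female):", predicted[is_female == 1].mean())
print("Predicted invite rate (male):  ", predicted[is_female == 0].mean())
```

The specific numbers do not matter; the mechanism does. If past decisions were skewed, or a "neutral" criterion correlates with a protected characteristic, the model will learn and repeat that pattern unless the employer and developer actively test for it.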

3. Should there be more transparency in the way algorithms make decisions?

Ultimately, the success of widespread AI depends on creating an environment of trust. Developing more transparent or explainable AI is key to this, so that people have a better understanding of how algorithms reach automated decisions.

A lack of transparency is not an inherent feature of machine learning tools – it is a design choice. There is a tricky balance to strike between creating algorithms that make predictable, or at least explainable, decisions and ensuring that AI systems are robust against manipulation.

Take the anti-fraud sector for example. If AI triggered a fraud alert following a card payment, it would be detrimental to the system if the reasons for that decision could be traced by the fraudster. On the other hand, if AI decided that an individual was not insurable, then the insurance company should be able to explain why to the person affected.

Traceability would therefore have to be carefully considered, depending on the type of business and how AI is being used. Much rests on the overall context and any underlying regulatory requirements.

As an example, the new EU General Data Protection Regulation (or "GDPR") will give individuals the right to be told why a certain automated decision based on their personal data has been reached, although this will not mean that businesses have to explain the detailed inner workings of a machine learning tool.
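As a purely illustrative sketch (the features, weights and data below are hypothetical, not any particular insurer's model), the example shows how a business might surface the main factors behind an automated decision in plain terms, without disclosing the model's detailed inner workings:

```python
# Illustrative only: hypothetical insurance-style data. The aim is to show that
# an explanation can be given at the level of the main contributing factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["claims_last_5_years", "years_licensed", "annual_mileage_000s"]

# Hypothetical historical data: 1 = cover offered, 0 = declined.
X = rng.normal(size=(1000, 3)) * [1.5, 5.0, 8.0] + [1.0, 10.0, 12.0]
y = (-0.9 * X[:, 0] + 0.2 * X[:, 1] - 0.05 * X[:, 2]
     + rng.normal(0.0, 1.0, 1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# A single hypothetical applicant.
applicant = np.array([[4.0, 2.0, 25.0]])
decision = model.predict(applicant)[0]

# Per-feature contribution to the decision score (coefficient x value). These can
# be ranked and translated into a plain-language explanation of the main factors,
# without handing over the full model or its training data.
contributions = dict(zip(feature_names, model.coef_[0] * applicant[0]))
for name, value in sorted(contributions.items(), key=lambda item: item[1]):
    print(f"{name}: {value:+.2f}")
print("Decision:", "cover offered" if decision == 1 else "declined")
```

A production system might rely on dedicated explainability tooling rather than raw coefficients, but the principle is the same: the explanation is pitched at the level of the main factors, not the model's internals.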

4. Is there any practical advice for businesses thinking about using AI to help mitigate the risk of mistakes/bias?

It is incumbent on businesses using AI to work together with those who create or develop the technology, so that they can learn from each other. This is the best practical way to reduce the risk of machines taking automated decisions that are biased, unlawful, unethical or simply clear mistakes based on the criteria they are given.

Taking the discrimination example as a starting point, businesses could consider adopting internal guidance for employees who use or develop AI tools, together with an external policy or agreement which sets out clearly how discrimination issues will be managed.

Given the potential liability that might arise from discriminatory conduct, it makes sense to adopt a collaborative approach that is aimed at spotting issues early, agreeing who is responsible for putting them right and refining automated processes to avoid repeat mistakes.

More generally, it is also advisable for business leaders to educate themselves on machine learning: its terminology, opportunities and limitations.

5. How feasible would it be to develop data ethics regulation?

It is feasible and could be enforced. However, we would need to think carefully about the scope of that regulation and what could be achieved in addition to what's already in force.

Businesses already have to consider discrimination legislation and the criminal law. In addition, the GDPR comes into force in May, placing significant new obligations on those processing and creating data, and giving individuals significant new rights in relation to their personal data.

We would also need to be careful about how we define the scope of any such stand-alone "data ethics" regulation and, in particular, what might be deemed "ethical". Too vague and it becomes unworkable. Too precise and it is inflexible. And what is generally considered "ethical" or acceptable by society will evolve over time. The RSA is doing some interesting work in this area through its Citizens' Juries.

Furthermore, I suspect that people will come to accept machines making decisions more readily as they become more familiar with the technology. Again, transparency in automated decision-making processes is key to this.

For further information on AI and ethical governance, please contact Dominic Holmes, Partner and Co-Head of the Taylor Vinters employment team.

For further information on our future debates, please visit the Zebra Project website.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.