A Hong Kong billionaire, an investment manager, and a money-managing supercomputer called K1 — what could go wrong?

K1 was designed to comb through online sources to make stock market predictions and execute trades, adjusting its strategy over time through machine learning.
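The article doesn't detail how K1 worked internally, but the loop it describes (score incoming signals, make a prediction, trade, then adjust the strategy based on the result) can be sketched in a few lines of code. Everything below is a hypothetical illustration: the SentimentTrader class, its single weight, and the random stand-ins for news sentiment and market returns are assumptions for the sake of the example, not K1's actual design.

```python
# Hypothetical sketch of a predict-trade-adjust loop like the one described.
# All names and logic here are illustrative, not K1's actual implementation.
import random

class SentimentTrader:
    """Toy online-learning trader: score a signal, trade, learn from the outcome."""

    def __init__(self, learning_rate=0.1):
        self.weight = 1.0               # how much trust is placed in the signal
        self.learning_rate = learning_rate

    def predict(self, sentiment_score):
        # Positive score -> expect the price to rise, and vice versa.
        return self.weight * sentiment_score

    def trade(self, prediction, threshold=0.5):
        if prediction > threshold:
            return "BUY"
        if prediction < -threshold:
            return "SELL"
        return "HOLD"

    def update(self, sentiment_score, realized_return):
        # Online learning: nudge the weight toward whatever would have
        # predicted the realized return (a simple gradient step).
        error = realized_return - self.predict(sentiment_score)
        self.weight += self.learning_rate * error * sentiment_score


trader = SentimentTrader()
for day in range(5):
    sentiment = random.uniform(-1, 1)       # stand-in for scraped news sentiment
    action = trader.trade(trader.predict(sentiment))
    realized = random.uniform(-0.05, 0.05)  # stand-in for the market's actual move
    trader.update(sentiment, realized)
    print(f"day {day}: sentiment={sentiment:+.2f} action={action} weight={trader.weight:.3f}")
```

The update step is also what complicates the liability questions discussed below: a system that rewrites its own strategy over time is, months later, no longer quite the product anyone originally reviewed.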

But after the tycoon entrusted a large amount of money to the artificial intelligence system, he faced some very real losses — including $20 million in a single day.

The billionaire is now suing the salesman who persuaded him to entrust his fortune to a robot. The lawsuit — one of the first over AI-triggered stock market losses — raises many questions about the legal implications of AI.

Lisa Ruth Lifshitz is a technology and privacy lawyer and a partner at Torkin Manes LLP in Toronto. She's also co-editor of, and a contributor to, a new book, Cloud 3.0: Drafting and Negotiating Cloud Computing Agreements.

Lifshitz spoke to Spark host Nora Young about how some of those legal issues might apply here in Canada.

When a computer or an AI-powered system is making the decisions, who's ultimately responsible if things go wrong?

Well, that's the challenge of dealing with new technologies and AI because there's a bit of an alphabet soup of who's responsible. In many instances it could be a number of parties. It could be the manufacturers of the devices and of the systems. It could be the entity that's purchasing the system if they didn't do their due diligence. It could be a number of other organizations and entities. So that's the interesting part — who is responsible.

When it comes to using some of these AI systems, how do we assign liability?

It's a complex question because it really depends on where you position yourself. Are you the manufacturer of the AI device or product? Are you the distributor? Are you the original programmer? What about the end user or the consumer? So what liabilities or obligations do they have to vet the products that they're using?

There could be issues relating to bias. There could be issues relating to making sure that they did their due diligence. There could be claims relating to flaws built into the algorithms themselves, or there could be claims relating to, in Canada, negligent design or manufacture, failure to warn, product liability claims, or breach of warranty, if they made warranties in their contract. There could be fraud, potentially even false or misleading representations or deceptive marketing under the Competition Act. So there's a whole laundry list of different legislation that may apply.

As we look toward a world where we're seeing more and more of these artificial intelligence and machine learning applications, do you think that Canadian law can adequately deal with these types of questions, or do we need more specific legislation?

There are two schools of thought with respect to how we treat new technologies. Traditionally in Canada we've tended to be more technology-neutral. We tend to look at things more broadly and try to create principles that would apply to varying technologies. We're less channel-specific, less focused on trying to create a specific law that deals with one aspect.

There's a lot to be said for that approach, because it allows evolution and, to be frank, by the time most law hits the books it's already out of date in terms of the technology it's trying to address. Other countries take a different approach, sometimes creating very specific laws dealing with very specific types of technology. That has been more the American approach, for example, rather than the broad, principles-based approach that we have. The reality is that in Canada it is early days, and there is a bit of a struggle in trying to apply current existing laws to some of these new technologies.
