Throughout most of the 20th century, artificial intelligence (AI) was the stuff of science fiction. Thanks to huge leaps in computational power since the late 1990s, however, AI has jumped from the pages of science fiction to reality.

AI is now a genuine growth area among emerging technologies, and its newfound viability is creating new investment opportunities across many industries. In this article, we outline a few key questions to consider when looking at an investment in a target company that uses AI to provide products or services.

A 2018 report by McKinsey found that the economic impact of AI will most likely follow an S-curve pattern: slow at the beginning, given the investment required to adopt the technology, followed by rapid growth driven by competition and improvements in complementary capabilities.1 AI is being aggressively adopted by forward-looking businesses, particularly in the banking, retail and automotive sectors.2 This rapid adoption is no surprise, as “the mathematical and statistical foundations of current AI are well established”3 and companies (both tech and traditional businesses) will likely face a threat from AI-driven competitors or new entrants in the near term. The companies that power this progress will become attractive targets for acquisition and investment as these trends unfold.

Canada has quickly become one of the world’s leading hubs for AI (alongside the San Francisco Bay Area, New York-Boston, London, Bangalore, Berlin, Beijing, Shenzhen and Tel Aviv).4 In particular, Toronto and Montréal have leading research and commercial centres in the AI space. Toronto has the support of all three levels of government through the launch of the Vector Institute, a non-profit organization focused on AI research and start-up incubation. The University of Toronto also counts among its ranks Geoffrey Hinton, one of the pioneers of artificial neural networks and an AI advisor to Google. Montréal, for its part, is home to the Montreal Institute for Learning Algorithms and to Yoshua Bengio, one of the fathers of deep learning.5

As these opportunities progress from venture investments into large-scale corporate transactions, parties should take the time to understand, assess and allocate the AI-associated risks. Key questions to ask are:

  • How is the AI provided?
  • What data drives the AI?
  • Who owns the intellectual property created by the AI?
  • Who is responsible for liability generated by the AI?

Through careful consideration of these questions, an investor or acquiror can pursue diligence activities designed to assess risk, and can include terms in transaction documents that appropriately allocate risks among the transaction parties.

How is the AI provided?

Before acquiring or investing in any AI-driven target, investors will want to conduct thorough due diligence. In particular, they should seek to learn as much as they can about how the AI is actually provided to the target and its customers.

Few organizations selling AI-based products or services have actually built their own free-standing AI capabilities. Far more often, the organization is leveraging the capabilities of an AI provider such as IBM, Microsoft or Google to create its offering. In that scenario, the relationship between the AI provider and the target should be thoroughly understood.
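
To see why this matters, consider the following minimal sketch in Python (the provider endpoint, request format and product details are invented for illustration and do not describe any actual vendor’s API). When the intelligence itself is hosted by a third party, the target’s proprietary layer may amount to little more than domain-specific framing and post-processing around the provider’s service:

```python
# Hypothetical illustration only: the endpoint, credentials and response shape
# below are invented for this sketch and do not describe any real vendor's API.
import os
import requests

PROVIDER_URL = "https://api.example-ai-provider.com/v1/classify"   # hosted by the AI provider
API_KEY = os.environ.get("AI_PROVIDER_API_KEY", "")                 # credential governed by the provider's terms

def score_loan_application(application_text: str) -> dict:
    """The target's 'product': domain-specific framing and business rules
    wrapped around a model the target neither owns nor operates."""
    payload = {
        "input": application_text,
        "labels": ["approve", "refer_to_underwriter", "decline"],   # the target's domain knowledge
    }
    response = requests.post(
        PROVIDER_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()
    # The target's proprietary post-processing layer.
    if result.get("top_label") == "approve" and result.get("confidence", 0) < 0.8:
        result["top_label"] = "refer_to_underwriter"
    return result
```

Diligence should establish which of these layers the target actually owns, and what happens to the product if the provider’s endpoint, pricing or terms change.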

What aspects of the target’s product are actually proprietary to the target and what will remain property of the AI provider?

Even at a high level, an investor will want to understand what is special about the target’s business and service offering. What is it doing that a competitor could not easily replicate by leveraging the underlying AI provider’s service?

Does the underlying AI provider introduce business continuity risk for the target?

Investors will want to know whether the underlying AI provider is likely to continue making its AI capabilities available in the future. Even where the AI provider is a reputable party, investors will want to confirm that the AI will be hosted and maintained in a manner that allows the target’s business to function, and to understand what options (and at what cost) exist to transition to a different AI provider. When considering this question, the diligence process should try to uncover how stable the pricing, terms of use, and even the service descriptions and specifications associated with the underlying AI are, as many AI providers seek broad rights to change their offerings upon notice. Certain changes could render the target’s product no longer economically or practically viable.

Is the target’s use of the AI in compliance with its terms of service with the AI provider?

AI services are generally subject to license terms and restrictions. These restrictions can be defined geographically, by the number of authorized users, or by specific types of use of the AI provider’s service. If the target’s use is in breach of these restrictions, curing the breach may lead to unexpected costs or, where the breach cannot be cured, may lead to the target losing access to the AI that powers its product.

What data drives the AI?

AI is only as good as the data that drives its decision-making. Before investing in a target in the AI space, parties should seek to understand what information the underlying AI platform is accessing to build its algorithms.

The key question when it comes to this data is: does the target have the right to use it for its business purposes?

One (perhaps obvious) question to ask is whether the AI platform accesses, stores or uses personal information. In Canada, the Personal Information Protection and Electronic Documents Act (PIPEDA) imposes a consent-driven regime that places limits on the scope of use of personal information, and certain sectors (such as health care and financial services) may be subject to regulations or codes of conduct that impose further restrictions. Where the personal information includes data relating to Europeans, the EU General Data Protection Regulation (GDPR) will need to be considered as well.

In addition to personal information, AI may make use of other proprietary or confidential information. Anyone who has taken the time to seek out the “Legal” tab at the bottom of most websites will know that even publicly available information on the internet may be subject to terms of use and other restrictions incompatible with an AI-driven product or service. For example, a government website that makes its data available freely online may also include prohibitions on using that information for any commercial purpose. Where the AI scrapes public-facing websites in order to collect its data sets, it may be difficult to gain any assurance that such information has been obtained and used in a manner that complies with any applicable terms of use.
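
Some comfort on this point can come from technical as well as contractual diligence. As a modest illustration, a sketch in Python using only the standard library is set out below; note that robots.txt records a site’s crawl preferences only and is not a substitute for reviewing its legal terms of use. A target with good data hygiene might at least record, for each source, whether automated collection appeared to be permitted at the time of scraping:

```python
# Minimal sketch: check a site's robots.txt before scraping and record the answer.
# robots.txt expresses crawl preferences only; it does not waive or replace the
# site's terms of use, which still need to be reviewed separately.
from datetime import datetime, timezone
from urllib import robotparser
from urllib.parse import urlparse

def crawl_permission_record(url: str, user_agent: str = "target-data-bot") -> dict:
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt
    return {
        "source_url": url,
        "robots_txt": robots_url,
        "crawl_allowed_by_robots": parser.can_fetch(user_agent, url),
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "terms_of_use_reviewed": False,  # to be completed by a human reviewer
    }

if __name__ == "__main__":
    print(crawl_permission_record("https://www.example.com/open-data/prices.csv"))
```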

Similarly, AI that makes use of data sets made up of information the target receives from its customers may be breaching the confidentiality terms between that customer and another third party. Breaches of confidentiality agreements may lead to significant reputational harm in the event of a dispute with a customer and may open the target up to a real risk of liability or injunctive orders which could hinder the operation of the target’s business.

Appropriate diligence processes will help put some shape to the risk posed by the AI’s data use, access and storage:

  • Where do the AI’s data sets come from?
  • Is it possible to understand how the AI uses that data?
  • How well do the target’s agreements with customers or other data providers protect the target from liability stemming from the target’s handling of data?
  • Has the target obtained “public” data in a manner that complies with its terms of use?
  • Has the target properly recorded those terms?
  • Has the target appropriately catalogued which agreements and which terms apply to each component of its data set? (A minimal sketch of such a catalogue follows this list.)
  • If the AI stores data, are there adequate controls in place to protect it and to remediate data breaches?
  • Does the collection, storage and use of the data comply with applicable laws?
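
On the cataloguing point in particular, much depends on whether the target keeps any structured record of data provenance at all. The sketch below (in Python; the field names are illustrative assumptions rather than any industry standard) shows the kind of per-source record that makes the questions above answerable:

```python
# Illustrative provenance record for one component of an AI data set.
# Field names are assumptions made for this sketch, not an industry standard.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DataSourceRecord:
    name: str                            # e.g. "customer support transcripts, 2019-2021"
    provider: str                        # counterparty or website the data came from
    governing_agreement: str             # license, customer contract or terms-of-use reference
    permitted_uses: list[str]            # uses the agreement actually allows
    contains_personal_info: bool         # if True, triggers PIPEDA/GDPR analysis
    retention_limit: str | None = None   # contractual or legal retention period, if any
    notes: list[str] = field(default_factory=list)

catalogue = [
    DataSourceRecord(
        name="public commodity price history",
        provider="government open-data portal",
        governing_agreement="open-license terms saved to the data room",
        permitted_uses=["commercial use with attribution"],
        contains_personal_info=False,
    ),
]
```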

In many cases, the extent to which data sets used by an AI platform access personal information or other proprietary or confidential information may be difficult to discern through reasonable diligence. Investors and targets should consider including risk allocation mechanisms in the transaction documents, such as specific indemnities and representations and warranties (in appropriate circumstances, backed up by holdbacks or escrowed amounts), to address the AI’s use of data, bearing in mind the fundamental importance of that data to the operation of the AI.

Who owns the IP?

In many ways, IP issues related to AI are not much different from those in any technology-heavy transaction. AI technology, and AI licensing in particular, raises the question of who owns, and who has rights to, the intellectual property generated by the AI.

Proper diligence activities should help determine what the target has created using the AI, and what sort of IP those creations include. Ownership and license rights will, to a large extent, be determined by the target’s contractual arrangements with its AI provider and customers.

Matters that cannot be determined by diligence can, at least as between the target and the investor (or seller and purchaser), be settled through appropriate indemnities, representations and warranties (and in appropriate circumstances backed up by holdbacks or escrowed amounts).

Who is responsible for AI-generated liability?

A special attribute of AI is its ability to produce results with limited human intervention. This makes certain applications of AI inherently riskier than more traditional products and services. For example, if an AI service is meant to serve offers for goods or services directly to consumers, it could inadvertently end up providing preferential pricing to one group of users over another. Even if the AI was not programmed to discriminate, the effects of its results could be discriminatory. In such cases, the potential consequences are as much reputational as legal in nature.

There are ways to try to understand the scope of these types of risks through proper diligence:

  • Has the target’s liability to its customers been effectively limited contractually or otherwise by law?
  • Does the target collect information about its outputs in a manner that provides some comfort that, at least to date, the AI has not implicated the target in any discriminatory, illegal or tortious act?
  • Does the target routinely test its AI to audit the outputs and tailor the data inputs and the AI’s algorithm? (A simple example of such an audit follows this list.)
  • Is it possible to understand the AI’s decision-making process sufficiently to control for potential liability in the future?
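
Returning to the pricing example above, an output audit need not be elaborate to be informative. The following toy sketch in Python (with invented data; a real fairness review would involve proper statistical testing, and collecting the group attribute may itself raise privacy issues) simply compares the average prices offered to different groups of users and flags material disparities:

```python
# Toy output audit: compare average offered prices across user groups.
# A real audit would use statistical tests, more attributes and far more data;
# this only illustrates the kind of routine check a target might run.
from collections import defaultdict

def average_price_by_group(offers: list[dict]) -> dict[str, float]:
    prices_by_group: dict[str, list[float]] = defaultdict(list)
    for offer in offers:
        prices_by_group[offer["group"]].append(offer["price"])
    return {group: sum(prices) / len(prices) for group, prices in prices_by_group.items()}

def flag_disparity(averages: dict[str, float], tolerance: float = 0.05) -> bool:
    """Flag if any group's average offer differs from the overall mean by more than `tolerance`."""
    overall = sum(averages.values()) / len(averages)
    return any(abs(avg - overall) / overall > tolerance for avg in averages.values())

# Hypothetical logged offers (note that the "group" label itself may be personal information).
offers = [
    {"group": "A", "price": 102.0},
    {"group": "A", "price": 98.0},
    {"group": "B", "price": 121.0},
    {"group": "B", "price": 119.0},
]
averages = average_price_by_group(offers)
print(averages, "disparity flagged:", flag_disparity(averages))
```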

In many cases, however, the AI’s functioning and the impact of its application to the real world will remain opaque. For that reason, a careful allocation of risk through indemnities and representations and warranties is advisable, taking into account the amount of control each party has (or ought to have) over the AI’s outputs at any given time.

Conclusion

AI offers great opportunity for investment in the M&A and venture capital space, and that opportunity is only expected to grow as AI adoption increases and cost efficiencies improve. Particular legal issues arise when investing in AI because of its data requirements and its autonomous capabilities, but acquisition and investment agreements are flexible instruments. Appropriate diligence and careful allocation of risk through indemnities, representations, warranties, holdbacks, escrows and price adjustments are the parties’ best defence against the uncertainty inherent in pursuing opportunities in this emerging and exciting space.

Footnotes

1 See “Notes from the AI frontier: Modeling the impact of AI on the world economy,” McKinsey Global Institute, September 2018.

2 See “Why Companies That Wait to Adopt AI May Never Catch Up,” Harvard Business Review, December 6, 2018, and “Notes from the AI frontier: Modeling the impact of AI on the world economy.”

3 See “Why Companies That Wait to Adopt AI May Never Catch Up.”

4 See “2017 in Review: 10 Leading AI Hubs,” Medium, December 18, 2017, and “Top 6 Artificial Intelligence Hubs in 2018: An Analysis for Their Localization,” Analytics Insight, June 6, 2018.

5 See “How Canada became a hotspot for artificial intelligence research,” DMZ, Ryerson University.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.