Today, Artificial Intelligence (AI) is no longer solely the domain of science fiction writers and Hollywood film studios. Unbeknownst to most consumers, AI has quietly made its way into many aspects of daily life, from depositing cheques through your phone to curated and targeted advertisements on social media platforms.

While many of these developments have the potential to increase our productivity and quality of life, or have already done so, our laws and the courts are playing catch-up. This article provides a primer on AI, explores its applications in business and everyday life, and considers the legal issues likely to arise as AI continues its rapid growth.

What is AI?

Before exploring the legal implications of AI, it is helpful to clarify what AI is. Different sources offer different definitions, but in its simplest formulation, AI can be thought of as the ability of computers to accomplish tasks normally associated with intelligent human behaviour. Most AI in use today does not actually replicate or mimic human intelligence but rather uses a more sophisticated form of traditional programming. In traditional computing, the programmer instructs the computer what to do in every possible scenario: the programmer supplies the intelligence, and the computer simply executes the task. In AI, the computer is taught to make decisions on its own by analyzing large data sets and drawing its own inferences and conclusions.
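
To make this contrast concrete, the short sketch below is a hypothetical Python illustration invented for this discussion (the messages, labels and word lists are not real data). It places the two approaches side by side: in the first function the programmer has written the decision rule out in advance, while the second derives a crude rule of its own from labelled examples.

    # Hypothetical toy example: deciding whether a message is spam.

    # Traditional programming: the programmer anticipates the scenarios and hard-codes the rule.
    def is_spam_rule_based(message):
        return "free prize" in message.lower() or "act now" in message.lower()

    # A learned approach: the program works out which words signal spam from labelled examples.
    examples = [
        ("claim your free prize today", True),
        ("meeting moved to 3 pm", False),
        ("act now to win cash", True),
        ("lunch tomorrow?", False),
    ]

    spam_words = set()
    for text, label in examples:
        if label:
            spam_words.update(text.lower().split())
    for text, label in examples:
        if not label:
            spam_words.difference_update(text.lower().split())

    def is_spam_learned(message):
        # The word list used here was inferred from the data, not written by the programmer.
        return any(word in spam_words for word in message.lower().split())

    print(is_spam_learned("you could win a free prize"))  # True, based on the inferred word list

The point of the sketch is simply that, in the second approach, the programmer never writes the rule itself; the rule is whatever the data produces.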

There are principally two types of AI: generalized AI and applied AI. Generalized AI refers to a machine or system that can handle any task thrown at it. Applied (or narrow) AI refers to a machine or system that can perform a specific task in a manner that mimics a component (but not all) of human intelligence. While generalized AI remains elusive, numerous advances in applied AI have emerged in the past few years. One of these advances is machine learning (ML). Nvidia, a company at the forefront of ML development, describes ML as "the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using large amounts of data and algorithms that give it the ability to learn how to perform the task."1
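
By way of a simplified, hypothetical illustration of that description (the loan scenario, the figures and the choice of the open-source scikit-learn library are the author's own and are not drawn from Nvidia's materials), the sketch below trains a small model on example data instead of hand-coding the decision logic:

    # Hypothetical sketch using the open-source scikit-learn library.
    from sklearn.tree import DecisionTreeClassifier

    # Each example is [annual_income, outstanding_debt]; the labels are invented for illustration.
    training_data = [[60_000, 5_000], [30_000, 25_000], [85_000, 3_000], [28_000, 18_000]]
    labels = [1, 0, 1, 0]  # 1 = loan repaid, 0 = loan defaulted

    # The algorithm parses the data and learns its own decision rule from it.
    model = DecisionTreeClassifier()
    model.fit(training_data, labels)

    # The trained model then makes a prediction about an applicant it has never seen.
    print(model.predict([[55_000, 9_000]]))

With enough representative data, the same basic pattern underlies applications such as reading a cheque in a photograph or ranking advertisements.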

In ML, as the system continues to perform its task, it learns from user input and becomes increasingly accurate. For example, when you mistype a query in the Google search bar, Google often asks whether you meant to search for something else. If you click on Google's suggestion, the system treats its prediction as correct and uses that feedback to improve its suggestions going forward.
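
A rough sketch of that feedback loop appears below; the misspelling, the candidate corrections and the click-counting approach are all invented for illustration and are not a description of how Google's actual system works.

    # Hypothetical illustration of learning from user feedback.
    from collections import defaultdict

    # For each misspelling, count how often users accepted each suggested correction.
    accepted_counts = defaultdict(lambda: defaultdict(int))

    def suggest(misspelling, candidates):
        # Rank candidate corrections by how often users have accepted them in the past.
        return max(candidates, key=lambda c: accepted_counts[misspelling][c])

    def record_click(misspelling, suggestion):
        # A click tells the system its prediction was useful, so future rankings improve.
        accepted_counts[misspelling][suggestion] += 1

    candidates = ["weather", "whether", "wether"]
    record_click("wether", "weather")
    record_click("wether", "weather")
    record_click("wether", "whether")
    print(suggest("wether", candidates))  # "weather", the correction users accepted most often

Each recorded click nudges the system's future behaviour, which is what allows it to improve without any programmer rewriting its rules.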

