From Humble Beginnings

Developments across various fields of artificial intelligence have enabled remarkable advances in automation tools. At the core of artificial intelligence is training machines to learn through observation (i.e., pattern recognition).1

What began as the narrow application of statistics to identify significant characteristics and traits in data has evolved into a vast array of complex automation systems, including everything from the machine-vision systems used by self-driving cars, to the voice-to-text systems behind Apple's Siri, to the natural language processing techniques leveraged by social media platforms.

The approaches to building artificial intelligence systems vary greatly, with a wide range of potential learning models: from supervised learning (i.e., training on human-labeled data), to unsupervised learning (i.e., discovering structure in unlabeled data), to hybrid approaches like semi-supervised learning and goal-oriented approaches like reinforcement learning.2

Emergence of Deep Fakes

Computer-generated and manipulated images and video have existed at varying levels of sophistication for several decades. That sophistication has increased dramatically with the recent advent of generative deep-learning models, such as variational autoencoders and generative adversarial networks (GANs). In a GAN, one neural network (the generator) produces content (e.g., realistic images of people) while a second, adversarial network (the discriminator) attempts to identify the computer-generated content as fabricated.3 This computational gamesmanship simultaneously sharpens the discriminator's ability to detect fakes and refines the generator's ability to produce realistic content.
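The adversarial dynamic described above can be sketched in a few dozen lines of Python. The toy example below is a minimal illustration, not any production Deep Fake system: the one-parameter linear "generator," the logistic-regression "discriminator," the 1-D Gaussian "real" data and all learning-rate and step-count choices are illustrative assumptions. It nonetheless follows the same loop as a real GAN, alternating discriminator and generator updates.

```python
# Minimal sketch of adversarial (GAN-style) training, NumPy only.
# All names, sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: the distribution the generator tries to mimic.
    return rng.normal(loc=4.0, scale=0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps standard-normal noise z to samples g(z) = w*z + b.
gen_w, gen_b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(a*x + c), the probability that x is real.
disc_a, disc_c = 0.0, 0.0

lr, batch = 0.01, 64
for step in range(2000):
    # --- Discriminator update: push d(real) toward 1, d(fake) toward 0. ---
    x_real = real_samples(batch)
    z = rng.normal(size=batch)
    x_fake = gen_w * z + gen_b
    d_real = sigmoid(disc_a * x_real + disc_c)
    d_fake = sigmoid(disc_a * x_fake + disc_c)
    # Gradients of the binary cross-entropy loss w.r.t. (a, c).
    grad_a = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    disc_a -= lr * grad_a
    disc_c -= lr * grad_c

    # --- Generator update: push d(fake) toward 1 (fool the discriminator). ---
    z = rng.normal(size=batch)
    x_fake = gen_w * z + gen_b
    d_fake = sigmoid(disc_a * x_fake + disc_c)
    # Chain rule through the discriminator into (w, b).
    g_common = (d_fake - 1.0) * disc_a
    gen_w -= lr * np.mean(g_common * z)
    gen_b -= lr * np.mean(g_common)

fake_mean = float(np.mean(gen_w * rng.normal(size=10000) + gen_b))
print(f"generated sample mean: {fake_mean:.2f} (real data mean: 4.0)")
```

After training, the generator's samples typically drift toward the real data's distribution: neither network "wins," but each round of the game leaves the generator's output harder to distinguish from the real thing, which is precisely what makes the technique potent for content fabrication.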

Moreover, computer-generated content is not limited to video, audio or graphical renderings, but can also apply to any domain for which learning systems have adequate training data, including natural language, art and musical composition.

OpenAI, a leading AI research institution, recently released both the source code and research findings for state-of-the-art machine reading comprehension, real-time machine translation and coherent question answering, all of which the machine can perform without predetermined or task-specific guidance.4 State-of-the-art open-source code, accessible to anyone, such as that provided by OpenAI, has demonstrated AI's ability to engage in unguided, realistic, extemporaneous written dialogue and natural language content generation. Moreover, in 2018 Google unveiled Duplex, which can engage in unscripted, realistic, real-time voice dialogue with humans who do not realize they are speaking to a machine.5 Imagine the impact this could have on phone interviews or on allegations of misconduct supported by video or audio "evidence."

What's Next and What It Means for the Future of Work

As AI-based Deep Fake content-generation tools continue to grow in sophistication, fewer aspects of their operation or deployment require specialized technical knowledge or custom development. While it is not uncommon for a party to produce a fake or doctored e-mail for purposes of retaliation, exploitation or reputational harm, fraudulent e-mails rarely withstand even cursory scrutiny, as e-mail servers, databases, mail clients and digital devices can be examined for authentication purposes. The nature of digital communications, moreover, provides numerous opportunities for forensic analysis. However, with Deep Fake tools readily available on social media platforms like Snapchat and Instagram that can easily map a person's face and digitally graft their likeness onto another person, even unsophisticated parties have the potential to be incredibly destructive to unprepared employers.

For example, the momentary recording of a single phone call, video conference or webinar may provide sufficient training material for a nefarious party to impersonate a member of management using nothing more than free software or features found on social media platforms. Imagine a hypothetical in which an employer with a substantial number of warehouse-based employees is in the process of automating certain historically human-intensive job functions with robotic assistance. A short audio recording allegedly captures a member of management saying, "they'll all be replaced by this time next year," or mocking the concerns of employees. The recording is then circulated, immediately inflaming a highly delicate situation and generating a public relations crisis, which political operatives on social media seize upon as fodder to vilify the company for trivializing its human workforce, all before management has had adequate time to investigate or respond appropriately.

With resource constraints no longer a material barrier to accessing state-of-the-art Deep Fake tools, and with those tools becoming ever easier to use, the frequency and scope of Deep Fake content generation is poised to accelerate rapidly. In other words, Deep Fake usage is no longer limited to the very technically sophisticated. The availability and potential creative use of these tools by any motivated party is increasingly relevant in the context of the workplace. Emerging startups designing systems to detect Deep Fakes are taking various forensic approaches to analyzing content, including scrutinizing video for imperceptible frame drops, developing algorithms that can predict accurate light reflection in an image, and identifying device-specific metadata. The use of Deep Fakes is expanding rapidly, and employers need to be aware of the risk posed by this technology before the company becomes an unwitting victim of its potentially catastrophic and nefarious applications.

Regardless of the learning model implemented for a particular function, these AI-based systems are increasingly ubiquitous, easier to replicate and less resource-intensive to leverage. AI-powered Deep Fakes can lead to risk and liability in hiring scenarios, investigations related to employee misconduct, public relations issues and workplace whistleblower situations, to name a few. For example, in a hostile workplace, sexual harassment or labor-relations case, the alleged appearance of impropriety or inappropriate conduct could be utilized to support claims of a specific pattern of workplace behavior and apply additional pressure on an organization during litigation.

Vulnerable Targets

With fully functioning Deep Fake tools readily available, the primary limitation on impersonating an individual or fabricating an event is the training data that the content-generation tools use as a reference from which to draw inferences, extract data points and classify target- or situation-specific characteristics. Therefore, the higher the quality and volume of the training data, the better the results.

Public figures and individuals with large quantities and variations of recordings of their likeness in the public domain are obvious possible targets for Deep Fake impersonations and exploitation. But a manager who frequently leaves voice messages or other audio files may also be susceptible to fakes. Moreover, while the precise manifestations and contexts of future Deep Fakes are unknown, the following are several plausible circumstances where Deep Fakes could foreseeably arise.

  • Computer-Generated Impersonations – Re-creation of an individual's likeness. This can include fabricated video content, images, audio, writing style and simulated dialogue in the form of fabricated interpersonal exchanges, alleged recordings and even real-time impersonation and misrepresentation via audio, video or text. One disturbing consideration is that there is no limit on the number of real-time, computer-generated impersonations that can occur in parallel; thus, many different people could be under the impression they are having, or had, entirely different conversations and interactions with the same person at the same point in time. For example, imagine if all your friends, family, colleagues, neighbors, social media contacts, places of business and an arbitrary number of strangers were under the impression that they were speaking to you live, right now (without your knowledge), each having a uniquely preposterous and negative personal interaction with a computer-generated impersonation of you. In a workplace context, the potential damage an organization could incur to strategic partnerships, public relations, workplace communications and essential operations from computer-generated, real-time impersonations of individuals within management is extensive and profound.
  • Original Computer-Generated Personas – Creation of a persona that does not exist and never has existed.6
  • Mixed Interactions Between Machines and Humans – A simulated interaction involving impersonated and/or computer-generated personas. This can take the form of an alleged recording of an event that never occurred, or occur in real time between humans and machines impersonating a real or computer-generated persona.7

Footnotes

1 Wolfram Burgard, Bernhard Nebel, and Martin Riedmiller, Foundations of Artificial Intelligence - 8. Machine Learning from Observations (2011), http://ais.informatik.uni-freiburg.de/teaching/ss11/ki/slides/ai08_machine_learning_handout_4up.pdf.

2 Yann LeCun, Yoshua Bengio and Geoffrey Hinton, Deep Learning, Nature (Vol. 521, May 28, 2015). See https://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf.

3 David Foster, Generative Deep Learning (2019). See https://www.oreilly.com/library/view/generative-deep-learning/9781492041931.

4 Better Language Models and Their Implications, OpenAI (February 19, 2019). See https://blog.openai.com/better-language-models.

5 Yaniv Leviathan and Yossi Matias, Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone (May 8, 2018). See https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html.

6 See https://thispersondoesnotexist.com.

7 See Google Duplex, supra note 5.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.