Americans are rightly proud and protective of the freedom of speech contained in the First Amendment of the Constitution. Arguably, this right has done more to protect the Constitution and Americans than any other clause in that document. With very limited exceptions, it ensures that anyone in the nation can speak truth to power, express their opinion without government retribution, and shout an idea just for the sake of being heard by someone or no one. But as artificial intelligence grows in sophistication and we have more regular interaction with speech from autonomous programs like Amazon's Alexa, more people are beginning to wonder if the First Amendment is or should be so broad that it protects human and non-human speakers alike. A plain reading of the First Amendment suggests that the Constitution's protection of freedom of speech is not limited to human beings, but extends to AI and autonomous programs. However, that understanding of the First Amendment is controversial, and parties that rely on such technologies in their business and marketing plans should be aware that more limited interpretations are possible.

Autonomous Expression

Autonomous speech has already begun to occupy a growing space in American public and private discourse. For example, researchers examining social media activity leading up to the 2016 U.S. presidential election have documented the extent to which autonomous bots created content on social media in an attempt to influence voters.1 Additionally, 39 million Americans own a smart speaker, such as an Amazon Echo or Google Home, that operates an AI-powered personal assistant like Amazon's Alexa.2 These home AI systems converse with people in the household, and their designers have already begun to suggest that the First Amendment protects their speech.3

Speech from some autonomous bots has proven so controversial that their programmers shut down their creations. "Tay," the AI system created by Microsoft's Technology and Research and Bing teams, operated a Twitter account very briefly in 2016; it was intended to tweet like a normal teenage girl and learn from the other Twitter accounts that interacted with it. Unfortunately, based on those interactions, Tay became racist and anti-Semitic, forcing Microsoft to deactivate the account less than 24 hours after it first went online.4 In 2015, Amsterdam police questioned the programmer behind a Twitterbot that autonomously tweeted "I seriously want to kill people" at a fashion event in the city. The bot was programmed to create comprehensible sentences based on "random chunks" of the creator's actual Twitter feed, not to tweet with particular meaning or intent. Although the programmer explained this to the police, apologized, and deleted the bot, he also claimed he didn't know "who is/should be held responsible (if anyone)."5
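Bots like these are often built on surprisingly simple statistical machinery. The Slate piece describes the Dutch bot as recombining "random chunks" of its creator's tweets; a common way to implement that behavior is a word-level Markov chain, which strings words together based only on which words have followed which in the source text. The sketch below is a minimal, hypothetical illustration of that general technique, with an invented corpus and invented function names, not the actual code behind the bot:

```python
import random
from collections import defaultdict

def build_chain(tweets, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            prefix = tuple(words[i:i + order])
            chain[prefix].append(words[i + order])
    return chain

def generate(chain, max_words=20):
    """Random-walk the chain. The output is locally coherent but carries
    no meaning or intent of its own."""
    prefix = random.choice(list(chain.keys()))
    words = list(prefix)
    while len(words) < max_words and prefix in chain:
        words.append(random.choice(chain[prefix]))
        prefix = tuple(words[-len(prefix):])
    return " ".join(words)

# Hypothetical stand-in for the creator's real Twitter archive.
corpus = [
    "I seriously want to see the new exhibit this weekend",
    "people at the event said they want to kill time before the show",
]
print(generate(build_chain(corpus)))
```

Because the model samples at random from observed word sequences, it can produce grammatical sentences, including threatening ones, that no human ever composed or intended.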

Autonomous programs produce more than tweets and at-home banter. Companies like Narrative Science and Automated Insights have developed artificial intelligence systems that analyze large datasets to produce natural language stories autonomously. Narrative Science has worked with the Big Ten Network to produce short recaps of Big Ten Conference games within minutes of the conclusion of each game.6 Automated Insights claims that it produces more than 1.5 billion pieces of content a year for customers like the Associated Press, Yahoo, and PricewaterhouseCoopers.7 Other AI systems are expressing themselves in different mediums, including visual art and music.8
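At its core, this kind of data-to-text system selects a newsworthy "angle" from structured data and renders it in natural language. The companies' actual pipelines are proprietary and far more sophisticated, but a minimal, hypothetical sketch of template-based generation, with all team names and data fields invented for the example, looks something like this:

```python
def recap(game: dict) -> str:
    """Turn a structured box score into a one-sentence game recap."""
    margin = game["winner_score"] - game["loser_score"]
    # Pick an "angle" for the story based on the data itself.
    if margin >= 20:
        verb = "routed"
    elif margin <= 3:
        verb = "edged"
    else:
        verb = "defeated"
    return (
        f"{game['winner']} {verb} {game['loser']} "
        f"{game['winner_score']}-{game['loser_score']} on {game['date']}. "
        f"{game['star_player']} led the way with {game['star_points']} points."
    )

# Hypothetical box score standing in for a real data feed.
box_score = {
    "winner": "Michigan", "loser": "Ohio State",
    "winner_score": 82, "loser_score": 64,
    "date": "Saturday", "star_player": "A. Example", "star_points": 27,
}
print(recap(box_score))
```

Run against a season's worth of box scores, even a toy generator like this produces thousands of distinct, readable recaps with no human author in the loop.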

This means that AI and autonomous programs are creating, and will continue to create, speech and expression for businesses and individuals in a variety of functions: marketing, compliance reporting, customer service, and more. How those parties use the technology will be shaped, in part, by the protection the First Amendment affords it.

How the First Amendment May Protect Artificial Intelligence

To be clear, the issue under consideration here is not how private companies govern autonomous speech. There does not appear to be any doubt that companies like Facebook and Twitter can ban autonomous accounts in their user agreements. What remains undecided is what local, state, and federal governments can do to regulate autonomous speech from AI. Below, I outline the four major positions on how autonomous expression should be treated under the First Amendment. All four pose logistical, business, and legal concerns for parties using autonomous programs for speech and expression, but such concerns exist any time a person or business speaks. The first three models, however, are more problematic than the fourth.

1. Speech produced by AI/Autonomous Programs is not speech that is protected by the First Amendment. Under this model, the federal government and states can regulate and prohibit speech from AI however they want, with none of the constitutional limits that have historically applied to speech produced by human beings. This could become a blunt instrument against any AI expression that is unpopular or controversial, against any unpopular party relying on AI, or against any party that elected leaders want to penalize.

However, there are a few issues with this approach. The first is enforceability. Twitter has hundreds of millions of accounts, Facebook has billions, and even much smaller companies likely have thousands of accounts that could experience autonomous activity. Is it logistically possible to effectively police all of those accounts and terminate the ones with autonomous activity? Social media companies typically respond only to user complaints, and other industries are similar. What would the penalty be? Would there be fines for each autonomous account or each instance of autonomous expression discovered by law enforcement?

Professor Tim Wu of Columbia Law School has noted that the primary concern of the First Amendment is listeners and viewers, not speakers or broadcasters.9 To the extent that autonomous programs provide content that real people find interesting or worthwhile, this interpretation of the First Amendment hurts those people. The First Amendment is intended to preserve a marketplace of ideas in which all opinions are able to come forward and be considered, with the best ones winning in the long run. Prohibiting any ideas – even autonomously created ones – is contrary to that purpose.

2. AI/Autonomous Programs are only capable of producing speech based on code from a human programmer; therefore, speech from AI/Autonomous Programs is merely another form of human speech. This is a more nuanced approach, and it arguably ensures that listeners and viewers receive all available opinions and ideas, as the law would treat autonomous bots and other forms of AI as extensions of the people who programmed them. However, the model doesn't accurately capture what's already happening with technology, as AI speech has already grown beyond merely parroting the ideas and priorities of code writers. As mentioned above, the racist and anti-Semitic views espoused by Microsoft's Tay were hardly the intent of the programmers. The Dutch programmer whose Twitterbot threatened a fashion show did not intend to send that message. If we are going to create a model for addressing autonomous expression from AI and other autonomous programs under the First Amendment, that model should, at a minimum, accurately reflect what is really happening with those technologies.

3. Speech produced by AI/Autonomous Programs is only protected by the First Amendment when that speech represents the speech of its human programmer; speech from AI/Programs is not protected by the First Amendment otherwise. This approach attempts to address the problem in the previous model by differentiating between speech from AI and autonomous programs that is consistent with the programmers' intent and speech that is not. The problem, though, is that it relies on a fundamental question – "Is this speech representative of what the programmer would say?" – that is frequently impossible to answer. As the 2016 election illustrated, Twitterbots are frequently anonymous. How can a regulatory agency, state government, or court determine what the programmer thought when the programmer is anonymous? In the case of AI personal assistants, there isn't a single programmer; there are many. Is Alexa's speech only protected when it reflects Amazon's publicly available statements? Do we have to compare "her" statements to the notes taken at Amazon's board meetings? This model could make regulating AI speech unmanageable.

Even when speech from AI and autonomous programs differs from what its programmers would say themselves, that's not necessarily bad. Although it has been inactive for months, during the 2016 election an AI-powered Donald Trump emulator analyzed the real Donald Trump's Twitter output to create new tweets that he plausibly could have written. The programmer, Brad Hayes, then a postdoc at MIT's Computer Science and Artificial Intelligence Lab, was inspired by a training model that can simulate Shakespeare, as well as by a report that analyzed Trump's "linguistic patterns to find that Trump speaks at a fourth-grade level."10 Given that Hayes has considered developing accounts for Democratic politicians, it's fair to say his Donald Trump emulator is more of an exercise than a broadcast of political opinions. But as numerous satirists would attest, the fact that the bot's output doesn't reflect Hayes' own point of view doesn't mean it is undeserving of First Amendment protection.

4. Speech produced by AI/Autonomous Programs is speech that is protected by the First Amendment. This leaves us with the final and most compelling model for applying the First Amendment to speech produced by AI and autonomous programs: a literal reading. The actual text of the First Amendment suggests this is the correct model to apply to AI/autonomous program speech, as the amendment simply states that the government "shall make no law ... abridging the freedom of speech, or of the press." If we read the amendment as it is written, without importing preferences and biases, nothing in it suggests that freedom of speech is limited to people. Under this interpretation, all the constitutional protections that human speech enjoys in the United States would also apply to AI and autonomous programs, whether they are Microsoft mistakes or legitimate business exercises.

As mentioned above, companies that produce AI personal assistants appear to support this model. Additionally, this model is the easiest to enforce: govern AI speech like all other speech.

Challenges to Autonomous Expression

How could autonomous expression be challenged? Historically, there are enough examples of idiosyncratic laws and ordinances designed to limit free speech that it is not hard to imagine new ones directed toward Twitterbots, AI, and other forms of autonomous technology. For example:

  • In 1913, Florida enacted a law that required newspapers to give equal space to the opponents of the candidates the papers endorsed. It remained good law until the U.S. Supreme Court found it unconstitutional in 1974.11
  • In 1927, Minnesota enacted a law that permitted courts to shut down a newspaper viewed as "malicious, scandalous and defamatory." The Supreme Court found it unconstitutional in 1931.12
  • In 1932, the Los Angeles City Council passed an ordinance that criminalized the distribution of anonymous pamphlets. It was actively enforced until the Supreme Court found it unconstitutional in 1960.13

With regard to autonomous Twitterbots, some writers have alleged that they are not speech but are rather "speech ricochets" that "represent a form of technology that can be weaponized."14 That perspective could easily lead a city council, state legislature, or Congress to ban certain forms of autonomous speech. The state of Georgia recently passed legislation that eliminated a tax benefit in response to Atlanta-based Delta Air Lines severing its ties to the National Rifle Association.15 It is not hard to imagine AI-produced statements that could similarly inspire governmental backlash, up to and including banning the AI. If autonomous speech is not treated like any other speech under the First Amendment, that possibility has to be a concern for any person or company that relies on the technology, or expects to, in the long term.

Conclusion

By permitting the government to ban otherwise lawful speech, even AI speech, we introduce a great deal of uncertainty into the plans of the parties that want to use the technology and eliminate potentially useful voices. The First Amendment protects the speaker, but it also protects the rest of us, who are guaranteed the right to determine whether the speaker is right, wrong, useful, useless, or a badly programmed bot. We are owed that right regardless of who is doing the speaking. Having said that, companies and individuals developing expressive AI should be aware of the potential dangers of autonomous speech until First Amendment protection is assured.

Footnotes

1 Scott Shane, "The Fake Americans Russia Created to Influence the Election," New York Times, September 7, 2017, https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html.

2 Sarah Perez, "39 million Americans now own a smart speaker, report claims," TechCrunch, January 12, 2018, https://techcrunch.com/2018/01/12/39-million-americans-now-own-a-smart-speaker-report-claims/.

3 In 2017, when police sought access to a murder suspect's Echo, Amazon filed a legal memorandum in the murder trial that seemed to assert Alexa has First Amendment rights. Amazon's motion was couched in language that referred to "Amazon's First Amendment-protected speech," but referred specifically to "Alexa's decision" about the information Alexa chooses "to include in its response," suggesting that Alexa's First Amendment rights were at stake as well. State of Arkansas v. Bates, Case No. CR-2016-370-2 (Cir. Court Benton County, Arkansas), Memorandum of Law in Support of Amazon's Motion to Quash Search Warrant, filed February 17, 2017, p. 11.

4 Hope Reese, "Why Microsoft's 'Tay' AI bot went wrong," TechRepublic, March 24, 2016, https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/.

5 John Frank Weaver, "Who's Responsible When a Twitter Bot Sends a Threatening Tweet?", Slate, February 25, 2015, http://www.slate.com/blogs/future_tense/2015/02/25/who_is_responsible_for_death_threats_from_a_twitter_bot.html.

6 Steve Lohr, "In Case You Wondered, a Real Human Wrote This Column," New York Times, September 10, 2011, http://www.nytimes.com/2011/09/11/business/computer-generated-articles-are-gaining-traction.html?pagewanted=all&_r=0.

7 Automated Insights, Wordsmith homepage, https://automatedinsights.com/wordsmith.

8 Cade Metz, "How A.I. Is Creating Building Blocks to Reshape Music and Art," New York Times, August 14, 2017, https://www.nytimes.com/2017/08/14/arts/design/google-how-ai-creates-new-music-and-new-artists-project-magenta.html.

9 Tim Wu, "Is the First Amendment Obsolete?", Knight First Amendment Institute, September 2017, https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete.

10 "Postdoc develops Twitterbot that uses AI to sound like Donald Trump," MIT CSAIL, March 3, 2016, https://www.csail.mit.edu/news/postdoc-develops-twitterbot-uses-ai-sound-donald-trump.

11 Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974).

12 Near v. State of Minnesota, 283 U.S. 697 (1931).

13 Talley v. California, 362 U.S. 60 (1960).

14 Merritt Baer, "Do Russian-Backed Bots Qualify for Free Speech?", Daily Beast, October 29, 2017, https://www.thedailybeast.com/do-russian-backed-bots-qualify-for-free-speech.

15 Brooke Singman, "Georgia governor signs bills nixing Delta tax break after NRA split," Fox News, March 2, 2018, http://www.foxnews.com/politics/2018/03/02/georgia-governor-signs-bill-nixing-delta-tax-break-after-nra-split.html.

Published in Terralex Connections (May 30, 2018)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.