Deepfakes, Disclosures, And Democracy: The Role Of AI In Elections

Steptoe LLP
As election day approaches, discussions around artificial intelligence (AI) and its role in elections have become more prevalent. With a growing demand for regulation and the general election nearing, state legislatures, Congress, and federal agencies are contemplating their roles in AI oversight. States across the country have passed legislation regulating deceptive political advertisements. Meanwhile, Sen. Amy Klobuchar has introduced three AI-related bills that are now awaiting consideration on the Senate floor, and the Federal Communications Commission (FCC) and Federal Election Commission (FEC) have attempted to mandate disclosures on AI-generated political advertisements.

State AI Bills: Regulating Use of Deepfakes in Political Ads

Without current federal regulation of AI-generated political advertisements, states have taken matters into their own hands by passing their own legislation. These state laws vary in how AI-generated content is regulated and the consequences of noncompliance. Some states have banned the use of AI in political advertisements altogether. For example, Alabama, Hawaii, Minnesota, Mississippi, and Texas now provide for criminal or civil penalties if AI-generated content is used to misrepresent a candidate or influence an election, regardless of whether the advertisement contains a disclosure.

Meanwhile, other states now require disclosures on political advertisements that use AI-generated content. Arizona, Colorado, Florida, Idaho, Indiana, New Mexico, and Washington mandate disclosures on political advertisements that contain deceptive AI-generated content or misrepresent a candidate. Michigan, Oregon, Utah, and Wisconsin go further, requiring a disclosure on any political advertisement that employs AI, regardless of whether the material is deceptive or misrepresents a candidate. The penalties for violating these laws range from civil fines to criminal liability, including jail time.

Sen. Klobuchar's Trio of AI Bills

At the federal level, Sen. Amy Klobuchar (D-MN) sponsored three AI-related bills now awaiting floor consideration: the AI Transparency in Elections Act of 2024 (S. 3875), the Protect Elections from Deceptive AI Act (S. 2770), and the Preparing Election Administrators for AI Act (S. 3897).

The AI Transparency in Elections Act requires a "clear and conspicuous" disclosure on political advertisements "substantially generated" by AI. The bill directs the FEC, alongside the National Institute of Standards and Technology (NIST), to promulgate regulations implementing the bill's disclosure requirements, which vary depending on whether the communication is an image, audio, or video. A safe harbor provision protects statements that meet the requirements outlined in the bill.

The Protect Elections from Deceptive AI Act would ban "materially deceptive" AI-generated audio, images, or videos of federal candidates used to influence elections or solicit funds. It allows candidates to have such content removed and to seek damages, including attorney's fees, in federal court. There are exceptions for news organizations, as well as for parody and satire.

Lastly, the Preparing Election Administrators for AI Act tasks the Election Assistance Commission (EAC) and NIST with issuing voluntary guidelines for election officials on the benefits and risks of using AI in election administration. Additionally, it requires the agencies to issue a report on the use and impacts of AI after the 2024 general election.

All three bills advanced out of the Senate Rules Committee in May and have been awaiting floor consideration since.

Agency Efforts to Regulate AI-Generated Political Ads

In July, the FCC released a notice of proposed rulemaking (NPRM) requiring disclosures for political advertisements containing AI-generated content. The NPRM does not seek to ban AI-generated content, but instead, it is aimed at increasing transparency by:

  • Seeking public comment on how to define AI-generated content with specificity;
  • Seeking public comment on "whether to require on-air disclosure and written disclosure in broadcasters' political files when there is AI-generated content in political ads";
  • Proposing that disclosure rules apply to both candidate and issue political advertisements; and
  • Proposing that disclosure rules apply to "broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers and section 325(c) permittees."

The FCC and FEC are divided in their support for the NPRM. While FCC Chairwoman Jessica Rosenworcel stated that the FCC's authority to regulate political advertising stems from the Bipartisan Campaign Reform Act, FEC Chairman Sean Cooksey disagreed, arguing that parts of the NPRM fall under the exclusive jurisdiction of the FEC. FCC Commissioner Brendan Carr also pushed back, stating that the NPRM would only "muddy the waters" and highlighting that streaming services would not be held to the same standards as broadcast TV. Carr and Commissioner Nathan Simington both published dissenting statements opposing the NPRM. FEC Vice Chair Ellen Weintraub, by contrast, supports the proposed rule, stating that the "public would benefit from greater transparency as to when AI-generated content is being used in political advertisements."

The FEC has struggled to regulate this field. Last year, after Public Citizen filed a petition for rulemaking, the Commission sought public comment on how it might regulate AI. The petition asked the Commission to amend its existing regulation to clarify that the related statutory prohibition on fraudulent misrepresentation also applies to deliberately deceptive AI-generated political advertisements. The comment period has since closed, however, and the Commission has taken no further action.

Steptoe's Campaign Finance and Political Law team is tracking these developments.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
