1 August 2024

Steptoe White Paper: Artificial Intelligence And The Landscape Of US National Security Law

Steptoe LLP

Introduction

This white paper is intended to provide a broad overview of the various US national security laws that can apply to AI, illustrating the breadth of legal regimes that AI companies and companies using AI must keep in mind and the complexity of applying many of these regimes to novel developments in this rapidly evolving space.1

Much of today's discussion on AI centers around the lack of laws and regulations and the need for policymakers to catch up to rapidly evolving industry developments. Despite this narrative, AI is already subject to a significant number of national security-related laws and several new legal regimes will be implemented in short order. These national security-related regimes can apply to obvious cases such as the use of AI in weapons systems, but can also apply to AI with no clear, direct connection to national security. AI systems used in critical infrastructure, AI algorithms that power social media feeds, and generative AI that can create so-called "deepfakes" are just a few examples of AI systems that may implicate a number of US national security laws. 

While US policymakers are concerned about strategic competition with a number of foreign rivals and adversaries, there is no doubt that China is the country of greatest concern to US officials with respect to AI and national security. Of the various legal regimes and provisions discussed in this white paper, some are laws of general applicability that apply regardless of jurisdiction, some target a handful of jurisdictions viewed by US officials as particularly problematic, and some target a single country, such as certain export control measures against China or Russia.

Certain laws discussed herein apply broadly to transactions or other dealings that implicate US national security generally, while others apply specifically to AI. AI systems rely on two fundamental building blocks: (1) advanced semiconductors that provide the computing power needed to train, and in some cases operate, AI models; and (2) significant quantities of data used to train AI models. Both of those building blocks are also subject to a range of US national security laws and, while this paper focuses on AI software, it will also touch on those elements.

I. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

While many US national security laws already apply to AI, we begin with a discussion of the new national security regimes that will be implemented in the near future.

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the "AI EO").2 The preamble to the AI EO explains that the Biden administration "places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so."3 It adds, "The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society."4

Although the AI EO touches on a number of areas, perhaps the most significant and detailed is Section 4, entitled "Ensuring the Safety and Security of AI Technology," which lays out a number of key policy priorities related to AI and national security. Below we highlight some of the key initiatives from Section 4 of the AI EO.

A. Development of new standards, tools, and tests

The order requires the Department of Commerce (Commerce), including the National Institute of Standards and Technology (NIST), and other federal agencies to establish guidelines and best practices to promote "consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems."5 This includes creating new standards or revising existing ones related to AI risk management, secure software development, evaluating and auditing AI capabilities, and red-teaming.6 These standards cover a wide range of areas, including dual-use foundation models;7 generative AI; the use of AI in critical infrastructure; so-called "synthetic content," including deepfakes; and nuclear, chemical, radiological, and biological weapons proliferation, among many other topics.

While many of these standards are intended to be voluntary, others are intended to form mandatory requirements and certain of the voluntary standards could become mandatory with time – either because industry expectations make them a de facto requirement or because they are embedded in future regulations, statutes, or contracts with the US government.

Some of these standards have been released, at least in draft form, and are discussed below, while others are forthcoming.

B. Rules for developers of powerful AI models to share information with the US government

The AI EO directs the Department of Commerce to issue regulations requiring companies "developing or demonstrating an intent to develop potential dual-use foundation models" to provide regular reports to Commerce on a variety of topics, including: current and future business activities related to training, developing, and producing dual-use foundation models; the ownership, possession, and protection of the model weights of the dual-use foundation model; and the results of red-team testing based on guidance from the Department of Commerce and NIST, among other topics. 

The order also mandates the promulgation of rules requiring reporting by persons that "acquire, develop, or possess a potential large-scale computing cluster," including "the existence and location of these clusters and the amount of total computing power available in each cluster."8
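
For context, Section 4.2(b) of the AI EO sets interim technical thresholds for these reporting obligations pending further Commerce rulemaking: roughly, models trained using more than 10^26 integer or floating-point operations (10^23 for models trained primarily on biological sequence data), and computing clusters with a theoretical maximum of 10^20 operations per second for training AI. The short Python sketch below illustrates a first-pass, back-of-the-envelope check against those figures using the common community heuristic that training compute is approximately six operations per model parameter per training token; the heuristic, the function names, and the example figures are ours for illustration and are not drawn from the EO or any implementing rule.

```python
# Illustrative only: a first-pass check against the AI EO's interim
# reporting thresholds (Sec. 4.2(b)). The 6 * params * tokens rule is a
# common community heuristic, not the EO's definition, which counts
# actual integer or floating-point operations used in training.

TRAINING_OPS_THRESHOLD = 1e26   # Sec. 4.2(b)(i): total training operations
CLUSTER_FLOPS_THRESHOLD = 1e20  # Sec. 4.2(b)(ii): theoretical max ops/second

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute (hypothetical helper)."""
    return 6.0 * parameters * training_tokens

def may_trigger_model_reporting(parameters: float, training_tokens: float) -> bool:
    return estimated_training_ops(parameters, training_tokens) > TRAINING_OPS_THRESHOLD

def may_trigger_cluster_reporting(peak_ops_per_second: float) -> bool:
    return peak_ops_per_second > CLUSTER_FLOPS_THRESHOLD

if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on
    # 15 trillion tokens is roughly 6.3e24 operations, below the 1e26 line.
    print(may_trigger_model_reporting(70e9, 15e12))   # False
    print(may_trigger_cluster_reporting(2e20))        # True
```

Any actual reporting determination would, of course, turn on the operative regulatory text and Commerce guidance rather than an estimate of this kind.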

C. Reporting and customer due diligence rules for IaaS providers

With respect to infrastructure as a service (IaaS), the AI EO directs the Department of Commerce to require IaaS Providers to report to Commerce "when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity."9 Such reporting obligations must also be flowed down to "foreign resellers" of the IaaS Product.

The order further directs Commerce to issue rules requiring IaaS Providers to "ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller."10 Commerce has taken additional steps to implement this portion of the AI EO in a new notice of proposed rulemaking (NPRM), discussed below.
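
For a sense of what such identity verification may involve, EO 13984, which the AI EO builds upon, contemplates records of foreign customer information such as name, address, means and source of payment, email address, telephone number, and IP addresses used to access the account. The minimal sketch below models a customer record with fields of that kind; the class and field names are our own illustrative choices and do not reflect the NPRM's actual requirements.

```python
# Illustrative data structure only. Field names are hypothetical and
# track the kinds of customer information EO 13984 contemplates for
# foreign IaaS account verification; the actual requirements will be
# set by Commerce's final rule.
from dataclasses import dataclass, field

@dataclass
class ForeignCustomerRecord:
    name: str
    address: str
    payment_means_and_source: str  # e.g., card network and issuing bank
    email: str
    phone: str
    access_ip_addresses: list[str] = field(default_factory=list)
    identity_verified: bool = False  # set after the provider's KYC check

record = ForeignCustomerRecord(
    name="Example Customer Ltd.",
    address="1 Example Road, Example City",
    payment_means_and_source="corporate card, Example Bank",
    email="ops@example.com",
    phone="+44 20 0000 0000",
    access_ip_addresses=["203.0.113.7"],
)
```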


Footnotes

1. For purposes of this white paper, "AI companies" include companies developing, testing, training, researching, and selling or distributing AI products and services. "Companies using AI" refers to non-AI companies that use AI developed by others as part of their products and services. Given the rapid evolution of AI, and accompanying legal and regulatory frameworks, we anticipate updating this white paper periodically.

2. "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," 88 FR 75191 (Oct. 30, 2023), https://www.federalregister.gov/ documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence

3. Id.

4. Id.

5. Exec. Order No. 14110, § 4.1(a)(i).

6. As defined in the AI EO, the term "AI red-teaming" means "a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI." Exec. Order No. 14110, § 3(d).

7. The AI EO defines a "dual-use foundation model" as "an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters...." Exec. Order No. 14110, § 3(k). Notably, models fall into the above parameters "even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities." Id.

8. Id.

9. Id.; see also definitions of IaaS Provider and IaaS Product, Exec. Order No. 13,984, 86 FR 6837 (Jan. 25, 2021), https://www.federalregister.gov/d/2021-01714.

10. "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," 88 FR 75191 (Oct. 30, 2023), https://www.federalregister.gov/ documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence

