What Government Contractors Need To Know About Artificial Intelligence Legal Issues

Sheppard Mullin Richter & Hampton


Sheppard Mullin is a full-service Global 100 firm with over 1,000 attorneys in 16 offices located in the United States, Europe and Asia. Since 1927, companies have turned to Sheppard Mullin to handle corporate and technology matters, high-stakes litigation and complex financial transactions. In the US, the firm's clients include more than half of the Fortune 100.

The rapid growth of artificial intelligence (AI) adoption creates opportunities for government contractors. In particular, the US government's desire to increase its use of AI in government systems means contractors can help build out those systems. And like other companies, contractors can leverage AI to operate their own businesses more efficiently. But as government contractors seize these AI opportunities, they must grapple with a range of new legal issues as well.

The federal government has been grappling with similar issues, particularly how to regulate and procure this rapidly evolving technology. In October 2023, the Biden administration released Executive Order 14110 on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," establishing the White House's position on key aspects of AI use.1 Additionally, various agency initiatives are underway, both as a result of the Executive Order and otherwise, to shape the use of AI going forward. Notably, Executive Order 14110 mandated 150 specific actions for various government agencies to implement on very aggressive timelines. Agencies completed these action items on schedule,2 a genuine feat in the government world that signals the priority being placed on AI issues.

One key area, to which many of the mandates relate, is "responsible AI use." Companies that want to provide AI to the government will need to be aware of and adhere to these responsible AI use principles. Companies that use AI in their own operations must develop AI use policies based on these principles while also navigating a host of other legal issues. And companies using third-party AI tools need to ensure they conduct AI-specific vendor diligence.

This article describes key federal government initiatives relating to artificial intelligence and important considerations for government contractors on the use and development of AI. This article includes an overview of key legal issues, recommended elements of corporate AI policies, and important issues to consider when conducting AI-specific vendor diligence for contractors (and all companies, for that matter).

US Government AI Policy and Initiatives

Background on US Government AI Policy

Leading up to the release of Executive Order 14110, the White House in October 2022 published its "Blueprint for an AI Bill of Rights, Making Automated Systems Work for the American People."3 This document outlines five principles focused on ensuring protections for Americans with respect to AI: (1) Safe and Effective Systems, (2) Algorithmic Discrimination Protections, (3) Data Privacy, (4) Notice and Explanation, and (5) Human Alternatives, Consideration, and Fallback. The AI Bill of Rights is a voluntary, nonbinding framework that forms the basis for protections that were eventually included in Executive Order 14110.

In 2023, the federal government engaged in additional efforts to define its approach to AI. On January 23, 2023, the National Institute of Standards and Technology (NIST) released the first version of its "Artificial Intelligence Risk Management Framework" (AI RMF).4 This framework is a resource for organizations designing, developing, deploying, or using AI systems regarding management of risks associated with AI and promoting trustworthy and responsible development and use of AI systems. The NIST AI RMF is a voluntary framework.

On April 20, 2023, the Secretary of Homeland Security announced a new initiative to combat evolving threats, including those related to generative AI.5 The initiative includes the creation of an AI Task Force that will drive specific AI applications to advance critical homeland security missions, including (1) integrating AI to enhance the integrity of supply chains and the broader trade environment, such as deploying AI to improve screening of cargo and identifying the importation of goods produced with forced labor, and (2) collaborating with government, industry, and academia partners to assess the impact of AI on the Department of Homeland Security (DHS)'s ability to secure critical infrastructure.

Further, on April 25, 2023, officials from the Federal Trade Commission (FTC), the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the US Equal Employment Opportunity Commission (EEOC) released a joint statement on "Enforcement Efforts Against Discrimination and Bias in Automated Systems."6 And in July 2023 the White House met with and secured voluntary commitments from leading AI companies to manage the risks posed by AI.7 These commitments included (1) ensuring products are safe before being introduced to the public, (2) putting security first, and (3) working to earn public trust by ensuring transparency and accountability with respect to AI systems.

Executive Order 14110

Executive Order 14110 establishes a comprehensive framework for the development, deployment, and regulation of AI technologies.8 It underscores the importance of AI in enhancing national security, economic prosperity, and the quality of life of US citizens.9 Key sections of the Executive Order focus on (1) ensuring safety and security of AI technology; (2) promoting innovation and competition; (3) supporting workers; (4) advancing equity and civil rights; (5) protecting consumers, patients, passengers, and students; (6) protecting privacy; (7) advancing federal government use of AI; and (8) strengthening American leadership abroad.10 The directive mandates federal agencies to prioritize AI in their budgets, encourages the private sector's investment in AI research and development, and emphasizes the need for international collaboration to establish global norms and standards for AI.11

The anticipated effects of the Executive Order are far-reaching, touching multiple industries and sectors. The actions most significant for companies and government contractors are the following. The Executive Order:

  • Imposes testing obligations on developers of the most powerful systems and requires sharing results using the government's authority under the Defense Production Act;12
  • Directs many agencies to take specific actions to protect consumers, patients, students, and workers;
  • Contemplates assessments of job displacement due to AI, as well as potential remedies;
  • Mandates efforts for managing content authentication and provenance (e.g., to prevent deepfakes);
  • Calls on Congress to implement federal privacy legislation;
  • Takes aim at "BAD" AI (biased and discriminatory AI) to promote equity and civil rights;
  • Focuses on the government's responsible use of AI;
  • Creates programs and provides resources to enhance US leadership in innovation;
  • Promotes US leadership in coordinating global regulatory efforts; and
  • Takes steps to protect US infrastructure from foreign bad actors' use of AI.13

In short, the federal government plans to leverage the positive aspects of AI but acknowledges the development, deployment, and use of AI must be done responsibly.

The Executive Order includes myriad tasks for agencies, with many due within 90 to 270 days of issuance of the Order.14 On January 29, 2024, the White House released a report announcing that agencies had completed all of the 90-day actions tasked by the Executive Order;15 and on April 29, 2024, the White House released a similar report announcing completion of all 180-day actions.16 Below we discuss the actions and initiatives that are key for companies that do business with the federal government, as well as those in critical infrastructure sectors and other relevant industries.

NIST Guidelines and Best Practices

As noted above, NIST released its AI RMF prior to issuance of Executive Order 14110. Under the Executive Order, NIST is further tasked with establishing guidelines, standards, and best practices relating to AI (building on its AI RMF).17 These new guidelines include (1) an AI RMF companion publication for generative AI; (2) a resource for secure software development practices for generative AI; (3) benchmarks for evaluating AI capabilities focusing on harm in cybersecurity and biosecurity; and (4) resources for development and testing to enable safe, secure, and trustworthy AI—particularly with regard to dual-use foundation models.18

As of the Executive Order's six-month mark, NIST released these new guidelines as draft documents for public comment. The documents include:

  • AI RMF Generative AI Profile (NIST AI 600-1);19
  • Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication 800-218A);20
  • Reducing Risks Posed by Synthetic Content (NIST AI 100-4);21 and
  • A Plan for Global Engagement on AI Standards (NIST AI 100-5).22

NIST further announced its NIST GenAI program, which will evaluate generative AI capabilities through testing and seeks to tackle issues associated with identification of synthetic content.23

The NIST guidance consists of voluntary frameworks to assist companies with the responsible development of AI. It does not have the force of law, but it may not be without legal significance. As in the cybersecurity space, where NIST standards form the basis for Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) requirements for security controls for sensitive information, NIST guidelines on AI are likely to be included in eventual laws and regulations applicable to companies and contractors that sell AI products and services to the federal government.

Additionally, based on some proposed state legislation, adherence to the NIST AI RMF may be required sooner rather than later. For example, proposed Colorado legislation seeks to impose obligations on developers and deployers of AI systems and provides an affirmative defense for those that have implemented and maintained a program complying with a nationally or internationally recognized risk management framework for AI systems.24 NIST's AI RMF is one of the most frequently cited risk management frameworks in the United States.25

Footnotes

1. Exec. Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

2. Id.

3. Off. of Sci. & Tech. Pol'y, Blueprint for an AI Bill of Rights, White House (Oct. 4, 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

4. AI Risk Management Framework (RMF) 1.0, NIST (Jan. 2023), https://www.nist.gov/itl/ai-risk-management-framework.

5. Memorandum from Alejandro N. Mayorkas, Sec'y, DHS, to Dr. Dimitri Kusnezov & Eric Hysen, Establishment of a DHS Artificial Intelligence Task Force (Apr. 20, 2023), https://www.dhs.gov/sites/default/files/2023-04/23_0420_sec_signed_ai_task_force_memo_508.pdf.

6. Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, Fed. Trade Comm'n (Apr. 25, 2023), https://www.ftc.gov/legal-library/browse/cases-proceedings/public-statements/joint-statement-enforcement-efforts-against-discrimination-bias-automated-systems.

7. See Press Release, The White House, Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI (July 21, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

8. Exec. Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

9. See id.

10. See id.

11. See id.

12. Defense Production Act of 1950, Pub. L. No. 81-774, 64 Stat. 798 (codified as amended at 50 U.S.C. § 4501 et seq.).

13. See Exec. Order 14110, 88 Fed. Reg. 75,191.

14. See id.

15. Press Release, The White House, Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden's Landmark Executive Order (Jan. 29, 2024), https://www.whitehouse.gov/briefing-room/statements-releases/2024/01/29/fact-sheet-biden-harris-administration-announces-key-ai-actions-following-president-bidens-landmark-executive-order/.

16. Press Release, The White House, Fact Sheet: Biden-Harris Administration Announces Key AI Actions 180 Days Following President Biden's Landmark Executive Order (Apr. 29, 2024), https://www.whitehouse.gov/briefing-room/statements-releases/2024/04/29/biden-harris-administration-announces-key-ai-actions-180-days-following-president-bidens-landmark-executive-order/.

17. Exec. Order 14110, 88 Fed. Reg. 75,191.

18. See id.

19. NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (Apr. 2024), https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf.

20. NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (Apr. 2024), https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-218A.ipd.pdf.

21. NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency (Apr. 2024), https://airc.nist.gov/docs/NIST.AI.100-4.SyntheticContent.ipd.pdf.

22. NIST AI 100-5, A Plan for Global Engagement on AI Standards (Apr. 2024), https://airc.nist.gov/docs/NIST.AI.100-5.Global-Plan.ipd.pdf.

23. For additional information on these documents, see James Gatto, NIST Updates AI RMF as Mandated by the White House Executive Order on AI, SheppardMullin (Apr. 30, 2024), https://www.ailawandpolicy.com/2024/04/nist-updates-ai-rmf-as-mandated-by-the-white-house-executive-order-on-ai/.

24. Colorado Artificial Intelligence Act (CAIA) (S.B. 24-205). For more information on the proposed Colorado legislation, see James G. Gatto, Colorado Introduces an AI Consumer Protection Bill, Nat'l L. Rev. (Apr. 12, 2024), https://natlawreview.com/article/colorado-introduces-ai-consumer-protection-bill.

25. AI Risk Management Framework (RMF) 1.0, supra note 4.


Originally published by the ABA.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
