19 August 2024

NIST's Latest Guidance On Secure AI Development And Global Standards

Baker Botts



On July 26, the National Institute of Standards and Technology (NIST) released four guidance documents related to artificial intelligence (AI) development and implementation. These documents were issued pursuant to AI Executive Order 14110, which directed several U.S. government agencies to issue guidance and regulations addressing safe, secure, and trustworthy AI.

The first document, titled "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" (RMF GAI), describes and defines risks associated with generative AI (GAI) and outlines how organizations can govern, manage, and mitigate those risks. The RMF GAI applies the functions and categories of NIST's 2023 AI Risk Management Framework specifically to GAI technology. It provides a cross-sectoral profile for managing risks related to GAI implementation, applicable across different sectors and addressing both current concerns and potential future harmful scenarios.

NIST also released three additional documents:

"Secure Software Development Practices for Generative AI and Dual-Use Foundation Models" (SSDF), which updates prior NIST software development guidance to add recommendations for implementing secure development practices specifically tailored to generative AI systems. It covers the entire AI model development lifecycle and emphasizes practices to secure AI elements and mitigate risks from malicious tampering.

"A Plan for Global Engagement on AI Standards" (AI Plan), which provides directives to drive worldwide development and implementation of AI-related consensus standards, cooperation, and information sharing. It emphasizes the need for context-sensitive, performance-based, and human-centered AI standards.

"Managing Misuse Risk for Dual-Use Foundation Models" (MMRD), which offers comprehensive guidelines for identifying, measuring, and mitigating misuse risks associated with powerful AI models. This document is open for public comment through September 9, 2024.

Together, these guidance documents attempt to define best practices for reducing the risks that arise when developing and deploying AI models. While the NIST guidance is not legally binding, those developing or deploying AI models should take note, as deviation from prevailing practices or recommendations could introduce insurance or liability risks, particularly for those working with federal information systems.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
