ARTICLE
15 February 2023

Federal Guidance Offers Framework To Minimize Risks In AI Use

Wiley Rein

Wiley Rein's Duane Pozza explores the broader impact of NIST and White House frameworks for corporate AI use, noting that the federal guidance offers steps to reduce risk while maximizing benefits of the technology.

These days, artificial intelligence is capable of a range of tasks, from answering online questions to ghost-writing term papers to helping with critical medical diagnoses.

As AI promises more new benefits, there remain concerns about risks if AI-powered innovations are not properly managed.

The federal government has begun to weigh in more actively on AI risk management—releasing two frameworks outlining key considerations for companies that develop and use the technology.

These frameworks are voluntary, but provide a roadmap for how companies can get ahead of potential issues while maximizing AI's benefits.

NIST and White House Frameworks

On Jan. 26, the National Institute of Standards and Technology (NIST), part of the Department of Commerce, released version 1.0 of its AI Risk Management Framework. The AI RMF is a voluntary framework and was developed with input from industry and other stakeholders.

The AI RMF follows the White House's release of its Blueprint for an AI Bill of Rights late last year, and the two share overlapping foundational principles. Both the AI RMF and the Bill of Rights lay out key issues for companies and other organizations to address as they take steps to implement AI.

Both recognize AI's potential to benefit and improve lives, but also focus on steps that can be taken to address risks.

They lay out key guiding principles and characteristics of "trustworthy" AI, including reliability, safety, transparency, privacy, and other safeguards. Taken together, the frameworks map an approach to addressing risks in certain categories.

Protecting Against Discrimination

A key consideration when deploying AI is to identify and take steps to counteract potential harmful bias, which can result in discriminatory outcomes.

Suggested approaches in this area include conducting proactive bias assessments, performing ongoing disparity testing, and ensuring that data sets are diverse, robust, and free from proxies for demographic features.

The frameworks also suggest seeking broad and diverse input to help identify and combat potential bias.
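The "ongoing disparity testing" recommended above can be as simple as comparing outcome rates across demographic groups. The following Python sketch is purely illustrative—neither framework prescribes a specific test, and the 80% threshold (the "four-fifths rule" used in some U.S. employment contexts) and the data shape are assumptions for this example:

```python
# Hypothetical disparity check: compare approval rates across groups and
# flag any group whose rate falls below 80% of the highest group's rate.

def disparity_report(outcomes):
    """outcomes: list of (group, approved: bool) pairs."""
    totals, approvals = {}, {}
    for group, approved in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)

    # Per-group approval rates.
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Flag groups whose rate is under 80% of the best-performing group.
    flagged = {g: r for g, r in rates.items() if r < 0.8 * best}
    return rates, flagged

rates, flagged = disparity_report([
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
])
# Group A approves at 0.75, group B at 0.25, so B is flagged for review.
```

A flagged group is a prompt for the kind of proactive bias assessment the frameworks describe—not, by itself, a legal conclusion.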

Promoting Safety, Security, Resiliency

Companies also need to closely monitor AI systems to ensure they are operating safely and not causing unintended outcomes, and that potential vulnerabilities are identified and addressed.

Recommendations include pre-deployment testing, ongoing monitoring and reporting, use of high-quality data, and evaluations to mitigate risks.

Addressing Transparency and Explainability

Companies using AI should consider how to convey use and outcomes of AI decisions—particularly high-impact decisions.

Depending on the risks involved, they should consider how best to convey answers to questions about "what happened" in the system, "how" a decision was made, and "why" a decision was made—along with its meaning or context for the user.

This kind of analysis also helps address other categories of risk, because it gives operators and users deeper insight into AI results and a basis for acting on them if necessary.

Protecting Data Privacy

AI often uses large amounts of data, and companies should take steps to protect privacy in data usage, collection, and access. They need to assess what existing laws and regulations might apply to data used in connection with AI.

The frameworks also recommend that AI developers and users promote privacy via methods such as privacy-enhancing technologies and minimizing personally identifiable data through de-identification or aggregation.
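The de-identification and aggregation techniques mentioned above can be sketched briefly in Python. This is a simplified illustration only—the record fields, the salted-hash tokenization, and the age-bucket approach are assumptions for this example, not requirements of either framework, and real de-identification requires far more care:

```python
import hashlib

def deidentify(record, salt):
    """Drop direct identifiers and replace the user ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in ("name", "email")}
    token = hashlib.sha256((salt + record["user_id"]).encode()).hexdigest()
    cleaned["user_id"] = token
    return cleaned

def aggregate_ages(records, bucket=10):
    """Aggregate: report counts per age bucket instead of raw ages."""
    counts = {}
    for r in records:
        b = (r["age"] // bucket) * bucket
        counts[b] = counts.get(b, 0) + 1
    return counts

records = [
    {"user_id": "u1", "name": "Ana", "email": "a@x.com", "age": 34},
    {"user_id": "u2", "name": "Bo", "email": "b@x.com", "age": 37},
    {"user_id": "u3", "name": "Cy", "email": "c@x.com", "age": 52},
]
clean = [deidentify(r, salt="s3cret") for r in records]   # no names or emails
buckets = aggregate_ages(records)                         # {30: 2, 50: 1}
```

The design point is data minimization: downstream AI processing sees tokens and aggregates rather than raw personal identifiers.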

Incorporating Human Review and Accountability

The frameworks recognize that humans have a key role in overseeing AI uses and evaluating AI-generated outcomes. Companies should assess the points where human involvement and review are best deployed to mitigate risks.

Other considerations include remedial actions if an AI system fails and adequate training for those administering and reviewing these systems.

Both the AI RMF and AI Bill of Rights are meant to be voluntary approaches, and in the short term are most likely to influence agency use of AI and government contracting. However, they are also intended for private sector use and adaptation.

Looking forward, companies using AI and algorithmic technology will grapple with regulatory efforts at federal agencies, such as the Federal Trade Commission, and states such as California and Colorado.

The frameworks are not meant to substitute for regulatory compliance, but will help any company or organization assess key risks in AI use and get ahead of the curve. And with new uses of AI technology growing by the day, now is the time to implement effective strategic approaches.

Originally Published by Bloomberg Law

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

