White House Announces Voluntary Commitments Of Leading Artificial Intelligence Companies To Manage Potential Risks

Lewis Brisbois Bisgaard & Smith LLP

Washington, D.C. (July 26, 2023) – The Biden-Harris Administration, along with other policymakers in the United States and internationally, has prioritized developing appropriate policy on artificial intelligence (AI) – hoping to seize the transformational benefits of AI while managing its serious risks. Since October 2022, the White House has announced its Blueprint for an AI Bill of Rights, issued an Executive Order (EO), and convened meetings with a host of advisors, experts, concerned stakeholders, CEOs, and policymakers (domestic and international) to inform this Administration's policy development efforts.

On July 21, 2023, in an important step forward – and with the leaders of Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI in attendance – the White House announced voluntary commitments from these seven leading U.S.-based technology companies to manage the potential risks posed by AI. The companies' voluntary commitments fall primarily within three categories – safety, security, and trust – and provide as follows:

Safety

  • Facilitating internal and external security testing of AI systems by "independent experts" before the systems' release, to guard against significant sources of AI risk as well as broader societal effects. Many concerned policymakers view the participation of independent experts as a critical component necessary to ensure objective evaluations.
  • Sharing information on managing AI risks – including best practices for safety and data on attempts to circumvent safeguards – as well as technical collaboration to increase safety.

Security

  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. "Weights" and "biases" are the learnable parameters of many machine learning models; a "weight" determines how much influence an input has on the output (see the brief sketch after this list). Accordingly, the companies agreed that model weights should be released only when intended and only after security risks have been considered, and that they will protect the model weights with the same vigilance as their own intellectual property.
  • Facilitating third-party discovery and reporting of vulnerabilities in AI systems.
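
To make the "model weights" concept concrete, the following is a minimal, hypothetical sketch – not any of the named companies' actual systems – showing a one-parameter linear model whose learnable parameters are a single weight and bias. A trained model is, in essence, the stored values of such parameters, which is why weight files are treated as a proprietary asset. All names and values below are invented for illustration.

```python
# Minimal illustration of "weights" and "biases" as learnable parameters.
# The names and values here are invented for this sketch.

def predict(x: float, weight: float, bias: float) -> float:
    """One-parameter linear model: the weight decides how much the input
    influences the output; the bias shifts the output."""
    return weight * x + bias

# After training, the model is effectively just its stored parameter values,
# i.e., the "model weights" the companies committed to safeguard.
model_weights = {"weight": 2.5, "bias": 0.1}

print(predict(4.0, **model_weights))  # 10.1
```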

Trust

  • Developing robust technical mechanisms to ensure that users know when content is AI-generated (or user-generated), such as a watermarking system, to reduce the dangers of fraud and deception (see the illustrative sketch after this list).
  • Publicly reporting AI systems' capabilities, limitations, and areas of appropriate and inappropriate use, including security risks and societal risks, such as the effects on truth, fairness, and bias.
  • Researching the societal risks that AI systems can pose, including how to avoid harmful bias and discrimination, as well as how to protect privacy and children.
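
As a purely hypothetical illustration of the provenance idea behind the watermarking commitment, a provider could attach a keyed signature to generated content that downstream tools can verify. Real watermarking schemes embed signals within the content itself and differ by vendor; the key, tag format, and function names below are invented for this sketch and do not describe any company's implementation.

```python
# Hypothetical provenance tagging: not an actual vendor watermarking scheme.
import hmac
import hashlib

SECRET_KEY = b"provider-signing-key"  # assumption: a key held by the provider

def tag_as_ai_generated(text: str) -> dict:
    """Attach a provenance record asserting the text is AI-generated."""
    sig = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "provenance": "ai-generated", "signature": sig}

def verify_tag(record: dict) -> bool:
    """Check that the provenance signature matches the accompanying text."""
    expected = hmac.new(SECRET_KEY, record["text"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = tag_as_ai_generated("Example model output.")
print(verify_tag(record))  # True
```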

In addition to risk management, the AI companies agreed to harness the enormous potential of this technology to help advance societal interests and address our greatest challenges, such as cancer prevention, climate change, education, and other urgent matters.

The Future

The Biden-Harris Administration views this announcement – what industry can do – as an important milestone in setting the course for developing an effective AI regulatory framework for this powerful and promising technology. Still, the White House believes that there is more work to be done to capture the benefits of this technology, while managing the risks. To this end, expect to see the Administration engage actively on various fronts, including:

  • Executive Branch. The White House will issue a new EO soon, which will require a "whole of government approach," with each department and agency across the government considering the promising benefits to be seized and the risks to be managed – and ultimately creating AI policy plans. The EO also will ask departments and agencies to consider additional authorities they may need from Congress to govern AI effectively in their respective areas of jurisdiction.
  • Congress. The Administration will work with Congress to pass AI legislation. White House advisors have expressed optimism that AI is an issue ripe for bipartisan legislation.
  • International. The White House has been consulting a number of allied nations on AI policy development and is encouraged by its discussions to date. The Administration will continue to lay the foundation with international trading partners and allies to develop AI policy in a harmonized manner.

Implications

The companies' voluntary commitments highlight the issues and concerns of policymakers regarding the risks associated with the rapid development of AI. Companies and individuals should take note of these commitments, as they will impact how AI is privately developed and will likely form the basis for a new legal and regulatory framework in the U.S. and on a global scale. It is important to note that the policy development process is still in its early stages and continually evolving. Unlike many other important issues, AI policy is, refreshingly, not an area in which lines have been drawn and policymakers are entrenched in different corners. Instead, the current atmosphere seems to be one in which everyone is learning together. Indeed, as one policymaker noted, we need "to work together to get our arms around this technology."

The companies' commitments discussed in this alert do not address how businesses and individuals need to manage their own day-to-day legal risks that result from the ongoing development and adoption of AI products and resources, including the safeguarding of sensitive data. In addition to helping clients navigate and be heard in policy development processes like this, Lewis Brisbois' attorneys have experience managing these emerging AI issues and are being called upon by clients to advise on these increasingly complex and quickly evolving issues. Our attorneys stand ready to assist on compliance and management of business and legal risks, intellectual property and privacy issues, as well as investigations, litigation, and disputes.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
