According to the National AI Advisory Committee, which is tasked with advising the President and others on topics related to artificial intelligence, AI "is one of the most powerful and transformative technologies of our time" with the capacity to "address society's most pressing challenges." AI is already widely used across industries, and its use cases continue to expand.

As the emergence of AI-enabled tools has drawn the attention of corporate America, it has also captured the attention of lawmakers looking to leverage its benefits and curb its potential dangers. Bipartisan bills have been introduced in the U.S. Senate and House of Representatives seeking to establish a legislative and regulatory framework for AI. And on Oct. 30, 2023, the White House announced an Executive Order (https://bit.ly/49rDjN2) that builds on President Biden's previously issued AI Bill of Rights, as well as the voluntary AI-related commitments the administration secured from 15 large technology companies.

Federal legislators and regulators are not the only officials interested in developing policies on AI. In 2023, half of U.S. States introduced legislation concerning AI, and several have already passed laws. Some States — like California, Connecticut, and Rhode Island — require agencies to inventory and disclose AI use cases as a way of assessing the extent to which agencies rely on the technology. Other States — Illinois, Louisiana, and Texas — have created exploratory committees to spearhead AI risk assessments. Still others — like Colorado — have begun the early work of implementing policy solutions to address aspects of AI that carry potential for abuse.

As their States' chief law enforcement officers, State AGs are keenly interested in the development and regulation of AI, and are moving quickly to understand their role in harnessing the advantages of the technology while protecting the public from harm. In June 2023, 23 State AGs — Democrats and Republicans alike — wrote to the National Telecommunications and Information Administration (NTIA) in response to the agency's request for comment on AI policies. (https://bit.ly/3QPAMoU)

The AGs urged NTIA to ensure that "AI systems are valid and reliable, safe, secure, and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair." The AGs pledged "read[iness] to work with [NTIA] on a range of fronts" in support of that goal and advocated for concurrent enforcement authority between the federal government and the States in order to "enable more effective enforcement to redress possible harms."

State AGs have also articulated concern over AI's role in high-risk use cases. In September 2023, AGs from all 50 States, D.C., and three U.S. territories wrote to Congress urging lawmakers to investigate the impact of AI on child exploitation. The AGs applauded Congress' efforts to study AI and "begin[] the process of developing a regulatory framework to address some ... harms," but articulated "a deep and grave" fear related to "a new frontier" of technology. "We are engaged in a race against time to protect the children of our country from the dangers of AI," they wrote. "Now is the time to act." (https://bit.ly/3QLwiiV)
Although their recent letters indicate that they see a critical role for the federal government in regulating AI, AGs undoubtedly recognize for themselves an opportunity — and indeed an obligation — to take action, particularly in the absence of major federal legislation. AGs have a variety of enforcement authorities to bring to bear in that effort, including traditional consumer-protection laws, newly enacted consumer data-privacy laws, and novel AI-focused regulatory tools designed to mitigate risk.

Traditional consumer-protection laws afford AGs broad authority to pursue companies for statements and practices the AGs allege are deceptive or misleading. With respect to AI, State AGs can be expected to take action if consumers are misled about what they are seeing, hearing, or reading, or are otherwise experiencing an unfair outcome. They may be particularly attuned to corporate marketing materials and advertisements representing that AI-enabled technology guarantees or enables certain metrics of success. Indeed, the Federal Trade Commission (FTC) has pursued cases on these grounds using its parallel federal authority.

AGs might also focus on AI-enabled design features that can induce consumers to act — so-called "dark patterns." Such techniques include small icons or dropdown menus that obscure the cost of a transaction, or countdown clocks suggesting (inaccurately) that a sale price or special offer will expire. AGs have also shown interest in the use of dark patterns to track users' locations despite the users' preference not to share that data.
As a corollary to their use of traditional consumer-protection authorities, AGs can be expected to use newly enacted data-privacy laws in investigating uses of AI. In recent years, 10 States have passed consumer data-privacy protection laws, and six more States have bills pending. AGs have (often exclusive) jurisdiction to enforce those laws, which generally afford consumers a right to access, delete, and correct their data. Some State laws also require businesses to enable consumers to opt out of data processing for automated decisions that have significant impacts on their finances, education, housing, or employment.

The bounds of AGs' authority under these laws have yet to be delineated, particularly with respect to AI. The recency of consumer data-privacy protection laws, coupled with the innovative technology at issue, combine to raise novel questions of constitutional law and statutory interpretation. In the absence of clear precedent, it is difficult to predict how courts will navigate that challenging intersection.

No matter how uncharted the legal waters, State AGs can be expected to pay particular attention to AI's potential for abuse with respect to their most vulnerable constituents — children. In recent years, AGs have been aggressive in pursuing investigations and enforcement actions against tech companies they believe are engaging in practices that put young users at risk by exposing them to predators, invading their privacy, or creating an environment that is ripe for addiction and detrimental to their mental health.

The AGs' September 2023 letter to Congress indicates that AGs view AI as raising similar threats to children and, for that reason, can be expected to aggressively pursue investigations and enforcement actions designed to mitigate that danger.

Although State legislative efforts in the AI space are nascent, a few laws have been passed that offer a preview of the tools that might ultimately be used to curb AI's potential to aggravate discrimination and bias — a major concern of policymakers and a key feature of President Biden's recently issued Executive Order. In 2021, New York City enacted a local law that prohibits employers and employment agencies from using automated decision tools in hiring unless the tools undergo a bias audit within one year of introduction.

Colorado also recently enacted a law prohibiting discriminatory insurance practices that use "external consumer data and information sources, as well as any algorithms or predictive models." As many AGs have been active in enforcing their States' anti-discrimination laws (and care deeply about the subject), it is reasonable to expect that AGs will have a role in ensuring that uses of AI do not undercut decades-long efforts to fight discrimination in employment, housing, and other key areas.

AI's transformative potential is nearly limitless. But with that great potential comes risk. And with that risk comes obvious opportunity for State AGs to act. Companies that market and use AI tools should pay close attention to AGs' priorities with respect to their industry and prepare to navigate a dynamic investigatory and regulatory environment for many years to come.

Originally published by Thomson Reuters

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.