When Consumer Protection, Unfair Trade Practices, And AI Collide

Marshall, Gerstein & Borun LLP

The EU and U.S. Artificial Intelligence Acts

The intersection of consumer protection, unfair trade practices, and AI is rapidly becoming a focal point of regulatory scrutiny on both sides of the Atlantic. With the EU and several U.S. states advancing their respective AI legislative frameworks, the potential for these laws to reshape the landscape of innovation and market competition is significant. While these policies aim to strike a delicate balance — fostering technological advancement while safeguarding consumers and ensuring fair market practices — there are real implications for leaders in the life sciences sector.

The European Union AI Act

The European Union Artificial Intelligence Act ("EUAIA," or "the Act") is an EU regulation passed by the European Parliament on March 13, 2024, and unanimously approved by the EU Council on May 21, 2024. (For the uninitiated, the EU Council is an intergovernmental body of 27 nations, akin to the Jedi High Council but comprising countries like Luxembourg and Malta instead of Grand Master Yoda and Supreme Chancellor Palpatine.) All kidding aside, the EUAIA is among the first real attempts at comprehensive AI regulation by a major power, and it is already serving as the model for the U.S. states' legislative efforts.1

The Act is expected to enter into force in July 2024 and will be phased in over the following two to three years, including regulation of high-risk AI systems covered by existing EU harmonization legislation (e.g., certain medical devices and certain in vitro diagnostics) and general-purpose AI (GPAI) systems.

The EUAIA governs the deployment and use of AI systems and AI models in the EU, including GPAI, by businesses (specifically, by "providers" and "deployers" of AI as defined in the Act).2

Jurisdictionally, the Act applies to AI deployers established or located within the EU, and to AI providers placing systems on the EU market regardless of whether they are established or located within the EU. Critically, however, the Act includes long-arm provisions extending to providers and deployers of AI systems and AI models whose output is used in the EU, regardless of whether those providers or deployers are located or established in the EU.3

The Act takes a risk-based approach and imposes affirmative requirements for "high-risk" AI systems, whether stand-alone or used within other enumerated products, such as certain medical devices and certain in vitro diagnostic applications.4

For life sciences companies, the EUAIA is poised to significantly impact future regulatory considerations for products that are AI systems/models or incorporate such systems/models, whether created by the company or a third party.

For example, the EUAIA imposes specific requirements on high-risk AI systems, including processes for continuous iterative risk management, data governance, technical documentation, automatic recordkeeping and "transparent" instructions for use by deployers.5 The EUAIA requires that these systems be designed for human oversight, and to achieve accuracy, robustness and cybersecurity. The Act imposes registration requirements, both for companies and the AI systems they provide or deploy, as well as marking requirements.

Some AI systems are excluded from the definition of "high risk," specifically if intended to perform a narrow procedural task, improve the result of a previously completed human activity, detect certain decision-making patterns, or perform certain preparatory tasks for other high-risk AI systems.

Separately, the EUAIA defines and regulates GPAI models, described as "essential components of AI systems [that] do not constitute AI systems on their own." Within that category, the Act singles out GPAI models that pose systemic risk, for example, when the cumulative compute used to train the model exceeds 10^25 floating-point operations.6
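
To make the scale of that threshold concrete, the minimal sketch below estimates a model's training compute using the widely cited approximation of roughly six floating-point operations per parameter per training token. The heuristic, the model size, and the token count are illustrative assumptions for this article; the Act itself states only the 10^25 threshold.

    # Illustrative sketch (not legal guidance): gauging whether a model's
    # training compute approaches the EUAIA's 10^25 FLOP systemic-risk threshold.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # EUAIA threshold; adjustable by the Commission

    def estimated_training_flops(num_parameters, num_training_tokens):
        """Common heuristic: ~6 FLOPs per parameter per training token (an assumption)."""
        return 6 * num_parameters * num_training_tokens

    # Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
    flops = estimated_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed systemic risk?", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False

Under these assumed figures, even a large model lands well below the threshold, which is consistent with the observation in footnote 6 that today's largest GPAI models are thought not to exceed it.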

In addition to increased regulation of so-called "high-risk" AI systems and GPAI models, the EUAIA explicitly prohibits certain unwanted AI practices, such as intentional and harmful manipulation of human behavior; harmful exploitation of vulnerabilities of humans due to age, disability, social or economic status; certain forms of social scoring; pre-crime analysis; certain forms of facial recognition; certain forms of emotional inference in workplace and educational institutions; certain forms of biometric categorization; and certain forms of real-time remote biometrics for law enforcement purposes.

Ostensibly, these prohibitions are in keeping with the stated purposes of the Act, which include encouraging "human-centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights, democracy, the rule of law and environmental protection," avoiding harmful effects of AI, and fostering innovation.

Given these objectives, the EUAIA should be considered not only cutting-edge AI regulation, but also an extension of existing consumer product and consumer protection law to AI technologies.

U.S. States Get In On The Act

On May 1, 2024, Colorado Governor Jared Polis signed Colorado Senate Bill 24-205 into law. The bill is titled "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems," and it loosely tracks the consumer protection concepts/definitions of the EUAIA, while differing in other respects.

For example, the Colorado Act (which does not provide a private right of action) defines a "consequential decision" as "a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of ... healthcare services," where "healthcare services" are defined according to the meaning of 42 U.S.C. § 234(d)(2).7

Unlike the EUAIA, the Colorado Act applies only to those doing business in Colorado. However, other states—including California, Utah and Illinois—are considering or have already passed similar legislation. It seems quite likely that there will soon be a patchwork of (potentially contradictory and competing) state-level AI regulations in the U.S.

A Brief Digression Regarding Missed Opportunities

In view of the EUAIA, it is difficult not to see the U.S. and U.S. companies as increasingly behind the regulatory curve. The failure of the U.S. Congress to act thus far (and the attendant vacuum that the states are now filling) is all the more puzzling in an environment where many AI market leaders are U.S.-based companies.

Over 90 years ago, Justice Brandeis wrote of the virtues of states "serv[ing] as a laboratory; and try[ing] novel social and economic experiments."8 Decentralized experimentation can be a positive thing. But there is a lot riding on AI policy outcomes, and there is no indication that a fractured approach will be more effective than a coherent federal policy. In fact, the opposite seems true.

For example, the EUAIA's authors included the sobering statement that AI models may pose systemic risks leading to events with negative consequences for entire cities and communities:

General-purpose AI models could pose systemic risks which include ... major accidents, disruptions of critical sectors and serious consequences to public health and safety; ... negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content ... potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks ... including for weapons development, design acquisition, or use; offensive cyber capabilities ... the capacity to ... interfere with critical infrastructure; risks from models of making copies of themselves or 'self-replicating' or training other models; ... models [that cause] harmful bias and discrimination... to individuals, communities or societies; the facilitation of disinformation ... with threats to democratic values and human rights; risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain activity or an entire community.

In the AI space, it can sometimes be challenging to separate reasonable rhetoric from science fiction. Regardless, one might think that when words like "self-replicating," "nuclear," "chain reaction," and "an entire city" are strung together, Congress would sit up, take notice, and craft a cohesive regulatory framework—but that has not happened. There may be a silver lining, however: the inherent chaos of decentralized, state-led AI regulation in the U.S. may subject U.S. companies to a much lighter regulatory touch. Less comprehensive oversight of AI may make the U.S. a friendlier and more competitive business climate as the AI race continues.

So where does that leave us?

Practical Realities And Best Practices

Given the risk-based, consumer product-centric approach of the EUAIA, the Colorado Act, and others coming down the pike, existing consumer protection frameworks offer a useful lens. Indeed, compliance with the new AI regulations may overlap with, and complement, existing regulatory burdens (e.g., data protection, privacy, and FDA requirements).

Given the EUAIA and U.S. laws, and the potential penalties for non-compliance,9 it has become increasingly important for developers, providers, and deployers of AI models and AI systems (and other participants in the AI value chain) to analyze and understand their AI systems across multiple dimensions.

At a minimum, life sciences companies whose computing systems touch AI should consider the following questions (an illustrative sketch of how such a review might be tracked follows the list):

  • Are we a provider, deployer, distributor, importer, manufacturer, and/or authorized representative concerning any AI systems/models (including those of a third party) or GPAI? If so, do we understand our role-based obligations?
  • Where are our AI systems/models used (including their outputs), and what are those localities' governing statutory/regulatory regimes?
  • Do any of our AI systems qualify as high-risk AI systems and/or general-purpose AI systems under those regimes? If so, do we understand our obligations based on these qualifications under stepped-up regulatory requirements?
  • Do we qualify for any exemptions/exclusions?
  • Are we inculcating AI literacy and ensuring that those involved in developing AI systems receive appropriate education and training on the potential impact of AI on consumers?
  • In what stage of the life sciences pipeline does our AI model/system operate (preclinical, trial, clinical)?
  • What is the nature and content of the data used to train/operate our AI systems, and are we implementing appropriate data governance policies?
  • What potential harm could occur if our AI system malfunctioned?
  • To what extent are our AI systems/models autonomous or subject to human override?
  • To what extent are the outcomes of our AI systems/models reversible?
  • Do we have appropriate documentation, risk management, impact assessment, and review systems for our AI systems?
  • Do any of our AI systems make consequential decisions for consumers?
  • Have we conducted and maintained conformity assessment and registration procedures, if required?
  • Do we have existing agreements, the terms of which should be reviewed/revised given new regulations?
  • Have we complied with compulsory disclosure and transparency obligations, including public statements, consumer notifications, and/or government notifications?
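
For teams that want to operationalize these questions, the minimal sketch below shows one way a self-assessment might be tracked as structured data. The record type, field names, and risk flags are hypothetical illustrations for this article, not categories prescribed by the EUAIA, the Colorado Act, or any other statute.

    from dataclasses import dataclass, field

    # Hypothetical self-assessment record for an internal AI compliance review.
    # All fields and flags are illustrative; they do not track statutory text.

    @dataclass
    class AISystemReview:
        name: str
        roles: list                        # e.g., ["provider", "deployer"]
        jurisdictions: list                # where the system or its outputs are used
        high_risk: bool = False            # per the applicable regime's definition
        gpai: bool = False
        consequential_decisions: bool = False
        open_items: list = field(default_factory=list)

        def flag_gaps(self):
            """Surface obvious follow-ups; the real analysis belongs with counsel."""
            gaps = list(self.open_items)
            if self.high_risk:
                gaps.append("Confirm risk management, documentation, and registration duties")
            if self.gpai:
                gaps.append("Assess systemic-risk classification and related obligations")
            if self.consequential_decisions:
                gaps.append("Review consumer notification and impact-assessment duties")
            return gaps

    review = AISystemReview(
        name="Trial-enrollment triage model",
        roles=["deployer"],
        jurisdictions=["EU", "Colorado"],
        high_risk=True,
        consequential_decisions=True,
    )
    print(review.flag_gaps())

However a company records its answers, the point is the same: each "yes" in the list above maps to concrete, role-based obligations that should be documented and periodically revisited.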

Pay Attention To Government Actions On AI

AI continues to transform the global economy, creating opportunities and enhancing consumers' quality of life. Predictably, governments are acting to minimize risk and, in the process, creating compliance pitfalls that can ensnare companies.

Staying abreast of this evolving landscape is essential for all companies, especially those engaged in existing regulated activities that may be subject to enhanced regulatory scrutiny.

This is especially true given that regulations such as the EUAIA and the Colorado Act effectively expand the scope and reach of consumer protection and unfair trade practices law to cover specific activities involved in developing and deploying AI systems and models.

Footnotes

1. China and the UK have issued rules and regulations, but nothing as comprehensive as the EUAIA.

2. The Act defines "provider" and "deployer," and distinguishes between "placing into the market" and "putting into service" of AI systems and AI models.

3. The Act also applies to importers, manufacturers, authorized representatives, and affected persons. The Act specifically excludes AI systems exclusively marketed/sold/used for military, defense, or national security purposes; AI systems/models with the sole purpose of scientific R&D; and models used in pre-market research, development, and non-real-world testing.

4. For enumerated products, see EUAIA, Annex I.

5. The EUAIA includes simplified procedures for small and medium enterprises, including startups.

6. This is an adjustable threshold; however, at the time of this writing, even the largest GPAI models (e.g., GPT-4) are thought not to exceed this threshold.

7. "[A]ny services provided by a health care professional, or by any individual working under the supervision of a health care professional, that relate to— (A) the diagnosis, prevention, or treatment of any human disease or impairment; or (B) the assessment or care of the health of human beings."

8. New State Ice Co. v. Liebmann, 285 U.S. 262, 311, 52 S. Ct. 371, 386–87, 76 L. Ed. 747 (1932) (Brandeis, J., dissenting).

9. Fines for non-compliance with certain provisions of the EUAIA can reach the greater of 7% of worldwide turnover or EUR 35,000,000 (these caps are lower for small and medium-sized enterprises). For example, at EUR 1 billion in worldwide turnover, 7% works out to a maximum fine of EUR 70,000,000. Noncompliance with the Colorado Act may result in a finding of unfair trade practices under Colorado law, subject to per-instance fines for each consumer or transaction.

Originally Published by Life Science Leader

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
