The use of generative artificial intelligence (AI) and machine learning (ML) in healthcare has recently been developing at a rapid and fascinating pace. Because the consequences of such technology are yet to be fully understood, thoughtful consideration of its use by industry stakeholders and users is necessary, especially with respect to the legal implications within the healthcare industry. This practice note discusses AI's development in healthcare and federal and state efforts to regulate its use. It provides health law practitioners with an overview of the legal considerations associated with AI's use in healthcare, including data privacy, corporate practice of medicine, provider licensing, reimbursement, intellectual property, and research. It concludes with a discussion of the ethical considerations involved with AI in healthcare and considerations for protections against potential liability.

This practice note is organized into the following topics:

  • AI's Development in the United States and Certain Foreign Jurisdictions
  • Existing Legal Framework of AI Regulation in the United States
  • AI Regulatory Considerations in U.S. Healthcare
  • Ethical Considerations of AI Use in Healthcare
  • Protecting against Potential Healthcare AI Liabilities
  • Conclusion - Successful AI Requires Sophisticated Regulation and Regulatory Counsel

For an overview of current practical guidance on generative AI, ChatGPT, and similar tools, see Generative Artificial Intelligence (AI) Resource Kit.

To follow legislative developments related to ChatGPT and generative AI, including those related to healthcare, see ChatGPT Draws State Lawmakers' Attention to AI.

AI's Development in the United States and Certain Foreign Jurisdictions

Although AI can be described simply as the engineering and science of making intelligent machines, its effects are much more complex. ML is a subset of AI focused on systems that improve their performance by learning from data and statistical patterns rather than through explicit programming. While AI programming has been in existence for decades, the recent developments in generative AI have been transformative in mainstream use. Accelerated growth in healthcare can be attributed, at least in part, to the COVID-19 Public Health Emergency (PHE), when digital healthcare, including products driven by AI, emerged as a marketable means to accessible care.

Pre- and post-PHE, the United States has been a premier healthcare leader with breakthrough innovations and research, and this continues to be the case with AI's evolution. However, the current sparse regulatory landscape has cast a shadow over AI's potential, which is particularly significant in light of an aging population, high Medicaid and Children's Health Insurance Program enrollment (which grew 29.8% from February 2020 to December 2022), and ongoing epidemics in mental health and substance abuse. Considering this healthcare climate, AI, as a regulated and well-governed tool, has an extraordinary opportunity to transform the health and wellness not only of the nation, but of the entire global population, at a pivotal point in history.

Such optimism stands in stark contrast to warnings about AI's potential to harm or mislead. In fact, the World Health Organization (WHO), which issued the Ethics & Governance of Artificial Intelligence for Health in 2021, recently called for caution to be exercised as "the data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness." While international bodies, like the European Union, have been actively monitoring and pushing for limitations on AI for years, to date, the United States has virtually allowed the industry to regulate itself. Without swift action, de facto legal regimes for AI may be established outside of the United States, most significantly in China, if only due to the size of its population base. This risk is compounded by federally elected officials' and staff's limited experience at the intersection of computer science and law, and by the fact that Congress has been notoriously averse to imposing sweeping limitations on technology companies. The United States has a tremendous opportunity to grow and lead in this arena. Alternatively, many experts strongly believe the role of governing AI must be a global collaboration with international monitoring, similar to how the nuclear field is regulated. While AI now has legislators' attention and future regulation is ultimately expected, stakeholders are hyper-aware of the implications of further delay.

Undeterred by legislative battles, AI/ML in healthcare has advanced in a broad range of applications, from innovations in identifying acute health episodes and improving personalization of care and treatment plans, to pharmaceutical development and isolation and self-harm prevention. Understanding that AI is constantly evolving, this practice note focuses on the legal considerations of AI in healthcare in the United States that can be applied alongside regulatory developments to support protective and successful implementation.

Existing Legal Framework of AI Regulation in the United States

Currently, no comprehensive federal framework to regulate AI/ML exists. The White House's Blueprint for an AI Bill of Rights does offer high-level direction in the design, deployment, and use of automated systems to prioritize civil rights and democratic values, a number of federal agencies have issued high-level guidance or statements, and Congress is taking steps to educate itself, including through hearings with stakeholders and technology executives; however, material and standardized safeguards have yet to be established. In contrast, certain states are actively developing and implementing laws to oversee the development and deployment of AI that impacts healthcare. For example, the California Consumer Privacy Act (CCPA) provides consumers with rights to opt out of automated decision-making technology. Illinois' proposed Data Privacy and Protection Act would regulate the collection and processing of personal information and the use of so-called covered algorithms, which include computational processes utilizing AI/ML. Approximately half of U.S. states already have pending or enacted AI legislation.

Stakeholder and industry groups are also actively releasing guidance, despite the lack of enforceability, which materially limits its implementation. For instance, in order to align on health-related AI standards in a patient-centric manner, the Coalition for Health AI (CHAI) released a Blueprint For Trustworthy AI Implementation Guidance and Assurance for Healthcare. The American Medical Association (AMA) has similarly published Trustworthy Augmented Intelligence in Health Care, a literature review of existing guidance, in order to develop actionable guardrails for trustworthy AI in healthcare.

AI Regulatory Considerations in U.S. Healthcare

At minimum, industry actors should consider the full array of healthcare regulatory and legal issues when creating or using AI/ML products, including those described herein.

Data Privacy
The privacy rights of patients and users are a tremendous consideration at the crux of AI/ML. Consumer and health information privacy laws may be implicated at both the federal and state level with regard to the access, sharing, and use of protected health information (PHI) and personally identifiable information (PII) with AI/ML. Generally, the Health Insurance Portability and Accountability Act of 1996 (HIPAA), Pub. L. No. 104-191, limits the ability of certain health entities to share PHI unless an exception applies, and specifically prohibits the sale and commercialization of PHI without patient authorization. In addition, many state data privacy laws are broader and more comprehensive than HIPAA, including the CCPA and Washington's recently enacted My Health My Data Act, 2023 Wash. Advance Legis. Serv., ch. 191. Such laws may necessitate authorization, consent, notice, or proper anonymization of data prior to its transfer or use. Further, certain sensitive data, such as mental health, reproductive health, and substance use disorder information, genetic information, and healthcare records of minors, is subject to more aggressive restrictions. As such, in assessing AI/ML models or algorithms, it is critical to determine whether PHI, PII, or other sensitive data is regulated and whether consent, notice, and/or other preconditions must be met prior to accessing, disclosing, or transmitting data in AI/ML products.

Data Assets and Rights
With the development of AI/ML, data already collected by healthcare providers becomes a valuable asset that can be used to improve the quality of care for patient populations, and it can also be monetized with further use cases. In order for AI/ML to provide quality results, relevant and high-quality data tailored to the task at hand is imperative. Quality patient data collected at the provider level can be used to improve AI/ML, ultimately resulting in higher-quality outputs. This data can also be monetized through licensure to other companies looking for quality data to train their own AI/ML models. There should be a disciplined approach when allowing third parties or vendors access to this data, as these third parties often request broad rights to use the data to improve their services. Agreements should be carefully drafted so that the provider clearly retains all ownership rights in its data, while granting the relevant third party only a limited license to use such data for defined purposes.

Data Commercialization
Relatedly, caution should be exercised where an AI/ML health product does not have a monetary cost for its use. In some instances, developers of allegedly free AI/ML products are compensated via the use of valuable client data entered into the product. Essentially, a user may be trading data holding value and, in effect, privacy of the data subjects, for the use of the product. The terms of use and privacy policies associated with such products should be closely reviewed to determine the data rights that may be exchanged for the use of an AI/ML product.

The commercial and legal stakes are particularly high with regard to the use of data in AI/ML training. Use of data in a manner that violates federal or state data privacy laws can be potentially catastrophic for an AI/ML product and for patient welfare. The developer of the AI/ML model or algorithm could be required to unwind the improperly used data from the model, a complex, near-impossible task, or else destroy the AI/ML models or algorithms trained with data that was not properly licensed or obtained, as the FTC has required for certain algorithms trained with improperly used data.

Corporate Practice of Medicine
Generally, the corporate practice of medicine doctrine (CPOM) prohibits the practice of medicine by a corporation, including by employment of licensed healthcare providers (physicians, and in some states other licensed healthcare providers), other than by a professional corporation owned by individuals duly licensed to practice the profession. The public policy rationale behind CPOM is that clinical decision-making should be left to duly licensed professionals, and not be unduly influenced by unlicensed persons or corporations. Not all states have CPOM restrictions, and CPOM laws vary widely state-to-state.

Under existing doctrines, CPOM could impact or outright prohibit generative AI models from being used for clinical decision-making, and in more restrictive states, could prohibit generative AI-related tasks even where a licensed provider supervises the AI. Developments related to the application of CPOM to generative AI in healthcare should be monitored, especially as they are expected to evolve with the proliferation of AI.

Previously published by Practical Guidance.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.