ARTICLE
10 September 2024

Is There Room For AI In The ICU? Guiding Principles And Compliance Considerations

Ankura Consulting Group LLC


Ankura Consulting Group, LLC is an independent global expert services and advisory firm that delivers end-to-end solutions to help clients at critical inflection points related to conflict, crisis, performance, risk, strategy, and transformation. Ankura consists of more than 1,800 professionals and has served 3,000+ clients across 55 countries. Collaborative lateral thinking, hard-earned experience, and multidisciplinary capabilities drive results, and Ankura is unrivalled in its ability to assist clients to Protect, Create, and Recover Value. For more information, please visit ankura.com.

Introduction

Artificial Intelligence (AI) offers unprecedented opportunities to enhance patient care, streamline clinical documentation, and support medical decision-making processes. More and more healthcare professionals find themselves caught between rapidly changing regulations and mounting documentation requirements, leaving them less time for what they care about most: patient care. Can AI be the key to giving physicians and other clinicians more time to spend with their patients? This article examines five key considerations, with recommendations, that healthcare organizations and those at the bedside must weigh as they adopt AI.

1. Patient Privacy and Data Security

AI tools require access to vast amounts of patient data to learn and make accurate recommendations, but the confidentiality of patient information is paramount in healthcare. Providers will need to continually weigh the benefits of an AI solution against its inherent privacy and security risks: What data will be shared, and how? What terms will be negotiated in contracts? Will software vendors return protected health information, and how can that expectation be reconciled with the fact that AI continuously learns from the data fed into it? Physicians and their staff should communicate transparently with patients about how their data is used and secure appropriate patient authorization when possible.

Recommendation: Partner closely with the IT and data analytics functions to ensure robust data encryption and access control measures, and regularly audit AI systems and processes for compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA), Pub. L. No. 104-191, and other relevant regulations. Carefully review contracts and business associate agreements to determine what protected health information will be shared with and retained by the AI system. Be transparent with patients when utilizing AI and secure appropriate consent and authorizations when needed.
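
To make the data-minimization point concrete, the sketch below shows one purely illustrative way to strip a few common identifiers from free-text notes before they are sent to an external AI documentation service. The patterns, the redact_phi helper, and the sample note are hypothetical assumptions, not any vendor's method; a real de-identification pipeline would cover the full set of HIPAA identifiers and be validated with privacy, compliance, and IT teams.

    import re

    # Illustrative patterns for a few common identifiers only. A production
    # de-identification pipeline would address all HIPAA identifiers (names,
    # addresses, device IDs, etc.) and would be formally tested.
    PHI_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
        "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    }

    def redact_phi(note_text: str) -> str:
        """Replace recognizable identifiers with labeled placeholders."""
        redacted = note_text
        for label, pattern in PHI_PATTERNS.items():
            redacted = pattern.sub(f"[{label.upper()} REDACTED]", redacted)
        return redacted

    if __name__ == "__main__":
        note = "Pt MRN: 4456123, DOB 02/14/1958, callback 555-123-4567. Admitted for sepsis."
        print(redact_phi(note))
        # -> Pt [MRN REDACTED], DOB [DATE REDACTED], callback [PHONE REDACTED]. Admitted for sepsis.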

2. Accuracy and Reliability of AI-Assisted Documentation

The development and adoption of AI tools in healthcare demand significant attention and resources, including meticulous control and examination of AI's development, customization, and testing phases. The compliance department is crucial in ensuring that AI tools meet regulatory standards and best practices, safeguarding healthcare providers and patients alike. Inaccurate AI-generated documentation can lead to severe consequences, such as misdiagnoses, inappropriate treatment plans, and other patient safety issues, so healthcare providers must diligently review and verify AI-generated documentation. Continuous monitoring and quality assurance are essential to maintain the integrity and reliability of AI tools. AI holds the potential to revolutionize healthcare by streamlining clinical documentation and supporting healthcare providers, but the responsibility for clinical decisions must always rest with human professionals.

Recommendation: Before AI tools are integrated into clinical settings, complete rigorous testing in controlled environments to identify and mitigate potential issues, ensuring the AI operates accurately and reliably under varied conditions and can support healthcare providers without compromising patient safety. Establish a cadence of ongoing monitoring, including regular audits, performance evaluations, and updates to the AI tools based on current medical knowledge and regulatory guidelines, so that emerging issues are identified and addressed and AI remains a reliable support tool for healthcare providers.
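
As a simplified illustration of such ongoing monitoring, the sketch below imagines an audit cycle in which clinicians score their agreement with AI-drafted notes and any note falling below an agreed quality bar is flagged for remediation. The audit records, the 0.90 threshold, and the field names are hypothetical and would be defined by each organization's own quality program.

    from statistics import mean

    # Hypothetical audit extract: each record pairs an AI-drafted note with a
    # clinician reviewer's agreement score (1.0 = fully accurate as drafted).
    audit_sample = [
        {"note_id": "A-101", "reviewer_agreement": 0.96},
        {"note_id": "A-102", "reviewer_agreement": 0.91},
        {"note_id": "A-103", "reviewer_agreement": 0.78},
    ]

    AGREEMENT_THRESHOLD = 0.90  # illustrative quality bar set by the organization

    def flag_low_agreement(records, threshold=AGREEMENT_THRESHOLD):
        """Return the IDs of notes whose reviewer agreement falls below the bar."""
        return [r["note_id"] for r in records if r["reviewer_agreement"] < threshold]

    overall = mean(r["reviewer_agreement"] for r in audit_sample)
    print(f"Mean reviewer agreement this cycle: {overall:.2f}")
    print("Notes flagged for remediation:", flag_low_agreement(audit_sample))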

3. Bias and Fairness

The use of AI could inadvertently perpetuate or even exacerbate existing biases present in training data, leading to disparities in patient care. Engage multidisciplinary teams, including ethicists, compliance professionals, and clinical experts, in the development and oversight of AI tools. All care-related decisions, even those made with AI, can have profound implications for patient outcomes and well-being. Recently, the use of AI in care by Medicare Advantage (MA) plans has come under scrutiny from the Centers for Medicare & Medicaid Services (CMS). According to Final Rule CMS-4201-F, which addresses coverage criteria and utilization management requirements:

  • MA plans may use algorithms or AI to assist in making coverage determinations. However, responsible AI use must comply with the rules for making medical necessity determinations outlined in § 422.101(c), which include basing decisions on the specific circumstances of the individual patient rather than on data sets that do not account for an individual patient's needs;
  • Algorithms or AI tools should be aligned with the coverage criteria under § 422.101(b)(6)(ii). AI cannot be used to alter coverage criteria over time, and predictive algorithms or software tools cannot apply internal coverage criteria that have not been made public; and
  • All MA plans should pay particular attention to the nondiscrimination requirements of Section 1557 of the Affordable Care Act (ACA), which prohibits discrimination in certain health programs and activities. The 2024 Final Rule, "Nondiscrimination in Health Programs and Activities," finalized by the U.S. Department of Health and Human Services (HHS) on April 26, 2024, updates Section 1557 to prohibit discrimination based on race, color, national origin, age, disability, or sex in federally funded health programs and services. The rule applies to health insurance issuers receiving federal assistance, including Medicare Parts C and D payments, state Medicaid agencies, and issuers participating in the Health Insurance Marketplaces and other health coverage.1 Plans must ensure that tools do not perpetuate existing biases or introduce new ones.2

Recommendation: AI tools are only as good as the data they are trained on, so use diverse and representative datasets when training AI models to minimize bias, and implement regular reviews of AI decision-making processes and outcomes to identify and correct any biases that emerge. Biases inherent in training data can have serious implications in healthcare; for example, training data from periods with unrecognized racial or gender biases can produce AI systems that perpetuate those biases. Human professionals must be involved in the development and training of AI tools to identify and mitigate such biases and ensure fair and accurate outputs.
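
One simple form such a review might take is a comparison of AI recommendation rates across patient groups, as sketched below. The audit rows, group labels, and any tolerance for disparity are hypothetical assumptions; a real fairness review would be designed with clinical, compliance, and statistical input.

    from collections import defaultdict

    # Hypothetical audit extract of AI coverage or care recommendations,
    # grouped by a patient attribute under review.
    audit_rows = [
        {"group": "A", "ai_approved": True},
        {"group": "A", "ai_approved": False},
        {"group": "B", "ai_approved": True},
        {"group": "B", "ai_approved": True},
    ]

    def approval_rate_by_group(rows):
        """Compute the AI approval rate per group to surface large disparities."""
        counts = defaultdict(lambda: {"approved": 0, "total": 0})
        for row in rows:
            counts[row["group"]]["total"] += 1
            counts[row["group"]]["approved"] += int(row["ai_approved"])
        return {g: c["approved"] / c["total"] for g, c in counts.items()}

    print("AI approval rate by group:", approval_rate_by_group(audit_rows))
    # A gap larger than the organization's agreed tolerance would trigger a
    # deeper clinical and compliance review of the underlying model and data.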

4. Transparency and Explainability

AI's potential to streamline clinical documentation and support healthcare providers is promising. However, AI cannot operate in a vacuum. The responsibility for documentation, diagnosis, and treatment must always rest with healthcare providers, with AI serving as an adjunct tool. AI systems, while powerful, are not infallible; they require the expertise and judgment of human healthcare providers to ensure accuracy and patient safety. Healthcare professionals must have the final say in clinical decisions, leveraging AI as a support tool rather than a replacement for human interaction. The accuracy and reliability of AI in streamlining clinical documentation are paramount, and both depend heavily on thorough testing and human oversight. This is echoed in President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, whose introduction reads, in part, "In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built."3

Recommendation: Providers should prioritize explainable AI tools that provide consistent rationales for their recommendations, and physicians should be adequately trained to interpret AI-generated documentation and communicate its basis to patients. A lack of transparency can erode patients' trust and hinder their acceptance of AI-assisted documentation.
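
As a minimal illustration of what "explainable" can mean in practice, the sketch below pairs a hypothetical AI suggestion with the factors it cites, so a physician can review, and relay to the patient, the stated basis for the recommendation. The data structure and example values are illustrative only and are not drawn from any particular product.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ExplainedRecommendation:
        """Hypothetical record pairing an AI suggestion with its stated basis."""
        patient_id: str
        suggestion: str
        supporting_factors: List[str] = field(default_factory=list)

        def physician_summary(self) -> str:
            # Summarize the rationale so the clinician can review and explain it.
            factors = "; ".join(self.supporting_factors) or "no rationale provided"
            return f"AI suggests '{self.suggestion}' based on: {factors}"

    rec = ExplainedRecommendation(
        patient_id="P-0042",
        suggestion="order repeat lactate",
        supporting_factors=["rising heart rate trend", "prior lactate 2.4 mmol/L"],
    )
    print(rec.physician_summary())
    # A recommendation that arrives with no supporting factors is a cue for the
    # clinician to apply extra scrutiny before acting on it.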

5. Legal and Ethical Responsibility

As AI plays a more significant role in clinical documentation and decision-making, healthcare organizations must consider how liability is apportioned. Informed consent is one of the most immediate challenges in integrating AI into clinical practice, and consent questions are especially difficult to answer when the AI relies on "black-box" algorithms built with machine-learning techniques whose workings clinicians cannot fully interpret. AI health apps and chatbots are increasingly used for diet guidance, medication adherence, health assessments, and analysis of data from wearable sensors, and these uses raise additional ethical questions about user agreements and informed consent.

Recommendation: Determine what a clinician must disclose to a patient when the clinician cannot fully interpret the AI's diagnostic or treatment recommendations. Develop clear policies and guidelines that delineate the roles and responsibilities of AI systems and human healthcare providers, and communicate these policies effectively to all stakeholders.

Conclusion

The ethical implications of AI integration are complex and multifaceted, and there is a vital need to balance leveraging AI for its transformative potential with upholding ethical standards, patient rights, and social equity. AI has the potential to improve patient care by optimizing clinical documentation and allowing physicians to spend more time with their patients, shifting toward more patient-centered care facilitated by technology. This challenge calls for a collaborative effort among healthcare leaders, legal counsel, and other stakeholders to ensure that AI is integrated into healthcare practice in a manner aligned with the highest ethical and professional standards. Addressing these five critical areas (patient privacy and data security, accuracy and reliability of AI-assisted documentation, bias and fairness, transparency and explainability, and legal and ethical responsibility) is essential for the responsible adoption of AI in healthcare, ultimately benefiting patients, physicians, and the broader healthcare system.

Footnotes

1. https://www.federalregister.gov/documents/2024/05/06/2024-08711/nondiscrimination-in-health-programs-and-activities

2. Frequently Asked Questions related to Coverage Criteria and Utilization Management Requirements in CMS Final Rule (CMS-4201-F)

3. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
