Welcome to the latest installment of Arnold & Porter's Virtual and Digital Health Digest. This digest covers key virtual and digital health regulatory and public policy developments during February and early March 2025 from the United States, United Kingdom, and European Union.
Artificial intelligence (AI) has been the focus this month, with certain provisions of the EU AI Act now in force and key guidance published by the European Commission. In addition, the much-criticized AI Liability Directive has been withdrawn by the European Commission. In the UK, the government published its AI Action Plan, setting out a proportionate, flexible regulatory approach to AI, and the Medicines and Healthcare products Regulatory Agency (MHRA) hosted an Innovation Showcase demonstrating how it is using digital technologies and AI throughout the regulatory lifecycle.
Regulatory Updates
First Provisions of the EU AI Act Now Apply. The first provisions of the EU Artificial Intelligence Act (EU AI Act) are now in effect. These provisions include the definition of what qualifies as an AI system, the AI literacy obligation (requiring companies that develop, place on the market, or use AI systems to ensure their users have a sufficient level of AI literacy), and the prohibited AI practices under the EU AI Act. The remaining provisions of the EU AI Act will apply in accordance with the transition timelines.
European Commission Publishes Guidelines Aimed at Companies Developing or Using AI Systems. The guidelines clarify the definition of an AI system and the prohibited AI practices under the EU AI Act. On the definition, the guidelines explain that only technologies that learn, reason, or adapt intelligently qualify as AI systems; traditional software, such as simple prediction models or basic data processing software, is excluded. On prohibited AI practices, the guidelines give concrete examples of what does and does not qualify as a prohibited practice. They also outline the responsibilities of companies whose activities fall within the prohibitions. Both sets of guidelines have yet to be formally adopted by the European Commission.
UK Government Publishes AI Opportunities Action Plan (the Report). The Report advocates a proportionate, flexible regulatory approach to AI. While life sciences is not its main focus, the Report recognizes that ineffective regulation could reduce uptake in sectors such as the medical sector, and features a number of examples of how AI can be used in health care. The Report recommends that the government appoint AI Sector Champions in key industries, including life sciences, to collaborate with industry and government on AI adoption plans. Further, all regulators (including the MHRA) should publish annual reports on how they have enabled AI-driven innovation and growth.
MHRA Innovation Showcase. Earlier this month, the MHRA hosted an Innovation Showcase, demonstrating the MHRA's work across the spectrum of innovation, with a focus on AI prototypes in the regulatory lifecycle. A number of use cases within the MHRA were demonstrated, highlighting a "prove by doing" approach to innovation. AI featured heavily in the use cases, including a generative AI assistant that responds to questions about the British Pharmacopoeia, the use of AI to assist in the assessment of clinical trial applications, and the use of AI to identify online sellers of counterfeit medicinal products. The MHRA intends to expand this approach into other areas where innovation and AI can lead to productivity gains.
UK Online Pharmacies Must Strengthen Safeguards for Supply of Medicines Via Telehealth Services. The UK's General Pharmaceutical Council published new guidance for registered pharmacies providing pharmacy services at a distance, including on the internet. The guidance introduces enhanced safety measures requiring prescribers to take additional steps to verify that the information a person provides to obtain medicines from an online pharmacy is accurate. Notably, medicines categorized as "high-risk" should not be prescribed on the basis of an online questionnaire alone. See our February BioSlice Blog for more information.
Liability Updates
European Commission Withdraws AI Liability Directive (AILD), as Confirmed in the European Commission 2025 Work Program and Annexes. Proposed by the European Commission in September 2022, the AILD aimed to ensure broader protection for damage caused by AI systems. It faced much criticism from members of the European Parliament (as well as industry), who argued it would add unnecessary regulatory burden. In particular, the EU AI Act and the new Product Liability Directive already establish a framework for the regulation of AI and provide redress for those who may suffer harm from AI; a further set of overlapping rules was seen as duplicative and as risking confusion.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.