This digest covers key virtual and digital health regulatory and public policy developments during May and early June 2024 in the United Kingdom and the European Union.
Of interest, artificial intelligence (AI) safety has been in focus over the past month, including with the publication of the interim International Scientific Report on the Safety of Advanced AI. International collaboration in this area is increasing: world leaders met at the AI Summit in Seoul, and the UK government recently announced a collaboration on AI safety with Canada, supplementing its existing commitment with France. Further, the UK launched Inspect, an AI safety evaluations platform available to the global community. Meanwhile, the EU has established an AI Office to oversee the implementation of the AI Act, and the Medicines and Healthcare products Regulatory Agency (MHRA) has launched its AI Airlock to address novel challenges in the regulation of artificial intelligence as a medical device (AIaMD).
Regulatory Updates
Council of the European Union Adopts the AI
Act. On May 21, 2024, the Council of the European Union formally adopted the Artificial Intelligence
Act (AI Act). Following a lengthy negotiation period since the
initial proposal by the European Commission (EC) in April 2021, the
legislative process for the world's first binding law on AI is
nearing its conclusion. For further details on the negotiations
surrounding the text of the AI Act, see our
January 2023 Advisory, and our
April 2024 digest for details on the agreed provisions of the
AI Act.
The AI Act will enter into force 20 days after its publication in the
EU's Official Journal, and will apply two years after that,
with some exceptions for specific provisions.
The EC also put forward the AI Pact, a voluntary initiative intended to
encourage companies to comply with the requirements of the AI Act ahead of
its full implementation.
Establishment of the European AI Office. On May 29, 2024, the EC established the Artificial Intelligence Office (AI Office), following the adoption of the Commission Decision establishing the AI Office on February 14, 2024, as mentioned in our April 2024 digest. The AI Office will be responsible for:
- Ensuring the coherent implementation of the AI Act: supporting the governance bodies in EU Member States and directly enforcing the rules for general-purpose AI models
- Coordinating the drawing up of state-of-the-art codes of practice: conducting testing and evaluation of general-purpose AI models, requesting information, and applying sanctions
- Promoting an innovative EU ecosystem for trustworthy AI: enabling access to AI sandboxes and real-world testing
- Ensuring a strategic, coherent, and effective European approach on AI at the international level
The AI Office is currently:
- Preparing guidelines on the AI system definition and on the prohibitions
- Getting ready to coordinate the drawing up of codes of practice for the obligations for general-purpose AI models
- Overseeing the AI Pact, which allows companies to engage with the EC and stakeholders regarding the implementation of the requirements of the AI Act ahead of its application
The first meeting of the AI Office is expected at the end of June 2024.
Council of the European Union Adopts the Extension to IVDR Transition Periods and Accelerated Launch of Eudamed. On May 30, 2024, the Council of the European Union formally adopted the regulation amending the Medical Devices Regulation (EU) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR), as applicable, to extend the transition provisions for certain in vitro diagnostic medical devices under the IVDR; to allow for a gradual rollout of Eudamed, so that certain modules will become mandatory from late 2025; and to introduce a notification obligation in the case of an interruption in the supply of a critical device. The details are discussed in our February 2024 digest and in our February 2024 blog post. The regulation will enter into force following publication in the EU's Official Journal.
Launch of MHRA AI Airlock Regulatory Sandbox. On May 9, 2024, the MHRA launched the AI Airlock, a new regulatory sandbox for AIaMD products. The aim of the AI Airlock is to identify the regulatory challenges posed by standalone AIaMD. The MHRA has created a platform through which regulators, manufacturers, and other relevant stakeholders can bring their expertise together and work collaboratively to understand and mitigate the novel risks associated with these products. A small number of real-world AIaMD products will be assessed to identify possible regulatory issues that could arise when AIaMD products are used for direct clinical purposes within the National Health Service (NHS). The AI Airlock follows the regulatory sandbox model, but it is described as differing from other regulatory sandboxes because of the collaboration between the MHRA, the Department of Health and Social Care (DHSC), the NHS AI Lab, NHS England, and UK Approved Bodies. You can read more in our May 2024 blog post.
Launch of UK AI Safety Evaluations Platform, Inspect. On May 10, 2024, the UK's Department for Science, Innovation and Technology (DSIT) and the AI Safety Institute launched a new AI safety testing platform called Inspect. Inspect is a software library which enables innovators to assess specific capabilities of their technologies, for example core knowledge, ability to reason, and autonomous capabilities, and then generates a score based on the results. The platform is open-source and available to the global AI community with the aim of enhancing the consistency of safety evaluations of AI models across the world.
MHRA Publishes Proposals for International Recognition of Medical Devices. On May 21, 2024, the MHRA proposed the adoption of a procedure to recognize the approvals and certifications of medical devices from certain international regulators. The aims are to give patients in Great Britain faster access to medical devices, to avoid duplicative assessments of devices in the UK, and to allow the MHRA to focus its resources on innovative devices, some of which are excluded from the proposed scheme. In particular, the following are excluded:
- Software as a Medical Device (SaMD) (including AIaMD) products that do not satisfy the MHRA's intended purpose guidelines
- SaMD (including AIaMD) products approved via a route which relies on equivalence to a predicate (i.e., the U.S. 510(k) pathway)
The relevant regions from which certificates and approvals would potentially be recognized are Australia, Canada, European Economic Area countries, and the U.S. In order to benefit from the scheme, the device must meet certain eligibility criteria, including that the labelling and packaging are in English, and the device must be in "all aspects" the same as that originally approved or certified by the recognized regulator. The proposed regime provides four different access routes. The applicable route will depend on, for example, the classification of the device.
You can read more about this topic in our blog post from May 2024.
UK Leading the Way on AI Safety — Interim
International Scientific Report on the Safety of Advanced AI
Published. The interim International Scientific Report on
the Safety of Advanced AI was published on May 17, 2024. Commissioned by the
UK government, the report is an independent piece of research produced
by experts from more than 30 countries, the EU, and the UN.
It focuses on general-purpose AI, setting out its capabilities and
risks, and how those risks may be mitigated. Publication comes
ahead of the AI Summit in Seoul (as mentioned in our
May 2024 digest), hosted jointly by the UK and South Korea,
where world leaders subscribed to the Seoul Declaration, committing
to cooperation and collaboration on thresholds for significant AI
risks and on safety testing.
Through commissioning the report and jointly hosting the AI Summit,
the UK is positioning itself as a leader in AI safety. The UK also
recently announced a collaboration with Canada, whereby
joint research in AI safety will be undertaken. This is in addition
to the UK's collaboration with France, as discussed in our
March 2024 digest.
Meanwhile, leading AI developers agreed to the Frontier AI Safety Commitments,
under which they will publish risk assessments of their
cutting-edge AI ahead of the AI Action Summit in France.
Report on AI Governance Published. On May 28,
2024, the House of Commons Science, Innovation and Technology
Committee published its report on AI governance. The committee
recommends that the government be ready to introduce specific
legislation in this area in case regulators' current powers and
the voluntary codes of practice prove ineffective. This will depend
upon whether the sectoral regulators, such as the MHRA, can
implement the government's overarching principles and keep pace
with innovation. The report urges the government to assess whether
sectoral regulators' powers are sufficient to address risks of
AI. Further, it recommends examination of how enforcement can be
coordinated among sectoral regulators, identifying any lack of
clarity or gaps in powers. The government must also ensure that
regulators have sufficient resources to investigate and enforce in
relation to the development of AI. The MHRA, together with various
other health authorities and
oversight bodies, submitted joint evidence that was taken into
account in the report. They submitted that AI governance in health
is generally strong, although different opinions can emerge on
issues that cut across multiple bodies' remits.
Research Report and Updates on the MHRA-NICE Partnership
Into Digital Mental Health Technologies. On May 3, 2024,
the MHRA published a research report into the public's
perspectives of the benefits, risks, and applicability of digital
mental health technologies (DMHT). The report was commissioned by
the MHRA and the National Institute for Health and Care Excellence
(NICE) as part of a joint three-year partnership funded by the
Wellcome Trust looking to inform the future regulatory and
evaluation framework for DMHT.
On May 7, 2024, the MHRA also published an update on other aspects of the
DMHT partnership with NICE. The MHRA reports that it has concluded
its work on mapping out the landscape of available DMHTs and their
key characteristics, and exploring the key challenges for DMHT
across the regulatory and evaluation pathway. That work has led to the
development of a conceptual framework for categorizing DMHTs and
clearer proposals for how DMHTs qualify as SaMD. This work has been
submitted for publication and sets up future work to consider the
classification of DMHTs as SaMD and clinical evidence and
post-market surveillance requirements. To receive future updates on
the project, please register your interest.
Private Members' Bill on AI Regulation Has Been
Dropped. In previous digests, we described how the Artificial Intelligence (Regulation) Private
Members' Bill was progressing through the House of Lords
with its second reading. The bill sought to place AI regulatory
principles on a statutory footing and establish a central AI
authority to oversee the regulatory approach. This approach
differed from that proposed by the UK government, under which core
regulatory principles are set out in guidance and applied by
existing regulatory authorities in their individual
sectors. Although the bill passed its third reading in the House of
Lords and was subsequently sent to the House of Commons on May 10,
2024 to be scrutinized, the bill has now been dropped due to the
announcement of the general election on July 4, 2024. It will be
interesting to see whether a similar proposal is put forward or
whether the current government's more flexible approach to the
regulation of AI will change when the new government is formed.
Privacy Updates
Publication of the ICO's Strategic Approach to AI. On May 1, 2024, the Information Commissioner's Office (ICO) published its strategic approach to AI. Like the MHRA's strategic approach, which was discussed in last month's digest, this was published in response to the February 1, 2024 letter from the Secretaries of State of DSIT and DHSC. The ICO explains that the principles outlined in the government's white paper largely mirror the data protection principles that the ICO already regulates, and sets out the work it has done, and plans to do, to implement these principles:
- Publication of guidance. The ICO has published a range of guidance on how data protection law applies to AI: AI and data protection, automated decision-making and profiling, explaining decisions made with AI, and an AI and data protection toolkit. It also tracks the latest developments; for example, it published a report on the impact of neurotechnologies and neurodata on privacy, and is holding a consultation series on generative AI. The ICO plans to update the guidance on AI and data protection and automated decision-making in spring 2025 to incorporate the Data Protection and Digital Information Bill once the bill has passed.
- Provision of advice and support. The ICO offers advice services for AI innovators through its regulatory sandbox, innovation advice service, innovation hub, and consensual audits. It is currently participating in a pilot of the AI and Digital Hub, which allows innovators to put complex questions to multiple regulators simultaneously, and it will be testing new regulatory sandbox projects in the coming months, such as personalized AI for those affected by cancer.
- Regulatory action. The ICO uses its enforcement powers to promote compliance and safeguard the public.
Finally, the ICO explains how it collaborates with other regulators, government, standards bodies, and international partners to promote regulatory coherence.
UK Government Calls for Views on New Voluntary Cyber
Security Codes of Practice and the Development of a Global AI
Security Standard. On May 15, 2024, the UK government announced two new voluntary codes of practice
in the cyber security space: the AI Cyber Security Code of Practice
and the Code of Practice for Software Vendors. These codes
supplement others already in use, such as the Code of Practice for app store operators and
developers. The new codes are intended to assist AI and
software developers to improve cyber security by encouraging them
to ensure that their products can withstand attempts at hacking,
tampering, and sabotage. In addition, the AI Security Code sets out
measures that can be taken by various entities across the supply
chain to improve the security of AI products. It aims to increase
confidence among AI users across a broad range of industries, and
it is hoped that, in turn, this will boost efficiency and encourage
economic growth.
The AI Cyber Security Code of Practice is intended to open up
discussion with a wide range of stakeholders, including industry,
and to lay the foundation for an eventual global standard on
AI security. The government is seeking views on the codes, and on its
intention to develop a global AI security standard, until August
9, 2024.
Reimbursement Updates
Consultation Open on Fast-Track MedTech Funding. On May 23, 2024, NICE and the NHS announced proposals to allow MedTech developers to gain access to NHS funding under a new fast-track route for clinically effective and cost-effective products. This would allow the NHS to introduce "game-changing products" recommended by NICE on a large scale. The new pathway aims to ensure that patients can benefit from the best products, devices, digital technologies, or diagnostic innovations, and to provide greater certainty for MedTech developers. The pathway has been developed according to five guiding principles:
1. It is developed in coordination with NICE, focusing on high-impact products
2. It should support existing and emerging technologies
3. It will include a mechanism for automatic identification of funding for technologies that are clinically effective and cost-effective, to support their adoption on the NHS
4. It should enable change in clinical practices and services
5. It should support bias identification and mitigation
The consultation is open for feedback from
patients, clinicians, academics, and industry until August 15,
2024.
IP Updates
UK Intellectual Property Office Releases Updated
Guidance on the Examination of Patents Involving Artificial Neural
Networks. We reported on developments in Emotional
Perception AI Ltd v. Comptroller-General of Patents, Designs
and Trade Marks [2023] EWHC 2948 (Ch) in our
December 2023 and
February 2024 digests, and highlighted, for readers and developers
of digital health products that use artificial neural networks (ANNs),
the resulting shift in approach in the UK Intellectual Property Office
(UKIPO) Manual of Patent Practice to ensure that examiners do not
reject inventions using ANNs under the "program for a
computer" exclusion from patentability.
Since then, on May 7, 2024, the UKIPO has updated its guidance on the examination of patent
applications relating to artificial intelligence inventions (the AI
Guidance). The AI Guidance summarizes the UKIPO's position on
when an AI invention makes a technical contribution and when it is
excluded from patent protection.
To reflect Emotional Perception, the AI Guidance confirms
that an invention involving an ANN (whether implemented in hardware
or software) is not a computer program as such, and is therefore not
excluded from patent protection under the computer program exclusion,
regardless of whether it makes a technical contribution.
However, we infer from the tone and content of the AI Guidance that
the UKIPO is not in agreement with the outcome in Emotional
Perception. The AI Guidance notes that examiners are
encouraged to consider whether other exclusions from patentability
might apply instead.
Emotional Perception has, for now, provided the
opportunity for a broader scope of inventions using ANNs to be
patentable, but this may be short-lived if the Court of Appeal
reverses the decision of the High Court. The Court of Appeal heard
the appeal on May 14-15, 2024, and a decision is likely to be
provided before the end of July 2024.