1 August 2024

Virtual And Digital Health Digest - July 2024

Arnold & Porter


This digest covers key virtual and digital health regulatory and public policy developments during June and early July 2024 from the United Kingdom and European Union.

While it has been a relatively quiet month in the EU given elections to the European Parliament and in the UK (as well as in other countries across the EU), agencies across the globe have published important guidance on machine learning-enabled medical devices. This includes the UK Medicines and Healthcare products Regulatory Agency's (MHRA) guiding principles on transparency, published together with the U.S. Food and Drug Administration (FDA) and Health Canada, and the International Medical Device Regulators Forum's (IMDRF) consultation on its guiding principles on good machine learning practice (which itself follows similar guidance from the MHRA, FDA, and Health Canada in 2021). This demonstrates the increased importance of international standards in this area and the need for coordination between regulatory authorities to standardize guidance for these products.

Regulatory Updates

MHRA Publishes Guiding Principles on Transparency in Machine Learning-Enabled Medical Devices. On June 13, 2024, the MHRA published its guiding principles on transparency for machine learning-enabled medical devices, developed jointly by the MHRA, the FDA, and Health Canada. The principles are intended to promote transparency for machine learning-enabled medical devices (MLMDs), but should be taken into account for all medical devices.

The principles explain that effective transparency requires considering the relevant audience. Further, transparency for MLMDs is motivated by the safety and efficacy of the device and is crucial to patient-centered care.

The information that is relevant will vary depending on the MLMD. However, the principles provide guidance on the type of information that may be suitable, including how a device can be described clearly and accurately and how the safety of the device can be maintained throughout its lifecycle.

Where the information is placed and when it is communicated should also be considered. The principles suggest providing information to users in a responsive way, enabling information to be personalized, and implementing human-centered design principles in order to support transparency. This could mean, for example, involving relevant parties and users in design and development, and communicating with a suitable level of detail and language for the intended audience.

The International Medical Device Regulators Forum Opened a Consultation on Its Draft Guiding Principles on Good Machine Learning Practice for Medical Device Development. On July 1, 2024, the IMDRF published a consultation on its draft guiding principles on good machine learning practice for medical device development. The document sets out 10 principles that are intended to promote the safe and effective development of high-quality AI medical devices. They address a broad range of areas, such as demonstrating and measuring the performance of the device, software engineering practices, and transparency.

At the forefront of the principles is the importance of understanding the intended use and purpose of the device. This theme can be seen throughout the document, including in relation to the choice of reference standards, model and design, and the information to be provided to users. Users of the AI-enabled medical device must be provided with information that is clear and appropriate for their needs. This is in line with the approach taken by the MHRA in its guiding principles on transparency in machine-learning enabled medical devices (discussed above).

The principles also highlight the importance of using appropriate datasets. Datasets should be representative of the whole patient population, and data that have been used to train the algorithm should not be used for testing.
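
To illustrate the separation principle in practice, a minimal sketch is set out below (assuming a tabular dataset and the scikit-learn library; the dataset, model, and split parameters are illustrative and are not drawn from the IMDRF document):

# Minimal sketch of the train/test separation principle: the model is
# fitted only on the training split and evaluated only on the held-out
# test split. Dataset, model, and parameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in for a clinical dataset (features X, binary outcome y).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A stratified split keeps the outcome distribution of the held-out set
# representative of the full dataset; test data are never used in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Performance is reported only on data the model has never seen.
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))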

The consultation is open until August 30, 2024.

It is also worth noting that in 2021, the MHRA, the FDA, and Health Canada published a joint statement identifying 10 guiding principles to help inform the development of Good Machine Learning Practice. There is some similarity between this document and the recent consultation. You can read more about this in our November 2021 Blog.

IP Updates

Global Patent Dispute Over Continuous Glucose Monitoring Technology Heats Up in Europe. An ongoing global dispute between Abbott and manufacturers and distributors of continuous glucose monitoring (CGM) devices and technology, used to remotely measure blood glucose levels, has led to a string of decisions, including from the Unified Patent Court (UPC) and the UK Patents Court. There are multiple claims of patent infringement and counterclaims for revocation in the U.S. and Europe. In this digest, we provide a summary of one recent decision from the UK Patents Court and two from The Hague local division of the UPC.

UK Patents Court Rejects Abbott's Claim Against Dexcom for Continuous Glucose Monitoring Device Patent Infringement and Invalidates Abbott's Patent for Obviousness. In a UK Patents Court decision dated June 28, 2024, Justice Mellor rejected Abbott's patent infringement claim against Dexcom in relation to Dexcom's G7 CGM device (launched in October 2022), which competes with Abbott's flagship integrated CGM product. The judge further held that Abbott's patent was invalid for obviousness over a prior U.S. patent.

The judge's conclusion that Dexcom's device was non-infringing largely hinged on the interpretation of the wording in claim 1, "coupled to the housing." This wording related to the physical relationship between the introducer needle and the device housing. The question was whether it should be interpreted narrowly (as argued by Dexcom), as the needle and the housing needing to be "physically yoked together so that they move together" via a manual process, or broadly (as argued by Abbott), as the needle and housing merely being in contact to enable either manual or automatic movement. Applying a purposive construction, the judge agreed with Dexcom that "coupled" should be interpreted as limiting the claim to a manual insertion of the needle, meaning that Dexcom's G7 CGM device did not infringe Abbott's patent.

The judge also concluded that Abbott's patent was invalid for obviousness over a U.S. patent (Heller), which described a device similar to Abbott's. The judge found that the process through which the skilled team would go to get from Heller to the claims of the patent was not a series of "steps" between different inventions, as argued by Abbott, but rather an inevitable part of any practical implementation by a notional skilled team of the teaching disclosed in Heller. Any differences between the CGM devices described in Heller and in the Abbott patent claims were minimal and merely routine design implementation choices.

This case illustrates the impact a few words can have on claim construction and the value of paying close attention to claim drafting.

UPC Preliminary Injunction Granted Against Sibio's CGM Device. On June 19, 2024, the Unified Patent Court Local Division of The Hague handed down separate rulings on two preliminary injunction (PI) applications made by Abbott against Sibio Technology Limited (Sibio) in relation to continuous glucose monitoring technology. The glucose monitors in question were in vivo analyte monitoring systems, which use an insertable in vivo sensor and sensor electronics in an integrated unit, along with a display device running proprietary software (typically a smartphone).

In the first ruling, the court denied the PI, concluding that it was more likely than not that claim 1 of the patent-in-suit would be held invalid on the basis that amendments made by Abbott to the patent extended beyond the content of the original application and so constituted unallowable "added matter." The UPC applied the "gold standard" disclosure test, acknowledging that this is the standard test used in many UPC member states. Under this test, any amendment to a European patent application or European patent relating to the disclosure (i.e., the description, claims, and drawings) may only be made within the limits of what a skilled person would derive directly and unambiguously, using common general knowledge, and seen objectively and relative to the date of filing, from the whole of the application as filed. Interestingly, this patent had been opted out of the UPC's competence, but the opt-out was withdrawn in March 2024.

In the second ruling, Abbott asserted another patent and, in this case, the court (made up of the same three legally qualified judges, as well as a technical judge) granted the PI in Germany, France, The Netherlands, and Ireland. The fact that the court deemed itself competent to grant a PI in Ireland is notable, as Ireland has yet to ratify the UPC Agreement. Abbott's submissions mentioned that the patent-in-suit was in force in the UK. However, the court noted that, since the UK is no longer a UPC contracting member state, it did not understand the PI application to cover the UK. These rulings are an interesting case study in how multiple PI applications can be deployed to stop alleged infringers of digital health products from marketing and selling such products.

European AI Act: Copyright Implications. Digital health companies offering AI-related goods and services should be preparing to ensure that they will be compliant with the upcoming EU Artificial Intelligence Act (the EU AI Act), as discussed in previous digests.

The EU AI Act contains a specific set of rules that apply to general purpose AI (GPAI) models, i.e., those trained on a large amount of data using self-supervision at scale, that display significant generality, and that are able to competently perform a wide range of distinct tasks. Examples of GPAI models include generative AI applications such as ChatGPT. The provisions relating to GPAI models will not enter into force until 12 months after the EU AI Act itself enters into force.

The obligations on providers of GPAI models include putting in place policies to comply with EU copyright law irrespective of the jurisdiction in which the copyright-relevant acts underpinning the training of the GPAI model take place (Art 53(1)(c) and Recital 106).

The EU AI Act also confirms that the process of scraping copyright works to train AI models benefits from the text and data mining exception in Article 4 of the Copyright in the Digital Single Market Directive 2019/790, subject to the right of rights holders to expressly reserve their rights to enforce copyright (i.e., they can "opt out," so long as they do so explicitly).
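
In practice, one common machine-readable way such a reservation is expressed for online content is a robots.txt directive. The sketch below (which assumes robots.txt is the opt-out signal and uses a hypothetical crawler name; the EU AI Act does not prescribe a specific mechanism) shows a crawler checking for an explicit reservation before collecting content for training:

# Minimal sketch: before collecting online content for AI training, check the
# site's robots.txt for an explicit reservation. Assumes robots.txt is the
# machine-readable opt-out signal; the EU AI Act does not prescribe one.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleTrainingBot"  # hypothetical crawler name


def may_collect(url: str) -> bool:
    """Return True only if the site's robots.txt does not disallow this crawler."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(USER_AGENT, url)


if __name__ == "__main__":
    print(may_collect("https://example.com/articles/some-page"))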
