ARTICLE
20 August 2024

Pharmaceutical Advertising: The EU AI Act: Impact On The Life Sciences Industry

Arnold & Porter
  1. Introduction

We are in an era of exploration for artificial intelligence (AI), where companies are keen to utilise the perceived benefits of the new technology, and regulators are racing to develop a framework to offer both guidance to industry and enable the regulators to assess the AI technology. On 12 July 2024, the European Union met that challenge when the EU Regulation laying down the first harmonised rules on AI (referred to as the "AI Act") was published in the Official Journal, setting in train the implementation of this new regulatory framework. The AI Act entered into force on 1 August 2024, with a staggered implementation of different parts of the regulation.

This landmark piece of legislation is the first specifically intended to regulate AI. It has been met with both hesitation and enthusiasm, and the full impact for companies developing AI technologies is still to be determined.

The AI Act is so-called "horizontal legislation", meaning it applies across industries, and its impact will be felt from agriculture to aerospace and beyond. In this chapter, we discuss some key questions on the application of the AI Act to the life sciences industries, with a particular focus on the regulatory framework for so-called "High Risk AI Systems", given the significant impact this will have on AI medical devices (AIMD).

  2. What Is an AI System?

The AI Act applies to systems that fall within its definition of "AI system". This is a machine-based system that, operating with a degree of autonomy, infers from the inputs it receives how to generate outputs such as predictions and decisions.

The AI Act takes a risk-based approach to regulating AI, meaning the extent of the provisions that apply to a technology is dependent on certain risk factors. Key pillars of that approach include: (a) a prohibition on certain harmful AI practices;1 (b) a new regulatory framework for AI systems considered to be high risk (High Risk AI Systems); and (c) limited regulatory obligations for general AI systems deemed to have a transparency risk, but otherwise considered to present a low risk. In contrast, AI systems outside these categories are subject to very limited obligations, such as ensuring AI literacy.2

Importantly for life sciences companies, AIMD that undergo a conformity assessment procedure involving a Notified Body to be placed on the market in accordance with the EU Medical Devices Regulations (the MDR and IVDR)3 will be considered High Risk AI Systems for the purposes of the AI Act.

  3. When Will the AI Act Apply?

The AI Act will apply to AI systems placed on the market or put into service in the EU. The AI Act also has extra-territorial application where the output produced by the system is used in the EU.4 It is unclear how remote such use of an output can be – for example, whether this could include the eventual use in the EU of a medicine (the output) that was in part designed by an AI system outside the EU. This seems unlikely in the context of development of a medicinal product, but such remote use may be more important, for example, in the context of political campaigns, and so it is unclear how broadly this provision will be applied. In relation to AIMD, the provisions on scope are worded differently to the provisions on distance sales of a medical device in the Medical Devices Regulations,5 whereby services offered to EU residents using a medical device are caught by the MDR or IVDR. There is therefore some inconsistency in the application of the two regimes.

  4. When Are Medical Devices Regulated as High Risk AI Systems?

The focus of the AI Act is the new regulatory regime for High Risk AI Systems, which applies in addition to any sector-specific regulatory regime, such as the MDR or IVDR for medical devices. As noted above, an AI system that is a medical device, or is the safety component of a medical device,6 is a High Risk AI System where the applicable conformity assessment procedure under the MDR or IVDR requires the involvement of a Notified Body. Given the classification of software medical devices under the MDR and IVDR, virtually all AIMD will also be High Risk AI Systems.

AI systems used for certain other purposes deemed high risk are also within scope of the AI Act. These purposes include the evaluation of a person's eligibility for essential healthcare services and emergency triage of healthcare. These AI systems could be subject to the regulatory framework for High Risk AI Systems even if they do not qualify as a medical device.
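
Purely by way of illustration, the two routes into the High Risk category described above can be reduced to a simple decision rule. The following Python sketch is our own simplification, not text from the AI Act; the function name and purpose labels are hypothetical.

```python
# Illustrative sketch only: the two routes by which a life sciences AI
# system can be a High Risk AI System. Names and labels are hypothetical.

ANNEX_III_HEALTH_PURPOSES = {
    "eligibility_for_essential_healthcare_services",
    "emergency_healthcare_triage",
}

def is_high_risk_ai_system(is_device_or_safety_component: bool,
                           notified_body_required: bool,
                           intended_purpose: str = "") -> bool:
    # Route 1: an AIMD (or safety component) whose MDR/IVDR conformity
    # assessment requires the involvement of a Notified Body
    if is_device_or_safety_component and notified_body_required:
        return True
    # Route 2: a purpose listed in Annex III, even if not a medical device
    return intended_purpose in ANNEX_III_HEALTH_PURPOSES

# Software AIMD are almost always classified such that a Notified Body
# is required, so virtually all AIMD land in the High Risk category.
print(is_high_risk_ai_system(True, True))                    # True
print(is_high_risk_ai_system(False, False,
                             "emergency_healthcare_triage"))  # True
```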

  5. What Are the Obligations for High Risk AI Systems?

Like the MDR and IVDR, the regulatory framework for High Risk AI Systems set out in the AI Act is largely based on the New Legislative Framework approach, meaning many of the concepts will be familiar to medical device manufacturers. In many respects, these obligations duplicate, and overlap with, the requirements of the MDR and IVDR. Key areas of overlap include: pre-market conformity assessment; CE marking; risk management; obligations on the "provider" who is placing the AI system on the market to establish and maintain a quality management system; and post-market surveillance.

The duplication of requirements was one of the main concerns of industry during the legislative process. Not only does this create an additional regulatory burden for AIMD manufacturers, but there is also the risk of inconsistencies in the application of corresponding requirements under the two regimes. For example, a key issue raised during the legislative process was the potential for dual conformity assessment procedures. The final text of the AI Act provides that the conformity assessment for AIMD under that regulation "shall be" part of the conformity assessment procedure under the MDR or IVDR, as applicable.7 The AI Act also gives the provider the option to integrate aspects of this procedure, including producing a single set of technical documentation and a single declaration of conformity, as well as integrating testing under the two regimes8 (though see further below for issues regarding conducting clinical testing of AI systems). A manufacturer's Notified Body under the MDR or IVDR can also act as the Notified Body under the AI Act, provided the Notified Body's competence has been assessed under the AI Act.9 How these provisions work in practice and whether they address the concerns raised by industry will largely depend on guidance from the authorities and the approach of Notified Bodies to the AI Act. This current uncertainty is unsatisfactory for the devices industry.

A particular area of concern is whether the relevant harmonised standards, compliance with which, under both regimes, gives rise to a presumption of conformity with certain statutory requirements,10 will diverge under the two regimes. The AI Act addresses this concern by requiring that when the Commission issues a request to a standards-setting body, it must specify that the new standards must be consistent with existing sectoral harmonised standards.11 However, it is unclear if that will be sufficient to alleviate these concerns. The European medical devices industry body, MedTech Europe, has called for the body responsible for developing harmonised standards under the AI Act to consult with relevant sectoral groups and industry stakeholders when producing these standards.12

Other relevant obligations to note include:

  • Data, data governance and elimination of bias: A key objective of the regulatory framework for High Risk AI Systems is to ensure the accuracy of AI systems and to mitigate the risk of bias in their outputs. This is achieved primarily through data governance and by setting general standards for the data used to train, validate and test AI systems.13 Thankfully, the amendments introduced during the legislative process mean the required standards are more qualified than those originally proposed by the Commission, which were impractical, and likely impossible, to meet as they required data sets to be "free of errors and complete". The final version requires that data sets be, to the best extent possible, free of errors and complete in view of the intended purpose.14 AIMD providers will therefore need to ensure they have sufficiently representative data sets, but may also be able to address deficiencies in the data through carve-outs in the intended purpose in the declaration of conformity and the labelling. This is similar to the risk management process under the MDR/IVDR (see the illustrative sketch after this list).
  • Transparency and human oversight: The AI Act contains various measures aimed at ensuring effective human oversight of High Risk AI Systems given the perceived risks of AI operating without this supervision. This includes obligations on the provider of AIMD to ensure the deployers of the system (discussed further below) can exercise this oversight. This must partly be addressed through the design of the system itself, such as human interface tools, but also through the provision of information to the user. To this end, the AI Act sets out detailed transparency obligations requiring providers to supply the information necessary for users to appropriately operate the system and interpret its output.
  • Obligations on additional parties: The primary obligations under the AI Act are on providers who develop AI systems. However, unusually for a product regulatory framework, the AI Act places obligations directly on users in that it also applies to "deployers" who use AI systems in a professional capacity.15 In the life sciences and healthcare context, this is most likely to apply to healthcare providers such as hospitals. This will entail a new regulated role for hospitals, one with which they are not familiar, although the obligations on providers relating to transparency and human oversight discussed above are designed to help deployers meet these new obligations. Deployers are subject to obligations to monitor AI systems they deploy and to directly report certain risks and incidents through the market surveillance mechanisms. The latter does of course mean that AIMD providers may not always have control over notifications of incidents involving their products to authorities, although hospitals and healthcare professionals already report adverse events for medical devices, and so this concept will hopefully not add too much additional burden on providers (manufacturers) or deployers (users) of AI systems. As under the MDR and IVDR, the AI Act also imposes direct obligations on importers and distributors, though there may be difficulties in identifying the entities that perform these roles in the context of AI.
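
To make the data-adequacy standard in the first bullet above more concrete, the following is a minimal sketch of a hypothetical data governance gate. The thresholds, field names and subgroup logic are invented for illustration; Article 10 sets a qualitative standard, not numeric ones.

```python
# Illustrative sketch only: a hypothetical gate reflecting the Article 10
# standard that data be, "to the best extent possible, free of errors and
# complete in view of the intended purpose". All numbers are invented.

from dataclasses import dataclass

@dataclass
class DatasetReport:
    missing_rate: float           # share of missing values in the data set
    known_error_rate: float       # share of records failing validation checks
    covered_subgroups: frozenset  # patient subgroups represented in the data

def adequate_for_purpose(report: DatasetReport,
                         required_subgroups: frozenset,
                         max_missing: float = 0.02,
                         max_errors: float = 0.01) -> bool:
    """True if the data set is documented as sufficiently complete,
    error-free and representative for the stated intended purpose."""
    gaps = required_subgroups - report.covered_subgroups
    return (report.missing_rate <= max_missing
            and report.known_error_rate <= max_errors
            and not gaps)

report = DatasetReport(0.01, 0.005, frozenset({"adults", "over_65"}))
print(adequate_for_purpose(report,
                           frozenset({"adults", "over_65", "paediatric"})))
# False: paediatric data is missing, so the provider must either remedy the
# data set or, as the chapter notes, carve paediatric use out of the
# intended purpose in the declaration of conformity and labelling.
```
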
  6. What Is the Concern About Pre-Market Testing of AIMD?

Arguably the most concerning feature of the AI Act for developers and legal manufacturers of AIMD is the apparent lack of a lawful pathway for conducting clinical investigations of general AIMD (or performance studies for IVD AIMD) in the EU. This appears to be an oversight of the legislators, and – if not clarified – could have a paralysing effect on European R&D of AIMD.

The AI Act prohibits the placing on the market or putting into service of High Risk AI Systems unless they have been conformity assessed and CE marked in accordance with the AI Act. This is the same as under the MDR/IVDR. However, AIMD must undergo clinical investigations under the MDR or a performance study under the IVDR, as applicable, to demonstrate its safety and performance in the intended purpose before obtaining the relevant CE mark. In the MDR and IVDR, devices intended for clinical investigations or performance studies respectively are carved out of the definition of "placing on the market". Therefore, the general rule that all devices must undergo conformity assessment and be CE marked before they are placed on the market does not apply to devices in the context of clinical investigations and performance studies. This allows appropriate and necessary testing to be undertaken without breaching the legal provisions.

However, no such exclusion is included in the AI Act. As such, it is ostensibly unlawful to perform a clinical investigation on a device that has not already been demonstrated to comply with the AI Act. This puts the AI Act into conflict with the MDR and IVDR.
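
The conflict between the two regimes can be stated as simple decision logic. The following sketch is our own illustration of the gap described above, not a statement of how regulators will apply the texts.

```python
# Illustrative sketch only: the apparent conflict between the MDR/IVDR and
# the AI Act for devices in clinical investigations / performance studies.

def ce_mark_required_under_mdr(in_clinical_investigation: bool) -> bool:
    # MDR/IVDR carve study devices out of "placing on the market",
    # so study devices need not be conformity assessed and CE marked
    return not in_clinical_investigation

def ce_mark_required_under_ai_act(in_clinical_investigation: bool) -> bool:
    # The AI Act contains no equivalent carve-out, hence the conflict
    return True

device_in_study = True
print(ce_mark_required_under_mdr(device_in_study))     # False
print(ce_mark_required_under_ai_act(device_in_study))  # True
```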

There are certain exceptions from the scope of the AI Act, which apply to (i) AI systems or AI models "specifically developed and put into service for the sole purpose of scientific research and development",16 which would not apply to AIMD intended to be placed on the market as a medical device; and (ii) any research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service, excluding testing in real world conditions17 – i.e., excluding use in clinical investigations or performance studies given they are placed on the market during the clinical investigation and are available for use by third parties in "real world conditions".

The AI Act also provides two limited pathways for the temporary testing of High Risk AI Systems in "real world conditions" that do not amount to placing on the market or putting into service. However, even the pathway most likely to be useful for clinical testing of AIMD, that of "regulatory sandboxes", has a variety of practical flaws that mean it may not be practicable for all clinical testing.

The two possible pathways for clinical testing of AIMD are:

  1. Regulatory Sandboxes: The AI Act requires each Member State to establish one or more "regulatory sandboxes" that provide a controlled environment for the development and testing of AI systems.18 However, it is unclear whether these regulatory sandboxes can provide an appropriate route for conducting clinical investigations and performance studies for medical devices in the EU, or whether competent authorities have the capacity to operate sandboxes on the scale necessary to meet the demand for clinical investigations under the MDR. Nor is it clear whether sandboxes will even be appropriate for AI in the highly regulated life sciences and healthcare setting, given that operating a sandbox may require the cooperation of the healthcare system.
  2. Testing in real world conditions outside of regulatory sandboxes: The AI Act allows limited testing of High Risk AI Systems in real world conditions outside of regulatory sandboxes,19 but only where the system is for certain purposes set out in Annex III of the AI Act. Rarely would a medical device have one of these purposes and thus be able to use this pathway for real world testing. One possible example of an AIMD that could foreseeably fall within its scope is a product for mental health diagnosis and treatment, which could be considered to have the Annex III purpose of "emotion recognition".
  7. What Will Be the Impact on Medicines R&D?

The AI Act will have little impact on AI drug discovery platforms used for drug design or candidate and target identification at the pre-clinical stage. These systems are not medical devices and do not otherwise meet the criteria to be classified as High Risk AI Systems. They will therefore only be subject to the limited obligations for general AI systems, such as ensuring staff involved in deploying the system have appropriate levels of AI literacy.

AIMD used in the context of medicines R&D, such as that used in clinical trials, will be subject to the obligations on High Risk AI Systems. As set out above, this will create an extra layer of complexity where that AIMD is not already CE marked and the intention is to conduct a combined medicine clinical trial and device clinical investigation. It seems unlikely that this would be possible in an AI regulatory sandbox, and it may, based on the current wording, be necessary to conduct the relevant testing in sequence so that the CE marked AIMD can then be used within a clinical trial. Despite the stated efficiencies of AI, this may in fact slow down drug development. This cannot have been the intention of the legislators and is likely to be "clarified" in due course.

  8. What Are the Penalties for Non-Compliance With the AI Act?

Infringements of the regulatory framework for High Risk AI Systems can result in fines of up to the higher of: (i) EUR 15 million; or (ii) 3% of worldwide turnover in the preceding financial year.20 These fines could, in principle, apply in addition to any enforcement under other regimes, such as the MDR, IVDR or GDPR. The regulator is required, when setting the quantum of any fine, to give appropriate regard to any administrative fines imposed under other regulatory regimes for the same activity.
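
The fine cap is simple arithmetic, sketched below purely for illustration (the function name is ours; the figures are those of Article 99(4) as described above).

```python
# Illustrative arithmetic only: the maximum fine for infringements of the
# High Risk AI System framework is the higher of EUR 15 million or 3% of
# worldwide turnover in the preceding financial year (Article 99(4)).

def max_fine_eur(worldwide_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * worldwide_turnover_eur)

print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 60,000,000 (3% applies)
print(f"{max_fine_eur(100_000_000):,.0f}")    # 15,000,000 (floor applies)
```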

  9. What Happens Next?

The regulatory framework for High Risk AI Systems will become applicable on 2 August 2027,21 meaning that from this date, only AI systems that have undergone the necessary conformity assessment procedure can be placed on the market. Certain AI systems are "grandfathered" and not subject to the High Risk AI Systems regulatory regime. However, this only applies to AI systems placed on the market or put into service before 2 August 2026, provided they are not subject to any "significant change" in their design after that date.22 The AI Act also empowers the Commission to adopt a large number of implementing acts setting out the detailed requirements in certain areas, such as regulatory sandboxes. These implementing acts therefore need to be drafted and adopted to fully implement the relevant parts of the AI Act. Guidance will also need to be drafted covering many aspects of the AI Act to help operators understand the applicable requirements. There is therefore still a large amount of detail to be provided under the Act.
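
The transitional rule described above reduces to date logic, sketched below for illustration. The function name is hypothetical, the dates are those cited in the text, and whether a design change is "significant" is a legal question the Act does not define numerically.

```python
# Illustrative sketch only: the grandfathering rule for existing AI systems.

from datetime import date

HIGH_RISK_RULES_APPLY = date(2027, 8, 2)  # Article 113(c)
GRANDFATHER_CUTOFF = date(2026, 8, 2)     # Article 111(2)

def needs_ai_act_conformity_assessment(placed_on_market: date,
                                       significant_change_since_cutoff: bool,
                                       today: date) -> bool:
    if today < HIGH_RISK_RULES_APPLY:
        return False  # the High Risk framework is not yet applicable
    grandfathered = (placed_on_market < GRANDFATHER_CUTOFF
                     and not significant_change_since_cutoff)
    return not grandfathered

# A legacy system placed on the market in 2025 with an unchanged design
# stays outside the High Risk regime even after 2 August 2027.
print(needs_ai_act_conformity_assessment(date(2025, 3, 1), False,
                                         date(2027, 9, 1)))  # False
```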

It is not clear that existing medical device Notified Bodies will be able to both be assessed for competence under the AI Act and scale up capacity to enable providers to obtain the necessary certification in time to meet the deadline. The experience of implementation of the MDR and IVDR does not give rise to optimism in this regard. That the transitional provisions under the AI Act only appear to allow for Notified Bodies to apply for notification under the AI Act from 2 August 2025 also points to difficulties in completing necessary conformity assessments on time. It seems likely that extensions will be needed to these deadlines.

It is likely that any Notified Body that is already competent to assess AIMD under the MDR and/or IVDR will seek to be assessed as able to perform conformity assessments under the AI Act – this is the stated intention of a number of well-known Notified Bodies under the device legislation. However, it is prudent for manufacturers of AIMD to confirm with their existing Notified Body as soon as possible that it does intend to perform assessments under the AI Act and to check on its progress on an ongoing basis. If that Notified Body does not intend to apply for designation under the AI Act, or if progress is insufficient, the manufacturer will need to apply to a new Notified Body to avoid its product becoming unlawful to supply at the end of the transition period. The fact that two separate Notified Bodies may be needed for AIMD will add to the complexity for manufacturers.

There is also the risk that Notified Bodies that perform conformity assessments for AIMD under the medical devices regulatory framework will need to spread their resources even more thinly. Medical devices are one of the few areas where there is already significant regulation of AI systems, and accordingly an area where Notified Bodies already have AI expertise. That expertise could therefore be redirected to the assessment of other AI systems, leading to further delays in the assessment of medical devices.

  10. What About the UK?

In contrast to the EU, the UK's approach under the previous Conservative government was to adopt a "pro-innovation" framework of principles, set out in guidance rather than legislation, to be implemented by regulatory authorities in their respective sectors, such as the Medicines and Healthcare products Regulatory Agency (MHRA) for medicines and medical devices. According to the then government, the aim was that this would enable the UK to remain flexible enough to deal with the speed at which AI is developing, while also being robust enough to address key concerns. The new Labour government elected in July 2024 has not indicated that it will materially deviate from this approach, although in its election manifesto it stated its intention to introduce binding regulation on the "handful" of companies developing the most powerful AI models and to ban certain "deepfakes". It is not clear which AI models the government considers to be the "most powerful" and therefore subject to regulation, or the extent of the requirements it intends to impose. Notably, legislation to govern AI was not included in the King's Speech on 17 July 2024, which set out the new government's legislative and policy agenda.

In line with the UK's approach so far, the MHRA has recently set out its strategic approach to AI23 and how it will regulate AIMD within the existing framework. As part of this, the MHRA has stated that the devices regime is unlikely to set out specific legal requirements beyond those being considered for software as a medical device. Instead, the MHRA intends to publish guidance on how AI fits into the regulatory regime. The MHRA has also launched a regulatory sandbox for software and AIMD called the "AI-Airlock"24 whereby regulators, manufacturers and other relevant stakeholders can bring their expertise and work collaboratively to understand and mitigate novel risks associated with these products.

There have been questions as to whether the UK's approach to date is behind the curve. Increasingly, commentators have suggested that a firmer approach is required. Moreover, there is a risk that by not setting out a clear regulatory framework now, regulators will have the difficult task of having to regulate AI systems once they are already on the market. However, the stated aim of the UK's approach has been that companies will use the UK as a launch country to test out – and invest in – innovative products and make use of the regulatory flexibility rather than having to meet the more stringent rules of the EU AI Act. It remains to be seen whether the new government will retain broadly the same approach to regulation of AI systems and how it will implement its commitment to regulating the "most powerful" AI models.

  11. So What Should Companies Do?

To prepare for the implementation of the AI Act, companies in the life sciences sector should:

  • analyse whether any systems they (intend to) commercialise or deploy meet the definition of AI systems in the AI Act and whether the product is a High Risk AI System;
  • assess their role in relation to that AI system and whether they are a provider, importer, distributor or deployer to determine what obligations may apply;
  • if they will be a provider of a High Risk AI System, ensure they have arrangements with a Notified Body that is likely to be assessed as competent to perform conformity assessments under the AI Act, with sufficient time to perform those assessments before the end of the transition period;
  • monitor for relevant updates on implementation of the AI Act such as the publication of implementing acts, guidance and standards;
  • engage with industry bodies to provide comments on implementing acts and guidance as necessary;
  • assess the adequacy of the data against the standards in the AI Act and put in place appropriate data governance measures; and
  • review their contractual arrangements with their supply chain and customers to ensure they and their customers can comply with any new obligations under the AI Act.

Footnotes

1 Article 5 of the AI Act.

2 Article 4 of the AI Act.

3 Regulation (EU) 2017/745 on medical devices (MDR) and Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR).

4 Article 2 of the AI Act.

5 Article 6 in each of the MDR and IVDR.

6 Defined as "component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property", Article 3(14) of the AI Act.

7 Article 43(3) of the AI Act.

8 Article 8(2) of the AI Act.

9 Second paragraph of Article 43(3) of the AI Act.

10 Article 40(1) of the AI Act.

11 Article 40(3) of the AI Act.

12 MedTech Europe, Medical technology industry perspective on the final AI Act, 13 March 2024, available at: [Hyperlink]

13 Article 10 of the AI Act.

14 Article 10(3) of the AI Act.

15 Article 3(4) and Article 26 of the AI Act.

16 Article 2(6) of the AI Act.

17 Article 2(8) of the AI Act.

18 Article 57 of the AI Act.

19 Article 60 of the AI Act.

20 Article 99(4) of the AI Act. Note, breach of the prohibition of specified AI practices, which are unlikely to be relevant to life sciences companies, can result in administrative fines of up to the higher of: (i) EUR 35 million; or (ii) 7% of worldwide turnover in the preceding financial year (Article 99(3) of the AI Act).

21 Article 113(c) of the AI Act.

22 Article 111(2) of the AI Act.

23 MHRA, MHRA's AI regulatory strategy ensures patient safety and industry innovation into 2030, 30 April 2024, available at: [Hyperlink]

24 MHRA, AI Airlock: the regulatory sandbox for AIaMD, 9 May 2024, available at: [Hyperlink]


Originally published in ICLG.com, 07/08/2024

