25 September 2024

AI: EMA Publishes Guiding Principles On The Use Of Large Language Models (LLMs)

Arnold & Porter

On 5 September 2024, the European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) published a guidance document (the Guidance) with general principles and recommendations on the use of large language models (LLMs) in regulatory science and the regulation of medicinal products. The guiding principles for users are also summarised in a one-page factsheet.

While the Guidance is aimed at the European Union (EU) regulatory authorities, it is instructive for companies active in the Life Sciences sector. It provides useful insight into how regulators will approach the use of LLMs in their regulatory activities, the risks they have identified and how these risks can be mitigated.

The development of the guiding principles set out in the Guidance is part of the EMA's and HMA's multiannual AI workplan to 2028 and, according to the EMA, the principles will be subject to regular future updates.

Use of LLMs and related risks

LLMs are defined as "a category of generative AI or foundation models trained on large amounts of text data making them capable of generating natural language responses to an input or request (prompt)".

The Guidance divides LLMs into four categories based on how users interact with them and highlights that each type should be considered by the EU regulatory authorities when selecting a model, as the choice of model may affect flexibility, control, resource requirements and integration possibilities:

  1. third-party, open or closed source, externally hosted, available online
  2. third-party, externally hosted, part of enterprise solutions
  3. third-party, open source, internally hosted
  4. (re)trained internally

According to the Guidance, while LLMs possess powerful capabilities that can support various processes within the regulatory system for medicinal products, their use also poses several challenges and risks, including failures at seemingly trivial tasks and the return of irrelevant or inaccurate responses, known as hallucinations. This is particularly important when issues such as patient safety are at stake.

Additionally, the Guidance highlights that LLMs may not have been exposed to the information needed to answer a specific scientific or regulatory question, and that the validation of these technologies is challenging. Other aspects that need consideration relate to confidentiality, data protection and privacy, IP laws and ethics.

The Guidance therefore advises the relevant regulatory agencies on how to empower their staff to leverage LLMs only where they can be used safely and responsibly to support regulatory science and the regulation of medicinal products.

Ethical considerations

The Guidance refers to the requirements proposed by the EU High-Level Expert Group on AI that apply to all AI systems (i.e., accountability, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, and societal and environmental wellbeing). It also notes the sources of risks and harms arising from discrimination, information hazards, the dissemination of false or misleading information, malicious uses such as cyberattacks, unsafe practices in the design of an LLM application, and automation.

EMA and HMA highlight that compliance with data protection rules must be ensured during all stages of the life cycle of LLMs.

User principles

  1. Take appropriate measures to ensure safe input of data: to ensure lawful, responsible and safe use of LLMs, users must understand the capabilities and limitations of these models. The Guidance recommends that the staff of regulatory authorities actively educate themselves and apply critical thinking when deploying and using LLMs. This includes adapting prompts accordingly, drafting prompt text carefully, double-checking prompt input to avoid entering sensitive information such as personal data and IP-protected content, and taking care when copy-pasting into a prompt. Staff should also avoid automation bias and review LLM outputs, redraft output that is new, consider disclosing the use of an LLM, ask for sources and confirmation, and review and test any code produced by an LLM.
  2. Continuously learn how to use LLMs effectively: as LLMs are constantly evolving, it is recommended that users continuously educate themselves to improve efficiency and to reduce financial and environmental costs, and that they reach out to relevant networks for further training.
  3. Know who to consult when facing concerns and report issues: LLM users may face problems arising from the training data, information sharing or the content of the output. It is therefore critical to know who to contact within an organisation to address concerns. The Guidance recommends that staff should know who to consult regarding security and data protection and report incidents or "severely biased or erroneous outputs" to the appropriate function or team.

Organisational principles

  1. Define governance that supports safe and responsible use: the Guidance recommends that organisations consider defining governance on the use of LLMs, staff training and risk monitoring, and that they exercise care and implement safeguards where their own data is used to improve the performance of LLMs.
  2. Help users maximise value from LLMs: organisations should consider change management activities to help users maximise value, including training, an LLM support team and prompt/input pre-screening tools (a minimal illustration of such a pre-screening check follows this list).
  3. Collaborate and share experiences: this can be done through fora such as the European Specialised Expert Community (ESEC)'s AI Special Interest Area and the EU Network training centre.
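
By way of illustration only, a prompt/input pre-screening tool of the kind mentioned in principle 2 above could be as simple as a script that flags text resembling personal data before a prompt is sent to an externally hosted LLM. The minimal Python sketch below is a hypothetical example of our own; the patterns, names and behaviour are assumptions and are not drawn from the Guidance.

    import re

    # Hypothetical illustration only: the Guidance does not prescribe any
    # implementation. These are simple patterns that often indicate
    # personal data in free text.
    PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return warnings for content that may be unsafe to submit to an LLM."""
        warnings = []
        for label, pattern in PATTERNS.items():
            if pattern.search(prompt):
                warnings.append(
                    f"Prompt appears to contain a {label}; review before submitting."
                )
        return warnings

    # Example: the finding is flagged for human review rather than silently blocked.
    for warning in screen_prompt("Patient jane.doe@example.com reported dizziness."):
        print(warning)

Such a check would support, not replace, the human review the Guidance emphasises; staff remain responsible for what they submit.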

Next steps

The publication of the guiding principles demonstrates the EMA's focus on LLMs and, more generally, on the use of artificial intelligence in the Life Sciences sector.

The EMA also published a draft Reflection paper on the use of AI in the lifecycle of medicines on 19 July 2023 (see more details in our BioSlice post), although the final guidance has not yet been published.

As technology progresses, a consistent approach to LLMs based on a shared understanding of the opportunities and risks will be essential for the pharmaceutical industry. We will continue to monitor the EMA's initiatives and work on AI and will provide further updates as more information becomes available.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
