During 2024, EMA published tools and guidelines to integrate artificial intelligence into pharmacovigilance processes. Here they are.
The European Medicines Agency (EMA) is taking significant steps towards integrating Artificial Intelligence (AI) into its regulatory and scientific activities. It has introduced a dedicated AI section on its official website and published “Guiding principles on the use of large language models in regulatory science and for medicines regulatory activities”, a document that provides guidelines on the use of large language models (LLMs).
The section on the EMA website dedicated to AI
EMA has created a section dedicated to AI on its official website, a point of reference for those working in the pharmaceutical and regulatory sector. This page, part of the Big Data program, provides an overview of the initiatives to integrate AI into European regulatory processes, with a focus on innovation, safety and transparency. Here is an overview of the main topics covered in the section.
The AI work plan: a framework for the future
The page presents the AI work plan for the period 2023-2028, developed by the Big Data Steering Group, which identifies four key areas of intervention:
- product guidance and support. Providing guidance on the use of AI throughout the lifecycle of medicines;
- AI tools and technologies. Developing frameworks for the implementation of AI-based tools, ensuring safety and reliability;
- collaboration and training. Strengthening the skills of operators through learning programmes;
- experimentation. Promoting the adoption of a structured approach to testing and integrating AI technologies.
These initiatives not only foster innovation, but also aim to ensure that the adoption of AI occurs in a responsible manner, with particular attention to risks and safety.
Reflections on AI and the lifecycle of medicines
Another important element of the EMA section on AI is the reflection paper “The use of artificial intelligence (AI) in the lifecycle of medicines”, which provides guidance for developers and pharmaceutical companies to use AI and machine learning safely and effectively during the different phases of a medicine’s lifecycle.
The main objective is to ensure that AI technologies can be used to improve quality, reduce time and optimise processes, without compromising patient safety or regulatory compliance.
Practical initiatives: the case of the Scientific Explorer
In March 2024, the EMA introduced the Scientific Explorer, an AI-based tool designed to make it easier for authorities to find scientific regulatory information. This tool is a concrete example of how the agency is using AI to improve the efficiency and accessibility of data.
Public consultation
The EMA page highlights a public consultation regarding an AI model used in the determination of disease activity in liver biopsies. This initiative demonstrates the EMA’s commitment to engaging industry experts and stakeholders to ensure transparency and collaboration in the adoption of AI technologies.
The use of Large Language Models
Also available is the “Guiding Principles for the Use of Large Language Models” document, which sets out guidance on the use of advanced language models in medicines regulatory activities.
Large Language Models (LLMs) are generative AI models, trained on large sets of textual data, that can generate natural language responses to specific inputs. In the regulatory context, LLMs find application in tasks such as automatic document processing and data analysis.
Guiding principles for the safe and effective use of LLMs
The EMA document outlines several principles for the safe and effective use of LLMs:
- data security. Ensure that data input into models is done in a secure manner, protecting the confidentiality and integrity of information;
- critical thinking. Apply a critical approach in evaluating the outputs generated by models, verifying their accuracy and relevance;
- continuous learning. Promote a constant updating of skills and knowledge related to the use of LLMs, to adapt to technological evolutions;
- governance. Establish clear governance to guide the responsible use of LLMs within regulatory authorities.
These principles aim to ensure that AI is used in an ethical, transparent and safe manner, while minimising the risks associated with its use.
Challenges and opportunities
EMA recognises that, although LLMs offer significant opportunities to improve the efficiency and effectiveness of regulatory activities, there are also challenges, such as the risk of “hallucinations”, i.e. the generation of plausible but inaccurate information. Therefore, it is crucial to implement control and validation measures to mitigate these risks.
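One such control measure could be an automated grounding check that flags generated statements with no apparent support in the source document. The sketch below is purely illustrative and is not taken from any EMA guidance: the function names, the word-overlap heuristic and the threshold are all assumptions, and a real validation pipeline would use far more robust methods.

```python
# Illustrative sketch (not an EMA-specified method): flag sentences in an
# LLM-generated summary whose content words barely overlap with the source
# document, treating them as candidate "hallucinations" for human review.
import re


def grounding_score(sentence: str, source: str) -> float:
    """Fraction of a sentence's content words (longer than 3 letters)
    that also appear somewhere in the source text."""
    words = {w for w in re.findall(r"[a-z]+", sentence.lower()) if len(w) > 3}
    source_words = set(re.findall(r"[a-z]+", source.lower()))
    if not words:
        return 1.0  # nothing to check; treat as grounded
    return len(words & source_words) / len(words)


def flag_unsupported(summary: str, source: str, threshold: float = 0.5) -> list:
    """Return the sentences of the summary whose grounding score
    falls below the (arbitrary, illustrative) threshold."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", summary) if s.strip()]
    return [s for s in sentences if grounding_score(s, source) < threshold]


source = "The trial enrolled 120 patients and reported mild adverse events."
summary = "The trial enrolled 120 patients. Seventeen deaths were recorded."
print(flag_unsupported(summary, source))
# → ['Seventeen deaths were recorded.']
```

A lexical check like this cannot catch subtle factual errors, which is why the document's emphasis on critical human review of model outputs remains central.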
Collaboration and training
The document highlights the importance of collaboration between different regulatory authorities and the need for continuous training for staff, in order to ensure the effective and responsible use of LLMs in regulatory activities.
A balance between innovation and safety
EMA is working to integrate AI into regulatory processes in a responsible way, balancing innovation and safety. With initiatives such as the new section on AI, the 2023-2028 work plan and the guiding principles on LLMs, the agency demonstrates its commitment to the conscious use of emerging technologies.
For pharmaceutical companies, these initiatives represent not only a regulatory challenge, but also an opportunity to improve data management and optimise processes. However, adopting AI requires a critical and well-planned approach, to maximise benefits while minimising risks.