This topic belongs to the Innovative Health Initiative JU Call 11
Topic identifier: HORIZON-JU-IHI-2025-11-03-two-stage

AI-Powered Signal Detection in Pharmacovigilance

Type of action: HORIZON JU Research and Innovation Actions
Opening date: 17 June 2025
Closing date 1: 09 October 2025 02:00
Closing date 2: 29 April 2026 02:00
Budget: not communicated
Call: Innovative Health Initiative JU Call 11
Call identifier: HORIZON-JU-IHI-2025-11-two-stage
Description:

Expected Impact:

The action under this topic is expected to achieve the following impacts:

  • enhanced drug safety by improving the speed and accuracy of identifying adverse drug reactions (signal detection);
  • proactive risk management by improving risk assessment and prediction, scalability in monitoring, and fostering collaboration among stakeholders;
  • improved patient safety through an earlier and more effective risk management plan, risk communication, and risk mitigation;
  • faster and more informed decision-making through AI-driven insights;
  • increased efficiency through rapid processing of vast amounts of data at a much faster rate compared to traditional methods;
  • streamlined processing by automating routine pharmacovigilance tasks, thereby reducing the manual workload for healthcare professionals and the operational costs associated with these activities;
  • support for future policies and the shaping of regulations through evidence generated on the use of AI in signal detection and pharmacovigilance to improve patient safety;
  • increased consistency in approaches used by industry, academia and regulators.

The action will also support the EU political priority to boost European competitiveness and contribute to a number of European policies and initiatives, including European policies and regulations on AI for signal detection, the Regulation on the European Health Data Space (EHDS) [1] (through recommendations on a data space for pharmacovigilance activities), the EU Artificial Intelligence Act [2], and the European Health Emergency Preparedness and Response Authority (HERA) (through earlier risk communication and mitigation).

[1] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202500327

[2] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

Expected Outcome:

Industry, regulators, researchers and other stakeholders have access to evidence-based and practical guidance, with aligned perspectives of public and private stakeholders, on the use of artificial intelligence (AI) for signal detection and other pharmacovigilance (PV) applications to ensure patient safety.

Patients and citizens will benefit from earlier and more accurate signal detection, which will lead to earlier risk communication and more effective measures to manage the risks.

More specifically, the action under this topic must contribute to all of the following outcomes (which can be applied to various therapeutic areas irrespective of the size and composition of the safety database, and to products under development as well as those in the post-marketing setting):

  • AI-powered algorithms and methods for faster and more accurate signal detection;
  • a comprehensive list of data sources where AI methods could be used for improved signal detection, including a set of recommendations, along with principles to be followed to support a suitable common data model for simultaneous analyses of a wide range of different data sources (including clinical trials and post-marketing surveillance data) for the same purpose;
  • AI-powered algorithms and methods for highly accurate risk prediction to help identify potential risks in the future before they escalate into significant public health issues and enable proactive measures to mitigate risks;
  • recommendations, including practical considerations for implementing AI-powered signal detection and risk prediction systems in real-world scenarios, to enable effective and trusted use of AI;
  • tools and templates for the practical implementation of AI-powered signal detection and risk prediction by public and private stakeholders;
  • training and user guides and other educational materials on the implementation of the recommendations and the use of AI.

Central to the delivery of these outcomes are transparency, trustworthiness, and adherence to the ethical and legal principles of the use of patient-level data and any proprietary information.

Scope:

Spontaneous reporting systems (SRSs) have been essential for signal detection in pharmacovigilance but suffer from low accuracy and delays, impacting patient safety. More recently, electronic health records (EHRs) have also been used for signal detection, but their performance needs to be improved [3]. A safety signal is information on a new or known adverse event that may be caused by a medicine and requires further investigation. Signal detection is the identification of potential exposure-outcome relationships that warrant further consideration.
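
For context (this illustration is not part of the call text): traditional SRS-based signal detection typically relies on disproportionality statistics. The minimal sketch below computes a reporting odds ratio (ROR) from a hypothetical 2x2 contingency table of spontaneous reports; the counts and the screening rule are illustrative assumptions, not requirements of this topic.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio (ROR) with a 95% confidence interval.

    a: reports with the drug of interest and the event of interest
    b: reports with the drug of interest and other events
    c: reports with other drugs and the event of interest
    d: reports with other drugs and other events
    """
    ror = (a * d) / (b * c)
    se_log_ror = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se_log_ror)
    upper = math.exp(math.log(ror) + 1.96 * se_log_ror)
    return ror, lower, upper

# Illustrative counts only; a commonly used screening rule flags a candidate
# signal when the lower bound of the 95% CI exceeds 1 and there are >= 3 cases.
a = 12
ror, lower, upper = reporting_odds_ratio(a=a, b=4500, c=180, d=250000)
print(f"ROR = {ror:.1f}, 95% CI [{lower:.1f}, {upper:.1f}]")
if lower > 1 and a >= 3:
    print("Disproportionality signal flagged for further assessment")
```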

AI offers a promising solution by improving the efficiency, accuracy, and timeliness of signal detection using diverse and untapped data sources, allowing for enhanced and timely benefit-risk profile evaluation. Recent regulatory developments include the FDA's January 2025 guidance on AI for decision-making (FDA Guidance AI), which provides recommendations for using AI in regulatory decision-making about drug safety and effectiveness. Additionally, the EMA's September 2024 reflection paper (EMA Reflection paper on AI) discusses AI's role throughout the lifecycle of medicinal products, from drug discovery to post-authorisation.

Advances in digital technology and computer science, such as generative AI, machine learning, and predictive analytics, have the potential to enable faster and more accurate analysis of both traditional and emerging data sources, which will improve patient safety, the provision of healthcare, and public health. There are different PV areas where AI could potentially be applied, including individual case safety report (ICSR) management, periodic reports, signal detection, and risk management. The scope of this topic focuses on the use of AI for signal detection and risk prediction. It also covers opportunities that may not be 'signal detection' per se, but rather augmentation of and support beyond signal detection, for instance through the expanded use of data and AI-powered methods, including the characterisation of cases that can provide context for interpreting an exposure-outcome relationship.

The use of AI for ICSR management and processing, as well as for periodic reports, is out of the scope of this topic.

To fulfil this aim, the action funded under this topic should:

1. Evaluate, select, optimise and test AI algorithms using disparate data sources for signal detection. This implies:

  • carrying out a review of existing literature, including results from previous initiatives and practical applications. This will help to understand the strengths and limitations of different approaches and identify a collection of systems, AI methods, and tools that have been tested on various data sources;
  • selecting the most effective algorithms for signal detection based on this review;
  • pilot testing the algorithms to evaluate their performance using a series of use cases against different business scenarios from different stakeholders’ perspectives; performance metrics include accuracy, reliability/repeatability, and trustworthiness (a minimal evaluation sketch is given after this list). The criteria for the use-case studies will be developed at an early stage of the project, once promising algorithms and tools have been identified;
  • optimising AI algorithms to perform signal detection at the level of a medical concept or syndrome, with emphasis on transparency requirements, including model interpretability, data provenance, and traceability of AI decision-making processes.
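
As an illustration of the kind of pilot evaluation mentioned above, the following sketch scores an algorithm's flagged drug-event pairs against a labelled reference set; the pairs, labels and metrics are assumptions for illustration and do not prescribe how the use cases should be designed.

```python
def evaluate_signal_detector(flagged, reference_positive, reference_negative):
    """Score one signal detection run against a labelled reference set.

    flagged: set of (drug, event) pairs the algorithm raised as signals
    reference_positive: pairs with an established association (ground truth)
    reference_negative: pairs believed not to be associated
    """
    tp = len(flagged & reference_positive)
    fp = len(flagged & reference_negative)
    fn = len(reference_positive - flagged)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical drug-event pairs for illustration only.
flagged = {("drug_a", "hepatotoxicity"), ("drug_b", "rash")}
positives = {("drug_a", "hepatotoxicity"), ("drug_c", "qt_prolongation")}
negatives = {("drug_b", "rash")}
print(evaluate_signal_detector(flagged, positives, negatives))
```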

2. Evaluate diverse data sources to be considered within a cohesive pharmacovigilance network for the purpose of signal detection. This implies:

  • identifying data sources and reference datasets needed to pilot test the algorithms. This will include EHRs (medical records, claims, registries) as one of the main data sources in this project, and other data sources such as spontaneous reporting systems (EudraVigilance, the FDA Adverse Event Reporting System (FAERS) and WHO VigiBase), social media and genomics;
  • evaluating these data sources, such as electronic health records and social media platforms, addressing their overall quality, fitness for purpose, current limitations and future opportunities. This includes evaluating them individually or simultaneously to ensure a holistic view of drug safety and to enhance the analysis and monitoring of adverse drug reactions for a more thorough understanding of drug safety;
  • developing a set of recommendations that could be utilised for simultaneous analyses of different data sources, along with the principles to be followed to support a common data model for evaluating different data sources for the same purpose (a minimal common-data-model sketch follows this list).
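
A minimal sketch of what such a common data model could look like is given below; the field names, codings and mapping function are illustrative assumptions rather than a proposed standard (established models such as the OMOP CDM would be obvious candidates to build on).

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SafetyRecord:
    """Illustrative harmonised record for exposure-outcome data drawn from
    heterogeneous sources (spontaneous reports, EHRs, claims, registries)."""
    source: str                      # e.g. "EudraVigilance", "EHR", "claims"
    patient_id: str                  # pseudonymised identifier within the source
    drug_code: str                   # harmonised drug coding, e.g. an ATC code
    event_code: str                  # harmonised event coding, e.g. a MedDRA PT
    exposure_start: date
    event_onset: Optional[date] = None
    serious: Optional[bool] = None

def from_spontaneous_report(report: dict) -> SafetyRecord:
    """Map one hypothetical spontaneous-report payload into the common model."""
    return SafetyRecord(
        source="SRS",
        patient_id=report["case_id"],
        drug_code=report["suspect_drug_atc"],
        event_code=report["reaction_meddra_pt"],
        exposure_start=date.fromisoformat(report["drug_start_date"]),
        event_onset=date.fromisoformat(report["onset_date"]),
        serious=report.get("serious"),
    )
```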

3. Evaluate and develop predictive models to identify risks in the future (risk prediction).

  • based on the results from signal detection, develop predictive models using different data sources that may help identify potential risks in the future before they escalate into significant public health issues. These models would use historical data and advanced analytics to forecast potential risks, potentially enabling proactive mitigation measures (an illustrative sketch follows this item).
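
A minimal sketch of such a predictive model is shown below, assuming scikit-learn and synthetic historical features; the features, labels and numbers are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row summarises one drug-event pair from historical surveillance data:
# [report growth rate, disproportionality score, share of serious cases, n sources]
X = rng.random((500, 4))
# Synthetic label: whether the pair later became a confirmed risk.
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 500) > 1.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, risk_scores), 3))
```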

4. Develop a recommendations document for implementing AI-powered signal detection and risk prediction systems in real-world scenarios

  • using the results from the pilot tests, design a recommendations document which will serve as a reference for implementing AI-powered signal detection and risk prediction systems in real-world scenarios. The recommendations will include a set of principles and practical considerations to enable effective, explainable, and trusted use of AI and will include ethical, legal, and governance considerations for the sharing and use of real-world data and AI-algorithms;
  • engage with the European Medicines Agency (EMA) to seek endorsement of the recommendations document via the “Qualification Procedure”.

5. Develop recommendations for human-in-the-loop (HITL) and human-on-the-loop (HOTL) AI in pharmacovigilance signal detection for optimal performance and oversight.
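
To make the HITL/HOTL distinction concrete, the sketch below shows one possible triage pattern: low-confidence candidates require human assessment before any action (human in the loop), while high-confidence candidates proceed automatically with a sampled retrospective audit (human on the loop). The thresholds and scoring are assumptions, not recommendations of the call.

```python
import random
from dataclasses import dataclass

@dataclass
class CandidateSignal:
    drug: str
    event: str
    ai_score: float  # model confidence that this is a true signal

def triage(signals, review_threshold=0.8, audit_rate=0.1):
    """Illustrative HITL/HOTL triage; thresholds are arbitrary assumptions."""
    hitl_queue, auto_processed, hotl_audit_sample = [], [], []
    for s in signals:
        if s.ai_score < review_threshold:
            # Human in the loop: mandatory expert assessment before action.
            hitl_queue.append(s)
        else:
            # Human on the loop: automatic processing with sampled oversight.
            auto_processed.append(s)
            if random.random() < audit_rate:
                hotl_audit_sample.append(s)
    return hitl_queue, auto_processed, hotl_audit_sample

candidates = [CandidateSignal("drug_a", "hepatotoxicity", 0.93),
              CandidateSignal("drug_b", "rash", 0.41)]
queue, auto, audit = triage(candidates)
print(len(queue), "for mandatory review;", len(auto), "auto-processed;", len(audit), "audited")
```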

6. Develop templates and tools for practical implementation of AI-powered signal detection and risk prediction models by different stakeholders, including their integration into existing PV systems.
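
As one possible shape for such a template (purely illustrative; every field name and default below is an assumption), a deployment configuration for integrating an AI signal detection model into an existing PV system might capture at least the following:

```python
from dataclasses import dataclass, field

@dataclass
class SignalDetectionDeploymentConfig:
    """Hypothetical configuration template for integrating an AI-powered
    signal detection model into an existing pharmacovigilance system."""
    model_name: str
    model_version: str
    data_sources: list = field(default_factory=lambda: ["SRS", "EHR"])
    detection_frequency: str = "weekly"   # how often batch detection runs
    review_threshold: float = 0.8         # scores below this go to human review
    audit_logging: bool = True            # traceability of AI-assisted decisions
    responsible_contact: str = ""         # person accountable for oversight

config = SignalDetectionDeploymentConfig(
    model_name="example_signal_model",
    model_version="0.1.0",
    responsible_contact="pv-oversight@example.org",
)
print(config)
```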

7. Develop training plans and education materials to disseminate the recommendations widely to the stakeholder community and develop a strategy for uptake.

For all these activities, applicants are expected to adhere to ethical and legal principles. For instance, for trustworthy AI, human oversight and verification will follow frameworks such as the Assessment List for Trustworthy Artificial Intelligence (ALTAI).

Applicants are expected to develop a regulatory strategy and interaction plan for evidence generation to support the regulatory qualification of the methodology as relevant and engage with regulators in a timely manner (e.g. national competent authorities, EMA Innovation Task Force, qualification advice).

Applicants are also expected to foster proactive and early involvement of regional healthcare systems and health authorities in all stages of the discussion and decision-making processes.

[3] Signal Identification Methods in the Sentinel System