Expected Outcome:
This topic aims at supporting activities that enable or contribute to one or several expected impacts of destination “Ensuring equal access to innovative, sustainable, and high-quality healthcare”. To that end, proposals under this topic should aim to deliver results directed towards and contributing to all the following expected outcomes:
- Healthcare professionals, at all stages of healthcare provision, have access to user-centric, robust and trustworthy virtual assistant solutions based on Generative Artificial Intelligence (AI)[1] models and other AI tools to support them in the provision of safer, more efficient and personalised care.
- Healthcare professionals benefit from cross-country applicable methodologies that facilitate the acceptability, healthcare uptake and public trust of virtual assistant tools based on Generative AI models.
- Patients benefit from enhanced outcomes, more personalised care, and increased engagement with their healthcare professionals, leading to improved safety, quality of care, access to appropriate healthcare information and patient-doctor communication.
- Healthcare systems benefit from cost-effective improvements in patient outcomes, superior to the standard of care in terms of accuracy, safety and quality, and from cost savings enabled by advancements in highly accurate, transparent, traceable and explainable solutions.
Scope:
Healthcare professionals face significant challenges related to efficiency, patient safety and the provision of quality care with limited health system resources. The multimodality of health data, combined with available high-performance computing capabilities, has the potential to enable the effective and accurate use of trustworthy and ethical Generative AI-based solutions, augmented by other AI tools, to address these challenges. Generative AI may benefit patients, healthcare professionals and health systems.
This topic will contribute to advancing and generating research to better understand and improve Generative AI-based virtual assistant solutions and their applicability in healthcare settings, improving patient health outcomes, fostering personalised healthcare and supporting the resilience, sustainability and efficiency of healthcare systems. In addition, the topic also covers the understanding and mitigation of possible shortcomings (biases) and frameworks for monitoring and overseeing the use of these solutions.
Research actions under this topic should include all the following activities, ensuring multidisciplinary approaches and a broad representation of stakeholders in the consortia (e.g. industry, academia, healthcare professionals, patients):
- Develop virtual assistant solutions based on new or optimised trustworthy and ethical Generative AI models, augmented by other AI tools to support healthcare professionals. The models should leverage extensive and diverse multimodal health and research data, public knowledge, and reliable healthcare systems information relevant for healthcare settings. Examples can include electronic health records, medical imaging, genomics, proteomics, molecular data, laboratory results, patient information (including on safety), and/or unstructured health data (the applicants may choose any type of available large-scale data). The development and training of the models should take place in multinational consortia and federated governance approaches should be considered. The applicants should demonstrate how the project goes beyond combining existing data and generates new specific knowledge to improve clinical decision making.
- Demonstrate the added value and clinical utility of the virtual assistant solutions in at least two healthcare use cases in different medical fields addressing unmet needs, showing e.g. improved care management and efficiency, prediction of potential patient-specific therapeutic strategies and outcomes, etc. The applicants should provide evidence of a high level of technology maturity for the use cases and assess the relative effectiveness of the solutions compared with the standard of care, including why these solutions would be superior to other AI tools and would deliver better outcomes. They should actively engage healthcare professionals as end users, and other stakeholders such as patients and caregivers, in the development and testing of the solutions, ensuring that diverse perspectives and intersectional considerations are integrated throughout the process. Training and education activities for healthcare professionals should be organised.
- Develop a regulatory strategy/interaction plan with regulators (including in the area of Health Technology Assessment) for generating evidence, where relevant, in a timely manner. Consider also the potential for future regulatory impact of the results and sustainability aspects.
- Develop or adapt existing methodologies for continuous assessment of the developed solutions. The methodologies should demonstrate technical robustness, healthcare utility and trustworthiness of the Generative AI-based solutions, by adopting:
- Appropriate metrics for evaluating alignment with human values, ethical principles and the intended purposes of Generative AI models, as well as their performance (including technical robustness and clinical utility) and model intelligibility, with a view to ensuring AI trustworthiness[2].
- Appropriate solutions to identify and mitigate potential bias[3] of the models (e.g. representativeness of the data, bias of the trainer, bias of training and validation data, algorithmic discrimination and bias including gender bias etc.).
- Appropriate techniques to discover and demonstrate the explainability of model reasoning, increase users’ trust, and address the “black box” problem, thus further enhancing transparency, model explainability and alignment.
- Methods to systematically address and assess ELSI (Ethical, Legal and Societal Implications), including data privacy concerns and the risk of discrimination/bias (not limited to sex, gender, age, disability, race or ethnicity, religion, belief, minority and/or vulnerable groups). The implications of medical errors originating from AI-assisted decision-making, and the effects on potential legal liability for healthcare professionals, should be explored.
All proposals should demonstrate EU added value by focusing on the development and/or use of trustworthy Generative AI models developed in the EU and Associated countries, involving in the consortium EU industrial developers, including leading-edge startups when possible. An open-source approach is encouraged when technically and economically feasible. Successful proposals are encouraged to utilise the resources offered by the AI factories[4], when relevant and in accordance with the specific access terms and conditions.
The proposals should adhere to the FAIR[5] data principles and apply GDPR[6] compliant processes for personal data protection based on good practices of the European research infrastructures, where relevant. The proposals should promote the highest standards of transparency and openness of models, as much as possible going well beyond documentation and extending to aspects such as assumptions, code and FAIR data management.
Proposals are encouraged to exploit potential synergies with the projects funded under the topic HORIZON-CL4-2021-HUMAN-01-24, as well as with other projects funded under Horizon Europe and Digital Europe Programmes. When the use cases are relevant to diseases covered by specific Horizon Europe Partnerships or missions (e.g., European Partnership on Rare Diseases, European Partnership on transforming health and care systems, the Cancer Mission, etc.), the proposals should adopt the federated data-management and data access recommendations already developed. Moreover, the applicants are encouraged to leverage available and emerging data infrastructures (e.g., European Health Data Space[7], European Genomic Data Infrastructure[8], Cancer Image Europe[9], European Open Science Cloud[10], EBRAINS[11] etc.), whenever relevant. Adopting EOSC recommendations and services for high-quality software is also encouraged. The expansion of health data and/or existing or under development AI infrastructures is not in the scope of this topic.
When possible, the developed models should be trained with multimodal data in different EU languages, to ensure accessibility and inclusivity.
This topic requires the effective contribution of social sciences and humanities (SSH) disciplines and the involvement of SSH experts and institutions with relevant expertise, to produce meaningful and significant effects enhancing the societal impact of the related research activities. The active engagement of healthcare professionals as end users, patients, and their caregivers is central to achieving the targeted outcomes in the development and testing of the Generative AI virtual assistant solutions.
Proposals should consider the involvement of the European Commission's Joint Research Centre (JRC), based on its experience and the value it could bring in providing an effective interface between research activities and preliminary regulatory science, as well as strategies and frameworks fit for regulatory requirements. In that respect, the JRC will consider collaborating with any successful proposal; this collaboration, when relevant, should be established after the proposal’s approval.
All proposals selected for funding under this topic are strongly encouraged to collaborate, for example by participating in networking and joint activities, exchanging knowledge, and developing and adopting best practices, as appropriate. Therefore, proposals are expected to include a budget covering the costs of potential joint activities, without the prerequisite to detail concrete joint activities at this stage. The details of these joint activities will be defined during the grant agreement preparation phase.
Applicants envisaging to include clinical studies[13] should provide details of their clinical studies in the dedicated annex using the template provided in the submission system.
[1] Generative AI is a type of AI technology that can generate various forms of new content such as text, images, sounds, and even code, such as for programming or gene sequencing (https://ec.europa.eu/newsroom/dae/redirection/document/101621).
[2] Ethics Guidelines for Trustworthy AI, published by the European Commission’s High Level Expert Group on Artificial Intelligence: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
[3] Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum: https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en
[4] https://digital-strategy.ec.europa.eu/en/policies/ai-factories
[5] See definition of FAIR data in the introduction to this work programme part.
[6] General Data Protection Regulation: https://commission.europa.eu/law/law-topic/data-protection_en
[7] https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en
[8] https://gdi.onemilliongenomes.eu
[9] https://cancerimage.eu
[10] https://research-and-innovation.ec.europa.eu/strategy/strategy-2020-2024/our-digital-future/open-science/european-open-science-cloud-eosc_en
[11] https://www.ebrains.eu
[13] Please note that the definition of clinical studies (see introduction to this work programme part) is broad and it is recommended that you review it thoroughly before submitting your application.