This topic belongs to the call Indirectly Managed Action by the ECCC
Topic identifier: HORIZON-CL3-2025-02-CS-ECCC-01

Generative AI for Cybersecurity applications

Type of action: HORIZON Research and Innovation Actions
Opening date: 12 June 2025
Closing date 1: 12 November 2025 00:00
Budget: €40 000 000
Call: Indirectly Managed Action by the ECCC
Call identifier: HORIZON-CL3-2025-01-IM-01
Description:

Expected Outcome:

Projects will develop technologies, tools and processes that reinforce cybersecurity using AI technological components, in particular Generative AI, in line with relevant EU policy, legal and ethical requirements.

Proposals should address at least one of the following expected outcomes:

  • Developing, training and testing of Generative AI models for monitoring, detection, response and self-healing capabilities in digital processes and systems against cyberattacks, including adversarial AI attacks.
  • Development of Generative AI tools and technologies for continuous monitoring, compliance and automated remediation. These should consider legal aspects of EU and national regulation as well as ethical and privacy aspects.

Scope:

The use of Artificial Intelligence is becoming indispensable in applications involving massive volumes of data. Understanding all of its implications for cybersecurity requires deeper analysis and further research and innovation.

Generative AI presents both opportunities and challenges in the field of cybersecurity. This topic supports research on the new opportunities Generative AI brings to cybersecurity applications: developing, training and testing AI models to scale up detection of threats and vulnerabilities, shorten response times, cope with the large quantities of data involved, and automate processes and decision-making support. Examples include generating reports from threat intelligence data; suggesting and writing detection rules, threat hunts and queries for Security Information and Event Management (SIEM) systems; creating management, audit and compliance reports; and reverse engineering malware.
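To make the "writing detection rules for the SIEM" use case concrete, the minimal sketch below shows one step of such a pipeline: turning a structured threat-intelligence indicator into an instruction for a generative model. The indicator fields, prompt wording and function name are illustrative assumptions, not part of the call text, and the model call itself is deliberately out of scope.

```python
# Hypothetical sketch: assembling a prompt that asks a generative model to
# draft a SIEM detection rule from structured threat-intelligence fields.
# All field names and wording here are invented for illustration.

def build_detection_rule_prompt(indicator: dict) -> str:
    """Turn one threat-intelligence indicator into an instruction for a
    generative model (the model invocation itself is out of scope here)."""
    return (
        "Draft a SIEM detection rule (in a Sigma-like format) for the "
        "following indicator.\n"
        f"Type: {indicator['type']}\n"
        f"Value: {indicator['value']}\n"
        f"Observed behaviour: {indicator['behaviour']}\n"
        "Include a severity level and a short analyst-facing description."
    )

indicator = {
    "type": "domain",
    "value": "malicious.example",
    "behaviour": "beaconing every 60 s from the workstation subnet",
}
prompt = build_detection_rule_prompt(indicator)
print(prompt)
```

In a real project the generated rule would then be validated against the SIEM's query language and reviewed by an analyst before deployment.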

Proposals addressing expected outcome a)

(a) (i) Advanced threat and anomaly detection and analysis: Current cybersecurity tools may struggle to keep pace with the evolving tactics of cyber attackers. Generative AI models can be developed, trained and tested to analyse large volumes of data and accurately identify anomalies and deviations from normal patterns of behaviour, enabling more effective threat detection, analysis and response.
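The core idea of "deviations from normal patterns of behaviour" can be illustrated with a deliberately simple statistical baseline, sketched below. The synthetic event counts and the z-score rule are assumptions for illustration only; proposals under this outcome would train generative models rather than rely on such fixed statistics.

```python
# Minimal statistical baseline for the anomaly-detection idea above:
# flag time windows whose event counts deviate strongly from the mean.
# A z-score sketch, not the generative approach the call envisages.
from statistics import mean, stdev

def anomalous_windows(counts, threshold=2.0):
    """Return indices of windows whose z-score exceeds the threshold.
    (With a single outlier in n samples, the sample z-score is bounded
    by (n-1)/sqrt(n), so a modest threshold is used here.)"""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Hourly login-failure counts (synthetic): one obvious spike at index 5.
counts = [12, 9, 11, 10, 13, 240, 12, 11]
print(anomalous_windows(counts))  # → [5]
```

A generative model would replace the fixed threshold with a learned notion of "normal", able to cover multivariate and sequential behaviour that a per-window statistic cannot.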

Tools should also support cybersecurity professionals, who may struggle to detect and respond to threats posed by generative AI, particularly as these systems become more sophisticated and harder to distinguish from genuine human activity.

(a) (ii) Adaptive security measures: Cybersecurity tools often rely on static rules and signatures to detect threats, making them less effective against new and evolving attack methods. In addition, many cybersecurity tools still rely on manual intervention for threat response, which can be time-consuming and ineffective. Through the development, training, fine-tuning and testing of Generative AI models, these tools can be helped to adapt and respond to emerging threats in real time, improving the overall security posture.

(a) (iii) Enhanced authentication and access control: The use of AI technologies could improve the resilience of authentication and access control systems against unauthorised access and credential theft, making it more difficult for unauthorised users to gain access to sensitive information or systems.

Proposals addressing expected outcome b)

(b) (i) Development of tools powered by Generative AI that analyse and facilitate the application of national and EU regulation in digital systems, in particular the Artificial Intelligence Act, the Directive on measures for a high common level of cybersecurity across the Union (NIS2) and the Cyber Resilience Act.

(b) (ii) Adaptation to a dynamic environment. Companies, the public sector and organisations face an ever-changing environment, which makes keeping up with compliance with cybersecurity rules challenging. On the one hand, a variety of rules applicable at sectoral, national or European level must be considered; on the other, change management and updates to ICT systems in organisations are frequent. Addressing both facets with tools powered by Generative AI offers the potential for a compliance continuum within organisations, which is otherwise limited in time when driven by human intervention alone.
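The "compliance continuum" idea can be sketched as a periodic, automated comparison of current system settings against a policy baseline. The baseline keys and the snapshot below are invented for illustration; in the call's vision, a generative model would additionally interpret the applicable regulation and propose remediation, not just report gaps.

```python
# Illustrative sketch of continuous compliance checking: compare a system
# snapshot against a small policy baseline and report deviations.
# Both the baseline and the snapshot are hypothetical examples.

REQUIRED = {                      # hypothetical baseline derived from policy
    "tls_min_version": "1.2",
    "mfa_enabled": True,
    "log_retention_days": 90,
}

def compliance_gaps(current: dict) -> list:
    """Return human-readable gaps between current settings and the baseline."""
    gaps = []
    for key, expected in REQUIRED.items():
        actual = current.get(key)
        if actual != expected:
            gaps.append(f"{key}: expected {expected!r}, found {actual!r}")
    return gaps

snapshot = {"tls_min_version": "1.2", "mfa_enabled": False}
for gap in compliance_gaps(snapshot):
    print(gap)
```

Running such a check on every configuration change, rather than at audit time, is what turns point-in-time compliance into a continuum.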

All proposals are expected to respect Trustworthy and Responsible AI principles[1] and data privacy.

All proposals should demonstrate EU added value by fostering the development of EU technology, the use of open-source technologies where technically and economically feasible, and the exploitation of available EU data (Data Spaces, EOSC, federated data, etc.).

Proposals should define key performance indicators (KPIs), with baseline targets, to measure progress and to demonstrate how the proposed work will significantly advance the state of the art. All technologies and tools developed should be appropriately documented to support take-up and replicability. Participation of SMEs is encouraged.

Proposals are expected to pay special attention to the Intellectual Property dimension of the results. The usability of the outcomes and results after the project ends will be closely assessed.

[1] https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en

https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence