Expected Impact:
Action launched by the ECCC to incorporate ‘expected impact’ language set out in the ‘Destination – Increased Cybersecurity’ section of this work programme part.
Destination - Increased Cybersecurity
The strategic plan 2025-2027 identifies the following impact: "Increased cybersecurity and a more secure online environment by developing and using effectively EU and Member States’ capabilities in digital technologies supporting protection of data and networks aspiring to technological sovereignty in this field, while respecting privacy and other fundamental rights; this should contribute to secure services, processes and products, as well as to robust digital infrastructures capable to resist and counter cyber-attacks and hybrid threats".
Under this Work Programme, the Commission intends to conclude a contribution agreement entrusting the European Cybersecurity Competence Centre (ECCC) with the implementation of call topics related to Increased Cybersecurity. Please refer to "Indirectly managed action by the ECCC" in the section "Other Actions" of this Work Programme part – including the Appendix providing the call specifications for information purposes. Those specifications incorporate ‘expected impacts’ set out below.
Expected impacts:
- Support the EU’s technological capabilities by investing in cybersecurity research and innovation to further strengthen its leadership, strategic autonomy, digital sovereignty and resilience;
- Help protect the EU's infrastructures and improve its ability to prevent, protect against, respond to, resist, mitigate, absorb, accommodate and recover from cyber and hybrid incidents, especially given the current context of geopolitical change;
- Support European competitiveness in cybersecurity and European strategic autonomy, by protecting EU products and digital supply chains, as well as critical EU services and infrastructures (both physical and digital) to ensure their robustness and continuity in the face of severe disruptions;
- Encourage the development of the European Cybersecurity Competence Community;
- Particular attention will be given to SMEs, which play a crucial role in the cybersecurity ecosystem and in the overall competitiveness of the EU digital single market, by promoting security and privacy ‘by design’ in existing and emerging technologies.
Expected Outcome:
Projects will develop technologies, tools and processes that reinforce cybersecurity using AI technological components, in particular Generative AI, in line with relevant EU policy, legal and ethical requirements.
Proposals should address at least one of the following expected outcomes:
- (a) Developing, training and testing of Generative AI models for monitoring, detection, response and self-healing capabilities in digital processes and systems against cyberattacks, including adversarial AI attacks.
- (b) Development of Generative AI tools and technologies for continuous monitoring, compliance and automated remediation. These should take into account legal aspects of EU and national regulation, as well as ethical and privacy aspects.
Scope:
The use of Artificial Intelligence is becoming indispensable in applications involving massive volumes of data. Understanding all of its implications for cybersecurity requires deeper analysis and further research and innovation.
Generative AI presents both opportunities and challenges in the field of cybersecurity. This topic supports research on the new opportunities that Generative AI brings to cybersecurity applications: developing, training and testing AI models to scale up the detection of threats and vulnerabilities, enhance response times, cope with the large quantities of data involved, and automate processes and decision-making support. Examples include generating reports from threat intelligence data; suggesting and writing detection rules, threat hunts and queries for Security Information and Event Management (SIEM) systems; creating management, audit and compliance reports; and reverse engineering malware.
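To make the rule-generation use case above concrete, the following is a minimal, hedged sketch of how threat-intelligence indicators might be turned into a draft SIEM detection rule by a generative model. The model itself is stubbed with a hypothetical `fake_model` callable (a real deployment would inject an LLM client here), and all function names are illustrative, not from any specific product.

```python
# Illustrative sketch: turning threat-intelligence indicators of compromise
# (IoCs) into a prompt for a generative model that drafts a SIEM detection
# rule. The model is injected as a callable so it can be stubbed for testing.

def build_prompt(indicators):
    """Assemble a rule-drafting prompt from (kind, value) IoC pairs."""
    lines = "\n".join(f"- {kind}: {value}" for kind, value in indicators)
    return (
        "Write a SIEM detection rule that raises an alert when any of the\n"
        "following indicators of compromise is observed:\n" + lines
    )

def draft_rule(indicators, model):
    """Ask the injected generative model for a draft rule, then sanity-check it."""
    rule = model(build_prompt(indicators))
    # A human analyst must still review the draft; here we only verify that
    # every indicator value actually appears in the generated rule text.
    missing = [value for _, value in indicators if value not in rule]
    if missing:
        raise ValueError(f"draft rule omits indicators: {missing}")
    return rule

# Stand-in for a real model: echoes the indicators into a trivial rule body.
def fake_model(prompt):
    iocs = [line[2:] for line in prompt.splitlines() if line.startswith("- ")]
    return "alert when event matches any of: " + "; ".join(iocs)

iocs = [("ip", "203.0.113.7"), ("domain", "malware.example")]
print(draft_rule(iocs, fake_model))
```

The sanity check illustrates a broader point raised by this topic: generated rules and queries are decision-making *support*, and automated validation plus analyst review remain part of the loop.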
Proposals addressing expected outcome a)
(a) (i) Advanced threat and anomaly detection and analysis: Current cybersecurity tools may struggle to keep pace with the evolving tactics of cyber attackers. Generative AI models, once developed, trained and tested, can analyse large volumes of data and accurately identify anomalies and deviations from normal patterns of behaviour, enabling more effective threat detection, analysis and response.
Tools should also support cybersecurity professionals, who may struggle to detect and respond to threats posed by generative AI, particularly as these systems become more sophisticated and harder to distinguish from genuine human activity.
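The detection idea in (a)(i) can be sketched in miniature: fit a generative model of "normal" behaviour and flag observations that the model considers highly unlikely. The sketch below uses the simplest possible generative model, a one-dimensional Gaussian over login attempts per minute; a real project would use far richer models over event streams. All numbers and names here are illustrative assumptions.

```python
# Minimal sketch of generative-model-based anomaly detection: fit a Gaussian
# to a baseline of "normal" event counts, then flag new observations whose
# log-likelihood under that model falls below a threshold.
import math
import statistics

def fit_baseline(samples):
    """Estimate mean and standard deviation of normal behaviour."""
    return statistics.fmean(samples), statistics.stdev(samples)

def log_likelihood(x, mu, sigma):
    """Log-density of x under the fitted Gaussian baseline."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def is_anomalous(x, mu, sigma, threshold=-10.0):
    """Flag observations the baseline model considers highly unlikely."""
    return log_likelihood(x, mu, sigma) < threshold

# Baseline: login attempts per minute observed during normal operation.
normal = [48, 52, 50, 47, 53, 51, 49, 50]
mu, sigma = fit_baseline(normal)
print(is_anomalous(51, mu, sigma))   # typical value → False
print(is_anomalous(400, mu, sigma))  # burst consistent with brute force → True
```

The same pattern (model the normal distribution of behaviour, score deviations) generalises from this toy Gaussian to the deep generative models the call envisages.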
(a) (ii) Adaptive security measures: Cybersecurity tools often rely on static rules and signatures to detect threats, making them less effective against new and evolving attack methods. In addition, many cybersecurity tools still rely on manual intervention for threat response, which can be time-consuming and ineffective. Through the development, training, fine-tuning and testing of Generative AI models, these tools can be enabled to adapt and respond to emerging threats in real time, improving the overall security posture.
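The contrast drawn in (a)(ii) between static rules and adaptive ones can be illustrated with a deliberately simple mechanism: instead of a fixed alert threshold, a detector maintains an exponentially weighted moving average (EWMA) of recent activity and alerts relative to that learned baseline. This is only a sketch of adaptivity, not a Generative AI model; the class name and parameters are hypothetical.

```python
# Hedged sketch of an adaptive security measure: an EWMA baseline of failed
# logins per minute replaces a static threshold, so the detector follows
# normal behaviour as it drifts while still flagging sudden bursts.
class AdaptiveDetector:
    def __init__(self, alpha=0.2, factor=3.0, initial=10.0):
        self.alpha = alpha      # smoothing weight given to new observations
        self.factor = factor    # alert when rate exceeds factor * baseline
        self.baseline = initial

    def observe(self, rate):
        """Return True if `rate` is alarming, then fold it into the baseline."""
        alert = rate > self.factor * self.baseline
        if not alert:  # only learn from traffic judged normal
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * rate
        return alert

det = AdaptiveDetector()
print([det.observe(r) for r in [12, 11, 13, 90, 14]])
# → [False, False, False, True, False]
```

Excluding alerting observations from the baseline update is the key design choice: it prevents an attacker from slowly "teaching" the detector that attack-level traffic is normal.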
(a) (iii) Enhanced authentication and access control: The use of AI technologies could improve the resilience of authentication and access control systems to unauthorized access and credential theft, making it more difficult for unauthorized users to gain access to sensitive information or systems.
Proposals addressing expected outcome b)
(b) (i) Development of tools powered by Generative AI that analyse and facilitate the application of national and EU regulation in digital systems, in particular the Artificial Intelligence Act, the Directive on measures for a high common level of cybersecurity across the Union (NIS2) and the Cyber Resilience Act.
(b) (ii) Adaptation to a dynamic environment: Companies, the public sector and other organisations face an ever-changing environment, which makes keeping up with compliance with cybersecurity rules challenging. On the one hand, a variety of rules applicable at sectoral, national or European level must be considered; on the other, change management and updates to ICT systems within organisations are frequent. Addressing both facets with tools powered by Generative AI brings the potential for continuous compliance within organisations, which would otherwise be limited in time when driven by human intervention alone.
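The "compliance continuum" described in (b)(ii) amounts to re-evaluating a machine-readable rule set against the current system configuration whenever either changes. The sketch below shows that core loop in its simplest form; the rule names are hypothetical hygiene checks, not requirements taken from NIS2 or any other regulation, and a Generative AI layer would sit on top to translate regulation into such checks and to suggest remediations.

```python
# Illustrative sketch of continuous compliance monitoring: a dictionary of
# named rules (predicates over the configuration) is re-evaluated against the
# current system configuration, and violated rules are reported for automated
# or human remediation. Rule names here are hypothetical examples.
def check_compliance(config, rules):
    """Return the names of rules the configuration currently violates."""
    return [name for name, predicate in rules.items() if not predicate(config)]

rules = {
    "mfa_enabled": lambda c: c.get("mfa") is True,
    "logs_retained_90d": lambda c: c.get("log_retention_days", 0) >= 90,
    "tls_modern": lambda c: c.get("tls_min_version", "") >= "1.2",
}

config = {"mfa": True, "log_retention_days": 30, "tls_min_version": "1.3"}
print(check_compliance(config, rules))  # → ['logs_retained_90d']
```

Because both the rule set and the configuration are data, either side can change independently (new sectoral rules, an ICT system update) and the check simply runs again, which is the "continuum" the call text contrasts with periodic human-driven audits.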
All proposals are expected to respect Trustworthy and Responsible AI principles[1] and data privacy.
All proposals should demonstrate EU added value by fostering the development of EU technology, the use of open-source technologies where technically and economically feasible, and the exploitation of available EU data (Data Spaces, EOSC, federated data, etc.).
Proposals should define key performance indicators (KPIs) with baseline targets to measure progress and to demonstrate how the proposed work will bring significant advancement to the state of the art. All technologies and tools developed should be appropriately documented to support take-up and replicability. Participation of SMEs is encouraged.
Proposals are expected to pay special attention to the Intellectual Property dimension of the results. The usability of the outcomes and results once the project is finished will be closely assessed.
[1] https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence