This topic belongs to the call Increased Cybersecurity 2023
Topic identifier: HORIZON-CL3-2023-CS-01-03

Security of robust AI systems

Type of action: HORIZON Research and Innovation Actions
Number of stages: Single stage
Opening date: 29 June 2023
Closing date: 23 November 2023 17:00
Budget: €15 000 000
Call: Increased Cybersecurity 2023
Call identifier: HORIZON-CL3-2023-CS-01
Description:

Expected Outcome:

Projects’ results are expected to contribute to some or all of the following outcomes:

  • Security-by-design concept and resilience to adversarial attacks;
  • Inclusion of context awareness in machine learning to boost resilience.

Scope:

Proposals received under this topic will address the security of AI systems, in line with the following considerations. The availability of very large amounts of data, together with advances in computing capacity, has allowed the development of powerful Artificial Intelligence applications (in particular Machine Learning and Deep Learning). At the same time, concerns have been raised over the security and robustness of AI algorithms (including AI at the edge), including the risks of adversarial machine learning and data poisoning. It is therefore important to promote security-compliant AI algorithms, which could lead to certification schemes in the future.
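
As a purely illustrative sketch of the adversarial machine learning risk mentioned above (not part of the official call text): the snippet below shows a minimal FGSM-style perturbation against a generic PyTorch image classifier. The names `model`, `image`, and `label` are hypothetical objects assumed to be supplied by the reader; the value of `epsilon` is an arbitrary example.

```python
# Illustrative sketch only: FGSM-style adversarial perturbation of an input
# image for a generic PyTorch classifier. `model`, `image` (batched tensor in
# [0, 1]) and `label` are hypothetical inputs provided by the caller.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take a single signed-gradient step bounded by epsilon, then re-clip
    # to the valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```

Defences against this kind of perturbation (and against training-time data poisoning) are examples of the security-by-design and resilience properties targeted by the expected outcomes above.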

Proposals should demonstrate awareness of the EU approach to Artificial Intelligence[1], such as the proposed Artificial Intelligence Act.

The identification and analysis of potential regulatory aspects and barriers for the developed technologies/solutions is encouraged, where relevant.

[1] A European approach to artificial intelligence: https://digital-strategy.ec.europa.eu/en/policies/european-approach-art…