This topic belongs to the call A sustainable future for Europe
Topic identifier: HORIZON-CL2-2024-TRANSFORMATIONS-01-06

Beyond the horizon: A human-friendly deployment of artificial intelligence and related technologies

Type of action: HORIZON Research and Innovation Actions
Number of stages: Single stage
Opening date: 04 October 2023
Closing date: 07 February 2024, 17:00
Budget: €10 000 000
Call: A sustainable future for Europe
Call identifier: HORIZON-CL2-2024-TRANSFORMATIONS-01
Description:


Projects should contribute to all of the following expected outcomes:

  • Understanding and awareness raising about successful existing deployments of AI and the impact they have on the European economy and society, providing a reality check of the capabilities and benefits, but also the limitations, of current AI solutions, and of how the latter are currently addressed.
  • On the basis of lessons from successful deployments, analysis of the implementation of the ethics principles for trustworthy AI.
  • Structurally enhanced capacities to foresee, evaluate and manage the future and longer-term opportunities and challenges associated with artificial intelligence and related technologies.
  • Well-founded and prioritised recommendations for European policy on R&I and in other key areas aimed at:
    • Ensuring that Europe is prepared to exploit these opportunities for the benefit of citizens and society, while facing the challenges raised by potential developments and deployments of artificial intelligence and related technologies, based on science and evidence as well as human rights and European values, and
    • Reinforcing Europe’s capacity to guide the development and deployment of these technologies in ways aligned to human rights and European values.


The history of “artificial intelligence” technologies (AI) is marked by great optimism and expectation, sometimes followed by disappointment. However, we have recently seen a sustained upsurge in interest and the successful uptake and application of AI in a variety of significant areas such as drug discovery, autonomous vehicles, social media, industrial robotics, and logistics, to name a few. We have witnessed significant successes in the development and deployment of machine learning, particularly for tasks normally associated with human perception[1]. We have also seen significant successes in symbolic and logic-driven AI for problems that require reasoning about constraints, automated reasoning, planning, etc.[2] AI has had significant impact in the arts and humanities, and AI-based methods and tools are becoming more widely used in the cultural arena.[3]

Nevertheless, today the collection of computer technologies commonly labelled artificial intelligence, along with related technologies in fields such as data science, neuroscience and biotechnology, already shows the potential to disrupt and impact the rights of individuals and the wellbeing of societal structures. For example, there have been many documented case studies where AI-based applications have exhibited undesired gender and racial bias[4]. AI systems have been (mis-)used to micro-target and influence voters in elections as well as in the creation and dissemination of disinformation[5], and otherwise impact human agency and autonomy. Many ethical issues arise in the development of AI systems, such as their use in medical devices, brain-computer interfaces, reasoning about human mental and emotional states, etc.[6]

Concerns are often raised that AI technologies may imply major societal disruptions, such as massive job displacement due to the increasing use of AI-driven automation and robotics, while research shows that AI can also help fill gaps in the workforce[7][8].

In 2018, the European Commission established the High Level Expert Group on Artificial Intelligence (HLEG-AI), which was tasked with developing a set of ethics guidelines for Europe that would help ensure that AI systems be human-centric and trustworthy. The importance of a human-centric approach to AI has been a cornerstone of EU policymaking in the field for several years and is the clearly articulated position of the EU. The European Commission published a pioneering draft AI Act in April 2021, the first legal framework on AI in Europe, which addresses the potential risks of using AI[9]. The Horizon Europe work programme under Cluster 4 is funding related research and innovation actions under the header ‘Leadership in AI based on trust’.

The common thread across all of these EU initiatives is the set of seven key requirements for trustworthy AI[10], as proposed by the HLEG-AI and adopted by the European Commission, together with the importance of protecting the fundamental rights of individuals[11].

Against this backdrop, before being faced with a ‘fait accompli’ in terms of a potentially undesirable influence of AI on European society and economy, and to make sure that the full beneficial potential of AI deployment is realised, we should anticipate and prepare for possible high-impact scenarios.

The proposal should cover all the following aspects:

  • Decisive contributions to developing a sound European capacity on the future and long-term human and societal implications of AI, building, as appropriate, on previous work of the HLEG-AI, ADRA[12], and the current development of the AI Act or other relevant European and national AI initiatives.
  • A solid scientific approach, providing an in-depth analysis of successful existing deployments of AI and the impact they have on the European economy and society. Such analysis should also significantly contribute to raising awareness of these deployments, providing a reality check of the capabilities and benefits, but also the limitations, of current AI solutions, and of how the latter are currently addressed.
  • Scenario-based analysis of future and long-term potential benefits to citizens and societies, as well as an analysis of related challenges and threats.
  • Based on this, proposals for the development and deployment of AI should ensure broad support and appropriate involvement of other relevant AI initiatives, taking into account guiding ethics principles and the current development of the AI Act.
  • Proposals need to take a multi-disciplinary and cross-sectoral approach, and engage with a wide set of stakeholders, including research organisations, enterprises, citizens[13], policymakers, public-private partnerships (in particular the AI, Data and Robotics Partnership), and other relevant EU projects and initiatives around AI.
  • European policy actions, notably in the area of research and innovation but not excluding other important policy areas, should be proposed in priority order. These actions should serve to strengthen European preparedness and resilience in the face of future developments within AI and related emerging technologies, and to guide the development and deployment of these technologies in a desirable direction.

Proposals should build on existing knowledge, activities and networks, such as the HLEG-AI and other initiatives funded by the European Union. Funded proposals should also take into account existing EU policy in the area, such as the development of the AI Act and the Excellence and trust in artificial intelligence initiative under A Europe fit for the digital age[14]. Furthermore, proposals should seek synergies with closely related actions, such as relevant R&I actions funded by Horizon Europe or Horizon 2020[15].






[6]See for example https://www.technologyreview.com/2018/04/30/143155/with-brain-scanning-hats-china-signals-it-has-no-interest-in-workers-privacy/


[8]The Global Health Care Worker Shortage: 10 Numbers to Note | Project HOPE




[12]AI, Data and Robotics Partnership (ai-data-robotics-partnership.eu)

[13]Of different age groups, incl. children and young people as well as elderly people.

[14]See further

[15]Such as the Networks of AI excellence centres funded under H2020 and Horizon Europe, the AI on Demand Platform as well as projects funded under Destination 6 (Leadership in AI based on trust) of Cluster 4 of the HE Work Programme.