Expected Outcome:
Project results are expected to contribute to all of the following expected outcomes:
- CCAM solutions - in hardware and software - with reduced power consumption and latency and improved speed and accuracy, as domain-specific adaptations of sector-agnostic advancements in e.g. AI and/or cloud-edge-IoT technologies;
- Enhanced levels of safety, (cyber) security, privacy and ethical standards of data-driven CCAM functionalities by using e.g. edge-AI applications for CCAM;
- Approaches for well-balanced distributions of AI calculations across edge-based, cloud-enabled and vehicle-central solutions for expanding use cases (e.g. collective perception, decision-making and actuation) in connected, cooperative and automated driving applications, balancing speed and latency, energy use, costs, and data sharing, storage and availability needs (an illustrative trade-off sketch follows this list);
- Validated approaches incorporating edge-AI solutions into the action chain from perception and decision-making up to actuation of advanced CCAM functionalities - both on-board and on the infrastructure side - for systemic applications such as traffic management and remote control, as well as tools and approaches for training of such functionalities, which require optimised and verified edge-AI models.
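The following sketch is not part of the call text: it only illustrates, under stated assumptions, what a "well-balanced distribution of AI calculations" could mean in practice. The placement options, weights and numbers (latency_ms, energy_mj, cost_eur, data_out_mb) are hypothetical placeholders for the trade-offs listed above, not values taken from this topic.

```python
# Illustrative only: a naive weighted-cost heuristic for deciding where a CCAM
# workload (e.g. a collective-perception inference task) could run. All names,
# numbers and weights are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class PlacementOption:
    name: str            # "vehicle", "edge" or "cloud"
    latency_ms: float    # expected end-to-end latency
    energy_mj: float     # expected on-board energy use per inference (millijoules)
    cost_eur: float      # marginal cost per 1000 inferences
    data_out_mb: float   # data that must leave the vehicle

def placement_score(opt: PlacementOption,
                    w_latency=0.4, w_energy=0.2, w_cost=0.2, w_data=0.2) -> float:
    """Lower is better: one scalar combining the trade-offs named in the outcome above."""
    return (w_latency * opt.latency_ms
            + w_energy * opt.energy_mj
            + w_cost * opt.cost_eur
            + w_data * opt.data_out_mb)

options = [
    PlacementOption("vehicle", latency_ms=8,  energy_mj=120, cost_eur=0.0, data_out_mb=0.0),
    PlacementOption("edge",    latency_ms=25, energy_mj=15,  cost_eur=0.4, data_out_mb=2.0),
    PlacementOption("cloud",   latency_ms=90, energy_mj=5,   cost_eur=0.1, data_out_mb=6.0),
]
best = min(options, key=placement_score)
print(f"Chosen placement: {best.name}")
```

In an actual project, such a heuristic would be replaced by validated models of network conditions, on-board energy availability and data-sharing constraints.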
Scope:
CCAM-enabled vehicles constantly sense their surroundings, including road conditions, location, nearby vehicles and infrastructure. These data are shared in real time, while data from other sources are received. This requires powerful and optimised algorithms for processing large volumes of data, which in turn demand large amounts of computing power, real-time operation and high levels of security. However, most existing AI computing tasks for automated vehicle applications rely on general-purpose hardware, which has limitations in terms of power consumption, speed, accuracy, scalability, memory footprint, size and cost. Hardware advancements driven by initiatives such as the Chips JU calls must be complemented by significant efforts to optimise AI algorithms for CCAM functionalities, ensuring their efficient performance on edge-specific hardware.
This dual approach of advancing both AI and hardware is essential to carry CCAM solutions into future steps such as the Software-Defined Vehicle. Complementarities with projects funded under Cluster 4 “Digital Industry and Space” of Horizon Europe should also be considered where appropriate, especially in translating sector-agnostic innovations to the specificities of CCAM applications. Requirements on AI algorithm optimisation, latency, on-board energy availability, solutions for obtaining unbiased datasets for AI training, Electronic Control Unit (ECU) capacity and potential safety-critical scenarios should be considered to ensure the timely triggering of actions and, at a later stage, anticipatory driving. Solutions should use, as far as possible, building blocks, interfaces and tools from projects of the Software-Defined Vehicle of the Future (SDVoF) initiative.
Edge-AI involves deploying AI algorithms on edge computing devices: resource-constrained hardware systems operating in close proximity to the data source, without relying on remote resources for the computational effort. This facilitates real-time insights, responses and triggering of actions at reduced cost, since processing power close to the application is used and networking overhead is greatly reduced. Combining AI with edge computing can facilitate stable solutions covering the full activity chain from sensing, perception and decision-making up to actuation of advanced CCAM solutions, gaining the speed and resilience that are essential in safety-critical situations.
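As a purely illustrative sketch of such an on-device chain from sensing through perception and decision-making to actuation, the snippet below assumes a pre-exported ONNX detection model ("detector.onnx"), an input tensor named "images", a YOLO-style output layout and simple placeholder sensor/actuator functions; none of these names come from the call text.

```python
# Minimal sketch of an on-device sense-perceive-decide-actuate loop.
# Assumptions (not from the call text): an exported "detector.onnx" model,
# an input named "images", and a YOLO-style output with a confidence column.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])

def read_camera_frame() -> np.ndarray:
    """Placeholder for the vehicle's camera interface (returns an NCHW float tensor)."""
    return np.random.rand(1, 3, 384, 640).astype(np.float32)

def trigger_brake_warning() -> None:
    """Placeholder for an actuation interface (e.g. an ECU command)."""
    print("obstacle detected: issuing warning")

for _ in range(3):  # a real deployment would loop continuously over the sensor stream
    frame = read_camera_frame()                            # sensing
    (detections,) = session.run(None, {"images": frame})   # perception on the edge device
    # decision-making: act if any detection exceeds a confidence threshold
    # (assumes column 4 of the output holds the confidence score)
    if detections.size and detections[..., 4].max() > 0.6:
        trigger_brake_warning()                            # actuation
```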
To successfully overcome these challenges, proposed actions are expected to address all of the following aspects:
- To enable the next major advancements in AI applications for CCAM solutions, large AI applications need to fit into limited hardware to be fit for purpose. Edge-AI devices often have limited computational resources, making it challenging to deploy large and complex AI models. It is therefore essential to develop and reshape approaches and building blocks for CCAM solutions that are viable to run on edge hardware. Use cases for the approaches and building blocks should focus on time-critical applications (such as the chain from (collective) perception through decision-making to actuation of functionalities) and can be linked to the activities and results of the projects AI4CCAM[1] and AIthena[2];
- Develop optimised edge-AI algorithms and demonstrate their applicability and scalability using real-world CCAM scenarios, such as those in the databases resulting from projects such as SYNERGIES[3]. The development and demonstration use cases should include in-vehicle perception and understanding, such as object detection, segmentation, road surface tracking, and sign and signal recognition. Decision-making and actuation of countermeasures are to be part of the chain of actions. The approaches for these building blocks and enabling technologies should facilitate a quick uptake in adjacent or follow-on projects;
- Optimise the models for edge deployment. This involves adjusting the size and complexity of models so that they can run on the relevant edge devices, and includes training and verification approaches. Techniques such as model quantization, pruning and knowledge distillation can be used to reduce the size of AI models without significant loss in performance; a minimal compression sketch is given after this list. Additionally, over-the-air (OTA) updates can be used to manage and update models efficiently across a fleet of devices;
- Develop tools and approaches for edge-AI model monitoring, to ensure that edge-AI systems continue to operate as expected and remain resilient to failure conditions or attacks, and to monitor model outputs so that they remain accurate even as real-life conditions and datasets change; a minimal drift-monitoring sketch is also given after this list.
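To make the compression techniques named above concrete, the following minimal sketch applies PyTorch L1 unstructured pruning and post-training dynamic quantization to a toy stand-in network. The architecture, the 30% sparsity and the int8 setting are illustrative assumptions; real CCAM perception models would require task-specific choices and re-validation against safety requirements.

```python
# Minimal sketch: pruning + post-training dynamic quantization of a toy network.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(               # stand-in for a small perception head
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# 1) Pruning: remove 30% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the sparsity permanent

# 2) Post-training dynamic quantization: int8 weights for the Linear layers.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Knowledge distillation (not shown) would instead train a small "student" model
# to mimic a larger "teacher" before applying the steps above.
dummy_input = torch.randn(1, 256)
print(quantized(dummy_input).shape)  # torch.Size([1, 10])
```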
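Likewise, the sketch below illustrates one possible form of output monitoring for a deployed edge-AI model: comparing recent prediction confidences against a reference window using a population stability index (PSI). The distributions, window sizes and the 0.2 threshold are assumptions for illustration only; production monitoring would also need to cover inputs, hardware health and security events.

```python
# Minimal sketch of output-drift monitoring for a deployed edge-AI model.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of model confidence scores."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    cur_pct = np.histogram(current, bins=edges)[0] / len(current) + 1e-6
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Reference window collected during validation; current window from live operation
# (both are synthetic placeholders here).
reference_conf = np.random.beta(8, 2, size=5000)
current_conf = np.random.beta(5, 3, size=500)

drift = psi(reference_conf, current_conf)
if drift > 0.2:   # rule-of-thumb threshold, used here purely as an example
    print(f"PSI={drift:.2f}: possible drift, flag model for re-validation")
else:
    print(f"PSI={drift:.2f}: model outputs within expected range")
```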
The research will require due consideration of cyber security, connectivity and both personal and non-personal data protection rules, including compliance with the GDPR. It should ensure that gender and other social categories (such as, but not limited to, disability, age, socioeconomic status, ethnic or racial origin and sexual orientation), and their intersections, are duly considered where appropriate, and should apply Explainable AI to enhance trust and regulatory compliance, including alignment with the AI Act.
In order to achieve the expected outcomes, international cooperation is encouraged, in particular with Japan and the United States, but also with other relevant strategic partners in third countries. Such cooperation should exploit synergies in edge-AI approaches for mobility and for CCAM, as well as their integration into the vehicle architecture.
This topic implements the co-programmed European Partnership on ‘Connected, Cooperative and Automated Mobility’ (CCAM). As such, projects resulting from this topic will be expected to report on results to the European Partnership ‘Connected, Cooperative and Automated Mobility’ (CCAM) in support of the monitoring of its KPIs.
Projects resulting from this topic are expected to apply the European Common Evaluation Methodology (EU-CEM) for CCAM[4].
Projects funded under this topic are encouraged to explore potential complementarities with the activities of the European Commission's Joint Research Centre’s Sustainable, Smart, and Safe Mobility Unit and, where appropriate, establish formal collaboration.
[1] Trustworthy AI for CCAM, grant agreement ID: 101076911.
[2] AI-based CCAM: Trustworthy, Explainable, and Accountable, grant agreement ID: 101076754.
[3] Real and synthetic scenarios generated for the development, training, virtual testing and validation of CCAM systems, grant agreement ID: 101146542.
[4] See the evaluation methodology here.