Imagine a scenario where drones autonomously navigate disaster zones, searching for survivors or assessing damage without human intervention. The technology already exists, but a critical challenge remains: how can we trust these artificial intelligence (AI)-driven systems to make reliable decisions in life-or-death situations?
This research project addresses the pressing need for reliable autonomous systems, particularly drones guided by AI algorithms and operating in safety- and security-critical environments. As AI technologies become more integrated into sectors such as transportation, healthcare, and security, the reliability of these systems becomes crucial. The black-box nature of many AI algorithms, such as neural networks, often introduces unpredictability, which can lead to significant risks and to hesitance in their deployment.
The research design combines theoretical study, algorithm and framework development, and empirical testing. To evaluate and improve the reliability of an entire system, both the system architecture and the whole chain of information flow must be examined, from the gathering of training data to the deployment of AI models.
The primary objective of this project is to create a framework that evaluates and ensures an acceptable level of reliability in AI systems, even if absolute reliability is unattainable for many systems and environments.
The project's outcomes are expected to enhance the understanding of AI reliability, improve design and verification practices, and potentially influence policy-making related to AI systems.
As AI continues to evolve, ensuring its reliability becomes paramount. This research stands at the forefront of this challenge, paving the way for a future where we can confidently deploy AI-driven autonomous systems in even the most critical scenarios.
The proposed research project aims to address the critical need for reliability in AI-driven autonomous systems, focusing on those deployed in safety- and security-critical environments. More specifically, the research focuses on cyber-physical systems, i.e., embodied AI. As AI technologies advance, their integration into various sectors, including transportation, healthcare, and security, has increased significantly. However, the unreliable nature of AI algorithms such as neural networks can pose serious risks in these applications, leading to hesitation from both industry and consumers regarding their deployment. This project will develop a framework to increase and verify reliability in AI-driven autonomous systems, in line with ethical considerations and ongoing regulatory efforts such as the EU AI Act.
The reliability of an AI model refers to its ability to produce trustworthy and repeatable results. To evaluate the reliability of a complete system, it is necessary to examine the entire AI pipeline, which typically involves the following stages: data importing, data partitioning and pre-processing, model training, and model deployment. When an AI model is used to control an autonomous system, additional steps in the system can influence reliability and must therefore be included. A challenge for this proposed research project is to investigate the pipeline(s) and framework(s) that apply to the continually emerging AI methods and the wide variety of autonomous systems.
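As a concrete illustration of these pipeline stages, the minimal sketch below walks through data importing, partitioning and pre-processing, model training, and a deployment stand-in (prediction on unseen data). It is not the framework proposed here: scikit-learn, its toy digits dataset, and the small neural network are assumed placeholders, and rerunning the pipeline under different random seeds is used only as one simple, assumed proxy for the "repeatable results" aspect of reliability.

```python
# Minimal sketch of an AI pipeline whose stages mirror those listed above.
# scikit-learn, the digits dataset, and the MLP are illustrative stand-ins,
# not the models or systems studied in this project.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score


def run_pipeline(seed: int) -> float:
    """Run one pass through the pipeline and return held-out accuracy."""
    # Stage 1: data importing
    X, y = load_digits(return_X_y=True)

    # Stage 2: data partitioning and pre-processing
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=seed
    )
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # Stage 3: model training (a small neural network as an example black box)
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=seed)
    model.fit(X_train, y_train)

    # Stage 4: "deployment" reduced here to prediction on unseen data
    return accuracy_score(y_test, model.predict(X_test))


if __name__ == "__main__":
    # A simple proxy for repeatability: rerun the whole pipeline under
    # different random seeds and inspect the spread of the results.
    scores = [run_pipeline(seed) for seed in range(5)]
    print(f"accuracy mean={np.mean(scores):.3f}, std={np.std(scores):.3f}")
```

In an autonomous system, further stages (sensing, state estimation, actuation) would follow the prediction step, and the spread of results across such reruns is only one of several indicators that a reliability framework would need to capture.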