Artificial intelligence (AI) systems are becoming ubiquitous and are disrupting industries such as healthcare, transportation, manufacturing, robotics, retail, banking, and energy. According to a recent European study, AI could contribute up to EUR 13.3 trillion to the global economy by 2030. Most of the recent AI breakthroughs can be attributed to the subfield of deep learning (DL), which has achieved state-of-the-art performance in tasks traditionally challenging for machines, such as object and speech recognition, and has beaten human champions at the game of Go. However, the complex and unintuitive black-box nature of DL models results in a lack of interpretability, a lack of robustness, and an inability to generalize to situations beyond those encountered during training. These issues must be addressed to make AI systems trustworthy and deployable in social environments and in industrial and business-critical applications.
In the EXAIGON project, the Norwegian University of Science and Technology (NTNU), SINTEF Digital and international research partners will address these challenges by developing user-centric Explainable AI (XAI) methods for understanding how black-box models make their predictions and what their limitations are. Given the data-driven nature of the topic, the research within EXAIGON will be driven by use cases (including datasets, models, and expert knowledge) provided by seven industry collaborators: DNB, DNV GL, Embron, Equinor, Kongsberg, Telenor and TrønderEnergi. The main focus areas are supervised learning, deep reinforcement learning, deep Bayesian networks and human-machine co-behaviour.
EXAIGON will create an XAI ecosystem around the Norwegian Open AI-Lab, and its outcomes will benefit the research community, industry and high-level policy makers, who are concerned about the impact of deploying AI systems in the real world on efficiency, safety, and respect for human rights.
As of September 2024, the project has graduated 24 MSc students, contributed to numerous dissemination activities and published several scientific results. The project’s first PhD graduate, Vilde Gjærum (ITK/NTNU), successfully defended her thesis in April 2023. An important element of her work was its focus on methods that are suitable for real-time robotic applications and explainable to end users at different levels of expertise. Moreover, she spent two months in Australia on a research stay with the group of Professor Tim Miller at the University of Melbourne. Prof. Miller is an international expert in XAI and has co-authored two EXAIGON publications. A second PhD student (ITK/NTNU) is finalizing his thesis, and all researchers are on track with their schedules.
Two PhD students (IDI/NTNU) have had extended international collaboration with researchers at the Complutense University of Madrid (Spain) and the Universidad de Almeria (Spain), including research stays at those universities in 2024. The project’s postdoctoral researcher (IDI/NTNU) is collaborating with the University of Washington (USA) and the Aalto University School of Business (Finland). A PhD researcher (ITK/NTNU) also has a planned research visit to ETH Zurich from October 2024 to February 2025.
The project’s researchers have so far had several interactions with the industry collaborators, and one of them collaborates regularly with three of the partners. These efforts will be pursued further in 2024-25. We are also preparing a full-scale trial with NTNU’s autonomous passenger ferry, milliAmpere1, in which XAI technologies will be tested and communicated to relevant stakeholders.
The recent advances in Artificial Intelligence (AI) hold promise for significant benefits to society in the near future. However, to make AI systems deployable in social environments and in industrial and business-critical applications, several challenges related to their trustworthiness must first be addressed.
Most of the recent AI breakthroughs can be attributed to the subfield of Deep Learning (DL). Despite their impressive performance, however, DL models have important drawbacks: a) a lack of transparency and interpretability, b) a lack of robustness, and c) an inability to generalize to situations beyond their past experiences.
Explainable AI (XAI) aims to remedy these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry and high-level policy makers, who are concerned about the implications of deploying AI systems in the real world for efficiency, safety, and respect for human rights.
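As a concrete illustration of what such a post-hoc, model-agnostic explanation can look like, the minimal Python sketch below trains an opaque classifier and then asks which input features it actually relies on, using scikit-learn's permutation importance. The dataset, model and library choices are illustrative assumptions only; they are not the methods developed in EXAIGON.

    # Minimal sketch of a post-hoc, model-agnostic explanation of a black-box classifier.
    # Dataset, model and library choices are illustrative assumptions, not EXAIGON methods.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train an opaque ("black-box") model on a standard tabular dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Post-hoc question: which features drive the predictions?
    # Permutation importance shuffles one feature at a time and records the drop in accuracy.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]:<25} importance = {result.importances_mean[idx]:.3f}")

Such global feature attributions are only one kind of explanation; producing explanations that are understandable to different end users is exactly the human-machine co-behaviour question addressed in Area 4 below.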
EXAIGON will advance the state-of-the-art in XAI by conducting research in four areas:
1. Supervised learning models
2. Deep reinforcement learning models
3. Deep Bayesian networks
4. Human-machine co-behaviour.
Areas 1-3 involve the design of new algorithms and will interact continuously with Area 4 to ensure that the developed methods provide explanations understandable to humans. The developed methodologies will be evaluated in close collaboration with seven industry partners, who have provided the consortium with business-critical use cases, including data, models and expert knowledge.
The consortium includes two international partners, from the University of Wisconsin-Madison and the University of Melbourne, respectively, who have conducted and published outstanding research in relevant areas over the past few years.