Artificial intelligence (AI) systems are becoming ubiquitous and disruptive to industries such as healthcare, transportation, manufacturing, robotics, retail, banking, and energy. According to a recent European study, AI could contribute up to EUR 13.3 trillion to the global economy by 2030. Most of the recent AI breakthroughs can be attributed to the subfield of deep learning (DL), which has achieved state-of-the-art performance in tasks traditionally challenging for machines, such as object and speech recognition, and in beating human champions at the game of Go. However, the complex and unintuitive black-box nature of DL models results in a lack of interpretability, a lack of robustness, and an inability to generalize to situations beyond those encountered during training. These issues must be addressed in order to make AI systems trustworthy and deployable in social environments, industry, and business-critical applications.
In the EXAIGON project, the Norwegian University of Science and Technology (NTNU), SINTEF Digital and international research partners will address these challenges by developing user-centric Explainable AI (XAI) methods for understanding how black-box models make their predictions and what their limitations are. Due to the data-driven nature of the topic, the research within EXAIGON will be driven by use cases (including datasets, models, and expert knowledge) provided by seven industry collaborators: DNB, DNV GL, Embron, Equinor, Kongsberg, Telenor and TrønderEnergi. The main focus areas are supervised learning, deep reinforcement learning, deep Bayesian networks and human-machine co-behaviour.
EXAIGON will create an XAI ecosystem around the Norwegian Open AI-Lab, and its outcomes will benefit the research community, industry and high-level policy makers, who are concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety, and respect for human rights.
The recent advances of Artificial Intelligence (AI) hold promise for significant benefits to society in the near future. However, in order to make AI systems deployable in social environments, industry and business-critical applications, several challenges related to their trustworthiness must be addressed first.
Most of the recent AI breakthroughs can be attributed to the subfield of Deep Learning (DL), but, despite their impressive performance, DL models have important drawbacks: a) a lack of transparency and interpretability, b) a lack of robustness, and c) an inability to generalize to situations beyond their past experiences.
Explainable AI (XAI) aims at remedying these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry and high-level policy makers, who are concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety, and respect for human rights.
EXAIGON will advance the state-of-the-art in XAI by conducting research in four areas:
1. Supervised learning models
2. Deep reinforcement learning models
3. Deep Bayesian networks
4. Human-machine co-behaviour
Areas 1-3 involve the design of new algorithms and will interact continuously with Area 4 to ensure that the developed methods provide explanations understandable to humans. The developed methodologies will be evaluated in close collaboration with seven industry partners, who have provided the consortium with business-critical use cases, including data, models and expert knowledge.
The consortium includes two international partners, from the University of Wisconsin-Madison and the University of Melbourne, who have conducted and published outstanding research in relevant areas over the last few years.