IKTPLUSS-IKT og digital innovasjon

EXAIGON - EXplainable AI systems for Gradual industry adoptiON

Alternative title: EXAIGON - Forklaringsbaserte AI systemer for gradvis industriell utnyttelse

Awarded: NOK 16.0 mill.

In the EXAIGON project, the Norwegian University of Science and Technology (NTNU), SINTEF Digital and international research partners address key challenges around the trustworthiness of AI systems by developing user-centric Explainable AI (XAI) methods for understanding how black-box models make their predictions and what their limitations are. Because the topic is inherently data-driven, the research within EXAIGON is driven by use cases (including datasets, models and expert knowledge) provided by seven industry collaborators: DNB, DNV GL, Embron, Equinor, Kongsberg, Telenor and TrønderEnergi. The main focus areas are supervised learning, deep reinforcement learning, deep Bayesian networks and human-machine co-behaviour. EXAIGON will create an XAI ecosystem around the Norwegian Open AI-Lab, and its outcomes will benefit the research community, industry and high-level policy makers, who are concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety and respect for human rights.

By October 2021, the project had graduated 16 MSc students, contributed to numerous dissemination activities and published its first scientific results: two journal articles and several conference papers have been accepted or presented. One important element is the focus on methods that are explainable to different levels of end users. The project has also prioritized international collaboration, to the extent the COVID-19 situation allowed, and now maintains close contact with the University of Melbourne, Australia.
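As an illustration of the kind of post-hoc, model-agnostic explanation method the project works with, the sketch below computes permutation feature importance for a black-box classifier. It is a minimal, hypothetical example: the synthetic dataset, the random-forest model and the scikit-learn tooling are assumptions made for illustration, not EXAIGON deliverables or use-case data.

```python
# Minimal sketch (not project code): a post-hoc, model-agnostic explanation
# of a black-box classifier via permutation feature importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an industrial use-case dataset.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the model as a black box: the explanation only queries its predictions.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time and measure how much held-out accuracy drops;
# large drops indicate features the model relies on for its predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Because this style of probing depends only on model predictions, not on model internals, the same idea carries over to very different model classes in the industrial use cases.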

Recent advances in Artificial Intelligence (AI) promise significant benefits to society in the near future. However, before AI systems can be deployed in social environments, industry and business-critical applications, several challenges related to their trustworthiness must be addressed. Most recent AI breakthroughs can be attributed to the subfield of Deep Learning (DL), but despite their impressive performance, DL models have drawbacks, some of the most important being (a) lack of transparency and interpretability, (b) lack of robustness, and (c) inability to generalize to situations beyond their past experience. Explainable AI (XAI) aims to remedy these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry and high-level policy makers, who are concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety, and respect for human rights. EXAIGON will advance the state of the art in XAI by conducting research in four areas: (1) supervised learning models, (2) deep reinforcement learning models, (3) deep Bayesian networks, and (4) human-machine co-behaviour. Areas 1-3 involve the design of new algorithms and will interact continuously with Area 4 to ensure that the developed methods provide explanations understandable to humans. The developed methodologies will be evaluated in close collaboration with the seven industry partners, who have provided the consortium with business-critical use cases, including data, models and expert knowledge. The consortium also includes two international partners, from the University of Wisconsin-Madison and the University of Melbourne, who have conducted and published outstanding research in relevant areas over the last few years.
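As a concrete, hedged sketch of one idea related to Area 3 (deep Bayesian networks), the example below uses Monte Carlo dropout, a common approximation to Bayesian inference in deep networks, to return a predictive mean together with an uncertainty estimate; surfacing such uncertainty is one way a model's limitations can be communicated to end users. The architecture, the choice of PyTorch, and the random inputs are illustrative assumptions, not project code.

```python
# Minimal sketch (not EXAIGON code): Monte Carlo dropout as an approximate
# Bayesian treatment of a deep network, yielding prediction plus uncertainty.
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, in_dim=8, hidden=64, out_dim=2, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=50):
    # Keep dropout active at inference time and average over stochastic passes.
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    # Predictive mean and spread across the sampled forward passes.
    return probs.mean(dim=0), probs.std(dim=0)

model = MCDropoutNet()
x = torch.randn(4, 8)  # placeholder inputs
mean, std = predict_with_uncertainty(model, x)
print(mean, std)
```

A large spread across the sampled passes flags inputs on which the model is unsure, which is exactly the kind of limitation signal an end user needs before acting on a prediction.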

