The last decade has witnessed an explosive rise in the use of decision systems built on modern AI techniques that are often opaque, such as deep learning. These black-box systems, trained on large amounts of data, are a key tool in making important decisions for individuals, companies, and society at large. It is therefore of the greatest importance that users can evaluate and trust these decisions. The field of explainable artificial intelligence (XAI) addresses this issue by giving human users a better understanding of the behavior of complex AI systems.
This project is directed at example-based explanations, with a novel focus on the simplicity of examples. We develop mathematical formulations of simplicity, across various representation domains, that correlate well with simplicity as perceived by the human learner. Building on earlier joint work in the field of machine teaching, we develop the conceptual and practical setting of example-based explanation, thereby expanding the techniques of machine teaching and breaking new ground by applying them to XAI.
The project is a partnership between the University of Bergen and VRAIN (the Valencian Research Institute for AI), together with Equinor and BKK AS, two Norwegian companies in the energy sector that have identified explainability as crucial for their business.
Equinor has 21,000 employees and is one of the world's largest offshore operators, while BKK AS has 1,300 employees, with hydroelectric power production as its main activity. In both companies, more and more decisions are made using machine learning models, and the results of this project will help make those models explainable. Explainability is needed for safety and efficiency: it helps users and experts recognize when a model makes decisions on a wrong or weak basis, and better understand the foundations of the model.
The field of explainable artificial intelligence (XAI) is concerned with giving human users a better understanding of the behavior of complex AI systems. Modern AI techniques such as deep learning are often opaque: they yield black-box systems that are used to predict sensitive individual information such as credit score, insurance risk, and health status. Meanwhile, the GDPR contains clauses that grant all individuals a right to obtain “meaningful explanations of the logic involved” when automated decision-making takes place. Honoring this right is an urgent scientific challenge, and it is what XAI aims to deliver.
Machine teaching is an emerging field that has recently attracted attention in AI. Briefly, machine teaching can be considered an inverse problem to machine learning: where machine learning searches for a model that fits a given training set, the teacher's goal is to find the smallest training set that leads a learner to a given target concept. This goal is compatible with the goals of explainable AI. We take the learner to be the human user, and the target concept to be (a part of) the black-box AI system that needs explanation. The machine teaching algorithm must then find a small set of labelled examples that allows the human to build her own model of the AI system and thereby arrive at an explanation of the target concept.
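To make the inverse-problem view concrete, consider teaching a one-dimensional threshold concept. The following is a minimal illustrative sketch, not the project's method: the names `black_box` and `teach_threshold`, and the choice of concept class, are assumptions made for the example. A learner who knows the concept is a threshold needs only two labelled examples bracketing the boundary, which the teacher can locate by querying the black box.

```python
def black_box(x, theta=0.37):
    """Opaque model to be explained: predicts 1 iff x >= theta.
    (theta is hidden from the learner.)"""
    return int(x >= theta)

def teach_threshold(model, lo=0.0, hi=1.0, tol=1e-3):
    """Teacher: binary-search the decision boundary of `model`,
    then return a minimal teaching set -- one negative and one
    positive example within `tol` of each other."""
    # Invariant: model(lo) == 0 and model(hi) == 1.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if model(mid) == 1:
            hi = mid
        else:
            lo = mid
    # Two labelled examples suffice for a learner restricted to
    # threshold concepts: the boundary lies between them.
    return [(lo, 0), (hi, 1)]

examples = teach_threshold(black_box)
```

Here the teaching set has size two regardless of how much data the black box was trained on, illustrating why small, well-chosen example sets can serve as explanations.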
We argue that since any machine learning model has been trained on data, data is the natural common language between the user and the model. The goal of this project is to develop the conceptual and practical setting of example-based explanation, applying and expanding the techniques of machine teaching to reach the goals of explainable AI and to meet the software and knowledge-acquisition needs of the industrial partners Equinor and BKK AS.