The project develops new methods for interpretable deep learning (AI), with a particular focus on healthcare. Since the project's inception, it has produced a number of methods for interpretable AI in healthcare, and the results have been published in leading journals and conferences. Two directions within interpretable AI have received particular attention:
Methods that highlight elements of the input (pixels in images or time steps in time series) by randomly masking out input elements for a given AI model, measuring the effect of the masking on the model's output, and from this producing an estimate of each element's "importance".
Methods that identify certain “prototypical” inputs, and then relate an AI model’s prediction (e.g., a classification) to these prototypes. The idea is that the weight each prototype receives in relation to the prediction provides a form of interpretability.
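The first direction can be illustrated with a minimal sketch in the spirit of random-masking (occlusion-style) importance estimation. The function name `masking_importance`, the toy model, and all parameter values below are illustrative assumptions, not the project's actual published method:

```python
import numpy as np

def masking_importance(model, x, n_masks=500, p_keep=0.5, seed=0):
    """Estimate per-element importance by random masking (illustrative sketch).

    model: callable mapping a 1-D input to a scalar score
    x: 1-D numpy array (e.g. a time series)
    Returns, for each input element, the average model score over
    random masks in which that element was kept.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros_like(x, dtype=float)
    counts = np.zeros_like(x, dtype=float)
    for _ in range(n_masks):
        mask = rng.random(x.shape) < p_keep  # True = keep, False = mask out
        score = model(x * mask)              # masked elements are zeroed
        scores += score * mask               # credit kept elements with the score
        counts += mask
    return scores / np.maximum(counts, 1)

# Toy model whose output depends only on elements 2 and 5:
model = lambda v: v[2] + v[5]
x = np.ones(8)
imp = masking_importance(model, x)
# elements 2 and 5 receive the highest importance estimates
```

Averaging the score only over masks where an element was kept is one simple way to attribute the output to the input; real methods refine this with smoother masks and normalization.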
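The second direction can be sketched as a toy classifier that weights labeled prototypes by their similarity to the input, so that the weights themselves serve as the interpretation. The function name, the distance-based weighting, and the example data are illustrative assumptions only:

```python
import numpy as np

def prototype_predict(x, prototypes, proto_labels, n_classes=2, temperature=1.0):
    """Classify x by similarity to labeled prototypes (illustrative sketch).

    Returns (class_probs, weights): weights[i] is the contribution of
    prototype i, which gives the interpretable link from prediction
    back to prototypical inputs.
    """
    # Similarity = exp of negative squared distance, normalized to sum to 1.
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    w = np.exp(-d2 / temperature)
    w = w / w.sum()
    # Each prototype votes for its class with its weight.
    probs = np.zeros(n_classes)
    for wi, ci in zip(w, proto_labels):
        probs[ci] += wi
    return probs, w

# Toy example: two class-0 prototypes near the origin, one class-1 far away.
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
proto_labels = [0, 0, 1]
probs, weights = prototype_predict(np.array([0.1, 0.1]), prototypes, proto_labels)
# an input near the origin is assigned class 0, dominated by the first prototype
```

Inspecting `weights` then answers "which prototypical cases does this prediction resemble?", which is the core of the interpretability argument in this direction.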
In 2025, these two directions were further developed and applied in several use cases. The most important application has been the analysis of mammography images in the context of breast cancer screening, as part of a collaboration with the Norwegian Institute of Public Health, specifically the Cancer Registry. Among other things, the project has developed methods for interpretable analysis of breast density jointly with cancer risk assessment, and methods for leveraging the temporal information latent in the breast cancer screening program.
Patient- and population-specific data from heterogeneous electronic health records (EHRs) are becoming ubiquitous sources for data-driven decision and diagnosis support systems. Deep learning is emerging as the state of the art for EHR analysis due to its ability to learn complex representations from raw clinical data, yielding strong predictive power, combined with an inherent ability to accept multiple data types as input for heterogeneous data fusion. However, key problems and constraints for deep learning systems in health are their lack of interpretability, their inability to exploit vast amounts of unannotated patient data, and their inability, to date, to exploit contextual information to perform well in the low-data regime, e.g. due to stratification.

As a key solution, the DEEPehr project will develop interpretable deep learning predictive systems for a range of EHR input sources, focusing particularly on the prediction and prevention of postoperative adverse events. Adverse events, such as infections, are potentially lethal, causing great suffering for patients and great costs for healthcare. DEEPehr will develop novel unsupervised and weakly supervised deep learning methodology to exploit the wealth of unannotated patient data for better quality of care, and will leverage the unique hierarchical nature of EHRs to utilize contextual and prior information and extract new clinical knowledge from low data volumes.

Project results and outcomes will impact DEEPehr's clinical stakeholders, and given the project's core of generic methodology development, its potential to impact data-driven health and science more broadly is great. DEEPehr is high risk because of the profound challenges and the interdisciplinary nature of the endeavor, yet feasible due to the high quality of the team, the extensive mobility, and the top international collaborators, creating the synergy effects needed to reach the ambitious project objectives.
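The hierarchical nature of EHRs referred to above (a patient has visits, a visit has events) and the fusion of heterogeneous data types can be sketched with a minimal data model. The class names, the bag-of-codes encoding, and the concatenation with static demographics are illustrative assumptions, not DEEPehr's actual representation:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Event:
    code: str     # e.g. a diagnosis or procedure code
    value: float  # measurement or indicator value

@dataclass
class Visit:
    events: List[Event]

@dataclass
class Patient:
    visits: List[Visit]
    demographics: np.ndarray  # static context, e.g. age, sex

def encode_patient(p: Patient, vocab: dict) -> np.ndarray:
    """Fuse the hierarchical record into one flat feature vector:
    a bag-of-codes summed over all visits, concatenated with the
    patient's static demographics."""
    x = np.zeros(len(vocab))
    for visit in p.visits:
        for ev in visit.events:
            if ev.code in vocab:
                x[vocab[ev.code]] += ev.value
    return np.concatenate([x, p.demographics])

# Toy example with a two-code vocabulary and two visits:
vocab = {"E11": 0, "I10": 1}
patient = Patient(
    visits=[Visit(events=[Event("E11", 1.0)]),
            Visit(events=[Event("I10", 1.0), Event("E11", 1.0)])],
    demographics=np.array([63.0, 1.0]),
)
features = encode_patient(patient, vocab)
```

Flattening discards the visit-level structure; the point of exploiting the hierarchy, as described above, is precisely to retain such contextual information rather than collapse it.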