

Responsible Explainable Machine Learning for Sleep-related Respiratory Disorders (Respire)

Alternative title: Ansvarlig forklarlig maskinlæring for søvn-relaterte luftveislidelser (Respire)

Awarded: NOK 12.6 mill.

Devices like smartwatches that can collect health data from "everybody" all the time, combined with machine learning (ML) to analyze these data, will strongly shape future health solutions. They can enable low-cost, large-scale screening and long-term monitoring of individuals to automatically detect changes in their health status, to detect undiagnosed diseases early, and to personalize the treatment of patients. If applied without reflection, they also raise substantial challenges: (1) protection and control of the use of collected data, (2) false alarms, health anxiety, overdiagnosis, subsequent overtreatment, and medicalization, (3) reliability, relevance, and validity of data analysis results, and (4) the inability to explain results obtained with modern ML. These challenges undermine basic ethical principles and legal rights and may hamper fruitful use of ML in the health sector.

These challenges and opportunities will be addressed by researchers from computer science, medicine, law, and ethics. The medical focus will be on sleep-related respiratory disorders, in particular infants with obstructions in the upper respiratory tract and patients who receive long-term nocturnal mechanical ventilation support via a mask. The core of Respire will be a framework that defines what good explanations are for different users (e.g., health professionals, patients, and ML developers) and how their quality can be evaluated. As groundwork we will investigate: (1) the use of monitoring data from mechanical ventilators and ML to improve and personalize the treatment of patients, (2) consumer electronics for sleep monitoring of infants at home and ML to analyze the monitoring data, (3) major legal and ethical concerns, with a focus on the ethical principles of autonomy and privacy, EU data protection law, and health legislation, and (4) the relationship between detecting and defining entities such as indicators, indexes, diagnoses, and diseases, and the potential challenges caused by false alarms.

To tackle the first issue, we combined an advanced lung simulator with a mechanical ventilator, along with custom-built parts that mimic a patient's airways and different types of mask leaks. This simulation setup is now being used to generate data that helps us develop and test ML solutions for automatically analyzing the data collected by mechanical ventilators.

The second issue is being addressed with new contactless sleep-monitoring prototypes based on the radar-based sleep monitor Somnofy from Vitalthings AS. After all technical and regulatory challenges were resolved, these monitoring systems have been in use since summer 2024 to track patients both in hospital and at home. Parental consent for their children's participation in the study has been overwhelmingly positive, reflecting strong interest in the project.

To ensure we meet legal and ethical standards, we are (1) studying how the concepts of "transparency" and "explainability" from the EU's General Data Protection Regulation (GDPR) and the EU Artificial Intelligence (AI) Act apply to the research, development, and use of AI, and (2) contributing to the scientific discussion by expanding on the well-known bioethical principles of Beauchamp and Childress. These principles comprise beneficence (doing good), non-maleficence (avoiding harm), respect for autonomy (the ability to make decisions), and justice (fair distribution of benefits and harms). We are proposing a fifth principle, "explicability": the quality of being clear and understandable enough to facilitate accountability. Our interdisciplinary research into responsible and explainable AI has also led to an analysis of the essential functions an AI system must have to be considered responsible and explainable.

In addition, we are conducting experiments with publicly available sleep-monitoring data and using ML to detect sleep apnea. These experiments aim to assess (1) how well current solutions can determine the confidence of an ML model in its decisions and (2) how effective these solutions are at making ML more understandable. Our findings in sleep apnea detection indicate that while current explainability tools are somewhat useful for ML developers, they are still difficult for sleep experts and other stakeholders to understand.
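As an illustration of the kind of analysis these experiments involve, the sketch below shows one simple way to obtain a per-prediction confidence estimate and a global explanation for a sleep-apnea classifier. It is a rough Python sketch using scikit-learn; the features, data, and model are hypothetical placeholders and not the project's actual pipeline.

# Minimal illustrative sketch (synthetic data, placeholder feature names):
# (1) calibrated class probabilities as a crude per-prediction confidence,
# (2) permutation importance as a simple global explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["spo2_mean", "spo2_drop", "resp_rate", "resp_variability", "movement"]
X = rng.normal(size=(2000, len(feature_names)))
# Toy apnea label driven mostly by the "spo2_drop" and "resp_variability" columns.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Calibrated probabilities give a rough confidence score for each decision.
clf = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200, random_state=0), cv=3)
clf.fit(X_tr, y_tr)
confidence = clf.predict_proba(X_te).max(axis=1)
print(f"mean confidence: {confidence.mean():.3f}")

# Permutation importance indicates which input features drive the model overall.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18s}: {score:+.3f}")

Such confidence scores and feature rankings are useful to ML developers, but, as our findings above suggest, they are not by themselves explanations that sleep experts, patients, or regulators can readily act on.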

To achieve the project objectives, Respire's interdisciplinary research team (medicine, ethics, law, computer science) follows a participatory research approach that involves all stakeholders (patients, medical personnel and researchers, ML developers and researchers, and policy makers). The sleep-related breathing disorders will be investigated in pediatric patients with laryngomalacia and in patients who receive long-term treatment with non-invasive respiratory support devices. To gain new insights into laryngomalacia, explainable ML (xML) solutions will be developed to (1) explore and analyze existing electronic health records and (2) detect respiratory events in sleep-monitoring data. To enable patient-friendly sleep monitoring at home, we will tailor and evaluate existing sensing technology for pediatric patients. The xML model for the analysis of data from non-invasive respiratory devices will be based on data generated by a physical lung simulator and tests with ventilation devices.

To systematically develop good xML solutions, we will establish an explainability framework that comprises (1) basic definitions of and requirements for explanations, (2) the type, level, and presentation form of explanations for the different stakeholders, and (3) an assessment framework for evaluating explainability. We will perform an interdisciplinary use-case study with the target audience and an epistemological analysis of uncertainty and validation to lay the foundation for the explainability framework. Further legal and ethical studies (including legal dogmatic analysis "de lege lata" and legal analysis "de lege ferenda") shall enable the project to publish articles that identify and relate the risks and opportunities of mHealth, and to produce guidelines and best practices for future monitoring and xML solutions in medicine. The qualitative evaluation of our xML solutions will involve all stakeholders.
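To make the idea of an xML solution for detecting respiratory events more concrete, the following sketch illustrates one common explanation technique, gradient saliency, on a toy event detector. The model is untrained and the signal is synthetic, so this is only a hypothetical illustration of the mechanics in Python with PyTorch, not the planned Respire solution or its analysis of Somnofy data.

# Hypothetical, untrained toy model on a synthetic breathing-effort trace:
# gradient saliency shows which time points influenced the prediction most.
import math
import torch
import torch.nn as nn

class EventDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, x):                 # x: (batch, 1, time)
        return self.net(x).squeeze(-1)    # one event logit per window

model = EventDetector()

# Synthetic 30-second window at 10 Hz with a suppressed (apnea-like) segment.
t = torch.linspace(0, 30, 300)
signal = torch.sin(2 * math.pi * 0.25 * t)
signal[100:180] *= 0.1
x = signal.reshape(1, 1, -1).clone().requires_grad_(True)

logit = model(x)
prob = torch.sigmoid(logit)

# Gradient saliency: |d logit / d input| per time step, a crude map of where
# the model "looked" in the trace when making its decision.
logit.sum().backward()
saliency = x.grad.abs().squeeze()
top = saliency.argsort(descending=True)[:5]
print(f"event probability (untrained model): {prob.item():.2f}")
print("most influential time points (s):", [round(v, 1) for v in t[top].tolist()])

Whether such a saliency trace counts as a good explanation for a sleep expert, a parent, or a regulator is exactly the kind of question the explainability and assessment framework is intended to answer.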

Publications from Cristin

No publications found

Funding scheme:

FRIPRO-Fri prosjektstøtte

Funding Sources