NAERINGSPH-Nærings-phd

Deep learning for sleep, mental, and emotional stage recognition from non-contact sensors of mental health patients.

Alternative title: Maskinlæring for gjenkjenning av søvnstadie, mental og emosjonell tilstand hos psykiatriske pasienter ved bruk av sensorer uten kontakt

Awarded: NOK 2.3 mill.

Project Number:

310278

Project Period:

2019 - 2024

Quality care and patient safety are of vital importance for clinical personnel and hospital management. The project aims to design, develop, test, and evaluate the performance of machine learning based monitoring and decision-support systems using physiological and behavioral data. The research is divided into two studies. The first study addresses nighttime monitoring based on a polysomnography dataset. The second study addresses daytime monitoring based on video surveillance and physiological signals collected from non-contact sensors. The results achieved so far in nighttime monitoring look promising: we achieved higher accuracy than state-of-the-art algorithms, especially when classifying sleep stage N1. The second study will focus on daytime monitoring based on video surveillance and input acquired from doctors. We will perform object detection and annotate actions with machine learning algorithms trained on video action clips. First, we will gather from doctors and nurses the tell-tale actions they look for when monitoring a patient who tends to self-harm and requires intervention. Then, we will have drama students re-enact those actions and train machine learning models to label them automatically.
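Sleep staging is conventionally scored in fixed 30-second epochs, each labeled with one of five stages (W, N1, N2, N3, REM). As a minimal sketch of the kind of preprocessing a deep learning pipeline for sleep stage classification typically starts from, the following splits a single-channel recording into epochs and computes a simple spectral feature; the sampling rate, the delta band, and the function names are illustrative assumptions, not details taken from the project.

```python
import numpy as np

def epoch_signal(signal, fs=100, epoch_sec=30):
    """Split a 1-D sensor recording into fixed-length epochs.

    30-second epochs are the standard scoring unit for sleep staging;
    fs=100 Hz is an illustrative sampling rate.
    """
    samples = fs * epoch_sec
    n_epochs = len(signal) // samples
    return signal[:n_epochs * samples].reshape(n_epochs, samples)

def band_power(epochs, fs=100, band=(0.5, 4.0)):
    """Mean spectral power of each epoch within a frequency band.

    The default band is the delta range, which dominates deep sleep (N3);
    a classifier would normally consume several such band features or the
    raw epochs directly.
    """
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[:, mask].mean(axis=1)
```

For example, a 5-minute recording at 100 Hz yields ten 3000-sample epochs, each reduced to one delta-power value per band queried.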

R1: Predicting sleep stages

Human activity monitoring is a well-established field, but most approaches are inconvenient and intrusive. Sleep stage monitoring likewise relies on polysomnography, which is conducted in hospitals or sleep labs and requires the patient to wear several sensors. It is not yet known whether the same level of accuracy can be achieved by machine learning algorithms using data collected from a non-intrusive sensor. This thesis will therefore explore the potential of (R1) predicting sleep stages from a single non-intrusive sensor with deep learning. More specifically:

• Relate sleep stages derived from EEG data to the sensors implemented during the research.
• Model the dynamics of the sensor readings with respect to sleep stages and stage transitions.

R2: Detecting mental and emotional states

Emotion recognition from biometrics is relevant in a wide range of application domains, including healthcare. Monitoring emotional states could help identify concerning behavior before it becomes serious. In this thesis, we will (R2) use deep learning algorithms to detect mental and emotional states that are precursors to suicide attempts, based on patient behavior and vital signs captured by non-intrusive sensors. More specifically:

• Collect data by interviewing psychiatric nurses and doctors to find out which tell-tale signs they look for while observing patients on suicide watch.
• Model patient behavior in conjunction with vital-sign monitoring to detect the critical state that requires intervention.

R3: Improving the prediction using multiple sensors

It is an open question whether the prediction of sleep stages and emotional states can be improved by combining multiple non-intrusive sensors (R3).
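One common way to combine multiple sensors, as R3 proposes, is late fusion: each sensor's model produces class probabilities, and the probabilities are averaged (optionally with per-sensor weights) before taking the final decision. The following numpy sketch shows that idea in its simplest form; the function names and the equal-weight default are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores to class probabilities, numerically stable."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def late_fusion(per_sensor_logits, weights=None):
    """Fuse predictions from several sensors by weighted probability averaging.

    per_sensor_logits: list of arrays, each (n_samples, n_classes), one per sensor.
    weights: optional per-sensor weights; defaults to an equal-weight average.
    Returns the fused class predictions and the fused probabilities.
    """
    probs = np.stack([softmax(l) for l in per_sensor_logits])  # (S, N, C)
    if weights is None:
        weights = np.full(len(per_sensor_logits), 1.0 / len(per_sensor_logits))
    fused = np.tensordot(weights, probs, axes=1)               # (N, C)
    return fused.argmax(axis=-1), fused
```

Late fusion keeps each sensor's model independent, so a sensor can be added or dropped without retraining the others; the open question in R3 is whether such a combination actually outperforms the best single sensor.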

Funding scheme:

NAERINGSPH-Nærings-phd