Some people are unable to communicate verbally due to cognitive challenges. Even when verbal language is missing, they still communicate, to varying extents, by means of sound, facial expressions and gestures. The goal of this project is to use passive sensors to observe the person and interpret these expressions into meaningful information by means of advanced machine-learning models. The sensors to be used include a 2D image sensor, 3D/depth sensors and a microphone. The system shall be trained to interpret the emotional expressions of a person in real time. This will offer a unique tool for communication between clients without verbal language and their caretakers. It is a challenging project that can potentially have a great impact on many people's quality of life.
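Purely as an illustration of the intended mode of operation, the sketch below shows how a real-time capture-and-interpret loop could be structured, using a webcam and a microphone as stand-ins for the planned 2D/3D sensors. The `classify_expression` function is a hypothetical placeholder for the trained models, not part of the project specification.

```python
# Minimal sketch of a real-time sensing loop, assuming a webcam and
# microphone as stand-ins for the 2D image, depth and audio sensors.
import cv2
import numpy as np
import sounddevice as sd

def classify_expression(frame: np.ndarray, audio: np.ndarray) -> str:
    """Hypothetical placeholder: map one video frame and a short audio
    clip to an emotion label via the trained multimodal models."""
    return "neutral"  # illustrative output label

camera = cv2.VideoCapture(0)   # 2D image sensor
SAMPLE_RATE = 16_000           # microphone sample rate (Hz)

try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        # Record 0.5 s of audio alongside the current frame.
        audio = sd.rec(int(0.5 * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                       channels=1, dtype="float32")
        sd.wait()
        label = classify_expression(frame, audio)
        print(f"interpreted expression: {label}")
finally:
    camera.release()
```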
-
People who are unable to communicate verbally need augmentative and alternative communication (AAC) to be understood and to interact with their surroundings.
To improve the quality of life of these people, and to facilitate the work of guardians and other caregivers, there is a strong need for a solution that can translate non-verbal communication, consisting of sound, facial expressions and body gestures, into something understandable.
Today's methodology for translating the expressions of persons in need of AAC is based on written notes describing their expressions and signs (indexical signs), together with hypotheses about how to interpret and respond to them. Caregivers often assist many different people, making it extremely difficult to learn the repertoire of expressions of each individual client. This results in misinterpretations that lead to frustration, often violent behavior, and resignation. Looking up written notes takes time, and the caregiver's response time affects how the communication is perceived.
This innovation aims to develop a system that records sound, facial expressions and body gestures and uses machine learning to interpret the compound expression in real time. The goal is to enable a quick and adequate response from the caregiver. The innovation shall be usable in everyday situations, both indoors and outdoors. This will ease the everyday life of caregivers, improve the quality of life of the person in question and most likely contribute to a positive development of cognitive skills and an extended expression repertoire. The solution will thus support the UN Convention on the Rights of Persons with Disabilities.
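One plausible way to interpret such a compound expression is late fusion: each modality (sound, face, gesture) is encoded separately and the results are combined before classification. The sketch below assumes per-modality feature vectors are already extracted; all dimensions and the label count are illustrative assumptions, not project specifications.

```python
# A possible late-fusion classifier for compound expressions, assuming
# pre-extracted feature vectors per modality (audio, face, gesture).
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, face_dim=256, gesture_dim=64,
                 num_labels=6):
        super().__init__()
        # One small encoder per modality.
        self.audio = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.face = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.gesture = nn.Sequential(nn.Linear(gesture_dim, 64), nn.ReLU())
        # Fuse by concatenation, then classify the compound expression.
        self.head = nn.Linear(3 * 64, num_labels)

    def forward(self, audio, face, gesture):
        fused = torch.cat([self.audio(audio), self.face(face),
                           self.gesture(gesture)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(1, 128), torch.randn(1, 256), torch.randn(1, 64))
print(logits.argmax(dim=-1))  # predicted expression class index
```

Late fusion keeps the modalities independent until the final step, which would also allow the system to degrade gracefully when one sensor is unavailable, for example outdoors.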
The innovation is unique in the market and challenges the state of the art in image sensor usage and machine learning, and hence requires extensive research. The final product will have a great impact on the company's growth and financial development.
The project will be run in close cooperation with Sintef and Norsk Regnesentral.