
IKTPLUSS-IKT og digital innovasjon

MAXSENSE - Maximizing the value of sensors data using human avatars

Alternative title: MAXSENSE - Bruk av menneskelige avatarer for å øke utnyttelsen av sensordata

Awarded: NOK 12.0 mill.

Project Number:

332848

Project Period:

2022 - 2026

In an increasingly “technologified” world, we may worry whether increased use of a home-office laptop during a lockdown has predictable, specific, negative outcomes like back pain (the cost of back pain at work in the US alone is estimated at $226 billion a year). When working out at the gym, we worry whether an exercise actually helps us get fitter or just makes us tighten up. Body-worn sensors, as well as sensors in furniture, clothes and floors, can help to measure functioning and well-being and to improve product design, work routines and exercise routines. By using accurate sensor-technology systems we can prevent injury onset, monitor and track potential degradation or improvement, and prescribe specific activities or changes to an individual's environment, reducing public health costs and improving well-being. However, without a good computer model of us, the users, the value of the data coming from sensors is limited: it is hard to say, with any degree of certainty, that sitting in a certain way is bad for your posture, or that a chair is poorly designed. MAXSENSE's goal is to create a portable, artificial-intelligence-based 3D modelling system that generates personalized human body models (or "avatars"). Utilizing advances in programming tools and integrating fundamental knowledge about human anatomy, we will build individualized avatars from minutes of 3D data recordings. Combining our new models with data from sensors on and around the user, we obtain high-value information that can be used to improve training, work situations or product design. The output of our system can be used, for example, by sport coaches targeting individual needs or by workspace designers working to make work environments safer.

Our ambitious goal is to create a portable, AI-based, marker-free 3D modelling system that generates personalized musculoskeletal models (or "avatars"). By utilizing advances in programming tools for hybrid physics-AI problems (JAX) and integrating fundamental knowledge about bone structures, tissue and skin, we build individualized avatars from minutes of 3D data recordings. Combining these models with data from sensors on and around the user yields high-value information that can be used to improve training, work situations or product design, for example by sport coaches targeting individual needs or by workspace designers making work environments safer. Computational power has recently reached a level where it is possible to create realistic, individualized models of humans: in 2019, Facebook demonstrated a 2D web-camera-based system for creating avatars for better virtual-reality experiences. We believe that such models should be refined and made broadly available for other, more humanistic purposes. The combined experience of SINTEF in sensors and the University of Oslo in modelling complex physical phenomena, together with our partners' knowledge of training (SATS), wireless sensors (Nordic Semi), product development (NxTech), furniture (Flokk), human-computer interaction (Cornell) and biomechanics (NIH), provides a unique, well-timed opportunity.
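The text names JAX as the tool for hybrid physics-AI problems. As an illustration only (this is not project code; the toy model, variable names and parameters below are invented), the core idea is that JAX can differentiate straight through a physical simulation, so model parameters can be fitted to sensor-style recordings by plain gradient descent:

```python
# Hedged sketch: fitting a parameter of a differentiable "physics" model
# to recorded data with JAX autodiff. The oscillator stands in for a far
# richer biomechanical model; nothing here comes from the actual project.
import jax
import jax.numpy as jnp

def model(freq, t):
    # Toy physical model: an oscillation x(t) = cos(freq * t).
    return jnp.cos(freq * t)

t = jnp.linspace(0.0, 2.0, 50)
true_freq = 2.0
observed = model(true_freq, t)  # stand-in for sensor recordings

def loss(freq):
    # Mean squared mismatch between simulation and recordings.
    return jnp.mean((model(freq, t) - observed) ** 2)

grad_loss = jax.grad(loss)  # exact gradient through the physics model

freq = 1.0  # deliberately wrong initial guess
for _ in range(200):
    freq = freq - 0.2 * grad_loss(freq)

print(float(freq))  # close to the true value, 2.0
```

In the project's setting the "model" would be an individualized avatar and the "recordings" would come from body-worn or environmental sensors, but the workflow — a differentiable physical model whose parameters are optimized against data — is the same.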

Funding scheme:

IKTPLUSS-IKT og digital innovasjon