In the SILENSE project, research was carried out on new acoustic technologies, and new concepts were developed for activating and controlling devices by gestures, for data communication, and for indoor navigation based on those acoustic technologies. The concepts can be applied in different domains, e.g. mobile and smart home applications. In the project, new types of sensors and actuators (microphones, loudspeakers) were developed. These components were combined with newly developed algorithms and software to build technology demonstrators. At the start of the project, Elliptic Labs defined use cases for the demonstrators. We decided to focus our R&D efforts on two areas: a) smart home appliances and b) infrastructure in public spaces and/or retail. For the first, the demonstrator was a gesture-controlled lamp based on ultrasound signals sent and received by a transducer and microphones embedded in the device. For the second, the demonstrator consisted of an ultrasound-based people counter that determines how many people pass in and out of an entrance, complemented by motion-sensing and tracking beacons that measure the volume of foot traffic in a specific area over time. At the 1.5-year mark, we completed the design specifications of the demonstrators and developed the first prototypes, running our software on off-the-shelf components. Elliptic Labs also demonstrated the gesture-controlled lamp successfully at Mobile World Congress 2018. During the second year of the project, Elliptic Labs developed a machine learning (ML) based framework to process and analyse the received ultrasound echoes. In particular, an ultrasound recording platform was built for collecting training data and generating the ground truth needed for supervised learning.
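Counting people passing in and out of an entrance typically reduces to detecting the order in which a person triggers sensing regions on either side of the doorway. The following is a minimal sketch of that idea in Python; the two-zone setup, zone names, and state machine are illustrative assumptions, not Elliptic Labs' actual counting algorithm.

```python
# Hypothetical direction-aware crossing counter. Assumes the ultrasound
# device reports an ordered stream of zone-trigger events from two
# sensing zones ("outer" = street side, "inner" = room side).
def count_crossings(events):
    """Return (entries, exits) from an ordered stream of zone triggers.

    A crossing outer -> inner counts as an entry; inner -> outer as an
    exit. Repeated triggers of the same zone just refresh the state.
    """
    entries = exits = 0
    prev = None
    for zone in events:
        if prev == "outer" and zone == "inner":
            entries += 1
            prev = None  # crossing consumed, reset state
        elif prev == "inner" and zone == "outer":
            exits += 1
            prev = None
        else:
            prev = zone
    return entries, exits
```

In a real deployment the raw echo data would first be thresholded and debounced before producing such zone events; this sketch only captures the direction-inference step.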
The recording platform uses a 3D stereo camera together with computer vision techniques to extract the distance between the moving user and the ultrasound device. The visual data stream is synchronized with the acoustic data so that the exact time at which a certain event occurred can be marked on the recorded ultrasound data. Depending on the use case, different types of training data can be collected to infer a classification model using a simple, low-complexity neural network. The choice of neural network is based on its ability to retain temporal information and on its overall memory and processing footprint. The recording platform and the ML classifier were validated through the motion-sensing use case. A Knowles smart-speaker development board was used to embed the trained classifier; ultrasound was transmitted and received using the onboard speaker and microphone. The resulting prototype was tested and presented at SILENSE project meetings and industry conventions, and shown to potential clients. The ML framework is being extended to incorporate multiple microphones and the possibility of collecting training data for gestures; this work was continued and finalized in the third project year. SINTEF has worked together with Elliptic Labs to develop their use cases and built an optical system as a reference for their people counter. SINTEF has been the work package leader for the work package dealing with the use case descriptions and specifications for the rest of the project. SINTEF has also defined its own demonstrator, which uses ultrasound together with other miniaturised sensors to improve the measurement of body movement. SINTEF has implemented the physical demonstrator and its algorithms. The demonstrator will initially be used for gait measurement of MS patients, but this should be extended to applications in sports in future projects. SINTEF has performed extensive measurements with this demonstrator.
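The core of the ground-truth generation described above is aligning camera-detected events with ultrasound frames on a shared clock. A minimal sketch of that labeling step, assuming both streams are timestamped against a common clock; the frame rate, tolerance, and function name are assumptions for illustration, not the recording platform's actual API.

```python
import numpy as np

def label_frames(n_frames, frame_rate_hz, event_times_s, tolerance_s=0.05):
    """Return a 0/1 label per ultrasound frame for supervised training.

    A frame is labeled 1 if a camera-detected event (e.g. the hand
    reaching a target distance) occurred within `tolerance_s` of the
    frame's timestamp, 0 otherwise.
    """
    frame_times = np.arange(n_frames) / frame_rate_hz
    labels = np.zeros(n_frames, dtype=int)
    for t in event_times_s:
        labels[np.abs(frame_times - t) <= tolerance_s] = 1
    return labels
```

The resulting label vector, paired with per-frame echo features, forms the (input, target) pairs a low-complexity temporal classifier can be trained on.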
In addition, SINTEF has made a first version of a new miniaturized transducer based on piezoMEMS technology. The transducers were characterized and performed almost as specified as loudspeakers. In the third project year this transducer was incorporated into the gait analysis demonstrator to promote miniaturization and increase performance. It was finally tested in the developed software framework, coupling the transducer data with video. These transducers were also shared with some of the project partners for testing. SINTEF has also started developing a new generation of MEMS transducers with a new thin-film technology that is better matched with ultrasound microphones for high performance. AlN and PZT with low dielectric constant have been investigated, and a new MEMS design has been made. The manufacturing of these devices has been successfully finished at MiNaLab. The devices were characterized and showed good results regarding frequency response and sensitivity.
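A common way to obtain the frequency response mentioned above is to drive the transducer with a sine sweep and take the ratio of the measured and stimulus spectra. The sketch below illustrates this with synthetic data; the sample rate, sweep band, and the simulated roll-off are assumptions for illustration, not SINTEF's actual characterization setup or results.

```python
import numpy as np

fs = 192_000                        # sample rate suited to ultrasound
t = np.arange(0, 0.1, 1 / fs)       # 100 ms linear sine sweep
f0, f1 = 20_000, 80_000             # sweep band: 20-80 kHz
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
stimulus = np.sin(phase)

# Stand-in for the measured output: a linear high-frequency attenuation
# mimics a transducer roll-off. In a real measurement this signal would
# come from a calibrated reference microphone in front of the device.
response = stimulus * np.linspace(1.0, 0.5, t.size)

# Magnitude response = |FFT(measured)| / |FFT(stimulus)| in the sweep band
S = np.fft.rfft(stimulus)
R = np.fft.rfft(response)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= f0) & (freqs <= f1)
mag = np.abs(R[band]) / np.maximum(np.abs(S[band]), 1e-12)
```

Sensitivity would additionally require an absolute calibration (output pressure per volt of drive), which the relative spectrum ratio alone does not provide.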
In the SILENSE project, research is being conducted on new acoustic technologies, and new concepts are being developed for activating and controlling equipment by means of gestures, data communication, and indoor positioning based on these new acoustic technologies. The concepts under development can be used in different domains: wearable, automotive and smart home applications. The project will develop both new miniaturized sensors and actuators (microphones, loudspeakers) and new electronics for acoustic signal processing, particularly for frequencies in the ultrasound range. The components can then be combined, and with the help of new algorithms, which will also be developed, and tailored software, the technology will be demonstrated in the various application areas mentioned at the end of the project.
In the project, Elliptic Labs will work on new algorithms, use cases and software, and will contribute to the demonstrator work.
SINTEF will work together with Elliptic Labs and the other partners on algorithms, use cases and software. In addition, SINTEF will contribute new miniaturized transducers based on piezoMEMS technology, as well as demonstrators. MEMS stands for micro-electromechanical systems.