
IKTPLUSS-IKT og digital innovasjon

Multi-Sensor Data Timing, Synchronization and Fusion for Intelligent Robots

Alternative title: Synkronisering, tidsstempling og fusjonering av data fra multiple sensorer for intelligente roboter

Awarded: NOK 12.0 mill.

Autonomous robots and vehicles need to navigate with high accuracy and safety. This means that they need to build an awareness of their situation and environment by processing data from many different sensors. This must be done very quickly and repeated many times per second in order to capture changes caused by the robot's own motion and by changes in the environment. For a robot that maps its environment with, for example, a camera that takes snapshots 100 times per second, the accuracy of the map depends on knowing the robot's position and the camera's location at the instant of each snapshot.

Developing sensor fusion systems that can solve these problems is demanding and costly because of the hardware development and integration involved. The state of the art is to develop proprietary solutions that must be altered each time one of the sensors is modified. Our objective is to create the foundation for a high-precision navigation and sensor fusion platform, with hardware and software that are independent of the sensors chosen. This will lead to faster and more profitable development of autonomous systems.

In this project, NTNU will collaborate with leading industrial players who will ensure that the research is driven by industrial needs. SentiSystems is a spin-off company from NTNU that is commercializing sensor timing and fusion solutions. Maritime Robotics provides unmanned aerial and surface vehicles for mapping and monitoring. Zeabuz is developing autonomous zero-emission urban passenger ferries.
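The mapping example above hinges on time alignment: every camera exposure must be matched with the navigation solution valid at that instant. The following minimal sketch (not the project's implementation) shows the idea under simplifying assumptions: a 2-D position track, both data streams timestamped on the same clock, and plain linear interpolation. All names and values are illustrative.

```python
import numpy as np

def pose_at_camera_times(nav_t, nav_xy, cam_t):
    """Linearly interpolate 2-D position at camera exposure timestamps.

    nav_t  : (N,) navigation timestamps [s], strictly increasing
    nav_xy : (N, 2) positions from the navigation system [m]
    cam_t  : (M,) camera exposure timestamps [s], on the same clock as nav_t
    """
    x = np.interp(cam_t, nav_t, nav_xy[:, 0])
    y = np.interp(cam_t, nav_t, nav_xy[:, 1])
    return np.column_stack([x, y])

# Example: navigation solution at 200 Hz, camera at 100 Hz, over one second.
nav_t = np.arange(0.0, 1.0, 1 / 200)
nav_xy = np.column_stack([2.0 * nav_t, 0.5 * nav_t])   # constant-velocity track (assumed)
cam_t = np.arange(0.0, 1.0, 1 / 100) + 0.0012          # small, known camera time offset (assumed)
print(pose_at_camera_times(nav_t, nav_xy, cam_t)[:3])
```

If the two streams are not on a common clock, or the offset is unknown or drifting, this interpolation breaks down; that is precisely the timing and synchronization problem the project addresses.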

The intelligence of autonomous systems and service robots critically depends on the ability to fuse data from multiple sensors for perception, navigation and remote sensing tasks such as mapping, monitoring or surveillance. The precision, data rates, cost, size, weight and power consumption of sensor technologies are improving at a rapid pace. This includes sensors such as inertial measurement units (IMU), global navigation satellite systems (GNSS), cameras, radar, LiDAR, ultrasound ranging, ultrawideband (UWB) radio ranging, and magnetometers. However, the data from such sensors must be fused into user-centric quantities such as the robot's position coordinates, velocity vector, orientation angles, maps of the environment, and the states of static or dynamic objects in the environment. This first requires a sequence of sensor data processing steps (the front-end) before the data is available for use in a multi-sensor data fusion algorithm (the back-end). The standard choice of back-end algorithm is a nonlinear least-squares estimator or a Kalman filter that extracts estimates of the user-centric variables of interest (a minimal example is sketched after the work packages below). Artificial intelligence (AI) systems with machine learning (ML) algorithms may be part of the front-end processing chain to extract features, such as landmarks, from raw images or LiDAR point clouds.

These challenges are addressed by the project through the following main work packages:
- Hardware architecture for sensor timing and low-latency data processing
- SW/HW co-design, middleware and network synchronization
- Autonomous ferry navigation case study
- Robotic mapping and surveillance case studies
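To make the back-end idea concrete, here is a minimal sketch of a linear Kalman filter that fuses noisy position fixes (GNSS-like) into a position/velocity estimate. It is an illustration only, not the project's algorithm: the 1-D constant-velocity model, noise levels and simulated measurements are all assumptions chosen for brevity.

```python
import numpy as np

dt = 0.1                                   # fusion step [s] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = np.diag([1e-4, 1e-2])                  # process noise covariance (assumed)
R = np.array([[0.25]])                     # measurement noise, 0.5 m std (assumed)

x = np.array([0.0, 0.0])                   # state: [position, velocity]
P = np.eye(2)                              # state covariance

rng = np.random.default_rng(0)
for k in range(100):
    # Predict: propagate state and covariance one step forward in time.
    x = F @ x
    P = F @ P @ F.T + Q

    # Simulated noisy position fix of a target moving at 1 m/s.
    z = np.array([1.0 * (k + 1) * dt + rng.normal(0.0, 0.5)])

    # Update: correct the prediction with the new measurement.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("estimated position, velocity:", x)  # velocity estimate approaches ~1 m/s
```

A real multi-sensor back-end extends this pattern with higher-dimensional states, nonlinear models (hence nonlinear least-squares or extended/unscented variants), and measurements arriving asynchronously from many sensors, which is why accurate timestamping of each measurement matters.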

Funding scheme:

IKTPLUSS-IKT og digital innovasjon