
IKTPLUSS-IKT og digital innovasjon

GentleMAN-Gentle and Advanced Robotic Manipulation of 3D Compliant Objects

Alternative title: GentleMAN-Skånsom og Avansert Robotisert Manipulasjon av 3D Føyelige Objekter

Awarded: NOK 16.0 mill.

Humans are naturally equipped with remarkable visual and tactile capabilities, outstanding manipulative dexterity, and an ability to learn new tasks rapidly in a way that continues to defy our understanding. We learn new and complex manipulation tasks by combining our senses with previous experience, and by learning from demonstration and self-exploration. If we want robots to perform complex manipulation tasks, we need to develop new technology that equips them with vision to 'see' the scene and the objects to be manipulated, tactile sensing that enables them to feel those objects, and a 'brain' that combines these senses to achieve new learning. The GentleMAN project addresses these challenges by equipping robots with advanced 3D vision, tactile sensing, and a 'brain' that uses artificial intelligence to learn new manipulation skills reproducing human-like dexterity and fine motor skills. This 'brain', developed using robot learning, will give robots new capabilities, enabling them to perform complex manipulation tasks while working alongside humans.

Since the last report, we have been investigating 3D shape completion and reconstruction of volumetric objects from a single view, so that a robot arm controller equipped with 3D vision can infer the full 3D shape of an object during the manipulation stage. From a single viewpoint, an object may be only partially observable by a visual sensor, for example due to self-occlusion. Our shape completion method is based on deep neural fields: using stochastic gradient descent, it searches the latent space for the shape that best conforms to the single-view observation data (a sketch of this search is given after this report).

We have also developed a framework for tracking the deformation of soft objects with an RGB-D camera by exploiting a physics-based model of the object. A coarse 3D template of the tracked object is the only prior information the method requires, and it does not rely on accurate knowledge of the object's material properties. The method integrates computer-vision-based tracking with a physics-based representation of deformation, without requiring expensive numerical optimization of nonlinear error terms (a toy analogue appears below). It has been validated on both synthetic data (with ground truth) and real data.

On the sensing side, we have designed an early gripper-finger prototype based on an image-based tactile sensor to characterize soft and granular media, as a step towards tactile sensing of volumetric compliant objects. The method combines high-resolution tactile imaging with soft mechanical vibration to characterize the media. A main design goal for the new sensor is a human-finger-like form factor, so that it can easily be fitted to existing robot hands.

In robot control, learning, and manipulation of compliant objects, our research has centered on the grasping stage, that is, the interaction between the gripper and the compliant object: learning new models and training new grasping agents. In particular, we have designed and formulated reward functions that reward completing the grasp successfully without degrading the quality of the object or product at hand (an illustrative reward sketch is given below). These agents have so far been trained primarily in simulation, and we are now working on transferring the learned methods to the real world.
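To make the latent-space search concrete, here is a minimal sketch of such a shape-completion step in PyTorch. It assumes a pretrained DeepSDF-style decoder that maps a latent code and a 3D point to a signed distance; the decoder, its signature, and all hyperparameters are illustrative assumptions, not the project's actual implementation.

```python
import torch

def complete_shape(decoder, points, sdf_obs, latent_dim=256, iters=500, lr=5e-3):
    """Search the latent shape space for the code whose decoded SDF best
    conforms to a partial, single-view observation (assumed decoder API).

    decoder : pretrained module, (N, latent_dim) x (N, 3) -> (N, 1) signed distances
    points  : (N, 3) tensor of 3D points sampled from the depth observation
    sdf_obs : (N, 1) tensor of observed signed-distance values (0 on the surface)
    """
    latent = torch.zeros(latent_dim, requires_grad=True)   # start from the mean shape
    opt = torch.optim.SGD([latent], lr=lr)                 # gradient descent on the code
    for _ in range(iters):
        opt.zero_grad()
        # subsampling `points` each iteration would make this properly stochastic
        pred = decoder(latent.expand(points.shape[0], -1), points)
        loss = torch.nn.functional.l1_loss(pred, sdf_obs)  # data term
        loss = loss + 1e-4 * latent.norm() ** 2            # Gaussian prior on the code
        loss.backward()
        opt.step()
    return latent.detach()  # decode on a dense grid to reconstruct the full 3D shape
```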
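The deformation-tracking idea, fitting a coarse template to observed points with a physical model providing the regularization rather than an expensive nonlinear optimizer, can be illustrated with a toy mass-spring analogue. Everything below (the mass-spring model, gains, damping) is an illustrative stand-in for the project's actual physics-based model.

```python
import numpy as np

def track_step(verts, edges, rest_len, obs_pts, k_spring=50.0, k_data=5.0,
               dt=0.01, steps=10):
    """One tracking update: deform a coarse template mesh toward an observed
    point cloud while a simple mass-spring model keeps it physically plausible.

    verts    : (V, 3) template vertex positions
    edges    : list of (i, j) vertex index pairs forming springs
    rest_len : rest length of each spring
    obs_pts  : (M, 3) points observed by the RGB-D sensor
    """
    vel = np.zeros_like(verts)
    for _ in range(steps):
        f = np.zeros_like(verts)
        # data term: pull each vertex toward its nearest observed point
        d = obs_pts[:, None, :] - verts[None, :, :]        # (M, V, 3) offsets
        nn = np.argmin((d ** 2).sum(-1), axis=0)           # nearest obs per vertex
        f += k_data * (obs_pts[nn] - verts)
        # internal term: springs resist deviation from their rest lengths
        for (i, j), l0 in zip(edges, rest_len):
            dij = verts[j] - verts[i]
            L = np.linalg.norm(dij) + 1e-9
            fs = k_spring * (L - l0) * dij / L
            f[i] += fs
            f[j] -= fs
        vel = 0.9 * vel + dt * f                           # damped explicit integration
        verts = verts + dt * vel
    return verts
```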
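As an illustration of the kind of reward shaping described above, the sketch below rewards a completed grasp while penalizing slip, excessive squeezing, and object deformation. All terms, thresholds, and weights are hypothetical, not the project's actual reward design.

```python
def grasp_reward(lifted, slip, contact_forces, deformation, f_max=8.0, d_max=0.02):
    """Illustrative reward for grasping a compliant object: reward task success,
    penalize slip, excess contact force, and quality-degrading deformation.

    lifted         : bool, object held above the target height
    slip           : float in [0, 1], estimated tangential slip at the contacts
    contact_forces : iterable of per-finger normal forces [N]
    deformation    : float, max surface displacement of the object model [m]
    """
    r = 10.0 if lifted else 0.0                             # sparse success term
    r -= 2.0 * slip                                         # discourage slip/re-grasps
    r -= sum(max(0.0, f - f_max) for f in contact_forces)   # squeeze penalty
    r -= 50.0 * max(0.0, deformation - d_max)               # degradation penalty
    return r
```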
In the next phase, the focus will be on increasing the robustness of the vision-based sensing methods developed so far in the project, as a stepping stone to multi-modal learning and control, as well as on trials with tactile sensors and compliant objects. We will also focus on training and learning novel models for grasping 3D compliant objects using multi-modal integration. Finally, the agents trained in simulation will be fine-tuned, validated, and adapted to the real world.

GentleMAN will result in a novel robot control and learning framework enabling real-world manipulation of 3D compliant objects. The framework will be based on visual and force/tactile sensing modalities and multi-modal learning models, carefully balancing and tightly integrating the components responsible for object localization and pose estimation, based on visual information, with those responsible for manipulation, based on force/tactile information.

The robotic manipulation of 3D compliant objects remains a key, yet relatively under-researched, field of study. Most current approaches to robotic manipulation focus on rigid objects; they are primarily vision-based and either require a 3D model of the object or attempt to build one. The interaction of a robot with 3D compliant objects is one of the greatest challenges facing robotics today, owing to complex aspects such as shape deformation during manipulation and the need for real-time perception of the deformation and compliance of the objects. Added to these is the coordination of the visual, force, and tactile sensing required to control and accomplish specific manipulation tasks. The challenges become even more acute when the objects are slippery, made of soft tissue, or irregularly shaped. Such objects are common in the agriculture, manufacturing, food, ocean-space, health, and other sectors, in both industrial and non-industrial settings.

GentleMAN addresses these challenges by focusing on providing robots with advanced manipulation skills that reproduce human-like movements and fine motor skills. Robots will thus learn how to induce and apply the necessary manipulative forces while generating occlusion-resilient vision control, real-time 3D deformation tracking, and a shape-servoing strategy (sketched below). A highly qualified, interdisciplinary consortium consisting of SINTEF, NTNU, NMBU, INRIA, MIT and QUT has been assembled to conduct the proposed research.
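Shape servoing is commonly formulated as a resolved-rate control law driven by the error between tracked and desired surface points through a deformation Jacobian, typically estimated numerically online. The sketch below shows that general formulation; the locally linear model and the gain are assumptions for illustration, not the project's published controller.

```python
import numpy as np

def shape_servo_step(current_pts, target_pts, jacobian, gain=0.5):
    """One shape-servoing step: compute an end-effector velocity command that
    drives the tracked object surface towards a desired shape.

    current_pts : (V, 3) tracked surface points on the deformable object
    target_pts  : (V, 3) desired surface points
    jacobian    : (3V, 6) deformation Jacobian mapping the 6-DoF end-effector
                  twist to surface-point velocities (assumed locally linear)
    """
    err = (target_pts - current_pts).reshape(-1)   # stacked shape error, (3V,)
    v = gain * np.linalg.pinv(jacobian) @ err      # least-squares twist via pseudoinverse
    return v                                       # 6-DoF command for the robot controller
```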

