

GentleMAN-Gentle and Advanced Robotic Manipulation of 3D Compliant Objects

Alternative title: GentleMAN-Skånsom og Avansert Robotisert Manipulasjon av 3D Føyelige Objekter

Awarded: NOK 16.0 mill.

Humans are naturally equipped with remarkable visual and tactile capabilities, outstanding manipulative dexterity, and an ability to learn new tasks rapidly in a way that continues to defy our understanding. If we want robots to perform complex manipulation tasks, we need to develop new technology that equips them with vision to 'see' the scene and the objects to be manipulated, tactile sensing that enables them to feel those objects, and a 'brain' that combines these senses to learn. The GentleMAN project addresses these challenges by equipping robots with advanced 3D vision, tactile sensing, and a 'brain' that uses artificial intelligence to learn new manipulation skills reproducing human-like dexterity and fine motor skills. The 'brain' we are developing, based on robot learning, will give robots new capabilities, enabling them to perform complex manipulation tasks while working alongside humans.

On grasping, we have finalized our 4-DoF grasping framework, which combines deep reinforcement learning, a generative adversarial network (GAN), and image-based visual servoing (IBVS) in a single control scheme. The framework has been validated on four benchmarks, including one for deformable objects, and achieves a closed-loop grasping success rate of over 90%, making it one of the grasping frameworks with the highest reported success rates.

On image-based tactile sensing, the project has developed a fingertip-like, omnidirectional, camera-based tactile sensor capable of producing depth maps of objects deforming the sensor's surface. The sensor is designed so that it can easily be reconfigured and attached to grippers with different numbers of degrees of freedom; the aim of this work is to help roboticists quickly and easily customize high-resolution tactile sensors to the needs of their robotic systems. We have also investigated how to establish stable grasps for inferring object properties using fingertip tactile sensors. Empirical experiments have produced new insights for future fingertip tactile sensor usage and design: a single tactile sensor, rather than a pair, is sufficient for symmetric objects and interaction motions, and dense taxels are beneficial for inferring texture-related adjectives but can be distracting for non-texture-related ones.

Finally, we have developed a control approach that relies on a coarse model of the soft object being manipulated. The model consists of a 3D mesh whose mechanical behavior is represented by a mass-spring model (MSM), chosen because it can be evaluated in real time. From this coarse model we derived the analytical expression of a controller that indirectly moves a feature point on the soft object to a desired 3D position by acting, through a robotic manipulator, on a distant manipulated contact point. Because the MSM only approximates the object's behavior, which in practice can cause the model to drift away from the real object, the model is realigned online by visually tracking the object's 3D deformation.
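To make the indirect-positioning idea concrete, the following is a minimal, self-contained sketch of that kind of model-based control loop. It is not the project's implementation: the 3D mesh is replaced by a toy mass-spring chain, and the analytical controller is replaced by a finite-difference Jacobian with a damped least-squares update. Node count, stiffness, and all numerical parameters are illustrative assumptions.

# Illustrative sketch, not the GentleMAN implementation: indirectly position a
# feature point on a soft object modelled as a coarse mass-spring chain by
# moving a distant grasped contact point.
import numpy as np

N_NODES = 6        # coarse "mesh": a chain of nodes in 3D
REST_LEN = 0.1     # spring rest length [m]
STIFFNESS = 50.0   # spring stiffness [N/m]

def relax(nodes, manip_idx, manip_pos, iters=800, step=0.01):
    """Quasi-static mass-spring relaxation with node 0 anchored and the
    manipulated (grasped) node pinned at the robot's contact position."""
    x = nodes.copy()
    x[manip_idx] = manip_pos
    for _ in range(iters):
        forces = np.zeros_like(x)
        for i in range(N_NODES - 1):
            d = x[i + 1] - x[i]
            dist = np.linalg.norm(d) + 1e-9
            f = STIFFNESS * (dist - REST_LEN) * d / dist
            forces[i] += f
            forces[i + 1] -= f
        forces[0] = 0.0          # anchored end stays put
        forces[manip_idx] = 0.0  # held by the robot
        x += step * forces
    return x

def feature_jacobian(nodes, manip_idx, manip_pos, feat_idx, eps=1e-3):
    """Finite-difference estimate of how the feature point moves when the
    manipulated contact point is displaced (a numerical stand-in for the
    analytical expression derived in the project)."""
    J = np.zeros((3, 3))
    base = relax(nodes, manip_idx, manip_pos)[feat_idx]
    for k in range(3):
        dp = np.zeros(3)
        dp[k] = eps
        pert = relax(nodes, manip_idx, manip_pos + dp)[feat_idx]
        J[:, k] = (pert - base) / eps
    return J, base

# Straight chain along x; the robot grasps the last node and pre-tensions the
# chain, and we want an interior feature node to reach a nearby 3D target.
nodes = np.zeros((N_NODES, 3))
nodes[:, 0] = np.arange(N_NODES) * REST_LEN
manip_idx, feat_idx = N_NODES - 1, 3
manip_pos = nodes[manip_idx] + np.array([0.05, 0.0, 0.0])
target = relax(nodes, manip_idx, manip_pos)[feat_idx] + np.array([0.0, 0.03, 0.01])

for _ in range(20):
    J, feat = feature_jacobian(nodes, manip_idx, manip_pos, feat_idx)
    err = target - feat
    if np.linalg.norm(err) < 1e-3:
        break
    # Damped least-squares step applied to the manipulated (grasped) point
    manip_pos += np.linalg.solve(J.T @ J + 1e-4 * np.eye(3), J.T @ err)

feat = relax(nodes, manip_idx, manip_pos)[feat_idx]
print("final feature-point error [m]:", np.linalg.norm(target - feat))

In the project itself, the role of this numerical Jacobian is played by the analytical controller derived from the MSM, and the model drift that such an approximation introduces is corrected online by visually tracking the object's 3D deformation.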

GentleMAN will result in a novel robot control and learning framework enabling real-world manipulation of 3D compliant objects. The framework will combine visual and force/tactile sensing modalities with multi-modal learning models, through a careful balance and tighter integration between the component responsible for object localization and pose estimation, based on visual information, and the component responsible for manipulation, based on force/tactile information.

The robotic manipulation of 3D compliant objects remains a key, yet relatively under-researched, field of study. Most current approaches to robotic manipulation focus on rigid objects; they are primarily vision-based and either require a 3D model of the object or attempt to build one. The interaction of a robot with 3D compliant objects is one of the greatest challenges facing robotics today, owing to complex aspects such as shape deformation during manipulation and the need for real-time perception of the deformation and compliance of the objects. To these is added the coordination of the visual, force, and tactile sensing required to control and accomplish specific manipulation tasks. The challenges become even more acute if the objects are slippery, made of soft tissue, or have irregular 3D shapes. Such objects are common in the agriculture, manufacturing, food, ocean-space, health, and other sectors, in both industrial and non-industrial settings. GentleMAN addresses these challenges by focusing on providing robots with advanced manipulation skills that reproduce human-like movements and fine motor skills. Robots will thus learn how to induce and apply the necessary manipulative forces, supported by occlusion-resilient vision-based control, real-time 3D deformation tracking, and a shape-servoing strategy. A highly qualified, interdisciplinary consortium consisting of SINTEF, NTNU, NMBU, INRIA, MIT, and QUT has been assembled to conduct the proposed research.
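For readers unfamiliar with the visual-servoing building block referred to above (IBVS in the grasping results and vision-based control in the planned framework), the snippet below is a generic, textbook-style image-based visual servoing step using the classical point-feature interaction matrix. It is a hedged illustration under assumed camera and feature parameters, not the GentleMAN controller.

# Generic IBVS step (illustrative assumption, not the project's controller):
# map the image-feature error to a commanded camera twist.
import numpy as np

def interaction_matrix(points_img, depths):
    """Stack the classical 2x6 point-feature interaction matrices relating
    image-point velocity to the camera twist (vx, vy, vz, wx, wy, wz)."""
    rows = []
    for (x, y), Z in zip(points_img, depths):
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_step(current, desired, depths, gain=0.5):
    """One closed-loop IBVS update: a camera twist that drives the observed
    image features toward their desired positions."""
    error = (np.asarray(current) - np.asarray(desired)).ravel()
    L = interaction_matrix(current, depths)
    return -gain * np.linalg.pinv(L) @ error

# Example: four normalized image points observed at roughly 0.5 m depth,
# offset from where the desired grasp view would place them.
desired = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
current = [(u + 0.03, v + 0.02) for (u, v) in desired]
twist = ibvs_step(current, desired, depths=[0.5] * 4)
print("commanded camera twist:", np.round(twist, 4))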

Publications from Cristin

No publications found

Funding scheme:

IKTPLUSS-IKT og digital innovasjon