If we are to develop robust robot-based automation, we need solid visual and tactile perception, equipping robots with far better perception capabilities than they possess today. Only then could robots aim to perform complex manipulation tasks that are currently carried out exclusively by humans. Humans are naturally equipped with remarkable visual and tactile capabilities, outstanding manipulative dexterity, and an ability to learn new tasks rapidly in a way that is still beyond our understanding. All these capabilities, and specifically the combination of visual and tactile perception, enable humans to learn new and complex manipulation skills.
Robots, on the other hand, lack such visual and tactile perception skills, and this is especially evident during the manipulation of 3D compliant objects, which pose an additional challenge compared to rigid objects. Robotic manipulation of compliant objects is limited not only by the obvious practical difficulties such manipulation entails, but also by fundamental scientific and technological challenges in visual and tactile perception, control, and learning.
The BIFROST project addresses these challenges by developing a visual-tactile perception, control, and learning framework that enables robots to manipulate 3D compliant objects in a set of specific scenarios. In doing so, the project will generate ground-breaking research and new knowledge as a foundation for future robot technology capable of addressing challenging real-world robotic manipulation in domains regarded as critical societal functions, such as food/seafood processing, which involves the robotic manipulation of compliant food objects.
BIFROST involves the development of a novel visual-tactile perception and control framework for the advanced robotic manipulation of 3D compliant objects. The ability of robots to manipulate such objects remains key to the advancement of robotic manipulation technologies. Despite tremendous progress in robotic manipulation, modern robots are generally only capable of manipulating rigid objects. As with humans, in order to plan and perform complex manipulation tasks on 3D compliant objects, robots need to "see" and "feel by touch" the objects they manipulate, and to understand their shape, compliancy, environment and context. Current visual-only robotic manipulation suffers from an inability to perceive 3D information due to real-world physical occlusions and intra-object or self-occlusions.
BIFROST will enable the simultaneous perception of 3D object shapes and their compliancy by means of an RGB-D visual sensor, augmented with touch through active exploration using an image-based tactile sensor for physically occluded objects that are inaccessible to the visual sensor. Based on visual and tactile perception, BIFROST will achieve active manipulation and shape servoing of 3D compliant objects, in which robot control tasks are formulated in sensor space by mapping raw sensor observations to actions. To enable the learning of complex manipulation tasks and active deformation, we will develop a high-level multi-step reasoning framework for the automatic selection of action types designed to achieve desired states and shapes.
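The sensor-space formulation above can be illustrated with a minimal sketch of one shape-servoing control step. This is not the project's actual controller; it assumes a simple linear model in which a hypothetical pseudo-inverse of a deformation Jacobian (`J_pinv`) maps the error between observed and desired shape features to a robot velocity command:

```python
import numpy as np

def shape_servo_step(observed, desired, J_pinv, gain=0.5):
    """One illustrative shape-servoing step: map the shape-feature error,
    expressed in sensor space, to a robot action via a linear model.
    All quantities here are hypothetical placeholders."""
    error = desired - observed          # error lives in sensor (feature) space
    action = gain * (J_pinv @ error)    # linear sensor-to-action mapping
    return action

# Toy example: a 4-D shape feature driving a 3-DoF action.
rng = np.random.default_rng(0)
J_pinv = rng.standard_normal((3, 4))    # stand-in for an estimated Jacobian pseudo-inverse
observed = np.array([0.1, 0.2, 0.0, 0.4])
desired = np.array([0.0, 0.2, 0.1, 0.4])
cmd = shape_servo_step(observed, desired, J_pinv)
print(cmd.shape)  # (3,)
```

In a real system the Jacobian relating object deformation to robot motion is unknown and must be estimated online or learned, which is precisely where the perception and learning components of the framework come in; the sketch only shows the shape of the control loop.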
Like the rainbow bridge of Norse mythology, BIFROST connects two worlds: inspired by humans' innate perception, understanding, manipulation, and learning, the aim is to develop similar capabilities in robots.