Learning-based physical human robot interaction

July 30, 2012 11:00AM

302-309-1


Abstract:


In this talk, research activities on physical human-robot interaction at the Dynamic Human-Robot Interaction lab at TUM will be presented. In the context of autonomous robotic learning, unsupervised incremental learning techniques for segmentation and clustering are applied to enhance human-robot cooperation tasks over time, with a special focus on predicting the human partner's behavior. The learned skills can be further improved by kinesthetic coaching, which eliminates kinematic mapping errors and enables learning of synchronized whole-body motions. Finally, the extension toward learning physical human-robot interaction, where a robot companion is capable of handling intentional physical contact with a human user, will be discussed. Direct physical interaction with a human during task execution remains a largely unexplored challenge. In our method, communication is designed in both the symbolic and the physical domain. Communication in the symbolic domain is realized through the concept of motion primitives and interaction primitives. In the physical domain, the trajectory of the motion primitive is reshaped in real time in accordance with the human partner.
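To illustrate the last point, the sketch below shows one common way such real-time reshaping can be realized: the motion primitive supplies a reference trajectory, and a simple admittance law lets a measured human contact force deflect the executed trajectory away from it. The admittance model, the gains, and the function names here are illustrative assumptions, not the speaker's actual formulation.

```python
# Hypothetical sketch: reshaping a motion-primitive trajectory in real time
# via an admittance law  m*a = k*(x_ref - x) - d*v + f_human.
# The reference attracts the state; a human contact force deflects it.

def reshape_step(x_ref, x, v, f_human, dt, k=400.0, d=40.0, m=1.0):
    """One semi-implicit Euler step of the admittance filter (1-D)."""
    a = (k * (x_ref - x) - d * v + f_human) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

def run(traj_ref, forces, dt=0.01):
    """Replay a reference trajectory, reshaping it online by measured forces."""
    x, v = traj_ref[0], 0.0
    executed = []
    for x_ref, f in zip(traj_ref, forces):
        x, v = reshape_step(x_ref, x, v, f, dt)
        executed.append(x)
    return executed

ref = [i * 0.01 for i in range(100)]   # straight-line primitive (illustrative)
free = run(ref, [0.0] * 100)           # no contact: tracks the primitive
pushed = run(ref, [5.0] * 100)         # constant human push: trajectory yields
```

With zero contact force the executed path follows the primitive; a sustained push smoothly displaces it in the direction of the force, which is the "reshaping in accordance with the human" the abstract refers to.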

This page is maintained by Yumi Yi (ymyi@bi.snu.ac.kr).
Last update: July 30, 2012