RobotPuzzle

Duration: 2013 - 2014
Involved Scientists: Gregor Mehlmann, Kathrin Janowski, Markus Häring

Description

In this application, the human's task was to place various objects on designated fields. The robot's instructions for this were kept intentionally ambiguous so that the human would have to ask for clarification.

The eye-tracking glasses worn by the user, as well as special markers on the objects, enabled the system to detect which object the user was currently looking at. This, in turn, enabled the robot to resolve ambiguities in the user's spoken question so it could answer correctly.
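
As a rough illustration of this disambiguation step, the sketch below combines the candidate objects matching a spoken phrase with the object the user is currently looking at. The function and object names are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch only: function and object names are assumptions,
# not the project's actual implementation.

def resolve_reference(candidate_objects, gazed_object):
    """Pick the object the user most likely means.

    candidate_objects: objects matching the spoken phrase (e.g. all red pieces).
    gazed_object: the object the eye tracker currently reports as fixated, or None.
    """
    if len(candidate_objects) == 1:
        return candidate_objects[0]   # speech alone is already unambiguous
    if gazed_object in candidate_objects:
        return gazed_object           # gaze disambiguates the spoken phrase
    return None                       # still ambiguous: ask the user back

# Example: "Where does the red one go?" while looking at the red cylinder.
print(resolve_reference(["red_cube", "red_cylinder"], "red_cylinder"))  # red_cylinder
```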

Furthermore, this information allowed the robot to display gaze behavior that expressed joint attention. For example, it followed the user's gaze to the respective objects or made eye contact when the user looked at the robot.
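
A joint-attention rule of this kind could look roughly like the following sketch; the Robot stub and its methods are hypothetical stand-ins for whatever robot API the system actually used.

```python
# Illustrative sketch only: the Robot stub stands in for the real robot API.

class Robot:
    def look_at(self, target):
        print(f"[robot] looking at {target}")

    def make_eye_contact(self):
        print("[robot] making eye contact with the user")

def update_robot_gaze(robot, user_gaze_target):
    """Mirror the user's attention: share gaze on objects, return eye contact."""
    if user_gaze_target == "robot":
        robot.make_eye_contact()          # user looks at the robot -> mutual gaze
    elif user_gaze_target is not None:
        robot.look_at(user_gaze_target)   # follow the user's gaze to that object
    # no gaze data: keep the current gaze target

update_robot_gaze(Robot(), "red_cube")
update_robot_gaze(Robot(), "robot")
```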

This eye contact also served as a signal for conversational floor management. When the user asked a question, the robot delayed its answer and waited until the user looked at it directly.
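
As a rough illustration of this floor-management rule, the sketch below holds the answer back until the gaze sensor reports that the user is looking at the robot; the callback names, the timeout, and the example answer are assumptions made for the example.

```python
# Illustrative sketch only: callbacks, timeout, and the answer text are assumptions.
import time

def answer_when_user_looks(get_user_gaze_target, speak, answer, timeout=5.0):
    """Hold the answer until the user makes eye contact, or until a timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_user_gaze_target() == "robot":   # eye contact yields the floor
            break
        time.sleep(0.05)
    speak(answer)

# Example with stand-in callbacks: the "sensor" immediately reports eye contact.
answer_when_user_looks(lambda: "robot", print, "The red cube goes on field B2.")
```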

All of these behavior patterns served to establish common ground, helping to avoid misunderstandings or at least resolve them quickly.

Publications

"Modeling Grounding for Interactive Social Companions"  

Gregor Mehlmann, Kathrin Janowski, Elisabeth André

Journal "KI – Künstliche Intelligenz", Special Issue "Companion Technologies" (Springer)

"Exploring a Model of Gaze for Grounding in Multimodal HRI" 
Gregor Mehlmann, Kathrin Janowski, Markus Häring, Tobias Baur, Patrick Gebhard, Elisabeth André
16th ACM International Conference on Multimodal Interaction (ICMI 2014)

"Modeling Gaze Mechanisms for Grounding in HRI" 

Gregor Mehlmann, Kathrin Janowski, Tobias Baur, Markus Häring, Elisabeth André, Patrick Gebhard

21st European Conference on Artificial Intelligence (ECAI 2014)

