Visualizing Robot Intent for Object Handovers with Augmented Reality
Newbury, Rhys, Cosgun, Akansel, Crowley-Davis, Tysha, Chan, Wesley P., Drummond, Tom, and Croft, Elizabeth A.

Publication: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Abstract: Humans are highly skilled at communicating their intent for when and where a handover will occur. However, even state-of-the-art robotic implementations of handovers generally lack such communication skills. This study aims to visualize the internal state and intent of robots for Human-to-Robot Handovers using Augmented Reality. Specifically, we visualize 3D models of the object and the robotic gripper to communicate the robot’s estimate of the object’s location and the pose in which the robot intends to grasp it. We evaluated this design in a user study with 16 participants, in which each participant handed over a cube-shaped object to the robot 12 times. Results show that visualizing robot intent using augmented reality substantially improves the users’ subjective experience of the handovers. Results also indicate that the benefit of augmented reality is even more pronounced for the perceived safety and fluency of the interaction when the robot makes errors in localizing the object.

Bibtex:

@inproceedings{newbury2021visualizing,
  author = {Newbury, Rhys and Cosgun, Akansel and Crowley-Davis, Tysha and Chan, Wesley P. and Drummond, Tom and Croft, Elizabeth A.},
  booktitle = {2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)},
  title = {Visualizing Robot Intent for Object Handovers with Augmented Reality},
  year = {2022},
  pages = {1264-1270},
  doi = {10.1109/RO-MAN53752.2022.9900524},
}