Grasping of Unknown Objects Using Deep Convolutional Neural Networks Based on Depth Images
Schmidt, Philipp, Vahrenkamp, Nikolaus, Wächter, Mirko, and Asfour, Tamim

Publication: 2018 IEEE International Conference on Robotics and Automation (ICRA)

Abstract: We present a data-driven, bottom-up, deep learning approach to robotic grasping of unknown objects using Deep Convolutional Neural Networks (DCNNs). The approach uses depth images of the scene as its sole input for synthesis of a single-grasp solution during execution, adequately portraying the robot’s visual perception during exploration of a scene. The training input consists of precomputed high-quality grasps, generated by analytical grasp planners, accompanied by rendered depth images of the training objects. In contrast to previous work on applying deep learning techniques to robotic grasping, our approach is able to handle full end-effector poses and therefore approach directions other than the view direction of the camera. Furthermore, the approach is not limited to a certain grasping setup (e.g., a parallel-jaw gripper) by design. We evaluate the method regarding its force-closure performance in simulation using the KIT and YCB object model datasets as well as a big-data grasping database. We demonstrate the performance of our approach in qualitative grasping experiments on the humanoid robot ARMAR-III.
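
To make the idea concrete, below is a minimal, illustrative sketch in PyTorch of a grasp-scoring DCNN in the spirit of the abstract: a depth image crop together with a full 6-DoF end-effector pose is scored for force closure, so approach directions other than the camera view direction can be evaluated. The layer sizes, the quaternion pose encoding, and the class name GraspScoreNet are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class GraspScoreNet(nn.Module):
    """Hypothetical sketch: scores (depth crop, 6-DoF grasp pose) pairs."""
    def __init__(self):
        super().__init__()
        # Convolutional trunk over a single-channel depth crop (e.g. 96x96).
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        )
        # Full end-effector pose: 3-D position plus an orientation encoding
        # (here a quaternion, 7 values total -- an assumed encoding).
        self.pose_mlp = nn.Sequential(nn.Linear(7, 64), nn.ReLU())
        # Fused head predicts a force-closure probability for this grasp.
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, depth, pose):
        img = self.features(depth).flatten(1)  # (B, 64) image embedding
        p = self.pose_mlp(pose)                # (B, 64) pose embedding
        return torch.sigmoid(self.head(torch.cat([img, p], dim=1)))

# Usage: score a batch of candidate grasps and keep the best-rated one.
net = GraspScoreNet()
depth = torch.rand(8, 1, 96, 96)  # rendered or sensed depth crops
pose = torch.rand(8, 7)           # candidate poses (position + quaternion)
scores = net(depth, pose)
best = scores.argmax()

Training such a network would follow the setup the abstract describes: supervision comes from precomputed analytical grasps paired with rendered depth images, with force-closure quality as the label.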

BibTeX:

@inproceedings{Schmidt2018,
  author = {Schmidt, Philipp and Vahrenkamp, Nikolaus and Wächter, Mirko and Asfour, Tamim},
  booktitle = {2018 IEEE International Conference on Robotics and Automation (ICRA)},
  title = {Grasping of Unknown Objects Using Deep Convolutional Neural Networks Based on Depth Images},
  year = {2018},
  pages = {6831--6838},
  doi = {10.1109/ICRA.2018.8463204}
}