6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
Mousavian, Arsalan, Eppner, Clemens, and Fox, Dieter

Publication: IEEE/CVF International Conference on Computer Vision

Abstract: Generating grasp poses is a crucial component for any robot object manipulation task. In this work, we formulate the problem of grasp generation as sampling a set of grasps using a variational autoencoder and assess and refine the sampled grasps using a grasp evaluator model. Both Grasp Sampler and Grasp Refinement networks take 3D point clouds observed by a depth camera as input. We evaluate our approach in simulation and real-world robot experiments. Our approach achieves 88% success rate on various commonly used objects with diverse appearances, scales, and weights. Our model is trained purely in simulation and works in the real world without any extra steps. The video of our experiments can be found at: https://research.nvidia.com/publication/2019-10_6-dof-graspnet-variational-grasp-generation-object-manipulation
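The abstract describes a two-stage pipeline: a VAE samples candidate 6-DOF grasps from the observed point cloud, and an evaluator network scores and refines them. The sketch below is a hypothetical, runnable mock-up of that control flow only; the real method uses learned PointNet-style networks, whereas here `decode_grasp`, `evaluate_grasp`, and `refine_grasp` are stand-in functions invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_grasp(latent, point_cloud):
    """Stand-in for the VAE decoder: maps a latent code (plus the point
    cloud) to a 6-DOF grasp pose [x, y, z, roll, pitch, yaw]."""
    center = point_cloud.mean(axis=0)  # rough object center
    return np.concatenate([center + latent[:3], latent[3:]])

def evaluate_grasp(grasp, point_cloud):
    """Stand-in for the grasp evaluator: returns a score in [0, 1].
    Toy rule: grasps positioned nearer the object center score higher."""
    dist = np.linalg.norm(grasp[:3] - point_cloud.mean(axis=0))
    return 1.0 / (1.0 + dist)

def refine_grasp(grasp, point_cloud, step=0.1, iters=10):
    """Stand-in refinement: nudge the pose along a finite-difference
    gradient of the evaluator score (the paper instead backpropagates
    through the learned evaluator)."""
    g = grasp.astype(float).copy()
    for _ in range(iters):
        base = evaluate_grasp(g, point_cloud)
        grad = np.zeros_like(g)
        for i in range(len(g)):
            probe = g.copy()
            probe[i] += 1e-3
            grad[i] = (evaluate_grasp(probe, point_cloud) - base) / 1e-3
        g += step * grad  # gradient ascent on the score
    return g

# Toy "depth camera" point cloud and a batch of latent samples
cloud = rng.normal(loc=[0.5, 0.0, 0.2], scale=0.02, size=(256, 3))
latents = rng.normal(size=(32, 6))

grasps = [decode_grasp(z, cloud) for z in latents]      # sample
refined = [refine_grasp(g, cloud) for g in grasps]      # refine
scores = [evaluate_grasp(g, cloud) for g in refined]    # assess
best = refined[int(np.argmax(scores))]
print("best refined score:", max(scores))
```

In the paper both stages consume the raw 3D point cloud; the separation of a diverse sampler from a precise evaluator is the key design choice, since the VAE alone covers the grasp space while the evaluator filters and polishes candidates.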

Bibtex:

@inproceedings{mousavian20196,
  title = {{6-DOF} {GraspNet}: Variational Grasp Generation for Object Manipulation},
  author = {Mousavian, Arsalan and Eppner, Clemens and Fox, Dieter},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages = {2901--2910},
  year = {2019}
}