Model-less Estimation Method for Robot Grasping Parameters Using 3D Shape Primitive Approximation
Torii, Takuya, and Hashimoto, Manabu

Publication: IEEE International Conference on Automation Science and Engineering (CASE)

Abstract: The vision systems of intelligent robots, such as home robots, must perform two important tasks: detecting objects and estimating where to grasp them. Deep Neural Network (DNN)-based techniques are applicable to object detection; however, estimating grasping parameters remains difficult, especially when a 3D model of each object is unavailable. We propose a method to estimate the parameters (position, direction, angle, and opening width) required for a robot to grasp objects without a model. First, the 3D surface shape of the object is recognized using a DNN. Next, by integrating these recognition results, an appropriate object primitive (hexahedron, cylinder, or sphere) is fitted to the target object. Finally, the optimal grasping parameters for the object are determined using a rule database prepared in advance. The success rate for approximating object primitives with our method was 94.7%, which is 6.7% higher than that of an existing method using 3D ShapeNets. The success rate for grasping with our method was 85.6% in grasping simulations using Gazebo, which is 17.8% higher than that of the GPD method using a DNN.
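The rule-database step described in the abstract could be sketched as below. Everything here is an illustrative assumption, not the paper's actual rules: the function name `grasp_from_primitive`, the specific margin, and the per-primitive approach directions are all hypothetical, meant only to show how a fitted primitive's type and size might map to the four grasping parameters.

```python
from dataclasses import dataclass


@dataclass
class GraspParameters:
    position: tuple       # grasp point (x, y, z) in meters
    direction: tuple      # approach direction as a unit vector
    angle: float          # gripper rotation about the approach axis (rad)
    opening_width: float  # finger opening width (m)


def grasp_from_primitive(kind, center, size):
    """Hypothetical rule database: map a fitted primitive to grasp parameters.

    kind:   'hexahedron', 'cylinder', or 'sphere' (the paper's three primitives)
    center: (x, y, z) center of the fitted primitive
    size:   characteristic dimension to grip across (shortest side or diameter)
    """
    margin = 0.02  # assumed extra finger clearance, not from the paper
    if kind == 'hexahedron':
        # Rule: approach from above, fingers across the shortest side.
        return GraspParameters(center, (0.0, 0.0, -1.0), 0.0, size + margin)
    if kind == 'cylinder':
        # Rule: approach from above, gripper rotated to close across the axis.
        return GraspParameters(center, (0.0, 0.0, -1.0), 1.5708, size + margin)
    if kind == 'sphere':
        # Rule: any approach direction is equivalent; open past the diameter.
        return GraspParameters(center, (0.0, 0.0, -1.0), 0.0, size + margin)
    raise ValueError(f"unknown primitive: {kind}")
```

A caller would fit a primitive to the observed point cloud first, then look up the parameters, e.g. `grasp_from_primitive('cylinder', (0.0, 0.0, 0.1), 0.05)` for a can-like object 5 cm in diameter.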

Bibtex:

@inproceedings{Torii2018,
  author = {Torii, Takuya and Hashimoto, Manabu},
  booktitle = {IEEE International Conference on Automation Science and Engineering (CASE)},
  title = {Model-less Estimation Method for Robot Grasping Parameters Using 3D Shape Primitive Approximation},
  year = {2018},
  pages = {580-585},
  doi = {10.1109/COASE.2018.8560417}
}