An End-to-End Spatial Grasp Prediction Model for Humanoid Multi-fingered Hand Using Deep Network
Li, Shiqi, Li, Zhuo, Han, Ke, Li, Xiao, Xiong, Youjun, and Xie, Zheng

Publication: 2021 6th International Conference on Control, Robotics and Cybernetics (CRC)

Abstract: High-DoF grasping with humanoid multi-fingered hands is of great significance because of its wide application in dexterous manipulation scenarios. Unfortunately, owing to the complexity of grasp representation and the high dimensionality of multi-fingered hands, most current research focuses on obtaining two-fingered grasp candidates, which requires a grasp sampling process. This approach is time-consuming, especially when the grasp pose is spatial. Therefore, in this paper we abandon this scheme and propose an end-to-end data-driven model that directly predicts spatial grasp contact points for a humanoid multi-fingered hand, considering both the object's geometric attributes and the gripper's characteristics. The model takes the object point cloud and a 3-D model of the multi-fingered hand as inputs and requires no sampling or search process. Additionally, a grasping dataset is built for our model to ensure that the predicted grasp points satisfy force-closure metrics and grasp reachability. We verify our model both on a simulation test dataset and on an actual humanoid service robot with two multi-fingered hands; the results demonstrate that the proposed model is able to grasp novel objects precisely.

Bibtex:

@inproceedings{Li2021,
  author = {Li, Shiqi and Li, Zhuo and Han, Ke and Li, Xiao and Xiong, Youjun and Xie, Zheng},
  booktitle = {2021 6th International Conference on Control, Robotics and Cybernetics (CRC)},
  title = {An End-to-End Spatial Grasp Prediction Model for Humanoid Multi-fingered Hand Using Deep Network},
  year = {2021},
  pages = {130-136},
  doi = {10.1109/CRC52766.2021.9620132}
}