Publication: IEEE Robotics & Automation Magazine
Abstract: We propose a novel approach to multifingered grasp planning that leverages learned deep neural network (DNN) models. We trained a voxel-based 3D convolutional neural network (CNN) to predict grasp-success probability as a function of both visual information of an object and grasp configuration. From this, we formulated grasp planning as inferring the grasp configuration that maximizes the probability of grasp success. In addition, we learned a prior over grasp configurations as a mixture-density network (MDN) conditioned on our voxel-based object representation. We show that this object-conditional prior improves grasp inference when used with the learned grasp success-prediction network compared to a learned, object-agnostic prior or an uninformed uniform prior. Our work is the first to directly plan high-quality multifingered grasps in configuration space using a DNN without the need for an external planner. We validated our inference method by performing multifingered grasping on a physical robot. Our experimental results show that our planning method outperforms existing neural-network-based grasp-planning methods.
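The abstract frames grasp planning as inference in the learned models: maximize the predicted success probability plus an object-conditional log-prior over the grasp configuration. Below is a minimal sketch of that idea under assumed interfaces, using gradient ascent through a frozen PyTorch success-prediction network; the names `success_net`, `prior_logp`, and the optimization hyperparameters are hypothetical stand-ins, not the paper's actual architectures or settings.

```python
import torch

def plan_grasp(success_net, prior_logp, voxel, init_grasp, steps=200, lr=1e-2):
    """Sketch of MAP-style grasp inference: maximize
    log p(success | voxel, grasp) + log p(grasp | voxel)
    over the grasp configuration, with both terms given by learned models."""
    # Grasp configuration (e.g., palm pose + finger joint angles) as the free variable
    grasp = init_grasp.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([grasp], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Success probability predicted by the (frozen) voxel-based 3D CNN
        p_success = success_net(voxel, grasp)
        # Object-conditional log-prior over grasp configurations (e.g., from an MDN)
        log_prior = prior_logp(voxel, grasp)
        # Negative log-posterior; minimizing it performs gradient-based inference
        loss = -(torch.log(p_success + 1e-8) + log_prior)
        loss.backward()
        opt.step()
    return grasp.detach()
```

In this sketch the prior both regularizes the optimization and, by sampling from it, can supply the initialization `init_grasp`; replacing `prior_logp` with a constant recovers the object-agnostic or uniform-prior baselines mentioned in the abstract.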
Bibtex:
@article{lu2020multifingered,
  title     = {Multifingered grasp planning via inference in deep neural networks: Outperforming sampling by learning differentiable models},
  author    = {Lu, Qingkai and Van der Merwe, Mark and Sundaralingam, Balakumar and Hermans, Tucker},
  journal   = {IEEE Robotics \& Automation Magazine},
  volume    = {27},
  number    = {2},
  pages     = {55--65},
  year      = {2020},
  publisher = {IEEE}
}