Publication: IEEE-RAS International Conference on Humanoid Robots
Abstract: Grasping unknown objects is a challenging task for humanoid robots, as planning and execution have to cope with noisy sensor data. This work presents a framework that integrates sensing, planning, and acting in a single visuo-haptic grasping pipeline. Visual and tactile perception are fused using Gaussian Process Implicit Surfaces to estimate the object surface. Two grasp planners then generate grasp candidates, which are used to train a neural network that determines the best grasp. The main contribution of this work is a discriminative deep neural network for scoring grasp hypotheses for underactuated humanoid hands. The pipeline delivers full 6D grasp poses for multi-fingered humanoid hands but is not limited to any specific gripper. The pipeline is trained and evaluated in simulation on objects from the YCB and KIT object sets, achieving a 95% success rate with respect to force closure. To demonstrate the validity of the proposed approach, the pipeline is executed on the humanoid robot ARMAR-6 in experiments with eight non-trivial objects using an underactuated five-finger hand.
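The abstract's core fusion step, estimating an object surface from combined visual and tactile points with a Gaussian Process Implicit Surface, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes scikit-learn's GaussianProcessRegressor, an RBF kernel, and illustrative noise levels; the paper's actual kernel, hyperparameters, and sign convention may differ.

```python
# Minimal GPIS sketch (illustrative, not the paper's implementation):
# fuse visual and tactile surface observations into one implicit surface f(x) = 0.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gpis(visual_pts, tactile_pts, interior_pt,
             noise_visual=1e-2, noise_tactile=1e-4):
    """Fit a GP implicit surface from fused surface points.

    visual_pts, tactile_pts: (N, 3) arrays of points assumed to lie on the surface.
    interior_pt: (3,) point assumed to be inside the object (e.g. its centroid),
                 labeled with a negative value to fix the sign convention.
    """
    X = np.vstack([visual_pts, tactile_pts, interior_pt[None, :]])
    # Surface points carry implicit value 0; the interior point is set to -1.
    y = np.concatenate([np.zeros(len(visual_pts) + len(tactile_pts)), [-1.0]])
    # Per-observation noise: tactile contacts are trusted more than depth points.
    alpha = np.concatenate([
        np.full(len(visual_pts), noise_visual),
        np.full(len(tactile_pts), noise_tactile),
        [1e-6],
    ])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), alpha=alpha)
    gp.fit(X, y)
    return gp

# Usage: gp.predict(Q, return_std=True) returns the implicit value and its
# uncertainty at query points Q; the zero level set of the posterior mean
# approximates the object surface.
```

In such a setup, the predictive standard deviation indicates where the surface estimate is uncertain, which is the kind of information a downstream grasp planner or grasp-scoring network can exploit.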
Bibtex:
@inproceedings{ottenhaus2019visuo, author = {Ottenhaus, Simon and Renninghoff, Daniel and Grimm, Raphael and Ferreira, Fabio and Asfour, Tamim}, booktitle = {IEEE-RAS International Conference on Humanoid Robots}, title = {Visuo-Haptic Grasping of Unknown Objects based on Gaussian Process Implicit Surfaces and Deep Learning}, year = {2019}, pages = {402--409}, doi = {10.1109/Humanoids43949.2019.9035002} }