Learning to Interact with Environment via Geometry-Based Robot Grasping
- Author(s): Qin, Yuzhe
- Advisor(s): Su, Hao; Atanasov, Nikolay; et al.
The ability to learn from interaction with the environment shapes an intelligent agent. Exploratory robots need specific, structured actions to interact with the physical world efficiently. Geometry-based grasping, which serves as the primary action underlying many complex manipulation tasks, can greatly aid robot exploration: with a learned grasping strategy, the robot can directly execute object-specific actions. This thesis studies 6-DoF geometric grasping with a parallel gripper, where the scene is captured by a commodity depth sensor from a single viewpoint. We address the problem in a learning-based framework that operates on point cloud input. At the higher level, we rely on a single-shot grasp proposal network built upon the PointNet++ backbone; this single-shot architecture predicts grasp proposals efficiently and effectively. At the lower level, we propose a method to generate training data automatically. Our data synthesis pipeline generates scenes with complex object configurations and leverages a novel gripper contact model to create dense, high-quality grasp annotations. Experiments in both synthetic and real environments demonstrate that the proposed approach outperforms state-of-the-art geometry-based grasping methods by a large margin. The grasp proposal network trained on synthetic scenes works well in real-world scenarios, which suggests that point-based methods have strong potential to bridge the sim-to-real gap. We hope this work on geometric grasping will support future research on more complex robot manipulation skills.
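The abstract describes the grasp proposal network only at a high level. As a minimal sketch of what a per-point, single-shot proposal head could look like, the PyTorch snippet below takes per-point features from a backbone such as PointNet++ (not shown) and predicts, for each point, a graspability score and a 6-DoF pose encoded as a 3-D translation offset plus a 6-D rotation representation. All module names, feature dimensions, and the output parameterization are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn


class GraspProposalHead(nn.Module):
    """Hypothetical per-point grasp proposal head (sketch).

    Consumes per-point features, e.g. from a PointNet++ backbone, and
    predicts a grasp confidence plus a 6-DoF gripper pose per point.
    """

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1 + 3 + 6),  # score, translation, 6-D rotation
        )

    def forward(self, point_feats: torch.Tensor):
        # point_feats: (B, N, feat_dim) per-point features
        out = self.mlp(point_feats)
        score = torch.sigmoid(out[..., 0])  # grasp confidence per point
        trans = out[..., 1:4]               # offset from point to gripper origin
        rot6d = out[..., 4:10]              # first two columns of the rotation matrix
        return score, trans, rot6d


def rot6d_to_matrix(rot6d: torch.Tensor) -> torch.Tensor:
    """Gram-Schmidt: map the 6-D rotation encoding to a 3x3 rotation matrix."""
    a1, a2 = rot6d[..., :3], rot6d[..., 3:]
    b1 = nn.functional.normalize(a1, dim=-1)
    b2 = nn.functional.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 128)       # batch of per-point features
    head = GraspProposalHead(feat_dim=128)
    score, trans, rot6d = head(feats)
    R = rot6d_to_matrix(rot6d)               # (2, 1024, 3, 3) rotation matrices
    print(score.shape, trans.shape, R.shape)
```

Predicting a proposal at every point, rather than iterating over sampled candidates, is what makes the design "single-shot": one forward pass yields dense proposals that can then be ranked by score.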
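Likewise, the gripper contact model used for annotation is not specified in the abstract. A common geometric criterion for labeling parallel-jaw grasps is the antipodal, friction-cone test sketched below; the function name, friction coefficient, and opening-width limit are assumptions for illustration, not the thesis's exact model.

```python
import numpy as np


def antipodal_check(p1, n1, p2, n2, mu=0.4, max_width=0.08):
    """Force-closure test for a parallel-jaw grasp under Coulomb friction.

    p1, p2: contact points (3,); n1, n2: outward unit surface normals (3,).
    mu and max_width (friction coefficient, gripper opening in meters)
    are illustrative defaults, not values taken from the thesis.
    """
    p1, n1, p2, n2 = map(np.asarray, (p1, n1, p2, n2))
    axis = p2 - p1                      # line connecting the two contacts
    width = np.linalg.norm(axis)
    if width < 1e-6 or width > max_width:
        return False                    # degenerate pair, or gripper cannot open that wide
    axis = axis / width
    cos_thresh = np.cos(np.arctan(mu))  # friction-cone half-angle is arctan(mu)
    # Each finger pushes along the contact line, into the surface: the
    # pushing direction must lie inside the friction cone around the
    # inward normal (-n) at that contact.
    ok1 = np.dot(axis, -n1) >= cos_thresh
    ok2 = np.dot(-axis, -n2) >= cos_thresh
    return bool(ok1 and ok2)


# Example: two opposing contacts on a 6 cm-wide box face pass the test.
assert antipodal_check([0, 0, 0], [-1, 0, 0], [0.06, 0, 0], [1, 0, 0])
```

Run over many sampled contact pairs on a mesh, a test of this kind can produce the dense grasp labels the pipeline needs without any human annotation, which is what makes fully synthetic training data practical.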