NG-Net: No-Grasp annotation grasp detection network for stacked scenes

Document type: Foreign journal article

First author: Shi, Min

Authors: Shi, Min; Hou, Jingzhao; Li, Zhaoxin; Zhu, Dengming

Author affiliations:

Keywords: Grasp detection; No-Grasp annotation; Stacked scenes; Robotic grasping

Journal: JOURNAL OF INTELLIGENT MANUFACTURING (impact factor: 8.3; five-year impact factor: 6.9)

ISSN: 0956-5515

Year/volume/issue: 2024

Pages:

Indexed in: SCI

Abstract: Achieving a high grasping success rate in stacked environments is central to robotic grasping tasks. Most methods reach high success rates by training a network on datasets containing large numbers of grasp annotations, which are costly to produce in both labor and materials. Achieving a high grasping success rate in stacked scenes without grasp annotations is therefore a challenging task. To address this, we propose NG-Net, a No-Grasp annotation grasp detection network for stacked scenes. The network consists of two modules: an object selection module and a grasp generation module. Specifically, the object selection module performs instance segmentation on the raw point cloud and selects the highest-scoring object as the grasp target, while the grasp generation module uses mathematical methods to analyze the geometric features of the point-cloud surface and generate grasp poses without grasp annotations. Experiments on the modified IPA-Binpicking dataset G show that NG-Net achieves an average grasp success rate of 97% in stacked-scene grasping experiments, 14-22% higher than PointNetGPD.
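The abstract describes a two-stage pipeline: segment object instances in the raw point cloud, pick the highest-scoring instance, then derive a grasp pose from the local surface geometry rather than from learned grasp annotations. Below is a minimal Python sketch of that control flow only; the function names, the toy segmentation stand-in, and the normal-based grasp proposal are illustrative assumptions, not the paper's actual network or geometric analysis.

```python
import numpy as np

def segment_instances(points):
    # Stand-in for the object selection module: a real system would run a
    # learned instance-segmentation network on the raw point cloud. Here we
    # fake two instances by splitting along x, purely for illustration.
    mask = points[:, 0] < np.median(points[:, 0])
    return [(points[mask], 0.9), (points[~mask], 0.7)]  # (instance, score)

def estimate_normal(neighborhood):
    # Surface normal via PCA: the right-singular vector with the smallest
    # singular value of the centered neighborhood, a standard geometric
    # analysis on point clouds.
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def propose_grasp(instance, k=16):
    # Annotation-free grasp proposal (hypothetical): sample a surface point,
    # estimate its normal from k nearest neighbors, and align the gripper
    # approach axis with that normal.
    idx = np.random.randint(len(instance))
    p = instance[idx]
    dists = np.linalg.norm(instance - p, axis=1)
    neighbors = instance[np.argsort(dists)[:k]]
    approach = estimate_normal(neighbors)
    return p, approach  # grasp center and approach direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(2000, 3))              # stand-in for a scene cloud
    instances = segment_instances(cloud)
    target = max(instances, key=lambda t: t[1])[0]  # highest-scoring object first
    center, approach = propose_grasp(target)
    print("grasp center:", center, "approach:", approach)
```

In a stacked-scene setting, selecting the highest-scoring (typically least-occluded) object first matters because grasping a buried object can disturb the pile; the sketch reflects that ordering but none of NG-Net's internals.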

Classification:
