A two-stage grasp detection method for sequential robotic grasping in stacking scenarios
Published: 2024
Volume: 21
Issue: 2
Pages: 3448-3472
ISSN: 1551-0018
Container-title: Mathematical Biosciences and Engineering
Short-container-title: MBE
Authors:
Zhang Jing 1,2, Yin Baoqun 1, Zhong Yu 2, Wei Qiang 3, Zhao Jia 2, Bilal Hazrat 1
Affiliations:
1. Department of Automation, University of Science and Technology of China, Hefei 230027, China
2. School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
3. The 14th Research Institute of China Electronics Technology Group Corporation, Nanjing 210039, China
Abstract
<abstract>
<p>Dexterous grasping is essential for fine manipulation by intelligent robots; however, its application in stacking scenarios remains a challenge. In this study, we propose a two-phase grasp detection approach for sequential robotic grasping, specifically for application in stacking scenarios. In the first phase, a rotated-YOLOv3 (R-YOLOv3) model was designed to efficiently detect the category and position of the top-layer object, facilitating the detection of stacked objects. A stacked-scenario dataset with only the top-layer objects annotated was then built for training and testing the R-YOLOv3 network. In the second phase, a G-ResNet50 model was developed to improve grasping accuracy by finding the most suitable pose for grasping the uppermost object in various stacking scenarios. Finally, a robot was directed to sequentially grasp the stacked objects. The proposed method achieved an average grasp prediction success rate of 96.60% on the Cornell grasping dataset. In 280 real-world grasping experiments conducted in stacked scenarios, the robot achieved a maximum grasping success rate of 95.00% and an average grasping success rate of 83.93%. These results demonstrate the efficacy and competitiveness of the proposed approach in executing grasping tasks within complex multi-object stacked environments.</p>
</abstract>
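The sequential-grasping loop described in the abstract — detect the top-layer object, predict a grasp pose for it, grasp, and repeat on the remaining stack — can be sketched as follows. This is a minimal illustration only: `detect_top_object` and `predict_grasp_pose` are hypothetical stand-ins for the paper's R-YOLOv3 and G-ResNet50 models, and the height-based "top object" rule and grasp-rectangle fields are assumptions, not the authors' actual implementation.

```python
def detect_top_object(scene):
    """Stage-1 stand-in (R-YOLOv3 in the paper): pick the top-layer object.

    Here we simply take the object with the greatest height value as the
    top of the stack; the real model classifies and localizes it from an image.
    """
    return max(scene, key=lambda obj: obj["height"])

def predict_grasp_pose(obj):
    """Stage-2 stand-in (G-ResNet50 in the paper): return a grasp rectangle.

    The (x, y, angle, width) fields follow the common grasp-rectangle
    parameterization used with the Cornell grasping dataset.
    """
    x, y = obj["center"]
    return {"x": x, "y": y, "angle": 0.0, "width": obj["size"]}

def sequential_grasp(scene):
    """Grasp objects one by one, always removing the current top-layer object."""
    order = []
    remaining = list(scene)
    while remaining:
        top = detect_top_object(remaining)
        pose = predict_grasp_pose(top)
        order.append((top["name"], pose))
        remaining.remove(top)  # the grasped object leaves the stack
    return order
```

The key property of the two-stage design is that each detection pass only needs to be correct about the single uppermost object, which is why the dataset annotates top-layer objects only.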
Publisher
American Institute of Mathematical Sciences (AIMS)