Synergistic Pushing and Grasping for Enhanced Robotic Manipulation Using Deep Reinforcement Learning

Authors:

Shiferaw Birhanemeskel Alamir 1, Agidew Tayachew F. 1, Alzahrani Ali Saeed 2, Srinivasagan Ramasamy 2

Affiliations:

1. Department of Electromechanical Engineering, Mechatronics, Addis Ababa Science and Technology University, Addis Ababa 16417, Ethiopia

2. Department of Computer Engineering, College of Computer Science and Information Technology, King Faisal University, Al Hufuf 31982, Saudi Arabia

Abstract

In robotic manipulation, achieving efficient and reliable grasping in cluttered environments remains a significant challenge. This study presents a novel approach that integrates pushing and grasping actions using deep reinforcement learning. The proposed model employs two fully convolutional neural networks—Push-Net and Grasp-Net—that predict pixel-wise Q-values for potential pushing and grasping actions from heightmap images of the scene. The training process utilizes deep Q-learning with a reward structure that incentivizes both successful pushes and grasps, encouraging the robot to create favorable conditions for grasping through strategic pushing actions. Simulation results demonstrate that the proposed model significantly outperforms traditional grasp-only policies, achieving an 87% grasp success rate in cluttered environments, compared to 60% for grasp-only approaches. The model shows robust performance in various challenging scenarios, including well-ordered configurations and novel objects, with completion rates of up to 100% and grasp success rates as high as 95.8%. These findings highlight the model’s ability to generalize to unseen objects and configurations, making it a practical solution for real-world robotic manipulation tasks.
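
To make the pixel-wise Q-value formulation above concrete, the sketch below shows one way a Push-Net/Grasp-Net pair could be wired up in PyTorch: each fully convolutional network maps a heightmap of the scene to a dense map of Q-values, and the agent executes the primitive (push or grasp) at the pixel with the highest predicted value. This is a minimal illustration under assumptions, not the authors' implementation; the layer sizes and the helper names `PixelQNet` and `select_action` are placeholders, and the actual architectures described in the paper are not reproduced here.

```python
# Minimal sketch (assumed, not the authors' code): two fully convolutional
# networks map an HxW heightmap to HxW pixel-wise Q-value maps, one for
# pushing and one for grasping. The greedy action is the argmax over both maps.
import torch
import torch.nn as nn

class PixelQNet(nn.Module):
    """Tiny fully convolutional Q-network: heightmap in, per-pixel Q-values out."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # one Q-value per pixel
        )

    def forward(self, heightmap: torch.Tensor) -> torch.Tensor:
        # heightmap: (B, C, H, W) -> Q-map: (B, 1, H, W)
        return self.net(heightmap)

push_net, grasp_net = PixelQNet(), PixelQNet()

def select_action(heightmap: torch.Tensor):
    """Greedy action selection over the joint push/grasp Q-maps."""
    with torch.no_grad():
        q_push = push_net(heightmap)    # (1, 1, H, W)
        q_grasp = grasp_net(heightmap)  # (1, 1, H, W)
    q_all = torch.cat([q_push, q_grasp], dim=1)  # (1, 2, H, W)
    h, w = q_all.shape[2], q_all.shape[3]
    flat_idx = torch.argmax(q_all.view(-1)).item()
    primitive, pixel = divmod(flat_idx, h * w)
    row, col = divmod(pixel, w)
    return ("push" if primitive == 0 else "grasp"), (row, col)

# Example: a 224x224 single-channel heightmap of the workspace
action, (r, c) = select_action(torch.rand(1, 1, 224, 224))
print(action, r, c)
```

In training, these Q-maps would be regressed toward deep Q-learning targets, with the reward structure described in the abstract assigning positive reward to successful grasps and to pushes that help create graspable configurations.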

Funder

King Faisal University

Publisher

MDPI AG

