A decision-making method for autonomous driving based on DDPG with pretraining

Authors:

Ma Jinlin1, Zhang Mingyu1, Ma Kaiping2, Zhang Houzhong3, Geng Guoqing1

Affiliations:

1. School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang, China

2. College of Information Management, Nanjing Agricultural University, Nanjing, China

3. Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, China

Abstract

We present DDPGwP (DDPG with Pretraining), a deep reinforcement learning model for autonomous driving decision-making. The model incorporates imitation learning by using expert experience for supervised pretraining and preserving the resulting weights. A novel loss function is devised in which expert experience guides the Actor network's update jointly with the Critic network and also participates in the Critic network's updates, so that imitation learning dominates the early stages of training while reinforcement learning takes the lead later on. Using an experience replay buffer separation technique, we categorize and store the collected superior, ordinary, and expert experiences. We select sensor inputs from the TORCS (The Open Racing Car Simulator) simulation platform and conduct experimental validation, comparing the results with the original DDPG, A2C, and PPO algorithms. The experimental results show that incorporating imitation learning significantly accelerates early-stage training, reduces blind trial-and-error during initial exploration, and improves the stability and safety of the algorithm, while the experience replay buffer separation technique improves sampling efficiency and mitigates overfitting. In addition to speeding up training, our approach enables the simulated vehicle to learn superior strategies and obtain higher reward values, demonstrating the improved stability, safety, policy-making capability, and convergence speed of the proposed algorithm.
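The abstract describes two mechanisms that a short sketch can make concrete: an actor loss that blends a behavior-cloning term with the DDPG policy-gradient term (imitation dominating early training, reinforcement learning later), and mini-batch sampling across separated expert, superior, and ordinary replay buffers. The PyTorch sketch below is a minimal illustration under assumed names and hyperparameters (`actor_loss`, `sample_mixed`, the linear anneal schedule, the sampling ratios are all illustrative); it is not the authors' implementation or their exact loss function.

```python
import random
import torch.nn.functional as F


def actor_loss(actor, critic, states, expert_states, expert_actions,
               step, anneal_steps=50_000):
    """Blend imitation and reinforcement objectives with a decaying weight.

    Hypothetical sketch: the anneal schedule and weighting are assumptions,
    not the loss function proposed in the paper.
    """
    # Imitation (behavior-cloning) term: match expert actions on expert states.
    bc_loss = F.mse_loss(actor(expert_states), expert_actions)

    # Standard DDPG policy-gradient term: maximize Q(s, pi(s)) by minimizing its negative.
    pg_loss = -critic(states, actor(states)).mean()

    # Linear anneal: imitation dominates early training, RL takes over later.
    lam = max(0.0, 1.0 - step / anneal_steps)
    return lam * bc_loss + (1.0 - lam) * pg_loss


def sample_mixed(expert_buf, superior_buf, ordinary_buf, batch_size,
                 ratios=(0.25, 0.25, 0.5)):
    """Draw a mini-batch across three separated replay buffers.

    The sampling ratios are an assumed heuristic, not values from the paper.
    Each buffer can be any sequence of transitions (e.g. a list of tuples).
    """
    counts = [int(r * batch_size) for r in ratios]
    batch = []
    for buf, n in zip((expert_buf, superior_buf, ordinary_buf), counts):
        batch.extend(random.sample(buf, min(n, len(buf))))
    return batch
```

One plausible reading of "imitation learning dominates early, reinforcement learning later" is such a decaying mixing weight; the paper's actual mechanism may differ (e.g. a fixed weighting or a gating rule), so the schedule above should be treated purely as an assumption.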

Publisher

SAGE Publications

Subject

Mechanical Engineering, Aerospace Engineering

