Affiliations:
1. School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang, China
2. College of Information Management, Nanjing Agricultural University, Nanjing, China
3. Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, China
Abstract
We present DDPGwP (DDPG with Pretraining), a deep-reinforcement-learning model for autonomous driving decision-making. The model incorporates imitation learning by using expert experience for supervised learning during initial training and preserving the resulting weights. A novel loss function lets the expert experience guide the Actor network's updates jointly with the Critic network, while also contributing to the Critic network's own updates. As a result, imitation learning dominates the early stages of training, and reinforcement learning takes the lead in later stages. Using an experience-replay-buffer separation technique, we categorize collected experience as superior, ordinary, or expert and store each category separately. We select sensor inputs from the TORCS (The Open Racing Car Simulator) simulation platform and validate the model experimentally, comparing the results with the original DDPG, A2C, and PPO algorithms. The experiments show that incorporating imitation learning significantly accelerates early-stage training, reduces blind trial-and-error during initial exploration, and improves stability and safety, while the replay-buffer separation improves sampling efficiency and mitigates overfitting. Beyond faster training, our approach enables the simulated vehicle to learn better strategies and earn higher reward values, demonstrating the proposed algorithm's superior stability, safety, and decision-making capability, as well as faster network convergence.
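The abstract names two mechanisms: a replay buffer separated into expert, superior, and ordinary pools, and a loss that blends imitation and reinforcement objectives so that imitation dominates early training. The paper's exact sampling ratios and weighting schedule are not given here, so the following is a minimal Python sketch under assumed names, ratios, and a linear annealing schedule:

```python
import random

class SeparatedReplayBuffer:
    """Sketch of replay-buffer separation: transitions are stored by
    category, and each minibatch mixes samples from all three pools.
    Pool names and mixing ratios are illustrative assumptions."""

    def __init__(self, capacity=10000):
        self.pools = {"expert": [], "superior": [], "ordinary": []}
        self.capacity = capacity  # per-pool capacity

    def add(self, transition, category="ordinary"):
        pool = self.pools[category]
        if len(pool) >= self.capacity:
            pool.pop(0)  # drop the oldest transition in this pool
        pool.append(transition)

    def sample(self, batch_size, ratios=(0.3, 0.3, 0.4)):
        # Draw a fixed fraction of the batch from each pool
        # (expert, superior, ordinary), capped by pool size.
        batch = []
        for pool, frac in zip(self.pools.values(), ratios):
            k = min(int(batch_size * frac), len(pool))
            batch.extend(random.sample(pool, k))
        return batch


def blended_actor_loss(rl_loss, imitation_loss, step, anneal_steps=10000):
    """Imitation term dominates early, RL term later; the linear
    decay schedule is an assumption, not the paper's formula."""
    w = max(0.0, 1.0 - step / anneal_steps)  # decays from 1 to 0
    return w * imitation_loss + (1.0 - w) * rl_loss
```

At `step=0` the loss equals the imitation loss alone; once `step` reaches `anneal_steps`, only the reinforcement-learning term remains, matching the described early-IL/late-RL handover.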