Affiliation:
1. College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, PR China
2. State Key Laboratory of Engines, Tianjin University, Tianjin, PR China
Abstract
Deep reinforcement learning (DRL) based car-following control (CFC) models are widely applied to the longitudinal motion control of automated vehicles because they can self-learn an optimal control policy. However, DRL algorithms easily produce unsafe commands and exhibit low robustness, especially in complex car-following scenarios. To improve the DRL-based CFC model, this paper combines a deep deterministic policy gradient (DDPG) based CFC model with a deep optical flow estimation (DOFE) based CFC model that compensates for the shortcomings of the DDPG-based one; the combined model is denoted the cooperative car-following model (DDPGoF). The DDPG-based CFC model utilizes prioritized experience replay, which intrinsically accelerates learning. Meanwhile, the proposed DOFE-based CFC model employs the recurrent all-pairs field transforms (RAFT) algorithm and EfficientNet to perceive the motion variation of surrounding vehicles, motorcycles, etc. Real vehicle driving data sets are used to calibrate and validate the proposed DDPGoF-based CFC model, and several assessment criteria are established to evaluate its overall performance. As a result, the DDPGoF-based CFC model is superior to the DDPG-based one in avoiding crashes and in improving car-following stability, riding comfort, and the fuel economy of hybrid electric vehicles (HEVs).
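The prioritized experience replay mentioned in the abstract samples transitions in proportion to their temporal-difference (TD) error, so informative experiences are revisited more often than uniform sampling would allow. A minimal sketch of a proportional-priority buffer is shown below; this is an illustrative implementation of the general technique, not the paper's code, and the class and parameter names (`PrioritizedReplayBuffer`, `alpha`, `beta`) are assumptions.

```python
import random


class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (illustrative sketch).

    alpha controls how strongly priorities shape the sampling
    distribution (0 = uniform); beta controls the strength of the
    importance-sampling correction applied to each sampled weight.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []       # stored transitions
        self.priorities = []   # per-transition priorities
        self.pos = 0           # next overwrite position when full

    def add(self, transition, td_error=1.0):
        # Priority grows with |TD error|; the small constant keeps
        # every transition sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        # Importance-sampling weights correct the bias introduced
        # by non-uniform sampling; normalized by the batch maximum.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return [self.buffer[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Called after a learning step with the freshly computed TD errors.
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

In a DDPG training loop, the critic's TD errors for each sampled batch would be fed back through `update_priorities`, so transitions the critic currently predicts poorly are replayed more often.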
Subject
Mechanical Engineering, Aerospace Engineering
Cited by
1 article.
1. Self Balancing Motorcycle Using Reinforcement Learning;2024 International Conference on Emerging Systems and Intelligent Computing (ESIC);2024-02-09