Author:
Zhou Wei, Chen Dong, Yan Jun, Li Zhaojian, Yin Huilin, Ge Wanchen
Abstract
Autonomous driving has attracted significant research interest over the past two decades, as it offers many potential benefits, including relieving drivers of exhausting driving tasks and mitigating traffic congestion, among others. Despite promising progress, lane-changing remains a great challenge for autonomous vehicles (AVs), especially in mixed and dynamic traffic scenarios. Recently, reinforcement learning (RL) has been widely explored for lane-changing decision-making in AVs, with encouraging results. However, most of those studies focus on a single-vehicle setting, and lane-changing in the context of multiple AVs coexisting with human-driven vehicles (HDVs) has received scarce attention. In this paper, we formulate the lane-changing decision-making of multiple AVs in a mixed-traffic highway environment as a multi-agent reinforcement learning (MARL) problem, where each AV makes lane-changing decisions based on the motions of both neighboring AVs and HDVs. Specifically, a multi-agent advantage actor-critic (MA2C) method is proposed with a novel local reward design and a parameter-sharing scheme. In particular, a multi-objective reward function is designed to incorporate fuel efficiency, driving comfort, and the safety of autonomous driving. A comprehensive experimental study demonstrates that the proposed MARL framework consistently outperforms several state-of-the-art benchmarks in terms of efficiency, safety, and driver comfort.
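The abstract mentions a multi-objective reward combining fuel efficiency, driving comfort, and safety. A minimal sketch of how such a weighted reward might look is given below; the paper's exact terms and weights are not reproduced here, so the function name, the per-objective terms, and the weights are illustrative assumptions only.

```python
def lane_change_reward(speed, target_speed, jerk, headway, min_headway,
                       w_eff=1.0, w_comfort=0.2, w_safety=5.0):
    """Hypothetical multi-objective reward for one AV agent (not the paper's
    actual formulation): a weighted sum of efficiency, comfort, and safety
    terms, each expressed as a penalty (<= 0) so the maximum reward is 0."""
    # Efficiency: penalize deviation from the desired cruising speed.
    efficiency = -abs(speed - target_speed) / target_speed
    # Comfort: penalize jerk (rate of change of acceleration).
    comfort = -abs(jerk)
    # Safety: fixed penalty when the headway to the lead vehicle is unsafe.
    safety = -1.0 if headway < min_headway else 0.0
    return w_eff * efficiency + w_comfort * comfort + w_safety * safety
```

With this shape, an AV cruising at the target speed with zero jerk and a safe headway receives the maximum reward of 0, while an unsafe headway dominates the sum through the larger safety weight.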
Funder
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC
Cited by: 44 articles.