A Comparative Study of Traffic Signal Control Based on Reinforcement Learning Algorithms
Published: 2024-06-04
Issue: 6
Volume: 15
Page: 246
ISSN: 2032-6653
Container-title: World Electric Vehicle Journal
Short-container-title: WEVJ
Language: en
Author:
Ouyang Chen 1 (ORCID), Zhan Zhenfei 1, Lv Fengyao 1
Affiliation:
1. School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400074, China
Abstract
In recent years, the increasing production and sales of automobiles have led to a notable rise in congestion on urban road traffic systems, particularly at ramps and signalized intersections. Intelligent traffic signal control is an effective means of addressing this congestion, and reinforcement learning methods have demonstrated considerable potential for complex traffic signal control problems with multidimensional states and actions. In this research, the authors propose Q-learning and Deep Q-Network (DQN) based signal control frameworks that use variable phase sequences and cycle times to adjust the order and duration of signal phases, yielding a stable traffic signal control strategy. Experiments are run in the traffic simulator Simulation of Urban Mobility (SUMO), measuring the average speed and lane occupancy rate of vehicles entering the ramp to evaluate safety performance, and measuring vehicle travel time to assess stability. The simulation results show that both reinforcement learning algorithms control vehicles in dynamic traffic environments with higher average speed and lower lane occupancy rate than the no-control baseline, and that the DQN control model improves average speed by about 10% and reduces lane occupancy rate by about 30% compared to the Q-learning control model, providing higher safety performance.
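The Q-learning framework described above can be illustrated with a minimal tabular sketch. The state discretization, reward shaping, and all names below are illustrative assumptions, not the paper's actual implementation; the abstract only specifies that phase order and duration are adjusted via Q-learning.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for signal-phase selection.
# Assumed state: a discretized (queue length, current phase) tuple;
# action: index of the next signal phase. Hyperparameters are placeholders.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_PHASES = 4

q_table = defaultdict(lambda: [0.0] * N_PHASES)

def choose_phase(state):
    """Epsilon-greedy selection of the next signal phase."""
    if random.random() < EPSILON:
        return random.randrange(N_PHASES)
    values = q_table[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                       - q_table[state][action])

# Toy episode step with a synthetic reward (negative queue length),
# standing in for measurements an environment such as SUMO would provide.
s = (3, 0)               # 3 queued vehicles, currently in phase 0
a = choose_phase(s)
s_next = (1, a)          # queue shrank to 1 after switching
update(s, a, reward=-1.0, next_state=s_next)
```

The DQN variant in the paper replaces the table with a neural network approximating Q(s, a), which is what allows it to handle the larger, multidimensional state space.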
Funder
Open Fund of the National Key Laboratory of Intelligent Vehicle Safety Technology; Chongqing Jiaotong University-Yangtse Delta Advanced Material Research Institute Provincial-level Joint Graduate Student Cultivation Base