Abstract
Adaptive traffic signal control (ATSC) based on deep reinforcement learning (DRL) has shown promise for reducing traffic congestion. Most existing methods that keep the traffic signal phases fixed adopt two agent actions to match a four-phase scheme, which suffers from unstable performance and undesirable operation at a four-phase signalized intersection. In this paper, a Double Deep Q-Network (DDQN) with a dual-agent algorithm is proposed to obtain a stable traffic signal control policy. Specifically, the two agents are defined by two different states and alternate control of the green lights, so that the phase sequence remains fixed and the control process stable. State representations and reward functions are designed to improve the observability and reduce the learning difficulty of the two agents. To enhance the feasibility and reliability of the two agents in controlling the four-phase signalized intersection, a network structure incorporating DDQN is proposed to map states to rewards. Experiments are carried out in Simulation of Urban MObility (SUMO), and the results show that the proposed traffic signal control algorithm is effective in improving traffic capacity.
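As a rough illustration of the DDQN component mentioned in the abstract, the sketch below shows a standard double-DQN target computation for one agent. It is a minimal example assuming a generic two-action agent (e.g., keep or switch the current green phase); the network architecture, state dimension, and hyperparameters are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a double DQN target for one traffic-signal agent.
# Sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int = 2):
        super().__init__()
        # Two actions per agent: keep the current green phase or switch to the next one.
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def ddqn_target(online: QNet, target: QNet,
                reward: torch.Tensor, next_state: torch.Tensor,
                done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, which reduces overestimation bias."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```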
Funder
National Natural Science Foundation of China
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by
29 articles.