Abstract
The route game is recognized as an effective method to alleviate Braess' paradox, in which new traffic congestion arises because numerous vehicles follow the same recommendation from selfish route guidance (such as Google Maps). Conventional route games are symmetric: vehicles' payoffs depend only on the distribution of selected routes, not on which vehicle chose which route, so an exact Nash equilibrium can be obtained by constructing a special potential function. However, with the arrival of smart cities, engineers are more concerned with the real-time performance of route schemes than with their absolute optimality in real traffic, and reconstructing new potential functions under dynamic traffic conditions is not an easy task. In this paper, in contrast to the hard-to-solve potential-function-based exact method, a matched Q-learning algorithm is designed to generate an approximate Nash equilibrium of the classic route game for real-time traffic. An experimental study shows that the Nash equilibrium coefficients produced by the Q-learning-based approximate solving algorithm all converge to 1.00 and remain convergent under different traffic parameters.
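The abstract does not give the network topology, latency functions, or reward shaping, so the following is only a minimal illustrative sketch of the general idea: independent, stateless Q-learning agents choosing between two routes of a symmetric congestion game, where each vehicle's payoff is its negative travel time. The vehicle count, latency coefficients, and learning parameters are assumptions for illustration, not the paper's setup.

```python
import random

# Hypothetical symmetric congestion game: N_VEHICLES vehicles choose route 0 or 1.
# Travel time on a route grows with the number of vehicles using it.
N_VEHICLES = 10
EPISODES = 5000
ALPHA, GAMMA, EPSILON = 0.1, 0.0, 0.1  # stateless (single-state) Q-learning

def latency(route, load):
    # Illustrative affine latency functions (assumed, not from the paper).
    a, b = [(1.0, 0.5), (2.0, 0.25)][route]
    return a + b * load

# One Q-table per vehicle, over the two route choices.
Q = [[0.0, 0.0] for _ in range(N_VEHICLES)]

for _ in range(EPISODES):
    # Epsilon-greedy route choice for every vehicle.
    choices = [
        random.randrange(2) if random.random() < EPSILON
        else max((0, 1), key=lambda r: Q[i][r])
        for i in range(N_VEHICLES)
    ]
    loads = [choices.count(0), choices.count(1)]
    for i, r in enumerate(choices):
        reward = -latency(r, loads[r])  # payoff = negative travel time
        Q[i][r] += ALPHA * (reward + GAMMA * max(Q[i]) - Q[i][r])

# At an approximate Nash equilibrium no vehicle gains by unilaterally
# switching routes, so the two greedy-route latencies should be close.
greedy = [max((0, 1), key=lambda r: Q[i][r]) for i in range(N_VEHICLES)]
split = [greedy.count(0), greedy.count(1)]
print("greedy split:", split,
      "latencies:", [round(latency(r, split[r]), 2) for r in (0, 1)])
```

In this sketch the learned route split approximately equalizes the two latencies, which is the equilibrium condition the paper's Nash equilibrium coefficient is measuring against.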
Funder
Shanghai Soft Science Key Project
Subject
Management, Monitoring, Policy and Law; Renewable Energy, Sustainability and the Environment; Geography, Planning and Development; Building and Construction
Cited by
2 articles.