Author:
Wang Liwen, Yang Shuo, Yuan Kang, Huang Yanjun, Chen Hong
Abstract
Model predictive control is widely used in the design of autonomous driving algorithms. However, its parameters are sensitive to dynamically varying driving conditions, which makes it difficult to implement in practice. This study therefore presents a self-learning algorithm based on reinforcement learning to tune a model predictive controller. Specifically, the proposed algorithm extracts features of dynamic traffic scenes and adjusts the weight coefficients of the model predictive controller. A risk threshold model is proposed to classify the risk level of a scene from its features; it aids the design of the reinforcement learning reward function and ultimately improves the adaptability of the model predictive controller to real-world scenarios. The proposed algorithm is compared to a pure model predictive controller in a car-following case. The results show that the proposed method enables an autonomous vehicle to reasonably adjust the priority of its performance indices in different scenarios according to risk variations, demonstrating good scenario adaptability while guaranteeing safety.
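The abstract describes risk-level classification from scene features and risk-dependent reward shaping for the reinforcement learning tuner. The following is a minimal, hypothetical Python sketch of that idea; the names (SceneFeatures, classify_risk, MPCWeights, shaped_reward), the time-to-collision thresholds, and the weighting constants are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: risk-threshold classification and risk-based reward shaping
# for an RL agent that tunes MPC weight coefficients in a car-following scenario.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2


@dataclass
class SceneFeatures:
    gap: float        # distance to lead vehicle [m]
    rel_speed: float  # closing speed [m/s], positive when the gap shrinks
    ego_speed: float  # ego vehicle speed [m/s]


def classify_risk(f: SceneFeatures,
                  ttc_high: float = 3.0,
                  ttc_medium: float = 6.0) -> RiskLevel:
    """Threshold-style risk classification using time-to-collision (assumed thresholds)."""
    if f.rel_speed <= 0.0:
        return RiskLevel.LOW            # gap is opening, no closing risk
    ttc = f.gap / f.rel_speed
    if ttc < ttc_high:
        return RiskLevel.HIGH
    if ttc < ttc_medium:
        return RiskLevel.MEDIUM
    return RiskLevel.LOW


@dataclass
class MPCWeights:
    w_tracking: float  # weight on gap/speed tracking error
    w_comfort: float   # weight on acceleration / jerk
    w_safety: float    # weight on safety-distance violation


def shaped_reward(risk: RiskLevel, tracking_err: float,
                  accel: float, safety_margin: float) -> float:
    """Reward that re-prioritizes performance indices by risk level:
    high risk emphasizes safety, low risk emphasizes comfort and tracking."""
    if risk is RiskLevel.HIGH:
        k_safe, k_track, k_comf = 5.0, 0.5, 0.1
    elif risk is RiskLevel.MEDIUM:
        k_safe, k_track, k_comf = 2.0, 1.0, 0.5
    else:
        k_safe, k_track, k_comf = 0.5, 1.0, 1.0
    return (k_safe * min(safety_margin, 0.0)   # penalize only negative margins
            - k_track * abs(tracking_err)
            - k_comf * abs(accel))


# An RL policy (not shown) would observe SceneFeatures, output MPCWeights for the
# controller, and be trained on shaped_reward collected while the MPC runs.
features = SceneFeatures(gap=20.0, rel_speed=8.0, ego_speed=25.0)
print(classify_risk(features))  # RiskLevel.HIGH (time-to-collision = 2.5 s)
```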
Publisher
Springer Science and Business Media LLC
Subject
Industrial and Manufacturing Engineering, Mechanical Engineering
Cited by
1 article.