Affiliation:
1. California PATH, University of California, Berkeley, CA, USA
Abstract
Path tracking is an essential task for autonomous vehicles (AVs), for which controllers are designed to issue commands so that the AV follows the planned path accurately, ensuring operational safety, comfort, and efficiency. Because solving the time-varying nonlinear vehicle dynamics problem remains challenging, deep neural network (NN) methods, with their capability to handle nonlinear systems, provide an alternative approach to tackle the difficulties. This study explores the potential of deep reinforcement learning (DRL) for vehicle control and applies it to the path tracking task. Proximal policy optimization (PPO) is selected as the DRL algorithm and is combined with the conventional pure pursuit (PP) method to form the vehicle controller architecture. The PP method generates a baseline steering command, and the PPO policy derives a correction command that mitigates the inaccuracy of the PP baseline. Blending the two controllers makes the overall operation more robust and adaptive and improves tracking performance. This paper describes the structure, settings, and training process of the PPO controller. Simulation experiments based on the proposed methodology show that path tracking capability in low-speed driving conditions is significantly enhanced.
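The abstract describes a blended controller in which a geometric pure pursuit command is augmented by a learned correction. The sketch below illustrates that idea only in outline; the function names, observation vector, and steering limit are assumptions for illustration, not the paper's implementation, and the PPO actor is represented as an opaque callable.

```python
import numpy as np

def pure_pursuit_steering(alpha, lookahead_dist, wheelbase):
    """Geometric pure pursuit steering angle (rad).

    alpha: heading error to the look-ahead point on the path (rad)
    lookahead_dist: distance to the look-ahead point (m)
    wheelbase: vehicle wheelbase (m)
    """
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead_dist)

def blended_steering(alpha, lookahead_dist, wheelbase, ppo_policy, obs,
                     steer_limit=0.5):
    """Baseline pure pursuit command plus a learned correction.

    ppo_policy(obs) is assumed to return a small steering correction (rad)
    from a trained PPO actor; the observation contents and the steering
    limit are illustrative assumptions.
    """
    delta_pp = pure_pursuit_steering(alpha, lookahead_dist, wheelbase)
    delta_corr = ppo_policy(obs)  # correction term from the PPO actor
    return float(np.clip(delta_pp + delta_corr, -steer_limit, steer_limit))
```

In this reading, the PP term keeps the command geometrically sensible even when the policy is untrained, while the learned correction compensates for dynamics the geometric model ignores.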
Funder
Berkeley DeepDrive, industry-academia consortium
Subject
Mechanical Engineering, Aerospace Engineering
Cited by
38 articles.