Affiliation:
1. Automated Driving Laboratory, Ohio State University, Columbus, OH 43212, USA
Abstract
The application of autonomous driving system (ADS) technology can significantly reduce accidents involving vulnerable road users (VRUs) that are caused by driver error. This paper proposes a novel hierarchical deep reinforcement learning (DRL) framework for high-performance collision avoidance, which enables the automated driving agent to perform collision avoidance maneuvers while maintaining appropriate speeds and acceptable social distancing. The novelty of the proposed DRL method is its ability to handle dynamic obstacle avoidance, which is necessary because pedestrians move dynamically as they interact with nearby ADSs. This is an improvement over existing DRL frameworks, which have only been developed and demonstrated for stationary obstacle avoidance problems. The hybrid A* path-searching algorithm is first applied to compute a pre-defined path marked by waypoints, and a low-level path-following controller is used in cases where no VRUs are detected. Upon detection of a VRU, however, a high-level DRL collision avoidance controller is activated, prompting the vehicle to either decelerate or change its trajectory to prevent a potential collision. The CARLA simulator is used to train the proposed DRL collision avoidance controller, and virtual raw sensor data are utilized to enhance the realism of the simulations. The model-in-the-loop (MIL) methodology is used to assess the efficacy of the proposed DRL ADS routine. Compared with the traditional end-to-end DRL approach, which combines high-level decision making with low-level control, the proposed hierarchical DRL agents demonstrate superior performance.
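The abstract describes a two-layer architecture: a hybrid A* planner produces a waypoint path tracked by a low-level controller, and a high-level DRL collision avoidance policy takes over whenever a VRU is detected. The following is a minimal sketch of that switching logic only, not the authors' implementation; all names (hybrid_a_star_plan, PathFollowingController, DRLCollisionAvoidancePolicy, detect_vrus) are hypothetical placeholders introduced here for illustration.

```python
# Sketch of the hierarchical control loop suggested by the abstract.
# Every class and function name below is an assumed placeholder,
# not the paper's actual code.

from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]  # (x, y) position in map coordinates


@dataclass
class VehicleCommand:
    throttle: float = 0.0
    steer: float = 0.0
    brake: float = 0.0


def hybrid_a_star_plan(start: Waypoint, goal: Waypoint) -> List[Waypoint]:
    """Placeholder for the hybrid A* search returning a waypoint path."""
    raise NotImplementedError


def detect_vrus(sensor_frame) -> List[Waypoint]:
    """Placeholder perception step: positions of any detected VRUs."""
    return []


class PathFollowingController:
    """Low-level controller that tracks the pre-planned waypoint path."""

    def __init__(self, path: List[Waypoint]):
        self.path = path

    def step(self, vehicle_state) -> VehicleCommand:
        # e.g. a geometric steering law plus a speed controller (assumed)
        raise NotImplementedError


class DRLCollisionAvoidancePolicy:
    """High-level DRL agent producing avoidance actions near VRUs."""

    def step(self, observation) -> VehicleCommand:
        # e.g. decelerate or shift the trajectory away from the pedestrian
        raise NotImplementedError


def control_step(vehicle_state, sensor_frame,
                 follower: PathFollowingController,
                 avoider: DRLCollisionAvoidancePolicy) -> VehicleCommand:
    """One control tick: follow the pre-defined path unless a VRU is
    detected, in which case the DRL avoidance controller is activated."""
    vrus = detect_vrus(sensor_frame)
    if vrus:
        return avoider.step((vehicle_state, vrus))
    return follower.step(vehicle_state)
```

In this sketch the switch between controllers is a simple presence check on detected VRUs; how the paper actually arbitrates between the layers, and what observation the DRL agent consumes, is not specified in the abstract.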
Funder
Carnegie Mellon University’s Safety21 National University Transportation Center