A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control
Authors:
Kim Jihun 1, Park Sanghoon 1, Kim Jeesu 2, Yoo Jinwoo 3
Affiliations:
1. Graduate School of Automotive Engineering, Kookmin University, Seoul 02707, Republic of Korea
2. Departments of Cogno-Mechatronics Engineering and Optics and Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
3. Department of Automobile and IT Convergence, Kookmin University, Seoul 02707, Republic of Korea
Abstract
As autonomous vehicles (AVs) advance to higher levels of autonomy and performance, the associated technologies are becoming increasingly diverse. Lane-keeping systems (LKS), a key AV function, considerably enhance driver convenience. With drivers increasingly relying on autonomous driving technologies, the importance of safety features, such as fail-safe mechanisms in the event of sensor failures, has gained prominence. Therefore, this paper proposes a reinforcement learning (RL) control method for lane-keeping that uses surrounding object information derived from LiDAR sensors instead of camera sensors for the LKS. This approach uses surrounding vehicle and object information as observations for the RL framework to keep the vehicle in its current lane. The learning environment is established by integrating simulation tools: IPG CarMaker, which provides the vehicle dynamics, and MATLAB Simulink, which is used for data analysis and RL model creation. To further validate the applicability of the LiDAR sensor data in real-world settings, Gaussian noise is introduced in the virtual simulation environment to mimic sensor noise under actual operating conditions.
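As a rough illustration of the observation pipeline described in the abstract, the Python sketch below shows how LiDAR-derived relative positions and velocities of surrounding vehicles might be perturbed with Gaussian noise before being assembled into an observation vector for an RL policy. The function name, noise standard deviations, and observation layout are assumptions chosen for illustration; the paper's actual implementation is built in IPG CarMaker and MATLAB Simulink.

```python
import numpy as np

# Hypothetical noise levels (assumptions, not values from the paper).
POS_NOISE_STD = 0.1   # meters, std of Gaussian noise on relative positions
VEL_NOISE_STD = 0.05  # m/s, std of Gaussian noise on relative velocities

def make_observation(surrounding_vehicles, rng=None):
    """Build a flat observation vector from surrounding-vehicle states.

    Each entry of `surrounding_vehicles` is (rel_x, rel_y, rel_vx, rel_vy)
    relative to the ego vehicle, as might be derived from LiDAR detections.
    Gaussian noise is added to mimic real sensor noise, analogous to the
    noise injection described for the simulation environment.
    """
    rng = rng or np.random.default_rng()
    obs = []
    for rel_x, rel_y, rel_vx, rel_vy in surrounding_vehicles:
        obs.append(rel_x + rng.normal(0.0, POS_NOISE_STD))
        obs.append(rel_y + rng.normal(0.0, POS_NOISE_STD))
        obs.append(rel_vx + rng.normal(0.0, VEL_NOISE_STD))
        obs.append(rel_vy + rng.normal(0.0, VEL_NOISE_STD))
    return np.asarray(obs, dtype=np.float32)

# Example: two surrounding vehicles detected ahead-left and ahead-right.
if __name__ == "__main__":
    detections = [(25.0, -1.8, -2.0, 0.0), (40.0, 1.9, 1.5, 0.0)]
    print(make_observation(detections))
```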
Funders:
National Research Foundation of Korea; Korean government
Subjects:
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry