Abstract
We propose a new, hierarchical architecture for behavioral planning of vehicle models usable as realistic non-player vehicles in serious games related to traffic and driving. These agents, trained with deep reinforcement learning (DRL), decide their motion by making high-level decisions, such as “keep lane”, “overtake” and “go to rightmost lane”. This mirrors a driver’s high-level reasoning and takes into account the availability of advanced driving assistance systems (ADAS) in current vehicles. Compared to a low-level decision-making system, our model performs better both in terms of safety and speed. As a significant advantage, the proposed approach reduces the number of training steps by more than one order of magnitude. This makes the development of new models much more efficient, which is key for implementing vehicles featuring different driving styles. We also demonstrate that, by simply tweaking the reinforcement learning (RL) reward function, it is possible to train agents characterized by different driving behaviors. We further employed continual learning, starting the training of a more specialized agent from a base model. This significantly reduced the number of training steps while keeping similar vehicular performance figures. However, the characteristics of the specialized agents are deeply influenced by those of the baseline agent.
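As an illustration only (not the authors' code), the sketch below shows how the high-level action space named in the abstract and a reward function with tunable weights might look; the weight names, the headway threshold, and the trade-off values are hypothetical assumptions used to make the idea of "tweaking the reward to obtain different driving styles" concrete.

```python
# Illustrative sketch (assumed names and values, not the paper's implementation):
# a discrete high-level action space and a reward whose weights can be tuned
# to trade speed against safety, yielding different driving styles.
from dataclasses import dataclass
from enum import IntEnum


class HighLevelAction(IntEnum):
    KEEP_LANE = 0
    OVERTAKE = 1
    GO_TO_RIGHTMOST_LANE = 2


@dataclass
class RewardWeights:
    # Changing these weights is the "reward tweak" that produces different styles.
    w_speed: float = 1.0        # reward for tracking the desired cruise speed
    w_safety: float = 4.0       # penalty weight for unsafe time headway
    w_lane_change: float = 0.1  # small cost to discourage needless maneuvers


def reward(speed: float, target_speed: float, headway_s: float,
           changed_lane: bool, w: RewardWeights) -> float:
    """Scalar reward combining speed tracking, a safety penalty, and a
    lane-change cost; only the weights differ between driving styles."""
    speed_term = 1.0 - abs(speed - target_speed) / max(target_speed, 1e-6)
    safety_term = -1.0 if headway_s < 2.0 else 0.0  # assumed 2 s headway threshold
    lane_term = -1.0 if changed_lane else 0.0
    return (w.w_speed * speed_term
            + w.w_safety * safety_term
            + w.w_lane_change * lane_term)


# Example: an "aggressive" style might use RewardWeights(w_speed=2.0, w_safety=1.0),
# while a "prudent" one might use RewardWeights(w_speed=0.5, w_safety=8.0).
```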
Subject
Applied Mathematics, Artificial Intelligence, Computer Graphics and Computer-Aided Design, Human-Computer Interaction, Education, Software
Cited by
1 article.
1. Introduction to the Special Issue on GaLA Conf 2022; International Journal of Serious Games; 2023-11-25