Abstract
Making decisions in complex, multi-scenario environments presents great challenges for autonomous driving systems (ADS). In recent years, deep reinforcement learning (DRL) algorithms have made remarkable breakthroughs in decision-making. However, many problems remain, such as sparse rewards and slow convergence, when traditional DRL faces multiple sub-goals. In this paper, we propose a hierarchical deep deterministic policy gradient (DDPG) based on the BDIK model for autonomous driving, which enables an ADS to make decisions in a human-like, deliberative way while handling uncertainties in the environment. First, we propose the BDIK model, an extension of the Belief-Desire-Intention (BDI) model, so that agents are guided by domain knowledge when generating their sub-goals. Furthermore, in contrast to traditional BDI systems, whose plans are crafted by hand, the BDIK hierarchical DDPG (BDIK HDDPG) algorithm derives optimal actions automatically in an uncertain environment. The results show that our method outperforms standard DDPG in both convergence speed and effectiveness in multiple complex scenarios.