Author:
He Xiangkun, Lou Baichuan, Yang Haohan, Lv Chen
Abstract
Reinforcement learning has demonstrated its potential in a series of challenging domains.
However, many real-world decision making tasks involve unpredictable environmental changes or unavoidable perception errors that are often enough to mislead an agent into making suboptimal decisions and even cause catastrophic failures.
In light of these potential risks, applying reinforcement learning to the safety-critical autonomous driving domain remains challenging without ensuring robustness against environmental uncertainties (e.g., changes in road adhesion or measurement noise).
Therefore, this paper proposes a novel constrained adversarial reinforcement learning approach for robust decision making of autonomous vehicles at highway on-ramps.
Environmental disturbance is modelled as an adversarial agent that can learn an optimal adversarial policy to thwart the autonomous driving agent.
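One way to read this setup is as a two-player zero-sum game between the driving policy and a disturbance policy. The formulation below is an illustrative sketch of that idea; the symbols $\pi$, $\nu$, $d_t$ and the reward signature are shorthand for exposition, not the paper's notation:

$$
\max_{\pi}\;\min_{\nu}\;\mathbb{E}\!\left[\sum_{t=0}^{T}\gamma^{t}\,r(s_t, a_t, d_t)\right],
\qquad a_t \sim \pi(\cdot \mid s_t),\quad d_t \sim \nu(\cdot \mid s_t),
$$

where $\pi$ is the driving (protagonist) policy, $\nu$ is the adversarial disturbance policy, and $d_t$ is the environmental disturbance (e.g., a road-adhesion change) injected at step $t$.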
Meanwhile, observation perturbation is approximated to maximize the variation of the perturbed policy through a white-box adversarial attack technique.
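For illustration only, a white-box attack in this spirit can be sketched as a one-step signed-gradient (FGSM-style) perturbation that pushes the policy away from the action it prefers on the clean observation. This is a minimal sketch, not the authors' implementation; the network architecture, observation dimension and epsilon budget are hypothetical:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical driving policy: 8-dimensional observation -> logits over 3 discrete actions.
policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 3))

def fgsm_observation_attack(policy, obs, epsilon=0.05):
    """One-step white-box attack: nudge obs so the clean policy's preferred action loses probability."""
    obs_adv = obs.clone().detach().requires_grad_(True)
    logits = policy(obs_adv)
    target_action = logits.argmax(dim=-1).detach()         # action preferred on the clean observation
    # Log-probability of that action; descending it drives the perturbed policy away from the clean one.
    log_prob = torch.log_softmax(logits, dim=-1)[..., target_action]
    log_prob.backward()
    return (obs - epsilon * obs_adv.grad.sign()).detach()  # signed-gradient step within the epsilon budget

clean_obs = torch.randn(8)
perturbed_obs = fgsm_observation_attack(policy, clean_obs)
```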
Furthermore, a constrained adversarial actor-critic algorithm is presented to optimize an on-ramp merging policy while keeping the variations of the attacked driving policy and action-value function within bounds.
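A hedged way to formalize this constraint is as a policy-optimization problem with bounds on how much the perturbed observation $\tilde{s}$ may shift the policy and the action-value function; the thresholds $\epsilon_{\pi}$ and $\epsilon_{Q}$ are illustrative placeholders rather than the paper's exact constraints:

$$
\max_{\theta}\; J(\pi_{\theta})
\quad\text{s.t.}\quad
\mathbb{E}_{s}\!\left[D_{\mathrm{KL}}\!\big(\pi_{\theta}(\cdot \mid s)\,\|\,\pi_{\theta}(\cdot \mid \tilde{s})\big)\right] \le \epsilon_{\pi},
\qquad
\mathbb{E}_{s,a}\!\left[\big|Q(s,a) - Q(\tilde{s},a)\big|\right] \le \epsilon_{Q},
$$

where $J(\pi_{\theta})$ is the expected return of the merging policy and $\tilde{s}$ denotes the adversarially perturbed observation.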
Finally, the proposed robust highway on-ramp merging decision making method for autonomous vehicles is evaluated in three stochastic mixed traffic flows with different densities, and its effectiveness is demonstrated in comparison with competitive baselines.
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Cited by
1 article.