Abstract
Deep reinforcement learning (Deep RL) algorithms are typically defined over fully continuous or fully discrete action spaces. Among Deep RL algorithms, soft actor–critic (SAC) is a powerful method capable of handling complex, continuous state–action spaces. However, although SAC is robust in complex and dynamic environments, its main drawbacks are long training times and poor data efficiency. One proposed solution to this issue is to utilize human feedback. In this paper, we investigate different forms of human feedback: head direction vs. steering, and discrete vs. continuous feedback. To this end, real-time human demonstrations from steering and head direction, with discrete or continuous actions, were employed as human feedback in an autonomous driving task in the CARLA simulator. To obtain real-time human demonstrations, actions alternated between a human expert and SAC. Furthermore, to test the method without potential individual differences in human performance, we compared discrete vs. continuous feedback in an inverted pendulum task, with an ideal controller standing in for the human expert. In both the CARLA and inverted pendulum tasks, discrete feedback yielded a significant reduction in training time and a significant increase in accumulated reward compared with continuous feedback, while the action space itself remained continuous. Head direction feedback was also shown to be almost as good as steering feedback. We expect our findings to provide a simple yet efficient training method for Deep RL in autonomous driving, utilizing multiple sources of human feedback.
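The alternating-demonstration scheme described above can be illustrated with a short sketch on the inverted pendulum task. This is a minimal sketch under stated assumptions, not the authors' implementation: it assumes a Gymnasium Pendulum-v1 environment, and the names PlaceholderSAC, expert_action, and discretize, along with the PD gains and feedback bins, are all illustrative stand-ins. On even steps the action comes from an "expert" (an ideal PD controller standing in for the human); on odd steps the learner acts; expert feedback is snapped to a few discrete bins while the learner's action space stays continuous.

```python
# Minimal sketch (assumptions labeled below), not the authors' code.
import numpy as np
import gymnasium as gym


class PlaceholderSAC:
    """Stand-in for a real SAC learner (e.g. stable-baselines3 SAC);
    kept trivial so the alternation logic stays runnable."""

    def __init__(self, action_space):
        self.action_space = action_space
        self.buffer = []  # replay buffer of (obs, action, reward, next_obs, done)

    def select_action(self, obs):
        return self.action_space.sample()  # a real agent would sample its policy

    def store(self, *transition):
        self.buffer.append(transition)

    def update(self):
        pass  # an off-policy SAC gradient step would go here


def expert_action(obs):
    """Ideal controller standing in for the human: a PD law on pendulum angle."""
    cos_th, sin_th, th_dot = obs
    th = np.arctan2(sin_th, cos_th)            # 0 rad = upright in Pendulum-v1
    torque = -8.0 * th - 1.5 * th_dot          # assumed PD gains
    return np.clip(np.array([torque], dtype=np.float32), -2.0, 2.0)


def discretize(action, bins=(-2.0, 0.0, 2.0)):
    """Discrete feedback: snap the expert's continuous action to the nearest
    bin; the agent's own action space remains continuous throughout."""
    nearest = min(bins, key=lambda b: abs(b - float(action[0])))
    return np.array([nearest], dtype=np.float32)


env = gym.make("Pendulum-v1")
agent = PlaceholderSAC(env.action_space)

obs, _ = env.reset(seed=0)
for step in range(10_000):
    if step % 2 == 0:                          # expert feedback on even steps
        action = discretize(expert_action(obs))
    else:                                      # learner acts on odd steps
        action = agent.select_action(obs)
    next_obs, reward, terminated, truncated, _ = env.step(action)
    agent.store(obs, action, reward, next_obs, terminated)
    agent.update()
    obs = next_obs if not (terminated or truncated) else env.reset()[0]
```

Dropping the discretize call from the expert path recovers the continuous-feedback condition, so the two conditions compared in the paper differ only in how the expert's action is quantized.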
Subject
Electrical and Electronic Engineering, Industrial and Manufacturing Engineering, Control and Optimization, Mechanical Engineering, Computer Science (miscellaneous), Control and Systems Engineering