Dyna-Validator: A Model-based Reinforcement Learning Method with Validated Simulated Experiences
Published: 2023-08-31
Issue: 5
Volume: 18
ISSN: 1841-9844
Container-title: INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL
Short-container-title: INT J COMPUT COMMUN, Int. J. Comput. Commun. Control
Author:
Zhang Hengsheng, Li Jingchen, He Ziming, Zhu Jinhui, Shi Haobin
Abstract
Dyna is a planning paradigm that naturally weaves learning and planning together through an environment model. Dyna-style reinforcement learning improves sample efficiency by using simulated experience generated by the environment model to update the value function. However, existing Dyna-style planning methods are usually tabular and therefore suitable only for tasks with low-dimensional, small-scale state spaces. In addition, the quality of the simulated experience they generate cannot be guaranteed, which significantly limits their application to tasks such as continuous control of high-dimensional robots and autonomous driving. To this end, we propose a model-based approach that controls planning through a validator. The validator filters high-quality experiences for policy learning and decides whether to stop planning. To deal with the exploration-exploitation dilemma in reinforcement learning, an action-selection strategy is designed that combines an ϵ-greedy policy with a simulated annealing (SA) cooling schedule. The strong performance of the proposed method is demonstrated on a set of classic Atari games. Experimental results show that learning dynamics models can improve sample efficiency in some games, and this benefit is maximized by choosing an appropriate number of planning steps. In the planning phase, our method keeps a small gap to the current state-of-the-art model-based reinforcement learning method (MuZero). To achieve a good compromise between model accuracy and planning step size, planning must be controlled appropriately. Applying the method to a physical robot system helps reduce the influence of an imprecise depth-prediction model on the task. Without human supervision, it becomes easier to collect training data and learn complex skills (such as grasping and carrying items), while scaling more effectively to tasks that have never been seen before.
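
The validator-gated planning described above can be made concrete with a short sketch. The Python fragment below is a minimal illustration, not the paper's actual algorithm: the dynamics model, the validator's scoring and stopping rules, and the tabular Q-update are all hypothetical stand-ins (the paper itself targets high-dimensional tasks, where the value function would be a neural network).

import random

# Minimal sketch of a validator-gated Dyna planning loop. DummyModel and
# DummyValidator are hypothetical placeholders; the paper does not specify
# these interfaces.

class DummyModel:
    """Placeholder learned dynamics model."""
    def predict(self, s, a):
        return s + a, random.random()  # toy next state and reward

class DummyValidator:
    """Placeholder experience validator."""
    def score(self, s, a, r, s_next):
        return random.random()         # stand-in quality estimate in [0, 1]
    def should_stop(self, step, score):
        return score < 0.1             # stand-in early-stopping rule

def planning_phase(q, model, validator, states, actions,
                   max_steps=50, threshold=0.8, alpha=0.1, gamma=0.99):
    """Generate simulated experience; update Q only from validated transitions."""
    for step in range(max_steps):
        s = random.choice(states)                # previously visited state
        a = random.choice(actions)               # candidate action
        s_next, r = model.predict(s, a)          # simulated transition
        score = validator.score(s, a, r, s_next)
        if score >= threshold:                   # keep only high-quality experience
            old = q.get((s, a), 0.0)
            best_next = max(q.get((s_next, b), 0.0) for b in actions)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
        if validator.should_stop(step, score):   # validator may end planning early
            break

q_table = {}
planning_phase(q_table, DummyModel(), DummyValidator(),
               states=[0, 1, 2], actions=[0, 1])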
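
Likewise, the action-selection strategy combining ϵ-greedy exploration with an SA cooling schedule admits a simple reading: the exploration rate plays the role of the annealing temperature and decays geometrically toward a floor. The sketch below assumes that reading; all parameter names and values are illustrative, not the paper's exact formulation.

import random

# Minimal sketch: epsilon-greedy action selection whose exploration rate is
# annealed with a simulated-annealing-style geometric cooling schedule.
# eps_start, eps_min, and cooling_rate are assumed parameters.

class AnnealedEpsilonGreedy:
    def __init__(self, eps_start=1.0, eps_min=0.05, cooling_rate=0.995):
        self.eps = eps_start              # initial exploration rate ("temperature")
        self.eps_min = eps_min            # floor so exploration never vanishes
        self.cooling_rate = cooling_rate  # geometric cooling factor per step

    def select_action(self, q_values):
        """Explore with probability eps, otherwise act greedily; then cool eps."""
        if random.random() < self.eps:
            action = random.randrange(len(q_values))                      # explore
        else:
            action = max(range(len(q_values)), key=q_values.__getitem__)  # exploit
        self.eps = max(self.eps_min, self.eps * self.cooling_rate)        # cool down
        return action

policy = AnnealedEpsilonGreedy()
print(policy.select_action([0.1, 0.5, 0.2, 0.0]))

Geometric cooling keeps exploration high early in training and gradually shifts toward exploitation, mirroring the temperature decay of simulated annealing.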
Publisher
Agora University of Oradea