The Sufficiency of Off-Policyness and Soft Clipping: PPO Is Still Insufficient according to an Off-Policy Measure
Published: 2023-06-26
Issue: 6
Volume: 37
Pages: 7078-7086
ISSN: 2374-3468
Container-title: Proceedings of the AAAI Conference on Artificial Intelligence
Short-container-title: AAAI
Language: English
Authors: Chen Xing, Diao Dongcui, Chen Hechang, Yao Hengshuai, Piao Haiyin, Sun Zhixiao, Yang Zhiwei, Goebel Randy, Jiang Bei, Chang Yi
Abstract
The popular Proximal Policy Optimization (PPO) algorithm approximates the solution in a clipped policy space. Do better policies exist outside of this space? By using a novel surrogate objective that employs the sigmoid function (which provides an interesting mechanism for exploration), we find that the answer is "YES", and that the better policies are in fact located very far from the clipped space. We show that PPO is insufficiently "off-policy" according to an off-policy metric called DEON. Our algorithm explores in a much larger policy space than PPO, and it maximizes the Conservative Policy Iteration (CPI) objective better than PPO during training. To the best of our knowledge, all current PPO methods use the clipping operation and optimize within the clipped policy space. Our method is the first of its kind, and it advances the understanding of CPI optimization and policy gradient methods. Code is available at https://github.com/raincchio/P3O.
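For context, the sketch below contrasts the standard PPO clipped surrogate with a sigmoid-weighted alternative of the kind the abstract alludes to. The abstract does not give the exact P3O objective, so the sigmoid_surrogate function here is an illustrative assumption rather than the authors' formula; see the linked repository for the actual implementation.

```python
# Minimal PyTorch sketch: PPO's clipped surrogate vs. a hypothetical
# sigmoid-weighted surrogate (an assumption for illustration, not P3O itself).
import torch

def ppo_clip_surrogate(ratio, advantage, eps=0.2):
    # Standard PPO objective: clip the probability ratio to [1 - eps, 1 + eps],
    # which confines the update to a small neighborhood of the behavior policy.
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return torch.min(unclipped, clipped).mean()

def sigmoid_surrogate(ratio, advantage, tau=1.0):
    # Hypothetical soft alternative: weight the advantage by a sigmoid of the
    # ratio instead of hard-clipping it, so gradients do not vanish abruptly
    # and the policy can move farther from the sampling policy.
    return (torch.sigmoid(tau * (ratio - 1.0)) * advantage).mean()

if __name__ == "__main__":
    ratio = torch.exp(torch.randn(64) * 0.5)  # pi_new / pi_old for a batch
    adv = torch.randn(64)                     # advantage estimates
    print("PPO clip objective:", ppo_clip_surrogate(ratio, adv).item())
    print("Sigmoid objective :", sigmoid_surrogate(ratio, adv).item())
```

The design point the abstract makes is that the hard clip freezes gradients outside a narrow ratio band, whereas a soft (sigmoid-like) weighting keeps the objective informative for policies much farther from the behavior policy.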
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by: 1 article.