Solving Partially Observable 3D-Visual Tasks with Visual Radial Basis Function Network and Proximal Policy Optimization

Authors:

Hautot Julien 1, Teulière Céline 1, Azzaoui Nourddine 2

Affiliation:

1. Institut Pascal, Clermont Auvergne INP, Université Clermont Auvergne, CNRS, 63178 Aubière, France

2. Laboratoire de Mathématiques Blaise Pascal, Université Clermont Auvergne, CNRS, 63178 Aubière, France

Abstract

Visual Reinforcement Learning (RL) has been widely investigated in recent decades. Existing approaches are often composed of multiple networks and require massive computational power to solve partially observable tasks from high-dimensional data such as images. State Representation Learning (SRL) has been shown to improve the performance of visual RL by reducing high-dimensional data to a compact representation, but it still often relies on deep networks and on environment-specific training. In contrast, we propose a lighter, more generic method that extracts sparse and localized features from raw images without training. We achieve this using a Visual Radial Basis Function Network (VRBFN), which offers significant practical advantages, including efficient and accurate training with minimal complexity owing to its two linear layers. Scalability and resilience to noise are essential for real-world applications, as real sensors are subject to change and noise; unlike CNNs, which may require extensive retraining, this network might only need minor fine-tuning. We evaluate the efficiency of the VRBFN representation on different RL tasks using Proximal Policy Optimization (PPO). We present a large study comparing our extraction method with five classical visual RL and SRL approaches on five first-person, partially observable scenarios. We show that this approach exhibits appealing properties such as sparsity and robustness to noise, and that RL agents trained on this representation outperform the other tested methods on four of the five proposed scenarios.
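
The abstract does not reproduce the implementation details. As a rough illustration of the idea, the sketch below shows one way a fixed (untrained) Gaussian radial-basis-function extractor could turn a raw grayscale frame into a sparse, localized feature vector; the number of units, widths, and random centers are illustrative assumptions, not the authors' actual configuration, and the PPO policy that would consume these features is omitted.

```python
import numpy as np

class VisualRBFExtractor:
    """Minimal sketch of a fixed (non-trained) Gaussian RBF feature extractor.

    Each unit responds to pixel intensities weighted by a Gaussian mask centered
    at a random image location, producing a sparse, localized feature vector.
    All hyperparameters below are illustrative assumptions, not the paper's values.
    """

    def __init__(self, height, width, n_units=256, sigma_frac=0.05, seed=0):
        rng = np.random.default_rng(seed)
        ys, xs = np.mgrid[0:height, 0:width]
        # Random unit centers drawn uniformly over the image plane (assumption).
        cy = rng.uniform(0.0, 1.0, n_units) * (height - 1)
        cx = rng.uniform(0.0, 1.0, n_units) * (width - 1)
        sigma = sigma_frac * max(height, width)
        # Precompute one Gaussian spatial mask per unit: shape (n_units, H, W).
        d2 = (ys[None] - cy[:, None, None]) ** 2 + (xs[None] - cx[:, None, None]) ** 2
        self.masks = np.exp(-d2 / (2.0 * sigma ** 2))
        # Normalize each mask so responses are comparable across units.
        self.masks /= self.masks.sum(axis=(1, 2), keepdims=True)

    def __call__(self, frame):
        """frame: (H, W) array of pixel intensities in [0, 1] -> (n_units,) features."""
        return np.tensordot(self.masks, frame, axes=([1, 2], [0, 1]))


# Example usage on a random frame; in the paper's setting such a feature vector
# would feed a small policy/value head trained with PPO (wiring omitted here).
extractor = VisualRBFExtractor(height=64, width=64)
features = extractor(np.random.rand(64, 64))
print(features.shape)  # (256,)
```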

Publisher

MDPI AG

Subject

Artificial Intelligence, Engineering (miscellaneous)

