Task-Based Visual Attention for Continually Improving the Performance of Autonomous Game Agents
Published: 2023-10-25
Issue: 21
Volume: 12
Page: 4405
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Ulu, Eren (1,2); Capin, Tolga (2); Çelikkale, Bora (3); Celikcan, Ufuk (1)
Affiliation:
1. Department of Computer Engineering, Hacettepe University, 06800 Ankara, Türkiye
2. Department of Computer Engineering, TED University, 06420 Ankara, Türkiye
3. Department of Software Engineering, Cankaya University, 06790 Ankara, Türkiye
Abstract
Deep Reinforcement Learning (DRL) has been applied effectively in various complex environments, such as playing video games. In many game environments, DeepMind’s baseline Deep Q-Network (DQN) agents performed at a level comparable to that of humans. However, these DRL models require many experience samples to learn, and they adapt poorly to changes in the environment and to increasing complexity. In this study, we propose the Attention-Augmented Deep Q-Network (AADQN), which incorporates a combined top-down and bottom-up attention mechanism into the DQN game agent to highlight task-relevant features of the input. Our AADQN model uses particle-filter-based top-down attention that dynamically teaches an agent how to play a game by focusing on the most task-relevant information. Evaluating our agent across eight Atari 2600 games of varying complexity, we demonstrate that our model surpasses the baseline DQN agent. Notably, our model achieves greater flexibility and higher scores within a reduced number of time steps. Across the eight game environments, AADQN achieved an average relative improvement of 134.93%. Pong and Breakout improved by 9.32% and 56.06%, respectively, while the more intricate games SpaceInvaders and Seaquest showed even larger improvements of 130.84% and 149.95%, respectively. This study shows that AADQN is especially productive in complex environments and yields modest gains in simpler ones.
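The abstract describes augmenting a DQN agent with an attention mechanism that re-weights task-relevant input features before Q-value estimation, but it does not specify implementation details. The following is a minimal PyTorch sketch of that general idea only: the encoder layout follows the standard DQN of Mnih et al. (2015), while the single-convolution attention head is an illustrative assumption and does not reproduce the paper's particle-filter-based top-down attention.

# Minimal sketch (PyTorch): a DQN whose convolutional features are re-weighted
# by a learned spatial attention mask before the Q-value head. The attention
# module here is a hypothetical stand-in, not the paper's particle-filter method.
import torch
import torch.nn as nn

class AttentionDQN(nn.Module):
    def __init__(self, n_actions: int, in_channels: int = 4):
        super().__init__()
        # Standard DQN-style convolutional encoder (Mnih et al., 2015 layout).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Assumed attention head: one saliency value per spatial location in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.q_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)            # (B, 64, 7, 7) for 84x84 inputs
        mask = self.attention(feats)       # (B, 1, 7, 7) spatial attention mask
        return self.q_head(feats * mask)   # emphasize task-relevant locations

# Usage with an Atari-style stack of four 84x84 grayscale frames.
if __name__ == "__main__":
    net = AttentionDQN(n_actions=6)
    q_values = net(torch.zeros(1, 4, 84, 84))
    print(q_values.shape)  # torch.Size([1, 6])

In this sketch the masked features simply replace the unmasked ones in the Q-value head, so the agent can be trained with an otherwise unmodified DQN loss and replay buffer.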
Subject
Electrical and Electronic Engineering; Computer Networks and Communications; Hardware and Architecture; Signal Processing; Control and Systems Engineering