eDA3-X: Distributed Attentional Actor Architecture for Interpretability of Coordinated Behaviors in Multi-Agent Systems
Published: 2023-07-21
Volume: 13
Issue: 14
Page: 8454
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Author:
Yoshinari Motokawa 1, Toshiharu Sugawara 1
Affiliation:
1. Department of Computer Science, Waseda University, Tokyo 169-8555, Japan
Abstract
In this paper, we propose an enhanced version of the distributed attentional actor architecture (eDA3-X) for model-free reinforcement learning. This architecture is designed to facilitate the interpretability of learned coordinated behaviors in multi-agent systems through the use of a saliency vector that captures partial observations of the environment. Our proposed method, in principle, can be integrated with any deep reinforcement learning method, as indicated by X, and can help us identify the information in input data that individual agents attend to during and after training. We then validated eDA3-X through experiments in the object collection game. We also analyzed the relationship between cooperative behaviors and three types of attention heatmaps (standard, positional, and class attentions), which provided insight into the information that the agents consider crucial when making decisions. In addition, we investigated how attention is developed by an agent through training experiences. Our experiments indicate that our approach offers a promising solution for understanding coordinated behaviors in multi-agent reinforcement learning.
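The saliency-based attention the abstract describes can be illustrated with a minimal sketch: a learned saliency (query) vector scores embedded patches of an agent's partial observation, and the resulting weights form an attention heatmap over the view. All names, the embedding dimension, and the 5x5 view size below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_heatmap(saliency, patches):
    """Scaled dot-product attention of a single saliency (query) vector
    over embedded observation patches. Returns one weight per patch;
    reshaped to the view's grid, these weights form a heatmap showing
    where the agent attends within its partial observation."""
    d_k = saliency.shape[-1]
    scores = patches @ saliency / np.sqrt(d_k)  # one score per patch
    return softmax(scores)

rng = np.random.default_rng(0)
view = 5                                       # hypothetical 5x5 partial view
patches = rng.normal(size=(view * view, 16))   # embedded observation patches
saliency = rng.normal(size=16)                 # learned saliency query vector

weights = attention_heatmap(saliency, patches)
heatmap = weights.reshape(view, view)          # attention heatmap over the view
```

In eDA3-X the analogous weights are produced per attention head during training, which is what allows the standard, positional, and class heatmaps to be inspected both during and after learning.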
Subject
Fluid Flow and Transfer Processes,Computer Science Applications,Process Chemistry and Technology,General Engineering,Instrumentation,General Materials Science