Author:
Sabri Oussama, Lehéricy Luc, Muzy Alexandre
Abstract
We consider the setting in which cooperating agents learn to achieve a common goal based solely on a global return resulting from all agents' behavior. The proposed method takes into account the agents' activity, which can be any additional information that helps solve multi-agent decentralized learning problems. We propose a gradient ascent algorithm and assess its performance on synthetic data.
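The abstract does not specify the algorithm's details, but the general idea of decentralized gradient ascent on a shared return can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the softmax policies, the all-or-nothing global reward, and the REINFORCE-style update are all assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_actions = 3, 2
# Each agent keeps its own preference vector (logits); no agent sees
# the others' actions, only the shared scalar return.
theta = np.zeros((n_agents, n_actions))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def global_return(actions):
    # Toy common goal (an assumption): reward 1.0 only when every
    # agent picks action 1, otherwise 0.0.
    return float(all(a == 1 for a in actions))

lr = 0.5
for _ in range(2000):
    probs = [softmax(theta[i]) for i in range(n_agents)]
    actions = [rng.choice(n_actions, p=p) for p in probs]
    R = global_return(actions)
    # Decentralized gradient ascent: each agent updates its own logits
    # using the shared return as the only learning signal.
    for i in range(n_agents):
        grad = -probs[i]
        grad[actions[i]] += 1.0
        theta[i] += lr * R * grad

learned = [int(np.argmax(theta[i])) for i in range(n_agents)]
```

With this toy reward, updates occur only on successful joint episodes, so every agent's preference drifts toward the coordinated action even though no agent observes the others.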
Funder
Agence Nationale de la Recherche
Publisher
Springer Science and Business Media LLC