Abstract
The advent of AlphaGo and its successors marked the beginning of a new paradigm in game-playing artificial intelligence, achieved by combining Monte Carlo tree search, a planning procedure, with deep learning. While the impact on the domain of games has been undeniable, it is less clear how useful similar approaches are in applications beyond games and how they must be adapted from the original methodology. We perform a systematic literature review of peer-reviewed articles detailing the application of neural Monte Carlo tree search methods in domains other than games. Our goal is to systematically assess how such methods are structured in practice and whether their success can be extended to other domains. We find applications in a variety of domains, many distinct ways of guiding the tree search with learned policy and value functions, and various training methods. Our review maps the current landscape of algorithms in the neural Monte Carlo tree search family as applied to practical problems, a first step towards a more principled way of designing such algorithms for specific problems and their requirements.
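The guidance mechanism the abstract refers to can be illustrated with the PUCT selection rule used in AlphaZero-style algorithms: during tree search, each child action is scored by its estimated value plus an exploration bonus weighted by the neural network's prior policy probability. The sketch below is a generic illustration, not a method from the review; the function names, the tuple layout of `children`, and the constant `c_puct=1.5` are illustrative assumptions.

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    """PUCT-style score: the learned value estimate (exploitation) plus an
    exploration bonus scaled by the policy network's prior probability.
    The bonus shrinks as the child accumulates visits."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration

def select_child(children, parent_visits, c_puct=1.5):
    """Pick the action maximizing the PUCT score.

    `children` maps each action to a (q_value, prior, visit_count) tuple,
    where q_value and prior come from the learned value and policy heads.
    """
    return max(
        children,
        key=lambda a: puct_score(children[a][0], children[a][1],
                                 parent_visits, children[a][2], c_puct),
    )
```

Under this rule, an action with a lower value estimate can still be selected if the policy prior is substantial and the action is under-visited, which is how the learned networks steer the search toward promising but unexplored branches.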
Funder
Deutsche Forschungsgemeinschaft
Publisher
Springer Science and Business Media LLC
References: 161 articles.