Abstract
In this paper, we introduce a natural learning rule for mean field games with finite state and action space, the so-called myopic adjustment process. The main motivation for these considerations is the complexity of the computations necessary to determine dynamic mean field equilibria, which makes it seem questionable whether agents are indeed able to play these equilibria. We prove that the myopic adjustment process converges locally towards strict stationary equilibria under rather broad conditions. Moreover, we also obtain a global convergence result under stronger, yet intuitive conditions.
Publisher
Springer Science and Business Media LLC
Subject
Statistics, Probability and Uncertainty; Economics and Econometrics; Social Sciences (miscellaneous); Mathematics (miscellaneous); Statistics and Probability