Authors:
Julian Gerstenberg, Ralph Neininger, Denis Spiegel
Abstract
In distributional reinforcement learning (RL), not only expected returns but the complete return distributions of a policy are taken into account. The return distribution for a fixed policy is given as the solution of an associated distributional Bellman equation. In this note, we consider general distributional Bellman equations and study the existence and uniqueness of their solutions, as well as tail properties of return distributions. We give necessary and sufficient conditions for the existence and uniqueness of return distributions and identify cases of regular variation.
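For orientation, the distributional Bellman equation for a fixed policy referred to above is commonly stated in random-variable form as follows; the notation G, R, gamma is the standard distributional-RL convention, supplied here for context rather than taken verbatim from the paper:

% Distributional Bellman equation for a fixed policy \pi (standard form).
% G^\pi(s): random return from state s; R(s): immediate reward in state s;
% S' ~ p^\pi(. | s): successor state under \pi; \gamma \in [0,1): discount factor;
% \stackrel{d}{=} denotes equality in distribution.
\[
  G^\pi(s) \stackrel{d}{=} R(s) + \gamma\, G^\pi(S'), \qquad S' \sim p^\pi(\cdot \mid s),
\]
% where, given S' = s', the term G^\pi(s') on the right is an independent copy
% of the return from s'. A solution is a family of return laws, one per state,
% satisfying this equation simultaneously for all states s.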
We link distributional Bellman equations to multivariate affine distributional equations. We show that any solution of a distributional Bellman equation can be obtained as the vector of marginal laws of a solution to a multivariate affine distributional equation. This makes the general theory of such equations applicable to the distributional reinforcement learning setting.
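The multivariate affine distributional equations mentioned above are, in their standard form, fixed-point equations of the following type; the generic notation X, A, B is added here for context and is not necessarily the paper's own:

% Multivariate affine distributional (fixed-point) equation.
% X: random vector in R^d whose law is the unknown; A: random d x d matrix;
% B: random vector in R^d; the pair (A, B) is independent of X on the right.
\[
  X \stackrel{d}{=} A X + B, \qquad (A, B) \text{ independent of } X,
\]
% i.e., a solution is a law of X invariant under the random affine map
% x \mapsto Ax + b. The paper's linkage realizes the vector of per-state
% return laws as the marginal laws of such a solution.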
Publisher:
American Institute of Mathematical Sciences (AIMS)