Affiliation:
1. Cornell University, New York, New York 10044
Abstract
We study the regret of offline reinforcement learning in an infinite-horizon discounted Markov decision process (MDP). While existing analyses of common approaches, such as fitted Q-iteration (FQI), suggest root-n convergence for regret, empirical behavior exhibits much faster convergence. In this paper, we present a finer regret analysis that exactly characterizes this phenomenon by providing fast rates for regret convergence. First, we show that given any estimate of the optimal quality function, the regret of the policy it defines converges at a rate given by the exponentiation of the estimate's pointwise convergence rate, thus speeding up the rate. The level of exponentiation depends on the level of noise in the decision-making problem, rather than in the estimation problem. We establish such noise levels for linear and tabular MDPs as examples. Second, we provide new analyses of FQI and Bellman residual minimization to establish the correct pointwise convergence guarantees. As specific cases, our results imply one-over-n rates in linear cases and exponential-in-n rates in tabular cases. We further generalize to nonparametric function approximation by deriving regret guarantees from Lp-convergence rates for estimating the optimal quality function rather than pointwise rates, where L2 guarantees for nonparametric estimation can be ensured under mild conditions.

Funding: This work was supported by the Division of Information and Intelligent Systems, National Science Foundation [Grant 1846210].
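The abstract's central object, fitted Q-iteration (FQI), can be illustrated with a minimal tabular sketch. The paper's offline, sample-based setting is not reproduced here; for illustration we assume a fully known transition tensor `P` and reward matrix `R` (both hypothetical toy inputs) and simply apply the Bellman optimality backup repeatedly, which is the population-level operation that FQI approximates from data.

```python
import numpy as np

def fitted_q_iteration(P, R, gamma, n_iters=200):
    """Tabular value iteration on Q, the idealized form of FQI.

    P: transition tensor, shape (S, A, S); P[s, a, s'] = prob of s' given (s, a).
    R: reward matrix, shape (S, A).
    gamma: discount factor in [0, 1).
    """
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(n_iters):
        V = Q.max(axis=1)       # greedy value of the current estimate
        Q = R + gamma * P @ V   # Bellman backup: (S, A, S) @ (S,) -> (S, A)
    return Q

# Hypothetical two-state, two-action MDP: action 0 moves to state 0,
# action 1 moves to state 1, deterministically.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[1.0, 0.0], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
Q = fitted_q_iteration(P, R, gamma=0.9)
policy = Q.argmax(axis=1)  # greedy policy defined by the Q estimate
```

The greedy policy extracted on the last line is exactly the object whose regret the paper analyzes: an error in `Q` only changes the policy where it flips an argmax, which is why regret can shrink faster than the Q-estimation error itself.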
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)