Affiliation:
1. University of Southern California, Los Angeles, CA, USA
Abstract
We consider multiple parallel Markov decision processes (MDPs) coupled by global constraints, where the time-varying objective and constraint functions can only be observed after the decision is made. Special attention is given to how well the decision maker can perform over T slots, starting from any state, compared to the best feasible randomized stationary policy in hindsight. We develop a new distributed online algorithm where each MDP makes its own decision in each slot after observing a multiplier computed from past information. While the scenario is significantly more challenging than the classical online learning context, the algorithm is shown to achieve tight O(√T) regret and constraint violations simultaneously. To obtain such a bound, we combine several new ingredients, including ergodicity and mixing-time bounds for weakly coupled MDPs, a new regret analysis for online constrained optimization, a drift analysis for queue processes, and a perturbation analysis based on Farkas' Lemma.
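The abstract describes a multiplier-driven coordination mechanism: a virtual queue tracks accumulated constraint violations, and each MDP decides locally using only that shared multiplier. Below is a minimal sketch of that mechanism, assuming a simplified stateless setting (each MDP reduced to a per-slot action chooser with running cost estimates), random cost and constraint functions for illustration, and hypothetical names (Q, V, cost_est, cons_est). It is not the paper's actual algorithm, which works with the state-action structure of each MDP and bandit-style delayed observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N parallel decision makers, each with A actions.
# In each slot t, actions are chosen first; then this slot's costs and
# a global constraint function are revealed (online, adversarial in the paper).
N, T, A = 3, 1000, 4
V = np.sqrt(T)   # trade-off parameter, chosen on the O(sqrt(T)) scale
Q = 0.0          # virtual queue / multiplier for the global constraint

# Running per-action estimates each decision maker keeps from past
# observations (a stand-in for the paper's use of past information).
cost_est = np.zeros((N, A))
cons_est = np.zeros((N, A))

for t in range(1, T + 1):
    # Distributed step: each decision maker sees only the shared multiplier Q
    # and minimizes (estimated cost) + (Q / V) * (estimated constraint load).
    actions = np.argmin(cost_est + (Q / V) * cons_est, axis=1)

    # The environment reveals this slot's functions after the decision
    # (random here purely for illustration; negative mean keeps the
    # constraint feasible on average).
    costs = rng.random((N, A))
    cons = rng.random((N, A)) - 0.3

    # Drift step: virtual queue update Q(t+1) = max(Q(t) + sum_i g_i(t), 0),
    # so Q grows when the global constraint is violated and acts as a price.
    g_t = cons[np.arange(N), actions].sum()
    Q = max(Q + g_t, 0.0)

    # Incorporate the revealed functions into the running averages.
    cost_est += (costs - cost_est) / t
    cons_est += (cons - cons_est) / t

print(f"final multiplier Q = {Q:.2f}")
```

The design choice this illustrates is the one highlighted in the abstract: no decision maker needs to see the others' actions; the scalar multiplier Q, updated by a queue-like drift step, is the only coupling, which is what makes the algorithm distributed.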
Funder
Division of Computing and Communication Foundations
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture, Software