Affiliation:
1. University of Pennsylvania, Philadelphia, PA
2. Duke University, Durham, NC
Abstract
The restless bandit problem is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit (MAB) problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-Hard to approximate to any nontrivial factor, and little progress has been made on this problem despite its significance in modeling activity allocation under uncertainty.
In this article, we consider the Feedback MAB problem, where the reward obtained by playing each of n independent arms varies according to an underlying on/off Markov process whose exact state is only revealed when the arm is played. The goal is to design a policy for playing the arms in order to maximize the infinite-horizon time-average expected reward. This problem is also an instance of a Partially Observable Markov Decision Process (POMDP) and is widely studied in wireless scheduling and unmanned aerial vehicle (UAV) routing. Unlike the stochastic MAB problem, the Feedback MAB problem does not admit greedy index-based optimal policies.
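For orientation, the model can be formalized as follows (a standard reconstruction under assumed notation; the symbols α_i, β_i, r_i, and v_t do not appear in the article itself). Arm i is a two-state Markov chain with transition probabilities α_i = Pr[off → on] and β_i = Pr[on → off]; playing the arm reveals its state and pays reward r_i if it is on. If arm i was last observed t steps ago, the belief v_t that it is currently on satisfies

    v_{t+1} = \alpha_i + (1 - \alpha_i - \beta_i)\, v_t ,

which solves to

    v_t = \frac{\alpha_i \left( 1 - (1-\alpha_i-\beta_i)^t \right)}{\alpha_i+\beta_i} \quad (\text{last seen off}), \qquad
    v_t = \frac{\alpha_i + \beta_i (1-\alpha_i-\beta_i)^t}{\alpha_i+\beta_i} \quad (\text{last seen on}),

and a policy seeks to maximize \liminf_{T \to \infty} \frac{1}{T}\, \mathbb{E}\left[ \sum_{t=1}^{T} R_t \right].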
We develop a novel duality-based algorithmic technique that yields a surprisingly simple and intuitive (2+ϵ)-approximate greedy policy for this problem. We show that, both in terms of approximation factor and computational efficiency, our policy is closely related to the Whittle index, which is widely used for its simplicity and efficiency of computation. Subsequently, we define a multi-state generalization, which we term Monotone bandits, that remains a subclass of the restless bandit problem. We show that our policy remains a 2-approximation in this setting, and further, our technique is robust enough to incorporate various side constraints such as blocking plays, switching costs, and even models where determining the state of an arm is a separate operation from playing it.
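For background on this comparison (a standard definition, not specific to the article): the Whittle index arises from a Lagrangian relaxation of the restless bandit problem in which not playing an arm earns a subsidy λ. For a single arm in (belief) state x, the index is

    W(x) = \inf \{ \lambda : \text{not playing is optimal in state } x \text{ under passive subsidy } \lambda \},

and the Whittle index policy plays the arms with the largest current indices. The index generalizes the Gittins index, which is optimal for the classical stochastic MAB, but for restless bandits it is a heuristic with no general performance guarantee, which is what makes constant-factor guarantees notable.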
Our technique is also of independent interest for other restless bandit problems, and we provide an example in nonpreemptive machine replenishment. Interestingly, in this case our policy provides a constant-factor guarantee, whereas the Whittle index is provably polynomially worse.
By presenting the first O(1) approximations for nontrivial instances of restless bandits as well as of POMDPs, our work initiates the study of approximation algorithms in both these contexts.
Funder
Division of Computer and Network Systems
Division of Computing and Communication Foundations
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Hardware and Architecture, Information Systems, Control and Systems Engineering, Software
Cited by 34 articles.