A class of bandit problems yielding myopic optimal strategies
Published: 1992-09
Issue: 3
Volume: 29
Pages: 625-632
ISSN: 0021-9002
Container-title: Journal of Applied Probability
Language: en
Short-container-title: J. Appl. Probab.
Authors: Banks, Jeffrey S.; Sundaram, Rangarajan K.
Abstract
We consider the class of bandit problems in which each of the n ≧ 2 independent arms generates rewards according to one of the same two reward distributions, and discounting is geometric over an infinite horizon. We show that the dynamic allocation index of Gittins and Jones (1974) in this context is strictly increasing in the probability that an arm is the better of the two distributions. It follows as an immediate consequence that myopic strategies are the uniquely optimal strategies in this class of bandit problems, regardless of the value of the discount parameter or the shape of the reward distributions. Some implications of this result for bandits with Bernoulli reward distributions are given.
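The Bernoulli special case mentioned in the abstract admits a simple illustration of the result: when every arm is either a "good" Bernoulli arm or a "bad" one, the expected immediate reward of an arm is increasing in the posterior probability that it is the good type, so the myopic (greedy) rule coincides with the Gittins-index policy for any discount factor. The sketch below is an illustrative simulation under assumed parameter values; the names P_GOOD, P_BAD, PRIOR, posterior_after, and simulate_myopic are choices made for this example, not notation from the paper.

```python
import random

# Illustrative two-type Bernoulli bandit (assumed parameters, not from the paper).
P_GOOD, P_BAD = 0.7, 0.3   # the two possible reward distributions
PRIOR = 0.5                # prior probability that any given arm is the good one


def posterior_after(prob_good, success):
    """Bayes update of P(arm is good) after observing one Bernoulli reward."""
    like_good = P_GOOD if success else 1.0 - P_GOOD
    like_bad = P_BAD if success else 1.0 - P_BAD
    num = prob_good * like_good
    return num / (num + (1.0 - prob_good) * like_bad)


def simulate_myopic(n_arms=3, horizon=50, seed=0):
    rng = random.Random(seed)
    # Hidden truth: each arm is independently good or bad.
    truth = [P_GOOD if rng.random() < PRIOR else P_BAD for _ in range(n_arms)]
    belief = [PRIOR] * n_arms  # P(arm i is good) given observations so far
    total = 0
    for _ in range(horizon):
        # Myopic rule: pull the arm with the highest expected immediate reward.
        # Since expected reward = b*P_GOOD + (1-b)*P_BAD is increasing in b,
        # this is the arm with the highest posterior probability of being good.
        i = max(range(n_arms), key=lambda a: belief[a])
        reward = 1 if rng.random() < truth[i] else 0
        total += reward
        belief[i] = posterior_after(belief[i], reward == 1)
    return total, belief


if __name__ == "__main__":
    total, belief = simulate_myopic()
    print("total reward:", total)
    print("final beliefs:", [round(b, 3) for b in belief])
```

Because the paper shows the dynamic allocation index is strictly increasing in the probability that an arm is the good type, the greedy choice above already maximizes the index at every step, so no explicit index computation is needed in this two-type setting.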
Publisher
Cambridge University Press (CUP)
Subject
Statistics, Probability and Uncertainty; General Mathematics; Statistics and Probability
Cited by: 2 articles.