A class of bandit problems yielding myopic optimal strategies
Published online by Cambridge University Press: 14 July 2016
Abstract
We consider the class of bandit problems in which each of the n ≥ 2 independent arms generates rewards according to one of the same two possible reward distributions, and discounting is geometric over an infinite horizon. We show that the dynamic allocation index of Gittins and Jones (1974) in this context is strictly increasing in the probability that an arm is governed by the better of the two distributions. It follows as an immediate consequence that myopic strategies are the uniquely optimal strategies in this class of bandit problems, regardless of the value of the discount parameter or the shape of the reward distributions. Some implications of this result for bandits with Bernoulli reward distributions are given.
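As a concrete illustration of the Bernoulli case mentioned in the abstract, the sketch below simulates the myopic rule: pull the arm whose current probability of being the better distribution is largest, updating that probability by Bayes' rule after each pull. This is only an illustrative sketch, not code from the paper; the values P_GOOD and P_BAD, the priors, and all function names are hypothetical choices for the demo.

```python
"""Illustrative sketch of the myopic strategy for the Bernoulli special case.
Each arm's reward distribution is either Bernoulli(P_GOOD) or Bernoulli(P_BAD);
the numbers below are hypothetical, not taken from the paper."""

import random

P_GOOD, P_BAD = 0.7, 0.3        # the two possible reward distributions (assumed values)
PRIORS = [0.5, 0.2, 0.8]        # P(arm i is the better distribution), hypothetical

def bayes_update(q, reward):
    """Posterior probability that an arm is the better (P_GOOD) type
    after observing a single Bernoulli reward from it."""
    like_good = P_GOOD if reward else 1.0 - P_GOOD
    like_bad = P_BAD if reward else 1.0 - P_BAD
    return q * like_good / (q * like_good + (1.0 - q) * like_bad)

def run_myopic(beliefs, horizon=1000, seed=0):
    """Myopic play: always pull the arm with the largest current belief.
    Expected immediate reward q*P_GOOD + (1-q)*P_BAD is increasing in q,
    so this is the greedy one-step rule; the paper's result says it is
    also the (uniquely) optimal strategy in this class of problems."""
    rng = random.Random(seed)
    q = list(beliefs)
    # For simulation only: draw each arm's true type according to its prior.
    types = [rng.random() < qi for qi in q]
    total = 0
    for _ in range(horizon):
        i = max(range(len(q)), key=lambda j: q[j])   # myopic choice
        p = P_GOOD if types[i] else P_BAD
        reward = 1 if rng.random() < p else 0
        total += reward
        q[i] = bayes_update(q[i], reward)
    return total

if __name__ == "__main__":
    print(run_myopic(PRIORS))
```

Because the expected one-period reward is increasing in the belief q, the myopic choice coincides with choosing the arm of highest belief; the paper's monotonicity result for the Gittins index is what guarantees this greedy rule is optimal for any discount factor.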
Type: Research Papers
Copyright © Applied Probability Trust 1992
Footnotes
Financial support from the National Science Foundation and the Sloan Foundation to the first author is gratefully acknowledged.