Published online by Cambridge University Press: 14 July 2016
We consider the class of bandit problems in which each of the n ≧ 2 independent arms generates rewards according to one of two common reward distributions, and discounting is geometric over an infinite horizon. We show that the dynamic allocation index of Gittins and Jones (1974) in this context is strictly increasing in the probability that an arm is governed by the better of the two distributions. It follows as an immediate consequence that myopic strategies are the uniquely optimal strategies in this class of bandit problems, regardless of the value of the discount parameter or the shape of the reward distributions. Some implications of this result for bandits with Bernoulli reward distributions are given.
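To illustrate the setting in the Bernoulli case, the following sketch simulates the myopic strategy for arms that each secretly follow one of two Bernoulli distributions. The parameter names (`p_good`, `p_bad`, `prior`) and the function itself are illustrative assumptions, not constructions from the paper; the myopic rule simply pulls the arm with the highest posterior probability of being the better arm, which (since p_good > p_bad) maximizes expected immediate reward.

```python
import random

def myopic_bandit(n=2, p_good=0.8, p_bad=0.3, prior=0.5, horizon=50, seed=0):
    """Simulate the myopic strategy on n arms, each of which is secretly
    'good' (Bernoulli p_good) or 'bad' (Bernoulli p_bad).

    Hypothetical illustration: parameter names and values are assumptions,
    not taken from the paper.
    """
    rng = random.Random(seed)
    # Nature draws each arm's type; the player knows only the prior.
    truth = [rng.random() < prior for _ in range(n)]
    belief = [prior] * n  # posterior P(arm follows the better distribution)
    total = 0
    for _ in range(horizon):
        # Myopic rule: pull the arm whose posterior (hence expected
        # immediate reward) is largest.
        i = max(range(n), key=lambda k: belief[k])
        p = p_good if truth[i] else p_bad
        r = 1 if rng.random() < p else 0
        total += r
        # Bayes update for the pulled arm only (arms are independent).
        like_good = p_good if r else 1 - p_good
        like_bad = p_bad if r else 1 - p_bad
        w = belief[i]
        belief[i] = w * like_good / (w * like_good + (1 - w) * like_bad)
    return total, belief
```

The paper's result says that in this two-distribution setting this greedy rule is not merely a heuristic: because the Gittins index is strictly increasing in the posterior probability that an arm is the better one, the myopic ordering of arms coincides with the index ordering, so the strategy above is the uniquely optimal one for any discount factor.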
Financial support from the National Science Foundation and the Sloan Foundation to the first author is gratefully acknowledged.