
Characterization and simplification of optimal strategies in positive stochastic games

Published online by Cambridge University Press: 16 November 2018

János Flesch*
Affiliation: Maastricht University
Arkadi Predtetchinski**
Affiliation: Maastricht University
William Sudderth***
Affiliation: University of Minnesota

* Postal address: Department of Quantitative Economics, Maastricht University, PO Box 616, 6200 MD, The Netherlands. Email address: [email protected]
** Postal address: Department of Economics, Maastricht University, PO Box 616, 6200 MD, The Netherlands. Email address: [email protected]
*** Postal address: School of Statistics, University of Minnesota, Minneapolis, MN 55455, USA. Email address: [email protected]

Abstract

We consider positive zero-sum stochastic games with countable state and action spaces. For each player, we provide a characterization of those strategies that are optimal in every subgame. These characterizations are used to prove two simplification results. We show that if player 2 has an optimal strategy, then player 2 also has a stationary optimal strategy, and we prove the same for player 1 under the assumption that the state space and player 2's action space are finite.
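In a positive stochastic game the stage payoffs are nonnegative, player 1 maximizes and player 2 minimizes the expected total payoff, and a stationary strategy depends only on the current state. As an illustrative sketch (the notation below is ours, not the article's), the payoff induced by a strategy pair $(\sigma,\tau)$ from an initial state $x$ is

\[
  u(\sigma,\tau)(x) \;=\; \mathbb{E}^{x}_{\sigma,\tau}\!\left[\sum_{n=0}^{\infty} r(x_n, a_n, b_n)\right],
  \qquad r \ge 0 ,
\]

where $r$ is the stage payoff, $x_n$ the state, and $a_n, b_n$ the actions at stage $n$. A strategy is optimal if it guarantees the value of the game from every initial state; the characterizations in the article concern strategies that remain optimal in every subgame, i.e. after every finite history.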

Type: Research Papers
Copyright: © Applied Probability Trust 2018

