
The asymptotic behaviour of the minimal total expected cost for the denumerable state Markov decision model

Published online by Cambridge University Press: 14 July 2016

Arie Hordijk
Affiliation:
Mathematisch Centrum, Amsterdam
Paul J. Schweitzer
Affiliation:
IBM Thomas J. Watson Research Center, Yorktown Heights, New York
Henk Tijms
Affiliation:
Mathematisch Centrum, Amsterdam

Abstract

This paper considers the discrete time Markov decision model with a denumerable state space and finite action space. Under certain conditions it is proved that the minimal total expected cost for a planning horizon of n epochs minus n times the minimal long-run average expected cost per unit time has a finite limit as n → ∞ for each initial state.
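As a purely illustrative numerical sketch (not taken from the paper, and with made-up cost and transition data), value iteration on a small two-state, two-action MDP exhibits the behaviour the abstract describes: the n-horizon minimal total expected cost V_n(i), minus n times the minimal long-run average cost g, settles to a finite constant for each initial state i.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (invented data, for illustration only).
# cost[i, a]  : one-step cost in state i under action a
# P[i, a, j]  : probability of moving from state i to state j under action a
cost = np.array([[1.0, 2.0],
                 [3.0, 0.5]])
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])

# Value iteration: V_n(i) = min_a [ cost(i,a) + sum_j P(j|i,a) V_{n-1}(j) ]
N = 500
V = np.zeros(2)
for n in range(1, N + 1):
    Q = cost + P @ V          # Q[i, a], since P @ V contracts over j
    V_prev, V = V, Q.min(axis=1)

# For this unichain, aperiodic example V_n - V_{n-1} converges to the
# minimal average cost g (here g works out to 0.9375), and the "bias"
# V_n(i) - n*g is essentially constant in n for large n.
g = (V - V_prev).mean()
bias = V - N * g
```

Iterating longer leaves `bias` essentially unchanged, which is the finite-limit phenomenon the paper establishes (under its conditions) for denumerable state spaces.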

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1975 


References

Bather, J. (1973) Optimal decision procedures for finite Markov chains. Part II: Communicating systems. Adv. Appl. Prob. 5, 521–540.
Brown, B. W. (1965) On the iterative method of dynamic programming on a finite space discrete time Markov process. Ann. Math. Statist. 36, 1279–1285.
Chung, K. L. (1960) Markov Chains with Stationary Transition Probabilities. Springer-Verlag, Berlin.
Denardo, E. V. (1973) A Markov decision problem. In Mathematical Programming, ed. Hu, T. C. and Robinson, S. M. Academic Press, New York.
Hordijk, A. and Tijms, H. C. (1974) Convergence results and approximations for optimal (s, S) policies. Management Sci. 20, 1432–1438.
Lanéry, E. (1967) Étude asymptotique des systèmes markoviens à commande. Rev. Informat. Recherche Opérationnelle 3, 3–56.
Lembersky, M. R. (1973) On maximal rewards and ε-optimal policies in continuous time Markov decision chains. Ann. Statist. 2, 159–169.
Ross, S. M. (1968) Arbitrary state Markovian decision processes. Ann. Math. Statist. 39, 2118–2122.
Royden, H. (1968) Real Analysis, 2nd edn. Macmillan, New York.
Schweitzer, P. J. (1965) Perturbation theory and Markovian decision processes. Technical Report No. 15, Operations Research Center, M.I.T., Cambridge, Massachusetts.
Schweitzer, P. J. (1969) Perturbation theory and undiscounted Markov renewal programming. Operations Res. 17, 716–727.
Schweitzer, P. J. (1974) Asymptotic convergence of undiscounted value iteration. To appear.