
Truncation approximations of invariant measures for Markov chains

Published online by Cambridge University Press:  14 July 2016

R. L. Tweedie*
Affiliation: Colorado State University
* Postal address: Department of Statistics, Colorado State University, Fort Collins CO 80523, USA. E-mail address: [email protected]

Abstract

Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to the probability distribution itself in special cases only. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is at first apparent. In the cases of uniformly ergodic chains and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.
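To make the ‘northwest truncation’ concrete, here is a minimal sketch in Python (the helper names northwest_truncation and truncated_stationary are hypothetical, and returning the row mass lost to truncation to the last retained state is just one common augmentation for illustration, not the paper's prescription). It forms the n × n northwest corner of a transition matrix, solves for the stationary vector of the augmented truncation, and compares the resulting ratios to π(j)/π(0) for a reflected random walk, which is stochastically monotone and so falls within one of the classes treated above.

    import numpy as np

    def northwest_truncation(P, n):
        # n x n 'northwest corner' of the transition matrix P
        return P[:n, :n]

    def truncated_stationary(P_n):
        # Augment by returning the row mass lost to truncation to the last
        # retained state (an illustrative choice), then solve pi Q = pi with
        # sum(pi) = 1 by least squares.
        Q = P_n.copy()
        Q[:, -1] += 1.0 - Q.sum(axis=1)
        n = Q.shape[0]
        A = np.vstack([Q.T - np.eye(n), np.ones(n)])
        b = np.zeros(n + 1); b[-1] = 1.0
        pi_n, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi_n

    # Example: reflected random walk on {0, 1, 2, ...} with up-probability p and
    # down-probability q > p, emulated on a large finite grid. Detailed balance
    # gives pi(j)/pi(0) = (p/q)**j, so the truncated ratios can be checked.
    p, q, N = 0.3, 0.7, 2000
    P = np.zeros((N, N))
    P[0, 0], P[0, 1] = q, p
    for i in range(1, N - 1):
        P[i, i - 1], P[i, i + 1] = q, p
    P[N - 1, N - 2], P[N - 1, N - 1] = q, p
    for n in (10, 20, 40):
        pi_n = truncated_stationary(northwest_truncation(P, n))
        print(n, pi_n[1:4] / pi_n[0])   # approaches (p/q)**j for j = 1, 2, 3

For this chain the approximate ratios stabilise quickly as n grows; the theorems in the paper identify the general classes of chains for which such convergence is guaranteed.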

Type: Research Papers
Copyright: © Applied Probability Trust 1998


Footnotes

Work supported in part by NSF Grant DMS-9504561.
