
The censored Markov chain and the best augmentation

Published online by Cambridge University Press:  14 July 2016

Y. Quennel Zhao* (University of Winnipeg)
Danielle Liu (Case Western Reserve University)

* Postal address: Department of Mathematics and Statistics, University of Winnipeg, Winnipeg, Manitoba, Canada R3B 2E9.

Abstract

Computationally, when we solve for the stationary probabilities of a countable-state Markov chain, the transition probability matrix of the Markov chain has to be truncated, in some way, into a finite matrix. Several augmentation methods can be valid, in the sense that the stationary probability distribution of the truncated Markov chain approaches that of the countable Markov chain as the truncation size grows. In this paper, we prove that the censored (watched) Markov chain provides the best approximation in the sense that, for a given truncation size, the sum of errors is minimized, and we show, by examples, that the method of augmenting the last column only is not always the best.
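
The comparison in the abstract lends itself to a small numerical check. The sketch below is not code from the paper: it assumes a toy positive-recurrent chain (up one state with probability p, down two with probability q, stay otherwise), illustrative parameters p, q, truncation size n and reference size N, and it reads "the sum of errors" as the l1 distance, over the retained states, between the stationary vector of the finite matrix and that of the infinite chain, with the latter approximated by a very large augmented truncation. It compares the last-column augmentation of the truncated matrix with the censored truncation, computed from the block partition [[A, B], [C, D]] as A + B(I - D)^{-1}C.

# A minimal numerical sketch, not code from the paper: compare two ways of turning a
# truncated transition matrix into a finite stochastic matrix (last-column augmentation
# versus censoring on the first n states) and report the resulting l1 errors of the
# stationary distributions.  The test chain, the parameters p, q, n, and the large
# reference size N are illustrative assumptions; the error measure is one reading of
# the "sum of errors" in the abstract.
import numpy as np

p, q = 0.4, 0.3        # up/down probabilities; mean drift p - 2q < 0 keeps the chain positive recurrent
N = 500                # large truncation used to approximate the infinite chain
n = 10                 # truncation size under study

def transition(size):
    """Rows and columns 0..size-1 of the infinite transition matrix.
    Entries that fall outside the range are dropped, so the last row is substochastic."""
    P = np.zeros((size, size))
    for k in range(size):
        if k + 1 < size:
            P[k, k + 1] = p          # up by one
        P[k, max(k - 2, 0)] += q     # down by two (reflected at 0)
        P[k, k] += 1.0 - p - q       # stay put
    return P

def augment_last_column(P):
    """Last-column augmentation: add each row's missing mass to the last column."""
    Q = P.copy()
    Q[:, -1] += 1.0 - Q.sum(axis=1)
    return Q

def stationary(P):
    """Stationary distribution of a finite stochastic matrix via its dominant left eigenvector."""
    w, v = np.linalg.eig(P.T)
    pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    return pi / pi.sum()

# Reference: stationary distribution of a very large augmented truncation,
# used as a stand-in for the stationary distribution of the infinite chain.
pi_ref = stationary(augment_last_column(transition(N)))[:n]

# (a) Last-column augmentation of the n x n truncation.
P_last = augment_last_column(transition(N)[:n, :n])

# (b) Censored chain on states 0..n-1, approximated by censoring the size-N truncation:
#     for the block partition [[A, B], [C, D]],  P_cens = A + B (I - D)^{-1} C.
P_big = transition(N)
A, B = P_big[:n, :n], P_big[:n, n:]
C, D = P_big[n:, :n], P_big[n:, n:]
P_cens = A + B @ np.linalg.solve(np.eye(N - n) - D, C)

for name, P in [("last-column augmentation", P_last), ("censored truncation", P_cens)]:
    err = np.abs(stationary(P) - pi_ref).sum()
    print(f"{name:26s} sum of errors on states 0..{n - 1}: {err:.4f}")

Under these assumptions the censored truncation should report the smaller, or at worst equal, sum of errors, in line with the minimality property stated above; any gap between the two numbers is a small-scale analogue of the observation that augmenting the last column only is not always the best.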

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1996 


Footnotes

∗∗ Current address: AT&T Bell Laboratories, Holmdel, NJ 07733-3030, USA.
