
Sensitivity and convergence of uniformly ergodic Markov chains

Published online by Cambridge University Press: 14 July 2016

A. Yu. Mitrophanov
Affiliation: Saratov State University

Abstract


For uniformly ergodic Markov chains, we obtain new perturbation bounds which relate the sensitivity of the chain under perturbation to its rate of convergence to stationarity. In particular, we derive sensitivity bounds in terms of the ergodicity coefficient of the iterated transition kernel, which improve upon the bounds obtained by other authors. We discuss convergence bounds that hold in the case of finite state space, and consider numerical examples to compare the accuracy of different perturbation bounds.
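To make the abstract's central idea concrete, the following is an illustrative sketch (not the paper's exact bounds) of a classical sensitivity bound of this type for a finite-state chain: if τ₁(P) denotes Dobrushin's ergodicity coefficient of the transition matrix P and τ₁(P) < 1, then the stationary distributions of P and a perturbed matrix P̃ = P + E satisfy ‖π − π̃‖₁ ≤ ‖E‖∞ / (1 − τ₁(P)). The matrices P and E below are made-up examples chosen only for the demonstration.

```python
import numpy as np

def ergodicity_coefficient(P):
    """Dobrushin's ergodicity coefficient tau_1(P):
    half the maximum l1 distance between any two rows of P."""
    n = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum()
                     for i in range(n) for j in range(n))

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix:
    the left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Example transition matrix and a row-sum-zero perturbation (hypothetical data).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
E = np.array([[ 0.02, -0.01, -0.01],
              [-0.01,  0.02, -0.01],
              [ 0.00, -0.01,  0.01]])
P_tilde = P + E  # still a stochastic matrix

tau = ergodicity_coefficient(P)                      # here tau = 0.4 < 1
lhs = np.abs(stationary(P) - stationary(P_tilde)).sum()   # actual l1 change
rhs = np.abs(E).sum(axis=1).max() / (1.0 - tau)      # perturbation bound
print(tau, lhs, rhs)  # the bound guarantees lhs <= rhs
```

The paper's contribution, as the abstract states, is to sharpen bounds of this kind by using the ergodicity coefficient of the *iterated* transition kernel rather than of a single step, which matters when τ₁(P) itself is close to (or equal to) 1 but τ₁(Pⁿ) decays.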

Type: Research Papers
Copyright: © Applied Probability Trust 2005
