
Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms

Published online by Cambridge University Press:  14 July 2016

Gareth O. Roberts*
Affiliation:
Lancaster University
Jeffrey S. Rosenthal*
Affiliation:
University of Toronto
* Postal address: Department of Mathematics and Statistics, Fylde College, Lancaster University, Lancaster LA1 4YF, UK. Email address: [email protected]
** Postal address: Department of Statistics, University of Toronto, Toronto, Ontario, Canada M5S 3G3. Email address: [email protected]

Abstract


We consider basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions. We prove convergence in distribution and a weak law of large numbers. We also give counterexamples to demonstrate that the assumptions we make are not redundant.
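A standard way to make an adaptive MCMC algorithm satisfy ergodicity conditions of this kind is "diminishing adaptation": the tuning parameters are updated by steps whose size shrinks to zero, so the transition kernels change less and less as the chain runs. As an illustration (not the paper's own construction), the following is a minimal sketch of an adaptive random-walk Metropolis sampler whose proposal scale is tuned by a Robbins–Monro update with step size 1/n; the target, tuning constants, and function names are illustrative assumptions.

```python
import math
import random

def adaptive_rwm(log_target, x0, n_iter, target_accept=0.44, seed=0):
    """Random-walk Metropolis whose proposal scale is tuned on the fly.

    The log proposal scale is nudged toward a target acceptance rate by a
    Robbins-Monro step of size gamma_n = 1/n. Since gamma_n -> 0, the
    adaptation is 'diminishing': successive transition kernels change
    less and less as the chain runs.
    """
    rng = random.Random(seed)
    x = x0
    log_sigma = 0.0  # log of the proposal standard deviation
    samples = []
    for n in range(1, n_iter + 1):
        # propose a Gaussian random-walk move with the current scale
        y = x + math.exp(log_sigma) * rng.gauss(0.0, 1.0)
        log_alpha = log_target(y) - log_target(x)
        accept = math.log(rng.random()) < log_alpha
        if accept:
            x = y
        # diminishing adaptation: step size 1/n shrinks to zero
        gamma = 1.0 / n
        log_sigma += gamma * ((1.0 if accept else 0.0) - target_accept)
        samples.append(x)
    return samples

# Example: sample a standard normal target.
log_target = lambda x: -0.5 * x * x
draws = adaptive_rwm(log_target, x0=0.0, n_iter=20000)
```

With a fixed step size, the kernels would keep changing indefinitely and convergence could fail; the paper's counterexamples show that assumptions of this flavour cannot simply be dropped.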

Type
Research Article
Copyright
Copyright © Applied Probability Trust 2007 

References

Andrieu, C. and Moulines, E. (2006). On the ergodicity properties of some adaptive Markov chain Monte Carlo algorithms. Ann. Appl. Prob. 16, 1462–1505.
Andrieu, C. and Robert, C. P. (2002). Controlled MCMC for optimal sampling. Preprint.
Atchadé, Y. F. and Rosenthal, J. S. (2005). On adaptive Markov chain Monte Carlo algorithms. Bernoulli 11, 815–828.
Baxendale, P. H. (2005). Renewal theory and computable convergence rates for geometrically ergodic Markov chains. Ann. Appl. Prob. 15, 700–738.
Bédard, M. (2006). On the robustness of optimal scaling for Metropolis–Hastings algorithms. Doctoral thesis, University of Toronto.
Brockwell, A. E. and Kadane, J. B. (2005). Identification of regeneration times in MCMC simulation, with application to adaptive schemes. J. Comput. Graph. Statist. 14, 436–458.
Fort, G. and Moulines, E. (2000). Computable bounds for subgeometrical and geometrical ergodicity. Preprint. Available at http://citeseer.ist.psu.edu/fort00computable.html.
Fort, G. and Moulines, E. (2003). Polynomial ergodicity of Markov transition kernels. Stoch. Process. Appl. 103, 57–99.
Fristedt, B. and Gray, L. (1997). A Modern Approach to Probability Theory. Birkhäuser, Boston, MA.
Gilks, W. R., Roberts, G. O. and Sahu, S. K. (1998). Adaptive Markov chain Monte Carlo. J. Amer. Statist. Assoc. 93, 1045–1054.
Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7, 223–242.
Häggström, O. (2001). A note on disagreement percolation. Random Structures Algorithms 18, 267–278.
Jarner, S. F. and Roberts, G. O. (2002). Polynomial convergence rates of Markov chains. Ann. Appl. Prob. 12, 224–247.
Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.
Meyn, S. P. and Tweedie, R. L. (1994). Computable bounds for convergence rates of Markov chains. Ann. Appl. Prob. 4, 981–1011.
Pasarica, C. and Gelman, A. (2003). Adaptively scaling the Metropolis algorithm using the average squared jumped distance. Preprint.
Pemantle, R. and Rosenthal, J. S. (1999). Moment conditions for a sequence with negative drift to be uniformly bounded in L^r. Stoch. Process. Appl. 82, 143–155.
Robbins, H. and Monro, S. (1951). A stochastic approximation method. Ann. Math. Statist. 22, 400–407.
Roberts, G. O. and Rosenthal, J. S. (2001). Optimal scaling for various Metropolis–Hastings algorithms. Statist. Sci. 16, 351–367.
Roberts, G. O. and Rosenthal, J. S. (2002). One-shot coupling for certain stochastic recursive sequences. Stoch. Process. Appl. 99, 195–208.
Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Prob. Surveys 1, 20–71.
Roberts, G. O. and Tweedie, R. L. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stoch. Process. Appl. 80, 211–229. (Correction: 91 (2001), 337–338.)
Roberts, G. O., Gelman, A. and Gilks, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Prob. 7, 110–120.
Roberts, G. O., Rosenthal, J. S. and Schwartz, P. O. (1998). Convergence properties of perturbed Markov chains. J. Appl. Prob. 35, 1–11.
Rosenthal, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90, 558–566.
Rosenthal, J. S. (1997). Faithful couplings of Markov chains: now equals forever. Adv. Appl. Math. 18, 372–381.
Rosenthal, J. S. (2000). A First Look at Rigorous Probability Theory. World Scientific, Singapore.
Rosenthal, J. S. (2002). Quantitative convergence rates of Markov chains: a simple account. Electron. Commun. Prob. 7, 123–128.
Rosenthal, J. S. (2004). Adaptive MCMC Java applet. Available at http://probability.ca/jeff/java/adapt.html.
Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22, 1701–1762.