
Markov Chain Monte Carlo for Computing Rare-Event Probabilities for a Heavy-Tailed Random Walk

Published online by Cambridge University Press:  19 February 2016

Thorbjörn Gudmundsson
Affiliation: KTH Royal Institute of Technology

Henrik Hult
Affiliation: KTH Royal Institute of Technology
Postal address: Department of Mathematics, KTH Royal Institute of Technology, SE-100 44, Stockholm, Sweden.

Abstract

In this paper a method based on a Markov chain Monte Carlo (MCMC) algorithm is proposed to compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalizing constant. Using the MCMC methodology, a Markov chain is simulated, with the aforementioned conditional distribution as its invariant distribution, and information about the normalizing constant is extracted from its trajectory. The algorithm is described in full generality and applied to the problem of computing the probability that a heavy-tailed random walk exceeds a high threshold. An unbiased estimator of the reciprocal probability is constructed whose normalized variance vanishes asymptotically. The algorithm is extended to random sums and its performance is illustrated numerically and compared to existing importance sampling algorithms.
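To make the construction concrete, the sketch below illustrates the idea on the random-walk problem of estimating p = P(Y_1 + ... + Y_n > b). It is not taken from the paper: the Pareto step distribution, the parameter values, and all function names are illustrative assumptions. A random-scan Gibbs sampler targets the conditional law of the steps given the event, and the reciprocal probability 1/p is estimated by averaging u(Y) = 1{Y_n > c} / P(Y > c), with c = b - (Y_1 + ... + Y_{n-1}), along the trajectory; the estimate of p is the reciprocal of that average.

import numpy as np

def estimate_rare_event_probability(n=5, b=1000.0, alpha=1.5, n_steps=100_000, seed=0):
    # Estimate p = P(Y_1 + ... + Y_n > b) for i.i.d. Pareto(alpha) steps with
    # support [1, infinity), via a Gibbs sampler whose invariant distribution is
    # the conditional law of (Y_1, ..., Y_n) given the rare event {sum > b}.
    rng = np.random.default_rng(seed)

    def tail(c):
        # P(Y > c) for a Pareto(alpha) variable on [1, infinity).
        return 1.0 if c <= 1.0 else c ** (-alpha)

    def sample_above(c):
        # Draw Y ~ Pareto(alpha) conditioned on Y > c (unconditional when c <= 1).
        lo = max(c, 1.0)
        u = 1.0 - rng.random()              # uniform on (0, 1]
        return lo * u ** (-1.0 / alpha)

    # Start inside the event by forcing the last step to exceed the remaining gap.
    y = np.ones(n)
    y[-1] = sample_above(b - y[:-1].sum())

    recip = np.empty(n_steps)
    for t in range(n_steps):
        # Gibbs update: resample one coordinate from its conditional distribution
        # given the others and the event; the chain never leaves {sum > b}.
        i = rng.integers(n)
        y[i] = sample_above(b - (y.sum() - y[i]))

        # u(y) = 1{y_n > c} / P(Y > c) with c = b - (y_1 + ... + y_{n-1});
        # its average along the chain estimates 1/p.
        c = b - y[:-1].sum()
        recip[t] = (y[-1] > c) / tail(c)

    return 1.0 / recip.mean()

if __name__ == "__main__":
    print(estimate_rare_event_probability())

The auxiliary density implicit in u places the entire excess over the threshold on the final step, in line with the single-big-jump behaviour of heavy-tailed sums; other choices of auxiliary density are possible and govern the variance of the resulting estimator.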

Type: Research Article
Copyright: © Applied Probability Trust
