
ASYMPTOTIC VARIANCE OF PASSAGE TIME ESTIMATORS IN MARKOV CHAINS

Published online by Cambridge University Press:  27 February 2007

Michael A. Zazanis
Affiliation:
Department of Statistics, Athens University of Economics and Business, Athens 104 34, Greece, E-mail: [email protected]

Abstract

We consider the problem of estimating passage times in stochastic simulations of Markov chains. Two types of estimator are considered for this purpose: the “simple” and the “overlapping” estimator; they are compared in terms of their asymptotic variance. The analysis is based on the regenerative structure of the process and it is shown that when estimating the mean passage time, the simple estimator is always asymptotically superior. However, when the object is to estimate the expectation of a nonlinear function of the passage time, such as the probability that the passage time exceeds a given threshold, then it is shown that the overlapping estimator can be superior in some cases. Related results in the Reinforcement Learning literature are discussed.
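As a rough illustration of the two estimators compared above, the following sketch simulates passage times from one state to another in a small Markov chain. The transition matrix, state labels, and function names are hypothetical, chosen only for illustration; the "simple" estimator averages over independent, freshly restarted passages, while the "overlapping" estimator treats every visit to the source state along a single long trajectory as the start of a passage, so passages in progress can overlap.

```python
import random

# Hypothetical 3-state chain; this transition matrix is an assumption
# chosen for illustration, not taken from the paper.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(state, rng):
    """Sample the next state from row `state` of P by inverse transform."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return j
    return len(P[state]) - 1

def simple_estimator(source, target, n_passages, rng):
    """Average passage time over independent, non-overlapping passages:
    restart at `source` after each hit of `target`."""
    total = 0
    for _ in range(n_passages):
        state, t = source, 0
        while state != target:
            state = step(state, rng)
            t += 1
        total += t
    return total / n_passages

def overlapping_estimator(source, target, n_steps, rng):
    """Run one long trajectory; every visit to `source` opens a passage,
    and all open passages close together at the next hit of `target`."""
    state = source
    pending = []   # start times of passages currently in progress
    times = []
    for t in range(n_steps):
        if state == source:
            pending.append(t)
        state = step(state, rng)
        if state == target:
            times.extend(t + 1 - s for s in pending)
            pending.clear()
    return sum(times) / len(times) if times else float("nan")

rng = random.Random(42)
# For this particular chain, a first-step analysis gives a mean passage
# time from state 0 to state 2 of exactly 5; both estimates should be close.
print(simple_estimator(0, 2, 10_000, rng))
print(overlapping_estimator(0, 2, 100_000, rng))
```

Both estimators are consistent for the mean passage time; the paper's point is that their asymptotic variances differ, with the simple estimator superior for the mean and the overlapping estimator sometimes superior for nonlinear functionals such as tail probabilities.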

Type: Research Article
Copyright: © 2007 Cambridge University Press


References

Bertsekas, D.P. & Tsitsiklis, J.N. (1996). Neuro-dynamic programming. Belmont, MA: Athena Scientific.
Calvin, J.M. & Nakayama, M.K. (1998). Using permutations in regenerative simulations to reduce variance. ACM Transactions on Modeling and Computer Simulation 8(2): 153–193.
Cao, X.R. & Chen, H.F. (1997). Perturbation realization, potentials, and sensitivity analysis of Markov processes. IEEE Transactions on Automatic Control 42(10): 1382–1393.
Cho, G.E. & Meyer, C.D. (2000). Markov chain sensitivity measured by mean first passage times. Linear Algebra and its Applications 316: 21–28.
Cramér, H. (1946). Mathematical methods of statistics. Princeton, NJ: Princeton University Press.
Glynn, P.W. & Iglehart, D.L. (1989). Importance sampling for stochastic simulations. Management Science 35(11): 1367–1392.
Harrison, P.G. & Knottenbelt, W.J. (2002). Passage time distributions in large Markov chains. In Proceedings of ACM SIGMETRICS, June 2002, Marina del Rey, CA, pp. 77–85.
Heidergott, B. & Hordijk, A. (2003). Taylor series expansions for stationary Markov chains. Advances in Applied Probability 35: 1046–1070.
Karamichalakou, C. & Zazanis, M.A. (2000). Asymptotic variance properties of passage time estimators for Markov chains. Technical report, Athens University of Economics and Business, Athens, Greece.
Kemperman, J.H.B. (1961). The passage problem for a stationary Markov chain. Chicago: University of Chicago Press.
Meyer, C.D. (1994). Sensitivity of the stationary distribution of a Markov chain. SIAM Journal on Matrix Analysis and Applications 15: 715–728.
Nakayama, M.K., Goyal, A., & Glynn, P.W. (1994). Likelihood ratio sensitivity analysis for Markovian models of highly dependable systems. Operations Research 42: 137–157.
Prakasa Rao, B.L.S. (1987). Asymptotic theory of statistical inference. New York: Wiley.
Ross, S.M. & Schechner, Z. (1985). Using simulation to estimate first passage distribution. Management Science 31(2): 224–234.
Shahabuddin, P. (1994). Importance sampling for highly reliable Markovian systems. Management Science 40(3): 333–352.
Singh, S.P. & Sutton, R.S. (1996). Reinforcement learning with replacing eligibility traces. Machine Learning 22: 123–158.
Syski, R. (1992). Passage times for Markov chains. Amsterdam: IOS Press.