
Markovian bounds on functions of finite Markov chains

Published online by Cambridge University Press:  01 July 2016

James Ledoux*
Affiliation:
Institut National des Sciences Appliquées, Rennes
Laurent Truffet*
Affiliation:
Institut de Recherche en Communications et Cybernétique de Nantes
* Postal address: INSA, 20 avenue des Buttes de Coesmes, 35043 Rennes Cedex, France. Email address: [email protected]
** Postal address: École des Mines de Nantes, Département Automatique–Productique, 4 rue Alfred Kastler, BP 20722, 44307 Nantes Cedex 3, France.

Abstract

In this paper, we obtain Markovian bounds on a function of a homogeneous discrete-time Markov chain. To derive such bounds, we use well-known results on the stochastic majorization of Markov chains together with the Rogers–Pitman lumpability criterion. The proposed method for comparing functions of Markov chains is not equivalent to the generalized coupling method for Markov chains, although it yields the same kind of majorization. We derive necessary and sufficient conditions for the existence of our Markovian bounds. We also discuss the choice of the geometric invariant related to the lumpability condition that we use.
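As a rough illustration of the two ingredients named in the abstract, the sketch below checks, on a toy 3-state chain, (i) a Rogers–Pitman-type intertwining condition ΛP = P̂Λ for a link matrix Λ whose rows are distributions supported on the lumps of a partition, and (ii) a row-wise stochastic (≤st) dominance test of the kind used in stochastic comparison on a totally ordered state space. This is a minimal numerical sketch only; the example matrices, weights and function names are illustrative assumptions and are not taken from the paper.

# Minimal numerical sketch (not the authors' construction): checks a
# Rogers-Pitman-type intertwining condition and a row-wise <=_st comparison
# on a small, made-up 3-state chain.
import numpy as np

def rogers_pitman_check(P, partition, weights, tol=1e-10):
    """Check Lambda @ P == Phat @ Lambda, where Lambda is a link matrix whose
    row for lump y is the (normalized) `weights` restricted to the states of y.
    `partition[i]` gives the lump of state i. Returns (holds, Phat)."""
    n = P.shape[0]
    lumps = sorted(set(partition))
    m = len(lumps)
    Lambda = np.zeros((m, n))
    for j, y in enumerate(lumps):
        idx = [i for i in range(n) if partition[i] == y]
        w = np.array([weights[i] for i in idx], dtype=float)
        Lambda[j, idx] = w / w.sum()          # row = distribution on lump y
    # Candidate lumped matrix: Phat[., y'] = mass that Lambda @ P puts on lump y'
    LP = Lambda @ P
    Phat = np.zeros((m, m))
    for k, y in enumerate(lumps):
        idx = [i for i in range(n) if partition[i] == y]
        Phat[:, k] = LP[:, idx].sum(axis=1)
    holds = np.allclose(LP, Phat @ Lambda, atol=tol)
    return holds, Phat

def st_dominates_rowwise(P, Q):
    """True if every row of Q stochastically dominates the matching row of P
    (states assumed totally ordered 0 < 1 < ... < n-1)."""
    # Row i of Q >=_st row i of P  iff  all tail sums of Q dominate those of P.
    tails_P = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]
    tails_Q = np.cumsum(Q[:, ::-1], axis=1)[:, ::-1]
    return bool(np.all(tails_Q >= tails_P - 1e-12))

if __name__ == "__main__":
    # Toy chain: states 0 and 1 are lumped together, state 2 stands alone.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.1, 0.8]])
    partition = [0, 0, 1]
    ok, Phat = rogers_pitman_check(P, partition, weights=[0.5, 0.5, 1.0])
    print("Rogers-Pitman condition holds:", ok)
    print("Lumped (bounding) matrix:\n", Phat)
    # A second chain whose rows dominate those of P in the <=_st sense.
    Q = np.array([[0.4, 0.3, 0.3],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.1, 0.8]])
    print("Row-wise st-dominance Q >=_st P:", st_dominates_rowwise(P, Q))

For this toy example the intertwining condition holds, so the function of the chain induced by the partition is again Markovian with transition matrix Phat; the dominance test is the kind of row-wise check that underlies the stochastic majorization arguments cited above.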

Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 2001 


References

[1] Buchholz, P. (1994). Exact and ordinary lumpability in finite Markov chains. J. Appl. Prob. 31, 59–75.
[2] Courtois, P. J. and Semal, P. (1984). Bounds for the positive eigenvectors of nonnegative matrices and for their approximations by decomposition. J. Assoc. Comput. Mach. 31, 804–825.
[3] Courtois, P. J. and Semal, P. (1985). On polyhedra of Perron–Frobenius eigenvectors. Linear Algebra Appl. 65, 157–170.
[4] Doisy, M. (2000). A coupling technique for comparison of functions of Markov processes. J. Appl. Math. Decision Sci. 4, 39–64.
[5] Keilson, J. and Kester, A. (1977). Monotone matrices and monotone Markov processes. Stoch. Proc. Appl. 5, 231–241.
[6] Kemeny, J. and Snell, J. (1960). Finite Markov Chains. Princeton University Press.
[7] Kijima, M. (1997). Markov Processes for Stochastic Modeling. Chapman and Hall, London.
[8] Ledoux, J. (1997). A geometric invariant in weak lumpability of finite Markov chains. J. Appl. Prob. 34, 847–858.
[9] Rogers, L. C. G. and Pitman, J. W. (1981). Markov functions. Ann. Prob. 9, 573–582.
[10] Schweitzer, P. (1984). Aggregation methods for large Markov chains. In Mathematical Computer Performance and Reliability, eds Iazeolla, G. et al. Elsevier–North-Holland, Amsterdam, pp. 275–286.
[11] Seneta, E. (1981). Non-negative Matrices and Markov Chains. Springer, Berlin.
[12] Stoyan, D. (1976). Comparison Methods for Queues and Other Stochastic Models. John Wiley, Chichester.
[13] Truffet, L. (2000). Reduction techniques for discrete-time Markov chains on totally ordered state space using stochastic comparisons. J. Appl. Prob. 37, 795–806.