
Linear dynamics for the state vector of Markov chain functions

Published online by Cambridge University Press:  01 July 2016

James Ledoux*
Affiliation:
Centre de Mathématiques INSA, Rennes
*
Postal address: Centre de Mathématiques INSA, 20 ave des Buttes de Coesmes, CS 14315, 35043 Rennes cedex, France. Email address: [email protected]

Abstract

Let (φ(Xn))n be a function of a finite-state Markov chain (Xn)n. In this article, we investigate the conditions under which the random variables φ(Xn) have the same distribution as Yn (for every n), where (Yn)n is a Markov chain with a fixed transition probability matrix. In other words, for a deterministic function φ, we investigate the conditions under which (Xn)n is weakly lumpable for the state vector. We show that the set of all probability distributions of X0 such that (Xn)n is weakly lumpable for the state vector can be finitely generated. The connections between our definition of lumpability and the usual one (i.e. the proportional dynamics property) are discussed.
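For context, the classical (strong, or "ordinary") lumpability condition of Kemeny and Snell — a stronger requirement than the weak lumpability studied here, since it must hold for every initial distribution — can be checked directly on a transition matrix. The sketch below is illustrative only; the matrix P and the partition are hypothetical examples, not taken from the article.

```python
import numpy as np

# Transition matrix of a hypothetical 4-state chain.
P = np.array([
    [0.2, 0.3, 0.25, 0.25],
    [0.3, 0.2, 0.25, 0.25],
    [0.1, 0.4, 0.3,  0.2 ],
    [0.1, 0.4, 0.2,  0.3 ],
])

# Partition induced by a function phi: states {0, 1} map to one class,
# states {2, 3} to another.
partition = [[0, 1], [2, 3]]

def is_strongly_lumpable(P, partition):
    """Kemeny-Snell condition: for every source block C and every target
    block D, the row sums of P over the columns of D must be identical
    for all states in C."""
    for block in partition:
        for target in partition:
            sums = P[np.ix_(block, target)].sum(axis=1)
            if not np.allclose(sums, sums[0]):
                return False
    return True

print(is_strongly_lumpable(P, partition))  # True for this P
```

When this condition holds, the lumped process (φ(Xn))n is Markov for every initial distribution of X0; weak lumpability, by contrast, asks only that it hold for some set of initial distributions, which is the set the article shows to be finitely generated.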

Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 2004 

