
A New Algorithm for Computing the Ergodic Probability Vector for Large Markov Chains

Published online by Cambridge University Press: 27 July 2009

Ushio Sumita
Affiliation:
William E. Simon Graduate School of Business Administration, University of Rochester, Rochester, New York 14627
Maria Rieders
Affiliation:
William E. Simon Graduate School of Business Administration, University of Rochester, Rochester, New York 14627

Extract

A novel algorithm is developed that computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for the replacement process on each lump is proportional to the ergodic probability vector of the original Markov chain restricted to that lump. The inverse matrices computed in the algorithm are of size (M – 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
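To make the replacement-process idea concrete, here is a minimal numerical sketch for a discrete-time chain. It is an illustration based only on this extract, not the authors' exact recursion: the inflow-based update of the replacement distributions, the uniform initialization, and all names (stationary, replacement_step, lumps, beta) are our assumptions.

import numpy as np

def stationary(P):
    # Stationary vector of an ergodic stochastic matrix P: solve pi P = pi
    # together with the normalization sum(pi) = 1 (least-squares form).
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def replacement_step(P, lumps, beta):
    # One sweep: build and solve the replacement process on each lump,
    # then recombine through a small aggregated chain over the M lumps.
    M = len(lumps)
    phi = []
    for k, S in enumerate(lumps):
        Pk = P[np.ix_(S, S)].copy()
        exit_mass = 1.0 - Pk.sum(axis=1)      # probability of leaving lump k
        Pk += np.outer(exit_mass, beta[k])    # replace each exit inside the lump
        phi.append(stationary(Pk))
    # Aggregated M-state chain; solving it is the (M - 1)-sized linear problem.
    A = np.array([[phi[k] @ P[np.ix_(lumps[k], lumps[l])].sum(axis=1)
                   for l in range(M)] for k in range(M)])
    xi = stationary(A)
    # Updated replacement distribution: normalized inflow from the other lumps.
    new_beta = []
    for k, S in enumerate(lumps):
        inflow = sum(xi[l] * (phi[l] @ P[np.ix_(lumps[l], S)])
                     for l in range(M) if l != k)
        new_beta.append(inflow / inflow.sum())
    return phi, xi, new_beta

# Tiny usage example: a 4-state chain split into two lumps.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.6, 0.1, 0.1],
              [0.1, 0.1, 0.5, 0.3],
              [0.1, 0.2, 0.3, 0.4]])
lumps = [[0, 1], [2, 3]]
beta = [np.full(len(S), 1.0 / len(S)) for S in lumps]   # uniform start
for _ in range(50):
    phi, xi, beta = replacement_step(P, lumps, beta)
pi_est = np.concatenate([xi[k] * phi[k] for k in range(len(lumps))])
print(pi_est)           # estimated ergodic vector
print(stationary(P))    # direct solve, for comparison

At a fixed point of this iteration, the ergodic vector of each lump's replacement process is proportional to the true ergodic vector restricted to that lump, which is the limiting property the extract describes; the lump weights xi come from solving the small aggregated chain over the M lumps.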

Type: Articles
Copyright © Cambridge University Press 1990

References

1. Cao, W. & Stewart, W.J. (1985). Iterative aggregation/disaggregation techniques for nearly uncoupled Markov chains. Journal of the ACM 32: 702–719.
2. Courtois, P.J. (1977). Decomposability. London: Academic Press.
3. Courtois, P.J. (1978). Exact aggregation-disaggregation in queueing networks. Proceedings of the 1st AFCET-SMF Meeting on Applied Mathematics, École Polytechnique, Palaiseau (France), Tome I, pp. 35–52.
4. Courtois, P.J. (1982). Error minimization in decomposable stochastic models. In Applied Probability–Computer Science, the Interface, Vol. I. Boston: Birkhäuser, pp. 189–210.
5. Courtois, P.J. & Semal, P. (1984). Bounds for the positive eigenvectors of nonnegative matrices and for their approximation by decomposition. Journal of the ACM 31(4): 804–825.
6. Feinberg, B.N. & Chiu, S.S. (1987). A method to calculate steady-state distributions of large Markov chains by aggregating states. Operations Research 35: 282–290.
7. Grassmann, W.K., Taksar, M.I., & Heyman, D.P. (1985). Regenerative analysis and steady-state distributions of Markov chains. Operations Research 33: 1107–1116.
8. Hajek, B. (1982). Birth and death processes on the integers with phases and general boundaries. Journal of Applied Probability 19: 488–499.
9. Haviv, M. (1987). Aggregation/disaggregation methods for computing the stationary distribution of a Markov chain. SIAM Journal on Numerical Analysis 24(4): 952–966.
10. Keilson, J. (1979). Markov chain models: Rarity and exponentiality. New York: Springer-Verlag.
11. Keilson, J. & Kester, A. (1977). A circulatory model for human metabolism. William E. Simon School of Business Administration, University of Rochester, Working Paper Series No. 7724.
12. Keilson, J., Sumita, U., & Zachmann, M. (1981). Row continuous finite Markov chains: Structure and algorithms. Laboratory for Information and Decision Systems Report LIDS-P-1078, MIT.
13. Keilson, J. & Zachmann, M. (1981). Homogeneous row-continuous bivariate Markov chains with boundaries. William E. Simon School of Business Administration, University of Rochester, Working Paper Series No. QM8120.
14. Kemeny, J.G., Snell, J.L., & Knapp, A.W. (1966). Denumerable Markov chains. New York: Van Nostrand.
15. Lal, R. & Bhat, U.N. (1985). Reduced systems in Markov chains. Technical Report, California State University, Fullerton, California.
16. Latouche, G., Jacobs, P.A., & Gaver, D.P. (1984). Finite Markov chain models skip-free in one direction. Naval Research Logistics Quarterly 31: 571–588.
17. Meyer, C.D. (1987). Uncoupling Markov chains and the Simon-Ando theory of nearly reducible systems. North Carolina State University CRSC Technical Report No. 10018701.
18. Neuts, M.F. (1981). Matrix-geometric solutions in stochastic models: An algorithmic approach. Baltimore, Maryland: The Johns Hopkins University Press.
19. Noble, B. (1966). Applied linear algebra. Englewood Cliffs, New Jersey: Prentice-Hall.
20. Ramaswami, V. & Lucantoni, D.M. (1984). Algorithms for the multiserver queue with phase-type service. Technical Report, Bell Communications Research, Inc., and AT&T Bell Laboratories.
21. Ross, S.M. (1982). Stochastic processes. New York: Wiley.
22. Ross, S.M. (1987). Approximating transition probabilities and mean occupation times in continuous-time Markov chains. Probability in the Engineering and Informational Sciences 1: 251–264.
23. Schassberger, R. (1984). An aggregation principle for computing invariant probability vectors in semi-Markovian models. In Iazeolla, G., Courtois, P.J., & Hordijk, A. (eds.), Mathematical computer performance and reliability. Amsterdam: North-Holland, pp. 259–272.
24. Schweitzer, P. (1984). Aggregation methods for large Markov chains. In Iazeolla, G., Courtois, P.J., & Hordijk, A. (eds.), Mathematical computer performance and reliability. Amsterdam: North-Holland, pp. 275–286.
25. Schweitzer, P., Puterman, M., & Kindle, K. (1985). Iterative aggregation-disaggregation procedures for discounted semi-Markov reward processes. Operations Research 33: 589–605.
26. Shanthikumar, J.G., Sumita, U., & Rieders, M. (1988). Rank reduction techniques for large Markov chains via mean dwell times and ergodic flows. In preparation.
27. Sumita, U. & Rieders, M. (1989). Lumpability and time reversibility in the aggregation-disaggregation method for large Markov chains. Stochastic Models 5: 63–81.
28. Sumita, U. & Rieders, M. (1989). Application of the replacement process approach for computing the ergodic probability vector of large-scale row-continuous Markov chains. William E. Simon School of Business Administration, University of Rochester, Working Paper Series No. QM88-10.
29. Takahashi, Y. (1975). A lumping method for numerical calculations of stationary distributions of Markov chains. Research Report No. B-18, Department of Information Sciences, Tokyo Institute of Technology.
30. Takahashi, Y. (1988). Private communication.
31. Takahashi, Y. & Takami, Y. (1976). A numerical method for the steady-state probabilities of a GI/G/c queueing system in a general case. Journal of the Operations Research Society of Japan 19(2): 147–157.
32. Vantilborgh, H. (1978). Exact aggregation in exponential queueing networks. Journal of the ACM 25: 620–629.
33. Vantilborgh, H. (1985). Aggregation with an error of O(ε²). Journal of the ACM 32(1): 162–190.