
Discrete-time singularly perturbed Markov chains: aggregation, occupation measures, and switching diffusion limit

Published online by Cambridge University Press: 22 February 2016

G. Yin*
Affiliation:
Wayne State University
Q. Zhang**
Affiliation:
University of Georgia
G. Badowski***
Affiliation:
Wayne State University
* Postal address: Department of Mathematics, Wayne State University, Detroit, MI 48202, USA. Email address: [email protected]
** Postal address: Department of Mathematics, University of Georgia, Athens, GA 30602, USA.
*** Current address: Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.

Abstract

This work is devoted to asymptotic properties of singularly perturbed Markov chains in discrete time. The motivation stems from applications in discrete-time control and optimization problems, manufacturing and production planning, stochastic networks, and communication systems, in which finite-state Markov chains are used to model large-scale and complex systems. To reduce the complexity of the underlying system, the states in each recurrent class are aggregated into a single state. Although the aggregated process may not be Markovian, its continuous-time interpolation converges to a continuous-time Markov chain whose generator is a function determined by the invariant measures of the recurrent states. Sequences of occupation measures are defined. A mean square estimate on a sequence of unscaled occupation measures is obtained. Furthermore, it is proved that a suitably scaled sequence of occupation measures converges to a switching diffusion.
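As a concrete illustration of the aggregation described above, the following minimal numerical sketch assumes the standard two-time-scale model P_eps = P + eps*Q, where P is block diagonal with irreducible recurrent blocks and Q is a generator-type perturbation; the specific matrices and the helper function stationary are illustrative choices, not taken from the paper. The sketch computes the stationary distributions of the recurrent blocks and assembles the generator of the limiting continuous-time Markov chain for the aggregated process.

import numpy as np

# Assumed two-time-scale model (illustrative, not the paper's exact setup):
# one-step transition matrix P_eps = P + eps*Q, with P block diagonal over
# two irreducible recurrent classes and Q a generator-type perturbation.
eps = 0.01
P = np.block([
    [np.array([[0.7, 0.3], [0.4, 0.6]]), np.zeros((2, 2))],
    [np.zeros((2, 2)), np.array([[0.5, 0.5], [0.2, 0.8]])],
])
Q = np.array([
    [-1.0,  0.0,  1.0,  0.0],
    [ 0.0, -1.0,  0.0,  1.0],
    [ 1.0,  0.0, -1.0,  0.0],
    [ 0.0,  1.0,  0.0, -1.0],
])
P_eps = P + eps * Q
# Rows of Q sum to zero, so P_eps is again a stochastic matrix for small eps.
assert np.allclose(P_eps.sum(axis=1), 1.0) and (P_eps >= 0).all()

def stationary(block):
    # Stationary (invariant) distribution of an irreducible stochastic matrix:
    # the left eigenvector for eigenvalue 1, normalised to sum to one.
    w, v = np.linalg.eig(block.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

nu1 = stationary(P[:2, :2])   # invariant measure of recurrent class 1
nu2 = stationary(P[2:, 2:])   # invariant measure of recurrent class 2

# Generator of the limit chain for the aggregated process:
# Q_bar = diag(nu1, nu2) * Q * diag(1_{m1}, 1_{m2}).
nu_diag = np.block([
    [nu1.reshape(1, 2), np.zeros((1, 2))],
    [np.zeros((1, 2)), nu2.reshape(1, 2)],
])
one_blocks = np.block([
    [np.ones((2, 1)), np.zeros((2, 1))],
    [np.zeros((2, 1)), np.ones((2, 1))],
])
Q_bar = nu_diag @ Q @ one_blocks
print(Q_bar)  # 2x2 generator: each aggregated class becomes a single state

For this choice of P and Q, the script prints the generator [[-1, 1], [1, -1]], showing how the invariant measures of the recurrent classes determine the limiting dynamics of the aggregated process.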

Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 2003 


Footnotes

* Supported in part by the National Science Foundation under grant DMS-9877090.

** Supported in part by USAF grant F30602-99-2-0548 and ONR grant N00014-96-1-0263.

*** Supported in part by Wayne State University.
