Blackwell Optimality for Controlled Diffusion Processes
Published online by Cambridge University Press: 14 July 2016
Abstract
In this paper we study m-discount optimality (m ≥ −1) and Blackwell optimality for a general class of controlled (Markov) diffusion processes. To this end, a key step is to express the expected discounted reward function as a Laurent series, and then to search for control policies that lexicographically maximize the mth coefficient of this series for m = −1, 0, 1, …. This approach naturally leads to m-discount optimality, and it yields Blackwell optimality in the limit as m → ∞.
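To make the criteria concrete, here is a brief sketch in standard notation; the symbols $V_\alpha$, $h_m$, $\pi$, $x$, and $\alpha_0$ below are illustrative and are not taken from the paper. For a stationary policy $\pi$, initial state $x$, and discount rate $\alpha > 0$, the expected discounted reward admits, for all sufficiently small $\alpha$, a Laurent expansion

\[
V_\alpha(x,\pi) \;=\; \sum_{m=-1}^{\infty} \alpha^{m}\, h_m(x,\pi),
\]

where $h_{-1}$ is the long-run average (ergodic) reward and $h_0$ is the bias. A policy $\pi^{*}$ is $m$-discount optimal if

\[
\liminf_{\alpha \downarrow 0}\, \alpha^{-m}\bigl[\,V_\alpha(x,\pi^{*}) - V_\alpha(x,\pi)\,\bigr] \;\ge\; 0
\qquad \text{for every policy } \pi \text{ and every state } x,
\]

and Blackwell optimal if $V_\alpha(x,\pi^{*}) \ge V_\alpha(x,\pi)$ for all $\alpha \in (0,\alpha_0)$ and some $\alpha_0 > 0$. Lexicographically maximizing the coefficients $h_{-1}, h_0, h_1, \dots$ produces $m$-discount optimal policies for each $m$, and Blackwell optimality is obtained in the limit as $m \to \infty$.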
- Type: Research Article
- Copyright: © Applied Probability Trust 2009