
Time-average optimal constrained semi-Markov decision processes

Published online by Cambridge University Press: 01 July 2016

Frederick J. Beutler*
Affiliation: The University of Michigan, Ann Arbor

Keith W. Ross**
Affiliation: University of Pennsylvania

* Postal address: Computer, Information and Control Engineering Program, The University of Michigan, Ann Arbor, MI 48109, USA.
** Postal address: Systems Engineering Department, Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA.

Abstract

Optimal causal policies maximizing the time-average reward over a semi-Markov decision process (SMDP), subject to a hard constraint on a time-average cost, are considered. Rewards and costs depend on the state and action, and contain running as well as switching components. It is supposed that the state space of the SMDP is finite, and that the action space is a compact metric space. The policy determines an action at each transition point of the SMDP.
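To fix ideas, the running components of the criterion can be written in generic notation (the symbols below are illustrative, not drawn from the paper itself): under a causal policy $u$,

$$
\text{maximize}\quad \liminf_{t\to\infty}\,\frac{1}{t}\,E^{u}\!\left[\int_0^t r\bigl(x(s),a(s)\bigr)\,ds\right]
\qquad\text{subject to}\qquad
\limsup_{t\to\infty}\,\frac{1}{t}\,E^{u}\!\left[\int_0^t c\bigl(x(s),a(s)\bigr)\,ds\right]\le\alpha,
$$

where $x(s)$ is the state, $a(s)$ the action in force at time $s$, $r$ and $c$ the running reward and cost rates, and $\alpha$ the constraint level; the switching components contribute additional lump rewards and costs collected at the transition epochs. The equivalence of time-average notions noted below means the choice between $\liminf$ and $\limsup$ is immaterial under the accessibility hypothesis.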

Under an accessibility hypothesis, several notions of time average are equivalent. A Lagrange multiplier formulation involving a dynamic programming equation is used to relate the constrained optimization to an unconstrained optimization parametrized by the multiplier. This approach leads to a proof of the existence of a semi-simple optimal constrained policy: there is at most one state at which the action is randomized between two possibilities, while at every other state a single action is chosen deterministically. Affine forms for the rewards, costs and transition probabilities further reduce the optimal constrained policy to ‘almost bang-bang’ form, in which the policy is nowhere randomized and is bang-bang except perhaps at one state. Under the same assumptions, one can alternatively find an optimal constrained policy that is strictly bang-bang but may be randomized at one state. The results are applied to flow control of a birth-and-death process (e.g. an M/M/s queue); under certain monotonicity restrictions on the reward and cost structure the preceding results apply, and in addition the optimal policy has a simple acceptance region.
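As a concrete illustration of the flow-control application, the following minimal sketch (our own construction, not the paper's; an M/M/1 queue and the parameters lam, mu, alpha, along with all helper names, are assumptions made for illustration) computes the time averages from the stationary distribution of the controlled birth-and-death chain, grows the acceptance region while the mean-queue-length constraint holds under full acceptance, and then bisects the acceptance probability at the single boundary state, matching the semi-simple structure described above:

```python
import numpy as np

def stationary(lam, mu, L, p):
    """Stationary distribution of the controlled M/M/1 queue: arrivals
    are accepted in states x < L, accepted with probability p in state
    x == L, and rejected above, so the chain lives on {0, ..., L + 1}."""
    birth = [lam] * L + [p * lam]          # effective birth rates under the policy
    pi = [1.0]
    for b in birth:                        # detailed balance: pi[x+1] * mu = pi[x] * b
        pi.append(pi[-1] * b / mu)
    pi = np.array(pi)
    return pi / pi.sum()

def averages(lam, mu, L, p):
    """Time-average throughput (reward) and mean queue length (cost)."""
    pi = stationary(lam, mu, L, p)
    accept = np.array([lam] * L + [p * lam] + [0.0])   # accepted arrival rate per state
    return float(pi @ accept), float(pi @ np.arange(len(pi)))

def constrained_policy(lam, mu, alpha):
    """Grow the acceptance region while the cost constraint holds under
    full acceptance, then randomize at the single boundary state."""
    L = 0
    # Assumes the constraint is binding (alpha below the unconstrained mean queue length).
    while averages(lam, mu, L, 1.0)[1] <= alpha:
        L += 1
    lo, hi = 0.0, 1.0
    for _ in range(60):                    # bisection on the mixing probability at state L
        p = 0.5 * (lo + hi)
        if averages(lam, mu, L, p)[1] > alpha:
            hi = p
        else:
            lo = p
    return L, lo

lam, mu, alpha = 0.9, 1.0, 2.0             # illustrative parameters only
L, p = constrained_policy(lam, mu, alpha)
print(L, p, averages(lam, mu, L, p))
```

With these illustrative parameters the search returns threshold L = 4 and mixing probability p ≈ 0.48: arrivals are accepted below state 4, accepted with probability p in state 4, and rejected above, so the policy is bang-bang except at exactly one state and meets the cost constraint with equality.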

Research Article
Copyright © Applied Probability Trust 1986
