
Perturbation theory for unbounded Markov reward processes with applications to queueing

Published online by Cambridge University Press:  01 July 2016

Nico M. Van Dijk*
Affiliation: Twente University of Technology
* Present address: Faculty of Economical Sciences and Econometrics, Free University, P.O. Box 7161, 1007 MC Amsterdam, The Netherlands.

Abstract

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite-horizon and average reward functions. The results of [3] are thereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1 queue and an overflow queueing model with an error bound on the arrival rate.
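
As a rough numerical illustration of the kind of quantity such perturbation estimates control, the sketch below compares the finite-horizon expected reward of a truncated, uniformized M/M/1 queue under a nominal and a perturbed arrival rate, with the queue length as the (unbounded-type) one-step reward. All names and parameter values (lam, mu, eps, N, T) are illustrative assumptions and do not follow the paper's notation or reproduce its bound.

import numpy as np

def mm1_transition_matrix(lam, mu, N, rate):
    # One-step matrix of a uniformized M/M/1 queue truncated at state N,
    # using a common uniformization constant `rate` >= lam + mu.
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        up = lam / rate if i < N else 0.0
        down = mu / rate if i > 0 else 0.0
        if i < N:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1.0 - up - down      # self-loop keeps each row stochastic
    return P

def finite_horizon_reward(P, r, T, start=0):
    # Expected cumulative reward over T steps: v_T = sum_{t=0}^{T-1} P^t r.
    v = np.zeros(P.shape[0])
    for _ in range(T):
        v = r + P @ v
    return v[start]

# Illustrative parameters (assumptions, not taken from the paper).
lam, mu, eps, N, T = 0.7, 1.0, 0.05, 50, 200
rate = (lam + eps) + mu                # common uniformization constant
r = np.arange(N + 1, dtype=float)      # reward = queue length (unbounded type)

P_nom = mm1_transition_matrix(lam, mu, N, rate)
P_pert = mm1_transition_matrix(lam + eps, mu, N, rate)  # perturbed arrival rate

v_nom = finite_horizon_reward(P_nom, r, T)
v_pert = finite_horizon_reward(P_pert, r, T)
print("effect of the perturbation on the finite-horizon reward:",
      abs(v_pert - v_nom))

For the truncated chain the printed difference is the exact perturbation effect computed by brute force; the point of perturbation estimates of the kind derived in the paper is to bound such differences analytically, without solving both the nominal and the perturbed model.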

Type: Research Article
Copyright: © Applied Probability Trust 1988


References

[1] Cooper, R. B. (1984) Introduction to Queueing Theory. North-Holland.
[2] Van Doorn, E. A. (1984) On the overflow process from a finite Markovian queue. Performance Eval. 4, 233–240.
[3] Van Dijk, N. M. and Puterman, M. L. (1988) Perturbation theory for Markov reward processes with applications to queueing systems. Adv. Appl. Prob. 20, 79–98.
[4] Hinderer, K. (1978) On approximate solutions of finite-stage dynamic programs. In Dynamic Programming and its Applications, ed. Puterman, M. L., Academic Press, New York.
[5] Kemeny, J. G., Snell, J. L. and Knapp, A. W. (1966) Denumerable Markov Chains. Van Nostrand, Princeton, NJ.
[6] Meyer, C. D. Jr. (1980) The condition of a finite Markov chain and perturbation bounds for the limiting probabilities. SIAM J. Alg. Disc. Math. 1, 273–283.
[7] Schweitzer, P. J. (1968) Perturbation theory and finite Markov chains. J. Appl. Prob. 5, 401–413.
[8] Tijms, H. C. (1986) Stochastic Modelling and Analysis: A Computational Approach. Wiley, New York.
[9] Whitt, W. (1978) Approximations of dynamic programs I. Math. Operat. Res. 3, 231–243.