Published online by Cambridge University Press: 01 July 2016
A state-space model is presented for a queueing system in which two classes of customer compete in discrete time for the service attention of a single server with infinite buffer capacity. Arrivals are modelled by an independent and identically distributed random sequence of a general type, while service completions are generated by independent Bernoulli streams; the allocation of service attention is governed by feedback policies based on past decisions and buffer-content histories. The cost of operation per unit time is a linear function of the queue sizes. Under the model assumptions, a fixed prioritization scheme, known as the μc-rule, is shown to be optimal under the expected long-run average criterion and the expected discounted criterion, over both finite and infinite horizons. This static prioritization of the two classes of customers is made solely on the basis of service and cost parameters. The analysis is based on the dynamic programming methodology for Markov decision processes and takes advantage of the sample-path properties of the adopted state-space model.
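As an illustration only, not drawn from the paper itself, the following minimal sketch simulates a two-class discrete-time queue operated under the μc-rule: among the nonempty classes, serve the one with the largest product of service probability μ and holding cost c. The parameters `mu`, `c`, and `lam`, the Bernoulli arrival streams, and the function names are all hypothetical simplifications; the paper allows a general i.i.d. arrival sequence.

```python
import random

# Hypothetical parameters (illustrative only): Bernoulli service-completion
# probabilities mu[i] and linear holding costs c[i] for the two classes.
mu = [0.6, 0.3]      # service-completion probabilities per slot
c = [2.0, 5.0]       # holding cost per waiting customer per slot
lam = [0.2, 0.2]     # Bernoulli arrival probabilities (simplified)

def muc_priority(queues):
    """Return the class to serve under the mu-c rule: among nonempty
    queues, pick the one with the largest mu[i] * c[i]."""
    nonempty = [i for i, q in enumerate(queues) if q > 0]
    if not nonempty:
        return None
    return max(nonempty, key=lambda i: mu[i] * c[i])

def simulate(horizon=100_000, seed=0):
    rng = random.Random(seed)
    queues = [0, 0]
    total_cost = 0.0
    for _ in range(horizon):
        # Accumulate the linear holding cost for this slot.
        total_cost += sum(ci * qi for ci, qi in zip(c, queues))
        # Serve the mu-c priority class; completion is Bernoulli(mu[i]).
        i = muc_priority(queues)
        if i is not None and rng.random() < mu[i]:
            queues[i] -= 1
        # Independent Bernoulli arrivals to each class.
        for j in range(2):
            if rng.random() < lam[j]:
                queues[j] += 1
    return total_cost / horizon

print("average cost per slot under the mu-c rule:", simulate())
```

Replacing `muc_priority` with a policy that favours the class with the smaller μc product and comparing the resulting long-run average costs gives a quick empirical sense of the optimality result established in the paper.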