Published online by Cambridge University Press: 06 March 2006
We consider a scheduling problem with two interconnected queues and two flexible servers. All jobs are assumed to be present at the beginning, and there are no further arrivals to the system at any time. Each job incurs a holding cost per unit of time until it leaves the system. A job in queue 1, after being served, joins queue 2 with probability p and leaves the system with probability 1 − p. The objective is to allocate the two servers to the queues so that the expected total holding cost until the system is empty is minimized. We give a sufficient condition under which, for any numbers of jobs in queue 1 and queue 2, it is optimal to allocate both servers to queue 1 (resp., queue 2).
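To make the control problem concrete, the following is a minimal dynamic-programming sketch of such a clearing system, assuming exponential services with additive per-server rates and collaborative servers; the rates mu1 and mu2, the holding costs c1 and c2, the probability p, and the function V are illustrative assumptions, not the paper's specification.

```python
from functools import lru_cache

# Illustrative parameters (assumptions, not from the paper):
mu1, mu2 = 1.0, 0.8   # per-server exponential service rates at queues 1 and 2
c1, c2 = 2.0, 1.0     # holding cost per job per unit of time
p = 0.5               # prob. a served queue-1 job joins queue 2

@lru_cache(maxsize=None)
def V(n1: int, n2: int) -> float:
    """Minimal expected total holding cost to empty the system from state (n1, n2)."""
    if n1 == 0 and n2 == 0:
        return 0.0
    best = float("inf")
    for a in (0, 1, 2):                      # a = number of servers allocated to queue 1
        r1 = a * mu1 if n1 > 0 else 0.0      # servers at an empty queue produce no completions
        r2 = (2 - a) * mu2 if n2 > 0 else 0.0
        rate = r1 + r2
        if rate == 0.0:                      # infeasible allocation: no work being done
            continue
        # Holding cost accrues until the next completion (expected time 1/rate);
        # a queue-1 completion routes to queue 2 w.p. p and departs w.p. 1 - p.
        cost = (c1 * n1 + c2 * n2
                + r1 * (p * V(n1 - 1, n2 + 1) + (1 - p) * V(n1 - 1, n2))
                + r2 * V(n1, n2 - 1)) / rate
        best = min(best, cost)
    return best

print(V(3, 2))  # optimal expected holding cost starting with 3 jobs in queue 1, 2 in queue 2
```

The recursion terminates because every transition strictly decreases 2·n1 + n2, reflecting the clearing-system structure: with no arrivals, each state is visited at most once on any sample path.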