
Optimal replacement policy with unobservable states

Published online by Cambridge University Press:  14 July 2016

Robert C. Wang*
Affiliation:
Mountain Bell, Denver, Colorado
* Now at Bell System Center for Technical Education, Lisle, Illinois.

Abstract

In this paper we solve for the optimal policy in a Markovian replacement model in which the state of the machine is not observable. We consider both discounted and average costs, and we discuss two examples.
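When the machine's state is unobservable, a standard device is to work with the belief (the probability that the machine is in the bad state) and solve the resulting dynamic program on the belief space. The following is a minimal illustrative sketch of that idea for a hypothetical two-state machine under discounted costs, using value iteration on a discretized belief grid; it is not the paper's construction, and all numerical parameters (deterioration probability, costs, discount factor) are made-up assumptions.

```python
# Hedged sketch: belief-state value iteration for a two-state replacement
# problem with an unobservable machine state. All parameters are assumed
# for illustration only.
import numpy as np

p = 0.2                    # assumed P(good -> bad) per period
c_good, c_bad = 1.0, 5.0   # assumed operating costs in each state
R = 4.0                    # assumed replacement cost
beta = 0.9                 # discount factor

grid = np.linspace(0.0, 1.0, 101)   # belief b = P(machine is bad)
V = np.zeros_like(grid)

def interp(V, b):
    # linear interpolation of the value function between grid points
    return np.interp(b, grid, V)

for _ in range(2000):
    # keep: pay the expected operating cost; with no observation, the
    # belief simply drifts upward under the deterioration dynamics
    b_next = grid + (1.0 - grid) * p
    keep = (1 - grid) * c_good + grid * c_bad + beta * interp(V, b_next)
    # replace: pay R, run a fresh (good) machine this period, so the
    # belief next period is p regardless of the current belief
    replace = R + c_good + beta * interp(V, np.full_like(grid, p))
    V_new = np.minimum(keep, replace)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Read the policy off the converged value function
b_next = grid + (1.0 - grid) * p
keep = (1 - grid) * c_good + grid * c_bad + beta * interp(V, b_next)
replace = R + c_good + beta * interp(V, np.full_like(grid, p))
replace_region = replace < keep
```

Under these assumptions the one-period advantage of keeping over replacing is increasing in the belief, so the optimal policy is a control limit: keep while the belief is below a threshold, replace above it.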

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1977 
