
Optimal stopping in a partially observable binary-valued Markov chain with costly perfect information

Published online by Cambridge University Press: 14 July 2016

George E. Monahan*
Affiliation:
Georgia Institute of Technology
* Postal address: College of Management, Georgia Institute of Technology, Atlanta, GA 30332, U.S.A.

Abstract

The problem of optimal stopping in a Markov chain when there is imperfect state information is formulated as a partially observable Markov decision process. Properties of the optimal value function are developed. It is shown that under mild conditions the optimal policy is well structured. An efficient algorithm, which uses the structural information in the computation of the optimal policy, is presented.
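To make the formulation concrete, the following is a minimal sketch of the kind of dynamic programming recursion such a model leads to. The notation here is illustrative rather than the paper's own: $p$ denotes the current posterior probability that the chain is in state 1, $r(p)$ the expected reward from stopping, $\beta$ a discount factor, $c$ the cost of purchasing perfect information about the current state, $\tau(p,y)$ the Bayesian update of $p$ after a transition and an imperfect signal $y$, and $q_i$ the next-period probability of state 1 given that the current state is known to be $i$. Under these assumptions the optimal value function would satisfy a relation of the form

\[
V(p) \;=\; \max\Bigl\{\, r(p),\;\; \beta\,\mathbb{E}_y\bigl[V(\tau(p,y))\bigr],\;\; -c + \beta\bigl[p\,V(q_1) + (1-p)\,V(q_0)\bigr] \Bigr\},
\]

where the three terms correspond to stopping now, continuing on the basis of the imperfect observation, and paying $c$ to learn the true state before continuing. The structural results described in the abstract presumably concern how the maximizing action varies with the belief $p$; the paper's exact formulation may differ from this sketch.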

Type: Research Paper
Copyright: © Applied Probability Trust 1982

