
Optimal control of stochastic Ito differential systems with fixed terminal time

Published online by Cambridge University Press:  01 July 2016

N. U. Ahmed
Affiliation:
University of Ottawa
K. L. Teo
Affiliation:
University of New South Wales

Abstract

In this paper, the authors consider a class of stochastic systems described by Ito differential equations for which both controls and parameters are to be chosen optimally with respect to a certain performance index over a fixed time interval. The controls to be optimized depend only on partially observed current states, as in the work of Fleming. Fleming, however, considered instead a problem of optimal control of systems governed by stochastic Ito differential equations with a Markov terminal time. Fixed-time problems usually give rise to Cauchy problems (unbounded domain), whereas Markov-time problems give rise to first boundary value problems (bounded domain); this makes the former problems relatively more involved than the latter. For the latter problems, Fleming has reported a necessary condition for optimality and an existence theorem for optimal controls. In this paper, a necessary condition for optimality with respect to controls and parameters combined is presented for the former problems.
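
For concreteness, the fixed-terminal-time problem class described above may be sketched in LaTeX as follows. This is a minimal sketch assuming a standard partially observed Ito control model; all symbols (x_t, y_t, u, v, W_t, l, Phi) are illustrative and are not the authors' notation, which is given in the full paper.

% Minimal sketch, assuming a standard partially observed Ito control model.
% State dynamics on a fixed interval [0, T], with drift depending on the
% control u (a function of the observed component y_t only) and a parameter v:
\[
  dx_t = f\bigl(t, x_t, u(t, y_t), v\bigr)\,dt + \sigma(t, x_t)\,dW_t,
  \qquad 0 \le t \le T \ (T \text{ fixed}).
\]
% Performance index to be minimised jointly over the control u and the
% parameter v:
\[
  J(u, v) = E\!\left[\int_0^T \ell\bigl(t, x_t, u(t, y_t), v\bigr)\,dt
            + \Phi(x_T)\right].
\]
% Because the terminal time T is fixed rather than a Markov (stopping) time,
% the associated backward parabolic equation is a Cauchy problem on an
% unbounded domain, as noted in the abstract.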

Type
Research Article
Copyright
Copyright © Applied Probability Trust 1975 

References

[1] Aronson, D. G. (1968) Non-negative solutions of linear parabolic equations. Ann. Scuola Norm. Sup. Pisa 22, 607–694.
[2] Aronson, D. G. and Serrin, J. (1967) Local behavior of solutions of quasilinear parabolic equations. Arch. Rational Mech. Anal. 25, 81–122.
[3] Fleming, W. H. (1968) Optimal control of partially observable diffusions. SIAM J. Control 6, 194–214.
[4] Ladyzhenskaya, O. A., Solonnikov, V. A. and Ural'ceva, N. N. (1968) Linear and quasilinear equations of parabolic type. Translations of Mathematical Monographs, American Mathematical Society, Providence.
[5] Stroock, D. W. and Varadhan, S. R. S. (1969) Diffusion processes with continuous coefficients, I. Comm. Pure and Appl. Math. 22, 345–400.
[6] Sobolev, S. L. (1963) Some applications of functional analysis to mathematical physics. Translations of Mathematical Monographs, American Mathematical Society, Providence.
[7] Ahmed, N. U. and Teo, K. L. (1974) An existence theorem on optimal control of partially observable diffusions. SIAM J. Control 12, 351–355.
[8] Teo, K. L. and Ahmed, N. U. (1974) Optimal feedback control for a class of stochastic systems. Int. J. Systems Sci. 5, 357–365.