
A LEARNING ALGORITHM FOR DISCRETE-TIME STOCHASTIC CONTROL

Published online by Cambridge University Press:  01 April 2000

V. S. Borkar
Affiliation:
School of Technology and Computer Science, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India, E-mail: [email protected]

Abstract

A simulation-based algorithm for learning good policies for a discrete-time stochastic control process with unknown transition law is analyzed when the state and action spaces are compact subsets of Euclidean spaces. This extends the Q-learning scheme for discrete state/action problems along the lines of Baker [4]. Almost sure convergence is proved under suitable conditions.
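For orientation, a minimal sketch of the classical tabular Q-learning recursion that the article generalizes to compact state/action spaces is given below. The simulator interface `sample_transition`, the epsilon-greedy exploration rule, and the step-size schedule are illustrative assumptions; this is the discrete baseline scheme, not the algorithm analyzed in the paper.

```python
import numpy as np

def q_learning(sample_transition, n_states, n_actions,
               gamma=0.95, steps=5000, alpha0=0.5, eps=0.1, seed=0):
    """Classical tabular Q-learning for a discrete MDP.

    `sample_transition(s, a)` is a hypothetical simulator returning
    (next_state, reward); the transition law itself is never used,
    which is the sense in which the scheme is simulation-based.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(steps):
        # epsilon-greedy action choice (illustrative exploration rule)
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r = sample_transition(s, a)
        visits[s, a] += 1
        alpha = alpha0 / visits[s, a]  # decaying step size, as in stochastic approximation
        # move Q(s, a) toward the sampled Bellman target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
    return Q
```

The article's contribution is a convergence analysis when the state and action spaces are compact subsets of Euclidean spaces rather than finite sets, so the tabular array above would be replaced by a suitable function representation; the sketch only conveys the underlying recursion.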

Type
Research Article
Copyright
© 2000 Cambridge University Press
