APPROXIMATING THE VALUE FUNCTION FOR OPTIMAL EXPERIMENTATION
Published online by Cambridge University Press: 14 November 2018
Abstract
In the economics literature, there are two dominant approaches to solving models with optimal experimentation (also called active learning): the first is based on the value function, the second on an approximation method. In principle, the value function approach is the preferred method. However, it suffers from the curse of dimensionality and is applicable only to small problems with a limited number of policy variables. The approximation method can handle a computationally larger class of models, but may produce results that deviate from the optimal solution. Our simulations indicate that when the effects of learning are limited, the differences may be small. However, when there is sufficient scope for learning, the value function solution appears more aggressive in its use of the policy variable.
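To make the trade-off concrete, consider the simplest setting in which the value function approach is feasible: a two-armed bandit with one safe arm of known payoff and one risky arm whose success probability is learned through a Beta posterior. The Python sketch below is purely illustrative and is not the paper's model or the authors' software; SAFE_PAYOFF, HORIZON, and DISCOUNT are assumed values. It computes the exact value function by backward induction over the belief state and shows why the optimal policy experiments: the risky arm can be worth pulling even when its posterior mean does not exceed the safe payoff.

from functools import lru_cache

# Illustrative parameters (assumptions, not taken from the paper):
SAFE_PAYOFF = 0.5   # known per-period payoff of the safe arm
HORIZON = 20        # number of decision periods
DISCOUNT = 0.95     # discount factor

@lru_cache(maxsize=None)
def value(a, b, t):
    """Optimal value with Beta(a, b) beliefs about the risky arm and t periods left."""
    if t == 0:
        return 0.0
    p = a / (a + b)  # posterior mean success probability of the risky arm
    # Safe arm: known payoff, beliefs unchanged.
    v_safe = SAFE_PAYOFF + DISCOUNT * value(a, b, t - 1)
    # Risky arm: payoff 1 on success, 0 on failure; beliefs update either way,
    # so pulling it buys information as well as expected payoff.
    v_risky = (p * (1.0 + DISCOUNT * value(a + 1, b, t - 1))
               + (1.0 - p) * DISCOUNT * value(a, b + 1, t - 1))
    return max(v_safe, v_risky)

# With a uniform prior the posterior mean equals the safe payoff, yet the
# optimal policy pulls the risky arm: the value of information breaks the tie.
a0, b0 = 1, 1
v_safe = SAFE_PAYOFF + DISCOUNT * value(a0, b0, HORIZON - 1)
v_risky = (0.5 * (1.0 + DISCOUNT * value(a0 + 1, b0, HORIZON - 1))
           + 0.5 * DISCOUNT * value(a0, b0 + 1, HORIZON - 1))
print(f"safe: {v_safe:.3f}  risky: {v_risky:.3f}  -> "
      f"{'experiment (risky arm)' if v_risky > v_safe else 'safe arm'}")

A certainty-equivalence approximation that ignores the belief dynamics would compare only the posterior mean to the safe payoff and never strictly prefer the risky arm here; the exact solution experiments more aggressively, in the spirit of the abstract's finding. The belief state (a, b) stays small in this toy problem, but with several policy variables and unknown parameters the belief space explodes, which is the curse of dimensionality the abstract refers to.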
Type: Articles
Copyright: © Cambridge University Press 2018
Footnotes
We would like to thank Volker Wieland for providing us with his software used in this paper. Furthermore, in writing this paper we have benefited greatly from the discussions we had with Tom Cosimano and Volker Wieland and the feedback from an anonymous referee.