Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- 1 Inference and estimation in probabilistic time series models
- I Monte Carlo
- II Deterministic approximations
- 5 Two problems with variational expectation maximisation for time series models
- 6 Approximate inference for continuous-time Markov processes
- 7 Expectation propagation and generalised EP methods for inference in switching linear dynamical systems
- 8 Approximate inference in switching linear dynamical systems using Gaussian mixtures
- III Switching models
- IV Multi-object models
- V Nonparametric models
- VI Agent-based models
- Index
- Plate section
- References
5 - Two problems with variational expectation maximisation for time series models
from II - Deterministic approximations
Published online by Cambridge University Press: 07 September 2011
Summary
Introduction
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular, the variational expectation maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free energy, are widely used in time series modelling. Here, we investigate the success of vEM in simple probabilistic time series models. First, we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
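The free-energy optimisation referred to above rests on a single standard identity. In notation of our own choosing (observations $y$, latent variables $x$, parameters $\theta$, and a variational distribution $q(x)$; the chapter's own notation may differ), it reads:

```latex
% Standard free-energy decomposition underlying vEM (our notation, not
% necessarily the chapter's): y = observations, x = latents, theta = parameters.
\begin{align*}
\mathcal{F}(q,\theta)
  &= \int q(x)\,\log\frac{p(y,x\mid\theta)}{q(x)}\,\mathrm{d}x\\
  &= \log p(y\mid\theta)
     - \operatorname{KL}\!\bigl(q(x)\,\big\|\,p(x\mid y,\theta)\bigr)
  \;\le\; \log p(y\mid\theta).
\end{align*}
```

Because the KL term penalises $q$ for placing mass where the true posterior has little, a constrained $q$ is driven to be compact; this is the mechanism behind the under-propagated uncertainty described above.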
The variational approach
We begin this chapter with a brief theoretical review of the variational expectation maximisation algorithm, before illustrating the important concepts with a simple example in the next section. The vEM algorithm is an approximate version of the expectation maximisation (EM) algorithm [4]. Expectation maximisation is a standard approach to finding maximum likelihood (ML) parameters for latent variable models, including hidden Markov models and linear or non-linear state space models (SSMs) for time series.
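As a concrete sketch of the algorithm reviewed here, the following Python implements vEM for a toy linear-Gaussian AR(1) state space model with a fully factorised (mean-field) Gaussian approximation. The model, the update equations, and every name in it (`vem`, `lam`, `sig2`, `r2`) are illustrative assumptions of ours, not code or notation from the chapter:

```python
import numpy as np

def vem(y, sig2=1.0, r2=1.0, lam=0.5, n_iter=50):
    """Mean-field vEM sketch for x_t = lam*x_{t-1} + N(0, sig2), y_t = x_t + N(0, r2).

    q(x) = prod_t N(x_t; m[t], s2[t]); an improper flat prior on x_1 is
    assumed for simplicity. Illustrative only, not the chapter's code.
    """
    T = len(y)
    m, s2 = np.zeros(T), np.ones(T)
    for _ in range(n_iter):
        # E-step: coordinate ascent on the free energy, one Gaussian factor at a time.
        for t in range(T):
            prec, nat = 1.0 / r2, y[t] / r2      # likelihood contribution
            if t > 0:                            # dynamics term from x_{t-1}
                prec += 1.0 / sig2
                nat += lam * m[t - 1] / sig2
            if t < T - 1:                        # dynamics term into x_{t+1}
                prec += lam**2 / sig2
                nat += lam * m[t + 1] / sig2
            s2[t], m[t] = 1.0 / prec, nat / prec
        # M-step: maximise the free energy over lam. Under the factorised q,
        # <x_t x_{t-1}> collapses to m[t]*m[t-1]: no cross-time covariance survives.
        lam = np.sum(m[1:] * m[:-1]) / np.sum(m[:-1]**2 + s2[:-1])
    return lam, m, s2

# Toy data drawn from the true model, then parameter learning with vEM.
rng = np.random.default_rng(0)
T, lam_true = 500, 0.9
x = np.zeros(T)
for t in range(1, T):
    x[t] = lam_true * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)
print("vEM estimate of lam:", round(vem(y)[0], 3))
```

The M-step line makes the chapter's second theme visible: because the factorised $q$ retains no cross-time covariance, the expected statistic $\langle x_t x_{t-1}\rangle$ collapses to $m_t m_{t-1}$, and it is precisely such missing correlations that can bias the learned parameters.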
Bayesian Time Series Models, pp. 104–124. Cambridge University Press. Print publication year: 2011.