Book contents
- Frontmatter
- Contents
- Preface
- A guide to notation
- 1 Model selection: data examples and introduction
- 2 Akaike's information criterion
- 3 The Bayesian information criterion
- 4 A comparison of some selection methods
- 5 Bigger is not always better
- 6 The focussed information criterion
- 7 Frequentist and Bayesian model averaging
- 8 Lack-of-fit and goodness-of-fit tests
- 9 Model selection and averaging schemes in action
- 10 Further topics
- Overview of data examples
- References
- Author index
- Subject index
2 - Akaike's information criterion
Published online by Cambridge University Press: 05 September 2012
Summary
Data can often be modelled in different ways. There might be simple approaches and more advanced ones that perhaps have more parameters. When many covariates are measured we could attempt to use them all to model their influence on a response, or only a subset, which makes the results easier to interpret and communicate. For selecting a model among a list of candidates, Akaike's information criterion (AIC) is among the most popular and versatile strategies. Its essence is a penalised version of the attained maximum log-likelihood, for each model. In this chapter we shall see AIC at work in a range of applications, in addition to unravelling its basic construction and properties. Attention is also given to natural generalisations and modifications of AIC that in various situations aim to perform more accurately.
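The "penalised maximum log-likelihood" idea can be made concrete with a small sketch. A common sign convention writes AIC = 2k − 2 log L̂, where log L̂ is the attained maximum log-likelihood and k the number of estimated parameters, with smaller values preferred (some texts use the opposite sign, so larger is better; only the ranking matters). The Gaussian fit and the data values below are illustrative assumptions, not from the book.

```python
import math

def aic(log_likelihood_max, num_params):
    # AIC = 2k - 2*log L-hat; under this sign convention, smaller is better.
    return 2 * num_params - 2 * log_likelihood_max

# Illustrative data; fit a normal model N(mu, sigma^2) by maximum likelihood.
data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7]
n = len(data)
mu = sum(data) / n                                # MLE of the mean
var = sum((x - mu) ** 2 for x in data) / n        # MLE of the variance (divide by n)

# Maximised Gaussian log-likelihood: -n/2 * (log(2*pi*var) + 1)
loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)

print(f"log L-hat = {loglik:.3f}, AIC = {aic(loglik, num_params=2):.3f}")
```

The penalty term 2k is what distinguishes AIC from pure likelihood comparison: adding parameters always raises log L̂, so the criterion charges each extra parameter before crediting the improved fit.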
Information criteria for balancing fit with complexity
In Chapter 1 various problems were discussed where the task of selecting a suitable statistical model, from a list of candidates, was an important ingredient. By necessity there are different model selection strategies, corresponding to different aims and uses associated with the selected model. Most (but not all) selection methods are defined in terms of an appropriate information criterion, a mechanism that uses data to give each candidate model a certain score; this then leads to a fully ranked list of candidate models, from the ostensibly best to the worst.
Model Selection and Model Averaging, pp. 22–69. Publisher: Cambridge University Press. Print publication year: 2008.