Book contents
- Frontmatter
- Contents
- Acknowledgments
- 1 Introduction
- 2 Programming and statistical concepts
- 3 Choosing a test statistic
- 4 Random variables and distributions
- 5 More programming and statistical concepts
- 6 Parametric distributions
- 7 Linear model
- 8 Fitting distributions
- 9 Dependencies
- 10 How to get away with peeking at data
- 11 Contingency
- References
- Index
8 - Fitting distributions
Published online by Cambridge University Press: 05 June 2012
Summary
Estimators are random variables
You may choose to hypothesize variability using a random variable with a parametric distribution. Once you have hypothesized an appropriate family of parametric distributions, you still must hypothesize appropriate values for the parameters. If you have observed data, one approach is to use your hypothesis that the observed data are samples of a random variable with some distribution from that parametric family, and to estimate the parameters from the data. This can be done in a variety of ways. Notice that your data are now assumed to be samples of a random variable. When you do arithmetic with the data to create an estimate of the parameters, that estimate itself becomes a random variable, described by its own distribution.
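A minimal sketch of this point, with illustrative assumptions (an exponential family with known true mean, a sample size of 50, and the sample mean as the estimator): repeating the whole experiment many times shows that the estimate varies from sample to sample, so it has a distribution of its own.

```python
import random

random.seed(0)

# Illustrative assumptions: the data are exponential with mean 2.0,
# each experiment observes n = 50 samples, and we estimate the mean
# parameter by the sample mean.
n, trials = 50, 1000
true_mean = 2.0

estimates = []
for _ in range(trials):
    data = [random.expovariate(1 / true_mean) for _ in range(n)]
    estimates.append(sum(data) / n)  # the estimator, computed from data

# The estimates themselves scatter around the true parameter: the
# estimator is a random variable with its own mean and spread.
mean_of_estimates = sum(estimates) / trials
spread = (sum((e - mean_of_estimates) ** 2 for e in estimates) / trials) ** 0.5
print(round(mean_of_estimates, 2), round(spread, 2))
```

Each run of the loop is one hypothetical repetition of the data collection; the spread printed at the end is the variability of the estimator itself, not of the raw data.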
Hypothesize that your data have been generated by a binary random variable, b, repeatedly sampled independently, but do not hypothesize a value for p = Pr(b = 1). Instead, guess p from the data that you have observed. By what criteria could you make that guess? What properties might such guesses have?
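One natural guess, sketched below under illustrative assumptions (a true p of 0.3 and samples of size 200, both chosen only for the example), is the observed fraction of ones. One criterion such a guess can satisfy is unbiasedness: averaged over many repeated samples, the guess equals p.

```python
import random

random.seed(1)

# Illustrative assumptions: b has p = Pr(b = 1) = 0.3 and each
# observed data set contains n = 200 independent samples of b.
p_true, n = 0.3, 200

def estimate_p(n, p):
    """Draw n independent samples of b and return the fraction of ones."""
    return sum(1 for _ in range(n) if random.random() < p) / n

# A single data set yields a single guess, which is itself random.
one_guess = estimate_p(n, p_true)

# Averaging the guess over many repeated data sets illustrates
# unbiasedness: the average guess is close to the true p.
avg_guess = sum(estimate_p(n, p_true) for _ in range(500)) / 500
print(one_guess, round(avg_guess, 2))
```

Other criteria one might ask of such a guess, beyond unbiasedness, include how tightly the guesses cluster around p (their variance) and whether they converge to p as n grows.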
- Type: Chapter
- Publisher: Cambridge University Press
- Print publication year: 2011