
Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Role of probability theory in science
- 2 Probability theory as extended logic
- 3 The how-to of Bayesian inference
- 4 Assigning probabilities
- 5 Frequentist statistical inference
- 6 What is a statistic?
- 7 Frequentist hypothesis testing
- 8 Maximum entropy probabilities
- 9 Bayesian inference with Gaussian errors
- 10 Linear model fitting (Gaussian errors)
- 11 Nonlinear model fitting
- 12 Markov chain Monte Carlo
- 13 Bayesian revolution in spectral analysis
- 14 Bayesian inference with Poisson sampling
- Appendix A Singular value decomposition
- Appendix B Discrete Fourier Transforms
- Appendix C Difference in two samples
- Appendix D Poisson ON/OFF details
- Appendix E Multivariate Gaussian from maximum entropy
- References
- Index
7 - Frequentist hypothesis testing
Published online by Cambridge University Press: 05 September 2012
Summary
Overview
One of the main objectives in science is to infer the truth of one or more hypotheses about how some aspect of nature works. Because we are always in a state of incomplete information, we can never prove that any hypothesis (theory) is true. In Bayesian inference, we can compute the probabilities of two or more competing hypotheses directly for our given state of knowledge.
In this chapter, we will explore the frequentist approach to hypothesis testing, which is considerably less direct. It involves considering each hypothesis individually and deciding, on the basis of the computed value of a suitably chosen statistic, whether to (a) reject the hypothesis or (b) fail to reject it. This is a very large subject, and we give only a limited selection of examples in an attempt to convey the main ideas. The decision on whether to reject a hypothesis is commonly based on a quantity called a P-value. At the end of the chapter we discuss a serious problem with frequentist hypothesis testing, called the "optional stopping problem."
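To make the decision rule concrete, the following is a minimal Python sketch of the P-value logic described above; it is not code from the book, and the data, the value mu0, and the choice of a z-statistic with known sigma are illustrative assumptions.

```python
# Minimal sketch of the frequentist decision rule described above.
# Hypothetical setup: n measurements assumed Gaussian with known sigma;
# the null hypothesis H0 asserts that the true mean equals mu0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu0, sigma, n = 5.0, 2.0, 25
data = rng.normal(loc=5.8, scale=sigma, size=n)  # simulated data (assumed values)

# Test statistic: the standardized sample mean (z-statistic).
z = (data.mean() - mu0) / (sigma / np.sqrt(n))

# Two-sided P-value: probability, under H0, of a statistic at least this extreme.
p_value = 2 * stats.norm.sf(abs(z))

alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(f"z = {z:.2f}, P-value = {p_value:.4f} -> {decision}")
```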
Basic idea
In hypothesis testing we are interested in making inferences about the truth of some hypothesis. Two examples of hypotheses that we analyze below are listed next (a sketch of one such test follows the list):
The radio emission from a particular galaxy is constant.
The mean concentration of a particular toxin in river sediment is the same at two locations.
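For the second example, a standard frequentist treatment is a two-sample Student's t-test of the hypothesis of equal means. The sketch below uses SciPy rather than the book's Mathematica code, and the concentration values are made-up placeholders, not data from the book.

```python
# Illustrative sketch: frequentist two-sample t-test of the hypothesis that
# the mean toxin concentration is the same at two river-sediment locations.
from scipy import stats

site_1 = [2.1, 2.5, 1.9, 2.3, 2.8, 2.2]  # hypothetical concentrations (ppm)
site_2 = [2.9, 3.1, 2.6, 3.4, 2.8, 3.0]

# Classical Student's t-test: both samples assumed Gaussian with a common
# (pooled) variance; the null hypothesis is that the two means are equal.
result = stats.ttest_ind(site_1, site_2, equal_var=True)
print(f"t = {result.statistic:.2f}, P-value = {result.pvalue:.4f}")
# A small P-value (e.g. < 0.05) would lead a frequentist to reject the
# hypothesis of equal mean concentrations; otherwise we fail to reject it.
```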
- Type: Chapter
- Information: Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with Mathematica® Support, pp. 162-183. Publisher: Cambridge University Press. Print publication year: 2005.