Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 Theory of Tests, p-Values, and Confidence Intervals
- 3 From Scientific Theory to Statistical Hypothesis Test
- 4 One-Sample Studies with Binary Responses
- 5 One-Sample Studies with Ordinal or Numeric Responses
- 6 Paired Data
- 7 Two-Sample Studies with Binary Responses
- 8 Assumptions and Hypothesis Tests
- 9 Two-Sample Studies with Ordinal or Numeric Responses
- 10 General Methods for Frequentist Inferences
- 11 k-Sample Studies and Trend Tests
- 12 Clustering and Stratification
- 13 Multiplicity in Testing
- 14 Testing from Models
- 15 Causality
- 16 Censoring
- 17 Missing Data
- 18 Group Sequential and Related Adaptive Methods
- 19 Testing Fit, Equivalence, and Noninferiority
- 20 Power and Sample Size
- 21 Bayesian Hypothesis Testing
- References
- Notation Index
- Concept Index
14 - Testing from Models
Published online by Cambridge University Press: 17 April 2022
Summary
This chapter addresses hypothesis testing when using models. We review linear models, generalized linear models, and proportional odds models, including issues such as checking model assumptions and separation (e.g., when one covariate completely predicts a binary response). We discuss the Neyman–Scott problem, in which estimates of a fixed parameter can be biased when the number of nuisance parameters grows with the sample size. With clustered data, we compare mixed effects models and marginal models, pointing out that for logistic regression and other models the fixed effect estimands differ between the two types of models. We present simulations showing that testing many effects within a single model is a multiple testing situation, and adjustments should often be made in that case. We discuss model selection using methods such as Akaike’s information criterion, the lasso, and cross-validation, and we compare different model selection processes and their effect on the Type I error rate for a parameter from the final chosen model.
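As a rough illustration of the multiplicity point above (a minimal sketch, not taken from the chapter), the simulation below fits a linear model with many covariates under a global null and records how often at least one coefficient test rejects at the 0.05 level without any adjustment. All settings (sample size, number of covariates, number of simulations) are illustrative choices, and the code assumes numpy and statsmodels are installed.

```python
# Sketch: unadjusted per-coefficient tests in one model inflate the
# familywise Type I error rate under a global null.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p, nsim, alpha = 100, 10, 2000, 0.05  # illustrative settings

any_reject = 0
for _ in range(nsim):
    X = rng.standard_normal((n, p))   # covariates
    y = rng.standard_normal(n)        # response unrelated to X (global null)
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    pvals = fit.pvalues[1:]           # skip the intercept test
    any_reject += (pvals < alpha).any()

print(f"Familywise Type I error, unadjusted: {any_reject / nsim:.3f}")
# With 10 roughly independent tests this is typically near
# 1 - (1 - 0.05)**10 ≈ 0.40, far above the nominal 0.05, which is why
# multiplicity adjustments (e.g., Bonferroni) are often advisable.
```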
- Type: Chapter
- Information: Statistical Hypothesis Testing in Context: Reproducibility, Inference, and Science, pp. 253–276
- Publisher: Cambridge University Press
- Print publication year: 2022