Book contents
- Frontmatter
- Contents
- Preface
- 1 Motivation and Basic Tools
- 2 Estimation Theory
- 3 Hypothesis Testing
- 4 Elements of Statistical Decision Theory
- 5 Stochastic Processes: An Overview
- 6 Stochastic Convergence and Probability Inequalities
- 7 Asymptotic Distributions
- 8 Asymptotic Behavior of Estimators and Tests
- 9 Categorical Data Models
- 10 Regression Models
- 11 Weak Convergence and Gaussian Processes
- Bibliography
- Index
6 - Stochastic Convergence and Probability Inequalities
Published online by Cambridge University Press: 05 June 2012
Summary
Introduction
Unbiasedness, efficiency, sufficiency, and ancillarity, as outlined in Chapters 2 and 3, are essentially finite-sample concepts, whereas consistency refers to indefinitely increasing sample sizes and thus has an asymptotic nature. In general, finite-sample optimality properties of estimators and tests hold only for a small class of probability laws, mostly related to the exponential family of distributions; consistency, however, holds under much less restrictive setups, as we will see. Moreover, even when finite-sample optimal statistical procedures exist, they may not lead to closed-form expressions and may carry a heavy computational burden. These problems are less bothersome when we adopt an asymptotic point of view and use the corresponding results to obtain good approximations to such procedures for large (although finite) samples. This is accomplished with the incorporation of probability inequalities, limit theorems, and other tools that will be developed in this and subsequent chapters.
In this context, a minimal requirement for a good statistical decision rule is its increasing reliability with increasing sample size (consistency). For an estimator, consistency relates to an increasing closeness to its population counterpart as the sample size becomes larger. In view of its stochastic nature, this closeness must account for the estimator's fluctuation around the parameter it estimates and thus requires an appropriate adaptation of the definitions usually considered in nonstochastic setups. Generally, a distance function or norm of this stochastic fluctuation is incorporated in the formulation of this closeness, and consistency refers to the convergence of this norm to 0 in some well-defined manner.
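The notion above can be made concrete with a small simulation, not drawn from the book: for i.i.d. draws with mean μ, weak consistency of the sample mean means that P(|X̄ₙ − μ| > ε) → 0 as n grows. The sketch below (the exponential distribution, the tolerance ε, and the replication count are illustrative choices) estimates this tail probability by Monte Carlo for increasing n.

```python
import numpy as np

# Illustrative sketch: weak consistency of the sample mean.
# For i.i.d. draws with mean mu, P(|Xbar_n - mu| > eps) should
# shrink toward 0 as n increases (law of large numbers).
rng = np.random.default_rng(0)
mu, eps = 2.0, 0.1          # population mean and tolerance (assumed values)
for n in [10, 100, 1000, 10000]:
    # 2000 replications of the sample mean based on n observations
    means = rng.exponential(scale=mu, size=(2000, n)).mean(axis=1)
    # Monte Carlo estimate of P(|Xbar_n - mu| > eps)
    print(n, np.mean(np.abs(means - mu) > eps))
```

As n grows, the estimated probability of a fluctuation larger than ε drops toward 0, which is precisely the convergence of the norm ‖X̄ₙ − μ‖ described above, here in the sense of convergence in probability.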
From *From Finite Sample to Asymptotic Methods in Statistics*, pp. 119-172. Publisher: Cambridge University Press. Print publication year: 2009.