Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Fourier series: convergence and summability
- 2 Harmonic functions; Poisson kernel
- 3 Conjugate harmonic functions; Hilbert transform
- 4 The Fourier transform on ℝd and on LCA groups
- 5 Introduction to probability theory
- 6 Fourier series and randomness
- 7 Calderón–Zygmund theory of singular integrals
- 8 Littlewood–Paley theory
- 9 Almost orthogonality
- 10 The uncertainty principle
- 11 Fourier restriction and applications
- 12 Introduction to the Weyl calculus
- References
- Index
5 - Introduction to probability theory
Published online by Cambridge University Press: 05 February 2013
Summary
In this chapter we provide a brief introduction to some basic concepts and techniques of probability theory. This serves two main purposes: first, to develop the probabilistic methods used in harmonic analysis and, second, to establish some intuition for the kind of probabilistic reasoning that has proved useful in analysis. In fact, as we shall see later, some ideas in harmonic analysis become transparent only when viewed from a probabilistic angle.
Probability spaces; independence
To begin, a probability space (Ω, Σ, ℙ) is a measure space with ℙ a positive measure such that ℙ(Ω) = 1. Elements A, B, … of the σ-algebra Σ are called events, and ℙ(A) is the probability of the event A. Real- or complex-valued functions that are measurable relative to such a space are called random variables. Of central importance is the concept of independence: we say that events A, B are independent if and only if ℙ(A ∩ B) = ℙ(A)ℙ(B). This is exactly what naive probabilistic reasoning dictates. More generally, finitely many σ-subalgebras Σj of Σ are called independent if, for any Aj ∈ Σj, one has

ℙ(⋂j Aj) = ∏j ℙ(Aj).
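As a quick numerical check (an illustration added here, not from the text), the following Python sketch simulates two events defined on independent biased coin tosses and compares ℙ(A ∩ B) with ℙ(A)ℙ(B); the bias p = 0.3 and the sample size are arbitrary choices.

```python
import random

random.seed(0)  # fixed seed for reproducibility
N = 200_000     # sample size (an arbitrary illustrative choice)
p = 0.3         # probability of heads for the biased coin (also illustrative)

count_A = count_B = count_AB = 0
for _ in range(N):
    # Two independent tosses: A = "first toss is heads", B = "second is heads".
    a = random.random() < p
    b = random.random() < p
    count_A += a
    count_B += b
    count_AB += a and b

pA, pB, pAB = count_A / N, count_B / N, count_AB / N
print(f"P(A) = {pA:.4f}, P(B) = {pB:.4f}")
print(f"P(A and B) = {pAB:.4f}  vs  P(A)P(B) = {pA * pB:.4f}")
```

By the law of large numbers, the two printed quantities agree to within sampling error, as the product rule predicts.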
Finitely many random variables {Xj}j are called independent if and only if the σ-algebras Xj⁻¹(B) are independent, where B is the Borel σ-algebra over the scalars. For more than two variables, pairwise independence is strictly weaker than the joint independence defined above (see the sketch below). A typical example of independent random variables is given by a coin-tossing sequence, which (at least intuitively) is a sequence obtained by repeatedly tossing a coin that comes up heads with probability p ∈ (0, 1) and tails with probability q = 1 − p.
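The following Python sketch (again an added illustration, not from the book) records the standard counterexample separating the two notions: for two independent fair coin flips X, Y, set Z = X ⊕ Y (XOR). Any two of X, Y, Z are then independent, yet the triple is not, since Z is determined by X and Y.

```python
from itertools import product

# Sample space: the four equally likely outcomes (x, y) of two fair coin
# flips, augmented with z = x XOR y.  Each triple has probability 1/4.
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]

def prob(event):
    """Probability of an event under the uniform measure on the outcomes."""
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

# Pairwise independence: P(V = a, W = b) = P(V = a) P(W = b) for each pair.
for i, j in ((0, 1), (0, 2), (1, 2)):
    ok = all(
        prob(lambda w: w[i] == a and w[j] == b)
        == prob(lambda w: w[i] == a) * prob(lambda w: w[j] == b)
        for a in (0, 1)
        for b in (0, 1)
    )
    print(f"coordinates {i} and {j} independent: {ok}")

# Joint independence fails: P(X = 0, Y = 0, Z = 1) = 0 because Z = X XOR Y,
# while the product of the three marginals is (1/2)^3 = 1/8.
print(prob(lambda w: w == (0, 0, 1)), "!=", prob(lambda w: w[0] == 0) ** 3)
```

The exact enumeration avoids sampling noise: all three pairwise checks print True, while the last line prints 0.0 against 0.125.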
- Type: Chapter
- Information: Classical and Multilinear Harmonic Analysis, pp. 106–135
- Publisher: Cambridge University Press
- Print publication year: 2013