Book contents
- Frontmatter
- Contents
- Foreword
- Introduction
- Acknowledgments
- 1 Probability basics
- 2 Probability distributions
- 3 Measuring information
- 4 Entropy
- 5 Mutual information and more entropies
- 6 Differential entropy
- 7 Algorithmic entropy and Kolmogorov complexity
- 8 Information coding
- 9 Optimal coding and compression
- 10 Integer, arithmetic, and adaptive coding
- 11 Error correction
- 12 Channel entropy
- 13 Channel capacity and coding theorem
- 14 Gaussian channel and Shannon–Hartley theorem
- 15 Reversible computation
- 16 Quantum bits and quantum gates
- 17 Quantum measurements
- 18 Qubit measurements, superdense coding, and quantum teleportation
- 19 Deutsch–Jozsa, quantum Fourier transform, and Grover quantum database search algorithms
- 20 Shor's factorization algorithm
- 21 Quantum information theory
- 22 Quantum data compression
- 23 Quantum channel noise and channel capacity
- 24 Quantum error correction
- 25 Classical and quantum cryptography
- Appendix A (Chapter 4) Boltzmann's entropy
- Appendix B (Chapter 4) Shannon's entropy
- Appendix C (Chapter 4) Maximum entropy of discrete sources
- Appendix D (Chapter 5) Markov chains and the second law of thermodynamics
- Appendix E (Chapter 6) From discrete to continuous entropy
- Appendix F (Chapter 8) Kraft–McMillan inequality
- Appendix G (Chapter 9) Overview of data compression standards
- Appendix H (Chapter 10) Arithmetic coding algorithm
- Appendix I (Chapter 10) Lempel–Ziv distinct parsing
- Appendix J (Chapter 11) Error-correction capability of linear block codes
- Appendix K (Chapter 13) Capacity of binary communication channels
- Appendix L (Chapter 13) Converse proof of the channel coding theorem
- Appendix M (Chapter 16) Bloch sphere representation of the qubit
- Appendix N (Chapter 16) Pauli matrices, rotations, and unitary operators
- Appendix O (Chapter 17) Heisenberg uncertainty principle
- Appendix P (Chapter 18) Two-qubit teleportation
- Appendix Q (Chapter 19) Quantum Fourier transform circuit
- Appendix R (Chapter 20) Properties of continued fraction expansion
- Appendix S (Chapter 20) Computation of inverse Fourier transform in the factorization of N = 21 through Shor's algorithm
- Appendix T (Chapter 20) Modular arithmetic and Euler's theorem
- Appendix U (Chapter 21) Klein's inequality
- Appendix V (Chapter 21) Schmidt decomposition of joint pure states
- Appendix W (Chapter 21) State purification
- Appendix X (Chapter 21) Holevo bound
- Appendix Y (Chapter 25) Polynomial byte representation and modular multiplication
- Index
- References
13 - Channel capacity and coding theorem
Published online by Cambridge University Press: 05 June 2012
Summary
This relatively short but mathematically intense chapter brings us to the core of Shannon's information theory, with the definition of channel capacity and the famous channel coding theorem (CCT), Shannon's second most important theorem, next to the source coding theorem described in Chapter 8. The formal proof of the channel coding theorem is somewhat tedious and does not lend itself to much simplification. I have sought, however, to guide the reader through as many steps as are necessary to reach the proof without hurdles. After defining channel capacity, we will consider the notion of typical sequences and typical sets (of such sequences) in codebooks, which will make it possible to tackle the CCT itself. We will first proceed through a formal proof, inspired by Shannon's original paper (but consistent with our notation, and with more explanation where warranted), and then through different, more intuitive or less formal approaches.
Channel capacity
In Chapter 12, I showed that, in a noisy channel, the mutual information, H(X;Y) = H(Y) − H(Y|X), measures the true information content of the output or recipient source Y, given the equivocation H(Y|X), which measures the informationless channel noise. We have also shown that the mutual information depends on the input probability distribution, p(x).
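The chapter goes on from this observation to define the channel capacity as the maximum of the mutual information over all possible input distributions, C = max over p(x) of H(X;Y). By way of illustration (my own sketch, not from the book's text), the following minimal Python snippet evaluates H(X;Y) = H(Y) − H(Y|X) for a binary symmetric channel with crossover probability eps and locates the maximizing input distribution numerically; the function names and the choice of channel are assumptions for illustration only:

```python
import numpy as np

def binary_entropy(p):
    """Binary entropy function H2(p) in bits, with 0*log2(0) taken as 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information_bsc(p1, eps):
    """H(X;Y) = H(Y) - H(Y|X) for a binary symmetric channel (BSC) with
    crossover probability eps and input distribution p(x=1) = p1."""
    p_y1 = p1 * (1 - eps) + (1 - p1) * eps        # output distribution p(y=1)
    # For a BSC, H(Y|X) = H2(eps) regardless of the input distribution.
    return binary_entropy(p_y1) - binary_entropy(eps)

eps = 0.1                                         # channel crossover probability
grid = np.linspace(0.0, 1.0, 1001)                # sweep input distributions p(x=1)
info = np.array([mutual_information_bsc(p, eps) for p in grid])

print(f"C ≈ {info.max():.4f} bit/symbol")         # ≈ 1 - H2(0.1) ≈ 0.5310
print(f"maximizing p(x=1) ≈ {grid[info.argmax()]:.2f}")   # uniform input, 0.50
```

As expected for this symmetric channel, the sweep recovers the closed-form capacity C = 1 − H2(eps), achieved for a uniform input distribution; for asymmetric channels the maximizing p(x) is generally not uniform, which is why the maximization over p(x) is part of the definition.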
- Type: Chapter
- Information: Classical and Quantum Information Theory: An Introduction for the Telecom Scientist, pp. 245-263
- Publisher: Cambridge University Press
- Print publication year: 2009