Book contents
- Frontmatter
- Contents
- Preface to the second edition
- Preface to the first edition
- 1 Introduction
- 2 Combinatorics
- 3 Sets and measures
- 4 Probability
- 5 Discrete random variables
- 6 Information and entropy
- 7 Communication
- 8 Random variables with probability density functions
- 9 Random vectors
- 10 Markov chains and their entropy
- Exploring further
- Appendix 1 Proof by mathematical induction
- Appendix 2 Lagrange multipliers
- Appendix 3 Integration of exp(−½x²)
- Appendix 4 Table of probabilities associated with the standard normal distribution
- Appendix 5 A rapid review of matrix algebra
- Selected solutions
- Index
6 - Information and entropy
Published online by Cambridge University Press: 06 July 2010
What is information?
In this section we are going to try to quantify the notion of information. Before we do this, we should be aware that ‘information’ has a special meaning in probability theory, which is not the same as its use in ordinary language. For example, consider the following two statements:
(i) I will eat some food tomorrow.
(ii) The prime minister and leader of the opposition will dance naked in the street tomorrow.
If I ask which of these two statements conveys more information, you will (I hope!) say that it is (ii). Your argument might be that (i) is practically a statement of the obvious (unless I am prone to fasting), whereas (ii) is extremely unlikely. To summarise:
(i) has very high probability and so conveys little information;
(ii) has very low probability and so conveys much information.
Clearly, then, the quantity of information is closely related to the element of surprise.
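This inverse relationship between probability and information is usually captured by measuring the information of an event of probability p as −log₂ p, the definition developed later in this chapter. A minimal sketch, using made-up probabilities for statements (i) and (ii):

```python
import math

def information_bits(p):
    """Information (surprise) in bits of an event with probability p."""
    return -math.log2(p)

# Hypothetical probabilities, chosen only for illustration
p_eat = 0.999      # (i): eating food tomorrow is near-certain
p_dance = 1e-9     # (ii): the naked dance is extremely unlikely

print(information_bits(p_eat))    # ≈ 0.0014 bits: almost no information
print(information_bits(p_dance))  # ≈ 29.9 bits: a great deal of information
```

A near-certain event thus carries almost no information, while a very rare one carries a large amount — matching the intuition above.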
Consider now the following ‘statement’:
(iii) XQWQYK VZXPU VVBGXWQ.
Our immediate reaction to (iii) is that it is meaningless and hence conveys no information. From the point of view of English language structure, however, (iii) has low probability (for example, Q is a rarely occurring letter and is almost always followed by U, and (iii) contains no vowels) and so carries a high element of surprise.
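The letter-frequency argument can be made concrete: if we (crudely) treat letters as independent draws from English letter frequencies, a string of rare letters such as XQWQYK has far lower probability, and hence far higher surprise, than a common word of the same length. The frequency values below are rough illustrative approximations, not the book's data:

```python
import math

# Approximate relative frequencies of some English letters
# (illustrative values only)
freq = {
    'E': 0.127, 'A': 0.082, 'O': 0.075, 'N': 0.067, 'S': 0.063,
    'U': 0.028, 'W': 0.024, 'Y': 0.020, 'K': 0.008,
    'X': 0.0015, 'Q': 0.001,
}

def surprise_bits(word):
    """Crude surprise of a word, treating letters as independent draws."""
    p = math.prod(freq[c] for c in word)
    return -math.log2(p)

# A common six-letter word versus the first 'word' of statement (iii)
print(surprise_bits('SEASON'))   # roughly 22 bits
print(surprise_bits('XQWQYK'))   # roughly 47 bits: far more surprising
```

The independence assumption ignores exactly the structure mentioned above (Q being followed by U, vowel patterns), so the real surprise of (iii) is higher still.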
The above discussion should indicate that the word ‘information’, as it occurs in everyday life, combines two aspects: ‘surprise’ and ‘meaning’.
- Probability and Information: An Integrated Approach, pp. 105–126. Publisher: Cambridge University Press. Print publication year: 2008.