This section summarises many of the terms used in this book. A brief description of the meaning of each term is given, and where appropriate a cross reference to the corresponding section in the text is suggested for further reading.
∧: Logical and.
∨: Logical or.
χ2: See chi-square.
μ, μ(i) or μi: See mean.
ρ, ρ(x, y) or ρxy: See correlation.
σ, σ(i) or σi: See standard deviation.
σxy, σ(x, y) or σij: See covariance.
Alternate hypothesis (H1): The complement of the null hypothesis. If a statistical test rejects the null hypothesis, then the alternate hypothesis (as its complement) is the result supported by the data.
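The role of the alternate hypothesis can be illustrated with a simple (hypothetical, not from the text) significance test: tossing a coin to decide between H0 (the coin is fair) and H1 (the coin is biased).

```python
import math

# Hypothetical example: test a coin for fairness.
# H0: the coin is fair (p = 0.5); H1 (alternate): the coin is biased.
n, k = 100, 62  # 100 tosses, 62 heads observed

def binom_pmf(n, k, p=0.5):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided p-value: probability under H0 of an outcome at least as
# far from the expected 50 heads as the one observed.
p_value = sum(binom_pmf(n, i) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))

alpha = 0.05  # chosen significance level
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0; the data support H1")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```

Rejecting H0 does not prove H1 outright; it means the data are improbable under H0 at the chosen significance level, so the complement H1 is the hypothesis the data support.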
Arithmetic mean: See mean.
Artificial neural network: Also called a neural network. This is a collection of perceptrons fed by some number of input variables, with one or more output nodes. The function of the neural network is to distinguish optimally between two or more types of data. See Section 10.4.
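A minimal sketch of the building block of such a network, a single perceptron with a sigmoid activation (the weights and inputs below are illustrative, not from the text):

```python
import math

def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs passed through a sigmoid activation,
    giving an output between 0 and 1."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights; in practice these are determined by training
# the network so its output separates the classes of data.
output = perceptron([0.5, -1.2], weights=[2.0, 1.0], bias=0.1)
print(f"perceptron output: {output:.3f}")
```

A full network chains layers of such nodes: the outputs of one layer become the inputs of the next, and the final output node(s) give the classification.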
Bagging: Short for bootstrap aggregating; a technique of resampling the training data, with replacement, used when training ensembles of decision trees. See Section 10.5.2.
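The resampling step of bagging can be sketched as follows (an illustrative example with made-up data, not the book's implementation): each tree in the ensemble is trained on a bootstrap sample drawn with replacement from the training set.

```python
import random

random.seed(1)
training_set = list(range(10))  # stand-in for 10 training events

# One bootstrap sample per tree: same size as the training set,
# drawn *with replacement*.
n_trees = 3
bootstrap_samples = [
    random.choices(training_set, k=len(training_set))
    for _ in range(n_trees)
]
for i, sample in enumerate(bootstrap_samples):
    print(f"tree {i}: trained on {sorted(sample)}")
```

Because sampling is with replacement, some events appear more than once in a given sample and others not at all, so each tree sees a slightly different data set; averaging the trees' outputs then reduces the variance of the combined classifier.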
Bayes' theorem: See Eq. (3.4).
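A small numerical application of Bayes' theorem, P(A|B) = P(B|A)P(A)/P(B), with made-up numbers (a diagnostic test for a rare condition; this example is illustrative and not from the text):

```python
# Prior, likelihood, and false-positive rate (hypothetical values).
p_disease = 0.01            # prior P(A)
p_pos_given_disease = 0.95  # likelihood P(B|A)
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability P(B) of a positive test result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior P(A|B): probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")
```

Even with an accurate test, the posterior probability remains modest because the prior probability of the condition is small; Bayes' theorem makes this dependence on the prior explicit.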
Bayesian classifier: A classification algorithm based on Bayes' theorem, see Section 10.2.
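A minimal sketch of the idea (with assumed class models and priors, not the book's algorithm): compute the posterior probability of each class from Bayes' theorem and assign the event to the class with the larger posterior.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian probability density, used here as the class likelihood."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

# Hypothetical class models (mean, width) and prior probabilities.
classes = {"signal": (1.0, 0.5, 0.3), "background": (0.0, 0.5, 0.7)}

def classify(x):
    # Posterior is proportional to likelihood x prior (Bayes' theorem);
    # the common normalisation P(x) cancels in the comparison.
    posteriors = {name: gaussian(x, mu, sigma) * prior
                  for name, (mu, sigma, prior) in classes.items()}
    return max(posteriors, key=posteriors.get)

print(classify(1.2))   # near the assumed signal mean
print(classify(-0.2))  # near the assumed background mean
```

With one feature and Gaussian likelihoods this is the simplest case; with several features treated as independent, the same comparison of posteriors gives the naive Bayesian classifier.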
Bayesian statistics: The branch of statistics derived from the work of Rev. Bayes. This is a subjective approach to the computation of probability that can be used in a variety of situations, including those that are not repeatable. See Section 3.2.