In this chapter the reader is requested to sit back and think. Think about what you are doing and why, and what your conclusions really mean. You have a theory, containing a number of unknown – or insufficiently known – parameters, and you have a set of experimental data. You wish to use the data to validate your theory and to determine or refine the parameters in your theory. Your data contain inaccuracies and whatever you infer from your data contains inaccuracies as well. While the probability distribution of the data, given the theory, is often known or derivable from counting events, the inverse, i.e., the inferred probability distribution of the estimated parameters given the experimental outcome, is of a different, more subjective kind. Scientists who reject any subjective measures must restrict themselves to hypothesis testing. If you want more, turn to Bayes.
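To make the distinction concrete (a short added note, not part of the original text), Bayes' rule relates the two kinds of probability: if f(y | θ) is the direct probability of the data y given the parameter θ, then the inferred, inverse distribution of the parameter given the data is

p(θ | y) ∝ f(y | θ) p(θ),

where the prior p(θ) is the subjective ingredient that a purely frequentist treatment declines to specify.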
Direct and inverse probabilities
Consider the reading of a sensitive digital voltmeter sensing a constant small voltage – say in the microvolt range – during a given time, say 1 millisecond. Repeat the experiment many times. Since the voltmeter itself adds random noise due to thermal fluctuations in its input circuit, your observations yi will be samples from a probability distribution f(yi − θ), where θ is the real voltage of the source. You can determine f by collecting many samples.
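As an illustration (a minimal sketch added here, not from the original text; the true voltage, the noise level and the Gaussian noise model are assumptions made for the example), such repeated readings can be simulated and f estimated from the collected samples:

import numpy as np

rng = np.random.default_rng(1)

theta = 2.0     # assumed "true" source voltage in microvolts
sigma = 0.5     # assumed r.m.s. thermal noise of the input circuit
n = 10000       # number of repeated 1 ms readings

# each reading y_i is the true voltage plus a random noise sample
y = theta + sigma * rng.normal(size=n)

# the sample mean estimates theta; a normalized histogram of y - mean estimates f
mean = y.mean()
hist, edges = np.histogram(y - mean, bins=50, density=True)

print(f"sample mean = {mean:.3f} (true {theta}), sample std = {y.std(ddof=1):.3f} (true {sigma})")

With enough repetitions the histogram approaches f and the sample mean approaches θ.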
Every measurement is in fact a random sample from a probability distribution. In order to make a judgment on the accuracy of an experimental result we must know something about the underlying probability distribution. This chapter treats the properties of probability distributions and gives details about the most common distributions. The most important distribution of all is the normal distribution, not least because the central limit theorem tells us that it is the limiting distribution for the sum of many random disturbances.
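A quick numerical check of that statement (an added sketch; the choice of uniform disturbances and their number is arbitrary) is to add up many independent, distinctly non-normal contributions and verify that the sum behaves like a normal variable:

import numpy as np

rng = np.random.default_rng(0)

# sum m = 50 independent uniform disturbances on (-0.5, 0.5), repeated n times
m, n = 50, 100000
sums = rng.uniform(-0.5, 0.5, size=(n, m)).sum(axis=1)

# the central limit theorem predicts an approximately normal result
# with mean 0 and variance m/12 (each uniform term has variance 1/12)
print(f"mean     = {sums.mean():+.4f}  (expected 0)")
print(f"variance = {sums.var():.4f}  (expected {m / 12:.4f})")

# skewness and excess kurtosis of a normal distribution are both 0
z = (sums - sums.mean()) / sums.std()
print(f"skewness = {np.mean(z**3):+.4f}, excess kurtosis = {np.mean(z**4) - 3:+.4f}")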
Introduction
Every measurement xi of a quantity x can be considered to be a random sample from a probability distribution p(x) of x. In order to be able to analyze random deviations in measured quantities we must know something about the underlying probability distribution, from which the measurement is supposed to be a random sample.
If x can only assume discrete values x = k, k = 1, …, n, then p(k) forms a discrete probability distribution and p(k) (often called the probability mass function, pmf) indicates the probability that an arbitrary sample has the value k. If x is a continuous variable, then p(x) is a continuous function of x: the probability density function, pdf. The meaning of p(x) is: the probability that a sample xi falls in the interval (x, x + dx) equals p(x) dx.
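Written out explicitly (an added summary in the notation used above), normalization requires

Σ_k p(k) = 1 in the discrete case, and ∫ p(x) dx = 1 over the whole range of x in the continuous case,

and the probability that a sample falls in a finite interval is P(a ≤ x ≤ b) = ∫_a^b p(x) dx.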
Often you perform a series of experiments in which you vary an independent variable, such as temperature. What you are really interested in is the relation between the measured values and the independent variables, but the trouble is that your experimental values contain statistical deviations. You may already have a theory about the form of this relation and use the experiment to derive the still unknown parameters. It can also happen that the experiment is used to validate the theory or to decide on a modification. In this chapter a global view is taken and functional relations are qualitatively evaluated using simple graphical presentations of the experimental data. The trick of transforming functional relations to a linear form allows quick graphical interpretations. Even the inaccuracies of the parameters can be graphically estimated. If you want accurate results, then skip to the next chapter.
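As an example of such a transformation (a sketch added for illustration; the exponential model, its parameter values and the noise level are invented), a relation y = a exp(−kx) becomes the straight line ln y = ln a − kx, so a plot of ln y against x yields −k as its slope:

import numpy as np

rng = np.random.default_rng(2)

# invented "true" relation y = a * exp(-k * x) with a little relative noise
a_true, k_true = 5.0, 0.8
x = np.linspace(0.0, 5.0, 20)
y = a_true * np.exp(-k_true * x) * (1 + 0.03 * rng.normal(size=x.size))

# transform to linear form: ln y = ln a - k * x, then fit a straight line
slope, intercept = np.polyfit(x, np.log(y), 1)
print(f"estimated k = {-slope:.3f}  (true {k_true})")
print(f"estimated a = {np.exp(intercept):.3f}  (true {a_true})")

Drawing the steepest and shallowest straight lines that are still compatible with the points gives the quick graphical estimate of the parameter inaccuracies mentioned above.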
Introduction
In the previous chapter you have learned how to handle a series of equivalent measurements that should have produced equal results if there had been no random deviations in the measured data. Very commonly, however, a quantity yi is measured as a function f(xi) of an independent variable xi such as time, temperature, distance, concentration or bin number. The measured quantity may also be a function of several such variables. Usually the independent variables – which are under the control of the experimenter – are known with high accuracy and the dependent variables – the measured values – are subject to random errors.
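A common way of handling this situation (sketched here under assumptions: a straight-line model, invented data and standard errors, and the availability of scipy) is a least-squares fit in which only the dependent variable y carries an uncertainty:

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Assumed straight-line model y = a + b * x."""
    return a + b * x

# invented example data: accurately known x, noisy y with estimated standard errors
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # independent variable, under experimental control
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])         # measured values
sigma_y = np.array([0.2, 0.2, 0.3, 0.3, 0.4])    # standard errors of the measured values

# weighted least squares: only y is treated as uncertain
popt, pcov = curve_fit(model, x, y, sigma=sigma_y, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"a = {popt[0]:.2f} +/- {perr[0]:.2f}")
print(f"b = {popt[1]:.3f} +/- {perr[1]:.3f}")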
Measure theory provides the theoretical framework essential for the development of modern probability theory. Much of elementary probability theory can be carried through with only passing reference to underlying sample spaces, but the modern theory relies heavily on measure theory, following Kolmogorov's axiomatic framework (1933) for probability spaces. The applications of stochastic processes, in particular, are now fundamental in physics, electronics, engineering, biology and finance, and within mathematics itself. For example, Itô's stochastic calculus for Brownian Motion (BM) and its extensions rely wholly on a thorough understanding of basic measure and integration theory. But even in much more elementary settings, effective choices of sample spaces and σ-fields bring advantages – good examples are the study of random walks and branching processes. (See [S], [W] for nice examples.)
Continuity of additive set functions
What do we mean by saying that we pick the number x ∈ [0, 1] at random? ‘Random’ plausibly means that in each trial with uncertain outcomes, each outcome is ‘equally likely’ to be picked. Thus we seek to impose the uniform probability distribution on the set (or sample space) Ω of possible outcomes of an experiment. If Ω has n elements, this is trivial: for each outcome ω, the probability that ω occurs is 1/n. But when Ω = [0, 1] the ‘number’ of possible choices of x ∈ [0, 1] is infinite, even uncountable.
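One way to make the uniform choice on [0, 1] precise (an added remark anticipating the measure-theoretic construction) is to assign to every subinterval its length,

P([a, b]) = b − a for 0 ≤ a ≤ b ≤ 1;

every single point then has probability zero, and probabilities can no longer be obtained by counting outcomes.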
Ninety percent of all physics is concerned with vibrations and waves of one sort or another. The same basic thread runs through most branches of physical science, from acoustics through engineering, fluid mechanics, optics, electromagnetic theory and X-rays to quantum mechanics and information theory. It is closely bound to the idea of a signal and its spectrum. To take a simple example: imagine an experiment in which a musician plays a steady note on a trumpet or a violin, and a microphone produces a voltage proportional to the instantaneous air pressure. An oscilloscope will display a graph of pressure against time, F(t), which is periodic. The reciprocal of the period is the frequency of the note, 440 Hz, say, for a well-tempered middle A – the tuning-up frequency for an orchestra.
The waveform is not a pure sinusoid, and it would be boring and colourless if it were. It contains ‘harmonics’ or ‘overtones’: multiples of the fundamental frequency, with various amplitudes and in various phases, depending on the timbre of the note, the type of instrument being played and on the player. The waveform can be analysed to find the amplitudes of the overtones, and a list can be made of the amplitudes and phases of the sinusoids which it comprises. Alternatively a graph, A(ν), can be plotted (the sound-spectrum) of the amplitudes against frequency (Fig. 1.1).
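The same analysis can be carried out numerically (a sketch only; the 440 Hz fundamental comes from the text, while the harmonic amplitudes, phases and sampling rate are invented for the example) by taking the discrete Fourier transform of a synthetic periodic waveform:

import numpy as np

fs = 44100                      # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of signal

f0 = 440.0                      # fundamental frequency, as in the text
# invented amplitudes and phases of the fundamental and two overtones
F = (1.0 * np.sin(2 * np.pi * f0 * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t + 0.3)
     + 0.2 * np.sin(2 * np.pi * 3 * f0 * t + 1.1))

# discrete Fourier transform; A gives the amplitude at each frequency nu
spectrum = np.fft.rfft(F)
nu = np.fft.rfftfreq(F.size, d=1 / fs)
A = 2 * np.abs(spectrum) / F.size

# report the three strongest spectral lines
peaks = np.argsort(A)[-3:][::-1]
for k in peaks:
    print(f"{nu[k]:7.1f} Hz  amplitude {A[k]:.2f}")

For this one-second signal the frequency resolution is 1 Hz, so the three sinusoids appear as single spectral lines at 440, 880 and 1320 Hz with the amplitudes put in above.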