Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and of human intuition applies to large objects.
Richard P. Feynman
Two great intellectual triumphs occurred in the first quarter of the twentieth century: the invention of the theory of relativity, and the rather more labored arrival of quantum mechanics. The former largely embodies the ideas, and almost exclusively the vision, of Albert Einstein; the latter benefited from many creative minds, and from a constant interplay between theory and experiment. Probability plays no direct role in relativity, which is thus outside the purview and purpose of this book. However, Einstein will make an appearance in the following pages, in what some would claim was his most revolutionary role – as diviner of the logical and physical consequences of every idea that was put forward in the formative stages of quantum mechanics. It is one of the ironies of the history of science that Einstein, who contributed so importantly to the task of creating this subject, never accepted that the job was finished when a majority of physicists did. As a consequence, he participated only indirectly in the great adventure of the second quarter of the century: the use of quantum mechanics to explain the structure of atoms, nuclei, and matter.
This book has grown out of a course I have taught five times during the last 15 years at Cornell University. The College of Arts & Sciences at Cornell has a ‘distribution requirement in science,’ which can be fulfilled in a variety of ways. The Physics Department has for many years offered a series of ‘general education’ courses; any two of them satisfy the science requirement. The descriptions of these courses in the Cornell catalog begin with the words: ‘Intended for non-scientists; does not serve as a prerequisite for further science courses. Assumes no scientific background but will use high school algebra.’ This tradition was begun in the 1950s by two distinguished physicists, Robert R. Wilson and Philip Morrison, with a two-semester sequence ‘Aspects of the Physical World,’ which became known locally as ‘Physics for Poets.’ At the present time some three or four one-semester courses for non-scientists, ‘Reasoning about Luck’ sometimes among them, are offered each year.
What I try to do in this book and why is said in Chapter 1, but some words may be useful here. I started the enterprise lightheartedly hoping to do my bit to combat the widely perceived problems of scientific illiteracy and – to use a fashionable word – innumeracy, by teaching how to reason ‘quantitatively’ about the uses of probability in descriptions of the natural world. I quickly discovered that the quoted word makes for great difficulties.
Isolated systems left to themselves, we have argued, evolve in such a way as to increase their entropy. The successes achieved by the use of this principle in the last two chapters cannot, however, hide the fact that it has up to now been pure assertion, more than reasonable to be sure but only vaguely connected to the motions of the microscopic constituents of matter. Let us now explore this connection, and the reasons why time and disorder flow in the same direction.
This is a curiously subtle question. From the point of view of common sense, it is not a puzzle at all. Shown a film of, for example, a lit match progressively returning to its unburnt condition, we instantly recognize a trick achieved by running a projector backwards. While it is by no means true that every macroscopically quiescent material object is in equilibrium, it is a general feature of common experience that left to themselves inanimate isolated systems become increasingly disordered. Even when the process is very slow, it is inexorable: cars eventually rust away. More often, it happens in front of your eyes: the ice cube in your drink melts, the sugar you add to your tea dissolves and never reassembles in your spoon.
HANNAH: The weather is fairly predictable in the Sahara.
VALENTINE: The scale is different but the graph goes up and down the same way. Six thousand years in the Sahara looks like six months in Manchester, I bet you.
from Arcadia by Tom Stoppard
It is time to correct and soften a striking difference in emphasis between the chapters on Mechanics (6) and Statistical Mechanics (8). Contrast the simple and regular motions in the former with the unpredictable jigglings in the latter. The message here will be that even in mechanics easy predictability is not by any means universal, and generally found only in carefully chosen simple examples. Sensitivity to precise ‘aiming’ is found in problems just slightly more complicated than those we considered in our discussion of mechanics. However, the time it takes for such sensitivity to manifest itself in perceptibly different trajectories depends on the particular situation. ‘Chaotic’ as a technical term has come to refer to trajectories with a sensitive dependence on initial conditions, and the general subject of their study is now called Chaos.
There is a long tradition of teaching mechanics via simple examples, such as the approximately circular motion of the moon under the gravitational influence of the earth. This is a special case of the ‘two body problem’ – two massive objects orbiting about each other – whose solution in terms of elliptical trajectories is one of the triumphs of Newton's Laws.
If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or the atomic fact, or whatever you wish to call it)…
Richard P. Feynman
That a gas, air for example, is made up of a myriad tiny bodies, now called molecules, moving about randomly and occasionally colliding with each other and with the walls of a containing vessel, is today a commonplace fact that can be verified in many ways. Interestingly, this molecular view was widely though not universally accepted long before there were experimental methods for directly confirming it. The credit for the insight has to go to Chemistry, because a careful study of chemical reactions revealed regularities that could most easily be understood on the basis of the molecular hypothesis. These regularities were known to chemists by the end of the eighteenth century. By this time, the notion of distinct chemical species, or ‘elements,’ was well established, these being substances, like oxygen and sulfur, that resisted further chemical breakdown. It was discovered that when elements combine to make chemical compounds they do so in definite weight ratios. It was also found that when two elements combine to make more than one compound the weights of one of the elements, when referred to a definite weight of the second, stand one to another in the ratio of small integers – which sentence is perhaps too Byzantine to be made sense of without a definite example. (Carbon, for instance, forms two oxides; the weights of oxygen that combine with a fixed weight of carbon in the two stand in the ratio 1 to 2.)
… the whole burden of philosophy seems to consist in this – from the phenomena of motions to investigate the forces of nature, and then from these forces to demonstrate the other phenomena …
Isaac Newton
Probability enters theoretical physics in two important ways: in the theory of heat, which is a manifestation of the irregular motions of the microscopic constituents of matter; and, in quantum mechanics, where it plays the bizarre but, as far as we know, fundamental role already briefly mentioned in the discussion of radioactive decay.
Before we can understand heat, we have to understand motion. What makes objects move, and how do they move? Isaac Newton, in the course of explaining the motion of planets and of things around us that we can see and feel with our unaided senses, answered these questions for such motions three centuries ago. The science he founded has come to be called classical or Newtonian mechanics, to distinguish it from quantum mechanics, the theory of motion in the atomic and sub-atomic world.
Classical mechanics is summarized in Newton's ‘laws’ of motion. These will here be illustrated by an example involving the gravitational attraction, described by Newton's ‘law’ of gravitation.
The subject of probability is made particularly interesting and useful by certain universal features that appear when an experiment with random outcomes is tried a large number of times. This topic is developed intuitively here. We shall play with an example used in Chapter 2 and, after extracting the general pattern from the particular case, we shall infer the remarkable fact that only a very small fraction of the possible outcomes associated with many trials has a reasonable likelihood of occurring. This principle is at the root of the statistical regularities on which the banking and insurance industries, heat engines, chemistry and much of physics, and to some extent life itself depend. A relatively simple mathematical phenomenon has such far-reaching consequences because, in a manner to be made clear in this chapter, it is the agency through which certainty almost re-emerges from uncertainty.
The binomial distribution
To illustrate these ideas, we will go back to rolling our hypothetical fair dice. Following the example of the dissolute French noblemen of the seventeenth century, one of whose games we analyzed in such detail in the last chapter, we shall classify outcomes for each die into the mutually exclusive categories ‘six’ and ‘not-six’, which exhaust all possibilities. If the repeatable experiment consists of rolling a single die, the probabilities for these two outcomes are the numbers 1/6 = 0.16667 and 5/6 = 0.83333.
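The ‘six’ versus ‘not-six’ bookkeeping above is exactly the binomial pattern. A few lines of Python (an illustration of mine, not part of the text) compute the chance of a given number of sixes in a fixed number of rolls, and check the formula against a simulated repetition of the experiment:

```python
import math
import random

def binomial_prob(n, k, p):
    """Probability of exactly k 'successes' in n independent trials,
    each succeeding with probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Chance of exactly two sixes in ten rolls of a fair die
exact = binomial_prob(10, 2, 1/6)

# Monte Carlo check: repeat the ten-roll experiment many times
random.seed(1)
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if sum(1 for _ in range(10) if random.randint(1, 6) == 6) == 2
)
print(f"formula: {exact:.4f}, simulation: {hits / trials:.4f}")
```

The agreement between the formula and the simulated frequency improves as the number of repetitions grows, which is the ‘certainty from uncertainty’ theme of this chapter.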
The mathematical theory of probability was born somewhat disreputably in the study of gambling. It quickly matured into a practical way of dealing with uncertainties and as a branch of pure mathematics. When it was about 200 years old, the concept was introduced into physics as a way of dealing with the chaotic microscopic motions that constitute heat. In our century probability has found its way into the foundations of quantum mechanics, the physical theory of the atomic and subatomic world. The improbable yet true tale of how a way of thinking especially suited to the gambling salon became necessary for understanding the inner workings of nature is the topic of this book.
The next three chapters contain some of the basic ideas of the mathematical theory of probability, presented by way of a few examples. Although common sense will help us to get started and avoid irrelevancies, we shall find that a little mathematical analysis yields simple, useful, easy to remember, and quite unobvious results. The necessary mathematics will be picked up as we go along.
In the couplet by John Gay (1688–1732), the author of the Beggar's Opera, probability has a traditional meaning, implying uncertainty but reasonable likelihood. At roughly the same time that the verse was written, the word was acquiring its mathematical meaning.
The most subtle of the concepts that surfaced in the last chapter is equilibrium. Although the word suggests something unchanging in time, the molecular viewpoint has offered a closer look and shown temperature to be the average effect of molecular agitation. At first sight, it seems hard to say anything, let alone anything quantitative, about such chaotic motions. Yet, paradoxically, their very disorder provides a foundation upon which to build a microscopic theory of heat.
How does a dilute gas reach equilibrium? Contemplate a system in a thermally insulating rigid container, so that there is no transfer of energy or matter between the inside and the outside. As time passes, the energy the gas started with is being exchanged between the molecules in the system through random collisions. Finally, a state is reached which is unchanging as far as macroscopic observations are concerned. In this (equilibrium) state each molecule is still engaged in a complicated dance. To calculate the motion of any one molecule, one would eventually need to calculate the motion of all the molecules, because collisions between more and more of them, in addition to collisions with container walls, are the cause of the random motion of any one. Such a calculation, in addition to being impossible, even for the largest modern computer, would be futile: the details of the motion depend on the precise positions and velocities of all the molecules at some earlier time, and we lack this richness of knowledge.
If a thing is worth doing, it is worth doing badly.
G.K. Chesterton
The only mathematical operations we have needed so far have been addition, subtraction, division, multiplication, and an extension of the last: the operation of taking the square-root. [The square-root of a number multiplied by itself is equal to the number.] We have also had the help of a friendly work-horse, the computer, which gave insight into formulas we produced. That insight was always particular to the formula being evaluated; the general rules of thumb we used in the last chapter were obtained by nothing more or less than sleight of hand. To do better, there is no way of avoiding the mathematical operation associated with taking arbitrarily small steps an arbitrarily large number of times. This is the essential ingredient of what is called ‘calculus’ or ‘analysis.’ But don't be alarmed: our needs are modest. We shall be able to manage very well with only some information about the exponential and the logarithm. You will have heard, at least vaguely, of these ‘functions’ – to use the mathematical expression for a number that depends on another number, so that it can be represented by a graph – but rather than relying on uncertain prior knowledge, we shall learn what is needed by empirical self discovery.
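The idea of ‘arbitrarily small steps an arbitrarily large number of times’ can be tried out numerically. The sketch below (mine, not the book's) builds the exponential as the limit of many tiny compoundings, and uses the logarithm as its inverse:

```python
import math

# e^x as the limit of (1 + x/n)^n: n tiny compounding steps
def exp_by_steps(x, n):
    return (1 + x / n) ** n

for n in (10, 1000, 100_000):
    print(n, exp_by_steps(1.0, n))
# The printed values creep up toward e = 2.71828...

# The logarithm undoes the exponential: log(e^x) = x
assert abs(math.log(exp_by_steps(2.0, 10**6)) - 2.0) < 1e-4
```

Doubling n roughly halves the remaining error, which is the sense in which the small-step limit ‘converges’.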
Powers
The constructions we need are very natural generalizations of the concept of ‘powers’ which you will find that you are quite familiar with from what has gone before.
Here are a few randomly chosen and occasionally whimsical uses for your new knowledge about the workings of chance. The situations I shall describe all have to do with everyday life. In such applications, the difficulty is not only in the mathematical scheme but also in the frequently unstated assumptions that lie beneath it. It helps to be able to identify the repeatable random experiment and, when many trials are being treated as independent, to be able to argue that they are in fact unconnected.
The examples have been chosen to illustrate the role of statistical fluctuations, because this is the most interesting aspect of randomness for the physical applications to follow. A statistician or a mathematician would choose other examples, but, then, such a person would write a different book.
Polling
Opinion polls are second only to weather forecasts in bringing probability, often controversially, into our daily lives. ‘Polls Wrong,’ a headline might say after an election. Opinions change, and often suddenly. A pollster needs experience and common sense; his or her statistical knowledge need not be profound. But, there is a statistical basis to polling. Consider the question: ‘If 508 of 1000 randomly selected individuals prefer large cars to small, what information is gleaned about the car preferences of the population at large?’ If we ignore the subtleties just alluded to, the question is equivalent to the following one.
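The statistical basis of the 508-out-of-1000 question can be sketched in a few lines (my illustration; the ‘two standard errors’ rule for a roughly 95% interval is the standard binomial approximation, not a result derived here):

```python
import math

# Poll: 508 of 1000 randomly chosen people prefer large cars.
n = 1000
p_hat = 508 / n

# Standard error of an estimated proportion under the binomial model
se = math.sqrt(p_hat * (1 - p_hat) / n)

# Conventional 95% interval: roughly two standard errors either side
low, high = p_hat - 2 * se, p_hat + 2 * se
print(f"estimate {p_hat:.3f}, 95% interval ({low:.3f}, {high:.3f})")
```

Because the interval straddles one half, a sample of this size cannot distinguish a 50.8% preference from an even split, which is why such headlines deserve caution.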
The eternal mystery of the world is its comprehensibility
Albert Einstein
The purpose of this little book is to introduce the interested non-scientist to statistical reasoning and its use in physics. I have in mind someone who knows little mathematics and little or no physics. My wider aim is to capture something of the nature of the scientific enterprise as it is carried out by physicists – particularly theoretical physicists.
Every physicist is familiar with the amiable party conversation that ensues when someone – whose high school experience of physics left a residue of dread and despair – says brightly: ‘How interesting! What kind of physics do you do?’ How natural to hope that passing from the general to the particular might dispel the dread and alleviate the despair. Inevitably, though, such a conversation is burdened by a sense of futility: because there are few common premises, there is no reasonable starting point. Yet it would be foolishly arrogant not to recognize the seriousness behind the question. As culprit or as savior, science is perceived as the force in modern society, and scientific illiteracy is out of fashion.
However much I would like to be a guru in a new surge toward literacy in physics, ministering to the masses on television and becoming rich beyond the dreams of avarice, this, alas, is not to be.
A man hath sapiences thre,
Memorie, engin and intellect also
Chaucer
Engines are, etymologically and in fact, ingenious things. By the end of this chapter we shall understand heat-engines, which are devices that convert ‘heat’ – an intuitive idea to be made precise here – into pushes or pulls or turns. First, however, we need to systematize some of the things we have already learnt about energy and entropy, and thereby extract the science of Thermodynamics from the statistical viewpoint of the last chapter.
Thermodynamics is a theory based on two ‘Laws,’ which are not basic laws of nature in the sense mentioned in Chapter 1, but commonsense rules concerning the flow of energy between macroscopic systems. From the point of view we are taking here, these axioms follow from an understanding of the random motion of the microscopic constituents of matter. It is a tribute to the genius of the scientists – particularly Carnot, Clausius, and Kelvin – who formulated thermodynamics as a macroscopic science, that their scheme is in no way undermined by the microscopic view, which does, however, offer a broader perspective and open up avenues for more detailed calculations. (For example, effusion – discussed at the end of the last chapter – lies beyond the scope of thermodynamics.)
Work, heat, and the First Law of Thermodynamics
The First Law of Thermodynamics is about the conservation of energy.
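As a minimal numerical illustration (mine, using the common sign convention that heat added counts as positive and work done by the system as negative in the energy balance):

```python
# First Law bookkeeping: the change in internal energy equals
# the heat added TO the system minus the work done BY the system.
def delta_energy(heat_in, work_by_system):
    return heat_in - work_by_system

# A gas absorbs 100 J of heat and does 40 J of work pushing a piston:
print(delta_energy(100.0, 40.0))  # internal energy rises by 60.0 J
```

Nothing here is more than arithmetic; the content of the law is that this bookkeeping always balances, whatever the process.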
Linear polymers may be thought of as very long flexible chains made up of single units called monomers. When placed in a solvent at low dilutions, they may exhibit several different types of morphology. If the interactions between different parts of the chain are primarily repulsive, they tend to be in extended configurations with a large entropy. If, however, the forces are sufficiently attractive, the chains collapse into compact objects with little entropy. The collapse transition between these two states occurs at the theta point, where the energy of attraction balances the entropy difference between the two states. This turns out to be a continuous phase transition, to be described later in Section 9.5. However, even in the swollen, entropy dominated, phase, it turns out that the statistics of very long chains are governed by non-trivial critical exponents. Like the percolation problem, this is a purely geometrical phenomenon, yet, through a mapping to a magnetic system, all the standard results of the renormalization group and scaling may be applied. Before describing this, however, it is important to recall some of the simpler approaches to the problem.
Random walk model
If the problem of a long polymer chain is equivalent to some kind of critical behaviour, we would expect universality to hold, and some of the important quantities to be independent of the microscopic details. This means that we may forget all about polymer chemistry, and regard the monomers as rigid links of length a, like a bicycle chain.
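For such an ideal chain of N uncorrelated links of length a, the mean squared end-to-end distance is ⟨R²⟩ = N a². A short simulation checks this (a sketch of mine; the choice of steps along lattice axes is an assumption made for simplicity, and by universality it should not matter):

```python
import random

# Ideal chain: N rigid links of length a, each pointing in a random
# direction along one of the three coordinate axes (a lattice walk).
def end_to_end_sq(N, a=1.0, rng=random):
    x = y = z = 0.0
    for _ in range(N):
        axis = rng.randrange(3)
        step = a if rng.random() < 0.5 else -a
        if axis == 0:
            x += step
        elif axis == 1:
            y += step
        else:
            z += step
    return x * x + y * y + z * z

random.seed(0)
N, samples = 100, 5000
mean_R2 = sum(end_to_end_sq(N) for _ in range(samples)) / samples
print(mean_R2)  # close to N * a^2 = 100 for an ideal chain
```

The self-avoidance of a real polymer changes the exponent, ⟨R²⟩ ~ N^{2ν} with ν > 1/2, which is the non-trivial critical behaviour discussed in the text.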
In the previous chapter, the ferromagnetic Ising model provided a simple example of a phase diagram with an associated fixed point structure of the renormalization group flows. There were stable fixed points corresponding to low and high temperature phases, and a critical fixed point controlling the behaviour of critical Ising systems. However, more realistic systems often have more complicated phase diagrams, and therefore a richer fixed point structure. In this chapter we study some of these examples, and show how, even with a rather qualitative description of renormalization group flows, it is possible to understand phase diagrams from the renormalization group viewpoint. More importantly, when more than one non-trivial fixed point is present, the question arises as to which is the dominant one in a particular region of the phase space. The renormalization group answers this question through the theory of cross-over behaviour. The existence of such phenomena, whereby different fixed points may influence the properties of the same system on different length scales, is totally absent in mean field treatments.
Ising model with vacancies
As a first example, consider a generalisation of the Ising model in which the spin variables s(r) may take the value 0 as well as ±1. This may be viewed as the classical version of a quantum spin-1 magnet, or as a lattice gas of magnetic particles, with |s(r)| playing the role of the occupation number.
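A minimal energy function for such a spin-1 model can be written down directly (my sketch; the coupling J is the usual nearest-neighbour exchange, and the delta term controlling the number of vacancies is the standard Blume–Capel form, an assumption here rather than a definition from the text):

```python
# Energy of a spin-1 configuration (s = -1, 0, +1) on a 1D ring.
# J couples nearest neighbours; the delta term (assumed Blume-Capel
# form) penalises occupied sites, so it controls the vacancy density.
def energy(spins, J=1.0, delta=0.0):
    n = len(spins)
    pair = sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    occupied = sum(s * s for s in spins)   # |s| as occupation number
    return -J * pair + delta * occupied

aligned = [1, 1, 1, 1]   # fully magnetised ring: E = -4J
diluted = [1, 0, 1, 0]   # vacancies break every bond: pair term is 0
print(energy(aligned), energy(diluted))
```

Varying delta interpolates between the pure Ising model (vacancies suppressed) and a nearly empty lattice, which is what generates the richer phase diagram studied in this chapter.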