A crystal consists of a collection of atoms arranged in a regular array, the spacing between atoms being of the same order of magnitude as the dimensions of the atoms. Each atom is more or less anchored to one point, called its site in the lattice, by the electrostatic forces produced by all the other atoms. We shall not find it necessary here to discuss the details of how this comes about; nor shall we consider the various patterns in which the atoms can be arranged. It will be sufficient to remember the essential feature that the structure of the crystal is periodic in space.
We have seen in chapter 5 that the energy of an electron bound to an atom is restricted to certain discrete values. Imagine that we can assemble a crystal of identical atoms whose spacing L can be altered at will. If L is large enough, the motion of an electron in one of the atoms will be affected to a negligible extent by the electrons and nuclei of the other atoms. Each atom then behaves as if it were isolated, with its electrons in discrete bound states. In figure 10.1(a) we have drawn a schematic diagram of the potential V(r) in which an electron moves in this situation. Suppose that the spacing L is now reduced (figure 10.1(b)). The potential V(r) in the neighbourhood of a given atom is now affected by the presence of the nuclei and electrons of the other atoms, particularly those that are closest.
We have so far described the properties of intrinsic semiconductors, in which the crystal lattice is perfectly regular. However, the electrical properties of semiconductors, unlike metals or semimetals, are drastically affected by the addition of small traces of impurity atoms. Observable effects occur with impurity concentrations as low as a few parts in 10⁸, and increasing the impurity concentration to one part in 10⁵ can increase the conductivity by as much as a factor of 10³ at room temperature and 10¹² at liquid-helium temperatures. The semiconductor is said to be doped with impurity atoms.
In order to study the effect of doping, we first consider a crystal consisting of a periodic array of one type of atom, except that just one of the atoms has been replaced by an atom of a different type. As a one-dimensional model of this situation, we take the same infinite chain of square wells as in chapter 10, but with one of the wells having a different depth, U₁ say (figure 12.1).
We recall that for the perfectly regular crystal, the stationary state solutions are Bloch waves of the form (10.3). These have the property that when the required continuity conditions on the wave function are satisfied at the two edges of one of the potential wells, they are automatically satisfied also at the edges of all the other wells.
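To make the Bloch-wave condition concrete, here is a minimal numerical sketch. It uses the delta-function (Kronig-Penney) limit of the square-well chain, in which the continuity conditions collapse to the single relation cos(kL) = cos(qL) + P sin(qL)/(qL), with E = ħ²q²/2m; an energy lies in an allowed band only when the right-hand side has magnitude at most 1. The well strength P and the grid are illustrative choices, not values from the text.

```python
import numpy as np

# Kronig-Penney model in the delta-function limit (illustrative sketch).
# Bloch condition:  cos(kL) = cos(qL) + P*sin(qL)/(qL),  E = (hbar*q)^2/2m.
# An energy is allowed (lies in a band) only if |rhs| <= 1.
P = 3.0                                    # dimensionless well strength (assumed)
qL = np.linspace(1e-6, 4 * np.pi, 4000)    # q*L, in units where L = 1

rhs = np.cos(qL) + P * np.sin(qL) / qL
allowed = np.abs(rhs) <= 1.0               # True inside an energy band

# Report band edges; E ~ (qL)^2 in units of hbar^2 / (2 m L^2).
edges = np.flatnonzero(np.diff(allowed.astype(int)))
print("band edges at (qL)^2 ≈", np.round(qL[edges] ** 2, 2))
```

Scanning the right-hand side over q and keeping only the points where its magnitude does not exceed 1 reproduces the familiar band/gap structure of the periodic chain.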
Let S be a quantum system that is prepared in a pure state φ and A a discrete nondegenerate observable with eigenvalues aᵢ and eigenstates φaᵢ.
If the preparation φ is not an eigenstate of A, then quantum mechanics does not provide any information about the value of the observable A. The pair 〈φ, A〉 merely defines a probability distribution p(φ, aᵢ), the experimental meaning of which is given by the statistical interpretation of quantum mechanics: the real positive number p(φ, aᵢ) is the probability of obtaining the result aᵢ in a measurement of the observable A. This means that if one were to perform a large series of N measurements of this kind, then the relative frequency f_N(φ, aᵢ) of the result aᵢ would, for almost all test series, approach the probability p(φ, aᵢ) as N → ∞ (cf. chapter 3). One could, however, in addition to these well-established results, tentatively assume that a certain value aᵢ, or even an eigenstate φaᵢ of A, pertains objectively to the system S but that this value or state is subjectively unknown to the observer, who knows only the probability p(φ, aᵢ) and hence the distribution of possible measurement outcomes. The probability would then express the subjective ignorance (or knowledge) about an objectively decided value or eigenstate of A. The hypothetical attribution of a certain value or eigenstate of A to the system S will be called objectification.
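As an illustration of this frequency interpretation, the following sketch simulates N independent measurements of A on a hypothetical three-level system and compares f_N(φ, aᵢ) with the Born probability p(φ, aᵢ) = |〈φaᵢ, φ〉|²; the amplitudes are arbitrary illustrative numbers, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-level system: phi expanded in the eigenbasis of A.
phi = np.array([0.6, 0.48 + 0.64j, 0.0])       # illustrative amplitudes
phi = phi / np.linalg.norm(phi)
p = np.abs(phi) ** 2                           # Born probabilities p(phi, a_i)

for N in (100, 10_000, 1_000_000):
    outcomes = rng.choice(len(p), size=N, p=p) # N independent A-measurements
    f = np.bincount(outcomes, minlength=len(p)) / N
    print(N, np.round(f, 4), "vs p =", np.round(p, 4))
```

For almost every run the frequencies drift toward p(φ, aᵢ) as N grows, which is all the statistical interpretation asserts.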
Where is the frontier of physics? Some would say 10⁻³³ cm, some 10⁻¹⁵ cm and some 10⁺²⁸ cm. My vote is for 10⁻⁶ cm. Two of the greatest puzzles of our age have their origins at this interface between the macroscopic and microscopic worlds. The older mystery is the thermodynamic arrow of time, the way that (mostly) time-symmetric microscopic laws acquire a manifest asymmetry at larger scales. And then there's the superposition principle of quantum mechanics, a profound revolution of the twentieth century. When this principle is extrapolated to macroscopic scales, its predictions seem wildly at odds with ordinary experience.
This book deals with both these ‘mysteries,’ the foundations of statistical mechanics and the foundations of quantum mechanics. It is my thesis that they are related. Moreover, I have teased the reader with the word ‘foundations,’ a term that many of our hardheaded colleagues view with disdain. I think that new experimental techniques will soon subject these ‘foundations’ to the usual scrutiny, provided the right questions and tests can be formulated. Historically, it is controlled observation that transforms philosophy into science, and I am optimistic that the time has come for speculations on these two important issues to undergo that transformation.
In the next few pages I provide previews of the book: Section 1.1 is a statement of the main ideas. Section 1.2 is a chapter-by-chapter guide.
There are two principal themes: time's arrows and quantum measurement. In both areas I will make significant statements about what are usually called their foundations. These statements are related, and involve modification of the underlying hypotheses of statistical mechanics. The modified statistical mechanics contains notions that are at variance with certain primitive intuitions, but it is consistent with all known experiments.
I will try to present these ideas as intellectually attractive, but this virtue will be offered as a reason for study, not as a reason for belief. Historically, intellectual satisfaction has not been a reliable guide to scientific truth. For this reason I have striven to provide experimental and observational tests where I could, even where such experiments are not feasible today. The need for this hardheaded, or perhaps intellectually humble, approach is particularly felt in the two areas that I will address. The foundations of thermodynamics and the foundations of quantum mechanics have been among the most contentious areas of physics; indeed, some would deny their place within that discipline. In my opinion, this situation is a result of the paucity of relevant experiment.
In the last chapter we enumerated ‘arrows of time.’ There was a subtheme concerned with which candidates made it to the list and which didn't, the aim being to eliminate arrows that were immediate consequences of others. Now the subtheme becomes the theme.
We are concerned with correlating arrows of time. Our most important conclusion will be that the thermodynamic arrow of time is a consequence of the expansion of the universe. Coffee cools because the quasar 3C273 grows more distant. We will discuss other arrows, in particular the radiative and the biological, but for them the discussion is a matter of proving (or perhaps formulating) what you already believe. For the thermo/cosmo connection there remains significant controversy.
As far as I know it was Thomas Gold who proposed that the thermodynamic arrow of time had its origins in cosmology, in particular in the expansion of the universe. Certainly there had been a lot of discussion of arrows of time before his proposal, but in much of this discussion you could easily get lost, not knowing whether someone was making a definition or solving a problem. Now I'm sure the following statement slights many deep thinkers, but I would say that prior to Gold's idea the best candidate for an explanation of the thermodynamic arrow was that there had been an enormous fluctuation. If you have a big enough volume and wait long enough, you will get a fluctuation big enough for life on earth.
If even some of the ideas presented here are correct, the world is different from what it seems. The major theses of the book, on time's arrows and on quantum measurement theory, are unified by the notion of cryptic constraints. We see, sense, and specify macroscopic states, but what we predict about these states depends on an important assumption concerning their microscopic situation. The assumption is that the actual microstate is equally likely to be any of those consistent with the macrostate. And I say, not so. For various reasons, both classical and quantum, many otherwise-possible microstates are eliminated. As presented in detail in previous chapters, such elimination impacts many areas of physics, from the cosmos to the atom. But we also make the point that this elimination can be difficult to notice and in particular there is no experimental evidence that confirms the usual assumption. By an explicit example in a model (Fig. 4.3.2), we show that a future constraint eliminating 98% of the microstates can go completely unnoticed.
My expectation is that this fundamental change in the foundations of statistical mechanics is needed. Whether or not it takes the forms I've proposed will be determined by future investigations.
In the next section I will review open problems for the program implicit in this book. The tone will be that used in speaking with colleagues: an attempt to be frank about difficulties and a willingness to be wildly speculative.
The existence of ‘special’ states can be established with ordinary quantum mechanics. We seek particular microscopic states of large systems that have the property that they evolve to only one or another macroscopic outcome, when other microstates of a more common sort (having the same initial macrostate) would have given grotesque states. Justifying the hypothesis that Nature chooses these special states as initial conditions is another matter. In this chapter we stick to the narrower issue of whether there exist states that can do the job, irrespective of whether they occur in practice.
In Section 7.1 we give several explicit examples. In Section 6.2 we exhibited an apparatus model and its special states. Here we look at the decay of an unstable quantum state, not as a single degree of freedom in a potential, but with elements of the environment taken into account as well. We also study another popular many-body system, the spin-boson model. This has extensive physical applications and, especially with respect to Josephson junctions, has been used to address quantum measurement questions. A single degree of freedom in a potential, by the way, does not generally lend itself to ‘specializing,’ and an example is shown below.
In recent years, exotic non-local effects of quantum mechanics have been exhibited experimentally. Behind many of these lies entanglement, the property of a wave function of several variables that it does not factor into a product of functions of these variables separately.
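One common numerical way to test this failure to factor (a standard technique, not one taken from the text) is the Schmidt decomposition: reshape the two-particle amplitudes into a matrix and count its nonzero singular values. A product state has exactly one; more than one signals entanglement. A minimal sketch for two qubits:

```python
import numpy as np

def schmidt_rank(psi, tol=1e-12):
    """Number of nonzero Schmidt coefficients of a two-qubit amplitude vector."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > tol))

product = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])  # |0> ⊗ |+>
bell    = np.array([1, 0, 0, 1]) / np.sqrt(2)                # (|00> + |11>)/√2

print(schmidt_rank(product))   # 1 -> factorizes: not entangled
print(schmidt_rank(bell))      # 2 -> does not factor: entangled
```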
Quantum measurement theory addresses several problems. All of them arise from applying a microscopically valid theory to the macroscopic domain. The most famous is the Schrödinger cat example in which the ordinary use of quantum rules suggests a superposition of macroscopically different states, something we do not seem to experience. Another problem is the Einstein-Podolsky-Rosen (EPR) paradox in which a fundamental quantum concept, entanglement, creates subtle and non-classical correlations among remote sets of measurements. Although it seems mere word play to observe that such an apparent micro-macro conflict ought to be viewed as a problem in statistical mechanics—by virtue of the way that discipline is defined—until recently the importance of this observation was seldom recognized.
The founders, Bohr, Schrödinger, Heisenberg, and Einstein, did not emphasize this direction, but over the years the realization that measurement necessarily involves macroscopic objects—objects with potentially mischievous degrees of freedom of their own—began to be felt. For me this was brought home by the now classic fourth chapter of Gottfried's text on quantum mechanics. He takes up the following problem. The density matrix ρ₀ for a normalized pure state is ρ₀ = |ψ〉〈ψ|. It follows that Tr ρ₀ = Tr ρ₀² = 1. After a non-trivial measurement, the density matrix ρ is supposed to be diagonal in the basis defined by the measured observable, with Tr ρ = 1 but Tr ρ² < 1.
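Gottfried's point can be checked numerically in a few lines, with an arbitrary illustrative two-state ψ: the pure-state density matrix has purity Tr ρ² = 1, while its diagonal part, the post-measurement ρ, has Tr ρ² < 1.

```python
import numpy as np

psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])   # illustrative normalized pure state
rho0 = np.outer(psi, psi.conj())               # rho_0 = |psi><psi|

print(np.trace(rho0).real, np.trace(rho0 @ rho0).real)   # 1.0  1.0

# An ideal (non-selective) measurement erases the off-diagonal terms in the
# measured basis, leaving only the diagonal part:
rho = np.diag(np.diag(rho0))
print(np.trace(rho).real, np.trace(rho @ rho).real)      # 1.0  0.58 < 1
```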
In our experience, time is not symmetric. From cradle to grave things happen that cannot be undone. We remember the past, predict the future. These arrows, while not always—or even now—deemed suitable for scientific investigation, have been recognized since the dawn of thought. Technology and statistical mechanics give us a precise characterization of the thermodynamic arrow. That's what the previous chapter was about. But the biological arrow (memory, etc.) is elusive. Then we come to arrows that only recently have been recognized. The greatest of these is the fact that the universe is expanding, not contracting. This is the cosmological arrow. Related, perhaps a consequence, is the radiative arrow. Roughly, this is the fact that one uses outgoing wave boundary conditions for electromagnetic radiation, that retarded Green's functions should be used for ordinary calculations, that radiation reaction has a certain sign, that more radiation escapes to the cosmos than comes in. Yet more recently, the phenomenon of CP violation was discovered in the decay of K mesons. As a consequence of CPT invariance, and some say by independent deduction, there is violation of T, time reversal invariance. This CP arrow could be called the strange arrow of time, not only because it was discovered by means of ‘strange’ particles, but because its rationale and consequences remain obscure. There is another phenomenon often associated with an arrow, the change in a quantum system resulting from a measurement.
Why should special states occur as initial conditions in every physical situation in which they are needed? Half this book has been devoted to making the points that initial conditions may not be as controllable as they seem; that there may be constraints on microscopic initial conditions; that this would not have been noticed; that such constraints can arise from two-time or future conditioning; that in our universe such future conditioning may well be present, although, as remarked, cryptic. In this chapter I will take up more detailed questions: what future conditions could give rise to the need for our ‘special’ states, and why should those particular future conditions be imposed.
Before going into this there is a point that needs to be made. Everything in the present chapter could be wrong and the thesis of Chapter 6 nevertheless correct. It is one thing to avoid grotesque states (and solve the quantum measurement problem) by means of special states and it is another to provide a rationale for their occurrence. I say this not only to highlight the conceptual dependencies of the theses in this book, but also because there is a good deal of hand waving in the coming chapter and I don't want it to reflect unfavorably on the basic proposal. As pointed out earlier, the usual thermodynamic arrow of time can be phrased as follows: initial states are arbitrary, final states special.
Pure quantum evolution is deterministic, ψ → exp(−iHt/ħ)ψ, but, as in classical mechanics, probability enters because a given macroscopic initial condition contains microscopic states that lead to different outcomes; the relative probability of those outcomes equals the relative abundance of the microscopic states for each outcome. This is the postulated basis for the recovery of the usual quantum probabilities, as discussed in Chapter 6. In this chapter we take up the question of whether the allowable microstates (the ‘special’ states) do indeed come with the correct abundance. To recap: ‘special’ states are microstates not leading to superpositions of macroscopically different states (‘grotesque’ states). For a given experiment and for each macroscopically distinct outcome of that experiment these states form a subspace. We wish to show that the dimension of that subspace is the relative probability of that outcome.
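A toy illustration of this abundance idea (the dimensions and the uniformity assumption below are mine, for illustration only, not the book's model): if the special states for an outcome span a dᵢ-dimensional subspace of a d-dimensional space, then for microstates drawn uniformly (Haar-random unit vectors) the average weight falling in that subspace is dᵢ/d, so relative dimension plays the role of relative probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: in C^10, outcome 1 gets a 7-dimensional subspace of
# 'special' states, outcome 2 the complementary 3-dimensional one.
d, d1 = 10, 7
w1 = []
for _ in range(50_000):
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    z /= np.linalg.norm(z)                  # Haar-random unit vector in C^d
    w1.append(np.sum(np.abs(z[:d1]) ** 2))  # weight in the first subspace

print(np.mean(w1), "≈", d1 / d)             # ~0.7 = dim ratio
```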
This is an ambitious goal, especially considering the effort needed to establish that there are any special states—the subject of Chapter 7. As remarked there, the special states exhibited are likely to be only a small and atypical fraction of all special states in the physical apparatus being modeled (e.g., the cloud chamber). In one example (the decay model) there is a remarkable matching of dimension and conventional probability, but I would not make too much of that. What is especially challenging about the present task is that we seek a universal distribution.
In Chapter 6 I presented a proposal for how and why grotesque states do not occur in Nature. In subsequent chapters I explored consequences and found subsidiary requirements, such as Cauchy distributed kicks. To find out whether all or part of our scheme is the way Nature works, we turn to experiment. How to turn to experiment is not so obvious, since the basic dynamical law, ψ → exp(−iHt/ħ)ψ, is the same as for most other theories. Our basic assertion concerns not the dynamical law but the selection of states. Therefore it is that assertion that must be tested. For example, one way is to set up a situation where the states we demand, the ‘special’ states, cannot occur. Then what happens? Another part of our theory is the probability postulate, and this deals not only with the existence of special states but with their abundance. It enters in the recovery of standard probabilities but has far-reaching consequences that may well lead to the best tests of the theory. Such tests arise in the context of EPR situations.
The experimental tests fall into the following categories.
Precluding a class of special states. This should prevent a class of outcomes. If the changes in the system (due to precluding the class of special states) do not change the predictions of the Copenhagen interpretation, then this provides a test. In particular, with a class of special states precluded, our theory forbids the associated outcome.
Although the variational principles of classical mechanics lead to two-time boundary value problems, when dealing with the real world everyone knows you should use initial conditions. Not surprisingly, the eighteenth-century statements of classical variational principles were invested with religious notions of teleology: guidance from future paths not taken could only be divine. The Feynman path integral formulation of quantum mechanics makes it less preposterous that a particle can explore non-extremal paths; moreover, it is most naturally formulated using information at two times. In the previous chapter, use of the two-time boundary value problem was proposed as a logical prerequisite for considering arrow-of-time questions. Perhaps this is no less teleological than Maupertuis's principle, except that we remain neutral on how and why those conditions are imposed.
In this chapter we deal with the technical aspects of solving two-time boundary value problems. In classical mechanics you get into rich existence questions—sometimes there is no solution, sometimes many. For stochastic dynamics the formulation is perhaps easiest, which is odd considering that this is the language most suited to irreversible behavior. Our ultimate interest is the quantum two-time problem and this is nearly intractable.
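For the stochastic case, here is a minimal sketch of what ‘easiest’ means in practice, using an illustrative three-state Markov chain of my own choosing: conditioning on both endpoints, the distribution at an intermediate time follows from multiplying a forward propagator from the initial condition by a backward propagator into the final condition, P(x_t = j | x_0 = a, x_T = b) ∝ (Mᵗ)[a, j] · (M^(T−t))[j, b].

```python
import numpy as np

# Two-time boundary value problem for a stochastic process (illustrative).
# Condition the chain on x_0 = a and x_T = b; the intermediate distribution
# is the normalized product of forward and backward propagators.
M = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])    # row-stochastic transition matrix (assumed)
a, b, T = 0, 2, 10

for t in range(T + 1):
    fwd = np.linalg.matrix_power(M, t)[a, :]      # propagate the initial condition
    bwd = np.linalg.matrix_power(M, T - t)[:, b]  # propagate the final condition
    p = fwd * bwd
    p /= p.sum()
    print(t, np.round(p, 3))
```

At t = 0 and t = T the distribution collapses onto the imposed boundary states, and in between it interpolates; nothing beyond matrix multiplication is needed, which is the sense in which the stochastic formulation is the easiest of the three.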
Later in the book I will propose that the universe is most simply described as the solution of a two-time boundary value problem. A natural reaction is to wonder whether this is too constraining. Given that our (lower entropy) past already cuts down the number of possible microstates, are there sufficient microstates to meet a future condition as well?