What can more than two thousand years of human thought and several hundred years of hard science tell us finally about the true nature of space and time? This is the question that the philosopher Jeremy Butterfield and I posed to a unique panel of top mathematicians, physicists and theologians in a public discussion that took place at Emmanuel College, Cambridge in September 2006, and this is the book that grew out of that event. All four other panellists, myself and the astronomer Andy Taylor who spoke at a related workshop, now present our personal but passionately held insights in rather more depth.
The first thing that can be said is that we do not honestly know the true nature of space and time. This book is therefore not about selling a particular theory; rather, it provides six refreshingly diverse points of view from which the reader can see that the topic is as alive today as it was at the time of St Augustine or of Newton or of Einstein. Our goal is not only to expose to the modern public revolutionary ideas at the cutting edge of theoretical physics, mathematics and astronomy but also to show that there is a debate to be had in the first place and at all levels, including a wider human context. Moreover, the reader will find here essays from leading figures unconstrained by peer pressure, fashion or dogma. These are views formed not today or yesterday but in each case over a scientific lifetime of study.
Heisenberg and Schrödinger invented mathematical formalisms that provided correct answers to all the various problems of (non-relativistic) quantum theory. It was Bohr who uncovered the underlying significance of the theory; in particular he showed how the profound conceptual difficulties encountered as the theory developed – the so-called wave–particle duality, the totally unclassical nature of the uncertainty principle – could be neutralised by a revision, or more accurately a generalisation, of our use of physical concepts.
Such was the generally accepted view of things from soon after 1927, when Bohr first put forward his ideas on complementarity at an international meeting of physicists at Como, through the 1930s, when it was felt that Bohr destroyed Einstein's contrary opinions, and certainly up to the time of Bohr's death in 1962. Bohr's analysis is usually spoken of as the Copenhagen interpretation of quantum theory, though it is instructive to realise that such a term was frowned upon by those closest to Bohr, since it appears to suggest that the interpretation is just one among (conceivably) many.
Let us examine, for example, the words of Léon Rosenfeld, Bohr's disciple and long-term collaborator, as expounded at a conference in 1957 [52]. Rosenfeld maintained that any idea of ‘interpreting a formalism’ was a ‘false problem’, that in a good theory, the ‘ordinary language (spiced with technical jargon for the sake of conciseness)’ in which it is described, is ‘inseparably united … with whatever mathematical apparatus is necessary’, that ‘we are here not faced with a matter of choice between two possible languages or two possible interpretations, but with a rational language intimately connected with the formalism and adapted to it, on the one hand, and with rather wild, metaphysical speculations … on the other’.
If I were to ask a number of people in the street what they think was the most important new theory in physics in the twentieth century, and who has been the greatest physicist, I am fairly sure that – of those able to express an opinion at all – a substantial majority would say that relativity has been the greatest theory, and Einstein the greatest physicist.
Indeed Albert Einstein has probably achieved the remarkable feat of not just becoming the best-known practitioner of any branch of science among the general public, but retaining that position for 85 years, 50 of those years since his death in 1955.
It was in 1905, when he was 26, that Einstein astonished the scientific community by producing four pieces of work of the very highest quality. These included his first paper on relativity, in which he introduced what is now called the special theory of relativity (a term I shall explain in Chapter 3). The three other papers will be referred to in due course. What was astonishing was not just the quality of the work, but that the author was not an academic of note, or even of promise, but was working as a patent inspector after rather a mediocre student career. (A recent, pleasantly written biography of Einstein is that of Abraham Pais [1], himself a well-known physicist; Pais gives references to many other accounts of Einstein's life. Another, more recent, biography is that by Fölsing [2].)
In what is perhaps Virginia Woolf's most famous novel, much of the narrative describes intentions, hopes, plans for the visit To the Lighthouse, a visit forestalled by bad weather. There follows a section ‘Time passes’ in which all is war, death, decay, desperation; only the poets thrive. Then life returns slowly and timorously; the visit is at last made, in sombre reflective mood.
Between perhaps 1930 and 1952, the study of the meaning of quantum theory went through its own period of emptiness. Von Neumann had done most to cause it. Einstein could not disturb it … When interest did creep back, it was a result of the work of David Bohm.
Bohm had already experienced a chequered career. He was born in the USA in 1917, and rapidly built up an exceedingly high reputation as a theoretical physicist. After initial collaboration with Robert Oppenheimer, he specialised in the physics of plasmas – gases in which, because of high temperature or low pressure, atoms are ionised, or broken up into negatively charged electrons and positively charged ions. Plasmas [118] are important in astrophysics and also in the effort to achieve controlled nuclear fusion. In nuclear fusion, small nuclei fuse together to form one large one, with the emission of energy; high temperatures are required, and so one is forced to deal with plasmas. This is also exactly the process that causes stars to radiate energy.
In Chapter 4, we left Einstein in early 1926, cautiously optimistic that Schrödinger's scheme might provide what Heisenberg's could not, and did not attempt to – a mathematical description of atomic processes that could also give a physical picture of what was actually occurring. But the demonstration that the two formalisms were equivalent severely dented such optimism.
It was still possible for Einstein to prefer Schrödinger's version as offering more hope for future physical interpretation. Thus it is interesting to review the recommendations Einstein made for Nobel Prizes [1]. In 1928, Einstein suggested that Heisenberg and Schrödinger might share a prize, adding ‘With respect to achievement, each one … deserves a full Nobel prize although their theories in the main coincide in regard to reality content.’ However, Einstein was cautious enough to add that ‘it still seems problematic how much will ultimately survive of [their] grandiosely conceived theories’, and gave precedence to de Broglie, also mentioning Davisson, Germer, Born and Jordan.
By 1931, more convinced that quantum theory would last (and with de Broglie the 1929 prizewinner), Einstein plumped for Heisenberg and Schrödinger, commenting that ‘In my opinion, this theory contains without doubt a piece of the ultimate truth. The achievements of both men are independent of each other, and so significant that it would not be appropriate to divide a Nobel prize between them.’
Quantum theory was produced roughly between 1900 and 1925; it changed our view of the Universe in a revolutionary way. Such is the central preoccupation of this book. Relativity was another revolutionary theory produced between 1905 and 1916; it is less central here. The clear implication of these statements is that, prior to 1900, there was an established body of theory that was widely successful and was thought to provide final answers to fundamental physical questions. This body of theory is known as classical physics.
The above is a simplified account of the development of physics over the last few hundred years. There is a good deal of truth in it. By 1900, Newton's Laws of mechanics had been established for over 200 years; they had been overwhelmingly successful in describing a huge range of phenomena, both terrestrial and in the solar system. They were held to be among the greatest human achievements, indeed the very greatest strictly scientific achievement, and practically a direct revelation of divine intent, and by 1900 it seemed unthinkable that they could be challenged.
The same could not quite be said of electromagnetism, an area of physics that encompasses electricity, magnetism and, as will be seen later in this chapter, optics. By 1900, what we now take to be the complete and final theory of classical electromagnetism, that of James Clerk Maxwell, was over 30 years old, and the discovery of radio waves by Heinrich Hertz, which was acknowledged to confirm the theory, more than ten years old.
In this chapter I shall discuss briefly a number of the interesting developments – interpretational, theoretical, experimental – that have taken place in the foundations of quantum theory over the last few decades, where appropriate relating them to the work of Bohr or Einstein. While the topics considered will range well beyond the specific areas studied by John Bell, I think it is fair to say that it was the interest stimulated by his ideas that led to nearly all the work described.
This could not, though, apply to the very first ideas I discuss, which date from as early as 1957, when a PhD student at Princeton University, Hugh Everett, wrote a thesis titled The Theory of the Universal Wave Function [231]. A short version of this was published [232], and it was followed by a brief positive assessment [233] of Everett's ideas by John Wheeler, who had guided and encouraged him. (Sixteen years later, both of Everett's papers, and a number of related ones, were collected in a single volume [234].)
I shall now sketch his ideas. Till now, we have allowed wave-functions for microscopic systems, such as atoms or electrons, to be sums of wave-functions for highly distinct states, so that the corresponding properties of the systems – position, momentum and so on – do not usually have precise values, at least not till a measurement is made.
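As an illustrative sketch only (the symbols ψ₁, ψ₂, c₁ and c₂ are labels of my own choosing, not drawn from the text), such a sum of wave-functions for two highly distinct states might be written:

```latex
% A wave-function as a sum of two highly distinct states,
% e.g. states localised at two well-separated positions:
\psi = c_{1}\,\psi_{1} + c_{2}\,\psi_{2},
\qquad |c_{1}|^{2} + |c_{2}|^{2} = 1
```

On measurement, the standard rules assign probabilities |c₁|² and |c₂|² to the two outcomes, which is why the corresponding property has no precise value until the measurement is made.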
Probably all of us have had the experience of sitting in a stationary train, and suddenly seeing a train on an adjacent track moving (or appearing to move) past our own. The doubt is there because it might not immediately be quite clear whether we are indeed still stationary and the other train is moving, to the right, say, or whether it is stationary and we have begun moving to the left.
In fact, all our normal actions in the train – sitting, eating, drinking, walking – may be carried on in exactly the same way, irrespective of whether our train is moving or not, at least so long as the motion is at constant speed in a straight line. Could this be raised to the status of a principle – ‘All laws of physics are the same in the two circumstances’? Answering this question is the subject of this chapter [30, 31].
It may perhaps seem at first sight rather a dry, formal question. In fact, in the previous chapter I hinted at the important part it played in Galileo's argument on the movement of the Earth. But it has a more general importance even than this. If such a principle is adopted, it puts considerable restrictions on the laws of physics – in fact, it ends up making most of classical physics unacceptable.
For the last four chapters I have been describing what may be called the ‘fundamental’ or ‘foundational’ aspects of quantum theory. I have been explaining what happens when you penetrate deeper into the ideas of quantum theory than when you merely make use of it, when you try to understand its basic meaning and what, if true, it tells us about the physical nature of the Universe.
As I said in Chapter 5, once Bohr had given his approach to these matters in his Como paper of 1927, general opinion was that his views were sacrosanct, and that no self-respecting physicist needed to return to discuss them further. It was Bell's great achievement to dent this complacency at least a little, and from the very end of the 1960s for the next quarter of a century or so, there were a few experimentalists keen to test the Bell inequalities with increasing stringency, and a few theoreticians with fresh ideas about how quantum theory should be interpreted.
While it was generally admitted that Bell's work was a genuine and important contribution to our understanding of quantum theory, and eventually the ideas actually reached some of the textbooks, in the 1970s and 1980s the field was still very much a minority interest, and even those who found it interesting would scarcely have felt that it had, or was likely to have in the foreseeable future, practical application.
During the first quarter-century of quantum theory, it developed by addressing a considerable range of topics – atomic physics, especially atomic spectroscopy; interactions involving fundamental particles – electrons, protons and so on, and also electromagnetic radiation; and the thermal capacities of solids and gases, where deviations from classical physics seem, at least in retrospect, rather straightforward.
Yet the first intimation of quantum ideas, which came to Max Planck in 1900, appeared in rather a recondite area of physics, where there would be no satisfactory classical theory for another five years, and where considerations were very much complicated by the fact that statistical methods were required. This was the area of black-body radiation or cavity radiation [32].
All surfaces emit energy in the form of electromagnetic radiation. This emission is of so-called thermal radiation, the word ‘thermal’ emphasising that the amount of energy radiated, and its frequency distribution, depend strongly on temperature. In 1879, Josef Stefan deduced from experiment that the total amount of radiation is proportional to T⁴ (T being an absolute temperature, of course), and Boltzmann confirmed this theoretically using thermodynamics.
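Stefan's result can be stated compactly. As a sketch in modern notation (the symbols j and σ, the Stefan–Boltzmann constant, are my additions and are not named in the text above):

```latex
% Stefan–Boltzmann law: power radiated per unit area of a black surface
% at absolute temperature T
j = \sigma T^{4},
\qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
```

One immediate consequence of the fourth-power dependence: doubling the absolute temperature multiplies the radiated power by 2⁴ = 16.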
The nature of the surface is also very influential and, to explain this, it is helpful to start with the absorption of energy. Different surfaces absorb different fractions of the energy that is incident on them. In particular, black surfaces absorb more than white ones.
At the beginning of the twenty-first century, quantum information theory is an exceptionally lively area of scientific research. Quantum computation and quantum cryptography promise powerful and efficient methods, far beyond the scope of classical physics, for solving practical problems. Quantum teleportation suggests the even more exciting possibility of achievements that would not even be dreamed of classically, and as minds adapt to new ways of thinking we must expect many other examples.
Quantum information theory emerged from a rich brew of scientific ideas that developed particularly in the 1980s and early 1990s. These ideas centred on quantum entanglement – the fact that, when two quantum particles have once interacted, their properties remain mutually dependent even after separation, in a totally non-classical way. Crucially important were the theoretical analysis and experimental testing of the so-called Bell inequalities, the brain-child of John Bell, a physicist from Ireland.
Bell's main work and the initial experiments were performed in the 1960s and 1970s, respectively, and behind him was the famous Bohr–Einstein debate of the 1920s and 1930s. The Bohr–Einstein debate occurred at a critical point in the intellectual history of the twentieth century. By 1926, the ‘new’ quantum theory of Heisenberg and Schrödinger promised to provide the exact theoretical basis for the physics of atoms, but important questions remained about fundamental aspects of the theory.
In this last chapter I shall attempt a very brief assessment of the Bohr–Einstein debate. Who won it in the view of the audience? Who actually had the better case? I will also ask two rather more specific questions. The first is ‘Have the contributions of John Bell favoured the position of either of the protagonists?’ The second is ‘Has the dramatic arrival and progress of quantum information theory anything to tell us about the Bohr–Einstein debate?’
To help to answer these questions, let me first re-assemble a few points, putting together in turn an argument for each of the protagonists against a rather hostile critic. First Bohr, and I take as his critic John Bell himself. Bell was prepared, in fact, to give Bohr great credit for what Bell called the pragmatic approach to quantum theory, which I discussed earlier. This approach says that the world must be divided into ‘classical’ and ‘quantum’ parts, with an arbitrarily placed cut between them; that the macroscopic measuring device must be described in classical terms, but that we should not expect to picture the ‘quantum’ system in physical terms, and should be content just to possess rules of calculation that work well. But Bell regarded Bohr's philosophy of what lay behind pragmatism, complementarity, as ill-defined, unsatisfactory and bizarre.
This chapter explores the heuristic fruitfulness of the exclusion principle in opening up new avenues of research: namely the idea of ‘coloured’ quarks and the development of quantum chromodynamics (QCD) in the 1960s to 1970s. Sections 5.1 and 5.2 reconstruct the origin of the quark theory from Gell-Mann's so-called ‘eightfold way’ for elementary particles. The experimental discovery of the Ω− particle confirmed the validity of Gell-Mann's model, but it also provided negative evidence against quarks obeying the exclusion principle. Two alternative research programmes emerged in the 1960s to deal with this piece of negative evidence (Section 5.3): the first programme (Section 5.3.1) rejected the strict validity of the exclusion principle and explored the possibility that quarks obeyed parastatistics; the second (Section 5.3.2) retained the exclusion principle and reconciled it with the negative evidence by introducing a further degree of freedom for quarks (‘colour’). The Duhem–Quine thesis seems to loom on the horizon (Section 5.4): the choice between questioning the principle and introducing an auxiliary assumption to reconcile it with the negative evidence seems to be underdetermined by evidence. However, I shall argue that it was exactly via the development of these two rival research programmes that the exclusion principle came to be validated, and that there was a rationale for retaining the principle despite prima facie recalcitrant evidence.
Introduction
As we have seen in Chapter 4, soon after 1924 Pauli's exclusion rule was incorporated into the growing quantum mechanical framework, where its role came to be redefined and its nomological scope extended.