I have for long thought that if I had the opportunity to teach this subject, I would emphasize the continuity with earlier ideas. Usually it is the discontinuity which is stressed, the radical break with more primitive notions of space and time. Often the result is to destroy completely the confidence of the student in perfectly sound and useful concepts already acquired.
If you doubt this, then you might try the experiment of confronting your students with the following situation. Three small spaceships, A, B, and C, drift freely in a region of space remote from other matter, without rotation and without relative motion, with B and C equidistant from A (Fig. 1).
On reception of a signal from A the motors of B and C are ignited and they accelerate gently (Fig. 2).
Let ships B and C be identical, and have identical acceleration programmes. Then (as reckoned by an observer in A) they will have at every moment the same velocity, and so remain displaced one from the other by a fixed distance. Suppose that a fragile thread is tied initially between projections from B and C (Fig. 3). If it is just long enough to span the required distance initially, then as the rockets speed up, it will become too short, because of its need to Fitzgerald contract, and must finally break. It must break when, at a sufficiently high velocity, the artificial prevention of the natural contraction imposes intolerable stress.
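The argument above can be made quantitative with a short numerical sketch. In the launch frame the separation of B and C stays fixed at its initial value L, while a thread moving at speed v is unstressed only at the contracted length L/γ(v); equivalently, its proper length must stretch by the factor γ(v) to keep spanning L. The script below (a minimal illustration, with L and the sampled speeds chosen arbitrarily) computes that strain:

```python
import math

def gamma(v):
    """Lorentz factor for speed v in units of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

# Lab-frame separation of B and C stays fixed at L (identical
# acceleration programmes), but a thread moving at speed v is in
# equilibrium only at the contracted length L / gamma(v).  To keep
# spanning L, its proper (rest) length must stretch by gamma(v).
L = 1.0  # initial separation, arbitrary units
for v in (0.1, 0.5, 0.9, 0.99):
    strain = gamma(v) - 1.0  # fractional stretch of the proper length
    print(f"v = {v:4.2f} c   required proper length = {gamma(v) * L:.3f} L"
          f"   strain = {strain:.3f}")
```

The strain grows without bound as v approaches c, so at some sufficiently high speed any real thread must break.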
There is an ongoing series of symposia, at Tokyo, on ‘Foundations of Quantum Mechanics in the Light of New Technology’. Indeed new technology (electronics, computers, lasers, …) has made possible new demonstrations of quantum queerness. And it has made possible practical approximations to old gedankenexperiments. Over the last decade or so have appeared beautiful experiments on ‘particle’ interference and diffraction, with neutrons and electrons, on ‘delayed choice’, on the Ehrenberg–Siday–Aharonov–Bohm effect, and on the Einstein–Podolsky–Rosen–Bohm correlations. These last are of particular relevance for the themes of this paper. But those themes arise already in the context of technology which is neither new nor advanced, as is illustrated by the following passage:
I want to boil an egg. I put the egg into boiling water and I set an alarm for five minutes. Five minutes later the alarm rings and the egg is done. Now the alarm clock has been running according to the laws of classical mechanics, uninfluenced by what happened to the egg. And the egg is coagulating according to laws of physical chemistry, uninfluenced by the running of the clock. Yet the coincidence of these two unrelated causal happenings is meaningful, because I, the great chef, imposed a structure on my kitchen.
These notions, of cause and effect on the one hand, and of correlation on the other, and the problem of formulating them sharply in contemporary physical theory, will be the themes of my talk.
The philosopher in the street, who has not suffered a course in quantum mechanics, is quite unimpressed by Einstein–Podolsky–Rosen correlations. He can point to many examples of similar correlations in everyday life. The case of Bertlmann's socks is often cited. Dr. Bertlmann likes to wear two socks of different colours. Which colour he will have on a given foot on a given day is quite unpredictable. But when you see (Fig. 1) that the first sock is pink you can be already sure that the second sock will not be pink. Observation of the first, and experience of Bertlmann, gives immediate information about the second. There is no accounting for tastes, but apart from that there is no mystery here. And is not the EPR business just the same?
Consider for example the particular EPR gedanken experiment of Bohm (Fig. 2). Two suitable particles, suitably prepared (in the ‘singlet spin state’), are directed from a common source towards two widely separated magnets followed by detecting screens. Each time the experiment is performed each of the two particles is deflected either up or down at the corresponding magnet. Whether either particle separately goes up or down on a given occasion is quite unpredictable. But when one particle goes up the other always goes down and vice-versa. After a little experience it is enough to look at one side to know also about the other.
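The statistics described here can be reproduced with a small sampler. The joint probabilities used below are the standard quantum predictions for two spin-1/2 particles in the singlet state measured along axes separated by an angle θ, namely P(same outcome) = sin²(θ/2); the seed and sample sizes are arbitrary choices for illustration:

```python
import math
import random

# A minimal sampler for Bohm's singlet gedankenexperiment.  Quantum
# prediction assumed: P(same outcome) = sin^2(theta/2) for analyzers
# separated by angle theta.

def sample_pair(theta, rng):
    a = rng.choice((+1, -1))                 # either side alone: 50/50
    p_same = math.sin(theta / 2.0) ** 2
    b = a if rng.random() < p_same else -a
    return a, b

rng = random.Random(0)

# Equal settings (theta = 0): perfect anticorrelation, as in the text.
pairs = [sample_pair(0.0, rng) for _ in range(1000)]
assert all(a == -b for a, b in pairs)

# Unequal settings: the empirical correlation approaches -cos(theta).
theta = math.pi / 3
n = 20000
e = sum(a * b for a, b in (sample_pair(theta, rng) for _ in range(n))) / n
print(f"E(60 deg) ~ {e:+.3f}   (quantum prediction {-math.cos(theta):+.3f})")
```

At equal settings one side determines the other exactly, just as with Bertlmann's socks; the interesting question, taken up below, is whether the correlations at unequal settings can be explained the same way.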
The quantum revolutions: from concepts to technology
The development of quantum mechanics in the beginning of the twentieth century was a unique intellectual adventure, which obliged scientists and philosophers to change radically the concepts they used to describe the world. After these heroic efforts, it became possible to understand the stability of matter, the mechanical and thermal properties of materials, the interaction between radiation and matter, and many other properties of the microscopic world that had been impossible to understand with classical physics. A few decades later, that conceptual revolution enabled a technological revolution, at the root of our information-based society. It is indeed with the quantum mechanical understanding of the structure and properties of matter that physicists and engineers were able to invent and develop the transistor and the laser – two key technologies that now permit the high-bandwidth circulation of information, as well as many other scientific and commercial applications.
After such an accumulation of conceptual – and eventually technological – successes, one might think that by 1960 all the interesting questions about quantum mechanics had been raised and answered. However, in his now-famous paper of 1964 – one of the most remarkable papers in the history of physics – John Bell drew the attention of physicists to the extraordinary features of entanglement: quantum mechanics describes a pair of entangled objects as a single global quantum system, which cannot be thought of as two individual objects, even if the two components are far apart.
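The sharpness of Bell's observation is easiest to see in the later CHSH form of his inequality: any local hidden-variable account bounds a certain combination S of correlations by 2, while the quantum singlet correlation E(a, b) = −cos(a − b) reaches 2√2. The snippet below (the analyzer angles are the standard choice, assumed here for illustration) evaluates that combination:

```python
import math

def E(a, b):
    """Quantum singlet correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

# Standard CHSH analyzer settings (assumed here for illustration).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"S = {S:.4f}   (local hidden variables require S <= 2)")
```

The computed value is 2√2 ≈ 2.83, which is what entanglement experiments have since confirmed.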
To know the quantum mechanical state of a system implies, in general, only statistical restrictions on the results of measurements. It seems interesting to ask if this statistical element can be thought of as arising, as in classical statistical mechanics, because the states in question are averages over better-defined states for which individually the results would be quite determined. These hypothetical ‘dispersion free’ states would be specified not only by the quantum mechanical state vector but also by additional ‘hidden variables’ – ‘hidden’ because if states with prescribed values of these variables could actually be prepared, quantum mechanics would be observably inadequate.
Whether this question is indeed interesting has been the subject of debate. The present paper does not contribute to that debate. It is addressed to those who do find the question interesting, and more particularly to those among them who believe that ‘the question concerning the existence of such hidden variables received an early and rather decisive answer in the form of von Neumann's proof on the mathematical impossibility of such variables in quantum theory.’ An attempt will be made to clarify what von Neumann and his successors actually demonstrated. This will cover, as well as von Neumann's treatment, the recent version of the argument by Jauch and Piron, and the stronger result consequent on the work of Gleason. It will be urged that these analyses leave the real question untouched.
The subject–object distinction is indeed at the very root of the unease that many people still feel in connection with quantum mechanics. Some such distinction is dictated by the postulates of the theory, but exactly where or when to make it is not prescribed. Thus in the classic treatise of Dirac we learn the fundamental propositions:
… any result of a measurement of a real dynamical variable is one of its eigenvalues …,
… if the measurement of the observable ξ for the system in the state corresponding to |x〉 is made a large number of times, the average of all the results obtained will be 〈x|ξ|x〉 …,
… a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured ….
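These propositions can be checked concretely on the smallest nontrivial example, a spin-1/2 system, with units chosen so that the eigenvalues are ±1. The plain-Python sketch below computes the average 〈x|ξ|x〉 for the observables σx and σz in the "spin up along z" state:

```python
# Dirac's propositions, illustrated for spin-1/2 observables.
# sigma_x has eigenvalues +1 and -1 (the only possible results of a
# measurement); for the state |x> = (1, 0) ("up along z"), the average
# of many sigma_x measurements is <x|sigma_x|x> = 0, while sigma_z
# yields its eigenvalue +1 every time (|x> is already an eigenstate).

def expectation(state, op):
    """<x|A|x> for a 2-component state vector and a 2x2 matrix op."""
    ax = [sum(op[i][j] * state[j] for j in range(2)) for i in range(2)]
    return sum(state[i].conjugate() * ax[i] for i in range(2)).real

sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]
up_z = [1 + 0j, 0 + 0j]

print(expectation(up_z, sigma_x))  # 0.0: results +1 and -1 equally likely
print(expectation(up_z, sigma_z))  # 1.0: the system is in an eigenstate
```

Note that the formalism delivers only these eigenvalues and averages; it is silent on where the "measurer" begins, which is precisely the unease discussed next.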
So the theory is fundamentally about the results of ‘measurements’, and therefore presupposes in addition to the ‘system’ (or object) a ‘measurer’ (or subject). Now must this subject include a person? Or was there already some such subject–object distinction before the appearance of life in the universe? Were some of the natural processes then occurring, or occurring now in distant places, to be identified as ‘measurements’ and subjected to jumps rather than to the Schrödinger equation? Is ‘measurement’ something that occurs all at once? Are the jumps instantaneous? And so on.
I have been invited to speak on ‘foundations of quantum mechanics’ – and to a captive audience of high energy physicists! How can I hope to hold the attention of such serious people with philosophy? I will try to do so by concentrating on an area where some courageous experimenters have recently been putting philosophy to experimental test.
The area in question is that of Einstein, Podolsky, and Rosen. Suppose for example, that protons of a few MeV energy are incident on a hydrogen target. Occasionally one will scatter, causing a target proton to recoil. Suppose (Fig. 1) that we have counter telescopes T1 and T2 which register when suitable protons are going towards distant counters C1 and C2. With ideal arrangements registering of both T1 and T2 will then imply registering of both C1 and C2 after appropriate time delays. Suppose next that C1 and C2 are preceded by filters that pass only particles of given polarization, say those with spin projection + ½ along the z axis. Then one or both of C1 and C2 may fail to register. Indeed for protons of suitable energy one and only one of these counters will register on almost every suitable occasion – i.e., those occasions certified as suitable by telescopes T1 and T2. This is because proton–proton scattering at large angle and low energy, say a few MeV, goes mainly in S wave. But the antisymmetry of the final wave function then requires the antisymmetric singlet spin state.
The notion of morality appears to have been introduced into quantum theory by Wigner, as reported by Goldberger and Watson. The question at issue is the famous ‘reduction of the wave packet’. There are, ultimately, no mechanical arguments for this process, and the arguments that are actually used may well be called moral. This is a popular account of the subject. Very practical people not interested in logical questions should not read it. It is a pleasure for us to dedicate the paper to Professor Weisskopf, for whom intense interest in the latest developments of detail has not dulled concern with fundamentals.
Suppose that some quantity F is measured on a quantum mechanical system, and a result f obtained. Assume that immediate repetition of the measurement must give the same result. Then, after the first measurement, the system must be in an eigenstate of F with eigenvalue f. In general, the measurement will be ‘incomplete’, i.e., there will be more than one eigenstate with the observed eigenvalue, so that the latter does not suffice to specify completely the state resulting from the measurement. Let the relevant set of eigenstates be denoted by ϕfg. The extra index g may be regarded as the eigenvalue of a second observable G that commutes with F and so can be measured at the same time.
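An incomplete measurement projects the state onto the whole eigenspace of the observed eigenvalue rather than onto a single ϕfg. The sketch below illustrates this in a basis where F is diagonal; the particular eigenvalues and state amplitudes are arbitrary numbers chosen for the example:

```python
import math

# Incomplete measurement: F = diag(5, 5, 7) has a two-fold degenerate
# eigenvalue f = 5 (eigenstates phi_{f,g}, g = 1, 2).  Measuring F and
# obtaining f projects the state onto the whole f-eigenspace -- it does
# not pick out a single phi_{f,g}.  The remaining label g is fixed only
# by also measuring a commuting observable G.

def project_on_result(state, eigvals, f):
    """Project `state` onto the eigenspace of eigenvalue f, renormalize."""
    proj = [c if lam == f else 0.0 for c, lam in zip(state, eigvals)]
    norm = math.sqrt(sum(abs(c) ** 2 for c in proj))
    return [c / norm for c in proj]

eigvals = [5, 5, 7]           # F assumed diagonal in this basis
state = [0.6, 0.48, 0.64]     # normalized: 0.36 + 0.2304 + 0.4096 = 1
after = project_on_result(state, eigvals, 5)
print(after)  # component outside the eigenspace is gone;
              # the relative weights of g = 1 and g = 2 are preserved
```

The projection discards the amplitude on the non-observed eigenvalue while leaving the relative amplitudes within the degenerate eigenspace untouched.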
‘… the history of cosmic theories may without exaggeration be called a history of collective obsessions and controlled schizophrenias; and the manner in which some of the most important individual discoveries were arrived at reminds one of a sleepwalker's performance …’
This is a quotation from A. Koestler's book The Sleepwalkers. It is an account of the Copernican revolution, with Copernicus, Kepler, and Galilei as heroes. Koestler was of course impressed by the magnitude of the step made by these men. He was also fascinated by the manner in which they made it. He saw them as motivated by irrational prejudice, obstinately adhered to, making mistakes which they did not discover, which somehow cancelled at the important points, and unable to recognize what was important in their results, among the mass of details. He concluded that they were not really aware of what they were doing … sleepwalkers. I thought it would be interesting to keep Koestler's thesis in mind as we hear at this meeting about contemporary theories from contemporary theorists.
For many decades now our fundamental theories have rested on the two great pillars to which this meeting is dedicated: quantum theory and relativity. We will see that the lines of research opened up by these theories remain splendidly vital. We will see that order is brought into a vast and expanding array of experimental data. We will see even a continuing ability to get ahead of the experimental data … as with the existence and masses of the W and Z bosons.
Cosmologists, even more than laboratory physicists, must find the usual interpretive rules of quantum mechanics a bit frustrating:
‘… any result of a measurement of a real dynamical variable is one of its eigenvalues …’
‘… if the measurement of the observable … is made a large number of times the average of all the results obtained will be …’
‘… a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured …’
It would seem that the theory is exclusively concerned with ‘results of measurement’ and has nothing to say about anything else. When the ‘system’ in question is the whole world where is the ‘measurer’ to be found? Inside, rather than outside, presumably. What exactly qualifies some subsystems to play this role? Was the world wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some more highly qualified measurer – with a Ph.D.? If the theory is to apply to anything but idealized laboratory operations, are we not obliged to admit that more or less ‘measurement-like’ processes are going on more or less all the time more or less everywhere? Is there ever then a moment when there is no jumping and the Schrödinger equation applies?
The quantum measurement problem is a technical euphemism for a much deeper and less well-defined question: How do we, “the observers,” fit within the physical universe? The problem is especially apparent in quantum physics because, for the first time in the history of science, a majority of (but not all) physicists seriously entertains the possibility that the framework for the ultimate universal physical theory provided by quantum mechanics is here to stay.
The quantum physics relevant to this discussion is (contrary to the common prejudice) relatively simple. By this I mean that some of the key features of its predictions can be arrived at on the basis of overarching principles of quantum theory, without reference to the minutiae of other specific ingredients (such as the details of the forces).
The quantum superposition principle is one such overarching principle of quantum theory. It leads to predictions that seem difficult to reconcile with our perception of the familiar classical universe of everyday experience. The aim of this paper is to show that the appearance of classical reality can be viewed as a result of the emergence of preferred states from within the quantum substrate through a Darwinian paradigm, once the survival of the fittest quantum states and the selective proliferation of information about them throughout the universe are properly taken into account.
Modern physics is built upon three principal pillars: quantum mechanics, special relativity, and general relativity. Historically, these principles were developed as logically independent extensions of classical Newtonian mechanics. While each theory constitutes a logically self-consistent framework, unification of these fundamental principles has encountered unprecedented difficulties. Quantum mechanics and special relativity were unified in the middle of the last century, giving birth to relativistic quantum field theory. While the theory is tremendously successful in explaining experimental data, the ultraviolet infinities that arise in its calculations hint that it cannot be in its final form. Unification of quantum mechanics with general relativity has proved a much more difficult task and is still the greatest unsolved problem in theoretical physics.
In view of the difficulties involved in unifying these principles, we can ask a simple but rather bold question: is it possible that the three principles are not logically independent, but rather that there is a hierarchical order in their logical dependence? In particular, we notice that both relativity principles can be formulated as statements of symmetry. When applying nonrelativistic quantum mechanics to systems with a large number of degrees of freedom, we sometimes find that symmetries can emerge in the low-energy sector which are not present in the starting Hamiltonian.
John A. Wheeler's two favorite questions are: “How come existence?” and “How come the quantum?” (Wheeler 1998). It is difficult to know how to go about answering the first question. What shape would an answer take? This article is concerned instead with the second question which I will expand as: “Why is nature described by quantum theory?”
What shape would an answer to this question take? We can get a handle on this by considering some historical examples. In the seventeenth century physicists were confronted by Kepler's laws of planetary motion. These laws were empirically adequate for predicting planetary motion and yet sufficiently abstract and ad hoc that they cannot really have been regarded as an explanation of “why” planets move the way they do. Later, Newton was able to show that Kepler's laws followed from a set of reasonable laws for the mechanics (his three laws) plus his law for gravitational forces. At this stage physicists could begin to assert with some degree of confidence that they understood “why” the planets move the way they do. Of course there were still mysteries. Newton was particularly bothered by the action at a distance of his inverse square law. What mediated the force? It was not until Einstein's theory of general relativity that an answer to this question became available.
A number of surprising observations made at the threshold of the twenty-first century have left cosmologists confused and other physicists in doubt over the reliability of cosmology. For instance, the cosmological expansion appears to be accelerating. This is contrary to common sense, as it implies that on large scales gravity is repulsive. Another upheaval resulted from the high-redshift mapping of the fine structure constant, when evidence was found for a time dependence of this supposed constant of Nature. Yet another puzzle was the observation of rare very high energy cosmic rays. Standard kinematic calculations, based on special relativity, predict a cut-off well below the observed energies, so this may represent the first experimental failure of special relativity.
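The cut-off alluded to here is the GZK cut-off: above a threshold energy, a cosmic-ray proton loses energy by photo-pion production on cosmic microwave background photons, p + γ → p + π. A rough version of the standard kinematic calculation, for a head-on collision with a typical CMB photon (the photon energy below is an assumed representative value), runs as follows:

```python
# Sketch of the special-relativistic cut-off (GZK): photo-pion
# production p + gamma_CMB -> p + pi0 becomes possible once the
# invariant mass sqrt(s) reaches m_p + m_pi.  For a head-on collision,
# s = m_p^2 + 4 * E * eps, which gives the threshold proton energy.
m_p = 0.9383e9     # proton mass, eV
m_pi = 0.1350e9    # neutral pion mass, eV
eps = 6.4e-4       # typical CMB photon energy, eV (assumed value)

E_th = ((m_p + m_pi) ** 2 - m_p ** 2) / (4 * eps)
print(f"threshold ~ {E_th:.1e} eV")   # of order 1e20 eV
```

The threshold comes out around 10^20 eV, so cosmic rays reported above this scale are what make the observation puzzling.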
These three surprises are not alone and prompt several questions. Is the universe trying to tell us something radical about the foundations of physics? Or do astronomers merely wish to displease the conservative physicists? It could well be that the strange observations emerging from the new cosmology are correct, and that they provide a unique window into dramatically novel physics. Is the universe trying to give us a physics lesson?
It would be surprising if we already knew everything there is to know about physics. Indeed we expect that currently known physics must break down in the very early universe, or at very high energies. However, no one knows to what extent our current concepts may be inadequate in these extreme situations, or how far the damage to them might extend.