The early twentieth century brought about the rejection by physicists of the doctrine of determinism – the belief that complete knowledge of the initial conditions of an interaction in nature allows precise and unambiguous prediction of the outcome. This book traces the origins of a central problem that led to this change in viewpoint: the paradoxes raised by attempts to formulate a consistent theory of the nature of light. It outlines the different approaches adopted by members of different national cultures to the apparent inconsistencies, explains why Einstein's early (1905) attempt at a resolution was not taken seriously for fifteen years, and describes the mixture of ideas that created a route to a new, antideterministic formulation of the laws of nature. Dr Wheaton describes the experimental work on the new forms of radiation found at the turn of the century and shows how the interpretation of energy transfer from X-rays to matter gradually transformed a classical wave explanation of light into one based on particle-like quanta of energy; he further explains how influential scientists came reluctantly to accept a wave-like interpretation of matter as well. This new and distinctively different account of one of the major theoretical shifts in modern physical thought will be of fundamental interest to physical scientists and philosophers, as well as to historians of science.
Classical mechanics and quantum mechanics are two of the most successful scientific theories ever discovered, and yet how they can describe the same world is far from clear: one theory is deterministic, the other indeterministic; one theory describes a world in which chaos is pervasive, the other a world in which chaos is absent. Focusing on the exciting field of 'quantum chaos', this book reveals that there is a subtle and complex relation between classical and quantum mechanics. It challenges the received view that classical and quantum mechanics are incommensurable, and revives another, largely forgotten tradition due to Niels Bohr and Paul Dirac. By artfully weaving together considerations from the history of science, philosophy of science, and contemporary physics, this book offers a new way of thinking about intertheory relations and scientific explanation. It will be of particular interest to historians and philosophers of science, philosophically inclined physicists, and interested non-specialists.
Recent work in quantum information science has produced a revolution in our understanding of quantum entanglement. Scientists now view entanglement as a physical resource with many important applications. These range from quantum computers, which would be able to compute exponentially faster than classical computers, to quantum cryptographic techniques, which could provide unbreakable codes for the transfer of secret information over public channels. These important advances in the study of quantum entanglement and information touch on deep foundational issues in both physics and philosophy. This interdisciplinary volume brings together fourteen of the world's leading physicists and philosophers of physics to address the most important developments and debates in this exciting area of research. It offers a broad spectrum of approaches to resolving deep foundational challenges - philosophical, mathematical, and physical - raised by quantum information, quantum processing, and entanglement. This book is ideal for historians, philosophers of science and physicists.
The first realization that the validity of the quantum superposition principle in the Hilbert space describing a composite quantum system may give rise to fundamentally new correlations between the constituent subsystems came in the landmark 1935 paper by Einstein, Podolsky, and Rosen (EPR), where it was shown how the measurement statistics of observables in certain quantum states could not be reproduced by assigning definite wavefunctions to individual subsystems. It was in response to the EPR paper that Schrödinger, in the same year, coined the term entanglement (Verschränkung) to acknowledge the failure of classical intuition in describing the relationship between the “parts” and the “whole” in the quantum world:
Whenever one has a complete expectation catalog – a maximum total knowledge – a ψ function – for two completely separated bodies, …then one obviously has it also for the two bodies together. But the converse is not true. The best possible knowledge of a total system does not necessarily include total knowledge of all its parts, not even when these are fully separated from each other and at the moment are not influencing each other at all.
While Bell's strengthening of the original EPR-paradox setting and the subsequent experimental demonstration that Bell inequalities are violated irreversibly changed the perception of entanglement from counterintuitive “spookiness” to (beyond reasonable doubt) experimental reality, the concept and implications of entanglement continue to be associated with a host of physical, mathematical, and philosophical challenges. In particular, investigation of entanglement in both its qualitative and quantitative aspects has intensified under the impetus of quantum information science (QIS).
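As a concrete reminder of what is at stake (standard textbook material, not specific to this chapter): the two-qubit singlet state cannot be written as a product of single-qubit states, and measurements on its two halves, with suitably chosen settings, violate the CHSH form of Bell's inequality.

$$
|\psi^{-}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|01\rangle - |10\rangle\bigr) \neq |\phi\rangle_{A} \otimes |\chi\rangle_{B},
$$
$$
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad
|S| \le 2 \ \text{(local hidden variables)}, \qquad
|S| \le 2\sqrt{2} \ \text{(quantum mechanics)}.
$$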
Discussions of quantum-computational algorithms in the literature refer to various features of quantum mechanics as the source of the exponential speed-up relative to classical algorithms: superposition and entanglement, the fact that the state space of n bits is a space of 2ⁿ states while the state space of n qubits is a space of 2ⁿ dimensions, the possibility of computing all values of a function in a single computational step by “quantum parallelism,” or the possibility of an efficient implementation of the discrete quantum Fourier transform. Here I propose a different answer to the question posed in the title, in terms of the difference between classical logic and quantum logic, i.e., the difference between the Boolean classical event structure and the non-Boolean quantum event structure. In a nutshell, the ultimate source of the speed-up is the difference between a classical disjunction, which is true (or false) in virtue of the truth values of the disjuncts, and a quantum disjunction, which can be true (or false) even if none of the disjuncts is either true or false.
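A standard quantum-logic illustration of the last point (textbook material rather than anything specific to this chapter): for a qubit prepared in the superposition |+⟩, the disjunction “spin-z up or spin-z down” corresponds to the span of the two eigenspaces, which is the whole Hilbert space, and so is true with certainty, while neither disjunct receives probability 1, so neither is true on its own.

$$
|{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr), \qquad
P_{0} \vee P_{1} = \mathrm{span}\{|0\rangle, |1\rangle\} = \mathbb{C}^{2},
$$
$$
\mathrm{Pr}_{|{+}\rangle}(P_{0} \vee P_{1}) = 1, \qquad
\mathrm{Pr}_{|{+}\rangle}(P_{0}) = \mathrm{Pr}_{|{+}\rangle}(P_{1}) = \tfrac{1}{2}.
$$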
In the following, I will discuss the information-processing in Deutsch's XOR algorithm (the first genuinely quantum algorithm) and related period-finding quantum algorithms (Simon's algorithm and Shor's factorization algorithm). It is well known that these algorithms can be formulated as solutions to a hidden-subgroup problem. Here the salient features of the information-processing are presented from the perspective of the way in which the algorithms exploit the non-Boolean logic represented by the projective geometry (the subspace structure) of Hilbert space.
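As a purely illustrative companion to this description, here is a minimal sketch of Deutsch's XOR algorithm in plain NumPy; the circuit presentation, function names, and measurement step below are my own framing and not the chapter's, which analyzes the algorithm in terms of the subspace structure of Hilbert space.

```python
# Minimal sketch of Deutsch's XOR algorithm: decide whether f: {0,1} -> {0,1}
# is constant or balanced with a single query to the oracle U_f.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    """Build the unitary U_f |x, y> = |x, y XOR f(x)> on two qubits."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Return 'constant' or 'balanced' using one oracle call."""
    state = np.kron(np.array([1, 0]), np.array([0, 1]))  # initial state |0>|1>
    state = np.kron(H, H) @ state                        # Hadamard on both qubits
    state = oracle(f) @ state                            # single query (phase kickback)
    state = np.kron(H, np.eye(2)) @ state                # Hadamard on the first qubit
    p0 = np.sum(np.abs(state[:2]) ** 2)                  # Pr(first qubit measured as |0>)
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant function  -> 'constant'
print(deutsch(lambda x: x))      # balanced function  -> 'balanced'
print(deutsch(lambda x: 1 - x))  # balanced function  -> 'balanced'
```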
Indefinite causal structure poses particular problems for theory formulation, since many of the core ideas used in the usual approaches to theory construction depend on having definite causal structure. For example, the notion of a state across space evolving in time requires some definite causal structure so that we can define a state on a space-like hypersurface. We will see that many of these problems are mitigated if we are able to formulate the theory in a formalism-local (or F-local) fashion. A formulation of a physical theory is said to be F-local if, in making predictions for any given arbitrary space-time region, we need only refer to mathematical objects pertaining to that region. This is a desirable property both on grounds of efficiency and because, if we have indefinite causal structure, it is not clear how to select some other space-time region on which our calculations might depend. The usual ways of formulating physical theories (the time-evolving state picture, the histories approach, and the local-equations approach) are not F-local.
We set up a framework for probabilistic theories with indefinite causal structure. This, the causaloid framework, is F-local. We describe how quantum theory can be formulated in the causaloid framework (in an F-local fashion). This provides yet another formulation of quantum theory. This formulation, however, may be particularly relevant to the problem of finding a theory of quantum gravity. The problem of quantum gravity is to find a theory that reduces in appropriate limits to general relativity and quantum theory (including, at least, those situations in which those two theories have been experimentally confirmed).
The use of parameters to describe an experimenter's control over the devices used in an experiment is familiar in quantum physics, for example in connection with Bell inequalities. Parameters are also interesting in a different but related context, as we noticed when we proved a formal separation in quantum mechanics between linear operators and the probabilities that these operators generate. In comparing an experiment against its description by a density operator and detection operators, one compares tallies of experimental outcomes against the probabilities generated by the operators, but not directly against the operators. Recognizing that the accessibility of operators to experimental tests is only indirect, via probabilities, motivates us to ask what probabilities tell us about operators or, put more precisely, “what combinations of a parameterized density operator and parameterized detection operators generate any given set of parameterized probabilities?”
Here, we review and augment recent proofs that any given parameterized probabilities can be generated in very diverse ways, so that a parameterized probability measure, detached from any of the (infinitely many) parameterized operators that generate it, becomes an interesting object in its own right. By detaching a parameterized probability measure from the operators that may have led us to it, we (1) strengthen Holevo's bound on a quantum communication channel and (2) clarify a role for multiple levels of modeling in an example based on quantum key distribution. We then inquire into some parameterized probability measures generated by entangled states and into the topology of the associated parameter spaces; in particular we display some previously overlooked topological features of level sets of these probability measures.
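A toy numerical check of the basic point, using operators of my own choosing rather than the chapter's examples: conjugating a state and its detection operators by any fixed unitary yields a genuinely different parameterized operator family that generates exactly the same parameterized probabilities p_k(θ) = Tr[ρ E_k(θ)].

```python
# Two distinct (state, detection-operator) families generating identical
# parameterized probabilities, via conjugation by a fixed unitary U.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def detection_ops(theta):
    """Two-outcome measurement along an axis at angle theta in the x-z plane."""
    n = np.cos(theta) * Z + np.sin(theta) * X
    return [(I2 + n) / 2, (I2 - n) / 2]

rho = np.array([[1, 0], [0, 0]], dtype=complex)               # the state |0><0|
U = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)  # an arbitrary fixed unitary

for theta in np.linspace(0.0, np.pi, 5):
    E = detection_ops(theta)
    probs = [np.trace(rho @ Ek).real for Ek in E]
    # A different operator family: conjugate the state and detection operators by U.
    rho2 = U @ rho @ U.conj().T
    E2 = [U @ Ek @ U.conj().T for Ek in E]
    probs2 = [np.trace(rho2 @ Ek).real for Ek in E2]
    assert np.allclose(probs, probs2)  # same parameterized probabilities
print("identical probabilities from distinct operator families")
```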
In many situations, learning from the results of measurements can be regarded as updating one's probability distributions over certain variables. According to Bayesians, this updating should be carried out according to the rule of conditionalization. In the theory of quantum mechanics, there is a rule that tells us how to update the state of a system, given observation of a measurement result. The state of a quantum system is closely related to probability distributions over potential measurements. Therefore we might expect there to be some relation between Bayesian conditionalization and the quantum state-update rule. There have been several suggestions that the state change just is Bayesian conditionalization, appropriately understood, or that it is closely analogous.
Bub was the first to make the connection between quantum measurement and Bayesian conditionalization, in a 1977 paper using an approach based on quantum logic. The connection was taken up again in 2002 in discussions by Fuchs and by Jacobs, where the analogy between the quantum state update and Bayesian conditionalization is again pointed out. At the same time, Fuchs draws attention to a disanalogy – namely, that there is an “extra unitary” transformation as part of the measurement in the quantum case. In this chapter, I will first review the proposals of Bub, Jacobs, and Fuchs. I will then show that the presence of the extra unitaries in quantum measurement leads to a difference between classical and quantum measurement in terms of information gain, drawing on results by Nielsen and by Fuchs and Jacobs.
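For orientation, stated in the standard textbook form rather than quoted from the chapter: Bayesian conditionalization updates a probability assignment on data d, while the quantum update for an efficient measurement with POVM element E_d contains, in addition to the conditionalization-like factor √E_d, a unitary U_d with no classical counterpart – the “extra unitary” referred to above.

$$
P(h) \;\longmapsto\; P(h \mid d) = \frac{P(d \mid h)\,P(h)}{P(d)},
\qquad
\rho \;\longmapsto\; \rho_{d} = \frac{U_{d}\,\sqrt{E_{d}}\;\rho\;\sqrt{E_{d}}\,U_{d}^{\dagger}}{\mathrm{Tr}(\rho E_{d})}.
$$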
For Abner Shimony. Your influence on me goes well beyond physics. Knowing you and being close to you is one of the greatest privileges and pleasures in my life.
Introduction
Quantum mechanics is, without any doubt, a tremendously successful theory: it started by explaining black-body radiation and the photoelectric effect, it explained the spectra of atoms, and then went on to explain chemical bonds, the structure of atoms and of the atomic nucleus, the properties of crystals and the elementary particles, and a myriad of other phenomena. Yet it is safe to say that we still lack a deep understanding of quantum mechanics – surprising and even puzzling new effects continue to be discovered with regularity. That we are surprised and puzzled is the best sign that we still don't understand; however, the veil over the mysteries of quantum mechanics is starting to lift a little.
One of the strangest things microscopic particles do is to follow non-local dynamics and to yield non-local correlations. That particles follow non-local equations of motion was discovered by Aharonov and Bohm, while non-local correlations – which are the subject of this chapter – were discovered by John Bell and first cast in a form that has physical meaning, i.e., that can be experimentally tested, by Clauser, Horne, Shimony, and Holt. When they were discovered, both phenomena seemed to be quite exotic and at the fringe of quantum mechanics. By now we understand that they are some of the most important aspects of quantum-mechanical behavior.
On 25 November 1915, Albert Einstein presented the final version of the field equations of the general theory of relativity to the Royal Prussian Academy of Sciences. These equations were generally covariant: their form remained unchanged under arbitrary transformations of the space and time coordinates. This was a mathematical manifestation of Einstein's principle of equivalence, which held that the state of affairs in a homogeneous gravitational field is identical to the state of affairs in a uniformly accelerated coordinate system.
Einstein's first publication that contained the principle of equivalence appeared in 1907. It was included in a review paper of his relativistic account of electrodynamics of 1905. The principle immediately proved its heuristic value: on its basis, Einstein already proposed in the same article the existence of a gravitational redshift of light and the bending of light trajectories in a gravitational field. Nevertheless, eight years would pass between the first formulation of the equivalence principle and its final vindication in 1915, when it acquired a firm footing in the field equations. During those years Einstein was, from late 1907 until June 1911, nearly silent on gravitation: he did not publish any substantial articles on the subject and, even more surprisingly, he hardly discussed it with his correspondents.
The aim of this book has been to understand Einstein's development and to see the historical coherence in his later attitude to physics. The key that unlocks the later Einstein is the road by which he arrived at the field equations of general relativity, “the most joyous moment of my life.” With superficial hindsight, mathematical intuition and deduction seemed to have played the essential creative role, whereas his “physics first” arguments appeared to have been a hindrance. As we saw, his epistemology and methodology gradually changed accordingly, just as they were reflections of, and influences on, his practice in unified field theory. The re-shaped methodological beliefs were readily invoked to justify the further mathematization of his research and its increasing alienation from the realm of experience.
The semivector episode showed that Einstein's methodological beliefs were explicitly put to use when he was trying to make and motivate choices in his actual research. Semivectors appeared to deliver a most appealing result – the unified description of electrons and protons – and they figured prominently in the Spencer lecture. Soon it became clear, however, that only a Pyrrhic victory had been won; but Einstein, of course, did not retract his words spoken in Oxford on the method of theoretical physics.
Quantum information science is about the processing of information by exploiting distinguishing features of quantum systems such as electrons, photons, and ions. In recent years a great deal has been promised in the domain of quantum information. In quantum computing, it was promised that NP problems would be solved in polynomial time. In quantum cryptography, it was claimed that protocols would offer practically 100% security. At the moment it is too early to say anything definitive about the final results of this great project.
In quantum computing, a few quantum algorithms have been found and prototype devices – “quantum pre-computers” with a few quantum registers – have been created. However, difficulties could no longer be ignored. For some reason it has proved impossible to create a large variety of quantum algorithms applicable to different problems. Up to now the whole project has rested on two or three types of algorithm, and among them only one, the algorithm for prime factorization, might be interesting for real-world applications. There is a general tendency to regard this situation with quantum algorithms as an incidental difficulty. But, as the years pass, one might start to think that something is fundamentally wrong. The same feelings are induced by developments in quantum hardware. It seems that the complexity of creating a device with a large number N of quantum registers increases extremely non-linearly with N. In quantum cryptography the situation is the opposite of that in quantum computing: there have been tremendous successes in the development of technologies for the production and transmission of quantum information, especially pairs of entangled photons.
There is an element of the Oxford lecture that we have not yet addressed in much detail: Einstein's claim that “the grand object of all theory” is to unify – to make the fundamental concepts and laws “as simple and as few in number as possible.” Likewise, he had told Cornelius Lanczos in 1938 that “the physically true is logically simple, that is, it has unity in its foundation” – even if the reverse does not necessarily hold (“the logically simple does not, of course, have to be physically true”).
Unification had always been an important aspect of Einstein's work – we need only think of the general theory of relativity, in which he unified gravitation with special relativity – and it played a prominent role in his epistemology. Some ten papers by him carried the word “Einheitliche” in their title. These were all papers on unified field theories: theories that attempted to unify the gravitational and electromagnetic fields in a single mathematical, preferably geometrical, scheme. We wish to outline the role of unification in Einstein's philosophy here, followed by a brief introduction to the unified field theory program.
Unification: motivation and implementation
In his autobiography Einstein gave two criteria for a successful theory: firstly, of course, it should not contradict empirical facts. Secondly, the theory should display “inner perfection,” which was characterized by its “naturalness,” usually mathematically construed, and “logical simplicity.”