Entropy is a key concept of quantum information theory. It measures how much uncertainty there is in the state of a physical system. In this chapter we review the basic definitions and properties of entropy in both classical and quantum information theory. In places the chapter contains rather detailed and lengthy mathematical arguments. On a first reading these sections may be read lightly and returned to later for reference purposes.
Shannon entropy
The key concept of classical information theory is the Shannon entropy. Suppose we learn the value of a random variable X. The Shannon entropy of X quantifies how much information we gain, on average, when we learn the value of X. An alternative view is that the entropy of X measures the amount of uncertainty about X before we learn its value. These two views are complementary; we can view the entropy either as a measure of our uncertainty before we learn the value of X, or as a measure of how much information we have gained after we learn the value of X.
Intuitively, the information content of a random variable should not depend on the labels attached to the different values that may be taken by the random variable. For example, we expect that a random variable taking the values ‘heads’ and ‘tails’ with respective probabilities ¼ and ¾ contains the same amount of information as a random variable that takes the values 0 and 1 with respective probabilities ¼ and ¾.
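This intuition is easy to check numerically. The following minimal Python sketch applies the standard definition of the Shannon entropy, H(X) = −Σx p(x) log₂ p(x), to the two relabelled distributions above; both give the same value, approximately 0.811 bits.

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum_x p(x) log2 p(x); terms with p(x) = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same distribution under two different labellings.
coin = {"heads": 0.25, "tails": 0.75}
bits = {0: 0.25, 1: 0.75}

print(shannon_entropy(coin.values()))  # ~0.8113 bits
print(shannon_entropy(bits.values()))  # identical: entropy ignores labels
```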
What does it mean to say that two items of information are similar? What does it mean to say that information is preserved by some process? These questions are central to a theory of quantum information processing, and the purpose of this chapter is the development of distance measures giving quantitative answers to these questions. Motivated by our two questions we will be concerned with two broad classes of distance measures, static measures and dynamic measures. Static measures quantify how close two quantum states are, while dynamic measures quantify how well information has been preserved during a dynamic process. The strategy we take is to begin by developing good static measures of distance, and then to use those static measures as the basis for the development of dynamic measures of distance.
There is a certain arbitrariness in the way distance measures are defined, both classically and quantum mechanically, and the community of people studying quantum computation and quantum information has found it convenient to use a variety of distance measures over the years. Two of those measures, the trace distance and the fidelity, have particularly wide currency today, and we discuss both these measures in detail in this chapter. For the most part the properties of both are quite similar; however, for certain applications one may be easier to deal with than the other. It is for this reason, and because both are widely used within the quantum computation and quantum information community, that we discuss both measures.
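To make the two static measures concrete, here is a minimal numpy/scipy sketch computing the trace distance D(ρ, σ) = ½ Tr|ρ − σ| and the fidelity F(ρ, σ) = Tr √(√ρ σ √ρ) for a pair of single-qubit density matrices. The example states are arbitrary, and note that some authors define the fidelity as the square of the quantity used here.

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) Tr|rho - sigma|.

    Since rho - sigma is Hermitian, this is half the sum of the
    absolute values of its eigenvalues."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def fidelity(rho, sigma):
    """F(rho, sigma) = Tr sqrt(sqrt(rho) sigma sqrt(rho))."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s)))

# Example: the pure state |0><0| versus a slightly mixed state.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = np.array([[0.9, 0.0], [0.0, 0.1]])

print(trace_distance(rho, sigma))  # 0.1
print(fidelity(rho, sigma))        # sqrt(0.9) ~ 0.949
```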
Science offers the boldest metaphysics of the age. It is a thoroughly human construct, driven by the faith that if we dream, press to discover, explain, and dream again, thereby plunging repeatedly into new terrain, the world will somehow come clearer and we will grasp the true strangeness of the universe. And the strangeness will all prove to be connected, and make sense.
– Edward O. Wilson
Information is physical.
– Rolf Landauer
What are the fundamental concepts of quantum computation and quantum information? How did these concepts develop? To what uses may they be put? How will they be presented in this book? The purpose of this introductory chapter is to answer these questions by developing in broad brushstrokes a picture of the field of quantum computation and quantum information. The intent is to communicate a basic understanding of the central concepts of the field, to give some perspective on how they have been developed, and to help you decide how to approach the rest of the book.
Our story begins in Section 1.1 with an account of the historical context in which quantum computation and quantum information has developed. Each remaining section in the chapter gives a brief introduction to one or more fundamental concepts from the field: quantum bits (Section 1.2), quantum computers, quantum gates and quantum circuits (Section 1.3), quantum algorithms (Section 1.4), experimental quantum information processing (Section 1.5), and quantum information and communication (Section 1.6).
Quantum mechanics has the curious distinction of being simultaneously the most successful and the most mysterious of our scientific theories. It was developed in fits and starts over a remarkable period from 1900 to the 1920s, maturing into its current form in the late 1920s. In the decades following the 1920s, physicists had great success applying quantum mechanics to understand the fundamental particles and forces of nature, culminating in the development of the standard model of particle physics. Over the same period, physicists had equally great success in applying quantum mechanics to understand an astonishing range of phenomena in our world, from polymers to semiconductors, from superfluids to superconductors. But, while these developments profoundly advanced our understanding of the natural world, they did little to improve our understanding of quantum mechanics.
This began to change in the 1970s and 1980s, when a few pioneers were inspired to ask whether some of the fundamental questions of computer science and information theory could be applied to the study of quantum systems. Instead of looking at quantum systems purely as phenomena to be explained as they are found in nature, they looked at them as systems that can be designed. This seems a small change in perspective, but the implications are profound. No longer is the quantum world taken merely as presented, but instead it can be created.
This book provides an introduction to the main ideas and techniques of the field of quantum computation and quantum information. The rapid rate of progress in this field and its cross-disciplinary nature have made it difficult for newcomers to obtain a broad overview of the most important techniques and results of the field.
Our purpose in this book is therefore twofold. First, we introduce the background material in computer science, mathematics and physics necessary to understand quantum computation and quantum information. This is done at a level comprehensible to readers with a background at least equal to that of a beginning graduate student in one or more of these three disciplines; the most important requirements are a certain level of mathematical maturity, and the desire to learn about quantum computation and quantum information. The second purpose of the book is to develop in detail the central results of quantum computation and quantum information. With thorough study the reader should develop a working understanding of the fundamental tools and results of this exciting field, either as part of their general education, or as a prelude to independent research in quantum computation and quantum information.
Structure of the book
The basic structure of the book is depicted in Figure 1. The book is divided into three parts. The general strategy is to proceed from the concrete to the more abstract whenever possible.
In Chapter 4 we showed that an arbitrary unitary operation U may be implemented on a quantum computer using a circuit consisting of single qubit and controlled-not gates. Such universality results are important because they ensure the equivalence of apparently different models of quantum computation. For example, the universality results ensure that a quantum computer programmer may design quantum circuits containing gates which have four input and output qubits, confident that such gates can be simulated by a constant number of controlled-not and single qubit unitary gates.
An unsatisfactory aspect of the universality of controlled-not and single qubit unitary gates is that the single qubit gates form a continuum, while the methods for fault-tolerant quantum computation described in Chapter 10 work only for a discrete set of gates. Fortunately, also in Chapter 4 we saw that any single qubit gate may be approximated to arbitrary accuracy using a finite set of gates, such as the controlled-not gate, Hadamard gate H, phase gate S, and π/8 gate. We also gave a heuristic argument that approximating the chosen single qubit gate to an accuracy ε required only Θ(1/ε) gates chosen from the finite set. Furthermore, in Chapter 10 we showed that the controlled-not, Hadamard, phase and π/8 gates may be implemented in a fault-tolerant manner.
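The flavour of this finite-set approximation can be seen in a small brute-force search, sketched below in Python. This is a toy illustration only, not the constructions of Chapters 4 or 10: it enumerates products of the Hadamard gate H and the π/8 gate T (the phase gate S = T² is implicit in such products) and reports the best approximation, up to global phase, to an arbitrarily chosen target rotation.

```python
import itertools
import numpy as np

# Hadamard gate H and pi/8 gate T (single-qubit unitaries).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

def phase_distance(U, V):
    """Frobenius distance between U and V, minimized over global phase:
    min_phi ||U - e^{i phi} V||_F = sqrt(4 - 2|Tr(U^dag V)|) for 2x2 unitaries."""
    return np.sqrt(max(4.0 - 2.0 * abs(np.trace(U.conj().T @ V)), 0.0))

# Target: a z-rotation by 1 radian, not exactly a short H/T product.
theta = 1.0
target = np.array([[np.exp(-1j * theta / 2), 0],
                   [0, np.exp(1j * theta / 2)]])

best = (np.inf, "")
for length in range(1, 13):
    for seq in itertools.product("HT", repeat=length):
        U = np.eye(2)
        for g in seq:
            U = (H if g == "H" else T) @ U
        d = phase_distance(target, U)
        if d < best[0]:
            best = (d, "".join(seq))

print("best distance %.4f with sequence %s" % best)
```

As the sequence length grows, the best achievable distance shrinks, illustrating how a discrete gate set can approximate the continuum of single qubit gates.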
Recent work in quantum information science has produced a revolution in our understanding of quantum entanglement. Scientists now view entanglement as a physical resource with many important applications. These range from quantum computers, which would be able to compute exponentially faster than classical computers, to quantum cryptographic techniques, which could provide unbreakable codes for the transfer of secret information over public channels. These important advances in the study of quantum entanglement and information touch on deep foundational issues in both physics and philosophy. This interdisciplinary volume brings together fourteen of the world's leading physicists and philosophers of physics to address the most important developments and debates in this exciting area of research. It offers a broad spectrum of approaches to resolving deep foundational challenges - philosophical, mathematical, and physical - raised by quantum information, quantum processing, and entanglement. This book is ideal for historians, philosophers of science and physicists.
Quantum cryptography (or quantum key distribution) is a state-of-the-art technique that exploits properties of quantum mechanics to guarantee the secure exchange of secret keys. This 2006 text introduces the principles and techniques of quantum cryptography, setting it in the wider context of cryptography and security, with specific focus on secret-key distillation. The book starts with an overview chapter, progressing to classical cryptography, information theory (classical and quantum), and applications of quantum cryptography. The discussion moves to secret-key distillation, privacy amplification and reconciliation techniques, concluding with the security principles of quantum cryptography. The author explains the physical implementation and security of these systems, enabling engineers to gauge the suitability of quantum cryptography for securing transmission in their particular application. With its blend of fundamental theory, implementation techniques, and details of recent protocols, this book will be of interest to graduate students, researchers, and practitioners in electrical engineering, physics, and computer science.
In recent years there has been a huge increase in the research and development of nanoscale science and technology. Central to the understanding of the properties of nanoscale structures is the modeling of electronic conduction through these systems. This graduate textbook provides an in-depth description of the transport phenomena relevant to systems of nanoscale dimensions. In this textbook the different theoretical approaches are critically discussed, with emphasis on their basic assumptions and approximations. The book also covers information content in the measurement of currents, the role of initial conditions in establishing a steady state, and the modern use of density-functional theory. Topics are introduced by simple physical arguments, with particular attention to the non-equilibrium statistical nature of electrical conduction, and followed by a detailed formal derivation. This textbook is ideal for graduate students in physics, chemistry, and electrical engineering.
The first realization that the validity of the quantum superposition principle in the Hilbert space describing a composite quantum system may give rise to fundamentally new correlations between the constituent subsystems came in the landmark 1935 paper by Einstein, Podolsky, and Rosen (EPR), which showed that the measurement statistics of observables in certain quantum states could not be reproduced by assigning definite wavefunctions to the individual subsystems. It was in response to the EPR paper that Schrödinger, in the same year, coined the term entanglement (Verschränkung) to acknowledge the failure of classical intuition in describing the relationship between the “parts” and the “whole” in the quantum world:
Whenever one has a complete expectation catalog – a maximum total knowledge – a ψ function – for two completely separated bodies, …then one obviously has it also for the two bodies together. But the converse is not true. The best possible knowledge of a total system does not necessarily include total knowledge of all its parts, not even when these are fully separated from each other and at the moment are not influencing each other at all.
While Bell's strengthening of the original EPR-paradox setting and the subsequent experimental verification of Bell inequalities irreversibly changed the perception of entanglement from a property of counterintuitive “spookiness” to (beyond reasonable doubt) an experimental reality, the concept and implications of entanglement continue to be associated with a host of physical, mathematical, and philosophical challenges. In particular, investigation of entanglement in both its qualitative and quantitative aspects has intensified under the impetus of quantum information science (QIS).
Discussions of quantum-computational algorithms in the literature refer to various features of quantum mechanics as the source of the exponential speed-up relative to classical algorithms: superposition and entanglement, the fact that the state space of n bits is a space of 2^n states while the state space of n qubits is a space of 2^n dimensions, the possibility of computing all values of a function in a single computational step by “quantum parallelism,” or the possibility of an efficient implementation of the discrete quantum Fourier transform. Here I propose a different answer to the question posed in the title, in terms of the difference between classical logic and quantum logic, i.e., the difference between the Boolean classical event structure and the non-Boolean quantum event structure. In a nutshell, the ultimate source of the speed-up is the difference between a classical disjunction, which is true (or false) in virtue of the truth values of the disjuncts, and a quantum disjunction, which can be true (or false) even if none of the disjuncts is either true or false.
In the following, I will discuss the information-processing in Deutsch's XOR algorithm (the first genuinely quantum algorithm) and related period-finding quantum algorithms (Simon's algorithm and Shor's factorization algorithm). It is well known that these algorithms can be formulated as solutions to a hidden-subgroup problem. Here the salient features of the information-processing are presented from the perspective of the way in which the algorithms exploit the non-Boolean logic represented by the projective geometry (the subspace structure) of Hilbert space.
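To fix ideas, here is a minimal state-vector simulation of Deutsch's XOR algorithm in plain Python/numpy (a sketch for illustration, not any particular quantum-computing library): a single oracle query decides whether f: {0,1} → {0,1} is constant (f(0) = f(1)) or balanced (f(0) ≠ f(1)), i.e., it computes the XOR f(0) ⊕ f(1).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def oracle(f):
    """U_f |x, y> = |x, y XOR f(x)>, on basis states indexed by 2x + y."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Start in |0>|1>, apply H to both qubits.
    state = np.kron(H @ np.array([1, 0]), H @ np.array([0, 1]))
    state = oracle(f) @ state        # one oracle query (phase kickback)
    state = np.kron(H, I2) @ state   # interfere on the first qubit
    # Probability the first qubit reads 1: nonzero iff f is balanced.
    p1 = np.sum(np.abs(state[2:]) ** 2)
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```

The classical question “is f(0) ⊕ f(1) = 0?” is answered without the truth values of the individual disjuncts f(0) and f(1) ever being computed, which is the point at issue in the quantum-logical analysis above.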
Indefinite causal structure poses particular problems for theory formulation since many of the core ideas used in the usual approaches to theory construction depend on having definite causal structure. For example, the notion of a state across space evolving in time requires that we have some definite causal structure so we can define a state on a space-like hypersurface. We will see that many of these problems are mitigated if we are able to formulate the theory in a formalism-local (or F-local) fashion. A formulation of a physical theory is said to be F-local if, in making predictions for any given arbitrary space-time region, we need only refer to mathematical objects pertaining to that region. This is a desirable property both on the grounds of efficiency and since, if we have indefinite causal structure, it is not clear how to select some other space-time region on which our calculations may depend. The usual ways of formulating physical theories (the time-evolving state picture, the histories approach, and the local-equations approach) are not F-local.
We set up a framework for probabilistic theories with indefinite causal structure. This, the causaloid framework, is F-local. We describe how quantum theory can be formulated in the causaloid framework (in an F-local fashion). This provides yet another formulation of quantum theory. This formulation, however, may be particularly relevant to the problem of finding a theory of quantum gravity. The problem of quantum gravity is to find a theory that reduces in appropriate limits to general relativity and quantum theory (including, at least, those situations in which those two theories have been experimentally confirmed).
The use of parameters to describe an experimenter's control over the devices used in an experiment is familiar in quantum physics, for example in connection with Bell inequalities. Parameters are also interesting in a different but related context, as we noticed when we proved a formal separation in quantum mechanics between linear operators and the probabilities that these operators generate. In comparing an experiment against its description by a density operator and detection operators, one compares tallies of experimental outcomes against the probabilities generated by the operators but not directly against the operators. Recognizing that the accessibility of operators to experimental tests is only indirect, via probabilities, motivates us to ask what probabilities tell us about operators, or, put more precisely, “what combinations of a parameterized density operator and parameterized detection operators generate any given set of parameterized probabilities?”
Here, we review and augment recent proofs that any given parameterized probabilities can be generated in very diverse ways, so that a parameterized probability measure, detached from any of the (infinitely many) parameterized operators that generate it, becomes an interesting object in its own right. By detaching a parameterized probability measure from the operators that may have led us to it, we (1) strengthen Holevo's bound on a quantum communication channel and (2) clarify a role for multiple levels of modeling in an example based on quantum key distribution. We then inquire into some parameterized probability measures generated by entangled states and into the topology of the associated parameter spaces; in particular we display some previously overlooked topological features of level sets of these probability measures.
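One elementary way to see this non-uniqueness, sketched below in Python, is a toy illustration rather than the constructions of the proofs reviewed here: conjugating both a parameterized state and its detection operators by any fixed unitary V leaves every Born-rule probability Tr[ρ(θ)E] unchanged, so two quite different operator families generate the same parameterized probability measure.

```python
import numpy as np

def probs(rho, effects):
    """Born-rule probabilities p_i = Tr(rho E_i)."""
    return np.array([np.real(np.trace(rho @ E)) for E in effects])

# Family 1: a qubit state rho(theta) measured in the computational basis.
def rho(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(psi, psi.conj())

E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Family 2: conjugate state and effects by a fixed unitary (Hadamard here);
# Tr[(V rho V^dag)(V E V^dag)] = Tr[rho E], so the probabilities agree.
V = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
rho2 = lambda theta: V @ rho(theta) @ V.conj().T
E2 = [V @ e @ V.conj().T for e in E]

for theta in np.linspace(0, np.pi, 5):
    assert np.allclose(probs(rho(theta), E), probs(rho2(theta), E2))
print("same parameterized probabilities from different operator families")
```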