J. C. M. Baeten, Technische Universiteit Eindhoven, The Netherlands; T. Basten, Technische Universiteit Eindhoven, The Netherlands; M. A. Reniers, Technische Universiteit Eindhoven, The Netherlands
In the process theory BSP(A) discussed in Chapter 4, the only way of combining two processes is by means of alternative composition. For the specification of more complex systems, additional composition mechanisms are useful. This chapter treats the extension with a sequential-composition operator. Given two process terms x and y, the term x · y denotes the sequential composition of x and y. The intuition of this operation is that upon the successful termination of process x, process y is started. If process x ends in a deadlock, the sequential composition x · y deadlocks as well. Thus, a prerequisite for a meaningful introduction of a sequential-composition operator is that successful and unsuccessful termination can be distinguished. As already explained in Chapter 4, this is not possible in the theory MPT(A), in which all processes end in deadlock. Thus, as before, the theory BSP(A) of Chapter 4 is used as a starting point. This theory is extended with sequential composition to obtain the Theory of Sequential Processes TSP(A). It turns out that the empty process is an identity element for sequential composition: x · 1 = 1 · x = x.
The process theory TSP
This section introduces the process theory TSP, the Theory of Sequential Processes. The theory has, as before, a set of actions A as its parameter. The signature of the process theory TSP(A) is the signature of the process theory BSP(A) extended with the sequential-composition operator.
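As an informal aside, the identity axioms x · 1 = 1 · x = x can be illustrated with a small Haskell sketch; the datatype and simplification function below (Proc, simplify) are hypothetical names introduced here for illustration, not part of the book's formal development.

    -- A minimal sketch of BSP(A)-style process terms extended with
    -- sequential composition; actions are modeled as strings.
    data Proc
      = Deadlock            -- 0: inaction, unsuccessful termination
      | Empty               -- 1: the empty process, successful termination
      | Prefix String Proc  -- a.x: action prefix
      | Alt Proc Proc       -- x + y: alternative composition
      | Seq Proc Proc       -- x . y: sequential composition
      deriving (Eq, Show)

    -- Bottom-up simplification applying the identity axioms
    -- x . 1 = x and 1 . x = x from the text.
    simplify :: Proc -> Proc
    simplify (Seq x y) = case (simplify x, simplify y) of
      (x', Empty) -> x'
      (Empty, y') -> y'
      (x', y')    -> Seq x' y'
    simplify (Alt x y)    = Alt (simplify x) (simplify y)
    simplify (Prefix a x) = Prefix a (simplify x)
    simplify p            = p

For example, simplify (Seq (Prefix "a" Empty) Empty) yields Prefix "a" Empty, in line with x · 1 = x.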
The control and data flow of a program can be represented using continuations, a concept from denotational semantics that has practical application in real compilers. This book shows how continuation-passing style is used as an intermediate representation on which to perform optimisations and program transformations. Continuations can be used to compile most programming languages. The method is illustrated in a compiler for the programming language Standard ML. However, prior knowledge of ML is not necessary, as the author carefully explains each concept as it arises. This is the first book to show how concepts from the theory of programming languages can be applied to the production of practical optimising compilers for modern languages like ML. This book will be essential reading for compiler writers in both industry and academe, as well as for students and researchers in programming language theory.
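For readers new to the idea, here is a minimal Haskell sketch of continuation-passing style (hypothetical example code, not from the book, which works in Standard ML): a direct-style function and its CPS counterpart, in which every intermediate result is passed to an explicit continuation.

    -- Direct style: ordinary recursive factorial.
    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * factorial (n - 1)

    -- CPS: the continuation k receives each result, so control flow
    -- and evaluation order are explicit in the program text, which
    -- is what makes CPS convenient as a compiler intermediate form.
    factorialCPS :: Integer -> (Integer -> r) -> r
    factorialCPS 0 k = k 1
    factorialCPS n k = factorialCPS (n - 1) (\r -> k (n * r))

    -- factorialCPS 5 id evaluates to 120, matching factorial 5.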
Ada is the only ISO-standard, object-oriented, concurrent, real-time programming language. It is intended for use in large, long-lived applications where reliability and efficiency are essential, particularly real-time and embedded systems. In this book, Alan Burns and Andy Wellings give a thorough, self-contained account of how the Ada tasking model can be used to construct a wide range of concurrent and real-time systems. This is the only book that focuses on an in-depth discussion of the Ada tasking model. Following on from the authors' earlier title Concurrency in Ada, this book brings the discussion up to date to include the new Ada 2005 language and the recent advances in real-time programming techniques. It will be of value to software professionals and advanced students of programming alike: indeed every Ada programmer will find it essential reading and a primary reference work that will sit alongside the language reference manual.
This book gives applications of the theory of process algebra, or the Algebra of Communicating Processes (ACP): the study of concurrent or communicating processes within an algebraic framework. The approach is axiomatic; the authors consider structures defined by a set of (mostly equational) axioms and equipped with several operators. Thus the term 'algebra' is used in the model-theoretic sense. The axiomatic approach enables one to organize the field of process theories. The theory is applied systematically to a number of situations, including systolic algorithms, the semantics of an object-oriented language, and protocols. It will be welcomed by computer scientists working in parallel programming.
Category theory has become increasingly important and popular in computer science, and many universities now have introductions to category theory as part of their courses for undergraduate computer scientists. The author is a respected category theorist and has based this textbook on a course given over the last few years at the University of Sydney. The theory is developed in a straightforward way, and is enriched with many examples from computer science. Thus this book meets the needs of undergraduate computer scientists, and yet retains a level of mathematical correctness that will broaden its appeal to include students of mathematics new to category theory.
Type theory is one of the most important tools in the design of higher-level programming languages, such as ML. This book introduces and teaches its techniques by focusing on one particularly neat system and studying it in detail. In this way, all the key ideas are covered without getting involved in the complications of more advanced systems, concentrating instead on the principles that make the theory work in practice. This book takes a type-assignment approach to type theory, and the system considered is the simplest polymorphic one. The author covers all the basic ideas, including the system's relation to propositional logic, and gives a careful treatment of the type-checking algorithm which lies at the heart of every such system. Also featured are two other interesting algorithms that have been buried in inaccessible technical literature. The mathematical presentation is rigorous but clear, making the book suitable as an introduction to type theory for computer scientists.
By Mingsheng Ying, University of Technology Sydney; Runyao Duan, University of Technology Sydney; Yuan Feng, University of Technology Sydney; and Zhengfeng Ji, Perimeter Institute for Theoretical Physics
Edited by Simon Gay, University of Glasgow, and Ian Mackie, Imperial College London
This chapter presents a systematic exposition of predicate transformer semantics for quantum programs. It is divided into two parts: The first part reviews the state transformer (forward) semantics of quantum programs according to Selinger's suggestion of representing quantum programs by superoperators and elucidates D'Hondt-Panangaden's theory of quantum weakest preconditions in detail. In the second part, we develop a quite complete predicate transformer semantics of quantum programs based on Birkhoff-von Neumann quantum logic by considering only quantum predicates expressed by projection operators. In particular, the universal conjunctivity and termination law of quantum programs are proved, and Hoare's induction rule is established in the quantum setting.
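For orientation, the central identity of quantum weakest preconditions can be stated as follows (standard material, recalled here rather than quoted from the chapter): if a program denotes a superoperator with Kraus decomposition \(\mathcal{E}(\rho) = \sum_i E_i \rho E_i^\dagger\), and a predicate is a Hermitian operator M between 0 and the identity, then

\[ wp(\mathcal{E})(M) \;=\; \sum_i E_i^\dagger M E_i, \qquad \operatorname{tr}\bigl(M\,\mathcal{E}(\rho)\bigr) \;=\; \operatorname{tr}\bigl(wp(\mathcal{E})(M)\,\rho\bigr) \ \text{for all states } \rho, \]

so the weakest precondition is exactly the Heisenberg-picture dual of the Schrödinger-picture state transformer.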
8.1 Introduction
In the mid-1990s Shor and Grover discovered, respectively, the famous quantum factoring and searching algorithms. Their discoveries indicated that in principle quantum computers offer a way to accomplish certain computational tasks much more efficiently than classical computers, and thus stimulated intensive investigation into quantum computation. Since then a substantial effort has been made to develop the theory of quantum computation, to find new quantum algorithms, and to develop the physical techniques needed to build functional quantum computers, including in particular fault-tolerance techniques.
Currently, quantum algorithms are expressed mainly at the very low level of quantum circuits. In the history of classical computation, however, it was realized long ago that programming languages allow us to think about the problem we intend to solve in a high-level, conceptual way, rather than in terms of the details of implementation.
Reasoning about quantum systems has gained prominence due to its potential in applications such as information processing, security, distributed systems, and randomized algorithms. This fact has attracted research in formal reasoning about quantum states, programs, and processes. On the other hand, temporal logics have proved to be successful in the verification of classical distributed systems and security protocols. In this chapter we extend exogenous quantum propositional logic with temporal modalities, considering both linear and branching time. We provide weakly complete Hilbert calculi for the proposed quantum temporal logics and study their SAT and model-checking problems.
10.1 Introduction
Reasoning about quantum systems has gained prominence due to their potential applications in information processing, security, distributed systems and randomized algorithms. This has attracted research in formal reasoning about quantum states (see for instance van der Meyden and Patra 2003a, b; Mateus and Sernadas 2006; Chadha et al. 2009; Caleiro et al. 2006) and quantum programs (cf. Knill 1996; Sanders and Zuliani 2000; Abramsky and Coecke 2004; D'Hondt and Panangaden 2004; Altenkirch and Grattage 2005; Selinger and Valiron 2005; Baltag and Smets 2006; Baltazar et al. 2007; Chadha et al. 2006, 2007; Baltazar et al. 2008). On the other hand, formal methods have proved to be successful in the design and verification of classical distributed systems and security protocols (e.g., Clarke and Wing 1996; Meadows 2003). Herein, we present branching and linear temporal logics for reasoning about the evolution of quantum systems composed of a fixed finite set of qubits.
We describe model-checking techniques for protocols arising in quantum information theory and quantum cryptography. We discuss the theory and implementation of a practical model checker, QMC, for quantum protocols. In our framework, we assume that the quantum operations performed in a protocol are restricted to those within the stabilizer formalism; while this particular set of operations is not universal for quantum computation, it allows us to develop models of several useful protocols as well as of systems involving both classical and quantum information processing. We discuss the modeling language of QMC, the logic used for verification, and the verification algorithms that have been implemented in the tool. We demonstrate our techniques with applications to a number of case studies, including quantum teleportation and the BB84 quantum coin-flipping protocol.
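As a small illustration of the stabilizer formalism mentioned above (standard material, not specific to QMC), the Bell state used in teleportation is determined by two Pauli generators:

\[ \lvert \Phi^+ \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr), \qquad (X \otimes X)\,\lvert \Phi^+ \rangle = \lvert \Phi^+ \rangle, \qquad (Z \otimes Z)\,\lvert \Phi^+ \rangle = \lvert \Phi^+ \rangle. \]

Tracking such generator sets instead of state vectors is what, by the Gottesman-Knill theorem, allows stabilizer circuits to be simulated efficiently on a classical machine, and hence makes model checking of these protocols feasible.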
11.1 Introduction
Model-checking is a method of computer-aided verification used widely in both academia and industry for the analysis of software and communication systems. The method essentially consists of three phases: (1) modeling the system of interest at a suitable level of abstraction, usually by describing it in a formal specification language; (2) specifying the properties that the system must satisfy – these properties are typically expressed with reference to the system model, using a logic designed specially for this purpose (e.g., linear or branching time temporal logic); and (3) supplying the model and the properties to a model-checking tool, which automatically constructs the state space of the model and checks whether or not the given properties are satisfied (and if not, usually providing a counterexample).
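To make phase (2) concrete, a typical branching-time property (a standard CTL example, not drawn from a particular case study) is

\[ \mathbf{AG}\,(\mathit{request} \rightarrow \mathbf{AF}\,\mathit{grant}), \]

which asserts that along every execution path, every request is eventually followed by a grant; the model checker verifies this against the full state space of the model.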
The idea of quantum computation, in the algorithmic sense, originated from the suggestion by Feynman (1982) that a computer based on the principles of quantum mechanics might be capable of efficiently simulating quantum systems of interest to physicists; such simulation seems to be very difficult with classical computers. Feynman's suggestion was followed up by Deutsch (1985), who introduced the notion of the quantum Turing machine and investigated the possible computational power of physically realizable computers. He showed that a specific problem, now known as Deutsch's problem, can be solved more efficiently by a quantum algorithm than by a classical algorithm. Several years later, Shor (1994) discovered efficient quantum algorithms for two important practical problems – integer factorization and the “discrete logarithm” problem – and shortly afterwards, Grover (1996) discovered an efficient quantum algorithm for unstructured searching. Since then, quantum algorithmics and quantum complexity theory have developed into substantial and active research fields.
Meanwhile, the principles of quantum mechanics were being used as the foundation for a new approach to cryptography. Bennett and Brassard (1984) defined a protocol for key distribution whose security is guaranteed by the laws of quantum theory. Their system built on earlier work by Wiesner (1983), which remained unpublished until several years after its conception. We regard quantum cryptography as an aspect of quantum computation, in particular distributed quantum computation; alternatively, both quantum algorithmics and quantum cryptography can be viewed as branches of quantum information processing.
In this chapter, we present applications of abstract interpretation techniques in quantum computing. Quantum computing is now a well-established domain of computer science, and recent developments in semantic techniques show the vitality of this rapidly growing area. On the other hand, abstract interpretation has proved to be a powerful theory (of "classical" computer science) for comparing more or less precise semantics of the same programming language. In a more general picture, abstract interpretation can be seen as a framework for comparing the precision of several representations of the same dynamic system evolution. In this chapter, we point out that abstract interpretation can be fruitfully used in quantum computing: (i) for establishing a hierarchy of quantum semantics and (ii) for analyzing entanglement evolution.
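The comparison of semantics at different levels of precision rests on the standard notion of a Galois connection (recalled here for orientation): monotone maps \(\alpha : C \to A\) (abstraction) and \(\gamma : A \to C\) (concretization) between a concrete domain C and an abstract domain A such that

\[ \forall c \in C,\ \forall a \in A:\quad \alpha(c) \sqsubseteq_A a \iff c \sqsubseteq_C \gamma(a). \]

An abstract semantics is then a sound approximation of a concrete one when every concrete behavior is accounted for by its abstraction.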
6.1 Introduction
The field of quantum information processing is growing rapidly. This development is based on the exploitation of nonclassical phenomena such as superposition and entanglement, which gave rise to new protocols (cryptographic protocols with unconditional security, Bennett 1992; Bennett and Brassard 1984; teleportation, Bennett et al. 1993) and new algorithms (polynomial-time factorization, Shor 1994; search, Grover 1996).
Some recent developments in quantum information processing are based on the use of high-level methods, adapting and extending the methods successfully used in classical computation (Abramsky and Coecke 2004; Gay and Nagarajan 2005, 2006; Gay et al. 2008; Kashefi 2003; Lalire and Jorrand 2004; Selinger 2004c, d).
Measurement-based quantum computation (MBQC) has emerged as a new approach to quantum computation where the notion of measurement is the main driving force of computation. This is in contrast with the more traditional circuit model that takes unitary operations as fundamental. Among measurement-based quantum computation methods, the recently introduced one-way quantum computer stands out as basic and fundamental. The key idea is to start from an entangled state and then use measurements and one-qubit unitaries, which may be dependent on the outcomes of measurements, to guide the computation. The main point is that one never has to perform unitaries on more than one qubit at a time after the initial preparation of an entangled state. The "programs" that one writes in this model are traditionally called "patterns."
In this chapter, we develop a rigorous mathematical model underlying measurement-based quantum computation. We give syntax, operational semantics, denotational semantics, and an algebra of programs derived from the denotational semantics. We also present a rewrite theory and prove a general standardization theorem that allows all programs to be put in a semantically equivalent standard form. Standardization has far-reaching consequences: a new physical architecture based on performing all the entanglement in the beginning, parallelization by exposing the dependency structure of measurements, and expressiveness theorems.
We use our general measurement calculus not just to formalize the one-way model but also several other measurement-based models, e.g., the Teleportation, Phase, and Pauli models, and present compositional embeddings between them and the one-way model.
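As a taste of the pattern notation (the standard introductory example from the measurement calculus, assuming the conventions of Danos, Kashefi, and Panangaden), the one-qubit unitary \(J(\alpha)\) is implemented on input qubit 1 and output qubit 2 by

\[ \mathfrak{J}(\alpha) \;:=\; X_2^{s_1}\, M_1^{-\alpha}\, E_{12}, \]

read right to left: \(E_{12}\) entangles the two qubits with a controlled-Z, \(M_1^{-\alpha}\) measures qubit 1 in the basis \(\tfrac{1}{\sqrt{2}}(\lvert 0\rangle \pm e^{-i\alpha}\lvert 1\rangle)\) yielding outcome \(s_1\), and \(X_2^{s_1}\) applies an X correction to qubit 2 exactly when \(s_1 = 1\). The correction's dependence on \(s_1\) is the dependency structure that standardization exposes.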
It is well known that the "quantum logic" approach to the foundations of quantum mechanics is based on the subspace ordering of projectors on a Hilbert space. In this paper, we show that this is a special case of an ordering on partial isometries, introduced by Halmos and McLaughlin. Partial isometries have a natural physical interpretation; however, they are notoriously not closed under composition. In order to take a categorical approach, we demonstrate that the Halmos-McLaughlin partial ordering, together with tools from both categorical logic and inverse categories, allows us to form a category of partial isometries.
This category can reasonably be considered a “categorification” of quantum logic – we therefore compare this category with Abramsky and Coecke's “compact closed categories” approach to foundations and with the “monoidal closed categories” view of categorical logic. This comparison illustrates a fundamental incompatibility between these two distinct approaches to the foundations of quantum mechanics.
9.1 Introduction
As early as 1936, von Neumann and Birkhoff proposed treating projectors on Hilbert space as propositions about quantum systems (Birkhoff and von Neumann 1936), by direct analogy with classical order-theoretic approaches to logic. Boolean lattices arise as the Lindenbaum-Tarski algebras of propositional logics, and as the set of all projectors on a Hilbert space also forms an orthocomplemented lattice, the operations meet, join, and complement were analogously interpreted as the logical connectives conjunction, disjunction, and negation.
Just as conventional functional programs may be understood as proofs in an intuitionistic logic, so quantum processes can also be viewed as proofs in a suitable logic. We describe such a logic, the logic of compact closed categories and biproducts, presented both as a sequent calculus and as a system of proof-nets. This logic captures much of the necessary structure needed to represent quantum processes under classical control, while remaining agnostic to the fine details. We demonstrate how to represent quantum processes as proof-nets and show that the dynamic behavior of a quantum process is captured by the cut-elimination procedure for the logic. We show that the cut-elimination procedure is strongly normalizing: that is, every legal way of simplifying a proof-net leads to the same, unique normal form. Finally, taking some initial set of operations as nonlogical axioms, we show that the resulting category of proof-nets is a representation of the free compact closed category with biproducts generated by those operations.
3.1 Introduction
3.1.1 Logic, Processes, and Categories
Birkhoff and von Neumann initiated the logical study of quantum mechanics in their well-known 1936 paper. They constructed a logic by assigning a proposition letter to each observable property of a given quantum system and studied negations, conjunctions, and disjunctions of these properties. The resulting lattice is nondistributive, and so the heart of what is called “quantum logic” is the study of various kinds of nondistributive lattices.
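A one-qubit example makes the failure of distributivity concrete (standard material, recalled here for orientation). Take the closed subspaces \(P = \mathrm{span}\{\lvert 0\rangle\}\), \(Q = \mathrm{span}\{\lvert +\rangle\}\), and \(R = \mathrm{span}\{\lvert -\rangle\}\), where \(\lvert \pm\rangle = \tfrac{1}{\sqrt{2}}(\lvert 0\rangle \pm \lvert 1\rangle)\). Since \(Q \vee R\) is the whole space,

\[ P \wedge (Q \vee R) = P \;\neq\; 0 = (P \wedge Q) \vee (P \wedge R). \]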
The no-cloning theorem is a basic limitative result for quantum mechanics, with particular significance for quantum information. It says that there is no unitary operation that makes perfect copies of an unknown (pure) quantum state. We re-examine this foundational result from the perspective of the categorical formulation of quantum mechanics recently introduced by the author and Bob Coecke. We formulate and prove a novel version of the result, as an incompatibility between having a “natural” copying operation and the structural features required for modeling quantum entanglement coexisting in the same category. This formulation is strikingly similar to a well-known limitative result in categorical logic, Joyal's lemma, which shows that a “Boolean cartesian closed category” trivializes and hence provides a major roadblock to the computational interpretation of classical logic. This shows a heretofore unsuspected connection between limitative results in proof theory and no-go theorems in quantum mechanics. The argument is of a robust, topological character and uses the graphical calculus for monoidal categories to advantage.
1.1 Introduction
The no-cloning theorem (Dieks 1982; Wootters and Zurek 1982) is a basic limitative result for quantum mechanics, with particular significance for quantum information. It says that there is no unitary operation that makes perfect copies of an unknown (pure) quantum state. A stronger form of this result is the no-broadcasting theorem (Barnum et al. 1996), which applies to mixed states. There is also a no-deleting theorem (Pati and Braunstein 2000).
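The linearity argument behind the theorem fits in two lines (standard, recalled here for orientation). Suppose a unitary U copied every state against a fixed blank state: \(U(\lvert \psi\rangle \otimes \lvert 0\rangle) = \lvert \psi\rangle \otimes \lvert \psi\rangle\) for all \(\lvert \psi\rangle\). Applying this to \(\lvert 0\rangle\) and \(\lvert 1\rangle\), and then using linearity on \(\lvert +\rangle = \tfrac{1}{\sqrt{2}}(\lvert 0\rangle + \lvert 1\rangle)\), gives

\[ U(\lvert +\rangle \otimes \lvert 0\rangle) = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr) \;\neq\; \lvert +\rangle \otimes \lvert +\rangle = \tfrac{1}{2}\bigl(\lvert 00\rangle + \lvert 01\rangle + \lvert 10\rangle + \lvert 11\rangle\bigr), \]

a contradiction; no such U exists.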
The quantum IO monad is a purely functional interface to quantum programming implemented as a Haskell library. At the same time it provides a constructive semantics of quantum programming. The QIO monad separates reversible (i.e., unitary) and irreversible (i.e., probabilistic) computations and provides a reversible let operation (ulet), allowing us to use ancillas (auxiliary qubits) in a modular fashion. QIO programs can be simulated either by calculating a probability distribution or by embedding them into the IO monad using the random number generator. As an example we present a complete implementation of Shor's algorithm.
5.1 Introduction
We present an interface from a pure functional programming language, Haskell, to quantum programming: the quantum IO monad, and use it to implement Shor's factorization algorithm. The implementation of the QIO monad provides a constructive semantics for quantum programming, i.e., a functional program that can also be understood as a mathematical model of quantum computing. Actually, the Haskell QIO library is only a first approximation of such a semantics; we would like to move to a more expressive language that is also logically sound. Here we are thinking of a language such as Agda (Norell 2007), which is based on Martin-Löf's type theory. We have already investigated this approach of functional specifications of effects in a classical context (Swierstra and Altenkirch 2007, 2008; Swierstra 2008). At the same time the QIO monad provides a high-level interface to a hypothetical quantum computer.
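To give a flavor of the interface, here is a minimal coin-flip program written against the QIO signatures as presented in the paper (mkQbit, applyU, uhad, measQbit); treat it as a sketch, since it requires the QIO library in scope and the released library's module layout and names may differ.

    -- Assumed interface, following the paper's presentation:
    --   mkQbit   :: Bool -> QIO Qbit   -- allocate a qubit in a base state
    --   uhad     :: Qbit -> U          -- Hadamard, as a value of the unitary type U
    --   applyU   :: U -> QIO ()        -- the reversible (unitary) fragment
    --   measQbit :: Qbit -> QIO Bool   -- the irreversible (measurement) fragment

    coinFlip :: QIO Bool
    coinFlip = do
      q <- mkQbit False    -- prepare |0>
      applyU (uhad q)      -- superpose: (|0> + |1>) / sqrt 2
      measQbit q           -- measure: True or False, each with probability 1/2

Running coinFlip through the IO embedding samples a fair coin, while the simulator returns the exact distribution, assigning probability 1/2 to each outcome.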