We discuss the design of a typed lambda calculus for quantum computation. After a brief discussion of the role of higher-order functions in quantum information theory, we define the quantum lambda calculus and its operational semantics. Safety invariants, such as the no-cloning property, are enforced by a static type system that is based on intuitionistic linear logic. We also describe a type inference algorithm and a categorical semantics.
4.1 Introduction
The lambda calculus, developed in the 1930s by Church and Curry, is a formalism for expressing higher-order functions. In a nutshell, a higher-order function is a function that inputs or outputs a “black box,” which is itself a (possibly higher-order) function. Higher-order functions are a computationally powerful tool. Indeed, the pure untyped lambda calculus has the same computational power as Turing machines (Turing 1937). At the same time, higher-order functions are a useful abstraction for programmers. They form the basis of functional programming languages such as LISP (McCarthy 1960), Scheme (Sussman and Steele 1975), ML (Milner 1978), and Haskell (Hudak et al. 2007).
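To make the notion concrete, here is a small higher-order function written in Haskell; the example is ours and is given only as an illustration, not as code from the chapter.

    -- 'twice' is a higher-order function: it takes a function f as a
    -- "black box" and returns a new function that applies f two times.
    twice :: (a -> a) -> (a -> a)
    twice f = f . f

    -- For example, twice (+3) 10 evaluates to 16,
    -- and twice reverse xs evaluates to xs itself.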
In this chapter, we discuss how to combine higher-order functions with quantum computation. We believe that this is an interesting question for a number of reasons. First, the combination of higher-order functions with quantum phenomena raises the prospect of entangled functions. Certain well-known quantum phenomena can be naturally described in terms of entangled functions, and we give some examples of this in Section 4.2.
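To give a first impression of how a linear type discipline enforces the no-cloning property mentioned in the abstract, the sketch below uses GHC's LinearTypes extension as a stand-in for the quantum lambda calculus itself; the Qubit type is a placeholder introduced purely for illustration.

    {-# LANGUAGE LinearTypes #-}

    -- Placeholder type; a real implementation would keep this abstract.
    data Qubit = Qubit

    -- A linear function must consume its qubit argument exactly once.
    applyGate :: (Qubit %1 -> Qubit) -> Qubit %1 -> Qubit
    applyGate gate q = gate q

    -- Attempting to duplicate the argument is rejected by the typechecker,
    -- mirroring the no-cloning property:
    --   clone :: Qubit %1 -> (Qubit, Qubit)
    --   clone q = (q, q)   -- type error: q would be used twice

The chapter's type system is based on intuitionistic linear logic rather than on GHC's multiplicity annotations, but the underlying idea that a qubit-typed variable cannot be used twice is the same.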
In recent work, several researchers, including the authors, have developed a categorical formalization of quantum mechanics in terms of symmetric monoidal dagger categories. In this framework, classical data turn out to be represented by an algebraic structure, that of special commutative dagger Frobenius algebras. This structure captures the capabilities that distinguish classical data: they can be copied and deleted. In the present chapter, we provide categorical semantics and diagrammatic representations of deterministic, nondeterministic, and probabilistic operations over classical data represented in this way.
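As a toy illustration of this copyable and deletable structure, the following Haskell sketch (ours, not the Frobenius-algebra formulation itself) spells out the two operations; in the categorical setting they correspond to the comultiplication and counit of the algebra.

    -- Classical data support explicit copying and deletion; an opaque
    -- quantum state admits no such operations, by no-cloning.
    class Classical a where
      copy   :: a -> (a, a)
      delete :: a -> ()

    instance Classical Bool where
      copy b   = (b, b)
      delete _ = ()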
Moreover, a combination of some fundamental categorical constructions (the Kleisli construction of the category of free algebras and the Grothendieck construction of the total category of an indexed category) with the specific categorical presentations of pure and mixed quantum states provides a resource-sensitive categorical account of classical control of quantum data and of classical data resulting from quantum measurements, as well as of the classical data processing that may happen in between measurements and controls. Along the way we also discover some apparently novel quantum typing structures.
One of the salient features of categorical quantum mechanics is its graphical calculus, which allows succinct presentations of diverse quantum protocols. The elements of an abstract stochastic calculus are beginning to emerge from it, pointing toward convenient refinements of resource-sensitive logics which, it is hoped, will capture the probabilistic content and limited observability of quantum data.
Diagrams are widely used in reasoning about problems in physics, mathematics and logic, but have traditionally been considered to be only heuristic tools and not valid elements of mathematical proofs. This book challenges this prejudice against visualisation in the history of logic and mathematics and provides a formal foundation for work on natural reasoning in a visual mode. The author presents Venn diagrams as a formal system of representation equipped with its own syntax and semantics and specifies rules of transformation that make this system sound and complete. The system is then extended to the equivalent of a first-order monadic language. The soundness of these diagrammatic systems refutes the contention that graphical representation is misleading in reasoning. The validity of the transformation rules ensures that the correct application of the rules will not lead to fallacies. The book concludes with a discussion of some fundamental differences between graphical systems and linguistic systems. This groundbreaking work will have an important influence on research in logic, philosophy and knowledge representation.
Axiomatic categorical domain theory is crucial for understanding the meaning of programs and reasoning about them. This book is the first systematic account of the subject and studies mathematical structures suitable for modelling functional programming languages in an axiomatic (i.e. abstract) setting. In particular, the author develops theories of partiality and recursive types and applies them to the study of the metalanguage FPC; for example, enriched categorical models of the FPC are defined. Furthermore, FPC is considered as a programming language with a call-by-value operational semantics and a denotational semantics defined on top of a categorical model. To conclude, for an axiomatisation of absolute non-trivial domain-theoretic models of FPC, operational and denotational semantics are related by means of computational soundness and adequacy results. To make the book reasonably self-contained, the author includes an introduction to enriched category theory.
Chaitin, the inventor of algorithmic information theory, presents in this book the strongest possible version of Gödel's incompleteness theorem, using an information theoretic approach based on the size of computer programs. One half of the book is concerned with studying the halting probability of a universal computer if its program is chosen by tossing a coin. The other half is concerned with encoding the halting probability as an algebraic equation in integers, a so-called exponential diophantine equation.
Constraint logic programming lies at the intersection of logic programming, optimisation and artificial intelligence. It has proved a successful tool in many areas including production planning, transportation scheduling, numerical analysis and bioinformatics. Eclipse is one of the leading software systems that realise its underlying methodology. Eclipse is exploited commercially by Cisco, and is freely available and used for teaching and research in over 500 universities. This book has a two-fold purpose. First, it is an introduction to constraint programming, appropriate for one-semester courses for upper undergraduate or graduate students in computer science or for programmers wishing to master the practical aspects of constraint programming. By the end of the book, the reader will be able to understand and write constraint programs that solve complex problems. Second, it provides a systematic introduction to the Eclipse system through carefully chosen examples that guide the reader through the language and illustrate its power, versatility and utility.
Dr Andrews here provides a homogeneous treatment of the semantics (operational and logical) of both theoretical and practical logic programming languages. He shows how the rift between theory and practice in logic programming can be bridged. This is achieved by precisely characterizing the way in which 'depth-first' search for solutions to a logical formula - the usual strategy in most practical languages - is incomplete. Languages that perform 'breadth-first' searches reflect more closely the theory underlying logic programming languages. Researchers interested in logic programming or semantics, as well as artificial intelligence search strategies, will want to consult this book as the only source for some essential and new ideas in the area.
Epistemic logic has grown from its philosophical beginnings to find diverse applications in computer science as a means of reasoning about the knowledge and belief of agents. This book, based on courses taught at universities and summer schools, provides a broad introduction to the subject; many exercises are included together with their solutions. The authors begin by presenting the necessary apparatus from mathematics and logic, including Kripke semantics and the well-known modal logics K, T, S4 and S5. Then they turn to applications in the contexts of distributed systems and artificial intelligence: topics that are addressed include the notions of common knowledge, distributed knowledge, explicit and implicit belief, the interplays between knowledge and time, and knowledge and action, as well as a graded (or numerical) variant of the epistemic operators. The problem of logical omniscience is also discussed extensively. Halpern and Moses' theory of honest formulae is covered, and a digression is made into the realm of non-monotonic reasoning and preferential entailment. Moore's autoepistemic logic is discussed, together with Levesque's related logic of 'all I know'. Furthermore, it is shown how one can base default and counterfactual reasoning on epistemic logic.
This book discusses recent research in the theoretical foundations of several subjects of importance for the design of hardware, and for computer science in general. The physical technologies of very large scale integration (VLSI) are having major effects on the electronics industry. The potential diversity and complexity of digital systems have brought about a revolution in the technologies of digital design, involving the application of concepts and methods to do with algorithms and programming. In return, the problems of VLSI design have led to new subjects becoming of importance in computer science. Topics covered in this volume include: models of VLSI complexity; complexity theory; systolic algorithm design; specification theory; verification theory; design by stepwise refinement and transformations. A thorough literature survey with an exhaustive bibliography is also included. The book has grown from a workshop held at the Centre for Theoretical Computer Science at Leeds University and organised by the editors.
Formal specification is a method for precisely modelling computer-based systems that combines concepts from software engineering and mathematical logic. In this book the authors describe algebraic and state-based specification techniques from the unified view of the Common Object-oriented Language for Design, COLD, a wide-spectrum language in the tradition of VDM and Z. The kernel language is explained in detail, with many examples, including: set representation, a display device, an INGRES-like database system, and a line editor. Fundamental techniques such as initial algebra semantics, loose semantics, partial functions, hiding, sharing, predicate and dynamic logic, abstraction functions, representation of invariants and black-box correctness are also presented. More advanced ideas, for example Horn logic, and large systems are given in the final part. Appendices contain full details of the language's syntax and a specification library. Techniques for software development and design are emphasised throughout, so the book will be an excellent choice for courses in these areas.
Reasoning under uncertainty, that is, making judgements with only partial knowledge, is a major theme in artificial intelligence. Professor Paris provides here an introduction to the mathematical foundations of the subject. It is suited for readers with some knowledge of undergraduate mathematics but is otherwise self-contained, collecting together the key results on the subject and formalizing within a unified framework the main contemporary approaches and assumptions. The author has concentrated on giving clear mathematical formulations, analyses, justifications and consequences of the main theories about uncertain reasoning, so the book can serve as a textbook for beginners or as a starting point for further basic research into the subject. It will be welcomed by graduate students and research workers in logic, philosophy and computer science as an account of how mathematics and artificial intelligence can complement and enrich each other.
The major reason for the lack of use of parallel computing is the mismatch between the complexity and variety of parallel hardware, and the software development tools to program it. The cost of developing software needs to be amortised over decades, but the platforms on which it executes change every few years, requiring complete rewrites. The evident cost-effectiveness of parallel computation has not been realized because of this mismatch. This book presents an integrated approach to parallel software development by addressing both software and performance issues together. It presents a methodology for software construction that produces architecture-independent and intellectually abstract software. The software can execute efficiently on a range of existing and potential hardware configurations. The approach is based on the construction of categorical data types, a generalization of abstract data types, and of objects. Categorical data types abstract both from the representation of a data type, and also from the detailed control flow necessary to perform operations on it. They thus impose a strong separation between the semantics, on which programs can depend, and the implementation, which is therefore free to hide the parallel machine properties that are used.
Declarative programs consist of mathematical functions and relations and are amenable to formal specification and verification, since the methods of logic and proof can be applied to the programs in a well-defined manner. Here Dr Padawitz emphasizes verification based on logical inference rules, i.e. deduction (in contrast with model-theoretic approaches, deductive methods can be automated to some extent). His treatment of the subject differs from others in that he tries to capture the actual styles and applications of programming; neither too general with respect to the underlying logic, nor too restrictive for the practice of programming. He generalizes and unifies results from classical theorem-proving and term rewriting to provide proof methods tailored to declarative program synthesis and verification. Detailed examples accompany the development of the methods, whose use is supported by a documented prototyping system. The book can be used for graduate courses or as a reference for researchers in formal methods, theorem-proving and declarative languages.
Petri nets are a popular and powerful formal model for the analysis and modelling of concurrent systems, and a rich theory has developed around them. Petri nets are taught to undergraduates, and also used by industrial practitioners. This book focuses on a particular class of Petri nets, free choice Petri nets, which play a central role in the theory. The text is very clearly organised, with every notion carefully explained and every result proved. Clear exposition is given for place invariants, siphons, traps and many other important analysis techniques. The material is organised along the lines of a course book, and each chapter contains numerous exercises, making this book ideal for graduate students and research workers alike.
Mathematicians from Leibniz to Hilbert have sought to mechanise the verification of mathematical proofs. Developments arising out of Gödel's proof of his incompleteness theorem showed that no computer program could automatically prove true all the theorems of mathematics. In practice, however, there are a number of sophisticated automated reasoning programs that are quite effective at checking mathematical proofs. Now in paperback, this book describes the use of a computer program to check the proofs of several celebrated theorems in metamathematics including Gödel's incompleteness theorem and the Church–Rosser theorem. The computer verification using the Boyer–Moore theorem prover yields precise and rigorous proofs of these difficult theorems. It also demonstrates the range and power of automated proof checking technology. The mechanisation of metamathematics itself has important implications for automated reasoning since metatheorems can be applied by labour-saving devices to simplify proof construction. The book should be accessible to scientists and philosophers with some knowledge of logic and computing.
This book presents the proceedings of the Distributed Ada '89 Symposium held at the University of Southampton in December. The objective of the symposium was to provide a platform for developers and users with experience in the areas of distributed and parallel environments to reveal the advantages and difficulties encountered. The impact of Ada-9X and other enhancements to the language was also explored.
The authors describe here a framework in which the type notation of functional languages is extended to include a notation for binding times (that is, run-time and compile-time) that distinguishes between them. Consequently, the ability to specify code and verify program correctness can be improved. Two developments are needed, the first of which introduces the binding-time distinction into the lambda calculus, in a manner analogous to the introduction of types into the untyped lambda calculus. Methods are also presented for introducing combinators for run-time. The second concerns the interpretation of the resulting language, which is known as the mixed lambda-calculus and combinatory logic. The notion of 'parametrized semantics' is used to describe code generation and abstract interpretation. The code generation is for a simple abstract machine designed for the purpose; it is close to the categorical abstract machine. The abstract interpretation focuses on a strictness analysis that generalises Wadler's analysis for lists.
In our day-to-day lives we constantly make decisions which are simply 'good enough' rather than optimal. Most computer-based decision-making algorithms, on the other hand, doggedly seek only the optimal solution based on rigid criteria and reject any others. In this book, Professor Stirling outlines an alternative approach, using novel algorithms and techniques which can be used to find satisficing solutions. Building on traditional decision and game theory, these techniques allow decision-making systems to cope with more subtle situations where self and group interests conflict, perfect solutions can't be found and human issues need to be taken into account - in short, more closely modelling the way humans make decisions. The book will therefore be of great interest to engineers, computer scientists and mathematicians working on artificial intelligence and expert systems.