Many computing systems consist of a possibly large number of components that not only work independently or concurrently, but also interact or communicate with each other from time to time. Examples of such systems are operating systems, distributed systems and communication protocols, as well as systolic algorithms, computer architectures and integrated circuits.
Conceptually, it is convenient to treat these systems and their components uniformly as concurrent processes. A process is here an object that is designed for a possibly continuous interaction with its user, which can be another process. An interaction can be an input or output of a value, but we just think of it abstractly as a communication. In between two subsequent communications the process usually engages in some internal actions. These proceed autonomously at a certain speed and are not visible to the user. However, as a result of such internal actions the process behaviour may appear nondeterministic to the user. Concurrency arises because there can be more than one user and inside the process more than one active subprocess. The behaviour of a process is unsatisfactory for its user(s) if it does not communicate as desired. The reason can be that the process stops too early or that it engages in an infinite loop of internal actions. The first problem causes a deadlock with the user(s); the second one is known as divergence. Thus most processes are designed to communicate arbitrarily long without any danger of deadlock or divergence.
A crucial test for any theory of concurrent processes is its application in case studies. These clarify the application areas where the theory is particularly helpful, but they also reveal its shortcomings. Such shortcomings can be challenges for future research.
Considering all existing case studies based on Petri nets, algebraic process terms and logical formulas, it is obvious that these description methods are immensely helpful in specifying, constructing and verifying concurrent processes. We think in particular of protocol verification, e.g. [Vaa86, Bae90], the verification of VLSI algorithms, e.g. [Hen86], the design of computer architectures, e.g. [Klu87, DD89a, DD89b], and even of concurrent programming languages such as OCCAM [INM84, RH88] or POOL [Ame85, ABKR86, AR89, Vaa90]. However, these examples use one specific description method in each case.
Our overall aim is the smooth integration of description methods that cover different levels of abstraction in a top-down design of concurrent processes. This aim is similar to what Misra and Chandy have presented in their rich and beautiful book on UNITY [CM88]. However, we believe that their approach requires complementary work at the level of implementation, i.e. where UNITY programs are mapped onto architectures.
Our presentation of three different views of concurrent processes attempts to contribute to this overall aim. To obtain a coherent theory, we concentrated on a setting where simple classes of nets, terms and formulas are used. We demonstrated the applicability of this setting in a series of small but non-trivial process constructions.
The stepwise development of complex systems through various levels of abstraction is good practice in software and hardware design. However, the semantic link between these different levels is often missing. This book is intended as a detailed case study of how such links can be established. It presents a theory of concurrent processes where three different semantic description methods are brought together in one uniform framework. Nets, terms and formulas are seen as expressing complementary views of processes, each one describing processes at a different level of abstraction.
Petri nets are used to describe processes as concurrent and interacting machines which engage in internal actions and communications with their environment or user.
Process terms are used as an abstract concurrent programming language. Due to their algebraic structure, process terms emphasise compositionality, i.e. how complex terms are composed from simpler ones.
Logical formulas of a first-order predicate logic, called trace logic, are used as a specification language for processes. Logical formulas specify safety and liveness aspects of the communication behaviour of processes as required by their users.
At the heart of this theory are two sets of transformation rules for the top-down design of concurrent processes. The first set can be used to transform logical formulas stepwise into process terms, and the second set can be used to transform process terms into Petri nets. These rules are based on novel techniques for the operational and denotational semantics of concurrent processes.
We now introduce a second view of concurrent processes whereby each process is a term over a certain signature of operator symbols. By interpreting these symbols on nets, we will solve the problem of compositionality. As interpretations we take a selection of the operators suggested in Lauer's COSY, Milner's CCS and Hoare's CSP.
Lauer's COSY (Concurrent Systems) is one of the first approaches to compositionality of processes on a schematic, uninterpreted level [LTS79]. It originates from path expressions [CH74] and can thus be seen as an extension of regular expressions to include parallelism. We use here COSY's operator for parallel composition because, as we shall see later in Chapter 4, it enjoys pleasant logical properties.
A significant step beyond COSY is Milner's CCS (Calculus of Communicating Systems) with its conceptual roots in algebra and the λ-calculus [Mil80, Mil83]. From CCS we take the idea that processes are recursive terms over certain operator symbols, i.e. they form the smallest set that is generated by the operator symbols and closed under parameterless recursion. In COSY only iteration is present as might be clear from its background in regular expressions. By using recursion we ensure that process terms are Turing powerful even on the schematic level without the help of interpreted values or variables. We also take CCS's choice operator because it allows a very clear treatment on the level of nets, and its notion of action morphism by which actions can be renamed.
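To make the term view concrete, here is a minimal Haskell sketch of such a schematic process language. The datatype below is our assumption for illustration only (the constructor names and the exact operator selection are ours, not the book's): action prefix and choice in the style of CCS, COSY-style parallel composition, CCS action morphisms for renaming, and parameterless recursion via process identifiers.

    -- Schematic process terms: actions are uninterpreted names.
    type Action = String
    type Ident  = String

    data Proc
      = Stop                            -- inactive (deadlocked) process
      | Prefix Action Proc              -- a.P : communicate a, then behave like P
      | Choice Proc Proc                -- P + Q : CCS choice
      | Par Proc Proc                   -- P || Q : COSY-style parallel composition
      | Rename (Action -> Action) Proc  -- action morphism applied to a process
      | Var Ident                       -- process identifier
      | Rec Ident Proc                  -- rec X. P : parameterless recursion

    -- A one-place buffer that alternates 'in' and 'out' forever; recursion,
    -- not mere iteration, is what expresses the unbounded behaviour.
    buffer :: Proc
    buffer = Rec "X" (Prefix "in" (Prefix "out" (Var "X")))

A compositional semantics in the sense discussed above then only has to interpret each constructor on nets.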
Harper, Honsell, and Plotkin present LF (the Logical Framework) as a general framework for the definition of logics. LF provides a uniform way of encoding a logical language, its inference rules and its proofs. Avron, Honsell, and Mason give a variety of examples for encoding logics in LF. In this paper we describe Elf, a meta-language intended for environments dealing with deductive systems represented in LF.
While this paper is intended to include a full description of the Elf core language, we only state, but do not prove here, the most important theorems regarding the basic building blocks of Elf. These proofs are left to a future paper. A preliminary account of Elf can be found in earlier work. The range of applications of Elf includes theorem proving and proof transformation in various logics, definition and execution of structured operational and natural semantics for programming languages, type checking and type inference, etc. The basic idea behind Elf is to unify logic definition (in the style of LF) with logic programming (in the style of λProlog). It achieves this unification by giving types an operational interpretation, much the same way that Prolog gives certain formulas (Horn clauses) an operational interpretation. An alternative approach to logic programming in LF has been developed independently by Pym.
Here are some of the salient characteristics of our unified approach to logic definition and meta-programming.
Martin-Löf's type theory is presented in several steps. The kernel is a dependently typed λ-calculus. Then there are schemata for inductive sets and families of sets and for primitive recursive functions and families of functions. Finally, there are set formers (generic polymorphism) and universes. At each step syntax, inference rules, and set-theoretic semantics are given.
Introduction
Usually Martin-Löf's type theory is presented as a closed system with rules for a fixed collection of set formers including Π, Σ, +, Eq, Nₙ, N, W, and Uₙ. But it is often pointed out that the system is in principle open to extension: we may introduce new sets when there is a need for them. The principle is that a set is by definition inductively generated – it is defined by its introduction rules, which are rules for generating its elements. The elimination rule is determined by the introduction rules and expresses definition by primitive recursion on the way the elements of the set are generated. (Normally the term primitive recursive refers to number-theoretic functions. But it makes sense to use this term generally for the kind of recursion you have in Martin-Löf's type theory, since it is recursion on the way the elements of a set are generated. This includes primitive recursive functionals and transfinite recursion on well-orderings. An alternative term would be structural recursion in analogy with structural induction.)
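As a small illustration of this principle (in Haskell rather than in type theory itself, with names of our choosing), the set of natural numbers is generated by two introduction rules, and its eliminator is exactly primitive recursion over those rules:

    -- N is defined by its introduction rules: Zero and Succ.
    data Nat = Zero | Succ Nat

    -- The elimination rule is determined by the introduction rules:
    -- to map out of Nat, say what to do for Zero and how to extend a
    -- result at n to one at Succ n.
    natrec :: a -> (Nat -> a -> a) -> Nat -> a
    natrec z _ Zero     = z
    natrec z s (Succ n) = s n (natrec z s n)

    -- Addition, defined by primitive recursion on its first argument.
    plus :: Nat -> Nat -> Nat
    plus m n = natrec n (\_ r -> Succ r) m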
Backhouse et al. exhibited a schema for inductive sets which delimits a class of definitions admissible in Martin-Löf's type theory; it includes all the standard operations for forming small sets except the equality set.
We define an extended version of Nederpelt's calculus which can be used as a logical framework. The extensions have been introduced in order to support the notions of mathematical definition of constants and to internalize the notion of theory. The resulting calculus remains concise and simple, a basic requirement for logical frameworks. The calculus manipulates two kinds of objects: texts which correspond to λ-expressions, and contexts which are mainly sequences of variable declarations, constant definitions, or context abbreviations. Basic operations on texts and contexts are provided. It is argued that these operations allow one to structure large theories. An example is provided.
Introduction
This paper introduces the static kernel of a language called DEVA. This language, which has been developed in the framework of the ToolUse Esprit project, is intended to express software development mathematically. The general paradigm followed considers development methods as theories and developments as proofs. Therefore, the kernel of the language should provide a general treatment of formal theories and proofs.
The problem of defining a generic formal system is comparable to the one of defining a general computing language. While, according to Church's thesis, any algorithm can be expressed as a recursive function, one uses higher level languages for the actual programming of computers. Similarly, one could argue that any formal system can be expressed as Post productions, but to use such a formalism as a logical framework is, in practice, inadequate.
The goal of this note is to present a “modular” proof, for various type systems with η-conversion, of the completeness and correctness of an algorithm for testing the conversion of two terms. The proof of completeness is an application of the notion of logical relations (see Statman 1983, which also uses this notion for a proof of Church-Rosser for the simply typed λ-calculus).
An application of our result will be the equivalence between two formulations of Type Theory: the one where conversion is a judgement, as in the present version of Martin-Löf's set theory, and the one where conversion is defined at the level of raw terms, as in the standard version of LF (for a “conversion-as-judgement” presentation of LF, see Harper 1988). Even if we don't include η-conversion, the equivalence between the “conversion-as-judgement” and “conversion defined on raw terms” formulations appears to be a non-trivial property.
In order to simplify the presentation we will limit ourselves to type theory with only Π and one universe. This calculus contains LF. After some motivations, we present the algorithm, the proof of its completeness and, as a corollary, its correctness. Our argument also yields normalisation, Church-Rosser, and the equivalence between the two possible formulations of Type Theory.
Informal motivation
The algorithm
The idea is to compute the weak head normal forms of the two terms (in an untyped way) and, in order to take care of η-conversion, in the case where one weak head normal form is an abstraction (λx:A)M and the other, N, is a variable or an application, to compare recursively apply(N,ξ) and M[ξ].
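The following Haskell sketch is one possible rendering of this test. It is our reconstruction under simplifying assumptions, not the paper's definition: named variables instead of the paper's notation, type annotations on abstractions dropped, and a counter supplying the fresh generic variable ξ.

    data Term
      = Var String       -- ordinary variable
      | Gen Int          -- fresh generic variable ξ_i
      | App Term Term
      | Lam String Term  -- (λx:A)M with the annotation A dropped

    -- Substitution of s for x in t. No capture avoidance is needed in
    -- this sketch: we only ever substitute closed terms (Gen i).
    subst :: String -> Term -> Term -> Term
    subst x s t = case t of
      Var y | y == x    -> s
            | otherwise -> Var y
      Gen i             -> Gen i
      App m n           -> App (subst x s m) (subst x s n)
      Lam y m | y == x    -> Lam y m
              | otherwise -> Lam y (subst x s m)

    -- Weak head normal form, computed in an untyped way.
    whnf :: Term -> Term
    whnf (App m n) = case whnf m of
      Lam x b -> whnf (subst x n b)
      m'      -> App m' n
    whnf t = t

    -- conv i t u : are t and u convertible up to β and η?
    -- The counter i supplies fresh generic variables.
    conv :: Int -> Term -> Term -> Bool
    conv i t u = case (whnf t, whnf u) of
      (Lam x m, Lam y n) -> conv (i + 1) (subst x (Gen i) m) (subst y (Gen i) n)
      -- η: an abstraction against a variable or application N is compared
      -- by recursively comparing apply(N, ξ) with the body at ξ.
      (Lam x m, n)       -> conv (i + 1) (subst x (Gen i) m) (App n (Gen i))
      (m, Lam y n)       -> conv (i + 1) (App m (Gen i)) (subst y (Gen i) n)
      (m, n)             -> convNeutral i m n

    -- Structural comparison of non-abstraction weak head normal forms.
    convNeutral :: Int -> Term -> Term -> Bool
    convNeutral _ (Var x)   (Var y)   = x == y
    convNeutral _ (Gen j)   (Gen k)   = j == k
    convNeutral i (App m n) (App p q) = convNeutral i m p && conv i n q
    convNeutral _ _         _         = False

The two middle alternatives of conv are exactly the η-cases described above; everything else is ordinary β-comparison of weak head normal forms.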
We illustrate the effectiveness of proof transformations which expose the computational content of classical proofs even in cases where it is not apparent. We state without proof a theorem that these transformations apply to proofs in a fragment of type theory and discuss their implementation in Nuprl. We end with a discussion of the applications to Higman's lemma by the second author using the implemented system.
Introduction: Computational content
Informal practice
Sometimes we express computational ideas directly as when we say 2 + 2 reduces to 4 or when we specify an algorithm for solving a problem: “use Euclid's GCD (greatest common divisor) algorithm to reduce this fraction.” At other times we refer only indirectly to a method of computation, as in the following form of Euclid's proof that there are infinitely many primes:
For every natural number n there is a prime p greater than n. To prove this, notice first that every number m has a least prime factor; to find it, just try dividing it by 2, 3, …, m and take the first divisor. In particular n! + 1 has a least prime factor. Call it p. Clearly p cannot be any number between 2 and n since none of those divide n! + 1 evenly. Therefore p > n. QED
This proof implicitly provides an algorithm to find a prime greater than n.
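Extracted explicitly, and under the deliberately naive reading of the proof (trial division up to m), the algorithm might look like this in Haskell; leastPrimeFactor and primeAbove are our names for illustration:

    -- Least prime factor of m (m >= 2): the first divisor in 2..m is prime.
    leastPrimeFactor :: Integer -> Integer
    leastPrimeFactor m = head [d | d <- [2 .. m], m `mod` d == 0]

    -- Euclid's construction: the least prime factor of n! + 1 exceeds n,
    -- since no number between 2 and n divides n! + 1 evenly.
    primeAbove :: Integer -> Integer
    primeAbove n = leastPrimeFactor (product [1 .. n] + 1)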
This book contains a collection of papers concerned with logical frameworks. Such frameworks arise in a number of ways when considering the relationship between logic and computation, and indeed the general structure of logical formalisms. In particular, in Computer Science, there is interest in the representation and organization of mathematical knowledge on the computer, and in obtaining computational assistance with its derivation. One would especially like to implement program logics and prove programs correct. Again, there is direct computational content in various logical formalisms, particularly constructive ones. Finally, such issues provoke interest in re-examining purely logical questions.
Logical frameworks arise in two distinct but related senses. First, very many logics are of interest in Computer Science, and great repetition of effort is involved in implementing each. It would therefore be helpful to create a single framework, a kind of meta-logic, which is itself implementable and in which the logics of interest can be represented. Putting the two together, one obtains an implementation of any represented logic.
In the second sense, one chooses a particular “universal” logic which is strong enough to do all that is required, and sticks to it. For example, one might choose a set theory, and do mathematics within that. Both approaches have much in common. Even within a fixed logic there is the need for a descriptive apparatus for particular mathematical theories, notations, derived rules and so on. Providing such is rather similar to providing a framework in the first sense.
By Leen Helmink and René Ahn, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands
Edited by Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, and G. Plotkin, University of Edinburgh
In this paper, a method is presented for proof construction in Generalised Type Systems. An interactive system that implements the method has been developed. Generalised type systems (GTSs) provide a uniform way to describe and classify type-theoretical systems, e.g. systems in the families of AUTOMATH, the Calculus of Constructions, and LF. A method is presented to perform unification-based top-down proof construction for generalised type systems, thus offering a well-founded, elegant and powerful underlying formalism for a proof development system. It combines clause resolution with higher-order natural-deduction-style theorem proving. No theoretical contribution to generalised type systems is claimed.
A type theory presents a set of rules to derive types of objects in a given context with assumptions about the type of primitive objects. The objects and types are expressions in typed λ-calculus. The propositions as types paradigm provides a direct mapping between (higher-order) logic and type theory. In this interpretation, contexts correspond to theories, types correspond to propositions, and objects correspond to proofs of propositions. Type theory has successfully demonstrated its capabilities to formalise many parts of mathematics in a uniform and natural way. For many generalised type systems, like the systems in the so-called λ-cube, the typing relation is decidable. This permits automatic proof checking, and such proof checkers have been developed for specific type systems.
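A tiny example of this interpretation, sketched in Haskell rather than a full dependent type theory (the names are ours): read a type as a proposition, and any total term of that type as a proof of it.

    -- The type a -> (b -> a) read as the proposition "A implies (B implies A)";
    -- the term below is a proof of it.
    proofK :: a -> (b -> a)
    proofK x _ = x

    -- Pairing corresponds to conjunction: a proof of "A and B implies A".
    proofFst :: (a, b) -> a
    proofFst (x, _) = x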
The problem addressed in this paper is to construct an object in a given context, given its type.
Various languages have been proposed as specification languages for representing a wide variety of logics. The development of typed λ-calculi has been one approach toward this goal. The logical framework (LF), a λ-calculus with dependent types, is one example of such a language. A small subset of intuitionistic logic with quantification over the simply typed λ-calculus has also been proposed as a framework for specifying general logics. The logic of hereditary Harrop formulas with quantification at all non-predicate types, denoted here as hhω, is such a meta-logic. In this paper, we show how to translate specifications in LF into hhω specifications in a direct and natural way, so that correct typing in LF corresponds to intuitionistic provability in hhω. In addition, we demonstrate a direct correspondence between proofs in these two systems. The logic of hhω can be implemented using such logic programming techniques as providing operational interpretations to the connectives and implementing unification on λ-terms. As a result, relating these two languages makes it possible to provide direct implementations of proof checkers and theorem provers for logics specified in LF.
Introduction
The design of languages that can express a wide variety of logics has been the focus of much recent work. Such languages attempt to provide a general theory of inference systems that captures uniformities across different logics, so that they can be exploited in implementing theorem provers and proof systems.