By
G. Ramalingam, Microsoft Research India, Bangalore, India,
Thomas Reps, University of Wisconsin, Madison, WI, USA
Edited by
Yves Bertot, Gérard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Jean-Jacques Lévy, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Gordon Plotkin, University of Edinburgh
Program representation graphs (PRGs) are an intermediate representation for programs. (They are closely related to program dependence graphs.) In this paper, we develop a mathematical semantics for PRGs that, inspired by Kahn's semantics for a parallel programming language, interprets PRGs as dataflow graphs. We also study the relationship between this semantics and the standard operational semantics of programs. We show that (i) the semantics of PRGs is more defined than the standard operational semantics, and (ii) for states on which a program terminates normally, the PRG semantics is identical to the standard operational semantics.
Introduction
In this paper, we develop a mathematical semantics for program representation graphs (PRGs) and study its relationship to a standard (operational) semantics of programs. Program representation graphs are an intermediate representation of programs, introduced by Yang et al. in an algorithm for detecting program components that exhibit identical execution behaviors. They combine features of static single-assignment (SSA) form and program dependence graphs (PDGs). (See Fig. 10.1 for an example program and its PRG.) PRGs have also been used in an algorithm for merging program variants.
Program dependence graphs have been used as an intermediate program representation in various applications such as vectorization, parallelization, and merging program variants. A number of variants of the PDG have been used as the basis for efficient program analysis by optimizing compilers as well as other tools.
By
Roberto M. Amadio, Université Paris Diderot, PPS, UMR-7126,
Mehdi Dogguy, Université Paris Diderot, PPS, UMR-7126
The Sπ-calculus is a synchronous π-calculus which is based on the SL model. The latter is a relaxation of the Esterel model where the reaction to the absence of a signal within an instant can only happen at the next instant. In the present work, we present and characterize a compositional semantics of the Sπ-calculus based on suitable notions of labelled transition system and bisimulation. Based on this semantic framework, we explore the notion of determinacy and the related one of (local) confluence.
Introduction
Let P be a program that can repeatedly interact with its environment. A derivative of P is a program to which P reduces after a finite number of interactions with the environment. A program terminates if all its internal computations terminate and it is reactive if all its derivatives are guaranteed to terminate. A program is determinate if after any finite number of interactions with the environment the resulting derivative is unique up to semantic equivalence.
Most conditions found in the literature that entail determinacy are rather intuitive; however, the formal statement of these conditions, and the proof that they indeed guarantee determinacy, can be rather intricate, in particular in the presence of name mobility, as available in paradigmatic form in the π-calculus.
Our purpose here is to provide a streamlined theory of determinacy for the synchronous π-calculus.
By
Paul Klint, Centrum voor Wiskunde en Informatica and University of Amsterdam
Gilles Kahn was a great colleague and good friend who left us much too early. In this paper I will sketch our joint research projects, the many discussions we had, some personal recollections, and the influence these have had on the current state of the art in meta-level language technology.
Getting acquainted
On a sunny day at the beginning of July 1983 I parked my beige Citroën Dyane in the parking lot in front of Bâtiment 8, INRIA Rocquencourt. At the time, the buildings gave the impression that the US military, who had constructed the premises at Rocquencourt, were also the last who had ever used a paint brush. Inside lived an energetic research family, and I was hosted by project CROAP, headed by Gilles Kahn. My roommates Véronique Donzeau-Gouge and Bertrand Mélèse helped me find a desk in a corner of the cramped building and helped me set up a Multics account on the Honeywell-Bull mainframe.
After some flirtations with computer graphics, software portability and the Unix operating system, I turned to the study of string processing languages on which I wrote a PhD in 1982. The main topic was the Summer programming language that featured objects, success/failure driven control flow, string matching and composition, and a “try” mechanism that allowed the execution of an arbitrary sequence of statements and would undo all side effects in case this execution resulted in failure.
The essence of this paper stems from discussions that the first author (Jean-Pierre Banâtre) had with Gilles on topics related to programming in general and chemical programming in particular. Gilles liked the ideas behind the Gamma model and the closely related CHAM of Berry and Boudol, as the basic principles are so simple and elegant. The last opportunity Jean-Pierre had to speak about these ideas with Gilles was when he presented the LNCS volume devoted to the Unconventional Programming Paradigms workshop. The 10-minute appointment (at that time, Gilles was CEO of INRIA) lasted much longer. Gilles was fine and in good humour, as often, and he was clearly happy to talk about a subject he loved. He spoke a lot about the λ-calculus, the reduction principle, β-reduction… a really great memory!
Abstract
Originally, the chemical model of computation has been proposed as a simple and elegant parallel programming paradigm. Data is seen as “molecules” and computation as “chemical reactions”: if some molecules satisfy a predefined reaction condition, they are replaced by the “product” of the reaction. When no reaction is possible, a normal form is reached and the program terminates. In this paper, we describe classical coordination mechanisms and parallel programming models in the chemical setting. All these examples put forward the simplicity and expressivity of the chemical paradigm.
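The reaction loop just described can be sketched in a few lines of Python; the `gamma` helper and the maximum-of-a-multiset reaction below are illustrative assumptions on my part, not code from the paper.

```python
import random

def gamma(molecules, condition, product):
    """Repeatedly pick molecules that satisfy the reaction condition
    and replace them by the reaction's product; stop at normal form."""
    pool = list(molecules)
    while True:
        # Find a reacting pair (reactions here consume two molecules).
        pairs = [(i, j) for i in range(len(pool))
                        for j in range(len(pool))
                        if i != j and condition(pool[i], pool[j])]
        if not pairs:
            return pool                  # normal form: program terminates
        i, j = random.choice(pairs)      # nondeterministic choice
        a, b = pool[i], pool[j]
        pool = [m for k, m in enumerate(pool) if k not in (i, j)]
        pool.extend(product(a, b))

# Computing the maximum of a multiset: any pair (a, b) with a <= b
# reacts and is replaced by b alone; the survivor is the maximum.
print(gamma([4, 1, 7, 3], lambda a, b: a <= b, lambda a, b: [b]))  # [7]
```

Note that the result is independent of the (random) order in which reactions fire, which is exactly what makes the paradigm a natural fit for parallelism.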
By
Aarne Ranta, Chalmers University of Technology and University of Gothenburg
Grammars of natural languages are needed in programs such as natural language interfaces and dialogue systems, but also, more generally, in software localization. Writing grammar implementations is a highly specialized task. For various reasons, no libraries have been available to ease it. This paper shows how grammar libraries can be written in GF (Grammatical Framework), focusing on the software engineering aspects rather than the linguistic aspects. As an implementation of the approach, the GF Resource Grammar Library currently comprises ten languages. As an application, a translation system from formalized mathematics to text in three languages is outlined.
Introduction
How can we generate natural language text from a formal specification of meaning, such as a formal proof? Coscoy, Kahn, and Théry studied the problem and built a program that worked for all proofs constructed in the Coq proof assistant. Their program translates structural text components, such as “we conclude that”, but leaves propositions expressed in formal language:
We conclude that Even(n) → Odd(Succ(n)).
A similar decision is made in Isar, whereas Mizar permits English-like expressions for some predicates. One reason for stopping at this level is certainly that typical users of proof systems are comfortable with reading logical formulas, so that only the proof-level formalization needs translation.
This essay is dedicated in admiration to the memory of Gilles Kahn, a friend and guide for 35 years. I have been struck by the confidence and warmth expressed towards him by the many French colleagues whom he guided. As a non-Frenchman I can also testify that colleagues in other countries have felt the same.
I begin by recalling two events separated by 30 years; one private to him and me, one public in the UK. I met Gilles at Stanford University in 1972, when he was studying for the PhD degree – which, I came to believe, he found unnecessary to acquire. His study was, I think, thwarted by the misunderstanding of others. I was working on two different things: on computer-assisted reasoning in a logic of Dana Scott based upon domain theory, which inspired me, and on models of interaction – which I believed would grow steadily in importance (as indeed they have). There was hope to unite the two. Yet it was hard to relate domain theory to the non-determinism inherent in interactive processes. I remember, but not in detail, a discussion of this connection with Gilles. The main thing I remember is that he ignited. He had got the idea of the domain of streams which, developed jointly with David MacQueen, became one of the most famous papers in informatics: a model of deterministic processes linked by streams of data.
Adjoint algorithms are a powerful way to obtain the gradients that are needed in scientific computing. Automatic differentiation can build adjoint algorithms automatically by source transformation of the direct algorithm. The specific structure of adjoint algorithms relies heavily on reversal of the sequence of computations made by the direct algorithm. This reversal problem is both difficult and interesting. This paper surveys the reversal strategies employed in recent tools and describes some of the more abstract formalizations used to justify these strategies.
Why build adjoint algorithms?
Gradients are a powerful tool for mathematical optimization. The Newton method, for example, uses the gradient to find a zero of a function iteratively, with excellent accuracy: the error decreases quadratically with the number of iterations. In the context of optimization, the optimum is a zero of the gradient itself, and therefore the Newton method needs second derivatives in addition to the gradient. In scientific computing the most popular optimization methods, such as BFGS, also perform best when provided with gradients.
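As a sketch of this idea in Python (an illustrative toy, assuming a one-dimensional objective with a hand-coded gradient and second derivative, not any tool from the paper):

```python
def newton_minimize(grad, hess, x, iters=20):
    """Newton's method applied to the gradient: each step solves
    hess(x) * dx = -grad(x), driving the gradient to zero."""
    for _ in range(iters):
        x = x - grad(x) / hess(x)
    return x

# Minimize f(x) = (x - 3)**2 + 1: gradient 2*(x - 3), second derivative 2.
# For a quadratic objective a single Newton step already lands on the optimum.
x_star = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x=0.0)
print(x_star)  # 3.0
```

The same scheme in many dimensions replaces the division by a linear solve with the Hessian matrix, which is where adjoint-computed gradients (and Hessian-vector products) become essential.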
In real-life engineering, the systems that must be simulated are complex: even when they are modeled by classical mathematical equations, analytic resolution is totally out of reach. Thus, the equations must be discretized on the simulation domain, and then solved, for example, iteratively by a computer algorithm.
The split of a multihop, point-to-point TCP connection consists in replacing a plain, end-to-end TCP connection by a cascade of TCP connections. In such a cascade, connection n feeds connection n + 1 through some proxy node n. This technique is used in a variety of contexts. In overlay networks, proxies are often peers of the underlying peer-to-peer network. Split TCP has also been proposed, and largely adopted, in wireless networks at the wired/wireless interface, to separate links with vastly different characteristics. In order to avoid losses in the proxies, a backpressure mechanism is often used in this context.
In this paper we develop a model for such a split TCP connection aimed at the analysis of throughput dynamics on both links as well as of buffer occupancy in the proxy. The two main variants of split TCP are considered: that with backpressure and that without. The study consists of two parts: the first part is purely experimental and is based on ns2 simulations. It allows us to identify complex interaction phenomena between TCP flow rates and proxy buffer occupancy, which seem to have been ignored by previous work on split TCP. The second part of the paper is of a mathematical nature. We establish the basic equations that govern the evolution of such a cascade and prove some of the experimental observations made in the first part.
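The paper's equations are not reproduced here, but a toy discrete-time sketch in Python (entirely illustrative, with made-up rates, and far simpler than the model in the paper) shows why backpressure prevents losses at the proxy:

```python
def proxy_buffer(rate_up, rate_down, buf_cap, steps, backpressure):
    """Toy fluid model of a split TCP proxy: connection 1 fills the
    buffer at rate_up, connection 2 drains it at rate_down."""
    buf, dropped = 0.0, 0.0
    for _ in range(steps):
        # With backpressure, the upstream sender is throttled to the
        # free buffer space, so the buffer can never overflow.
        arrive = min(rate_up, buf_cap - buf) if backpressure else rate_up
        depart = min(buf + arrive, rate_down)
        level = buf + arrive - depart
        dropped += max(level - buf_cap, 0.0)    # losses at the proxy
        buf = min(level, buf_cap)
    return buf, dropped

# Upstream faster than downstream: without backpressure the buffer
# fills up and then overflows; with backpressure nothing is dropped.
print(proxy_buffer(5, 3, 10, 10, backpressure=False))  # (10.0, 10.0)
print(proxy_buffer(5, 3, 10, 10, backpressure=True))   # (7.0, 0.0)
```

The cost of backpressure, visible even in this toy, is that the first connection's throughput is coupled to the second's — precisely the kind of interaction between flow rates and buffer occupancy that the paper analyzes.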
I spent 29 years of my life with INRIA. I saw the beginning of the institute in 1967. I was appointed president of the institute in 1984 and left in 1996, after 12 years in that role.
Gilles Kahn joined IRIA in the late 1960s and made his entire career in the institute. He passed away while he was president, after a courageous fight against a dreadful illness, which unfortunately did not leave him any chance. I knew him for more than 35 years and I have a clear view of the role he played in the development of the institute. More broadly, I have seen his influence in the computer science community in France and in Europe. This article gives me the opportunity to understand what was behind such leadership and to recall some of the challenges that INRIA has faced during more than three decades. What we can learn from the past, and from the actions of former great leaders, is extremely helpful for the future.
This is probably the best way to be faithful to Gilles and to remain close to his thoughts.
Historical IRIA
Why IRIA?
INRIA was born as an evolution of IRIA.
IRIA was created in 1967 as a part of a set of decisions taken under the leadership of General de Gaulle. It was a time of bold decisions towards research at large, based on clear political goals.
Category theory and related topics of mathematics have been increasingly applied to computer science in recent years. This book contains selected papers from the London Mathematical Society Symposium on the subject which was held at the University of Durham. Participants at the conference were leading computer scientists and mathematicians working in the area and this volume reflects the excitement and importance of the meeting. All the papers have been refereed and represent some of the most important and current ideas. Hence this book will be essential to mathematicians and computer scientists working in the applications of category theory.
Gilles Kahn was born in Paris on April 17th, 1946 and died in Garches, near Paris, on February 9th, 2006. He received an engineering diploma from École Polytechnique (class of 1964), studied for a few years at Stanford, and then joined the computer science branch of the French Atomic Energy Commission (CEA), which was to become the CISI company. He joined the French research institute in computer science and control theory (IRIA, later renamed INRIA) in 1976. He stayed with this institute until his death, at which time he was its chief executive officer. He was a member of Academia Europaea from 1995 and a member of the French Academy of Sciences from 1997.
Gilles Kahn's scientific achievements
Gilles Kahn's scientific interests evolved from the study of programming language semantics to the design and implementation of programming tools and the study of the interaction between programming activities and proof verification activities. In plain words, these themes addressed three questions. How do programmers tell a computer to perform a specific task? What tools can we provide to programmers to help them in their job? In particular, how can programmers provide guarantees that computers will perform the task that was requested?
Programming language semantics
In the early 1970s, Gilles Kahn proposed that programs should be described as collections of processes communicating through a network of channels, a description style that is now known as Kahn networks.
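This style can be mimicked with Python generators, where blocking reads on FIFO channels become lazy stream consumption; the three processes below are a made-up example, not taken from Kahn's paper. Because each process only performs blocking reads on its input channels, the streams it produces are deterministic, independent of scheduling — the key property of Kahn networks.

```python
import itertools

def nats():
    """Producer process: emits 0, 1, 2, ... on its output channel."""
    n = 0
    while True:
        yield n
        n += 1

def scale(k, inp):
    """Process with one input channel: multiplies each value by k."""
    for x in inp:
        yield k * x

def add(left, right):
    """Process with two input channels, read by blocking in turn;
    the output stream is the pointwise sum of the inputs."""
    for a, b in zip(left, right):
        yield a + b

# Wire the network: the n-th output is 2*n + 3*n = 5*n.
net = add(scale(2, nats()), scale(3, nats()))
print(list(itertools.islice(net, 5)))  # [0, 5, 10, 15, 20]
```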
By
Yves Bertot, INRIA Sophia-Antipolis Méditerranée
We describe how the formal description of a programming language can be encoded in the Coq theorem prover. Four aspects are covered: natural semantics (as advocated by Gilles Kahn), axiomatic semantics, denotational semantics, and abstract interpretation. We show that most of these aspects have an executable counterpart and describe how this can be used to support proofs about programs.
Introduction
Nipkow demonstrated that theorem provers could be used to formalize many aspects of programming language semantics. In this paper, we push the experiment further to show that this formalization effort also has a practical outcome: it makes it possible to integrate programming tools inside theorem provers in a uniform way. We revisit the study of operational semantics, denotational semantics, axiomatic semantics, and the weakest pre-condition calculus, as already studied by Nipkow, and we add a small example of a static analysis tool based on abstract interpretation.
To integrate the programming tools inside the theorem prover, we rely on the ability to execute the algorithms after they have been formally described and proved correct, a technique known as reflection. We also implemented a parser, so that the theorem prover can be used as a playground to experiment on sample programs. We performed this experiment using the Coq system. The tools that are formally described can also be “extracted” outside the proof environment, so that they become stand-alone programs.
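The static analysis in question is not detailed here, but a classical example of abstract interpretation is a sign analysis; the Python sketch below is an illustrative stand-in for such a tool, not the formalization from the paper. It computes properties of arithmetic (here, the sign of a result) without running the concrete computation.

```python
# Abstract domain: signs of integers, with TOP for "sign unknown".
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):
    """Abstract addition on signs."""
    if ZERO in (a, b):
        return b if a == ZERO else a      # x + 0 has the sign of x
    return a if a == b else TOP           # pos + neg: sign unknown

def abs_mul(a, b):
    """Abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO                       # x * 0 = 0, even if x unknown
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG         # rule of signs

# (-5) * (-2) is known positive without computing the product:
print(abs_mul(alpha(-5), alpha(-2)))  # pos
# 3 + (-7): the sign cannot be determined in this domain.
print(abs_add(alpha(3), alpha(-7)))   # top
```

Proving each abstract operator sound with respect to the concrete one (e.g. that `abs_mul` over-approximates integer multiplication) is exactly the kind of statement that a theorem prover can check once the analysis is formalized.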
By
Thierry Coquand, Chalmers University of Technology and Göteborg University,
Yoshiki Kinoshita, National Institute of Advanced Industrial Science and Technology (AIST), Japan,
Bengt Nordström, Chalmers University of Technology and Göteborg University,
Makoto Takeyama, National Institute of Advanced Industrial Science and Technology (AIST), Japan
This paper presents a formal description of a small functional language with dependent types. The language contains data types, mutually recursive/inductive definitions, and a universe of small types. The syntax, semantics, and type system are specified in such a way that the implementation of a parser, interpreter, and type checker is straightforward. The main difficulty is to design the conversion algorithm in such a way that it works for open expressions. The paper ends with a complete implementation in Haskell (around 400 lines of code).
Introduction
We are going to describe a small language with dependent types: its syntax, operational semantics, and type system. This is in the spirit of the paper “A simple applicative language: Mini-ML” by Clément, Despeyroux, and Kahn, where they explain a small functional language. From them we have borrowed the idea of using patterns instead of variables in abstractions and let-bindings. It gives an elegant way to express mutually recursive definitions. We also share with them the view that a programming language should not only be formally specified, but that it should also be possible to reason about the correctness of its implementation. There should be a small step from the formal operational semantics to an interpreter, and likewise from the specification of the type system to a type checker.
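To illustrate the borrowed idea of patterns in let-bindings, here is a tiny Python sketch of an interpreter for an invented expression language (not the language of the paper, and without dependent types): a let-binding matches a pattern — a variable name or a pair of patterns — against the value of the bound expression, binding several variables at once.

```python
# Expressions as nested tuples:
#   ("lit", n), ("var", x), ("add", e1, e2),
#   ("pair", e1, e2), ("let", pattern, e1, e2)
# A pattern is a variable name or a pair of patterns.

def match(pattern, value, env):
    """Bind a pattern against a value, extending the environment."""
    if isinstance(pattern, str):
        return {**env, pattern: value}
    p1, p2 = pattern
    v1, v2 = value
    return match(p2, v2, match(p1, v1, env))

def eval_exp(e, env):
    tag = e[0]
    if tag == "lit":  return e[1]
    if tag == "var":  return env[e[1]]
    if tag == "add":  return eval_exp(e[1], env) + eval_exp(e[2], env)
    if tag == "pair": return (eval_exp(e[1], env), eval_exp(e[2], env))
    if tag == "let":                          # let pattern = e1 in e2
        _, pat, e1, e2 = e
        return eval_exp(e2, match(pat, eval_exp(e1, env), env))
    raise ValueError(tag)

# let (x, y) = (1, 2) in x + y
prog = ("let", ("x", "y"), ("pair", ("lit", 1), ("lit", 2)),
        ("add", ("var", "x"), ("var", "y")))
print(eval_exp(prog, {}))  # 3
```

The gap between this interpreter and the natural-semantics rules it implements is deliberately small — the "small step from the formal operational semantics to an interpreter" advocated above.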
This is a self-contained, modern treatment of the algebraic theory of machines. Dr Holcombe examines various applications of the idea of a machine in biology, biochemistry, and computer science, and also gives a rigorous treatment of the way in which these machines can be decomposed and simulated by simpler ones. This treatment is based on fundamental ideas from modern algebra. Motivation for many of the newer results is provided by way of applications, so this account should be accessible and valuable for those studying applied algebra or theoretical computer science at advanced undergraduate or beginning postgraduate level, as well as for those undertaking research in those areas.
This book is concerned with techniques for formal theorem-proving, with particular reference to Cambridge LCF (Logic for Computable Functions). Cambridge LCF is a computer program for reasoning about computation. It combines the methods of mathematical logic with domain theory, the basis of the denotational approach to specifying the meaning of program statements. Cambridge LCF is based on an earlier theorem-proving system, Edinburgh LCF, which introduced a design that gives the user flexibility to use and extend the system. A goal of this book is to explain the design, which has been adopted in several other systems. The book consists of two parts. Part I outlines the mathematical preliminaries, elementary logic and domain theory, and explains them at an intuitive level, giving reference to more advanced reading; Part II provides sufficient detail to serve as a reference manual for Cambridge LCF. It will also be a useful guide for implementors of other programs based on the LCF approach.
Most books on data structures assume an imperative language like C or C++. However, data structures for these languages do not always translate well to functional languages such as Standard ML, Haskell, or Scheme. This book describes data structures from the point of view of functional languages, with examples, and presents design techniques so that programmers can develop their own functional data structures. It includes both classical data structures, such as red-black trees and binomial queues, and a host of new data structures developed exclusively for functional languages. All source code is given in Standard ML and Haskell, and most of the programs can easily be adapted to other functional languages. This handy reference for professional programmers working with functional languages can also be used as a tutorial or for self-study.
The aim of this volume is to present developments in semantics and logics of computation in a way that is accessible to graduate students. The book is based on a summer school at the Isaac Newton Institute and consists of a sequence of linked lecture courses by international authorities in the area. The whole set has been edited to form a coherent introduction to these topics, most of which have not been presented pedagogically before.