In Chapter 2, we presented several complexity classes based on the approximability degree of the problems they contain: APX, PTAS, FPTAS and their parallel counterparts NCX, NCAS, FNCAS. These classes, defined in terms of degree of approximability, are known as the “computationally defined” approximation classes. We have seen that in order to show that a problem belongs to one of these classes, one can present an approximation algorithm or a reduction to a problem with known approximability properties. In this chapter we show that in some cases the approximation properties of a problem can be obtained directly from its syntactic definition. These results are based on Fagin's characterization of the class NP in terms of existential second-order logic [Fag75], which constitutes one of the most interesting connections between logic and complexity. Papadimitriou and Yannakakis discovered that Fagin's characterization can be adapted to deal with optimization problems, and they defined classes of approximation problems according to their syntactic characterization [PY91]. The importance of this approach comes from the fact that approximation properties of optimization problems can be derived from their characterization in terms of logical quantifiers. Papadimitriou and Yannakakis defined the complexity classes MaxSNP and MaxNP, which contain optimization versions of many important NP problems, and showed that many MaxSNP problems are in fact complete for the class. The paradigmatic MaxSNP-complete problem is Maximum 3SAT: given a Boolean formula F written in conjunctive normal form with three literals in each clause, find a truth assignment that satisfies the maximum number of clauses.
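To make the object of study concrete, the following Haskell sketch (our own illustration with invented names; the book itself gives no code) evaluates how many clauses of a 3CNF formula a given assignment satisfies, and computes the exact optimum by brute force. The point of MaxSNP is of course approximation; the exponential search below only pins down the quantity being approximated.

    type Literal    = (Int, Bool)      -- (variable index, is the literal positive?)
    type Clause     = (Literal, Literal, Literal)
    type Assignment = [Bool]           -- truth value of variable i at position i

    -- Does assignment a satisfy the clause?
    satisfies :: Assignment -> Clause -> Bool
    satisfies a (l1, l2, l3) = any holds [l1, l2, l3]
      where holds (v, positive) = (a !! v) == positive

    -- Number of clauses of formula f satisfied by assignment a.
    satCount :: Assignment -> [Clause] -> Int
    satCount a f = length (filter (satisfies a) f)

    -- Exact optimum over all 2^n assignments (exponential; illustration only).
    maxSat :: Int -> [Clause] -> Int
    maxSat n f = maximum [satCount a f | a <- assignments n]
      where assignments 0 = [[]]
            assignments k = [b : a | b <- [False, True], a <- assignments (k - 1)]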
In the previous chapter we gave a brief introduction to the topic of parallel approximability, keeping the discussion at an intuitive level and trying to convey the main ideas behind it. In this chapter we review in a more formal setting the basic definitions about PRAM computation and approximation that we shall use throughout the text, and we introduce the tools and notation needed. This chapter is not a deep study of these topics; there exists a large body of literature for the reader who wishes to go further into the theory of PRAM computation, among others the books by Akl [Akl89], Gibbons and Rytter [GR88], Reif [Rei93] and JáJá [JaJ92]. There are also excellent short surveys on this topic; we mention only the one by Karp and Ramachandran [KR90] and the collection of surveys from the ALCOM school in Warwick [GS93]. Similarly, many survey papers and lecture notes have been written on the topic of approximability, among others the doctoral dissertation of V. Kann [Kan92] with a recent update to its appendix of problems [CK95], the lecture notes of R. Motwani [Mot92], the survey by Ausiello et al. [ACP96], which includes a survey of non-approximability methods, the recent book edited by Hochbaum [Hoc96] and a forthcoming book by Ausiello et al. [ACG+96].
The PRAM Model of Computation
We begin this section by giving a formal introduction to our basic model of computation, the Parallel Random Access Machine (PRAM). A PRAM consists of a number of sequential processors, each with its own memory, working synchronously and communicating with one another through a common shared memory.
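As a toy illustration of the synchrony in this model (our own sketch, not part of the formal definitions; all names are invented), one PRAM step can be rendered in Haskell as follows: every processor reads the same old shared memory and proposes a write, and all writes are then committed together.

    import Data.Map (Map)
    import qualified Data.Map as Map

    type Memory    = Map Int Int
    type Processor = Memory -> (Int, Int)  -- read old memory, propose (address, value)

    -- One synchronous step. (Concurrent writes to the same address would need
    -- a CRCW resolution rule; here the earlier processor in the list wins.)
    step :: [Processor] -> Memory -> Memory
    step procs mem = foldr (uncurry Map.insert) mem (map ($ mem) procs)

    -- Example: processors i = 0..n-2 compute mem'[i] = mem[i] + mem[i+1] in
    -- one step, the basic move of a logarithmic-depth parallel sum.
    sumStep :: Int -> [Processor]
    sumStep n = [ \m -> (i, get i m + get (i + 1) m) | i <- [0 .. n - 2] ]
      where get i m = Map.findWithDefault 0 i m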
This article describes some mathematical methods for verifying properties of programs in higher-order, functional languages. We focus on methods for reasoning about equivalence of expressions. Such methods are often based upon a denotational semantics of the programming language in question, using the mathematical theory of domains (Scott 1982; Plotkin 1981a). Here I will describe some methods which are based upon operational semantics (Plotkin 1981b). These operationally-based techniques have several attractive features. For example, there is relatively little mathematical overhead involved in developing the basic theory—in contrast with that needed to develop the existence and properties of recursively defined domains, the sine qua non of denotational semantics. On the other hand, domain theory provides an extremely powerful tool for analysing recursive program constructs. I believe that any serious attempt to develop a useful theory for verification of program properties has to involve both operational and denotational techniques.
Highlights
The main purpose of this article is to advertise the usefulness, for proving equivalences between functional programs, of co-inductive techniques more familiar in the context of concurrency theory (de Roever 1978; Park 1981; Milner 1989). They were imported into the world of lambda calculus and functional programming by several people: see Dybjer and Sander (1989); Abramsky (1990); Howe (1989, 1996); Egidi, Honsell, and della Rocca (1992); and Gordon (1994).
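To convey the flavour of these co-inductive techniques, here is a small Haskell sketch (ours, not from the article): two syntactically different stream programs are proved equal by exhibiting a bisimulation, that is, a relation under which related streams have equal heads and related tails.

    data Stream a = Cons a (Stream a)

    -- Two syntactically different programs denoting the stream 0,1,2,3,...
    nats :: Stream Integer
    nats = from 0 where from n = Cons n (from (n + 1))

    nats' :: Stream Integer
    nats' = Cons 0 (mapS (+ 1) nats')

    mapS :: (a -> b) -> Stream a -> Stream b
    mapS f (Cons x xs) = Cons (f x) (mapS f xs)

    -- Equality of infinite objects cannot be tested, only approximated to any
    -- finite depth; the co-inductive proof exhibits a bisimulation (here, the
    -- relation linking `from n` with the n-th tail of nats', for every n) and
    -- covers all depths at once.
    takeS :: Int -> Stream a -> [a]
    takeS 0 _           = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs

    -- takeS 5 nats == takeS 5 nats'   -- True, for any depth in place of 5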
This book is based on material presented at a summer school on Semantics and Logics of Computation that took place at the Isaac Newton Institute for Mathematical Sciences, Cambridge UK, in September 1995. The school was sponsored by the EU ESPRIT Basic Research Action on Categorical Logic in Computer Science (CLiCS I & II) and aimed to present some modern developments in semantics and logics of computation in a series of graduate-level lectures. Most of the material presented here has not previously been accessible in such a coherent and digestible form. This Preface gives a thematic overview of the contents of the book. It also briefly sketches the history of the two CLiCS projects which came to an end with the summer school.
Games, proofs and programs
One of the most exciting recent developments in programming semantics has been the use of games and strategies to provide a more fine-grained semantics than that provided by domain-theoretic models. This ‘intensional semantics’ aims to combine the good mathematical and structural properties of traditional denotational semantics with the ability to capture dynamical and interactive aspects of computation, and to embody the computational intuitions of operational semantics. More pragmatically, game semantics has been used to solve long-standing ‘full abstraction’ problems for PCF (a simple language for Programming Computable Functions of higher type) and Idealised Algol. In other words, games have provided syntax-independent models of these languages which precisely capture the operationally defined notion of ‘contextual equivalence’ of program expressions.
This course is an introduction to research that tries to connect the proof theory of classical logic with computer science. We omit important and standard topics, among them the connection between the computational interpretation of classical logic and the programming operator callcc.
Instead, here we put the emphasis on actual mathematical examples. We analyse the following questions: what can be the meaning of a non-effective proof of an existential statement, a statement that claims the existence of a finite object that satisfies a decidable property? Is it clear that a non-effective proof has a meaning at all? Can we always say that this proof contains implicitly, if not explicitly, some effective witness? Is this witness unique? By putting the emphasis on actual mathematical examples, we follow Gentzen who founded natural deduction by analysing concrete mathematical examples, like Euclid's proof of the infinity of prime numbers.
We present general methods that can be used to compute such witnesses effectively. The three methods we shall analyse are:
The negative translation. This is a quite natural translation of classical logic into intuitionistic logic, which is both simple and powerful; its defining clauses are sketched after this list.
A game-theoretic interpretation of classical logic, which is a suggestive reformulation of the Gentzen-Novikov sequent calculus, and may in some cases be more convenient than the negative translation.
The use of formal topology. This consists of realising certain conditions in a topological model. In some cases it is then possible to realise these conditions effectively, even though an “absolute” realisation of these conditions is impossible.
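For the first method, the following clauses give one standard formulation, the Gödel-Gentzen negative translation (the course may use a variant):

    \begin{align*}
      A^{N}               &= \neg\neg A                        && \text{($A$ atomic)}\\
      (A \land B)^{N}     &= A^{N} \land B^{N}\\
      (A \lor B)^{N}      &= \neg(\neg A^{N} \land \neg B^{N})\\
      (A \to B)^{N}       &= A^{N} \to B^{N}\\
      (\forall x\, A)^{N} &= \forall x\, A^{N}\\
      (\exists x\, A)^{N} &= \neg \forall x\, \neg A^{N}
    \end{align*}

The key property is that $A$ is provable classically if and only if $A^{N}$ is provable intuitionistically, so a non-effective classical proof can be traded for an intuitionistic, and hence more constructive, one.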
The aim of these notes is to describe the monadic and incremental approaches to the denotational semantics of programming languages. This is done via the use of suitable typed metalanguages, which capture the relevant structure of semantic categories. The monadic and incremental approaches are formulated in the setting of a type-theoretic framework for the following reasons:
a type theory with dependent types allows a precise, concise and general description of the two approaches, based on signatures as abstract representations for languages;
there are various implementations (e.g. LEGO and Coq) which provide computer assistance for several type theories; without computer assistance it seems unlikely that either of the two approaches can go beyond toy languages.
On the other hand, the monadic and incremental approaches can already be described with a naive set-theoretic semantics. Therefore, knowledge of Domain Theory and Category Theory becomes essential only in Section 6.
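To indicate what the monadic approach looks like in practice, here is a minimal Haskell sketch (our own toy example; the notes work with typed metalanguages rather than Haskell). The semantic clauses are written uniformly in a monad, so exchanging the monad exchanges the notion of computation without rewriting the interpreter.

    data Expr = Lit Int | Add Expr Expr | Div Expr Expr

    -- With M = Maybe the semantics models partiality (division by zero); a
    -- state or continuation monad would model other effects with the same
    -- clauses.
    eval :: Expr -> Maybe Int
    eval (Lit n)   = return n
    eval (Add a b) = do
      x <- eval a
      y <- eval b
      return (x + y)
    eval (Div a b) = do
      x <- eval a
      y <- eval b
      if y == 0 then Nothing else return (x `div` y)

    -- eval (Div (Lit 6) (Add (Lit 1) (Lit 2)))  ==  Just 2
    -- eval (Div (Lit 6) (Lit 0))                ==  Nothing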
The presentation adopted differs from advanced textbooks on denotational semantics in the following aspects:
it makes significant use of type theory as a tool for describing languages and calculi, while this is usually done via a set of formation or inference rules;
it incorporates ideas from Axiomatic and Synthetic Domain Theory into metalanguages, while most metalanguages for denotational semantics are variants of LCF;
it stresses the use of metalanguages to give semantics via translation (using the monadic and incremental approaches), but avoids a detailed analysis of the categories used in denotational semantics.
The aim of these notes is to explain how games can provide an intensional semantics for functional programming languages, and for a theory of proofs. From the point of view of program semantics, the rough idea is that we can move from modelling computable functions (which give the ‘extensional’ behaviour of programs) to modelling ‘intensional’ aspects of the algorithms themselves. In proof theory, the tradition has been to consider syntactic representations of (what are presumably intended to be ‘intensional’) proofs; so the idea is to give a more intrinsic account of a notion of proof.
Three main sections follow this Introduction. Section 2 deals with games and partial strategies; it includes a discussion of the application of these ideas to the modelling of algorithms. Section 3 is about games and total strategies; it runs parallel to the treatment in Section 2, and is quite compressed. Section 4 gives no more than an outline of more sophisticated notions of game, and discusses them as models for proofs. Exercises are scattered through the text.
I very much hope that the broad outline of these notes will be comprehensible on the basis of little beyond an understanding of sequences (lists) and trees. However, the statements of some results and some of the exercises presuppose a little knowledge of category theory, of domain theory and of linear logic.
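In that spirit, the basic vocabulary of Section 2 can be suggested with nothing more than lists, as in the following Haskell sketch (our own encoding; the notes develop the definitions precisely):

    data Who = O | P deriving (Eq, Show)

    type Move = String
    type Play = [(Who, Move)]     -- the play so far, most recent move first

    -- A deterministic, partial strategy for P: after an O-move, either
    -- respond with a P-move or give no response.
    type Strategy = Play -> Maybe Move

    -- The copy-cat strategy simply repeats Opponent's last move; in game
    -- semantics, copy-cat strategies interpret identity morphisms.
    copycat :: Strategy
    copycat ((O, m) : _) = Just m
    copycat _            = Nothing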
The “classical” paradigm for denotational semantics models data types as domains, i.e. structured sets of some kind, and programs as (suitable) functions between domains. The semantic universe in which the denotational modelling is carried out is thus a category with domains as objects, functions as morphisms, and composition of morphisms given by function composition. A sharp distinction is then drawn between denotational and operational semantics. Denotational semantics is often referred to as “mathematical semantics” because it exhibits a high degree of mathematical structure; this is in part achieved by the fact that denotational semantics abstracts away from the dynamics of computation—from time. By contrast, operational semantics is formulated in terms of the syntax of the language being modelled; it is highly intensional in character; and it is capable of expressing the dynamical aspects of computation.
The classical denotational paradigm has been very successful, but it has some definite limitations. Firstly, fine-structural features of computation, such as sequentiality, computational complexity, and optimality of reduction strategies, have either not been captured at all denotationally, or not in a fully satisfactory fashion. Moreover, once languages with features beyond the purely functional are considered, the appropriateness of modelling programs by functions is increasingly open to question. Neither concurrency nor “advanced” imperative features such as local references have been captured denotationally in a fully convincing fashion.
Computational behaviours are often distributed, in the sense that they may be seen as spatially separated activities accomplishing a joint task. Many such systems are not meant to terminate, and hence it makes little sense to talk about their behaviours in terms of traditional input-output functions. Rather, we are interested in the behaviour of such systems in terms of their often complex patterns of stimuli/response relationships varying over time. For this reason such systems are often referred to as reactive systems.
Many structures for modelling reactive systems have been studied over the past 20 years. Here we present a few key models. Common to all of them is that they rest on an idea of atomic actions, over which the behaviour of a system is defined. The models differ mainly with respect to what behavioural features of systems are represented. Some models are more abstract than others, and this fact is often used in informal classifications of the models with respect to expressibility. One of our aims is to present principal representatives of models, covering the landscape from the most abstract to the most concrete, and to formalise the nature of their relationships by explicitly representing the steps of abstraction that are involved in moving between them. In following through this programme, category theory is a convenient language for formalising the relationships between models.
To give an idea of the role categories play, let us focus attention on transition systems as a model of parallel computation.
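Before doing so, a concrete rendering may help. The following Haskell sketch (ours; the chapter works abstractly) represents a labelled transition system by an initial state together with a set of transition triples; a morphism of such systems would map states to states and actions to actions, preserving the initial state and the transitions.

    data TS state act = TS
      { initial     :: state
      , transitions :: [(state, act, state)]   -- (source, action, target)
      }

    -- States reachable from s in one step by performing action a.
    next :: (Eq state, Eq act) => TS state act -> state -> act -> [state]
    next ts s a = [ t | (s', a', t) <- transitions ts, s' == s, a' == a ]

    -- Example: a one-place buffer alternating `in` and `out` actions.
    buffer :: TS Int String
    buffer = TS 0 [(0, "in", 1), (1, "out", 0)]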
Here is a sample of notations that might be useful to people who are considering Z. All are based on the discrete mathematics taught in Chapters 8 to 11.
In addition to Z itself, the Z family includes several object-oriented dialects including Object-Z, MooZ, OOZE, and Z++ [Stepney, Barden, and Cooper, 1992a; Stepney et al., 1992b; Lano and Haughton, 1993]. Some early contributors to Z went on to create a development method called B that includes a specification language and a tool for automating calculations and proofs [Lano and Haughton, 1995].
Of the other formal notations, VDM [Jones, 1990] is most similar to Z. Like Z, VDM is a model-based notation. You model a system by representing its state and a collection of operations that can change its state. VDM lacks the boxed paragraphs of Z and has nothing quite like the Z schema calculus. VDM stands for the Vienna Development Method. The VDM community emphasizes refinement, not just modelling. Z and VDM are compared in Hayes [1992b].
Combinations of conditions that define complex predicates can sometimes be made easier to grasp by presenting them in a two-dimensional tabular format. A particularly rigorous and comprehensive tabular notation was invented by Parnas and others [Parnas, 1994] and has been applied to nuclear reactor shutdown software. Leveson and colleagues invented a tabular notation called AND/OR tables and applied it to an aircraft collision avoidance system [Leveson et al., 1994].
This chapter presents a more realistic model for the graphical user interface we introduced in Chapter 6. It is based on the control console of a real medical device, but the same techniques can be applied to any system where the operator uses a pointing device such as a mouse to select items from on-screen windows and menus, and uses a keyboard to enter information into dialog boxes. Such facilities are provided by many software systems in wide use today, for example the X window system.
A graphical user interface is an example of a state transition system driven by events. This chapter explains how to model event-driven state transition systems in Z, and shows how to illustrate a Z text with a kind of state transition diagram called a statechart. This chapter also shows how to use Z to express designs that are partitioned into units or modules that are largely independent. In Z these units can include both data and the operations that act on it, so they can represent classes in object-oriented programming.
Events
A great advantage of a graphical user interface is that it allows users to choose operations in whatever order makes the most sense to them; it does not force them through a fixed sequence determined by the designers. All operations are always potentially available, although some operations might have to be disabled at certain times.
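To fix the idea of an event-driven state transition system (the chapter itself models this in Z; the Haskell sketch below, with invented event and mode names, is only a rendering of the idea): the state determines which events are currently enabled, and a disabled event simply leaves the state unchanged.

    data Mode  = Idle | Dialog deriving (Eq, Show)
    data Event = Select | OpenDialog | CloseDialog | Key Char deriving (Eq, Show)

    -- Which events the current mode permits.
    enabled :: Mode -> Event -> Bool
    enabled Idle   (Key _)     = False   -- typing is only meaningful in a dialog
    enabled Idle   CloseDialog = False
    enabled Dialog OpenDialog  = False
    enabled _      _           = True

    -- Disabled events leave the state unchanged; enabled ones may change mode.
    step :: Mode -> Event -> Mode
    step m e | not (enabled m e) = m
    step Idle   OpenDialog  = Dialog
    step Dialog CloseDialog = Idle
    step m      _           = m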
This chapter teaches a practical method for writing code from Z specifications that supplements intuition and experience with formal derivation.
The preceding Chapters 26 and 27 on refinement and program derivation show how to get from Z to code by purely formal methods, where each development step is a formula manipulation. As you must have realized, it is rarely necessary to develop an entire system in this completely formal way. The programming problems that arise within a single project usually present a range of difficulty. Large parts of the project may be so routine that there is no need for any formal description other than the code itself. Only a portion requires specification in Z. In this portion, you might refine only a fraction to a detailed design in Z. And in this fraction you might derive and verify only a page or two of code. The rest is so obvious that it can be translated to code by intuition and then verified by inspection.
Nevertheless, you can choose a strategy for implementing Z that you could justify formally by the methods of Chapters 26 and 27 if you were challenged to do so. This chapter presents such a strategy. When you have a formal specification, you can check designs and code rigorously if doubts remain after informal inspection.
The examples in this chapter are in C. They could easily be adapted to other programming languages.
Formal methods are not project management methods, but some programmers fear that using formal methods would impose a burdensome and inflexible way of working. This chapter should dispel that misconception and reassure you that formal methods are compatible with many different development methods and management styles. This chapter discusses dividing projects into stages, learning users' requirements, translating informal requirements to formal specifications, and validating formal specifications.
Work in stages
There is one assumption that underlies all formal methods: A programming project is divided into stages, where each stage produces a product that can be examined, reviewed, and assessed for correctness and other qualities.
Three products that must be produced by almost any programming project are the specification, which describes the behavior of the product; the design, which describes its internal structure; and the code, which is executable and is expressed in some particular programming language. Most projects produce other products as well, such as manuals and other materials for instructing users, assurance evidence such as test plans and test results, and so forth.
Working in stages is a central concept in every systematic software development method. Formal methods add these innovations: express the specification and design (not just the code) in a formal notation, and use formula manipulations (such as calculation and proof) to derive the products and check that they are correct.
Experienced programmers are often skeptical of programming methods that proceed in stages.