This part includes all the proof theoretic machinery and results presented in this book. First, in the short motivating Chapter 7, various semantic consequence relations are introduced. Then, in Chapter 8, the notion of a standard formal system is developed. Such a system is given by a set of axioms and has a proof structure based on modus ponens and necessitation (a rule designed to cope with the box connectives). Once developed, this proof theoretic machinery has to be justified (in the sense that it has to be shown to be correct and powerful enough). This is done by proving a completeness theorem. Chapter 9 contains a completeness result which is applicable to all standard systems, and hence can be regarded as rather superficial. The proof of this result is important, however, for the method used is applicable in many other situations. Chapter 10 contains a more refined completeness result which is widely, but not generally, applicable. This kind of completeness was first developed by Kripke, and it was this advance which brought modal logic out of the dark ages.
We have isolated three important properties of standard formal systems: being canonical and having the fmp, both of which imply the third property, being Kripke-complete. We have also seen many examples of systems with these properties. In this part we look more closely at the connection between these properties.
In each of Chapters 14 and 15 we look at a system which has the fmp but is not canonical. Both of these systems have independent interest. In Chapter 16 we consider a system which is canonical but does not have the fmp. This system is custom built to have these properties but may, in time, be found to have interest in its own right. Finally, in Chapter 17, we look at two systems, one of which has all three properties and the other having none of the properties. Furthermore, these two systems have precisely the same class of unadorned models.
Taken as a whole these four chapters hint at some of the complexities that can arise in modal logic.
For each signature I we have now defined two quite different entities: the polymodal language of signature I, and the class of structures (labelled transition structures) of signature I. These two entities must now be made to interact. Thus the structures will be made to support a semantics for the language, or, equivalently, the language will be used to describe properties of structures.
The polymodal language has the usual propositional facilities together with a family of new 1-ary connectives [i], one for each label i. We now wish to evaluate each formula of this language, i.e. determine whether a given formula φ is true or false. Of course, this cannot be done in isolation; we need to work in an appropriate context. To determine the truth value of φ we need three pieces of information together with an agreed procedure for using the information.
We need to know the truth values of the variables appearing in φ. As in the propositional case this information will be conveyed by a valuation; however, these modal valuations are more complicated than their propositional counterparts.
We need to know how to handle the propositional connectives. This will be done in exactly the same way as in the propositional language (i.e. using the defining truth tables of the connectives). In this sense, modal logic subsumes propositional logic.
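The evaluation procedure described above can be sketched in code. The following is a minimal illustration, not the book's formal definition: formulas are encoded as nested tuples, a valuation maps each variable to the set of worlds at which it is true, and the clause for the box connective quantifies over the successors of the current world. All names here (`holds`, the tuple encoding) are my own illustrative choices.

```python
# A minimal sketch of evaluating a monomodal formula at a world.
# Formulas are nested tuples: ('var', 'p'), ('not', f), ('and', f, g),
# ('box', f).  The encoding and names are illustrative only.

def holds(world, formula, relation, valuation):
    """Decide whether `formula` is true at `world`.

    relation : set of (u, v) pairs (the transition relation)
    valuation: maps a variable name to the set of worlds where it is true
    """
    op = formula[0]
    if op == 'var':
        return world in valuation[formula[1]]
    if op == 'not':
        return not holds(world, formula[1], relation, valuation)
    if op == 'and':
        return (holds(world, formula[1], relation, valuation)
                and holds(world, formula[2], relation, valuation))
    if op == 'box':  # [i]f: f must hold at every successor of `world`
        return all(holds(v, formula[1], relation, valuation)
                   for (u, v) in relation if u == world)
    raise ValueError(op)

# A three-world structure: 0 -> 1, 0 -> 2, with p true at worlds 1 and 2.
R = {(0, 1), (0, 2)}
V = {'p': {1, 2}}
print(holds(0, ('box', ('var', 'p')), R, V))  # True: p holds at both successors
print(holds(1, ('box', ('var', 'p')), R, V))  # True vacuously: 1 has no successors
```

Note how the box clause is vacuously true at a world with no successors; this small detail matters in several later results.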
The central concept of this book is that of a labelled transition structure or, for brevity, simply a structure. These are the relational structures used to support the standard semantics of polymodal languages. In their monomodal form they are known as Kripke structures or frames (because of their supporting role). The introduction of these structures into modal logic around 1960 brought about a considerable amount of clarification and insight, and stimulated a rapid development of the subject. The use of these structures is a powerful tool and elevates the subject above the rather tedious symbol shuffling and philosophical ramblings that used to be its forte.
However, these structures are not just the tools of modal logic. They occur naturally in many parts of mathematics and computer science. For instance, partial orderings, equivalence relations, graphs, automata, and process algebras all give examples of these transition structures. This points to a true perspective of the relationship between modal languages and their semantic structures.
The objective of modal logic is not an analysis of modal languages; it is not the study of certain formal systems and the relationships between these; it is not the construction of different proof styles and rules of inference, etc: these are merely techniques developed to help the practitioner towards his central aim. The main objective of modal logic is, no more and no less, the study of labelled transition structures.
This part begins the non-basic part of the book and is built around a theme of semantics-preserving morphisms. First, in Chapter 11, the notion of a bisimulation is developed. This is a kind of relation between structures designed to highlight the similarities between them which, as a side product, has the required semantics-preserving properties. The notion subsumes most other semantics-preserving morphisms.
Building on this notion, in Chapter 12 a method of constructing smaller structures from larger structures is described. This method of filtrations has completeness and decidability consequences for the systems to which it is applied. These are developed in Chapter 13.
In Chapter 3 I claimed that modal logic should be viewed as a tool for describing and analysing properties of structures (that is, of labelled transition structures). How can this be, and how effective is this tool? The kind of simple things we might want to know about a relation are whether it is reflexive, symmetric, transitive, confluent, etc. We may want to know whether one relation is included in another, or is the converse of another, or whether one relation can be decomposed as the composite of two others, etc. We may want to know more complicated things like whether a relation is well-founded, or whether one relation is the ⋆-closure of another.
Remarkably, these and many other properties are characterized by quite simple modal formulas. It is this characterizing ability which makes modal logic such a powerful tool. Once it is understood, it can be seen that modal logic is quite an extensive part of full second order logic, and it is the ability to capture second order properties which gives it its power.
Some examples
As an illustration of the kind of thing we are going to do we begin with a quite simple example of a correspondence result. In this result we focus on one particular label with its associated relation ≺ and connective □.
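One such correspondence can be checked by brute force on finite frames: a frame validates □p → p (under every valuation of p, at every world) exactly when ≺ is reflexive. The following sketch is my own illustration of this correspondence, not the book's proof; the function name and encoding are assumptions.

```python
# An illustrative brute-force check that a finite frame validates
# []p -> p under every valuation of p exactly when its relation is
# reflexive.  Names and encoding are illustrative only.
from itertools import chain, combinations

def validates_T(worlds, rel):
    """True iff []p -> p holds at every world under every valuation of p."""
    subsets = chain.from_iterable(
        combinations(worlds, k) for k in range(len(worlds) + 1))
    for p_worlds in map(set, subsets):          # every valuation of p
        for w in worlds:
            box_p = all(v in p_worlds for (u, v) in rel if u == w)
            if box_p and w not in p_worlds:     # []p true but p false
                return False
    return True

reflexive = {(0, 0), (1, 1), (0, 1)}
irreflexive = {(0, 1), (1, 0)}
print(validates_T([0, 1], reflexive))    # True
print(validates_T([0, 1], irreflexive))  # False
```

The check quantifies over all 2^|W| valuations of the single variable p, which is exactly what frame validity demands; this exhaustive quantification over valuations is what gives such correspondences their second order flavour.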
This part introduces and develops the Kripke style semantics for modal languages.
The supporting structures for this semantics, here called labelled transition systems but sometimes called frames or Kripke structures, are described in Chapter 3. Then in Chapter 4 these structures are enriched by valuations which enable us to give the semantics of the language (relative to an arbitrary valued structure). The semantics is given in terms of what is sometimes called a forcing relation. The concepts introduced in these two chapters are the most important in the whole book.
This Kripke (or forcing) semantics provides a link between the structures and the language, and we find that many properties of these structures can be captured by appropriate modal formulas. This idea, which is known as correspondence theory, is introduced and exemplified in Chapter 5. Chapter 6 is devoted to the proof of a correspondence result which covers many, but not all, of the cases.
This book is about ways of developing software that can be executed by parallel computers. Software is one of the most complex and difficult artefacts that humans build. Parallelism increases this complexity significantly. At the moment, parallel software is designed for specific kinds of architectures, and can only be moved from architecture to architecture by large-scale rewriting. This is so expensive that very few applications have been discovered for which the effort is worth it. These are mostly numeric and scientific.
I am convinced that the only way to move beyond this state of affairs and use parallelism in a wide variety of applications is to break the tight connection between software and hardware. When this is done, software can be moved with greater ease from architecture to architecture as new ones are developed or become affordable. One of the main claims of this book is that it is possible to do this while still delivering good performance on all architectures, although not perhaps the optimal wall-clock performance that obsesses the present generation of parallel programmers.
The approach presented in this book is a generalisation of abstract data types, which introduced modularity into software design. It is based on categorical data types, which encapsulate control flow as well as data representation. The semantics of powerful operations on a data type are decoupled from their implementation. This allows a variety of implementations, including parallel ones, without altering software.
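The decoupling of semantics from implementation can be illustrated with a familiar operation on lists. The sketch below is my own, using plain Python rather than the book's categorical notation: a reduction has a single specification, but whenever the combining operation is associative a balanced-tree schedule (the shape a parallel machine could evaluate in logarithmic depth) is an equally valid implementation of it.

```python
# Illustrative sketch: one specification of reduction, two
# implementations.  When op is associative both agree, so the
# tree-shaped (parallelisable) schedule implements the same semantics.

def reduce_seq(op, xs, unit):
    """Specification: left-to-right reduction."""
    acc = unit
    for x in xs:
        acc = op(acc, x)
    return acc

def reduce_tree(op, xs, unit):
    """Alternative implementation: a balanced tree of applications."""
    if not xs:
        return unit
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return op(reduce_tree(op, xs[:mid], unit),
              reduce_tree(op, xs[mid:], unit))

xs = list(range(1, 9))
print(reduce_seq(lambda a, b: a + b, xs, 0))   # 36
print(reduce_tree(lambda a, b: a + b, xs, 0))  # 36
```

The software is written against the specification; which schedule runs it is an implementation decision that can track the architecture.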
Parallel computation has been a physical reality for about two decades, and a major topic for research for at least half that time. However, given the whole spectrum of applications of computers, almost nothing is actually computed in parallel. In this chapter we suggest reasons why there is a great gap between the promise of parallel computing and the delivery of real parallel computations, and what can be done to close the gap.
The basic problem in parallel computation, we suggest, is the mismatch between the requirements of parallel software and the properties of the parallel computers on which it is executed. The gap between parallel software and hardware is a rapidly changing one because the lifespans of parallel architectures are measured in years, while a desirable lifespan for parallel software is measured in decades. The current standard way of dealing with this mismatch is to reengineer software every few years as each new parallel computer comes along. This is expensive, and as a result parallelism has only been heavily used in applications where other considerations outweigh the expense. This is mostly why the existing parallel processing community is so heavily oriented towards scientific and numerical applications – they are either funded by research institutions and are pushing the limits of what can be computed in finite time and space, or they are working on applications where performance is the only significant goal.
Chapter 2 covers desirable model properties and shows how categorical data types satisfy these properties. I developed this view of model properties during 1992 (an early survey of models using them appeared as [179]) and extended and refined it over the next two years. A preliminary version of Chapter 4 was given as a talk at the Workshop on Programming Tools for Parallel Machines at Alimini, Italy, in the summer of 1993.
Chapter 3 is based on Valiant's work which can be found in a series of papers [197, 199, 200]. The results on emulation on SIMD architectures appear in [178], although they seem to have been widely understood before that.
The construction of lists as a categorical data type in Chapter 5 follows the general presentation in Grant Malcolm's thesis [138]. An alternative presentation that emphasises the role of adjunctions in the CDT construction is due to Mike Spivey [189]. Much of the category theory on which this work depends was done in the Sixties [25]. The demonstration that lists can be efficiently implemented comes from [178].
The material on software development in Chapter 6 is a selection from a much larger range of material developed in what has become known as the Bird-Meertens formalism [17, 31–35]. The material on almost-homomorphisms (and the name) comes from work by Murray Cole [55].
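The idea behind almost-homomorphisms can be shown with a standard small example of my own devising (it is not taken from [55]): the average of a list is not itself a list homomorphism, but the pair (sum, length) is computed by one, and the average is a simple projection of that pair.

```python
# Illustrative almost-homomorphism: average is not a list homomorphism,
# but (sum, count) is, and average is a projection of it.

def avg_hom(pair_a, pair_b):
    """Associative combining operation on (sum, count) pairs."""
    (s1, n1), (s2, n2) = pair_a, pair_b
    return (s1 + s2, n1 + n2)

def average(xs):
    # Map each element to a (sum, count) pair, reduce with the
    # associative operation, then project.  Because avg_hom is
    # associative, the reduction can be evaluated in parallel.
    pairs = [(x, 1) for x in xs]
    total, count = pairs[0]
    for p in pairs[1:]:
        total, count = avg_hom((total, count), p)
    return total / count

print(average([2, 4, 6, 8]))  # 5.0
```

The pattern is general: enrich the result type until the computation becomes a homomorphism, exploit the associativity for parallel evaluation, and discard the extra components at the end.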
The development of operations to compute recurrences (Chapter 7) and the cost calculus for lists (Chapter 8) is joint work by myself and Wentong Cai during 1992, when he was a postdoctoral fellow.
In Chapter 2 we listed some of the requirements that a model for general-purpose parallel computation should satisfy: architecture independence, intellectual abstractness, having a software development methodology, having cost measures, having no preferred scale of granularity, and being efficiently implementable. In Chapter 3, we saw how results about emulation of arbitrary computations on architectures mean that efficient implementability can only be achieved by restricting the communication allowed in computations. In this chapter, we examine existing models and parallel programming languages and see how they measure up to these requirements.
Many of these models were not developed with such an ambitious set of requirements in mind, so it is not a criticism of them if they fail to meet some. Nevertheless, it provides a picture of the current situation. The absence of a popular or standard model, even in particular application domains, and the wide range of models that have been proposed, underline some of the difficulties of using parallelism in a general-purpose way that were discussed in Chapter 2. It is not possible to cover all proposed models for parallel computation, but a representative selection has been included.
There are several other good surveys of programming models from different perspectives. Crooks and Perrott [59] survey models for distributed-memory architectures, particularly those that provide abstractions for data partitioning and distribution. Turcotte [195] surveys models suitable for networks of workstations. Bal et al. [20] survey architecture-specific models.
The distinctions between models and programming languages are not easy to make.