When this book was first published, recent advances in human factors research had led to methodologies that integrated usability into the development of interactive systems. MUSE was one of the pioneering methods for Usability Engineering. It provides an environment in which human factors contributions can realise their full potential. MUSE supports active human factors involvement in both design specification and evaluation, and takes system development from user requirements to user interface design. Its methods describe how design errors can be identified, avoided or rectified throughout system development, and it provides support for inter-disciplinary design planning and coordination. It therefore ranks as one of the best-developed and most completely structured human factors methods. This book reviews the motivation for developing MUSE and provides readers with a manual for applying the method. It will be essential reading for all involved with systems development, whether from the HCI or software engineering communities, and can also be used to accompany a course.
This book describes the use of qualified types to provide a general framework for the combination of polymorphism and overloading. For example, qualified types can be viewed as a generalization of type classes in the functional language Haskell and the theorem prover Isabelle. These in turn are extensions of equality types in Standard ML. Other applications of qualified types include extensible records and subtyping. Using a general formulation of qualified types, the author extends the Damas/Milner type inference algorithm to support qualified types, which in turn specifies the set of all possible types for any term. In addition, he describes a new technique for establishing suitable coherence conditions that guarantee the same semantics for all possible translations of a given term. Practical issues that arise in concrete implementations are also discussed, concentrating in particular on the implementation of overloading in Haskell and Gofer, a small functional programming system developed by the author.
Modern computer networks now circle the world, but the transmission of information between them depends on the many different protocols that define the behaviour of the sender and receiver. It is therefore clear that the accurate description of these protocols is important if harmonious communication is to be maintained. In this book the authors use the formal specification language PSF to provide an unambiguous description of several communication protocols of varying levels of complexity, ranging from the alternating bit protocol to the token ring protocol. Beginners, as well as professionals in the field of communication protocols, will benefit from both the methods of specification described and the protocols discussed in this book.
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and to the simulation methods used for proving its correctness. In the first part, the authors concentrate on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The book's second part contains a detailed survey of important methods in this field, each of which is carefully analysed and shown either to be incomplete, with counterexamples to its application, or to be applicable whenever data refinement holds. This is established by proving, for the first time, that all these methods can be described and analysed in terms of two simple notions: forward and backward simulation. The book is self-contained, starting at advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
In this chapter we give a general treatment of structured declarations. This allows us to give an account of language constructs such as structs in C-like languages and objects in object-oriented languages.
In Section 9.2, we describe a big-step semantics of Bur. Then, in Section 9.3, we consider Coat, a small object-oriented language with classes and dynamically generated objects.
Records
In this chapter, we will use the term record to stand for a structured declaration. We shall allow record declarations inside records. Moreover, records may contain both variables and procedures; this is in contrast to languages such as C and Pascal that allow only the declaration of variables in structured declarations. By allowing procedure declarations as part of a record, we can now view a record as an object. Variables then correspond to the local variables of an object and procedures correspond to object methods. Later, in Section 9.3, we extend this understanding so that we can speak of objects that are dynamically generated instances of classes.
In Figure 9.1 we see an example of a program that has nested record declarations. The program consists of a block which has a record r1 that contains the declaration of a variable x, a procedure p and a record r2. The record r2 contains the declaration of two variables, y and z.
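Since the book's record notation is not shown here, the nested record declarations of Figure 9.1 can be sketched in Python, where a class with fields and methods plays the role of a record containing variables, a procedure and a nested record. The names r1, r2, x, y, z and p follow the description above; the body of the procedure p is invented for illustration.

```python
# Hypothetical Python rendering of the Figure 9.1 program:
# record r1 holds a variable x, a procedure p and a nested record r2,
# which in turn holds two variables y and z.

class R2:
    def __init__(self):
        self.y = 0
        self.z = 0

class R1:
    def __init__(self):
        self.x = 1
        self.r2 = R2()       # nested record declaration

    def p(self):             # procedure declared inside the record
        # The body is an assumption; it shows a procedure reading and
        # writing the record's variables, including the nested ones.
        self.r2.y = self.x + self.r2.z

r1 = R1()
r1.r2.z = 41
r1.p()
print(r1.r2.y)  # 42
```

Viewed this way, the record r1 is already very close to an object: x, r2.y and r2.z are its local variables and p is one of its methods.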
In this chapter we encounter our first examples of structural operational semantics, namely big-step and small-step semantics of arithmetic and Boolean expressions. These examples are on a small scale, but they still manage to introduce the principles that we shall be using throughout the remainder of the book.
A structural operational semantics is syntax-directed, and for this reason we first introduce the notion of abstract syntax. We use abstract syntax in this chapter to introduce the language Bims, which forms the core of almost all of the tiny programming languages considered in this book.
The rest of the chapter is devoted to introducing the basics of structural operational semantics. A central concept is that of a transition system. Transition systems are defined using transition rules.
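To make the idea of a transition system concrete, here is a minimal small-step evaluator for arithmetic expressions, sketched in Python. The tuple encoding of expressions and the left-to-right evaluation order are assumptions for illustration; the book defines its transition rules formally, not as code.

```python
# A small-step transition system for arithmetic expressions with the
# grammar a ::= n | a1 + a2 | a1 * a2, encoded as nested tuples:
# ('num', n), ('add', a1, a2), ('mul', a1, a2).

def step(a):
    """Perform one transition a => a', or return None if a is terminal."""
    if a[0] == 'num':
        return None                    # numerals are terminal configurations
    op, a1, a2 = a
    if a1[0] != 'num':
        return (op, step(a1), a2)      # reduce in the left operand first
    if a2[0] != 'num':
        return (op, a1, step(a2))      # then in the right operand
    n1, n2 = a1[1], a2[1]
    return ('num', n1 + n2 if op == 'add' else n1 * n2)

def evaluate(a):
    """Take transitions until a terminal configuration is reached."""
    while (nxt := step(a)) is not None:
        a = nxt
    return a[1]

result = evaluate(('add', ('num', 2), ('mul', ('num', 3), ('num', 4))))
print(result)  # 14
```

Each call to step corresponds to one transition of the system; evaluate chains transitions into a complete derivation sequence.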
The final section of the chapter briefly explores how we may now formulate and prove properties of a structural operational semantics. We shall return to this topic in later chapters.
Abstract syntax
In order to describe the behaviour of programs we must first present an account of the structure of programs, that is, their syntax. In program semantics we are not interested in syntax analysis – that is part of the theory of parsing. Instead, we are interested in a notion of abstract syntax that will allow us to describe the essential structure of a program. In other words, abstract syntax is not concerned with operator precedence etc.
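The point that abstract syntax ignores parsing details such as precedence and parentheses can be illustrated with a small tree datatype, here sketched with Python dataclasses (the constructor names are assumptions, not the book's notation):

```python
# Abstract syntax as trees: the concrete strings '2+3*4' and '2+(3*4)'
# both denote the single tree below, so precedence and parentheses
# disappear at this level of description.

from dataclasses import dataclass

@dataclass
class Num:
    n: int

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

tree = Add(Num(2), Mul(Num(3), Num(4)))
print(tree == Add(Num(2), Mul(Num(3), Num(4))))  # True
```

A structural operational semantics is then defined by cases on these tree constructors, which is exactly what makes it syntax-directed.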
This chapter gives a short introduction to the general principles of denotational semantics. We do this by giving a denotational semantics of Bims. It turns out that the semantics of while-loops poses a problem, as the obvious semantics is not compositional. However, at the end of the chapter we find a way to circumvent this problem of non-compositionality.
Background
Denotational semantics is a child of the 1960s and is based on the ground-breaking insights of Christopher Strachey and Dana Scott (Strachey, 1966, 1967; Scott and Strachey, 1971). Since then, a lot of work has gone into providing denotational semantics for existing programming languages such as Algol 60, Pascal, Smalltalk and Lisp.
Denotational semantics has also turned out to be a particularly useful tool in the design and implementation of languages such as Ada, CHILL and Lucid (Schmidt, 1986).
Denotational semantics can also be useful in static program analysis, which is the study of the correctness properties on the basis of analyses of the program text (as opposed to analyses of the behaviour of the program).
Simple examples of static analysis involve methods for code optimization. Here are two examples.
Consider the so-called constant-folding problem: when is it possible to replace an expression involving variables by a constant?
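Constant folding can be sketched as a recursive traversal of the abstract syntax tree that replaces any subexpression built only from constants by its value. The tuple encoding below is an assumed representation for illustration:

```python
# Constant folding as a simple static analysis on expression trees
# encoded as tuples: ('num', n), ('var', x), ('add', a1, a2), ('mul', a1, a2).
# A subexpression is folded exactly when both operands fold to constants.

def fold(a):
    if a[0] in ('num', 'var'):
        return a
    op, a1, a2 = a[0], fold(a[1]), fold(a[2])
    if a1[0] == 'num' and a2[0] == 'num':
        return ('num', a1[1] + a2[1] if op == 'add' else a1[1] * a2[1])
    return (op, a1, a2)

# x + (2 * 3) folds to x + 6; the variable x blocks further folding,
# since its value is unknown at analysis time.
print(fold(('add', ('var', 'x'), ('mul', ('num', 2), ('num', 3)))))
```

Note that the analysis inspects only the program text; it never runs the program, which is what makes it a static analysis.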
In Chapter 5 we saw how one can describe parallel programs using structural operational semantics. Concurrent behaviour is explained as the nondeterministic interleaving of the steps of the individual parallel components, so to describe this we employ a small-step semantics.
However, the language constructs considered in Chapter 5 allowed parallel components to communicate only by reading and modifying the values of shared variables. In this chapter we take a look at other paradigms for communication between concurrent processes.
The first language that we shall consider in Section 8.1 is very reminiscent of the process calculi CSP (Hoare, 1988) and CCS (Milner, 1989). The languages considered in later sections are all variants of this language.
Section 8.5 of the chapter is devoted to a short introduction to the area of behavioural equivalences for concurrent processes, and in particular to the notion of bisimulation.
CSP and CCS are early examples of process calculi, and it is in this setting that the idea of bisimulation equivalence became prominent. A process calculus is a simple notation designed for describing and reasoning about the behaviour of concurrent processes. The last section of the chapter describes a more recent and very important process calculus which has given rise to a large body of research. This process calculus, the π-calculus, is an expressive yet simple calculus that allows one to describe phenomena such as references and the transfer of references.
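The idea behind bisimulation can be sketched as a naive partition-refinement check on finite labelled transition systems: repeatedly split a partition of the states until states in the same block have matching transitions into the same blocks. The encoding and the two example processes below are illustrative assumptions.

```python
# A naive bisimilarity check on finite labelled transition systems.
# trans maps each state to its set of (label, successor) pairs.

def bisimilar(states, trans, s, t):
    blocks = [set(states)]          # start with all states in one block
    changed = True
    while changed:
        changed = False

        def signature(p):
            # The labels p can perform, paired with the block each move lands in.
            return frozenset(
                (a, next(i for i, b in enumerate(blocks) if q in b))
                for (a, q) in trans.get(p, set())
            )

        new_blocks = []
        for b in blocks:            # split each block by signature
            groups = {}
            for p in b:
                groups.setdefault(signature(p), set()).add(p)
            if len(groups) > 1:
                changed = True
            new_blocks.extend(groups.values())
        blocks = new_blocks
    return any(s in b and t in b for b in blocks)

# p performs a then b; q also performs a then b, via its own states.
trans = {'p': {('a', 'p1')}, 'p1': {('b', 'p2')},
         'q': {('a', 'q1')}, 'q1': {('b', 'q2')}}
states = ['p', 'p1', 'p2', 'q', 'q1', 'q2']
print(bisimilar(states, trans, 'p', 'q'))  # True
```

When the loop stabilises, the blocks form the largest bisimulation, so two states are bisimilar exactly when they end up in the same block.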