This book is about the union of two important paradigms in programming languages, namely, higher-order languages and concurrent languages. Higher-order programming languages, often referred to as “functional programming” languages, are languages that support functions as first-class values. The language used here is the popular higher-order language Standard ML (SML) [MTH90, MTHM97], which is the most prominent member of the ML family of languages. In particular, the bulk of this book focuses on concurrent programming using the language Concurrent ML (CML), which extends SML with independent processes and higher-order communication and synchronization primitives. The power of CML is that a wide range of communication and synchronization abstractions can be programmed using a small collection of primitives.
A concurrent program is composed of two or more sequential programs, called processes, that execute (at least conceptually) in parallel. The sequential execution of each process is independent of the others, but the processes must also interact via shared resources in order to achieve their common purpose. In this book, we are concerned with the situation in which the concurrency and process interaction are explicit. This is in contrast with implicitly parallel languages, such as parallel functional languages [Hud89, Nik91, PvE93] and concurrent logic programming languages [Sha89]. The choice of language mechanisms used for process interaction is the key issue in concurrent programming language design.
A natural application of concurrency is the management of multiple independent tasks. For example, building a large C program involves a number of individual compilations, each of which is run as a separate UNIX command. Since these compilations are independent, they may be run at the same time (possibly on different machines). In this chapter, we describe the implementation of a “parallel” build system using CML.
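To give a flavor of this, the following is a minimal sketch, not the book's implementation, of running a list of independent build jobs concurrently in CML (assuming the CML structure has been pre-opened). The names runAll and worker are illustrative, and each job is abstracted as a function of type unit -> unit. One thread is spawned per job, and the main thread waits for every job to report completion on a shared channel.

    (* A minimal sketch, not the book's implementation: run independent build
     * jobs concurrently and wait for all of them to finish.
     *)
    fun runAll (jobs : (unit -> unit) list) = let
          val doneCh : unit chan = channel ()
          fun worker job () = (job (); send (doneCh, ()))
          in
            (* one thread per independent job *)
            List.app (fn job => ignore (spawn (worker job))) jobs;
            (* wait for one completion message per job *)
            List.app (fn _ => recv doneCh) jobs
          end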
The problem
The basic problem is that we are given a set of objects, and a set of dependencies between the objects. Associated with some objects is an action that describes how to build the object; other objects (e.g., source files) do not have associated actions. Taken together, they form an acyclic dependency graph, whose topological order defines the order in which objects should be built. We use the term antecedents to denote the nodes that a node depends on, and successors to denote the nodes that depend on it. The nodes of the graph are classified into internal nodes (those that have non-zero in-degree), leaf nodes (those with no antecedents), and the root node (which has no successors). We restrict ourselves to graphs with exactly one root. For the graph to be well formed, any internal node should have an associated action. Leaf nodes may also have actions.
Also associated with each object is a timestamp that tells when the object was last built or modified.
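As a concrete illustration, the dependency graph might be represented in SML along the following lines; the names and types here are hypothetical and are not the book's actual code. Each object records its antecedents, an optional build action, and a timestamp.

    (* A hypothetical representation of the dependency graph (illustrative only).
     * Each object records its antecedents, an optional action for (re)building
     * it, and its timestamp.
     *)
    datatype obj = OBJ of {
        name        : string,
        antecedents : obj list,               (* the objects this one depends on *)
        action      : (unit -> unit) option,  (* NONE for, e.g., source files *)
        timestamp   : Time.time               (* when last built or modified *)
      }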
The design of CML has been driven by practical experience. In particular, the mechanism of first-class synchronous operations is motivated by the fundamental conflict between selective communication and abstraction. This chapter explains the rationale for the design of CML, and especially for first-class synchronous operations. It is aimed at readers interested in language design, and is not required for understanding the remainder of the book. In this chapter, we focus on core CML, namely synchronous message passing plus the event combinators; the other synchronization mechanisms found in CML, such as mailboxes, I-variables, and M-variables, can be viewed as derived forms. Some of the discussion here repeats earlier arguments, but is included for coherence.
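For example, a buffered mailbox can be programmed from the core primitives alone. The following is a sketch of the idea (assuming the CML structure is pre-opened); it is illustrative and not necessarily the implementation used by CML's own mailboxes, and the names mbox, mailbox, mbSend, and mbRecv are invented for this example. A server thread holds the queue of pending messages and mediates between senders and receivers.

    (* A sketch of a buffered mailbox built from core CML primitives. *)
    datatype 'a mbox = MB of {inCh : 'a chan, outCh : 'a chan}

    fun mailbox () = let
          val inCh = channel ()
          val outCh = channel ()
          (* The server's state is its queue, kept as a pair of lists. *)
          fun loop ([], []) = loop ([recv inCh], [])
            | loop (front as x::r, rear) = select [
                  wrap (recvEvt inCh, fn y => loop (front, y::rear)),
                  wrap (sendEvt (outCh, x), fn () => loop (r, rear))
                ]
            | loop ([], rear) = loop (rev rear, [])
          in
            spawn (fn () => loop ([], []));
            MB{inCh = inCh, outCh = outCh}
          end

    fun mbSend (MB{inCh, ...}, x) = send (inCh, x)  (* sender does not wait for a receiver *)
    fun mbRecv (MB{outCh, ...}) = recv outCh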
Basic design choices
As surveyed in Chapter 2, there are many possible choices for the design of a concurrent language. CML chooses message passing over shared memory, synchronous communication over asynchronous, and simple rendezvous over extended rendezvous. This section argues in favor of these choices.
While SML is an imperative language, its design greatly encourages a mostly functional style of programming. Mutable values must be declared explicitly as such, and there is syntactic overhead on their use. For these reasons, extending SML with shared-memory concurrency primitives is not true to the “spirit” of the language. Message passing, on the other hand, encourages a mostly functional programming style that fits well with ML. As we have seen, much of the state in typical CML programs is represented as immutable arguments to the tail-recursive functions that implement threads.
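For instance, a simple counter service can keep its state purely in the argument of its loop function. The sketch below uses illustrative names (startCounter, reqCh, replyCh) and assumes the CML structure is pre-opened.

    (* A sketch of the mostly functional style: the counter's state is the
     * immutable argument n of the tail-recursive loop, not a ref cell.
     *)
    fun startCounter (reqCh : unit chan, replyCh : int chan) = let
          fun loop n = (
                recv reqCh;          (* wait for a request *)
                send (replyCh, n);   (* reply with the current count *)
                loop (n+1))          (* the "updated state" is just a new argument *)
          in
            ignore (spawn (fn () => loop 0))
          end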
Concurrent programming is the task of writing programs consisting of multiple independent threads of control, called processes. Conceptually, we view these processes as executing in parallel, but in practice their execution may be interleaved on a single processor. For this reason, we distinguish between concurrency in a programming language, and parallelism in hardware. We say that operations in a program are concurrent if they can be executed in parallel, and we say that operations in hardware are parallel if they overlap in time.
Operating systems, where there is a need to allow useful computation to be done in parallel with relatively slow input/output (I/O) operations, provide one of the earliest examples of concurrency. For example, during its execution, a program P might write a line of text to a printer by calling the operating system. Since this operation takes a relatively long time, the operating system initiates it, suspends P, and starts running another program Q. Eventually, the output operation completes and an interrupt is received by the operating system, at which point it can resume executing P. In addition to introducing parallelism and hiding latency, as in the case of slow I/O devices, there are other important uses of concurrency in operating systems. Using interrupts from a hardware interval timer, the operating system can multiplex the processor among a collection of user programs, which is called time-sharing. Most time-sharing operating systems allow user programs to interact, which provides a form of user-level concurrency.
A principal use of concurrent programming is in the implementation of distributed systems. A distributed system consists of processes running in different address spaces on logically different processors. Because the processes are physically disjoint, there are a number of issues that arise in distributed systems that are not present in concurrent programming:
• Communication latency is significantly higher over a network than between threads running in the same address space.
• Processors and network links can go down, and come back up, during the execution of a distributed program.
• Programs running on different nodes may be compiled independently, which means that type security cannot be guaranteed statically.
For these reasons, the synchronous model used in CML does not map well to the distributed setting. A different programming model is required for dealing with remote communication.
Although concurrent programming languages, such as CML, may not directly provide a notation for distributed programming, they do have an important rôle in implementing distributed systems. A distributed system is made up of individual programs running on different machines, which must communicate with each other. Managing this communication is easier in a concurrent language. Furthermore, multiple threads of control can help hide the latency of network communication. For these reasons, most distributed programming languages and toolkits provide concurrent programming features.
This chapter explores a distributed implementation of Linda-style tuple spaces (described in Section 2.6.3).
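To fix intuitions, a Linda-style tuple space supports adding tuples to a shared space, and removing or reading tuples that match a template. A hypothetical SML signature along these lines might look as follows; this is only a sketch, not the interface developed in the chapter, and Linda's in operation is spelled in' here because in is a reserved word in SML.

    (* A hypothetical interface for Linda-style tuple spaces (illustrative only). *)
    signature TUPLE_SPACE =
      sig
        type ts
        type tuple
        type template
        val out : ts * tuple -> unit        (* add a tuple to the space *)
        val in' : ts * template -> tuple    (* remove a matching tuple; blocks until one exists *)
        val rd  : ts * template -> tuple    (* read a matching tuple without removing it *)
      end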
Concurrent programming differs from sequential programming in several significant ways. Conceptually, we can view the execution of a concurrent program as an interleaving of the sequential execution of its constituent processes. Since there are many possible interleavings, the execution of a concurrent program is nondeterministic; i.e., different interleavings may produce different results. In effect, a concurrent program defines a partial order on its actions, whereas a sequential program defines a total order. This nondeterminism is both the bane and boon of concurrent programming: on the one hand, it creates additional correctness problems, while on the other hand, it provides flexibility and a more natural program structure. As argued in Chapter 1, the choice of programming notation can help the programmer control the complexity of concurrency, while reaping its benefits.
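As a tiny illustration of this nondeterminism in CML (assuming the CML structure is pre-opened), consider two threads sending on the same channel; which message the main thread receives first depends on the interleaving chosen at run time. The function name race is invented for this example.

    (* A sketch of nondeterminism: the result may be ("a", "b") or ("b", "a"),
     * depending on how the two sending threads are interleaved.
     *)
    fun race () = let
          val ch : string chan = channel ()
          in
            spawn (fn () => send (ch, "a"));
            spawn (fn () => send (ch, "b"));
            (recv ch, recv ch)
          end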
A concurrent language typically consists of a sequential core (or sub-language), extended with support for concurrency. The concurrency support can be divided into three different kinds of mechanism (a small CML sketch illustrating all three follows the list):
• A mechanism for introducing independent sequential threads of control, called processes. The process creation mechanism can be either static, restricting the program to a fixed number of processes, or dynamic, allowing new processes to be created “on-the-fly.”
• A mechanism for processes to communicate. Communication involves exchanging data, either through shared memory locations (e.g., variables) or by explicit message passing.
• A mechanism for processes to synchronize. Synchronization restricts the ordering of execution in otherwise independent threads, and is used to limit the program's nondeterminism where necessary.
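The following sketch shows all three mechanisms in CML (assuming the CML structure is pre-opened; the names doubler and example are illustrative): spawn creates a new process, a channel carries the communication, and the synchronous send and recv operations provide the synchronization.

    (* A sketch of the three mechanisms in CML.  The doubler thread repeatedly
     * receives a number and sends back its double.
     *)
    fun doubler (inCh : int chan, outCh : int chan) = let
          fun loop () = (send (outCh, 2 * recv inCh); loop ())
          in loop () end

    fun example () = let
          val inCh  = channel ()
          val outCh = channel ()
          in
            ignore (spawn (fn () => doubler (inCh, outCh)));  (* process creation *)
            send (inCh, 21);                                  (* communication *)
            recv outCh                                        (* synchronization on the reply *)
          end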
This chapter is the first of three tutorial chapters on the basic features and programming techniques of CML. This chapter focuses on the features that make up CML's semantic core. The next chapter continues with a discussion of the two most important styles of CML programming: process networks and client-server protocols. Chapter 5 completes the tutorial with further discussion of CML's synchronization and communication mechanisms.
Sequential programming
As is the case with most concurrent languages, CML consists of a sequential core language — Standard ML — extended with concurrency primitives. The individual processes in a CML program are programmed using the features of SML. While we conceptually view CML as a programming language, it is actually implemented as a collection of modules on top of SML/NJ. The aspects of CML described in this chapter all belong to the core structure of this library, which is named CML. To reduce notational clutter, most of the examples in this chapter are assumed to be given in an environment where the CML structure has been pre-opened (i.e., all of its bindings are at top-level). The CML structure's interface is described in Appendix A, which contains an abridged version of the CML Reference Manual.
CML is a message-passing language, which means that processes are viewed as executing in independent address spaces with their only communication being via messages. Since SML provides updatable references, however, this independence is a fiction that must be maintained by programming style and convention.
As was already mentioned in Chapter 2, correctness of an implementation essentially means that the corresponding diagrams commute weakly. Recall that there are four possible ways in which this can be defined, each giving rise to its own notion of simulation. This is depicted in Figures 4.1–4.4 for a single operation (note the direction of the inner arrows).
In this chapter the subtle differences between these notions of simulation are studied. Such differences must be taken into account, e.g., when concatenating simulation diagrams. We also investigate how these notions behave under vertical stacking and how they are related to each other. The outcome has serious consequences for the value of U- and U⁻¹-simulation.
With the necessary technical machinery at hand, we are finally able to show how data invariants can be used to convert partial abstraction relations into total ones.
Then we analyze soundness and completeness of simulation as a method for proving data refinement.
We undertake most of our investigations in a purely semantic set-up, suppressing the distinction between syntax and semantics as much as possible.
Figure 4.1 represents U-simulation, and Figure 4.2 represents L-simulation. The diagrams in Figures 4.3 and 4.4 represent U⁻¹-simulation and L⁻¹-simulation, respectively. Because a concrete operation can be less nondeterministic than the corresponding abstract operation, we say that the diagrams commute weakly. This weak form of commutativity is expressed by “⊆” in the following definitions. (Strong commutativity would be expressed by “=”.)
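Schematically, writing α for the abstraction relation read as going from the abstract level to the concrete level, A for an abstract operation, C for the corresponding concrete operation, and “;” for relational composition, the two L-shaped conditions take roughly the following form (this is only a sketch of their shape; the precise formulations, including initialization and finalization, are those given in the definitions of the text):

    L-simulation (forward/downward):    α ; C  ⊆  A ; α
    L⁻¹-simulation (backward/upward):   C ; α⁻¹  ⊆  α⁻¹ ; A

The U- and U⁻¹-shaped conditions arise from the remaining two orientations of the inner arrows in Figures 4.1 and 4.3.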
Composing Simulation Diagrams
To obtain a compositional theory of simulation, we need sufficiently strong conditions under which these kinds of simulation also hold for composed diagrams.
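For L-simulation, for example, horizontal composition comes essentially for free: if each concrete operation simulates its abstract counterpart, then so does their sequential composition. A sketch of the argument, using the schematic reading of L-simulation given above, with C1, C2 concrete operations and A1, A2 the corresponding abstract ones:

    Assume  α ; C1 ⊆ A1 ; α  and  α ; C2 ⊆ A2 ; α.  Then
        α ; (C1 ; C2)  =  (α ; C1) ; C2
                       ⊆  (A1 ; α) ; C2      (first assumption, monotonicity of ";")
                       =  A1 ; (α ; C2)
                       ⊆  A1 ; (A2 ; α)      (second assumption, monotonicity of ";")
                       =  (A1 ; A2) ; α.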
The goal of this monograph is the introduction of, and comparison between, various methods for proving implementations of programs correct. Although these methods are illustrated mainly by applying them to correctness proofs of implementations of abstract data structures, the techniques developed apply equally well to proving correctness of implementations in general. For we shall prove that all these methods are only variations on one central theme: that of proof by simulation, of which we analyze at least 13 different formulations.
As the central result we prove that these methods either imply or are equivalent to L-simulation (also called forward or downward simulation in the literature) or a combination of L- with L⁻¹-simulation (the latter is also called backward or upward simulation). Since, as shown by Hoare, He, and Sanders, only the combination of these forms of simulation is complete, this immediately establishes when these methods are complete, namely, when they are equivalent to this combination.
Our motivation for writing this monograph is that we believe that in this area of computer science (as well as in various other areas) the duty of universities is not to train students in particular methods, but rather to give students insight into both the similarities and the differences between methods such as VDM, Z, the methods advocated by Reynolds and Hehner, and methods more directly based on Hoare Logic or predicate transformers. The reason for this conviction is that computer science develops far too quickly for us to believe that any of these methods will survive in its present form.