The Amoeba distributed operating system supports the transaction as its communication primitive. The protocol that the Amoeba system uses to carry out sequences of transactions reliably and efficiently is analyzed in terms of process algebra. The design goals are formulated as process algebra equations and it is established that one of them is not met. This can be repaired by adding an extra transition. Subsequently it is verified that the revised version meets its specifications.
It has been observed that formal verification methods for mathematical proofs, computer programs, communication protocols and the like are usually illustrated by ‘toy’ examples and that such proofs tend to be discouragingly long. In order to demonstrate that it is feasible to verify a ‘real-life’ communication protocol by means of process algebra, we picked one from the literature.
In his Ph.D. thesis, Mullender investigates issues he considered while developing the Amoeba distributed operating system. Section 3.2.4 of the thesis describes a transaction protocol to which we will refer as the Amoeba protocol. The preceding sections describe the design goals that this protocol is supposed to satisfy. He does not give a formal verification that his protocol meets these criteria. In fact, it turns out that one of them is not met. Note that this only applies to the simplified version of the protocol that appears in the thesis; the actual implementation uses a much more complicated version in which this mistake is not found.
Section 1 of this article gives the minimum background information necessary for understanding the rest of the article.
In Section 2 the design goals are formulated in English and in terms of process algebra.
By
L. Kossen, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands,
W. P. Weijland, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands
In designing VLSI-circuits it is very useful, if not necessary, to construct the specific circuit by placing simple components in regular configurations. Systolic systems are circuits built up from arrays of cells and are therefore very suitable for formal analysis and induction methods. In two examples, correctness proofs are given using bisimulation semantics with asynchronous cooperation. These examples have also been worked out by Hennessy in a setting of failure semantics with synchronous cooperation. Finally the notion of process creation is introduced and used to construct machines with unbounded capacity.
INTRODUCTION
In this article we will present simple descriptions of so-called systolic systems. Such systems can be viewed as a large integration of identical cells in such a way that the behaviour of the total system strongly resembles the behaviour of the individual cells. In fact the total system behaves like one of its individual cells ‘on a larger scale’.
For example, one can think of a machine sorting arrays of numbers up to a certain maximum length. Suppose we need a machine handling arrays that are much longer. A typical ‘systolic approach’ to this problem would be to interconnect the smaller machines in such a way that the total circuit sorts arrays of greater length. As a matter of fact this specific example will be worked out in the following sections. In designing VLSI-circuits (short for very large scale integrated circuits) it is very useful, if not necessary, to construct the specific circuit by placing simple components in regular configurations. Otherwise one loses all intuition about the behaviour of the circuit that is eventually constructed.
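The interconnection idea can be illustrated with a small sequential sketch (an assumption for illustration only, not the circuit-level description worked out later in the article): each cell in a chain keeps the largest value it has seen so far and passes the smaller value on to its neighbour, so a chain of n cells sorts n numbers.

```python
def systolic_sort(values):
    """Sequential model of a systolic sorting chain: one cell per input,
    each cell retaining the largest value it has seen and forwarding the
    smaller value to the next cell in the chain."""
    cells = []  # cells[i] is the value currently held by cell i
    for v in values:
        for i in range(len(cells)):
            if cells[i] < v:
                # cell i keeps the larger value and forwards the smaller
                cells[i], v = v, cells[i]
        cells.append(v)  # the forwarded value settles in a fresh cell
    return cells[::-1]   # cells hold values largest-first
```

Because every cell runs the same local rule, a longer chain of identical cells sorts longer arrays, which is exactly the ‘larger scale’ behaviour described above.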
Let x be a process which can perform an action a when it is in state s. In this article we consider the situation where x is placed in a context which blocks a whenever x is in s. The option of doing a in state s is redundant in such a context, and x can be replaced by a process x′ which is identical to x, except for the fact that x′ cannot do a when it is in s (irrespective of the context). A simple, compositional proof technique is presented, which uses information about the traces of processes to detect redundancies in a process specification. As an illustration of the technique, a modular verification of a workcell architecture is presented.
INTRODUCTION
We are interested in the verification of distributed systems by means of algebraic manipulations. In process algebra, verifications often consist of a proof that the behaviour of an implementation IMPL equals the behaviour of a specification SPEC, after abstraction from internal activity: τI(IMPL) = SPEC.
The simplest strategy to prove such a statement is to first derive the transition system (process graph) for the process IMPL with the expansion theorem, apply an abstraction operator to this transition system, and then simplify the resulting system to the system for SPEC using the laws of (for instance) bisimulation semantics. This ‘global’ strategy, however, is often not very practical due to combinatorial state explosion: the number of states of IMPL can be of the same order as the product of the numbers of states of its components. Another serious problem with this strategy is that it provides almost no ‘insight’ into the structure of the system being verified.
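A minimal instance of the scheme τI(IMPL) = SPEC may help fix ideas (a standard textbook-style example, assumed here for illustration rather than taken from the article): a one-place buffer assembled from a sender S and a receiver R that communicate internally over port 3.

```latex
\begin{align*}
  \mathit{SPEC} &= r_1(d)\cdot s_2(d)\cdot \mathit{SPEC}\\
  S &= r_1(d)\cdot s_3(d)\cdot S, \qquad
  R = r_3(d)\cdot s_2(d)\cdot R\\
  \mathit{IMPL} &= \partial_H(S \parallel R), \qquad
  H = \{s_3(d), r_3(d)\}, \quad I = \{c_3(d)\}\\
  \tau_I(\mathit{IMPL})
    &= r_1(d)\cdot \tau \cdot s_2(d)\cdot \tau_I(\mathit{IMPL})
     = r_1(d)\cdot s_2(d)\cdot \tau_I(\mathit{IMPL})
     = \mathit{SPEC}
\end{align*}
```

The internal communication c3(d) is renamed to τ by the abstraction operator and then absorbed by the τ-laws, so the composed system is provably equal to the one-line specification.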
By
J. A. Bergstra, Programming Research Group, University of Amsterdam, P.O. Box 41882, 1009 DB Amsterdam, The Netherlands, Department of Philosophy, State University of Utrecht, Heidelberglaan 2, 3584 CS Utrecht, The Netherlands,
J. W. Klop, Department of Software Technology, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands, Department of Mathematics and Computer Science, Free University, P.O. Box 7161, 1007 MC Amsterdam, The Netherlands
This article serves as an introduction to the basis of the theory that will be used in the rest of this book. To be more precise, we will discuss the axiomatic theory ACPT (Algebra of Communicating Processes with abstraction), with additional features added, which is suitable for both specification and verification of communicating processes. As such, it can be used as background material for the other articles in the book, where all basic axioms are gathered. But we do not address ourselves exclusively to readers with previous exposure to algebraic approaches to concurrency (or, as we will call it, process algebra); newcomers to this type of theory should also find enough here to get started. For a more thorough treatment of the theory, we refer to, which will be revised, translated and published in this CWI Monograph series. There, most proofs can also be found; we refer also to the original papers where the theory was developed. This article is an abbreviated version of reference.
Our presentation will concentrate on process algebra as it has been developed since 1982 at the Centre for Mathematics and Computer Science, Amsterdam (see), since 1985 in cooperation with the University of Amsterdam and the University of Utrecht. This means that we make no attempt to give a survey of related approaches though there will be references to some of the main ones.
This paper is not intended to give a survey of the whole area of activities in process algebra.
We acknowledge the help of Jos Baeten in the preparation of this paper.
In this book, we give applications of the theory of process algebra, known by the acronym ACP (Algebra of Communicating Processes), as it has been developed since 1982 at the Centre for Mathematics and Computer Science, Amsterdam (see), since 1985 in cooperation with the University of Amsterdam and the University of Utrecht. An important stimulus for this book was given by the ESPRIT contract no. 432, An Integrated Formal Approach to Industrial Software Development (Meteor). The theory itself is treated in, which will be revised, translated and published in this series. The theory is briefly reviewed in the first article in this book, An introduction to process algebra, by J.A. Bergstra and J.W. Klop.
This book gives applications of the theory of process algebra. By the term process algebra we mean the study of concurrent or communicating processes in an algebraic framework. We endeavour to treat communicating processes in an axiomatic way, just as, for instance, the study of mathematical objects such as groups or fields starts with an axiomatization of the intended objects. The axiomatic method which will concern us is algebraic in the sense that we consider structures which are models of some set of (mostly) equational axioms; these structures are equipped with several operators. Thus we use the term ‘algebra’ in the sense of model theory.
By
Richard A Volz, Dept. of Computer Science, Texas A&M Univ., College Station, Texas, U.S.A., 77843.,
Padmanabhan Krishnan, Dept. of Elec. Eng. & Comp. Sci., The Univ. of Michigan, Ann Arbor, Mich. U.S.A., 48109.,
Ronald Theriault, Dept. of Computer Science, Texas A&M Univ., College Station, Texas, U.S.A., 77843.
This paper describes the design and implementation of a Distributed Ada system. The language is not well defined with respect to distribution, and any implementation for distributed execution must make a number of decisions regarding the language. The objectives in the implementation described here are to remain as close to the current definition of Ada as possible, and to learn through experience what changes are necessary in future versions. The approach we take translates a single Distributed Ada program into a number of Ada programs (one per site), each of which may then be compiled by an existing Ada compiler. Issues discussed include the ramifications of sharing of data types, objects, subprograms, tasks and task types. The implementation techniques used in the translator are described. We also develop a model of the performance of our system and validate it through performance benchmarks.
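The one-program-into-many split can be sketched in the abstract. The function and data shapes below are hypothetical illustrations of the general idea, not the actual translator: each library unit is assigned to a site, and any unit referenced from a different site is represented there by a communication stub.

```python
def split_program(units, site_of):
    """Sketch of per-site program generation for a distributed program.

    units:   {unit_name: set of unit names it references}
    site_of: {unit_name: site identifier it is assigned to}
    Returns {site: {'local': [units compiled at this site],
                    'stubs': {remote units needing a stub here}}}
    """
    sites = {}
    for name, refs in units.items():
        s = site_of[name]
        entry = sites.setdefault(s, {'local': [], 'stubs': set()})
        entry['local'].append(name)  # unit is compiled on its own site
        for r in refs:
            if site_of[r] != s:
                # cross-site reference: replace by a stub that forwards
                # the call over the network to the owning site
                entry['stubs'].add(r)
    return sites
```

Each resulting per-site unit list could then be fed to an ordinary Ada compiler, which is the property the paper's approach relies on.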
INTRODUCTION
The importance of distributed systems cannot be over-emphasized, especially with the reduction in the cost of high speed connection between powerful processing elements. Distributed computing has made inroads into many important areas such as manufacturing, avionic systems and space systems. The cost of developing software for such systems, however, is reaching astronomical proportions [1]. A major concern is the creation of software tools to economically harness the increased computing power.
Central to distributed software development is the language used to program these distributed devices.
By
Brian Dobbing, Alsys Limited Newtown Road, Henley on Thames, Oxon RG9 1EN, England,
Ian Caldwell, Alsys Limited Newtown Road, Henley on Thames, Oxon RG9 1EN, England
This paper describes firstly a general model for implementing Ada in a distributed or parallel system using existing compilation systems and without extensions to, or restrictions on, the Ada language definition. It then describes an instance of the model, namely how to implement an application across a network of transputers in Ada using the current Alsys Ada Compilation System for the Transputer.
INTRODUCTION
Much debate has already taken place regarding the inadequacies of Ada to support a single program running on a distributed or parallel architecture. This has led to a set of twelve requirements from the Parallel/Distributed Systems Working Group for consideration by the Ada 9X Project Requirements Team [DoD89]. Whilst we await Ada 9X, it is very important to be able to demonstrate that current Ada can be used to program distributed systems efficiently, without compromising Ada's goals of security and portability. This document describes how Ada can be used in this way in the general case and also gives an example of distributed Ada on the Inmos transputer. The transputer has been chosen primarily as a precursor to a study commissioned by the European Space Agency into the problems of, and recommendations for, mapping Ada onto a multi-transputer network for on-board space applications.
The intention of the general model is to be able to demonstrate support for the needs of distributed systems such as:
program partitioning and configuration;
dynamic reconfiguration and fault tolerance;
different perception of time in different partitions;
This paper presents the current position of the York Distributed Ada Project. The project has developed an approach to the programming of loosely coupled distributed systems in Ada, this being based on a single Ada program consisting of virtual nodes which communicate by message passing. A prototype development environment has been produced to support this approach, and an example distributed embedded system has been built. Preliminary work has been undertaken on mechanisms to support the construction of fault-tolerant systems within the approach.
INTRODUCTION
The Ada programming language, together with an appropriate support environment, is well suited to the development of large software engineering applications. Many such projects involve programming embedded computer systems, which often include loosely coupled distributed processing resources — collections of processing elements which are connected by some communication system but which share no common memory. It is now a commonly accepted view (Wellings 1987) that the Ada language by itself does not provide adequate support for the programming of these systems. The York Distributed Ada (YDA) Project has addressed this problem and developed an approach which allows Ada programs to be constructed for execution on distributed systems. (The YDA project was originally funded through the UK's Alvey Software Engineering Directorate as part of the ASPECT (Hall 1985) project. It is currently funded by the Admiralty Research Establishment.) The current position of the project is presented in this paper.
By
Anders Ardö, Department of Computer Engineering University of Lund, P.O. Box 118 S-221 00 Lund, Sweden,
Lars Lundberg, Department of Computer Engineering University of Lund, P.O. Box 118 S-221 00 Lund, Sweden
Now that the internal speed of computers is close to the physical limitations of electronic devices (cf. [Wil83, Mea83]), parallelism is the main way to increase computing capacity. This conclusion has been obvious for quite some time, and multiprocessor systems have attracted much attention in recent years.
In order to achieve parallelism, new computer structures and internal organizations are needed. There are, however, still no general solutions to those problems. Finding general and efficient ways to organize for parallelism in computer systems is one main interest of computer architecture research today.
One application of multiprocessors that is likely to increase rapidly is based on the use of parallel languages. As parallel languages have not been generally available, software designers still have very little knowledge of how to take advantage of program parallelism. Now that parallel languages like Ada are becoming readily available, the programming community will take advantage of the new possibilities they offer.
There are strong reasons to believe that parallelism in programming will be used even aside from the possibilities of increasing speed with parallel hardware. Many computational problems have an inherent parallelism that will obviously be exploited once parallelism has become part of the programmers' daily life. This will also increase program understandability [Hoa78]. Real-time programming is an obvious example for which these arguments are relevant, but as experience grows the same will prove true for a variety of application areas.
This paper will present a study of practical design decisions relevant to the retargeting of a traditional compilation system to a distributed target environment. The knowledge was gathered during the course of Honeywell's Distributed Ada project which involved the retargeting of a full commercial Ada compilation system to a distributed environment. The goal of the project was to create a compilation system which would allow a single unmodified Ada program to be fragmented and executed in a distributed environment.
The Distributed Ada Project
The trend in embedded system architectures is shifting from uniprocessor systems to networks of multiple computers. Advances in software tools and methodologies have not kept pace with advances in using distributed system architectures. In current practice, the tools designed for developing software on uniprocessor systems are used even when the target hardware is distributed. Typically, the application developer factors the hardware configuration into software design very early in the development process and writes a separate program for each processor in the system. In this way, software design gets burdened with hardware information that is unrelated to the application functionality. The paradigm is also weak in that no compiler sees the entire application. Because of this, the semantics of remote operations are likely to be different from local operations and the type checking that the compiler provides is defeated for inter-processor operations.
The task of programming distributed applications in Ada may be addressed in several ways. Most of these require the application developer to factor the hardware configuration into software design very early in the development process. The resulting software is sensitive to changes in hardware, does not lend itself to design iteration, is not easily transportable across different hardware configurations, and is not stable against changes during the lifecycle of the application.
In Section 3, we describe an approach that aims at separation of concerns between program design and program partitioning for distributed execution. The entire application is written as a single Ada program using the full capabilities of the language for program structuring, separate compilation, and type checking. Then in a distinct second phase of design, the program is partitioned and prepared for distributed execution. Advantages of a two-phase design approach are discussed. Section 4 reviews related work and presents a comparative evaluation. Section 5 describes the notation used to express program partitioning. Section 6 revisits the issue of what Ada entities should be distributable.
Two implementations of this approach have been completed and tested with the Ada Compiler Validation Capability (ACVC) test-suite. Implementation issues, and the key features of our implementation approach are presented in an accompanying paper.
CLASSIFICATION OF APPROACHES
The Ada language does not provide explicit language support for distribution.
By
D. Auty, SofTech, USA,
A. Burns, University of Bradford, UK,
C. W. McKay, University of Houston - Clear Lake, USA,
C. Randall, GHG Corporation, USA,
P. Rogers, University of Houston - Clear Lake, USA
Perhaps the greatest challenge facing Ada is in the domain of the large distributed real-time system. Because of the long lead time associated with such complex applications, no real experience of the use of Ada in this type of domain has yet been gained. Nevertheless there are projects of a large and complex nature that are committed to the use of Ada, even though the full potential of the language has yet to prove itself in this challenging domain.
The Portable Common Execution Environment (PCEE) project is a research effort addressing the life cycle support of large, complex, non-stop, distributed computing applications with Mission And Safety Critical (MASC) components. Such applications (for example the International Space Station — Freedom) typically have extended life-time (e.g., 30 years) requirements. PCEE focuses on the system software, the interface to applications and the system architecture necessary to reliably build and maintain such systems. The requirements extend from the target system environment to the integration environment, and ultimately to the host environment. The integration environment serves as the single logical point of integration, deployment, and configuration control whereas system development occurs in the host environment. Life cycle issues include an integrated approach to the technologies (environments, tools, and methodologies) and theoretical foundations (models, principles, and concepts) that span these three environments. The scope of the effort is necessarily broad. There are, however, substantial research foundations to support development across the breadth of the project.
By
Judy M Bishop, Department of Electronics and Computer Science, The University, Southampton, England,
Michael J Hasling, Department of Electronics and Computer Science, The University, Southampton, England
Although Ada is now ten years old, there are still no firm guidelines as to how the distribution of an Ada program onto multiprocessors should be organised, specified and implemented. There is considerable effort being expended on identifying and solving problems associated with distributed Ada, and the first aim of this paper is to set out where the work is being done, and how far it has progressed to date. In addition to work of a general nature, there are now nearly ten completed distributed Ada implementations, and a second aim of the paper is to compare these briefly, using a method developed as part of the Stadium project at the University of Southampton. Much of Southampton's motivation for getting involved in distributed Ada has been the interest from the strong concurrent computing group, which has for several years taken a lead in parallel applications on transputers. The paper concludes with a classification of parallel programs and a description of how the trends in distributed Ada will affect users in the different groups.
COLLECTIVE WORK ON DISTRIBUTED ADA
The major forums where work on distributed Ada is progressing are Ada UK's International Real-Time Issues Workshop, the Ada 9X Project, SIGAda ARTEWG and AdaJUG CARTWG. Reports of these meetings appear regularly in Ada User (published quarterly by Ada UK) and Ada Letters (published bi-monthly by ACM SIGAda). The status of their activities is summarised here.
Although Ada is now reaching its adolescence, distributed Ada is still in its infancy. The extent of the problems yet to be solved and the multitude of proposed solutions present a very real dilemma for prospective implementors and users alike. How does one specify a distributed program? What parts of Ada are allowed to be distributed? Will the underlying hardware configuration matter? Can the program be made fault tolerant and reliable in the face of processor failure? How much effort will it take to move an existing Ada program onto a multiprocessor system? Will the proposed new Ada Standard (Ada 9X) address distributed issues?
These are just some of the questions that arise, and there is considerable effort being expended, world-wide, in answering them. However, much of this work is being conducted in small working groups, and the interim results are published only in condensed form, if at all. The aim of this book is to open the debate to as wide an audience as possible, heightening the level of awareness of the progress that has been made to date, and the issues that still remain open.
The symposium on which this book is based was held at the University of Southampton on 11–12 December 1989 and attended by nearly 100 people.
A recent trend in computer engineering has been the replacement of large uniprocessor based proprietary architectures by multiple microprocessor based designs employing various interconnection strategies. While these multiprocessor based systems offer significant performance and economic advantages over uniprocessor systems, not all prospective users are able or willing to adapt their applications to execute as multiple concurrent streams.
The Ada programming language is well suited to multiprocessor systems as it allows the programmer to direct the use of concurrency through the use of the Ada tasking mechanism. The avoidance of automatic distribution of the program by the compiler and the choice of the Ada task as the unit of distribution greatly simplify the development of Ada software for multiprocessor architectures.
For performance reasons, the inter-processor communications path should offer low latency and high transfer rates. Shared memory supports these characteristics and a multiprocessor system, where all memory can be accessed by all processors, has proven to be a suitable platform for a parallel Ada implementation.
This paper discusses the implementation and architecture of a parallel Ada system that allows up to twenty processors to co-execute the same Ada program with true concurrency. Particular attention is given to the design of the Ada runtime and the interface between the runtime and the underlying operating system, as these parts of the system must be “multi-threaded” throughout in order to minimize bottle-necks. The paper concludes with the description of a 1000 MIPS Ada engine currently under development.
By
Robert Dewar, New York University Ada/Ed Research Group,
Susan Flynn, New York University Ada/Ed Research Group,
Edmond Schonberg, New York University Ada/Ed Research Group,
Norman Shulman, New York University Ada/Ed Research Group
The Ada multi-tasking model is one in which tasks can run on separate processors and memory is either non-shared (local to one task), or shared (referenced by more than one task). It would therefore seem that mapping Ada onto a multi-processor architecture with both local and shared memory should be straightforward. This paper examines the difficulties in mapping Ada onto the IBM RP3 which is an example of such an architecture. In practice there are a number of difficult problems, the most significant of which is the inability to determine at compile time which variables are shared. The RP3 has a flexible shared memory architecture, and an important purpose of the Ada/RP3 project is to investigate possible models for implementation of Ada, with a view to determining whether modifications or enhancements of Ada are desirable to ensure optimal use of such architectures.
INTRODUCTION
The NYU Ada/Ed system consists of a front end and interpreter written entirely in C. This system is a direct descendant of the original SETL interpreter, and has been ported to a wide variety of machines [KS84].
Our current research involves porting Ada/Ed to the IBM RP3, an experimental multi-processor with shared memory [P87]. The front end is essentially unchanged, except for the addition of a set of pragmas described later, but the backend is being modified to interface with proprietary IBM code generating technology, and the runtime library is being rewritten to take advantage of the multi-processor architecture.
By
A.B. Gargaro, Computer Sciences Corporation Moorestown, New Jersey, USA,
S.J. Goldsack, Department of Computing Imperial College London, UK,
R.A. Volz, Department of Computer Science Texas A&M University, USA,
A.J. Wellings, Department of Computer Science University of York, UK
The Ada programming language was designed to provide support for a wide range of safety-critical applications within a unified language framework, but it is now commonly accepted that the language has failed to achieve all its stated design goals. A major impediment has been the lack of language support for distributed fault-tolerant program execution.
In this paper we propose language changes to Ada which will facilitate the programming of fault-tolerant distributed real-time applications. These changes support partitioning and configuration/reconfiguration. Paradigms are given to illustrate how dynamic reconfiguration of the software can be programmed following notification of processor and network failure, mode changes, software failure, and deadline failure.
INTRODUCTION
There is increasing use of computers that are embedded in some wider engineering application. These systems all have several common characteristics: they must respond to externally generated input stimuli within a finite and specified period; they must be extremely reliable and/or safe; they are often geographically distributed over both a local and a wide area; they may contain a very large and complex software component; they may contain processing elements which are subject to cost/size/weight constraints.
Developing software to control safety-critical applications requires programming abstractions that are unavailable in many of today's programming languages. The Ada programming language was designed to provide support for such applications within a unified language framework, but it is now commonly accepted that the language has failed to achieve all its stated design goals.