In this chapter we shall compare our approach with five related systems development environments: PSL/PSA, ADS/SODA, SADT/EDDA, SAMM/SIGS and RSL/SREM. They have been chosen for comparison for the following reasons:
(a) They were pioneer systems development environments meant to cover the entire systems life cycle, and have stood the test of time.
(b) They are better known and documented, so that interested readers can obtain further information easily.
(c) They are still under active development and recent enhancements have been reported.
(d) The languages examined cover a wide spectrum of characteristics. Some were originally developed as manual tools, while others were meant for mechanical support from the very beginning. Some languages allow a system to be specified in a multi-level fashion; others do not. Some use graphics as the specification medium, while others are purely textual. Some of the development environments generate graphical feedback to users, whereas others generate textual documentation.
(e) The final reason is a pragmatic one. We have included projects which are familiar to the present author. Part of this chapter is derived from the research results of a joint project with Mr Daniel Pong (Tse and Pong 1982, Tse and Pong (to appear)).
PSL/PSA AND META/GA
PSL was the first major language for defining a system formally and analysing it automatically (Teichroew 1971, Teichroew et al. 1980, 1982, PSL/PSA Introduction 1987).
Structured systems development methodologies are recognized as the most popular tools in information systems development. They are widely accepted by practising systems developers because of the top-down nature of the methodologies and the graphical nature of the tools. Unfortunately, however, the models are derived only from the experience of their authors. In spite of their popularity, relatively little work has been done to provide a theoretical framework for them. In this project, we have tried to solve this problem by defining a unifying theoretical framework behind the popular structured models.
We have defined an initial algebra of structured systems, which can be mapped by unique homomorphisms to a DeMarco algebra of data flow diagrams, a Yourdon algebra of structure charts and a Jackson algebra of structure texts (with equations). As a result, specifications can be transformed from one form to another. Algebraic interpreters may be adapted to validate the specifications.
We have also found that the proposed term algebra as well as the DeMarco, Yourdon and Jackson notations fit nicely into a functorial framework. The framework provides a theoretical basis for manipulating incomplete or unstructured specifications through the concepts of structured tasks and refinement morphisms. Moreover, DeMarco data flow diagrams can be mapped to term algebras through free functors. Conversely, specifications in term algebras can be mapped to other notations such as Yourdon structure charts by means of functors.
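To make the flavour of this concrete, the following toy sketch (in Python; all names are illustrative, and are not the constructs of the actual framework) builds a small term algebra of structured tasks and interprets the same term homomorphically in two different notations:

```python
# A toy term algebra of structured tasks, with two "homomorphisms" that
# interpret the same term in different notations.  The constructors and
# target notations here are illustrative only.

class Task:
    """A leaf task."""
    def __init__(self, name):
        self.name = name

class Seq:
    """Sequential composition of subtasks."""
    def __init__(self, *parts):
        self.parts = parts

class Sel:
    """Selection between alternative subtasks."""
    def __init__(self, *parts):
        self.parts = parts

def to_structure_text(t):
    """Interpret a term as a Jackson-style structure text."""
    if isinstance(t, Task):
        return t.name
    if isinstance(t, Seq):
        return "seq(" + "; ".join(map(to_structure_text, t.parts)) + ")"
    if isinstance(t, Sel):
        return "sel(" + " | ".join(map(to_structure_text, t.parts)) + ")"

def to_task_list(t):
    """Interpret the same term as a flat list of its leaf tasks."""
    if isinstance(t, Task):
        return [t.name]
    return [n for p in t.parts for n in to_task_list(p)]

spec = Seq(Task("read"), Sel(Task("validate"), Task("reject")), Task("report"))
```

Because each interpretation is defined structurally over the constructors, the same term can be rendered in either notation, which is the point of mapping one initial algebra into several target algebras.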
This part describes three different approaches to the use of formal methods in the verification and design of systems and circuits.
Chapter 2 describes the stages involved in the verification of a counter using a mechanized theorem prover.
The next chapter describes a mathematical model of synchronous computation within which formal transformations which are useful in the design process can be defined.
Chapter 4 describes verification in a different framework – that of the algebra of communicating processes.
In designing VLSI circuits it is very useful, if not necessary, to construct the specific circuit by placing simple components in regular configurations. Systolic systems are circuits built up from arrays of cells, and are therefore very suitable for formal analysis and induction methods. In the case of a palindrome recognizer, a correctness proof is given using bisimulation semantics with asynchronous cooperation. The proof is carried out in the formal setting of the Algebra of Communicating Processes (see Bergstra & Klop [1986]), which provides us with an algebraic theory and a convenient proof system. An extensive introduction to this theory is included in this paper. The palindrome recognizer has also been studied by Hennessy [1986] in a setting of failure semantics with synchronous cooperation.
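The input/output behaviour of such a recognizer is easy to state independently of the process algebra: after each input symbol, the circuit reports whether the word read so far is a palindrome. A minimal sketch of that behaviour (in Python, purely illustrative; this is the specification the systolic array must meet, not the ACP model itself):

```python
def palindrome_outputs(word):
    """After each new symbol, report whether the word read so far
    is a palindrome.  This is the intended I/O behaviour of the
    recognizer, stated directly rather than as an array of cells."""
    return [word[:i] == word[:i][::-1] for i in range(1, len(word) + 1)]
```

A correctness proof in ACP would show that the cell-array implementation is bisimilar to a process exhibiting exactly this stream of answers.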
INTRODUCTION
In current research on (hardware) verification, one of the main goals is to find strong proof systems and tools to verify the designs of algorithms and architectures. For instance, in the development of integrated circuits the important stage of testing a prototype (to save the high costs of producing defective processors) can be dealt with much more efficiently when a strong verification tool is available. Therefore, developing a verification theory has very high priority and is a subject of study at many universities and scientific institutions.
However, working out detailed verification theories is not the only approach to this problem. Once a basic theory is in place, the development of case studies is of the utmost importance to provide us with new ideas.
In this part the design process itself is examined from three approaches.
In Chapter 5 design is modelled as transforming formal draft system designs, and the user specification process is examined in detail.
In Chapter 6 circuits are relations on signals, and design is achieved through the application of combining forms satisfying certain mathematical laws.
Chapter 7 treats the problem of the automatic synthesis of VLSI chips for signal processing, and the practical issues involved are discussed in greater depth.
The development of VLSI fabrication technology has resulted in a wide range of new ideas for application specific hardware and computer architectures, and in an extensive set of significant new theoretical problems for the design of hardware. The design of hardware is a process of creating a device that realises an algorithm, and many of the problems are concerned with the nature of algorithms that may be realised. Thus fundamental research on the design of algorithms, programming and programming languages is directly relevant to research on the design of hardware. And conversely, research on hardware raises many new questions for research on software. These points are discussed at some length in the introductory chapter.
The papers that make up this volume are concerned with the theoretical foundations of the design of hardware, as viewed from computer science. The topics addressed are the complexity of computation; the methodology of design; and the specification, derivation and verification of designs. Most of the papers are based on lectures delivered at our workshop on Theoretical aspects of VLSI design held at the Centre for Theoretical Computer Science, University of Leeds in September 1986. We wish to express our thanks to the contributors and referees for their cooperation in producing this work.
One of the natural ways to model circuit behaviour is to describe a circuit as a function from signals to signals. A signal is a stream of data values over time, that is, a function from integers to values. One can choose to name signals and to reason about their values. We have taken an alternative approach in our work on the design language μFP (Sheeran [1984]). We reason about circuits, that is functions from signals to signals, rather than about the signals themselves. We build circuit descriptions by ‘plugging together’ smaller circuit descriptions using a carefully chosen set of combining forms. So, signals are first order functions, circuits are second order, and combining forms are third order.
Each combining form maps one or more circuits to a single circuit. The combining forms were chosen to reflect the fact that circuits are essentially two-dimensional. So, they correspond to ways of laying down and wiring together circuit blocks. Each combining form has both a behavioural and a pictorial interpretation. Because they obey useful mathematical laws, we can use program transformation in the development of circuits. An initial obviously correct circuit can be transformed into one with the same behaviour, but a more acceptable layout. It has been shown that this functional approach is particularly useful in the design of regular array architectures (Sheeran [1985, 1986], Luk & Jones [1988a]).
However, sometimes a relational description of a circuit is more appropriate than a functional one.
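The combining-forms idea described above can be sketched with ordinary higher-order functions (Python here rather than μFP; the combinator names and encodings are illustrative, not actual μFP syntax). A signal is modelled as a sequence of values over time, a circuit as a function on signals, and combining forms as functions on circuits:

```python
# Circuits as functions from signals to signals (a signal being a
# sequence of values over time), with a few illustrative combining forms.

def compose(f, g):
    """Serial composition: feed f's output signal into g."""
    return lambda sig: g(f(sig))

def par(f, g):
    """Parallel composition on a signal of pairs:
    f acts on the left wire, g on the right wire."""
    return lambda sig: [(f([a])[0], g([b])[0]) for (a, b) in sig]

def lift(op):
    """Lift a pointwise operation on values to a circuit on signals."""
    return lambda sig: [op(x) for x in sig]

inc = lift(lambda x: x + 1)   # a one-input 'increment' block
dbl = lift(lambda x: 2 * x)   # a one-input 'double' block
inc_then_dbl = compose(inc, dbl)
```

Since `compose` and `par` correspond to placing blocks in series and side by side, algebraic laws about them (for instance, that serial composition is associative) translate directly into layout-preserving transformations.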
Combinational networks are a widely studied model for investigating the computational complexity of Boolean functions, relevant both to sequential computation and to parallel models such as VLSI circuits. Recently a number of important results proving non-trivial lower bounds on a particular type of restricted network have appeared. After giving a general introduction to Boolean complexity theory and its history, this chapter presents a detailed technical account of the two main techniques developed for proving such bounds.
INTRODUCTION
An important aim of Complexity Theory is to develop techniques for establishing non-trivial lower bounds on the quantity of particular resources required to solve specific problems. Natural resources, or complexity measures, of interest are Time and Space, these being formally modelled by the number of moves made (resp. the number of tape cells scanned) by a Turing machine. ‘Problems’ are viewed as functions, f : D → R, where D is the domain of inputs and R the range of output values. D and R are represented as words over a finite alphabet Σ, and since any such alphabet can be encoded as a set of binary strings, it is sufficiently general to consider D to be the set of Boolean-valued n-tuples {0, 1}^n and R to be {0, 1}. Functions of the form f : {0, 1}^n → {0, 1} are called n-input single-output Boolean functions. B_n denotes the set of all such functions, and X_n = (x_1, x_2, …, x_n) is a variable over {0, 1}^n.
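As a small illustration of these definitions, the sketch below enumerates B_n as truth tables. Since each of the 2^n input points can independently be mapped to 0 or 1, |B_n| = 2^(2^n):

```python
from itertools import product

def boolean_functions(n):
    """Enumerate all n-input single-output Boolean functions,
    each represented as a truth table (a dict from input tuples
    in {0,1}^n to output values in {0,1})."""
    inputs = list(product([0, 1], repeat=n))          # the 2^n points of {0,1}^n
    for outputs in product([0, 1], repeat=len(inputs)):
        yield dict(zip(inputs, outputs))              # one f : {0,1}^n -> {0,1}

b2 = list(boolean_functions(2))                        # all 16 functions in B_2
```

The doubly exponential size of B_n is one reason why counting arguments show that almost all Boolean functions require large networks, even though explicit hard functions are difficult to exhibit.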
The theme of this chapter centres on the automatic synthesis of cost effective and highly parallel digital signal processors suitable for VLSI implementation. The proposed synthesis model is studied in detail and the concepts of signal modelling and data flow analysis are discussed. This is further illustrated by the COSPRO (COnfigurable Signal PROcessor) simulator – a primitive version of the automatic synthesis concept developed at the Department of Electrical & Electronic Engineering, University of Newcastle Upon Tyne. Binary addition is chosen as a case study to demonstrate the application of the concept.
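Since binary addition is the chosen case study, a minimal sketch of the kind of structure such a synthesis system might target may be helpful: a ripple-carry adder built from one-bit full adders. This Python fragment is purely illustrative and is not COSPRO output:

```python
def full_adder(a, b, cin):
    """One-bit full adder expressed with Boolean operations."""
    s = a ^ b ^ cin                       # sum bit
    cout = (a & b) | (cin & (a ^ b))      # carry out
    return s, cout

def ripple_adder(xs, ys):
    """Add two equal-length bit vectors, least significant bit first,
    by chaining full adders through the carry."""
    carry, sums = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return sums, carry
```

The regular, repeated cell with a single carried value between neighbours is exactly the kind of data-flow pattern that lends itself to automatic mapping onto a parallel array.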
INTRODUCTION
Digital signal processing
Digital signal processing (DSP), a counterpart of analog signal processing, began to blossom in the mid 1960s, when semiconductor and computer technologies were able to offer a massive increase in flexibility and reliability. Within the short period of twenty years, this field has matured rapidly in both theory and applications, and has contributed significantly to the understanding of many diverse areas of science and technology. The range of applications has grown to include almost every part of our lives, from microprocessor-controlled domestic appliances to computerised banking systems and highly sophisticated missile guidance systems. Many other areas, such as biomedical engineering, seismic research, radar and sonar detection and countermeasures, acoustics and speech, telecommunications, image processing and understanding, thermography, office automation and computer graphics, employ DSP extensively, in military, intelligence, industrial and commercial environments alike.
The VIPER microprocessor designed at the Royal Signals and Radar Establishment (RSRE) is probably the first commercially produced computer to have been developed using modern formal methods. Details of VIPER can be found in Cullyer [1985, 1986, 1987] and Pygott [1986]. The approach used by W. J. Cullyer and C. Pygott for its verification is explained in Cullyer & Pygott [1985], in which a simple counter is chosen to illustrate the verification techniques developed at RSRE. Using the same counter, we illustrate the approach to hardware verification developed at Cambridge, which formalizes Cullyer and Pygott's method. The approach is based on the HOL system, a version of LCF adapted to higher-order logic (Camilleri et al. [1987], Gordon [1983, 1985]). This research has formed the basis for the subsequent project to verify the whole of VIPER to register transfer level (Cohn [1987, 1989]).
In Cullyer and Pygott's paper, the implementation of the counter is specified at three levels of decreasing abstraction:
As a state-transition system called the host machine;
As an interconnection of functional blocks called the high level design;
As an interconnection of gates and registers called the circuit.
Ultimately, it is the circuit that will be built, and so its correctness is the most important. However, the host machine and the high level design represent successive stages in the development of the implementation, and so one would like to know whether they too are correct.
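The shape of the verification task can be shown in miniature. The following sketch (in Python rather than HOL, with a hypothetical 2-bit counter rather than the RSRE one) gives a state-transition specification, a gate-level implementation on two state bits, and an exhaustive check that the two agree under the abstraction map:

```python
# A toy version of the counter-verification idea: compare a
# state-transition specification with a gate-level implementation.
# The 2-bit counter here is illustrative, not the RSRE counter.

def spec_step(state):
    """Host-machine level: the counter as a state-transition function."""
    return (state + 1) % 4

def circuit_step(b1, b0):
    """Circuit level: the same counter built from gates on two bits.
    The low bit toggles; the high bit flips when the low bit is 1."""
    return (b1 ^ b0, 1 - b0)   # (next high bit, next low bit)

def circuit_as_spec(state):
    """Abstraction map: encode the state as bits, run the circuit,
    and read the result back as a number."""
    b1, b0 = state // 2, state % 2
    n1, n0 = circuit_step(b1, b0)
    return 2 * n1 + n0

# Exhaustive check over the (finite) state space.
all_states_agree = all(spec_step(s) == circuit_as_spec(s) for s in range(4))
```

In HOL the same correspondence is established by proof rather than by exhaustive testing, which is what makes the method scale beyond trivially small state spaces.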
Since our concern was speech, and speech impelled us
To purify the dialect of the tribe
And urge the mind to aftersight and foresight
T. S. Eliot Little Gidding
ABSTRACT
We analyse theoretically the process of specifying the desired behaviour of a digital system and illustrate our theory with a case study of the specification of a digital correlator.
First, a general theoretical framework for specifications and their stepwise refinement is presented. A useful notion of the consistency of two general functional specifications is defined. The framework has three methodological divisions: an exploration phase, an abstraction phase, and an implementation phase.
Secondly, a mathematical theory for specifications based on abstract data types, streams, clocks and retimings, and recursive functions is developed. A specification is a function that transforms infinite streams of data. The mathematical theory supports formal methods and software tools.
Thirdly, a digital correlator is studied in considerable detail to demonstrate points of theoretical and practical interest.
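The stream-transforming view of a specification can be illustrated with a small sketch. In this Python fragment (illustrative only: the window size and match-counting behaviour are assumptions, not the correlator specified in the chapter) a "correlator" maps two bit streams to a stream that reports, at each clock cycle, how many of the last k sample pairs agreed:

```python
from itertools import islice

def correlator(k, xs, ys):
    """A specification as a stream transformer: consume two bit
    streams and emit, at each cycle, the number of matches among
    the most recent k pairs of samples."""
    history = []
    for x, y in zip(xs, ys):
        history.append(1 if x == y else 0)
        yield sum(history[-k:])

out = list(islice(correlator(3, [1, 0, 1, 1, 0], [1, 1, 1, 0, 0]), 5))
```

Writing the specification as a function on (potentially infinite) streams keeps it independent of any clocking or retiming decisions, which can then be introduced in the refinement steps.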
INTRODUCTION
Overview
How can we precisely define the desired behaviour of a digital system? What role can such precise definitions have in the imprecise process of designing a digital system, and in its subsequent use?
We wish to formulate answers to these questions by theoretically analysing the first step of a design assignment, when it must be determined what is to be designed.
The specification, design, construction, evaluation and maintenance of computing systems involve significant theoretical problems that are common to hardware and software. Some of these problems are long standing, although they change in their form, difficulty and importance as technologies for the manufacture of digital systems change. For example, theoretical areas addressed in this volume about hardware include
models of computation and semantics,
computational complexity,
methodology of design,
specification methods,
design and synthesis, and
verification methods and tools;
and the material presented is intimately related to material about software. It is interesting to attempt a comparison of theoretical problems of interest in these areas in the decades 1960–69 and 1980–89. Plus ça change, plus c'est la même chose?
Of course, the latest technologies permit the manufacture of larger digital systems at smaller cost. To enlarge the scope of digital computation in the world's work it is necessary to enlarge the scope of the design process. This involves the development of the areas listed above, and the related development of tools for CAD and CIM.
Most importantly, it involves the unification of the study of hardware and software. For example, a fundamental problem in hardware design is to make hardware that is independent of specific fabricating technologies. This complements a fundamental problem in software design – to make software that is independent of specific hardware (i.e., machines and peripherals).