Too often there is thought to be a dichotomy between science and engineering: science as a quest for knowledge and understanding, and engineering as the art of constructing useful objects. This book, based on the author's experience in leading a silicon compilation project at Philips Research, is exceptional in that it very convincingly demonstrates the effectiveness of combining the scientific method with sound engineering practices.
Aimed at bridging the gap between program construction and VLSI design, the research reported in this book extends over an unusually wide spectrum of disciplines, ranging from computer science and electrical engineering to logic and mathematics. In this exciting arena we encounter such topics as the power dissipation of an assignment statement, the mathematical theory of handshake circuits, the correctness proof of a compiler, and the problem of circuit initialization without reset wires, to mention just a few.
Such a multi-faceted study can be successful only if it is able to demonstrate a clear separation of concerns. In this respect, Kees van Berkel does an admirable job: his concept of handshake circuits provides an extremely elegant interface between algorithm design on the one hand and circuit implementations on the other. This separation between ‘what’ and ‘how’, which many researchers and practitioners find difficult to apply, turns out to be amazingly fruitful, as the readers of this book are encouraged to discover for themselves. In my opinion we are, with the publication of this book, witnessing a major step forward in the development of the discipline of VLSI programming.
This book is about the design of digital VLSI circuits. Whereas LSI circuits perform basic functions such as multiplication, control, storage, and digital-to-analog conversion, VLSI circuits contain complex compositions of these basic functions. In many cases all data and signal processing in a professional or consumer system can be integrated on a few square centimeters of silicon. Examples of such “systems on silicon” can be found in:
Compact Disc (CD) players,
Compact Disc Interactive (CDI) players,
Digital Compact Cassette (DCC) players,
Digital Audio Broadcast (DAB) receivers,
radios and mobile telephones,
High-Definition TeleVision (HDTV) sets,
video recorders,
speech processors,
car-navigation systems,
image processors, and
test and measurement systems.
These systems generally process analog as well as digital signals, but the digital circuits dominate the surface of an IC. The memory needed for storing intermediate results often covers a significant fraction of the silicon area.
Systems on silicon tend to become more complex and to grow in number. The increase in complexity follows from advances in VLSI technology and the rapid growth of the number of transistors integrated on a single IC. The constant reduction of the costs of integration makes integration economically attractive for an increasing number of systems. Also, the rapid succession of generations of a single product increases the pressure on design time. The ability to integrate systems on silicon effectively, efficiently, and quickly has thus become a key factor in the global competition in both consumer and professional electronic products.
This book pursues a programming approach to the design of digital VLSI circuits. In such an approach the VLSI-system designer constructs a program in a suitable high-level programming language. When he is satisfied with his program, the designer invokes a so-called silicon compiler which translates this program into a VLSI-circuit layout.
The choice of the programming language is a crucial one, for it largely determines the application area, the convenience of design, and the efficiency of the compiled circuits. A good VLSI-programming language:
0. is general purpose in that it allows the description of all digital functions;
1. encourages the systematic and efficient design of programs by abstracting from circuit, geometry and technology details;
2. is suitable for automatic translation into efficient VLSI circuits and test patterns.
A motivation for each of these requirements follows.
0. A wide range of applications is required to justify the investment in tools and training.
1. A major gain in design productivity can be expected from designing in a powerful high-level language. Furthermore, system designers then do not need to resort to VLSI specialists. Systematic design methods, supported by mathematical reasoning, are required to deal with the overwhelming complexity involved in the design of VLSI systems.
2. Automatic translation to VLSI circuits avoids the introduction of errors at the lower abstraction levels. It also becomes attractive to design alternative programs and compare the translated circuits in cost (circuit area) and performance (speed and power).
This text is intended to be a student textbook which primarily provides an introduction to (a particular kind of) categorical type theory, but which should also be useful as a reference to those mathematicians and computer scientists pursuing research in related areas. It is envisaged that it could provide the basis for a course of lectures aimed at advanced undergraduate or beginning graduate students. Given the current content of typical British undergraduate mathematics and computer science courses, it is difficult to describe an exact audience at whom the book is aimed. For example, the material on ordered sets should be readily accessible to first and second year undergraduates and indeed I know of courses which contain elements of such topics. However, the material on category theory, while probably accessible to good third year undergraduate students, does require a certain amount of mathematical maturity. Perhaps it is better suited to graduate students in their early stages. Chapters 3 and 4 are probably of second and third year undergraduate level respectively, assuming that the requisite category theory has been assimilated. The final two chapters are probably better suited to first year graduates. In summary, as well as serving as a textbook for (graduate) students, I hope “Categories for Types” will provide a useful reference for those conducting research into areas involving categorical type theory and program semantics.
Discussion 1.1.1 We shall begin by giving an informal description of some of the topics which appear in Chapter 1. The central concept is that of an ordered set. Roughly, an ordered set is a collection of items some of which are deemed to be greater or smaller than others. We can think of the set of natural numbers as an ordered set, where, for example, 5 is greater than 2, 0 is less than 100, 1234 is less than 12687 and so on. We shall see later that one way in which the concept of order arises in computer science is by regarding items of data as ordered according to how much information a certain data item gives us. Very crudely, suppose that we have two programs P and P′ which perform identical tasks, but that program P is defined (halts with success) on a greater number of inputs than does P′. Then we could record this observation by saying that P is greater than P′. These ideas will be made clearer in Discussion 1.5.1. We can perform certain operations on ordered sets, for example we have simple operations such as maxima and minima (the maximum of 5 and 2 in the ordered set of natural numbers is 5), as well as more complicated ones such as taking suprema and infima. If the reader has not met the idea of suprema and infima, then he will find the definitions in Discussion 1.2.7.
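To make the information order concrete, here is a small Haskell sketch of the "flat" order on computation results; the names Flat and lessDefined are my own inventions for this illustration, not notation from the book:

    -- Illustrative sketch only; the names here are invented for this example.
    -- A result is either undefined (the program fails to halt) or a value.
    data Flat a = Bottom | Value a
      deriving (Eq, Show)

    -- The information order: Bottom is below everything, and two
    -- defined results are comparable only when they are equal.
    lessDefined :: Eq a => Flat a -> Flat a -> Bool
    lessDefined Bottom    _         = True
    lessDefined (Value x) (Value y) = x == y
    lessDefined (Value _) Bottom    = False

Ordering whole programs as suggested above then amounts to comparing them pointwise in this order: P sits above P′ when P is at least as defined as P′ on every input.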
During the Michaelmas term of 1990, while at the University of Cambridge Computer Laboratory, the opportunity arose to lecture on categorical models of lambda calculi. The course consisted of sixteen lectures of about one hour's duration twice a week for eight weeks, and covered much of the material in this book, but excluded higher order polymorphism and some of the category theory. The lectures were delivered to an audience of computer scientists and mathematicians, with an emphasis on presenting the material to the former. It was kindly suggested by the Cambridge University Press that these lectures might form the core of a textbook, and the original suggestion has now been realised as “Categories for Types.”
What are the contents of “Categories for Types”? I will try to answer this question for those who know little about categorical type theory. In Chapter 1, we begin with a discussion of ordered sets. These are collections of things with an order placed on the collection. For example, the natural numbers form a set {1,2,3…} with an order given by 1 ≤ 2 ≤ 3 ≤ … where ≤ means “less than or equal to.” A number of different kinds of ordered set are defined, and results proved about them. Such ordered sets then provide a stock of examples of categories. A category is a very general mathematical world and various different sorts of mathematical structures form categories.
Discussion 2.1.1 A category consists of a pair of collections, namely a collection of “structures” together with a collection of “relations between the structures.” Let us illustrate this with some informal examples of categories.
The collection of all sets (thus each set is an example of one of the structures referred to above), together with the collection of all set-theoretic functions (the functions are the relations between the structures).
The collection of all posets (each poset is a structure), together with all monotone functions (the monotone functions are the relations between the structures).
The collection of all finite dimensional vector spaces, together with all linear maps.
The set of real numbers ℝ (in this case each structure is just a real number r ∈ ℝ), together with the relation of order ≤ on the set ℝ. Thus given two structures r, r′ ∈ ℝ, there is a relation between them just in case r ≤ r′.
All categories have this basic form, that is, consist of structures and relations between the structures: the structures are usually referred to as the objects of the category and the relations between the structures as morphisms. It is important to note that the objects of a category do not have to be sets (in the fourth example they are real numbers) and that the morphisms do not have to be functions (in the fourth example they are instances of the order relation ≤).
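For readers who also program, the shape of this definition can be suggested by a Haskell type class; this sketch is my own illustration, not the book's development. Objects are modelled by types, and the first example becomes the instance in which morphisms are ordinary functions:

    -- Illustrative sketch only, not the book's formal definition.
    import Prelude hiding (id, (.))

    -- Every category supplies an identity morphism on each object
    -- and an associative composition of morphisms.
    class Category hom where
      id  :: hom a a
      (.) :: hom b c -> hom a b -> hom a c

    -- Sets and functions: objects are types, morphisms are functions.
    instance Category (->) where
      id x      = x
      (g . f) x = g (f x)

The fourth example fits the same interface even though its morphisms are not functions: between objects r and r′ there is at most one morphism, present precisely when r ≤ r′.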
Discussion 3.1.1 The fundamental idea of algebraic type theory is to provide a formal framework for reasoning using the usual rules of equality. Simple algebraic type theory is far removed from the syntax and rules of real programming languages, but it is a good starting point from which to motivate and explain the ideas of categorical semantics. To a first approximation, algebraic type theory involves three entities, which will be called types, terms and equations. Think of a type as a collection of items (terms) having some common property, and an equation as a judgement that any two items (terms) are essentially the same. We aim to define the notion of a theory in algebraic type theory. Recall that the general idea of a theory is that one has some basic assumptions, usually referred to as axioms, and some rules for deducing theorems from the axioms. Thus a theory in algebraic type theory consists of a given collection of types, terms and equations, where the equations are the axioms of the theory. The theorems are the equations which one is able to deduce from the axioms using the rules of algebraic type theory. These rules say that equations may be manipulated according to the rules of reflexivity, transitivity and symmetry. So, for example, if a term t equals a term s, then we may deduce that the term s equals the term t, and this is the idea of symmetry.
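The following Haskell fragment, my own informal sketch rather than anything from the book, shows one step of this style of deduction: given a stock of axioms, an equation may be recognised as a theorem by reflexivity, by symmetry, or by a single application of transitivity (the full theory closes these rules off completely):

    -- Illustrative sketch only; Term, Equation and oneStep are invented names.
    -- Terms over some signature: variables and binary operation symbols.
    data Term = Var String | Op String Term Term
      deriving (Eq, Show)

    -- An equation judges two terms to be essentially the same.
    type Equation = (Term, Term)

    -- One step of equational reasoning from a list of axioms.
    oneStep :: [Equation] -> Equation -> Bool
    oneStep axioms (t, s) =
         t == s                              -- reflexivity
      || (t, s) `elem` axioms                -- an axiom itself
      || (s, t) `elem` axioms                -- symmetry
      || or [ (u, s) `elem` axioms           -- transitivity via some middle term u
            | (t', u) <- axioms, t' == t ]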
Discussion 4.1.1 Our task now is to develop a categorical type theory correspondence for an equational type theory based on the “simply typed λ-calculus.” It will be helpful if the reader has a nodding acquaintance with simply typed λ-calculus, but this is not crucial. Let us review in an informal fashion the basic principles involved. Originally, the λ-calculus developed from attempts to produce a notation for representing and calculating with functions. (Strictly speaking, the original work in this area was concerned with (a primitive form of) a system now known as the untyped λ-calculus, but we shall not worry about such details in this very superficial discussion). Consider an expression such as x + y. We might think of this as being a definition of a function f given by x ↦ x + y (where the value of y is constant), or as a function g defined by y ↦ x + y (where the value of x is held constant). In day to day working life, mathematicians deal with such niceties simply by using ad hoc notations, and letting a context of discussion allow an intelligent reader to deduce precisely what the author means by his ad hoc notation. However, present day computers are not quite as intelligent as the typical reader, and it is essential to develop precise notations and syntax in order to program up mathematical functions. The λ-calculus is a formalism for dealing with these problems.
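In a modern functional language the two readings of x + y really are two distinct programs, which is exactly the distinction the λ-notation was designed to make explicit. A Haskell rendering (mine, not the book's):

    -- Illustrative sketch only.
    -- λx. x + y: a function of x, with y supplied first and held constant.
    f :: Int -> (Int -> Int)
    f y = \x -> x + y

    -- λy. x + y: a function of y, with x supplied first and held constant.
    g :: Int -> (Int -> Int)
    g x = \y -> x + y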
Discussion 6.1.1 Let us take stock of the type theories so far introduced in “Categories for Types.” We began with algebraic type theory, which gave us a basic framework in which to write down syntactical theories involving equational reasoning. This was extended to functional type theory in which there is a formal syntax for the representation of functions. We then noted that it would be desirable to work with a syntax in which certain programs (terms) yielded instances of a uniform procedure at differing types, this feature being known as polymorphism. We now extend this latter kind of type theory to one in which there is a syntax for describing functions “at the level of types” as well as functions at the level of terms. In this new system, the syntax splits into two levels, let us say level 1 and level 2. At level 1 there are types and terms, and one thinks of the types as “collections” of terms with a similar property. Analogously at level 2 there are (so-called) kinds and operators, and one thinks of a kind as a “collection” of operators with similar properties. These two levels are connected by a distinguished kind, often denoted by Type, which is thought of as the collection of all types. This new formal system will be referred to as higher order polymorphic functional type theory.
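Haskell's kind system offers a concrete, if approximate, picture of the two levels; the book develops the idea for its own calculus, so the following is merely an illustrative sketch of mine. Terms are classified by types, type-level operators are classified by kinds, and the distinguished kind Type (written * in older Haskell) collects the types themselves:

    -- Illustrative sketch only, not the book's formal system.
    -- Level 1: a term classified by a type. Int has kind Type.
    three :: Int
    three = 3

    -- Level 2: Maybe is not a type but an operator of kind Type -> Type,
    -- a function "at the level of types". Applying it yields types such
    -- as Maybe Int, which classify terms again.
    halve :: Maybe Int -> Maybe Int
    halve = fmap (`div` 2)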
This chapter gives an example to illustrate the idea of an abstraction relationship between two hardware models which was introduced in chapter 4. The two models considered are the threshold switching model of CMOS transistor behaviour defined in chapter 5 and the simpler switch model defined in chapter 2.
Both of these models of the CMOS technology are, of course, abstractions of the physical reality they represent, and both models are therefore bound to be inaccurate in some respects. But the switch model is also an abstraction of the threshold switching model, in the sense that both models describe the same set of primitive components—power, ground, N-type and P-type transistors—but the switch model presents a more abstract view of these components. The threshold switching model reflects the fact that real CMOS transistors do not pass both logic levels equally well. But in the more abstract (and therefore simpler) switch model, this aspect of transistor behaviour is ignored.
The switch model is less accurate than the threshold switching model; a circuit that can be proved correct using the switch model may in fact be incorrect according to the threshold switching model. For certain circuits, however, the two models are effectively equivalent. For these circuits, a proof of correctness in the switch model amounts to a proof of correctness in the threshold switching model. The switch model is an adequate basis for verification of these circuits, and the extra accuracy of the more complex threshold switching model is not needed.
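The contrast between the two models can be caricatured in a few lines of Haskell; this sketch is my own illustration of the idea, not the formalism the book defines in chapters 2 and 5. In the switch model a driven N-type transistor simply connects its source and drain; in the threshold model it passes a logic low faithfully but only a degraded logic high:

    -- Illustrative sketch only; Level, nSwitch and nThreshold are invented names.
    -- Logic levels, with Weak marking a degraded level.
    data Level = Lo | Hi | Weak Level
      deriving (Eq, Show)

    -- Switch model: a driven N-type transistor is a perfect connection.
    nSwitch :: Level -> Level -> Maybe Level
    nSwitch gate input
      | gate == Hi = Just input       -- passes any level unchanged
      | otherwise  = Nothing          -- no connection when the gate is low

    -- Threshold switching model: an N-type transistor passes a strong
    -- low faithfully but only a weakened high.
    nThreshold :: Level -> Level -> Maybe Level
    nThreshold gate input
      | gate /= Hi  = Nothing
      | input == Hi = Just (Weak Hi)
      | otherwise   = Just input

A correctness argument carried out against nSwitch alone never sees the Weak levels that nThreshold exposes, which is precisely the sense in which the switch model is a less accurate abstraction of the threshold switching model.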
This book shows how formal logic can be used to reason about the behaviour of digital hardware designs. The main focus is on a fundamental tool for dealing with the complexity of this activity—namely the use of abstraction in bridging the gaps between logical descriptions of hardware at different levels of detail.
The text of this book was adapted from a Ph.D. dissertation on abstraction mechanisms for reasoning about hardware, written at the University of Cambridge Computer Laboratory. This work was originally motivated by my experience with using the LCF_LSM theorem prover to verify the correctness of an associative memory device intended for use in a local area network. This device comprised 37 SSI and MSI TTL chips, the most complex of which was an AM 2910 microprogram controller. Although the design was straightforward, its verification using the LCF_LSM system was remarkably difficult. The main problem was simply the almost intractably large size of the intermediate theorems generated during the proof. Single theorems were generated that were hundreds of lines long, and several minutes or even several hours of CPU time were needed to manipulate them. The proof was completed only with considerable difficulty—unfortunately, some time after the LCF_LSM system had become obsolete.
These difficulties were for the most part due not to problems with the LCF_LSM theorem prover itself, but to deficiencies in the underlying formalism for hardware verification supported by the system.