The cut calculus for Horn clauses is simple, but rather inefficient as the basis of a theorem prover. To prove a goal γ via this calculus means to derive γ from axioms (those of the specification and congruence axioms for equality predicates) using CUT and SUB (cf. Sect. 1.2). In contrast, the inference rules resolution (cf. [Rob65]) and paramodulation (cf. [RW69]) allow us to start from γ and apply axioms to transform it into the empty goal ∅. The actual purpose of resolution and paramodulation is to compute goal solutions (cf. Sect. 1.2): if γ can be transformed into ∅, then γ is solvable. The derivation process constructs a solution f, and ∅ indicates the validity of γ[f].
A single derivation step from γ to δ via resolution or paramodulation proves the clause γ[g]⇐δ for some substitution g. Since γ[g] is the conclusion of a Horn clause, which, if viewed as a logic program, is expanded (into γ), we call such derivations expansions. More precisely, the rules are input resolution and input paramodulation, where one of the two clauses involved stems from an “input” set consisting of axioms or, in the case of inductive expansion (cf. Chapter 5), of arbitrary lemmas or induction hypotheses.
While input resolution is always “solution complete”, input paramodulation has this property only if the input set includes all functionally reflexive axioms, i.e., equations of the form Fx≡Fx (cf., e.g., [Höl89]).
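To illustrate this with a standard example (the axioms and the goal below are not taken from the book), consider the Horn axioms

   0 ≤ y
   s(x) ≤ s(y) ⇐ x ≤ y

and the goal γ = z ≤ s(0). Input resolution with the second axiom binds z to s(x) and transforms γ into the goal x ≤ 0; a further resolution step with the first axiom binds x to 0 and yields ∅. Composing the two substitutions gives the solution f = {z ↦ s(0)}, and ∅ indicates the validity of γ[f], i.e., of s(0) ≤ s(0). Resolving γ directly with the first axiom instead yields the alternative solution {z ↦ 0}.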
EXPANDER is a proof support system for reasoning about data type specifications and declarative programs. EXPANDER applies the rules of inductive expansion (cf. Chapter 5) to correctness conditions that are given as single Gentzen clauses or sets of guarded Horn clauses (cf. Chapter 2). The system provides a kernel for special-purpose theorem provers, which are tailored to restricted application areas and implement specific proof plans, strategies or tactics. It is written in the functional language SML/NJ.
EXPANDER executes single inference steps: each proof is a sequence of goal sets, and the user has full control over the proof process. He may backtrack along the sequence, interactively modify the underlying specification, and add lemmas or induction orderings suggested by the subgoals obtained so far. When a proof has been finished, the system can generate the theorems actually proved and, if necessary, the remaining subconjectures.
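The following rough sketch (a hypothetical illustration in Standard ML, not EXPANDER's actual code or data structures) indicates how such a proof state might be organised: the sequence of goal sets is kept as a list whose head is the current goal set, so backtracking simply drops the most recent entry.

   (* Hypothetical sketch: a proof is a sequence of goal sets, newest first. *)
   type goal  = string list        (* a goal: a list of atoms, here plain strings *)
   type proof = goal list list     (* the goal sets produced so far *)

   (* start a proof from an initial goal set *)
   fun start (gs : goal list) : proof = [gs]

   (* apply one inference step, given a function that rewrites the current goal set *)
   fun step (infer : goal list -> goal list) (p : proof) : proof =
         case p of
           []      => []
         | gs :: _ => infer gs :: p

   (* backtrack: return to the previous goal set, keeping at least the initial one *)
   fun backtrack (p : proof) : proof =
         case p of
           []        => []
         | [gs]      => [gs]
         | _ :: rest => rest

   (* a proof is finished when the current goal set is empty *)
   fun finished (p : proof) : bool =
         case p of
           gs :: _ => null gs
         | []      => false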
We first describe the kind of specifications that can be processed, then present the commands currently provided and, finally, document the implementation. The latter illustrates the suitability of functional languages for encoding deductive methods.
The specifications
Specifications to be processed by EXPANDER are generated by the following context-free grammar in extended Backus-Naur form, i.e., [_], * and | denote the usual operators for building regular expressions (option, repetition and alternative). Key words are enclosed in double quotation marks (“…”).
Part I introduced a collection of notations and techniques for algebraic specifications. These notations and techniques are relatively close to usual mathematics. As already shown by the examples of Part I, algebraic specifications suffice to describe a wide range of data types (Booleans, numbers, sets, bags, sequences, tuples, maps, stacks, queues, etc.) and they can even be used to describe the syntax and semantics of languages, the rules and strategies of games, and many more non-trivial aspects of complex systems. Of course there are some differences between the notations and techniques of Part I and usual mathematics, such as the restriction to first-order predicate logic with inductive definitions, the special way of treating partial functions and undefinedness and, most of all, the modularisation constructs. The latter difference reveals that COLD-K has its roots in software engineering and systems engineering, rather than in general-purpose mathematics.
There is one more phenomenon which is characteristic of many branches of software engineering and systems engineering: special provisions for describing state-based systems.
How do we benefit from ground confluent specifications? Most of the advantages follow from Thm. 6.5: if (SIG, AX) is ground confluent, then, and only then, do directed expansions yield all ground AX-solutions. Sects. 7.1 and 7.2 deal with refinements of directed expansion: strategic expansion and narrowing. Sect. 7.3 presents syntactic criteria for a set of terms to be a set of constructors (cf. Sect. 2.3). The results obtained in Sects. 7.2 and 7.3 provide the failure rule and the clash rule, which check goals for unsolvability and thus help to shorten every kind of expansion proof (see the final remarks of Sect. 5.4).
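For illustration (a standard example, not taken from the book), narrowing applies equational axioms at non-variable subterms of a goal. Given the axioms

   0+y ≡ y
   s(x)+y ≡ s(x+y)

and the goal z+s(0) ≡ s(s(0)), a narrowing step with the second axiom binds z to s(x) and yields the goal s(x+s(0)) ≡ s(s(0)); a further step with the first axiom binds x to 0 and yields s(s(0)) ≡ s(s(0)), which is removed by unification. Composing the substitutions gives the solution {z ↦ s(0)}.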
Sect. 7.4 deals with the proof of a set CS of inductive theorems by showing that (SIG,AX∪CS) is consistent w.r.t. (SIG,AX) (cf. Sect. 3.4). Using consequences of the basic equivalence between consistency and inductive validity (Lemma 7.9) we come up with reductive expansion, which combines goal reduction and subreductive expansion (cf. Sect. 6.4) into a method for proving inductive theorems. While inductive expansion is always sound, the correctness of reductive expansion depends on ground confluence and strong termination of (SIG,AX). Under these conditions, an inductive expansion can always be turned into a reductive expansion (Thm. 7.18). Conversely, a reductive expansion can be transformed in such a way that most of its “boundary conditions” hold true automatically (Thm. 7.19).
The chapter will close with a deduction-oriented concept for specification refinements, or algebraic implementations.
This book is about formal specification and design techniques, including both algebraic specifications and state-based specifications.
The construction and maintenance of complex software systems is a difficult task, and although many software projects are started with great expectations and enthusiasm, too often they fail to achieve their goals within the planned time and with the given resources. The software often contains errors; attempts to eliminate the errors give rise to new errors, and so on. Moreover, the extension and adaptation of the software to new tasks turns out to be a difficult and tedious task, which seems unsuitable for scientific methods.
This unsatisfactory situation can be improved by introducing precise specifications of the software and its constituent parts. When a piece of software P has a precise specification S, say, then ‘P satisfies S’ is a clear statement that could be verified by reasoning or falsified by testing; users of P can read S and rely on it, and the designer of P has a clearly formulated task. When no precise specifications are available, there are hardly any clear statements at all, for what could one say: ‘it works’ or, more often, ‘it almost works’? Without precise specifications, it becomes very difficult to analyse the consequences of modifying P into P', for example, and to make any clear statements about that modification. Therefore it is worthwhile during the software development process to invest in constructing precise specifications of well-chosen parts of the software system under construction. Writing precise specifications turns out to be a considerable task in itself.
The conception, construction, maintenance and usage of computer-based systems are difficult tasks requiring special care, skills, methods and tools. Program correctness is a serious issue and in addition to that, the size of the programs gives rise to problems of complexity management. Computers are powerful machines which can execute millions of instructions per second and manipulate millions of memory cells. The freedom offered by the machine to its programmer is large; often it is too large, in the sense that the machine does not enforce order and structure upon the programs. Computer-based systems are artificial systems and therefore there are no natural system partitionings and interface definitions. All structure is man-made and all interfaces must be agreed upon and communicated to all parties involved. The description and communication of system structures and interfaces turns out to be a non-trivial task and ‘specification languages’ have become an active area of research and development in computer science. When discussing ‘language’ we must distinguish explicitly between syntactic objects and semantic objects. Wittgenstein has expressed this idea as follows:
Der Satz stellt das Bestehen und Nichtbestehen der Sachverhalte dar,
i.e. the proposition represents the existence or non-existence of certain states of affairs. The propositions are syntactic objects and in this text we shall call them specifications. To describe a state of affairs concerning the natural world and human interaction, natural language is the tool par excellence; to describe a state of affairs concerning computer-based systems, special languages are required in addition. The situation is typical: special restricted domains require special languages, and this is also the case for the domain of computer-based systems.
All the previous chapters are about techniques for unambiguously specifying hardware/software systems and transforming abstract specifications into efficient programs. One important motivation for presenting these techniques is that it is often useful to distinguish between the external view and the internal view of a system. The external view can take the form of a formal specification and can be optimised with respect to abstractness, compactness and clarity. The internal view is a program which is devised with efficiency in mind. Having two descriptions corresponding to these two views can be considered a separation of concerns: it helps to manage the complexity of large systems.
This approach introduces additional formal texts when compared with the older approaches dealing mostly with programs. As a consequence, care is needed to maintain the overview of all formal texts that arise when designing large systems.
This chapter presents two techniques developed in the context of COLD-K which help to keep this overview. These are certainly not the only useful techniques; they should be complemented with additional graphical techniques and classical software engineering techniques for configuration management, project management, etc. The first technique is to use simple pictures showing the modular structure of a formal specification. This is the topic of Section 11.2. The second technique is to add structure by putting specifications and implementations together in simple language constructs called components and designs. This is the topic of Section 11.3. Finally, Sections 11.4 and 11.5 present a number of applications as well as some concluding remarks.
This monograph promotes specification and programming on the basis of Horn logic with equality. As was pointed out in [Pad88a], this theoretical background equips us with a number of deductive methods for reasoning about specifications and designing correct programs. The term declarative programming stands for the combination of functional (or applicative) and relational (or logic) programming. This does not rule out the design of imperative programs with conditionals, loops and sequences of variable assignments, since all these features have functional or relational equivalents. In particular, variables become “output parameters” of functions. Hence the static view of declarative programming is not really a restriction. Only if correctness conditions concerned with liveness or synchronization are demanded must transition relations be specified to fix the dynamics of program execution (cf. Sect. 6.6).
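As a small illustration (a hypothetical example, not taken from the book, sketched in Standard ML), an imperative loop that sums the numbers from 0 to n by repeatedly assigning to an accumulator variable can be rendered functionally by turning the loop variables into arguments and the assigned variable into the result, i.e., the “output parameter”, of a recursive function:

   (* Hypothetical illustration: the imperative loop
        acc := 0; i := 0; while i <= n do (acc := acc + i; i := i + 1)
      rendered functionally; the loop variables i and acc become arguments,
      and the assigned variable acc becomes the result of the function. *)
   fun sumLoop (i, acc, n) =
         if i > n then acc
         else sumLoop (i + 1, acc + i, n)

   fun sum n = sumLoop (0, 0, n)    (* e.g. sum 3 evaluates to 6 *)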
Design specifications
With regard to the overall software design process, the methods considered here are tailored to design specifications, each consisting of a many-sorted signature SIG, denoting the data, functions and predicates to be specified, and a set of Horn clauses over SIG, allowing more or less abstract presentations of declarative programs and the data structures they use and manipulate (cf. Sects. 1.1 and 1.2). Associated with a design specification DS is a requirement specification, the conjecture section of DS, which consists of correctness conditions on DS. While Horn clauses suffice as design axioms, they are not always sufficient for specifying requirements. Hence we admit positive Gentzen clauses, which may involve disjunctions and existential quantifiers, in a requirement specification (cf. Sect. 1.4).
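As a small illustration (a standard example, not taken from the book), a design specification over a signature with the constructors 0 and s, the function + and the predicate ≤ might contain the Horn axioms

   0+y ≡ y
   s(x)+y ≡ s(x+y)
   0 ≤ x
   s(x) ≤ s(y) ⇐ x ≤ y

whereas a requirement specification may need the additional expressive power of positive Gentzen clauses, for instance a disjunction stating the totality of ≤ and an existential quantifier relating ≤ to +:

   x ≤ y ∨ y ≤ x
   ∃z: x+z ≡ y ⇐ x ≤ y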