Another possible title for this chapter might have been “some non-standard logics”, since its principal aim is to illustrate how the methods we introduced for the standard logics M, I and C are applicable in different settings as well.
For the illustrations we have chosen two logics which are of considerable interest in their own right: the well-known modal logic S4, and linear logic. For a long time modal logic was a fairly remote corner of logic. In recent times the interest in modal and tense logics has increased considerably, because of their usefulness in artificial intelligence and computer science. For example, modal logics have been used (1) in modelling epistemic notions such as belief and knowledge, (2) in the modelling of the behaviour of programs, (3) in the theory of non-monotonic reasoning.
The language of modal logics is an extension of the language of first-order predicate logic by one or more propositional operators, modal operators or modalities. Nowadays modal logics are extremely diverse: several primitive modalities, binary and ternary operators, intuitionistic logic as basis, etc. We have chosen S4 as our example, since it has a fairly well-investigated proof theory and provides us with the classic example of a modal embedding result: intuitionistic logic can be faithfully embedded into S4 via a so-called modal translation, a result presented in section 9.2.
Linear logic is one of the most interesting examples of what are often called “substructural logics” – logics which in their Gentzen systems do not have all the structural rules of Weakening, Exchange and Contraction.
Proof theory may be roughly divided into two parts: structural proof theory and interpretational proof theory. Structural proof theory is based on a combinatorial analysis of the structure of formal proofs; the central methods are cut elimination and normalization.
In interpretational proof theory, the tools are (often semantically motivated) syntactical translations of one formal theory into another. We shall encounter examples of such translations in this book, such as the Gödel-Gentzen embedding of classical logic into minimal logic (2.3), and the modal embedding of intuitionistic logic into the modal logic S4 (9.2). Other well-known examples from the literature are the formalized version of Kleene's realizability for intuitionistic arithmetic and Gödel's Dialectica interpretation (see, for example, Troelstra [1973]).
The present text is concerned with the more basic parts of structural proof theory. In the first part of this text (chapters 2–7) we study several formalizations of standard logics. “Standard logics”, in this text, means minimal, intuitionistic and classical first-order predicate logic. Chapter 8 describes the connection between cartesian closed categories and minimal conjunction-implication logic; this serves as an example of the applications of proof theory in category theory. Chapter 9 illustrates the extension to other logics (namely the modal logic S4 and linear logic) of the techniques introduced before in the study of standard logics. The final two chapters deal with first-order arithmetic and second-order logic respectively.
The discovery of the set-theoretic paradoxes around the turn of the century, and the resulting uncertainties and doubts concerning the use of high-level abstractions among mathematicians, led D. Hilbert to the formulation of his programme: to prove the consistency of axiomatizations of the essential parts of mathematics by methods which might be considered as evident and reliable because of their elementary combinatorial (“finitistic”) character.
Although, by Gödel's incompleteness results, Hilbert's programme could not be carried out as originally envisaged, for a long time variations of Hilbert's programme have been the driving force behind the development of proof theory. Since the programme called for a complete formalization of the relevant parts of mathematics, including the logical steps in mathematical arguments, interest in proofs as combinatorial structures in their own right was awakened. This is the subject of structural proof theory; its true beginnings may be dated from the publication of the landmark paper Gentzen [1935].
Nowadays there are more reasons, besides Hilbert's programme, for studying structural proof theory. For example, automated theorem proving implies an interest in proofs as combinatorial structures; and in logic programming, formal deductions are used in computing.
There are several monographs on proof theory (Schütte [1960,1977], Takeuti [1987], Pohlers [1989]) inspired by Hilbert's programme and the questions this engendered, such as “measuring” the strength of subsystems of analysis in terms of provable instances of transfinite induction for definable well-orderings (more precisely, ordinal notations).
A list is a widely used container abstraction. Lists come in various flavors, so we really have a family of list abstractions. In real life, lists are used to hold information stored in a particular sequence. This information may be ordered as in a telephone directory or unordered as in a grocery shopping list.
Some lists allow items to be stored in any order whereas others require a sequential ordering of their data. The simplest list allows the addition of objects, removal of objects, and access to objects only at two ends, front and rear. An “indexable” list extends a simple list by allowing the insertion of objects, removal of objects, and access of objects at a particular index. A “positionable” list extends a simple list by allowing the addition of objects, removal of objects, and access to objects before or after a specified object in the list. Finally, an “ordered” list extends SearchTable (see Chapter 10) and requires that its elements be comparable. A strict ordering relationship is maintained among the elements of an ordered list, just as in a search table.
Lists may be implemented in many ways, both fixed and dynamic. Perhaps the simplest implementation is a singly linked dynamic implementation. Here links flow in only one direction: from the start of the list to its end. One may move quite easily in a singly linked list from a given node to its successor but not to its predecessor.
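A singly linked implementation can be sketched as follows. This is an illustrative sketch, not the book's actual foundations classes; the names SinglyLinkedList and Node are our own.

```java
// A minimal singly linked list sketch. Links flow in one direction only,
// so moving to a successor is easy but finding a predecessor requires
// a traversal from the head.
class SinglyLinkedList<E> {
    private static class Node<E> {
        E data;
        Node<E> next;   // the single link, pointing toward the end of the list
        Node(E data) { this.data = data; }
    }

    private Node<E> head;
    private int size;

    // Insert at the front: O(1), no traversal needed.
    public void addFirst(E item) {
        Node<E> node = new Node<>(item);
        node.next = head;
        head = node;
        size++;
    }

    // Access by position: follow successor links from the head.
    public E get(int index) {
        Node<E> cur = head;
        for (int i = 0; i < index; i++) cur = cur.next;
        return cur.data;
    }

    public int size() { return size; }
}
```

Note that `get` must walk forward from the head on every call; a doubly linked variant would add a `prev` link to make predecessor access equally cheap, at the cost of one extra reference per node.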
Most modern software presents a GUI (graphical user interface) for interaction with the user. The GUI typically appears as a window with a variety of widgets (visual components enabling user interaction) in it. These widgets provide information to the user and provide a mechanism for accepting user actions to direct the application. Prior to the Windows revolution, most software was executed from a console using text commands with output in textual format as well. It is our goal to provide the reader with essential knowledge for understanding, designing, and implementing simple GUI applications. In this chapter we present the basic concepts that underlie GUI programming. An overview is given of Java classes that support GUI applications, including those classes that are part of the AWT (abstract windowing toolkit) and the JFC (Java foundation classes). Also in this chapter we present conceptually the design pattern called MVC (model view controller). Implementation in Java of GUI applications and MVC is covered in Chapter 6.
In discussing the operation of a GUI application we may choose one of two points of view: (1) that of the user, or (2) that of the application. The user naturally focuses on the first point of view, whereas the developer of a GUI application must attend to both. In that spirit we develop a description of the roles, expectations, and responsibilities of the two major players: user and application.
Hash tables are containers that represent a collection of objects inserted at computed index locations. Each object inserted in the hash table is associated with a hash index. The process of hashing involves the computation of an integer index (the hash index) for a given object (such as a string). If designed properly, the hash computation (1) should be fast, and (2) when done repeatedly for a set of keys to be inserted in a hash table should produce hash indices uniformly distributed across the range of index values for the hash table. The term “hashing” is derived from the observation that there should be little if any obvious association between the object being inserted and its hash index. Two closely related objects such as the strings “time” and “lime” should generally produce unrelated hash indices. Thus hashing involves distributing objects into what appears to be random (but reproducible) locations in the table.
When two distinct objects produce the same hash index, we refer to this as a collision. Clearly the two objects cannot be placed at the same index location in the table. A collision resolution algorithm must be designed to place the second object at a location distinct from the first when their hash indices are identical.
The two fundamental problems associated with the construction of hash tables are:
the design of an efficient hash function that distributes the index values of inserted objects uniformly across the table
the design of a collision resolution algorithm that places an object at a location distinct from another object with the same hash index
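Both problems can be illustrated with a small sketch. The polynomial string hash and the linear-probing resolution below are common textbook choices, not necessarily the ones used in this book's hash table classes.

```java
// A sketch of a simple polynomial string hash plus linear-probing
// collision resolution; illustrative only.
class SimpleHash {
    // Compute a hash index in [0, tableSize) for a string key.
    static int hashIndex(String key, int tableSize) {
        int h = 0;
        for (int i = 0; i < key.length(); i++) {
            h = (31 * h + key.charAt(i)) % tableSize;  // keep the value in range
        }
        return h;
    }

    // Linear probing: if the computed slot is occupied by a different key,
    // step to the next slot (wrapping around) until a free or matching slot.
    static int probe(Object[] table, String key) {
        int i = hashIndex(key, table.length);
        while (table[i] != null && !table[i].equals(key)) {
            i = (i + 1) % table.length;
        }
        return i;
    }
}
```

With a good hash function, the strings “time” and “lime” would typically land at unrelated indices even though they differ in a single character, which is exactly the “little obvious association” the text describes.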
This chapter groups together three important data structures: trees, heaps, and priority queues. Trees are our first example of a nonlinear structure for containing objects. Although conceptually more complex than linear data structures, trees offer the opportunity for improved efficiency in operations such as inserting, removing, and searching for contained objects. Heaps are also nonlinear in structure, and their contained objects must be organized in agreement with an order relationship between each node and its descendants. A heap may be efficiently implemented using a binary tree. A priority queue is a special kind of queue that contains prioritized objects (usually based on a key) in such a way that objects are removed based on their priority (highest priority first). Priority queues may be implemented using a heap. There is a nonessential but beneficial relationship among these three data structures; that is why they are grouped together in this chapter. Additional variations on binary trees are covered in later chapters.
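The order relationship between each node and its descendants can be made concrete with a small array-based min-heap sketch (our own illustrative class, not the book's implementation); here every parent is no larger than its children, so the smallest element is always at the root.

```java
import java.util.ArrayList;

// A minimal array-based min-heap sketch. Node i has children at
// indices 2i+1 and 2i+2, and each parent is <= its children.
class MinHeap {
    private final ArrayList<Integer> a = new ArrayList<>();

    public void insert(int x) {
        a.add(x);
        int i = a.size() - 1;
        // Sift up: restore the heap order along the path to the root.
        while (i > 0 && a.get((i - 1) / 2) > a.get(i)) {
            int p = (i - 1) / 2;
            int tmp = a.get(p); a.set(p, a.get(i)); a.set(i, tmp);
            i = p;
        }
    }

    // Remove and return the smallest element (the root).
    public int removeMin() {
        int min = a.get(0);
        int last = a.remove(a.size() - 1);
        if (!a.isEmpty()) {
            a.set(0, last);
            int i = 0;
            // Sift down: push the moved element below its smaller child.
            while (true) {
                int l = 2 * i + 1, r = 2 * i + 2, s = i;
                if (l < a.size() && a.get(l) < a.get(s)) s = l;
                if (r < a.size() && a.get(r) < a.get(s)) s = r;
                if (s == i) break;
                int tmp = a.get(s); a.set(s, a.get(i)); a.set(i, tmp);
                i = s;
            }
        }
        return min;
    }

    public int size() { return a.size(); }
}
```

A priority queue built on such a heap removes its highest-priority element with each call to `removeMin`, in O(log n) time per operation.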
Trees
A tree is a nonlinear data structure that derives its name from a similarity between its defining terminology and our friends in the forest, real trees. A tree data structure is considerably more constrained in its variety than a real tree and is typically viewed upside down, with its root at the top and leaves on the bottom. A tree is usually accessed from its root, then down through its branches to the leaves.
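A binary tree node can be sketched as follows; the class and method names here are illustrative, not the book's actual tree classes.

```java
// A minimal binary tree node sketch: each node holds data and links
// to at most two children, branching downward toward the leaves.
class TreeNode<E> {
    E data;
    TreeNode<E> left, right;

    TreeNode(E data) { this.data = data; }

    // Height: number of levels from the given node down to its deepest leaf.
    static int height(TreeNode<?> node) {
        if (node == null) return 0;
        return 1 + Math.max(height(node.left), height(node.right));
    }
}
```

Because access starts at the root and proceeds down the branches, operations on a reasonably balanced tree touch only one root-to-leaf path, which is the source of the efficiency gains mentioned above.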
A box of paper clips, a stack of trays in a cafeteria, and a room full of desks, chairs, lamps, and other furniture are containers. An array of records, a queue of customers at a movie theatre, a bag of groceries, a set of lottery tickets, a dictionary of words and their definitions, and a database of patient records are additional examples of containers. Some of the containers cited above – such as the box of paper clips, set of lottery tickets, and dictionary of words and their definitions – consist of identical types of objects, whereas the other containers consist of a mixture of object types. Each type of container has its own rules for ordering and accessing its entities.
It is important to make a distinction between the container object and the things that it contains. For example, we can distinguish the box that holds paper clips from the paper clips themselves. The box has an identity and existence even if it is empty. It is common to take home empty paper bags from a supermarket that may later be used as garbage bags.
This chapter, as its name implies, focuses on containers. It sets the stage for almost everything that will be done in later chapters. The study of data structures is the study of containers. In this chapter we delineate the behavior of many different container abstract data types.
All the foundations classes and the supporting source files for the book are contained in a single compressed file named foundations.zip. This file may be downloaded from the Cambridge University Press Web site at http://www.cup.org.
Extract the entire contents of the foundations.zip file into the directory of your choice. We will refer to this directory as user-dir in our discussion. A typical choice for user-dir might be C:\CS2notes. The structure of directories and files created in user-dir by the extraction is shown in Figure C.1.
Chapters 2 and 10, plus the appendices, have no supporting Java files. Each folder has supporting source files for laboratories and test programs discussed in its corresponding chapter. The entire structure is only 1.8 MB (1.13 MB of that is in a single file called distinct.txt containing words for use by examples in Chapter 16). File foundations.jar contains all the compiled class files for the foundations package.
A typical directory structure for the chapter folders is shown in Figure C.2 for Chapters 9 and 14. The docs folder in Chapter 9 provides javadoc generated documentation for class foundations.Fraction. Each GUI laboratory has application and user-interface source files plus a batch file that compiles and runs the application. These laboratories were developed using JBuilder3. The foundations folders contain compilable (do-nothing) source file stubs that are to be used in specific exercises. The support folders typically contain short test programs that are console-based or source file stubs to be used in exercises.
This appendix presents a brief introduction to UML notation as used in the book. For more detailed discussion of UML, its history, notation, documentation, and uses, the reader is referred to the UML Web page for Rational Software Corporation:
http://www.rational.com/uml/
Representing Classes in UML
UML notation provides a rich variety of options for graphically representing the details of a class. The basic icon for a class is a rectangular box with one, two, or three compartments as shown in Figure A.1. The compartments contain strings and special symbols. The Name compartment is required. The two List compartments typically contain attributes and operations and may be suppressed as desired. Within each compartment, UML offers many options for amount of detail to be shown.
Among the options for detail to be shown in the three compartments are the following:
String – an identifier representing a class name, field name, or method name.
«stereotype-string» – A string in guillemets is a stereotype. Stereotypes may be thought of as categories that further qualify a class, field, or method. For example, we may use the stereotype «interface» to identify a class that is a Java interface. We may apply the stereotype «final» to a constant field and the stereotypes «command» or «query» to methods.
+, -, # – Visibility is indicated using a “+” symbol for public, a “-” symbol for private, a “#” symbol for protected, or no symbol for package (Java default). […]
The main application of this chapter is algebraic expression evaluation. This is a classic and important problem. An algebraic expression containing single-character operands and the four arithmetic operators is input as a string. When numeric values are assigned to each operand, our goal is to evaluate the arithmetic expression on the fly. The String representing the arithmetic expression is not known until runtime.
What makes this problem particularly interesting is that the core of the solution requires two stacks, each holding different types of data. The solution illustrates how abstractions (the stack in this case) may be utilized to provide an effective underpinning for the solution to a complex problem.
Algebraic Expression Evaluation
Problem: Develop a Java software application that takes an algebraic expression as an input string. An example of such an algebraic expression is (a + b) * c - d + e * f. After numeric values are assigned to each operand (values for a, b, c, d, e, and f), the algorithm must compute the value of the algebraic expression.
Input: A string representing an algebraic expression involving n operands and an n-tuple representing the values for the operands (i.e., numeric values for each operand).
Output: The value of the expression for the particular n-tuple of input operand values.
Solution of Problem:
1. Conversion from infix to postfix
The first step in solving this problem involves a transformation of the input algebraic expression from infix to postfix representation.
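The conversion step can be sketched as follows, using a single operator stack. This is an illustrative sketch assuming single-character operands, the four operators, and balanced parentheses; it is not the book's actual solution code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A sketch of infix-to-postfix conversion. Operands pass straight to the
// output; operators wait on a stack until an operator of lower precedence
// (or a parenthesis boundary) forces them out.
class InfixToPostfix {
    private static int prec(char op) {
        return (op == '+' || op == '-') ? 1 : 2;   // * and / bind tighter
    }

    static String convert(String infix) {
        StringBuilder out = new StringBuilder();
        Deque<Character> ops = new ArrayDeque<>();  // the operator stack
        for (char c : infix.toCharArray()) {
            if (Character.isLetter(c)) {
                out.append(c);                       // operands go straight out
            } else if (c == '(') {
                ops.push(c);
            } else if (c == ')') {
                while (ops.peek() != '(') out.append(ops.pop());
                ops.pop();                           // discard the '('
            } else if (c == '+' || c == '-' || c == '*' || c == '/') {
                // Pop operators of equal or higher precedence first
                // (left-to-right associativity).
                while (!ops.isEmpty() && ops.peek() != '('
                        && prec(ops.peek()) >= prec(c)) {
                    out.append(ops.pop());
                }
                ops.push(c);
            } // spaces and other characters are ignored in this sketch
        }
        while (!ops.isEmpty()) out.append(ops.pop());
        return out.toString();
    }
}
```

For the example expression, `convert("(a+b)*c-d+e*f")` yields `ab+c*d-ef*+`. The second stack mentioned earlier, holding numeric operand values, comes into play when this postfix string is evaluated.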
An essential and important part of computer problem solving is the development of algorithms – the detailed logic and steps required to solve a problem. All programmers are introduced very early to a number of useful programming constructs for building algorithms. These include assignment, branching, and iteration. Branching provides a means for conditional or alternative execution of steps in an algorithm. Iteration provides a convenient way to perform repetitive steps. Without branching and iteration the algorithms for even simple problem solutions would be either impossible or verbose and cumbersome. Another useful concept for construction of algorithms is recursion. Recursion is a construct that provides an alternative to iteration for repetitive steps. In many problems requiring repetitive steps we may find equivalent iterative and recursive algorithms as solutions.
What is recursion? A recursion may be described as the process of executing the steps in a recursive algorithm. So what is recursive? We sometimes tell our students, “If you look up ‘recursive’ in the dictionary, its definition is ‘see recursive.’” We deduce from this anecdotal definition that a recursive algorithm is defined in terms of itself. The actual definition found in one dictionary, “pertaining to or using a rule or procedure that can be applied repeatedly,” is not very helpful.
In developing an understanding for recursion we rely on its use in mathematics, algorithms, and computer programming. From mathematics we find recursive functions defined in terms of themselves.
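The factorial function n! is the standard first example: mathematically, 0! = 1 and n! = n · (n−1)!, a function defined in terms of itself. The sketch below shows equivalent recursive and iterative Java solutions to this one repetitive problem (the class name is ours, for illustration).

```java
// Equivalent recursive and iterative solutions to the same
// repetitive problem: computing n!.
class Factorial {
    // Recursive: defined in terms of itself, with a base case to stop.
    static long recursive(int n) {
        if (n <= 1) return 1;          // base case ends the recursion
        return n * recursive(n - 1);   // recursive case
    }

    // Iterative: the same repetition expressed with a loop.
    static long iterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }
}
```

The base case is what saves the recursive version from the dictionary's circular “see recursive”: each call works on a smaller instance of the problem until one is reached that can be answered directly.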