Analysis & history

To investigate calculations is to analyze symbolic processes carried out by calculators; that is a lesson we owe to Turing. Taking the lesson seriously, I will formulate restrictive conditions and well-motivated axioms for two types of calculators, namely, for human (computing) agents and mechanical (computing) devices. My objective is to resolve central foundational problems in logic and cognitive science that require a deeper understanding of the nature of calculations. Without such an understanding, neither the scope of undecidability and incompleteness results in logic nor the significance of computational models in cognitive science can be explored in their proper generality. The claim for logic is almost trivial and implies the claim for cognitive science; after all, the relevant logical notions have been used when striving to create artificial intelligence or to model mental processes in humans.
The foundational problems come to the fore in arguments for Church's or Turing's Thesis, asserting that an informal notion of effective calculability is captured fully by a particular precise mathematical concept. Church's Thesis, for example, claims in its original form that the effectively calculable number-theoretic functions are exactly those functions whose values are computable in Gödel's equational calculus. My strategy, when arguing for the adequacy of a notion, is to bypass theses altogether and avoid the fruitless discussion of their (un-)provability. This can be achieved by conceptual analysis, i.e., by sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework. Such an analysis will be provided for the two types of calculators I mentioned, examining closely and recasting thoroughly the work of Turing and Gandy. My paper builds on systematic and historical work I have pursued for more than a decade, much of it in collaboration with John Byrnes, Daniele Mundici, and Guglielmo Tamburrini. The considerations presented here reshape and extend the earlier systematic work in a novel and, for me, unexpected way; their aim is nevertheless extremely classical, namely, to provide what Hilbert called eine Tieferlegung der Fundamente (a deepening of the foundations). It will also become evident that they are embedded in an illuminating historical context.
There is general agreement that Turing, in 1936, gave the most convincing analysis of effective calculability in his paper “On computable numbers, with an application to the Entscheidungsproblem”.
We first consider techniques for algorithms (i.e., relatively small program components whose functional behavior may be specified in terms of “before” and “after” properties). Chapter 1 discusses how to specify what a simple algorithmic code fragment should do. Chapter 2 describes basic techniques for verifying code (i.e., proving mathematically that it does what its specification says it should do). Chapter 3 discusses a variety of small examples of these methods, demonstrating also how they may be used to construct (i.e., systematically code) correct programs. Chapter 4 describes some additional verification techniques and further examples.
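For instance, a “before”/“after” specification of a tiny algorithmic fragment might be rendered executably as follows (a minimal sketch in Python; the function and its contract are our illustration, not an example taken from the chapters):

```python
# Sketch: "before" and "after" properties of integer division, expressed
# as runtime assertions. The names and style here are illustrative.

def int_div(x: int, y: int) -> tuple[int, int]:
    """Return (q, r) with x == q*y + r and 0 <= r < y."""
    assert x >= 0 and y > 0                     # "before" property (precondition)
    q, r = 0, x
    while r >= y:                               # maintains x == q*y + r throughout
        q, r = q + 1, r - y
    assert x == q * y + r and 0 <= r < y        # "after" property (postcondition)
    return q, r
```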
Additional Reading
[Wir73, AA78, Rey81, Bac86, Gor88, Dro89, Dah92, BE+94, Cas94, Sta99, MS01] are recommended for further explanation and additional examples. A more formal approach to algorithm development is described in [Gri81, DF88, Kal90, Coh90, Mor94].
A detailed statement of what users (or clients or customers) of a program or program fragment expect it to do and what the implementers or developers of the code expect of its environment is called a specification for that code. Sometimes the user and developer of the code might happen to be the same person wearing different hats; however, it is best to think of them as independent, possibly with conflicting interests.
If the code being specified is sufficiently complex, several programmers might be involved in writing it and several other programmers might be involved in writing a program to use the code fragment. Furthermore, there might be several different implementations of a specification, and several different applications that use the implementations. A specification is essentially a contract among all these developers and users, stating exactly what must be agreed about the observable effects of executing the code and the environment in which it will be executing, and no more. The expectations of the users become obligations on the developers, and vice versa.
Normally, details of how the computational task is to be carried out would not be in a specification: the users shouldn't care, and implementers might then be prevented from using other implementation techniques. Similarly, a specification would normally not contain details of how applications are to use the code: the developers shouldn't care, and this might preclude other applications of the code being specified.
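To make this concrete, here is a hedged sketch (in Python, with names of our own choosing) of a specification for a sorting fragment: it states only the observable effect, so implementers remain free to use any technique, and it says nothing about how applications will use the result:

```python
# Sketch: a specification as a checkable contract. Only the observable
# effect is stated (the output is a sorted permutation of the input);
# nothing constrains *how* an implementation achieves it.

from collections import Counter

def satisfies_sort_spec(inp: list[int], out: list[int]) -> bool:
    """The contract both parties agree on: nothing more, nothing less."""
    is_sorted = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    is_permutation = Counter(out) == Counter(inp)
    return is_sorted and is_permutation

# Any implementation meeting the contract is acceptable to users:
assert satisfies_sort_spec([3, 1, 2], sorted([3, 1, 2]))
```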
We now turn our attention to the specification, verification, and construction of data representations. These combine “private” definitions of data-representation variables (and possibly functions) with definitions of “public” functions (or “methods”) that provide a suitably abstract “view” of the data to application programs. Chapter 5 introduces the basic concepts in the context of a simple case study, and Chapter 6 discusses a variety of other examples.
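As a foretaste of the case study, the following sketch (our own, in Python; not the book's example) shows the shape of such a data representation: a “private” variable hidden behind “public” methods that present an abstract view of a set to application programs:

```python
# Sketch of a data representation: applications see only the abstract
# set behaviour; the private list could later be replaced by a sorted
# array or a hash table without affecting any user of the class.

class IntSet:
    """Abstract view: a finite set of integers."""

    def __init__(self) -> None:
        self._elems: list[int] = []       # "private" representation variable

    def insert(self, x: int) -> None:     # "public" method
        if x not in self._elems:
            self._elems.append(x)

    def member(self, x: int) -> bool:     # "public" method
        return x in self._elems
```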
Additional Reading
The following are recommended for further discussion and additional examples: [Hoa72a, Hoa72b, Jon80, Rey81, Jon86, LG86, LG00].
Recognizing patterns in strings is ubiquitous in computing. For example, a programming-language compiler has to recognize whether input programs match the “pattern” defined by the syntax rules of the programming language. Many important software tools, such as text editors, command interpreters, and formatters, require the capability to recognize patterns in strings. In fact, any program that reads textual input from its users must implicitly test the well-formedness of that input. In the following chapters, we introduce some of the interesting concepts and techniques that may be used to address this class of applications.
Strings
In computing, the term string is normally understood to refer to finite sequences of characters drawn from a character set such as ASCII. We will find it convenient to generalize this concept slightly by allowing string components to be drawn from an arbitrary finite set, termed the alphabet or the vocabulary.
Definition 7.1 If Σ is any finite set, a string over Σ is any finite sequence of elements of Σ.
Strings may also be termed words or sentences. String components (i.e., elements of Σ) might be termed, depending on the context, characters, tokens, symbols, atoms, or generators. If Σ is the ASCII character set, a string over Σ is exactly what is normally considered to be a string. But if we are discussing the syntax of a programming language, the relevant vocabulary might be a set of lexical tokens, ignoring, at this level of abstraction, the substructure of multiple-character tokens such as <=.
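Definition 7.1 can be rendered directly in code. The following sketch (our encoding, assuming strings are modelled as Python tuples) checks that a sequence is a string over a given Σ, including the case where Σ is a vocabulary of lexical tokens such as <=:

```python
# Sketch of Definition 7.1 in executable form: a string over sigma is
# any finite sequence whose components are all drawn from sigma.

def is_string_over(w: tuple, sigma: set) -> bool:
    """True iff w is a (finite) string over the alphabet sigma."""
    return all(symbol in sigma for symbol in w)

# Sigma as characters:
assert is_string_over(('a', 'b', 'a'), {'a', 'b', 'c'})

# Sigma as lexical tokens, where '<=' counts as a single symbol:
tokens = {'if', 'x', '<=', 'y'}
assert is_string_over(('if', 'x', '<=', 'y'), tokens)
```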
Professor Higgins wants to set up a small database to keep track of which students are enrolled in his course on computational metaphysics. He asks his programmer, Eliza Doolittle, to write a program that will allow him or his secretary to
add a student record to the database;
determine whether a particular student is currently enrolled; and
remove a student record from the database.
Students are to be identified by their student numbers.
Eliza considers the problem and points out that the specification is incomplete: what should happen if a student is “added” when he or she is already enrolled? Or if a student is “removed” when that student has not yet enrolled? Professor Higgins replies that he would never make such mistakes; however, because the system will also be used by his secretary, he agrees that such operations should produce appropriate warning messages to the user.
Eliza intends to structure the program as two modules: one to manage the data representation and the other to provide the user interface. She decides that it should not be the responsibility of the data-representation module itself to produce error messages to the user; the user-interface module should do this. But where should the errors be detected?
If the data-representation module is responsible for detecting errors, it would be necessary for the operations to return error flags or to raise exceptions, which would significantly complicate the interface between the modules.
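An alternative, sketched below in Python (our own rendering with hypothetical names, not necessarily Eliza's design), is to keep the data-representation module free of error handling altogether: it exports the membership test, and the user-interface module uses that query to detect the error cases and produce the warnings itself, leaving the interface between the modules simple:

```python
# Hedged sketch of the design question: the data-representation module
# (class Enrollment, our name) does no error reporting; the
# user-interface code queries before updating and warns the user.

class Enrollment:
    """Data-representation module: no error messages, no error flags."""

    def __init__(self) -> None:
        self._ids: set[int] = set()       # private representation

    def add(self, sid: int) -> None:
        self._ids.add(sid)

    def remove(self, sid: int) -> None:
        self._ids.discard(sid)

    def enrolled(self, sid: int) -> bool:
        return sid in self._ids

def ui_add(db: Enrollment, sid: int) -> None:
    """User-interface module: detects the error and warns the user."""
    if db.enrolled(sid):
        print(f"warning: student {sid} is already enrolled")
    else:
        db.add(sid)
```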