We have given in earlier chapters several different proofs of Church's theorem to the effect that first-order logic is undecidable: there is no effective procedure that applied to any first-order sentence will in a finite amount of time tell us whether or not it is valid. This negative result leaves room on the one hand for contrasting positive results, and on the other hand for sharper negative results. The most striking of the former is the Löwenheim–Behmann theorem, to the effect that the logic of monadic (one-place) predicates is decidable, even when the two-place logical predicate of identity is admitted. The most striking of the latter is the Church–Herbrand theorem that the logic of a single dyadic (two-place) predicate is undecidable. These theorems are presented in sections 21.2 and 21.3 after some general discussion of solvable and unsolvable cases of the decision problem for logic in section 21.1. While the proof of Church's theorem requires the use of considerable computability theory (the theory of recursive functions, or of Turing machines), that is not so for the proof of the Löwenheim–Behmann theorem or for the proof that Church's theorem implies the Church–Herbrand theorem. The former uses only material developed by Chapter 11. The latter uses also the elimination of function symbols and identity from section 19.4, but nothing more than this. The proofs of these two results, positive and negative, are independent of each other.
This chapter and the next contain a summary of material, mainly definitions, needed for later chapters, of a kind that can be found expounded more fully and at a more relaxed pace in introductory-level logic textbooks. Section 9.1 gives an overview of the two groups of notions from logical theory that will be of most concern: notions pertaining to formulas and sentences, and notions pertaining to truth under an interpretation. The former group of notions, called syntactic, will be further studied in section 9.2, and the latter group, called semantic, in the next chapter.
First-Order Logic
Logic has traditionally been concerned with relations among statements, and with properties of statements, that hold by virtue of ‘form’ alone, regardless of ‘content’. For instance, consider the following argument:
(1) A mother or father of a person is an ancestor of that person.
(2) An ancestor of an ancestor of a person is an ancestor of that person.
(3) Sarah is the mother of Isaac, and Isaac is the father of Jacob.
(4) Therefore, Sarah is an ancestor of Jacob.
Logic teaches that the premisses (1)–(3) (logically) imply or have as a (logical) consequence the conclusion (4), because in any argument of the same form, if the premisses are true, then the conclusion is true.
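For illustration, one natural way of exhibiting this common form (the particular symbols are our own choice here), writing, say, Mxy for 'x is a mother of y', Fxy for 'x is a father of y', Axy for 'x is an ancestor of y', and s, i, j for Sarah, Isaac, and Jacob, would be the following:

(1′) ∀x∀y((Mxy ∨ Fxy) → Axy)
(2′) ∀x∀y∀z((Axy ∧ Ayz) → Axz)
(3′) Msi ∧ Fij
(4′) Asj

On any interpretation making (1′)–(3′) true, (4′) comes out true as well: from (3′) and (1′) we obtain Asi and Aij, and then (2′) yields Asj. Nothing in this reasoning depends on what M, F, A, s, i, j are taken to mean, which is what is meant by saying the implication holds by virtue of form alone.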
In the preceding several chapters we have introduced the intuitive notion of effective computability, and studied three rigorously defined technical notions of computability: Turing computability, abacus computability, and recursive computability, noting along the way that any function that is computable in any of these technical senses is computable in the intuitive sense. We have also proved that all recursive functions are abacus computable and that all abacus-computable functions are Turing computable. In this chapter we close the circle by showing that all Turing-computable functions are recursive, so that all three notions of computability are equivalent. It immediately follows that Turing's thesis, claiming that all effectively computable functions are Turing computable, is equivalent to Church's thesis, claiming that all effectively computable functions are recursive. The equivalence of these two theses, originally advanced independently of each other, does not amount to a rigorous proof of either, but is surely important evidence in favor of both. The proof of the recursiveness of Turing-computable functions occupies section 8.1. Some consequences of the proof of equivalence of the three notions of computability are pointed out in section 8.2, the most important being the existence of a universal Turing machine, a Turing machine capable of simulating the behavior of any other Turing machine desired. The optional section 8.3 rounds out the theory of computability by collecting basic facts about recursively enumerable sets, sets of natural numbers that can be enumerated by a recursive function. […]
In the preceding chapter we introduced the classes of primitive recursive and recursive functions. In this chapter we introduce the related notions of primitive recursive and recursive sets and relations, which help provide many more examples of primitive recursive and recursive functions. The basic notions are developed in section 7.1. Section 7.2 introduces the related notion of a semirecursive set or relation. The optional section 7.3 presents examples of recursive total functions that are not primitive recursive.
Recursive Relations
A set of, say, natural numbers is effectively decidable if there is an effective procedure that, applied to a natural number, in a finite amount of time gives the correct answer to the question whether it belongs to the set. Thus, representing the answer ‘yes’ by 1 and the answer ‘no’ by 0, a set is effectively decidable if and only if its characteristic function is effectively computable, where the characteristic function is the function that takes the value 1 for numbers in the set, and the value 0 for numbers not in the set. A set is called recursively decidable, or simply recursive for short, if its characteristic function is recursive, and is called primitive recursive if its characteristic function is primitive recursive. Since recursive functions are effectively computable, recursive sets are effectively decidable. Church's thesis, according to which all effectively computable functions are recursive, implies that all effectively decidable sets are recursive.
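As a minimal illustrative sketch (in Python, with function names and the example set chosen by us, not taken from the text), the relation between a yes/no decision procedure and the corresponding characteristic function looks like this:

# Package a yes/no decision procedure as a characteristic function:
# the function returning 1 for members of the set and 0 for non-members.
def characteristic(decides):
    return lambda n: 1 if decides(n) else 0

# Example set: the even numbers (an arbitrary illustrative choice).
def is_even(n):
    return n % 2 == 0

chi_even = characteristic(is_even)
print(chi_even(4), chi_even(7))   # prints: 1 0

A set is effectively decidable just in case some such procedure for computing its characteristic function can in principle be carried out mechanically.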
Ramsey's theorem is a combinatorial result about finite sets with a proof that has interesting logical features. To prove this result about finite sets, we are first going to prove, in section 26.1, an analogous result about infinite sets, and are then going to derive, in section 26.2, the finite result from the infinite result. The derivation will be an application of the compactness theorem. Nothing in the proof of Ramsey's theorem to be presented requires familiarity with logic beyond the statement of the compactness theorem, but at the end of the chapter we indicate how Ramsey theory provides an example of a sentence undecidable in P that is more natural mathematically than any we have encountered so far.
Ramsey's Theorem: Finitary and Infinitary
There is an old puzzle about a party attended by six persons, at which any two of the six either like each other or dislike each other: the problem is to show that at the party there are three persons, any two of whom like each other, or there are three persons, any two of whom dislike each other.
The solution: Let a be one of the six. Since there are five others, either there will be (at least) three others that a likes or there will be three others that a dislikes. Suppose a likes them. (The argument is similar if a dislikes them.) Call the three b, c, d. If some two of b, c, d like each other, then those two together with a are three persons any two of whom like each other; and if no two of b, c, d like each other, then b, c, d themselves are three persons any two of whom dislike each other.
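The case analysis can also be checked mechanically. The following Python sketch (purely illustrative, with names of our own choosing, and no substitute for the argument) runs through all 2^15 ways of assigning 'like' or 'dislike' to the fifteen pairs among six people and confirms that every assignment yields three mutual likers or three mutual dislikers:

from itertools import combinations, product

people = range(6)
pairs = list(combinations(people, 2))          # the 15 pairs of guests

def has_unanimous_trio(assignment):
    # Map each pair to 1 ('like') or 0 ('dislike').
    relation = dict(zip(pairs, assignment))
    for trio in combinations(people, 3):
        verdicts = {relation[p] for p in combinations(trio, 2)}
        if len(verdicts) == 1:                 # all three pairs agree
            return True
    return False

# Check every possible party of six.
assert all(has_unanimous_trio(a) for a in product((0, 1), repeat=len(pairs)))
print("Every party of six contains a unanimous trio.")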
This chapter connects our work on computability with questions of logic. Section 11.1 presupposes familiarity with the notions of logic from Chapters 9 and 10 and of Turing computability from Chapters 3–4, including the fact that the halting problem is not solvable by any Turing machine, and describes an effective procedure for producing, given any Turing machine M and input n, a set of sentences Γ and a sentence D such that M given input n will eventually halt if and only if Γ implies D. It follows that if there were an effective procedure for deciding when a finite set of sentences implies another sentence, then the halting problem would be solvable; whereas, by Turing's thesis, the latter problem is not solvable, since it is not solvable by a Turing machine. The upshot is that one gets an argument, based on Turing's thesis, for (the Turing–Büchi proof of) Church's theorem: the decision problem for implication is not effectively solvable. Section 11.2 presents a similar argument, the Gödel-style proof of Church's theorem, this time using not Turing machines and Turing's thesis, but primitive recursive and recursive functions and Church's thesis, as in Chapters 6–7. The constructions of the two sections, which are independent of each other, are both instructive; but an entirely different proof, not dependent on Turing's or Church's thesis, will be given in a later chapter, and in that sense the present chapter is optional. […]
Suppose that a sentence A implies a sentence C. The Craig interpolation theorem tells us that in that case there is a sentence B such that A implies B, B implies C, and B involves no nonlogical symbols but such as occur both in A and in C. This is one of the basic results of the theory of models, almost on a par with, say, the compactness theorem. The proof is presented in section 20.1. The proof for the special case where identity and function symbols are absent is an easy further application of the same lemmas that we applied to prove the compactness theorem in Chapter 13, and could have been presented there. But the easiest proof for the general case is by reduction to this special case, using the machinery for the elimination of function symbols and identity developed in section 19.4. Sections 20.2 and 20.3, which are independent of each other, take up two significant corollaries of the interpolation theorem, Robinson's joint consistency theorem and Beth's definability theorem.
Craig's Theorem and Its Proof
We begin with a simple observation.
Proposition. If a sentence A implies a sentence C, then there is a sentence B that A implies, that implies C, and that contains only such constants as are contained in both A and C.
Tarski's theorem tells us that the set V of (code numbers of) first-order sentences of the language of arithmetic that are true in the standard interpretation is not arithmetically definable. In section 23.1 we show that this negative result is poised, so to speak, between two positive results. One is that for each n the set Vn of sentences of the language of arithmetic of degree of complexity n that are true in the standard interpretation is arithmetically definable (in a sense of degree of complexity to be made precise). The other is that the class {V} of sets of natural numbers whose one and only member is V is arithmetically definable (in a sense of arithmetical definability for classes to be made precise). In section 23.2 we take up the question whether the class of arithmetically definable sets of numbers is an arithmetically definable class of sets. The answer is negative, according to Addison's theorem. This result is perhaps most interesting on account of its method of proof, which is a comparatively simple application of the method of forcing originally devised to prove the independence of the continuum hypothesis in set theory (as alluded to in the historical notes to Chapter 18).
Arithmetical Definability and Truth
Throughout this chapter we use L and N for the language of arithmetic and its standard interpretation (previously called L* and N*), and V for the set of code numbers of first-order sentences of L true in N.
The intuitive notion of an effectively computable function is the notion of a function for which there are definite, explicit rules, following which one could in principle compute its value for any given arguments. This chapter studies an extensive class of effectively computable functions, the recursively computable, or simply recursive, functions. According to Church's thesis, these are in fact all the effectively computable functions. Evidence for Church's thesis will be developed in this chapter by accumulating examples of effectively computable functions that turn out to be recursive. The subclass of primitive recursive functions is introduced in section 6.1, and the full class of recursive functions in section 6.2. The next chapter contains further examples. The discussion of recursive computability in this chapter and the next is entirely independent of the discussion of Turing and abacus computability in the preceding three chapters, but in the chapter after next the three notions of computability will be proved equivalent to each other.
Primitive Recursive Functions
Intuitively, the notion of an effectively computable function f from natural numbers to natural numbers is the notion of a function for which there is a finite list of instructions that in principle make it possible to determine the value f(x1, …, xn) for any arguments x1, …, xn. The instructions must be so definite and explicit that they require no external sources of information and no ingenuity to execute.
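As a foretaste of the official definitions in section 6.1, here is a minimal sketch in Python (illustrative only; the function names and presentation are ours) of how such definite, explicit instructions can look, defining addition and multiplication by recursion from the successor function:

def succ(x):
    # The successor function, one of the basic building blocks.
    return x + 1

def add(x, y):
    # Addition by the recursion equations add(x, 0) = x and add(x, y+1) = succ(add(x, y)).
    return x if y == 0 else succ(add(x, y - 1))

def mult(x, y):
    # Multiplication by mult(x, 0) = 0 and mult(x, y+1) = add(mult(x, y), x).
    return 0 if y == 0 else add(mult(x, y - 1), x)

print(add(2, 3), mult(2, 3))   # prints: 5 6

Following these instructions requires no ingenuity and no outside information: each value is computed by a fixed, finite routine of the kind the definition of effective computability has in mind.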