Abstract. We develop possible worlds semantics for □ as a predicate rather than as an operator on sentences. The unary predicate symbol □ is added to the language of arithmetic (or an extension thereof); this yields the extended language. Every world in our possible worlds semantics is the standard model of arithmetic plus an interpretation of □. We investigate possible-worlds models where □⌜A⌝ is true at a world w if and only if A is true in all worlds seen by w. The paradoxes exclude certain frames from being frames for □ as a predicate. We provide some sufficient and also some necessary conditions on frames that are allowed to act as frames for the predicate approach. Completeness results for certain infinitary systems corresponding to well-known modal operator systems are established. We draw some conclusions concerning the current state of the predicate approach to modalities.
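As a rough illustration of the semantic clause just described, and not of the authors' formal apparatus, the following Python sketch evaluates a necessity predicate over a small finite frame; the frame, the base facts, and all names in it are invented for the example.

```python
# A minimal sketch (not the paper's construction): □⌜A⌝ holds at a world w
# exactly when A holds at every world that w sees.

# Hypothetical toy frame: worlds 0, 1, 2 and an accessibility relation R.
WORLDS = [0, 1, 2]
R = {0: {1, 2}, 1: {2}, 2: set()}          # world w sees the worlds in R[w]

# Hypothetical base facts: which atomic sentences hold at which world.
BASE = {0: {"p"}, 1: {"p", "q"}, 2: {"p"}}

def holds(sentence, w):
    """True iff `sentence` holds at world w; sentences are atoms or Nec(...)."""
    if sentence.startswith("Nec(") and sentence.endswith(")"):
        inner = sentence[4:-1]             # the coded sentence the predicate applies to
        return all(holds(inner, v) for v in R[w])
    return sentence in BASE[w]

print(holds("Nec(p)", 0))   # True: p holds at both worlds seen by world 0
print(holds("Nec(q)", 0))   # False: q fails at world 2
print(holds("Nec(q)", 2))   # True (vacuously): world 2 sees no worlds
```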
Modalities as predicates. Modalities like necessity and possibility may be analysed logically in essentially two ways: either as predicates or as operators. In the first case they are applied to singular terms, whereas in the second case they are applied to formulae; in both cases the application yields new formulae. Thus the distinction between the operator and the predicate conception of necessity is made, in the first instance, at the syntactical level. Each conception is tied to a certain kind of semantics. If “necessary” and “possible” are regarded as predicates, they are interpreted as properties of objects, and a decision has to be made concerning what precisely they should be predicates of: syntactical entities like sentences, or contents of syntactical entities like propositions (let us ignore further options like utterances or mental objects). In either case, necessity and possibility are properties of such entities, or, perhaps, relations between such entities and further objects. If “necessary” and “possible” are regarded as operators, they do not express properties or relations in the way predicate and relation expressions do; necessity does not apply to anything, much like the logical connectives or the quantifiers. In this sense the operator conception of necessity is radically deflationary. Similar considerations apply not only to necessity but also to the notions of knowledge, belief, future and past truth, obligation, and so on, which have been treated analogously to necessity.
The simplest sentences of a natural language have two components: a subject and a predicate. ‘Africa is a continent’ and ‘One-half is a natural number’ are examples. In the first, ‘Africa’ is the subject and ‘a continent’ is the predicate, while in the second, ‘One-half’ is the subject and ‘a natural number’ is the predicate. The verb ‘is’ asserts that the subject has the property described by the predicate; briefly, that the predicate is applied to the subject, or that the subject is an argument for the predicate. The resulting sentence may or may not be true.
Some sentences, although in the subject-predicate form, have a predicate with an embedded subject. For example the sentence ‘London is south of Paris’ has ‘London’ as subject and ‘south of Paris’ as predicate. The subject ‘Paris’ is embedded in the predicate. Indeed the sentence can be understood to have two subjects ‘London’ and ‘Paris’ and a predicate ‘south of’, often called a relation. The number of subjects to which a predicate may be applied is called the arity of the predicate. Thus ‘a continent’ and ‘a natural number’ have arity one, while ‘south of’ has arity two.
From an arity two predicate an arity one predicate can be formed by applying the first of its two subjects. Thus for example ‘south of Paris’ is an arity one predicate. Later, notation will be introduced for the predicate suggested by ‘London is south of’. In a slight generalization of the meaning of predicate, a sentence is understood to be a predicate of arity zero since a sentence results when it is applied to no subjects. Thus ‘Africa is a continent’ and ‘London is south of Paris’ are predicates of arity zero. Generalizing further, it will be assumed that there may be predicates of any given finite arity; that is, predicates which may be applied to any given finite number of subjects.
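In programming terms, forming an arity-one predicate from an arity-two predicate by supplying one of its subjects is partial application; the following Python sketch is only an informal analogy, with a made-up `south_of` relation and toy latitude data.

```python
from functools import partial

# Hypothetical arity-two predicate: south_of(x, y) says that x is south of y.
def south_of(x, y):
    LATITUDE = {"London": 51.5, "Paris": 48.9, "Madrid": 40.4}   # toy data
    return LATITUDE[x] < LATITUDE[y]

# Arity-one predicates obtained by supplying one subject.
south_of_paris = partial(south_of, y="Paris")        # 'south of Paris'
london_south_of = partial(south_of, "London")        # 'London is south of'
print(south_of_paris("Madrid"))      # True
print(london_south_of("Madrid"))     # False: London is not south of Madrid

# Arity zero: supplying all subjects yields a sentence, i.e., a truth value.
print(south_of("Madrid", "London"))  # True
```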
Abstract. We continue the study of effective Hausdorff dimension as it was initiated by Lutz. Whereas he uses a generalization of martingales on the Cantor space to introduce this notion, we give a characterization in terms of effective s-dimensional Hausdorff measures, similar to the effectivization of Lebesgue measure by Martin-Löf. It turns out that effective Hausdorff dimension makes it possible to classify sequences according to their ‘degree’ of algorithmic randomness, i.e., their algorithmic density of information. Earlier work of Staiger and Ryabko showed a deep connection between Kolmogorov complexity and Hausdorff dimension. We develop this relationship further and use it to give effective versions of some important properties of (classical) Hausdorff dimension. Finally, we determine the effective dimension of some objects arising in the context of computability theory, such as degrees and spans.
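For orientation only, one frequently cited form of the connection between Kolmogorov complexity and effective Hausdorff dimension, not necessarily the exact formulation developed in this paper, states that for every infinite binary sequence ξ

$$\dim(\xi) \;=\; \liminf_{n\to\infty}\frac{K(\xi\upharpoonright n)}{n},$$

where K is prefix-free Kolmogorov complexity, ξ↾n is the length-n initial segment of ξ, and dim(ξ) is the effective Hausdorff dimension of the singleton {ξ}.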
§1. Introduction. Generally speaking, the concepts of Hausdorff measure and dimension are a generalization of Lebesgue measure theory. In the early 20th century, Hausdorff [9] used Carathéodory's construction of measures to define a whole family of outer measures. For a set of a peculiar topological or geometrical nature, Lebesgue measure is often too coarse to capture the features of the set, so one may ‘pick’ from this family of outer measures one that is suited to the study of that particular set. This is one reason why Hausdorff measure and dimension became a prominent tool in fractal geometry.
Hausdorff dimension is extensively studied in the context of dynamical systems, too. On the Cantor space, the space of all infinite binary sequences, the interplay between dimension and concepts from dynamical systems such as entropy becomes especially close. Early results of Besicovitch [3] and Eggleston [7] established a correspondence between the Hausdorff dimension of frequency sets (i.e., sets of sequences in which every symbol occurs with a certain frequency) and the entropy of a process producing such sequences as typical outcomes. Moreover, under certain conditions the Hausdorff dimension of a set in the Cantor space equals the topological entropy of this set, viewed as a shift space.
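The correspondence mentioned above has a well-known concrete instance, stated here in its binary form (which may differ in detail from the versions cited in the paper): the set of sequences in which the symbol 1 occurs with limiting frequency p has Hausdorff dimension equal to the binary entropy of p,

$$\dim_H\bigl\{\xi\in\{0,1\}^{\infty} : \lim_{n\to\infty}\tfrac1n\,\#\{i<n : \xi(i)=1\}=p\bigr\} \;=\; -p\log_2 p-(1-p)\log_2(1-p).$$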
Church described in [30] a logic of sense and denotation based on the simple theory of types [28]; that is, using the terminology of Carnap, a logic of intension and extension. ITT differs from that logic in two ways. First, ITT is based on TT and not on the simple theory of types; second, and of greater importance, ITT identifies the intension of a predicate with its name, while the logic of Church treats intensions as separate entities with their own types and notation. The justification for this identification is the belief that in a given context a user discovers the intension of a predicate from its name.
The types of ITT, an intensional type theory, are the types of TT, and the constants and variables of ITT are those of TT. The terms of ITT are an extension of those of TT. Definition 29 of term of TT in §2.1.1 is extended for ITT by a fourth clause that introduces a secondary typing for some of the terms of ITT. Since secondary typing is the main feature of ITT that distinguishes it from TT, it will be motivated in §3.1.1 before being formally expressed in clause (4) of Definition 43 in §3.1.2.
Motivation for secondary typing. The purpose of secondary typing in ITT is to provide a simple but unambiguous way of distinguishing between an occurrence of a predicate name where it is being used and an occurrence where it is being mentioned. The necessity for recognizing this distinction has been stressed many times and a systematic use of quotes is traditionally employed for expressing it; see for example [24] or [120]. But the systematic use of quotes is awkward in a formal logic and subject to abuses as described by Church in footnote 136 of [31]. Secondary typing exploits the typing notation of TT to distinguish between a used predicate name and a mentioned predicate name and is not subject to the abuses cited by Church.
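The use–mention distinction at stake can be illustrated loosely in programming terms; the sketch below is only an analogy with made-up names, not a rendering of ITT's secondary typing: a predicate name is used when it is applied, and mentioned when the name itself is the object under discussion.

```python
# Hypothetical arity-one predicate (toy data).
def is_continent(x):
    return x in {"Africa", "Antarctica", "Asia", "Australia", "Europe"}

# Use: the name occurs in a position where the predicate is applied to a subject.
print(is_continent("Africa"))     # True

# Mention: the name itself is the object being talked about.
name = "is_continent"             # quotation marks turn the name into data
print(len(name))                  # 12, a fact about the name, not about continents
print(globals()[name]("Africa"))  # recovering the used predicate from its mention
```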
If one does not know what one wants oneself, one must first know what the others want.
General Stumm von Bordwehr
§1. Introduction: Moschovakis's approach to intensionality. G. Frege introduced two concepts which are central to modern formal approaches to natural language semantics: the notion of reference (denotation, extension, Bedeutung) and that of sense (intension, Sinn) of proper names. The sense of a proper name is that in which the mode of presentation (of the denotation) is contained. For Frege, proper names include not only expressions such as Peter and Shakespeare but also definite descriptions like the point of intersection of lines l1 and l2, and furthermore sentences, which are names for truth values. Sentences denote the True or the False. The sense of a sentence is the proposition (Gedanke) the sentence expresses. In the tradition of possible world semantics, the proposition a sentence expresses is modelled via the set of worlds in which the sentence is true. This strategy leads to well-known problems with propositional attitudes and other intensional constructions in natural languages, since it predicts, for example, that the sentences in (1) are equivalent.
(1) a. Jacob knows that the square root of four equals two.
b. Jacob knows that any group G is isomorphic to a transformation group.
Even an example as simple as (1) shows that the standard concept of proposition in possible world semantics is not a faithful reconstruction of Frege's notion of sense: both embedded sentences are mathematical truths, hence true in exactly the same (namely, all) worlds, so they express the same proposition, and an agent who knows one is predicted to know the other.
Frege developed his notion of sense for two related but conceptually different reasons. We already introduced the first one by considering propositional attitudes. The problem here is how to develop a general concept which can handle the semantics of Frege's ungerade Rede (indirect discourse). The second problem is how to distinguish a statement like a = a, which is rather uninformative, from the informative statement a = b; or, phrased differently, how to account for the semantic difference between (2-a) and (2-b).
(2) a. Scott is Scott.
b. Scott is the author of Waverley.
Frege's intuitive concept of sense therefore was meant both to model information and provide denotations for intensional constructions.
Moschovakis [12] develops a formal analysis of sense and denotation which is certainly closer to Frege's intentions than possible world semantics is. Moschovakis's motivations are (at least) twofold. The first motivation is to give a rigorous definition of the concept of algorithm [13] and thereby provide the basis for a mathematical theory of algorithms.
Abstract. Intensionality has generally been of more concern to logicians than intentionality, but the latter also merits their interest. This paper, a contribution to the logic of action, involves both concepts: the former implicitly, the latter explicitly.
Informal background. Many logicians and philosophers, the present writer included, are familiar with the term “intension” (with an s) but rather less familiar with the term “intention” (with a t). Outside philosophy and logic the situation is reversed: “intention” is used by all, “intension” by few, if any. A miniature history of the development of the two terms was given by E. J. Lemmon [6]:
The medieval term intentio was originally employed as a translation of the Arabic term ma'na, a form in the soul identified with a meaning or a notion, and meant throughout medieval epistemology a natural sign in the soul. Later the Port Royal Logic distinguished between the comprehension and extension of a general term in something of the way in which Mill later distinguished connotation and denotation: whilst the extension is the set of things to which the term applies, its comprehension is the set of attributes which it implies. Sir William Hamilton replaced “comprehension” by “intension”, faultily spelling the word with an “s” by analogy with “extension”. Since then, the term “intentionality” has gone one way, via Brentano to Chisholm, and the word “intensionality” another via Carnap to Quine.
This elegant quotation is offered for what it is worth. It is of some help in explaining philosophers’ terminology, but unfortunately it leaves unexplained the connexion with everyday usage in the context of action. And it is the latter that is of concern in this paper.
We study a system in which one agent (“the agent”) operates in some environment (“the world”). The world may or may not be dynamic, in the sense that things can happen even if the agent does not do anything; when it is, it is convenient to postulate yet another agent, called “Nature”.
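A minimal sketch of the kind of system just described, under assumptions of my own (discrete time, a single numeric state, and invented action rules), may help fix the picture of an agent acting in a world where Nature can also act:

```python
import random

def agent_act(state):
    """The agent's action: here it simply tries to increase the state by one."""
    return state + 1

def nature_act(state):
    """Nature's move: the world is dynamic, so things may happen on their own."""
    return state + random.choice([-1, 0, 0, 1])   # usually nothing, sometimes a change

state = 0
for step in range(5):
    state = agent_act(state)    # the agent does something
    state = nature_act(state)   # Nature moves whether or not the agent acted
    print(f"after step {step}: state = {state}")
```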