In this lecture we investigate a distributed, physical system that is simple enough to explore in some detail but complex enough to illustrate some of the main points of the theory. Readers may work through this lecture as a way of getting a feeling for the motivations behind the mathematics that follow or skip ahead to the theory and come back to this example later. Readers who decide to work through this lecture should accept various assertions on faith, because they are justified by the work that follows.
Informal Description of the Circuit
The example consists of a light circuit LC. The circuit we have in mind is drawn from the home of one of the authors. It consists of a light bulb B connected to two switches, call them SW1 and SW2, one upstairs, the other downstairs. The downstairs switch is a simple toggle switch. If the bulb is on, then flipping switch SW2 turns it off; if it is off, then flipping SW2 turns it on. The upstairs switch SW1 is like SW2 except that it has a slider SL controlling the brightness of the bulb. Full up and the bulb is at maximum brightness (if lit at all); full down and the bulb is at twenty-five percent brightness, if lit, with the change in brightness being linear in between.
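The behavior just described can be captured in a few lines of code. One detail is left open by the description: whether the bulb is lit when the two switch positions agree or when they differ. The sketch below assumes a standard two-way switching circuit in which the bulb is lit exactly when the toggles disagree; all names are ours.

```python
def brightness(sw1_up: bool, sw2_up: bool, slider: float) -> float:
    """Return bulb brightness in [0, 1]; 0.0 means the bulb is off.

    We assume the bulb is lit exactly when the two toggle positions
    differ, so flipping either switch toggles the light. The slider SL
    (0.0 = full down, 1.0 = full up) scales brightness linearly from
    twenty-five percent to maximum, as in the text.
    """
    lit = sw1_up != sw2_up          # flipping either switch toggles the bulb
    if not lit:
        return 0.0
    return 0.25 + 0.75 * slider     # full down = 25%, full up = 100%
```

Flipping either argument toggles the result between zero and a slider-dependent positive value, matching the informal description of SW1 and SW2.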
The three lectures in the first part of the book present an informal overview of a theory whose technical details will be developed and applied later in the book. In this lecture we draw the reader's attention to the problems that motivated the development of the theory. In Lecture 2 we outline our proposed solution. A detailed example is worked out in Lecture 3.
In the course of the first two lectures we draw attention to four principles of information flow. These are the cornerstones of our theory. We do not attempt to present a philosophical argument for them. Rather, we illustrate and provide circumstantial evidence for the principles and then proceed to erect a theory on them. When the theory is erected, we shall be in a better position to judge them. Until that job is done, we present the principles as a means of understanding the mathematical model to be presented – not as an analysis of all and sundry present day intuitions about information and information flow.
The Worldly Commerce of Information
In recent years, information has become all the rage. The Utopian vision of an information society has moved from the pages of science fiction novels to political manifestos. As the millennium approaches fast on the information highway, the ever-increasing speed and scope of communication networks are predicted to bring sweeping changes in the structure of the global economy.
In Lecture 8 we saw how to construct state spaces from classifications and vice versa. In Lecture 12 we saw how to associate a canonical logic Log(S) with any state space S. In this lecture we study the relation between logics and state spaces in more detail. Our aim is to try to understand how the phenomena of incompleteness and unsoundness get reflected in the state-space framework. We will put our analysis to work by exploring the problem of nonmonotonicity in Lecture 19.
Subspaces of State Spaces
Our first goal is to show that there is a natural correspondence between the subspaces of a state space S and logics on the event classification of S. We develop this correspondence in the next few results.
Definition 16.1. Let S be a state space. An S-logic is a logic ℒ on the event classification Evt(S) such that Log(S) ⊑ ℒ.
The basic intuition here is that an S-logic should build in at least the theory implicit in the state-space structure of S. We call a state σ of S ℒ-consistent if {σ} ⊬ℒ, and we let Ωℒ be the set of ℒ-inconsistent states.
Proposition 16.2. If S is a state space and ℒ is an S-logic, then ⊢ℒ Ωℒ. Indeed, Ωℒ is the smallest set Δ of states such that ⊢ℒ Δ.
Proof. To prove the first claim, let (Γ, Δ) be any partition of the types of Evt(S) with Ωℒ ⊆ Δ. We need to see that Γ ⊢ℒ Δ.
In this lecture, we prepare the way for the notion of local logic by studying the ways that classifications give rise to “regular theories.” These theories can be seen as an idealized version of the scientific laws supported by a given closed system. The adjective “regular” refers to the purely structural properties that any such theory must satisfy. Any theory with these properties can be obtained from a suitable classification. At the end of the lecture, we will return to the question of how different scientific theories, based on different models of the phenomena under study, can be seen as part of a common theory. We will see conditions under which this obtains.
Theories
One way to think about information flow in a distributed system is in terms of a “theory” of the system, that is, a set of known laws that describe the system. Usually, these laws are expressed in terms of a set of equations or sentences of some scientific language. In our framework, these expressions are modeled as the types of some classification. However, we will not model a theory by means of a set of types. Because we are not assuming that our types are closed under the Boolean operations, as they are not in many examples, we get a more adequate notion of theory by following Gentzen and using the notion of a sequent.
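The sequent-based notion of theory can be made concrete with a minimal sketch; the representation choices below are ours. A classification is modeled as a relation between tokens and types, and a token a satisfies a sequent ⟨Γ, Δ⟩ when a being of every type in Γ implies that a is of some type in Δ.

```python
def satisfies(models, a, gamma, delta):
    """Does token a satisfy the sequent <gamma, delta>?

    a satisfies <gamma, delta> if, whenever a is of every type in
    gamma, a is of some type in delta.
    """
    if all(models(a, t) for t in gamma):
        return any(models(a, t) for t in delta)
    return True

def holds(models, tokens, gamma, delta):
    """A sequent holds of a classification if every token satisfies it."""
    return all(satisfies(models, a, gamma, delta) for a in tokens)

# Toy example: classify the integers 0..9 by the types "even" and "odd".
parity = lambda a, t: (a % 2 == 0) == (t == "even")
holds(parity, range(10), set(), {"even", "odd"})   # the types are exhaustive
holds(parity, range(10), {"even", "odd"}, set())   # the types are exclusive
```

Note that the types here are not closed under Boolean operations, yet the sequent ⟨∅, {even, odd}⟩ still expresses exhaustiveness and ⟨{even, odd}, ∅⟩ exclusiveness, which is the advantage of the Gentzen-style formulation.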
With the groundwork laid in the preceding lectures, we come to the central material of the book, the idea of a local logic, which will take up the remainder of Part II. In this lecture we introduce local logics and proceed in the lectures that follow to show how local logics are related to channels and so to information flow.
If one is reasoning about a distributed system with components of very different kinds, the components will typically be classified in quite different ways, that is, with quite different types. Along with these different types, it is natural to think of each of the components as having its own logic, expressed in its own system of types. In this way, the distributed system gives rise to a distributed system of local logics. The interactions of the local logics reflect the behavior of the system as a whole.
In order to capture this idea, we introduce and study the notions of “local logic” and “local logic infomorphism” in this lecture. The main notions are introduced in the first two sections and studied throughout this lecture. The important idea of moving a logic along an infomorphism is studied in Lecture 13. In Lecture 14, we show that every local logic can be represented in terms of moving natural logics along binary channels. The idea of moving logics is put to another use in Lecture 15 to define the distributed logic of an information system.
In Lecture 7 we discussed the relationship between classifications and the Boolean operations. In this lecture, we study the corresponding relationship for theories. In particular, we discuss Boolean operations that take theories to theories, as well as what it would mean for operations to be Boolean operations in the context of a particular theory. In this way, we begin to see how the traditional rules of inference emerge from an informational perspective. The topic is a natural one but it is not central to the main development so this lecture could be skipped.
Boolean Operations on Theories
Given a regular theory T = (Σ, ⊢), one may define a consequence relation on the set pow(Σ) of subsets of Σ in one of two natural ways, depending on whether one thinks of the sets of types disjunctively or conjunctively. This produces two new theories, ∨T and ∧T, respectively.
These operations should fit with the corresponding power operations ∨A and ∧A on classifications A; we want ∨Th(A) to be the same as the theory Th(∨A), for example. Thus, to motivate our definitions, we begin by investigating the relationship of the theory Th(A) of a classification to the theories Th(∨A) and Th(∧A) of its two power classifications.
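The two readings of a set of types can be sketched directly. Assuming the power classifications of Lecture 7, a token satisfies a set X of types disjunctively if it satisfies some member of X, and conjunctively if it satisfies every member; the function names below are ours.

```python
def models_disj(models, a, X):
    """Disjunctive reading: a satisfies X if a is of SOME type in X."""
    return any(models(a, t) for t in X)

def models_conj(models, a, X):
    """Conjunctive reading: a satisfies X if a is of EVERY type in X."""
    return all(models(a, t) for t in X)

# Toy example: integers classified by the types "even" and "odd".
parity = lambda a, t: (a % 2 == 0) == (t == "even")
models_disj(parity, 3, {"even", "odd"})   # True: 3 is odd
models_conj(parity, 3, {"even", "odd"})   # False: 3 is not even
```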
Definition 11.1. Given a set Γ of subsets of Σ, a set Y is a choice set on Γ if X ∩ Y ≠ Ø for each X ∈ Γ.
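Definition 11.1 can be operationalized directly; the representation below (frozensets of integers) is ours, chosen only for illustration.

```python
def is_choice_set(y, gamma):
    """Y is a choice set on Gamma if Y meets every member of Gamma,
    i.e., X ∩ Y is nonempty for each X in Gamma."""
    return all(x & y for x in gamma)

gamma = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4})]
is_choice_set(frozenset({2, 4}), gamma)   # True: meets all three members
is_choice_set(frozenset({1, 3}), gamma)   # False: misses {4}
```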
Information and talk of information is everywhere nowadays. Computers are thought of as information technology. Each living thing has a structure determined by information encoded in its DNA. Governments and companies spend vast fortunes to acquire information. People go to prison for the illicit use of information. In spite of all this, there is no accepted science of information. What is information? How is it possible for one thing to carry information about another? This book proposes answers to these questions.
But why does information matter, why is it so important? An obvious answer motivates the direction our theory takes. Living creatures rely on the regularity of their environment for almost everything they do. Successful perception, locomotion, reasoning, and planning all depend on the existence of a stable relationship between the agents and the world around them, near and far. The importance of regularity underlies the view of agents as information processors. The ability to gather information about parts of the world, often remote in time and space, and to use that information to plan and act successfully, depends on the existence of regularities. If the world were a completely chaotic, unpredictable affair, there would be no information to process.
Still, the place of information in the natural world of biological and physical systems is far from clear. A major problem is the lack of a general theory of regularity.
The view of information put forward here associates information flow with distributed systems. Such a system A, we recall, consists of an indexed family cla(A) = {Ai}i∈I of classifications together with a set inf(A) of infomorphisms, all of which have both a domain and a codomain in cla(A). With any such system we want to associate a systemwide logic Log(A) on the sum ∑i∈I Ai of the classifications in the system. The constraints of Log(A) should reflect the lawlike regularities represented by the system as a whole. The normal tokens of Log(A) model those indexed families of tokens to which the constraints are guaranteed to apply, by virtue of the structure of the system.
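The sum construction can be sketched concretely. We assume, as in the sum of classifications from earlier lectures, that a token of the sum is an I-indexed family of component tokens, that a type is a tagged copy (i, α) of a component type, and that a family satisfies (i, α) exactly when its i-th coordinate satisfies α in Ai; the code names are ours.

```python
def sum_models(component_models):
    """Build the classification relation of the sum of a family of
    classifications; component_models maps each index i to the
    relation models_i(token, type) of A_i."""
    def models(family, tagged_type):
        i, alpha = tagged_type
        return component_models[i](family[i], alpha)
    return models

# Two toy components: integers classified by parity, strings by length.
comps = {
    0: lambda a, t: (a % 2 == 0) == (t == "even"),
    1: lambda a, t: (len(a) > 3) == (t == "long"),
}
models = sum_models(comps)
models({0: 4, 1: "hi"}, (0, "even"))   # True: coordinate 0 is even
models({0: 4, 1: "hi"}, (1, "long"))   # False: "hi" is short
```

A systemwide logic lives on this sum classification, so its constraints can mention tagged types from any of the components at once.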
If we consider a given component classification Ai of A, there are at least two sensible logics on Ai that we might want to incorporate into Log(A), the a priori logic AP(Ai) and the natural logic Log(Ai). The former assumes we are given no information about the constraints of Ai except for the trivial constraints. The latter assumes perfect information about the constraints of Ai. There is typically quite a difference. But really these are just two extremes in our ordering of sound local logics on Ai. After all, in dealing with a distributed system, we may have not just the component classifications and their infomorphisms, but also local logics on the component classifications. We want the systemwide logic to incorporate these local logics.
State-space models are one of the most prevalent tools in science and applied mathematics. In this lecture, we show how state spaces are related to classifications and how systems of state spaces are related to information channels. As a result, we will discover that state spaces provide a rich source of information channels. In later lectures, we will exploit the relationship between state spaces and classifications in our study of local logics.
State Spaces and Projections
Definition 8.1. A state space is a classification S for which each token is of exactly one type. The types of a state space are called states, and we say that a is in state σ if a ⊨s σ. The state space S is complete if every state is the state of some token.
Example 8.2. In Example 4.5 we pointed out that for any function f : A → B, there is a classification whose types are the elements of B, whose tokens are the elements of A, and in which a ⊨ b if and only if b = f(a). This classification is a state space, and every state space arises in this way, so another way to put the definition is to say that a state space is a classification S in which the classification relation ⊨S is a total function. For this reason, we write state_S(a) for the state σ of a in S.
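Example 8.2 is easy to sketch in code; the names below are ours. Any function f : A → B induces a state space whose tokens are the elements of A, whose states are the elements of B, and in which a ⊨ b if and only if b = f(a); since the relation is a total function, the state of a token a is just f(a).

```python
def state_space_of(f):
    """Return the classification relation and the state function of the
    state space induced by a function f : A -> B."""
    models = lambda a, b: f(a) == b   # a |= b  iff  b = f(a)
    return models, f                  # f itself plays the role of state_S

# Toy example: integers in states 0 (even) and 1 (odd).
models, state = state_space_of(lambda n: n % 2)
models(4, 0)   # True: token 4 is in state 0
state(7)       # 1: each token is in exactly one state
```

Completeness, in the sense of Definition 8.1, then amounts to f being surjective onto the set of states.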