Before you can start to use ENVY effectively, many questions need answering, even if you already understand Smalltalk development. How do I write code using ENVY? How is that code organized? How do I coordinate with other team members? What's a configuration map?
This chapter sorts out the basic concepts in ENVY, and starts you on the path toward being a productive developer. We start out by discussing development as an individual. In Chapter 3 we move on to team development, and Chapter 4 covers more advanced topics.
The discussion here is aimed at a new user who has at least some experience with Smalltalk development. We'll discuss not only the features but their motivations and best practices, illustrating with examples. The chapter starts out simple, but things will get complicated quickly. Be patient; it'll take some time to become familiar with all of these concepts. More experienced users will find the best practices most interesting, if only to find things they disagree with. Those users looking for a shorter, more formal description of ENVY concepts should see Chapter 5.
Questions and Answers
The example project we'll be using is a simple questionnaire application. There are questions, and there are answers. Questions are strung together into groups, and depending on the answers to one question, the follow-up questions may change. For example, suppose we asked the question “How would you rate this book?” If the response was “Poor”, then we don't need to ask any more questions because the user clearly has bad judgment.
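To make the domain concrete, here is a minimal, hypothetical sketch of two classes we might start from (the names and instance variables are our own illustration, not the book's actual code, and the exact class-definition message varies slightly between Smalltalk dialects):

Object subclass: #Question
    instanceVariableNames: 'text answers'
    classVariableNames: ''
    poolDictionaries: ''

Object subclass: #Answer
    instanceVariableNames: 'text followUpQuestions'
    classVariableNames: ''
    poolDictionaries: ''

Question >> followUpQuestionsFor: anAnswer
    "Answer the follow-up questions triggered by anAnswer; an empty
     collection means this line of questioning ends, as with the Poor
     response above."
    ^anAnswer followUpQuestions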
To start with, we'll introduce ENVY/Developer, go over enough of the basic ENVY/Developer architecture to get you started, and describe in some detail how to set up the environment and configure the options available. All readers will want to review the architecture section. The setup sections will be of particular interest to new users and to administrators responsible for the maintenance of an ENVY installation.
Overview
First, let's identify what ENVY/Developer is, where it comes from, and what it's good for. This information will help you understand the architecture and where ENVY's concepts come from.
What Is ENVY/Developer?
ENVY/Developer is a software engineering environment for Smalltalk programming. Specifically, it provides facilities for team programming and delivering significant-sized applications, built on top of the regular Smalltalk development environment. Smalltalk is widely recognized as being very productive for individual developers, but the standard Smalltalk environment was not designed with the idea of collaborative development in mind. ENVY/Developer extends the basic Smalltalk environment to include facilities for team programming and configuration management.
Technically, ENVY is a generic term that refers to a family of products and technologies. These include ENVY/Smalltalk, ENVY/Developer, ENVY/Replicator, and others. However, ENVY/Developer is the most widely used of these products, and it is common to use just the term “ENVY” to refer to ENVY/Developer, or even more specifically to ENVY/Manager. Although this is technically incorrect, it is the common usage and the one we follow in this book.
The administrator of an ENVY system must look after the many necessary tasks to ensure that the system and the development process run smoothly. This chapter describes some of these tasks as well as tools that make life easier for an administrator. We first describe the tasks associated with managing ENVY users, followed by a discussion of moving code between different images, repositories, and development sites. Administration also includes responsibility for the repository – managing its growth and dealing with system upgrades and the needs of remote developers. Many administrative tasks are repetitive, and we include a section on writing scripts using the ENVY API to automate some of these tasks. This is followed by more abstract discussions on organizing an ENVY project and maximizing code re-use. Finally, you'll find some general tips and tricks for keeping an ENVY system running smoothly.
Obviously, this chapter is primarily of interest to administrators. Power users who want to understand more about the details of ENVY will also find it valuable, and toolsmiths may find the process descriptions and tools useful. Also of interest with respect to administration are Chapter 1, “Getting Started,” which covers installation and setup of an ENVY system, and Chapters 3 and 4, “Team Development” and “Advanced Development,” which address some of the code organization and project management issues that affect the administration of an ENVY system.
While ENVY provides an extensive toolset, additional tools and enhancements can make life easier for a developer or administrator. This section presents a few such tools, with example source code and explanations of the techniques involved. For full details, including complete source code for these tools, see the book's Web site. The tools we will look at are as follows:
▪ A script manager, which organizes a collection of scripts inside user fields in the repository.
▪ A three-way differences browser, which streamlines the process of merging two divergent streams together.
▪ A simple project management tool to support the layered architecture described in Chapter 4.
▪ A checkpointing facility, which creates versions of a component out of editions without modifying the underlying editions, tremendously simplifying backup, intermediate baselines, and management of multiple repositories. We think this is one of the most useful and important ENVY add-ons we've seen.
▪ An integration of the Refactoring Browser facilities into the standard ENVY browsers.
▪ A way to rename versions in the ENVY repository without creating new editions and versioning them.
▪ Ways of removing source code from ENVY components so that we can deliver libraries to other developers without exposing all of our internal source code.
The material on concepts and on using these tools is appropriate for novice developers and administrators. The material on implementation techniques is very advanced in some areas and is primarily intended for power users and toolsmiths.
We now embark on a more thorough study of natural deduction, normalization and the structure of normal derivations. We describe a simple normalization strategy w.r.t. a specific set of conversions which transforms every deduction in Ni into a deduction in normal form; moreover, for →Nm we prove that deductions are strongly normalizable, i.e. every sequence of normalization steps terminates in a normal deduction, which is in fact unique.
As in the case of cut elimination, there is a hyperexponential upper bound on the rate of growth under normalization. From a suitable example we also easily obtain a hyperexponential lower bound. This still leaves open the possibility that each theorem might have at least some normal deduction of “modest” length; but this possibility is excluded by an example, due to Orevkov, of a sequence of statements Cn, n ∈ ℕ, with deductions linearly bounded by n, for which the minimum depth of arbitrary normal proofs has a hyperexponential lower bound.
This points to the very important role of indirect proof in mathematical reasoning: without indirect reasoning, exemplified by non-normal proofs, we cannot present proofs of manageable size for the Cn.
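Here and below, “hyperexponential” refers to iterated exponentiation; one common way to make this precise (our notation, which need not be the book's) is

\[
2_0(x) := x, \qquad 2_{n+1}(x) := 2^{2_n(x)},
\]

so that a hyperexponential bound is one of the form $2_n(c)$, a tower of 2's whose height grows with the size (or depth) of the given deduction.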
Conversions and normalization
In this and the next section we shall study the process of normalization for Ni, which corresponds to cut elimination for intuitionistic sequent calculi.
We shall assume, unless stated otherwise, that applications of ⊥i have atomic conclusions in the deductions we consider.
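The prototypical case is the detour conversion for →: an application of →I immediately followed by →E is removed by substituting the deduction of the minor premise for the discharged assumption. In the term notation for deductions this is just β-conversion (we write ≻ for one-step contraction; the book's symbol may differ):

\[
(\lambda x.\, t)\, s \;\succ\; t[x/s]
\]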
In this chapter we study another form of inference, which forms the keystone of logic programming and certain theorem-proving systems. We do not aim at giving a complete introduction to the theory of logic programming; rather, we want to show how resolution is connected with other formalisms and to provide a proof-theoretic road to the completeness theorem for SLD-resolution.
The first three sections deal with propositional resolution, unification and resolution in predicate logic. The last two sections illustrate for Cp and Ip how deductions in a suitably chosen variant of the Gentzen systems can be directly translated into deductions based on resolution, which often permits us to lift strategies for proof search in Gentzen systems to resolution-based systems. The extension of these methods to predicate logic is more or less straightforward.
Introduction to resolution
Propositional linear resolution is a “baby example” of resolution methods, which is not of much interest in itself, but may serve as an introduction to the subject.
We consider programs consisting of finitely many sequents (clauses) of the form Γ ⇒ P, P a propositional variable and Γ a finite multiset of propositional variables (“definite clauses”, “Horn clauses” or “Horn sequents”). A goal or query Γ is a finite (possibly empty) set of propositional variables, and may be identified with the sequent Γ ⇒. [] is the empty goal.
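For example (a toy program of our own, in the notation just introduced): take the clauses ⇒ A, A ⇒ B and A, B ⇒ C, and the goal {C}. Each resolution step replaces an atom of the goal by the antecedent of a clause whose conclusion matches it:

\[
\{C\} \;\rightsquigarrow\; \{A, B\} \;\rightsquigarrow\; \{A\} \;\rightsquigarrow\; [\,]
\]

first resolving C against A, B ⇒ C, then B against A ⇒ B (the two occurrences of A collapse, goals being sets), and finally A against ⇒ A; reaching the empty goal [] means the query C succeeds.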
The “applications of cut elimination” in the title of this chapter may perhaps be described more appropriately as “applications of cutfree systems”, since the applications are obtained by analyzing the structure of cutfree proofs; and in order to prove that the various cutfree systems are adequate for our standard logics all we need to know is that these systems are closed under Cut (that is to say, Cut is an admissible rule). Nevertheless there are good reasons to be interested in the process of cut elimination, as opposed to semantical proofs of closure under Cut. True, the usual semantical proofs establish not only closure under Cut, but also completeness for the semantics considered. On the other hand, the proof of cut elimination for G3c is at least as efficient as the semantical proof (although G3cp permits a very fast semantical proof of closure under Cut), and in the case of logics with a more complicated semantics (such as intuitionistic logic, and the modal logic S4 in chapter 9) often more efficient. For linear logic in section 9.3, so far no semantical proof of closure under Cut has been published. Other reasons for being interested in the process of cut elimination will be found in certain results in sections 5.1 and 6.9, which describe bounds on the increase in size of deductions under cut elimination and normalization respectively.
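For reference, in a context-sharing formulation (the precise way the contexts of the premises are combined varies between the calculi discussed below) the Cut rule reads

\[
\frac{\Gamma \Rightarrow \Delta, A \qquad A, \Gamma \Rightarrow \Delta}{\Gamma \Rightarrow \Delta}\ \mathrm{Cut}
\]

and admissibility of Cut means that every sequent derivable with its help is already derivable without it.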
Gentzen [1935] introduced his calculi LK, LJ as formalisms more amenable to metamathematical treatment than natural deduction. For these systems he developed the technique of cut elimination. Even if nowadays normalization as an “equivalent” technique is widely used, there are still many reasons to study calculi in the style of LK and LJ (henceforth to be called Gentzen calculi or Gentzen systems, or simply G-systems):
Where normal natural deductions are characterized by a restriction on the form of the proof – more precisely, a restriction on the order in which certain rules may succeed each other – cutfree Gentzen systems are simply characterized by the absence of the Cut rule.
Certain results are more easily obtained for cutfree proofs in G-systems than for normal proofs in N-systems.
The treatment of classical logic in Gentzen systems is more elegant than in N-systems.
The Gentzen systems for M, I and C have many variants. There is no reason for the reader to get confused by this fact. Firstly, we wish to stress that in dealing with Gentzen systems, no particular variant is to be preferred over all the others; one should choose a variant suited to the purpose at hand. Secondly, there is some method in the apparent confusion.
As our basic system we present in the first section below a slightly modified form of Gentzen's original calculi LJ and LK for intuitionistic and classical logic respectively: the G1-calculi. In these calculi the roles of the logical rules and the so-called structural rules are kept distinct.
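The structural rules in question are weakening and contraction (shown here for the antecedent; the classical calculus has symmetric rules for the succedent), which rearrange the contexts without touching any logical symbol:

\[
\frac{\Gamma \Rightarrow \Delta}{A, \Gamma \Rightarrow \Delta}\ \mathrm{LW}
\qquad\qquad
\frac{A, A, \Gamma \Rightarrow \Delta}{A, \Gamma \Rightarrow \Delta}\ \mathrm{LC}
\]

(Exchange also counts as structural when antecedents and succedents are taken to be sequences rather than multisets.)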
Until we come to chapter 9, we shall concentrate on our three standard logics: classical logic C, intuitionistic logic I and minimal logic M. The informal interpretation (semantics) for C needs no explanation here. The logic I was originally motivated by L. E. J. Brouwer's philosophy of mathematics (more information in Troelstra and van Dalen [1988, chapter 1]); the informal interpretation of the intuitionistic logical operators, in terms of the primitive notions of “construction” and “constructive proof”, is known as the “Brouwer–Heyting–Kolmogorov interpretation” (see 1.3.1, 2.5.1). Minimal logic M is a minor variant of I, obtained by rejection of the principle “from a falsehood follows whatever you like” (Latin: “ex falso sequitur quodlibet”, hence the principle is often elliptically referred to as “ex falso”), so that, in M, the logical symbol for falsehood ⊥ behaves like some unprovable propositional constant, not playing a role in the axioms or rules.
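In natural deduction terms the difference is a single rule: I has the ex falso rule ⊥i shown below, while M simply omits it, treating ⊥ as just another (unprovable) proposition:

\[
\frac{\bot}{A}\ \bot_i
\]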
This chapter opens with a precise description of N-systems for the full first-order language with proofs in the form of deduction trees, assumptions appearing at top nodes. After that we present in detail the corresponding term system for the intuitionistic N-system, an extension of simple type theory. Once a precise formalism has been specified, we are ready for a section on the Gödel–Gentzen embedding of classical logic into minimal logic. This section gives some insight into the relations between C on the one hand and M, I on the other hand.
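For orientation, the translation A ↦ A^g underlying the embedding is usually given by clauses of roughly the following form (the book's official definition may differ in detail, e.g. in the treatment of atoms):

\[
\begin{aligned}
\bot^g &:= \bot, & P^g &:= \neg\neg P \quad (P \text{ atomic}),\\
(A \wedge B)^g &:= A^g \wedge B^g, & (A \to B)^g &:= A^g \to B^g,\\
(A \vee B)^g &:= \neg(\neg A^g \wedge \neg B^g), & (\forall x\, A)^g &:= \forall x\, A^g,\\
(\exists x\, A)^g &:= \neg \forall x\, \neg A^g,
\end{aligned}
\]

and the embedding theorem states that A is derivable in C if and only if A^g is derivable in M.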
This chapter is devoted to two topics: the rate of growth of deductions under the process of cut elimination, and permutation of rules.
It is not hard to show that there is a hyperexponential upper bound on the rate of growth of the depth of deductions under cut elimination. For propositional logic much better bounds are possible, using a clever strategy for cut elimination. This contrasts with the situation for normalization in the case of N-systems (chapter 6), where propositional logic is as bad as predicate logic in this respect.
In contrast to the case of normalization for N-systems, it is not easy to extract direct computational content from the process of cut elimination for G-systems, since as a rule the process is non-deterministic, that is to say the final result is not a uniquely defined “value”. Recent proof-theoretical studies concerning linear logic (9.3) lead to a more or less satisfactory analysis of the computational content in cut elimination for C (and I); in these studies linear logic serves to impose a “fine structure” on sequent deductions in classical and linear logic (some references are in 9.6.5).
We also show that in a GS-system for Cp with Cut there are sequences of provable sequents with proofs increasing linearly in size, while the sizes of their cutfree proofs have exponentially increasing lower bounds.
These results indicate that “indirect proof”, i.e. the use of deductions involving some form of Cut, plays an essential role in formalized versions of proofs of theorems from mathematical practice, since otherwise the length of proofs would readily become unmanageable.
For this chapter preliminary knowledge of some basic notions of category theory (as may be found, for example, in Mac Lane [1971], Blyth [1986], McLarty [1992], Poigné [1992]) will facilitate understanding, but is not necessary, since our treatment is self-contained. Familiarity with chapter 6 is assumed.
In this chapter we introduce another type of formal system, inspired by notions from category theory. The proofs in formalisms of this type may be denoted by terms; the introduction of a suitable equivalence relation between these terms makes it possible to interpret them as arrows in a suitable category.
In particular, we shall consider a system for minimal →∧⊤-logic connected with a special cartesian closed category, namely the free cartesian closed category over a countable discrete graph, to be denoted by CCC(PV). In this category we have a decision problem: when are two arrows from A to B the same?
This problem will be solved by establishing a correspondence between the arrows of CCC(PV) and the terms of the extensional typed lambda calculus. For this calculus we can prove strong normalization, and the decision problem is thereby reduced to computing and comparing normal forms of terms of the lambda calculus.
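In outline (stated here somewhat informally), the correspondence is the familiar one between cartesian closed categories and typed lambda calculi: objects of CCC(PV) correspond to types built from the propositional variables, an arrow from A to B corresponds to an equivalence class of terms x:A ⊢ t:B, composition to substitution, and identities to variables:

\[
\begin{aligned}
\top &\longleftrightarrow \text{terminal object}, &
A \wedge B &\longleftrightarrow A \times B, &
A \to B &\longleftrightarrow B^A,\\
f : A \to B &\longleftrightarrow x{:}A \vdash t : B, &
g \circ f &\longleftrightarrow \text{substitution}, &
\mathrm{id}_A &\longleftrightarrow x{:}A \vdash x : A
\end{aligned}
\]

Two arrows are then equal precisely when the corresponding terms have the same normal form in the extensional calculus.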
Another interesting application of this correspondence will be a proof of a certain “coherence theorem” for CCC(PV). (A coherence theorem is a theorem of the form: “between two objects satisfying certain conditions there is at most one arrow”.)