Empirical evaluation has for many years been used to validate theories in other scientific disciplines. One of the best-known early examples occurred when Galileo set out to show that the rate of descent of falling objects is independent of their mass, which would disprove Aristotle's theory that the rate of descent is directly proportional to weight. To test his hypothesis, Galileo is said to have dropped two balls of the same material but different masses from the top of the Tower of Pisa, and the empirical evidence collected confirmed his theory. What this story demonstrates is the importance of empirical validation in verifying or disproving theories and hypotheses. The purpose of this chapter is to emphasise both the importance and the difficulties of empirical evaluation in the domain of SPLE.
In addition to physics, experimentation plays a vital role in other disciplines. For example, medicine as a discipline did not really exist before experimentation was applied to this area (Basili, 1996). Instead, remedies and cures to illnesses were passed around based on hearsay, or from generation to generation. When experimentation was applied to medicine real progress was observed, with extra resources diverted to areas showing promise. Applying experimentation can speed up the progress of a discipline by quickly eliminating futile approaches and incorrect theories. Furthermore, experimentation can potentially open up new areas of research by uncovering unexpected results.
In software development, we have to make choices and decisions, and these depend on obtaining answers to critical questions such as the following:
How should an important decision be made when conflicting strategic goals and stakeholders’ desires or quality attributes must be considered?
How can stakeholders be assured that the decision has been made in a sound, rational and fair process that withstands the rigour of an aspect-oriented analysis and design, or a software product line, for example?
In software product line (SPL) development, the answers to these questions are critical, because they involve modelling and implementing common and variable requirements that can be composed and interact in different ways, and because they require decisions that can impact several products at the same time. For example, we may want to know which requirements are in conflict and which features are negatively affected, considering different configurations of the product line, in order to choose the best architecture for designing and implementing the product line and to decide which mandatory or optional features should have implementation priority. Software engineers therefore need support for making better, informed decisions, in the form of a systematic process for ranking a set of alternatives based on a set of criteria. In requirements engineering, for instance, it is useful to identify conflicting requirements over which negotiations must be carried out and for which trade-offs need to be established (Rashid et al., 2003). A typical concrete use is to offer a ranking of non-functional requirements (NFRs) based on stakeholders’ wishes. This helps to establish early trade-offs between requirements, thereby supporting negotiation and subsequent decision-making among stakeholders. As discussed in Moreira et al. (2005a), having performed a trade-off analysis on the requirements, we are better informed about each important quality attribute the system should satisfy before making any architectural choices.
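As an illustration of such a systematic ranking process, here is a minimal weighted-sum sketch in Python. It is a toy under stated assumptions, not the AMPLE trade-off method: the NFR names, criteria and weights are all hypothetical, and real approaches (e.g. AHP-style analyses) are considerably richer.

```python
# Hypothetical weighted-sum ranking of non-functional requirements (NFRs).
# All names and numbers below are illustrative assumptions.

def rank_nfrs(scores, weights):
    """Rank alternatives by the weighted sum of their per-criterion scores.

    scores:  {alternative: {criterion: value in [0, 1]}}
    weights: {criterion: stakeholder-assigned weight, summing to 1}
    """
    totals = {
        alt: sum(weights[c] * v for c, v in crit.items())
        for alt, crit in scores.items()
    }
    # Highest total first: the top entry is the NFR to prioritise in trade-offs.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical stakeholder input: two NFRs scored against two criteria.
scores = {
    "security":    {"cost": 0.4, "user_value": 0.9},
    "performance": {"cost": 0.7, "user_value": 0.6},
}
weights = {"cost": 0.3, "user_value": 0.7}

ranking = rank_nfrs(scores, weights)
```

The resulting ordering gives stakeholders a concrete starting point for negotiation: a disputed weight can be changed and the ranking recomputed immediately.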
Requirements engineering in software product line engineering
Software product line engineering (SPLE) (Clements & Northrop, 2001) has been recognised as one of the foremost techniques for developing reusable and maintainable software within system families (Parnas, 2001a, 2001b). We focus on a feature-oriented form of SPLE, in which the key concern is to break the problem domain down into features: system properties or functionalities that are relevant to some stakeholders.
Domain and application engineering
Feature-oriented SPLE can be usefully broken down into two core activities: domain engineering and application engineering. The key task of domain engineering is to model the domain itself in order to lay the foundation for deriving individual products, which is the remit of application engineering. The work presented in this chapter belongs to the realm of domain engineering; we seek to aid the requirements engineer in analysing, understanding and modelling the domain by providing a framework for the automated construction of feature models from natural language requirements documents.
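The break-down into features can be made concrete with a small sketch. The following is a minimal, hypothetical feature model in Python; it is not the representation used by ArborCraft or any AMPLE tool, and the Feature class, the feature names and the validity rule are all illustrative assumptions.

```python
# A toy feature tree with mandatory and optional features, and a validity
# check for product configurations. Everything here is illustrative.
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A node in a feature tree: a named system property or functionality."""
    name: str
    mandatory: bool = True
    children: list = field(default_factory=list)

def valid(root, selection):
    """Check that a set of selected feature names respects the tree:
    the root is selected, every mandatory child of a selected feature
    is selected, and no feature is selected without its parent."""
    def check(feature, parent_selected):
        selected = feature.name in selection
        if selected and not parent_selected:
            return False  # a feature cannot appear without its parent
        if parent_selected and feature.mandatory and not selected:
            return False  # a mandatory feature is missing
        return all(check(child, selected) for child in feature.children)
    return root.name in selection and check(root, True)

# Hypothetical domain: a text-editor product line.
model = Feature("editor", children=[
    Feature("save"),                         # mandatory in every product
    Feature("spellcheck", mandatory=False),  # optional variation point
])
```

In these terms, domain engineering fixes `model`, while application engineering selects a valid configuration from it for each individual product.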
In the previous chapters of this book, it has been established that software product lines (SPLs) have become one of the most popular means of providing a flexible product portfolio while achieving a short time-to-market. By reusing overlapping functionality, production time and development cost can be significantly reduced for families of products (Pohl et al., 2005). But this increased flexibility comes at a price: software developers face a considerable increase in complexity when designing the software product line.
Whereas the development of traditional software systems already requires substantial amounts of information, the development of SPLs involves even larger quantities. As an SPL supports a range of products, detailed information on all of these products is required for SPL engineering. In addition, information is required on how the variability among these products is to be supported, what the design of the SPL infrastructure will look like and how the SPL will be aligned with the market.
This chapter describes our approach for mapping the requirements processed by AMPLE techniques and tools, such as ArborCraft (Chapter 3), VML4RE (Chapter 5) and HAM (Chapter 5), to a product line architecture. In contrast to the implementation-related Chapter 6, which focuses on CaesarJ for implementing configurable software components, this chapter concentrates on a model-driven approach based on variability modelling, domain-specific languages (DSLs), architecture blueprints and templates, and libraries of artefacts (arbitrary software components, configuration and deployment data, etc.).
Model-driven engineering (MDE) is an approach that captures the key features of a system in models, and develops and refines these models throughout development until code is finally generated. Models are defined at different conceptual levels, and are combined and transformed from higher levels of abstraction to more concrete ones. By integrating MDE into software product line engineering (SPLE), solution space artefacts can be systematically derived from problem space concepts, leading to a higher degree of automation in application engineering and saving cost and time. Models abstract the problem and facilitate rigorous descriptions using terms and concepts familiar to people who work in the problem domain, rather than terms familiar only to IT experts. In particular, essential improvements can be achieved by using DSLs to represent the system design with the terminology and abstractions of the problem domain, which makes it easier for problem domain experts to understand.
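A toy sketch may help fix the idea of deriving solution space artefacts from problem space concepts. In the Python fragment below, a plain dict stands in for a DSL instance written in domain terms, and a string template stands in for an architecture blueprint; filling the template is the model-to-text transformation. All names are illustrative assumptions, not part of any AMPLE tool.

```python
# Minimal stand-in for a model-to-text transformation step in MDE.
from string import Template

# Problem-space model: a stand-in for a DSL instance, written in domain terms.
model = {"product": "BasicEditor", "features": ["save", "spellcheck"]}

# Solution-space blueprint: a template with holes for model data.
blueprint = Template(
    "class ${product}:\n"
    "    FEATURES = ${features}\n"
)

# The 'transformation': fill the blueprint from the model to obtain an artefact.
code = blueprint.substitute(product=model["product"],
                            features=model["features"])
```

Real MDE toolchains replace the dict with a typed metamodel and the template with a transformation language, but the direction of derivation is the same: from domain concepts towards code.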
Traceability practices should help stakeholders with the understanding, capturing, tracking and verification of software artefacts and their relationships. A proper realisation of traceability is a necessary system characteristic, as it supports software management, software evolution, verification and validation. It is fundamental for the definition of the results of many kinds of analysis of software models, such as change impact analysis, variability analysis and separation of concerns analysis.
In software product line engineering (SPLE), traceability is a key practice. It is necessary to support variability management and to keep the goals and the structure of the product line definition consistent, updated and valuable. Traceability information is rarely considered in an isolated way. It is captured, updated and analysed from multiple perspectives, such as domain engineering and application engineering.
The book currently in your hands touches on a wide range of topics in the area of software product line engineering and offers unique solutions to particular problems appearing across the whole development cycle. We show how to semi-automatically derive feature models from requirements documents, dive deeper into modelling variability with a domain-specific language tailored for this purpose, and propose methodologies for developing items in a product-driven as well as a solution-driven style. We also introduce aspects into core asset development, track changes and decisions in the development process, and deal with potential conflicts and uncertainties. One thread, however, runs as a common theme through all chapters of this book: all the techniques and methodologies are centred around what we will call conventional software product line engineering. That is, a certain domain is analysed, and a number of components are produced, tested and later assembled to form actual products, much as in a design–develop–compile–assemble style. It is easy to imagine how software running on modern smart phones, for instance, is developed this way, and other examples following this style are easy to find. However, the software landscape in which we live has changed a lot in recent years. Software is no longer delivered only by compiling source code, burning the final application onto a CD-ROM and shipping it to a customer. The Internet has opened the door to different styles of product delivery and consumption: whole applications can be launched by clicking a single link, and a plethora of web services stands ready to deliver a range of functionality never seen before.
The way of creating an application by consuming and composing services offered by different providers changes the style of application development and, therefore, also affects what we earlier called ‘traditional’ software product line engineering. For this reason new challenges will arise for SPLE that cannot be tackled by traditional solutions.
Many systems of quantified modal logic cannot be characterised by Kripke's well-known possible worlds semantic analysis. This book shows how they can be characterised by a more general 'admissible semantics', using models in which there is a restriction on which sets of worlds count as propositions. This requires a new interpretation of quantifiers that takes into account the admissibility of propositions. The author sheds new light on the celebrated Barcan Formula, whose role becomes that of legitimising the Kripkean interpretation of quantification. The theory is worked out for systems with quantifiers ranging over actual objects, and over all possibilia, and for logics with existence and identity predicates and definite descriptions. The final chapter develops a new admissible 'cover semantics' for propositional and quantified relevant logic, adapting ideas from the Kripke–Joyal semantics for intuitionistic logic in topos theory. This book is for mathematical or philosophical logicians, computer scientists and linguists.
Software product lines provide a systematic means of managing variability in a suite of products. They have many benefits, but three major barriers can prevent them from reaching their full potential. First, there is the challenge of scale: a large number of variants may exist in a product line context, and the number of interrelationships and dependencies can rise exponentially. Second, variations tend to be systemic in nature, in that they affect the whole architecture of the software product line. Third, software product lines often serve different business contexts, each with its own intricacies and complexities. The AMPLE (http://www.ample-project.net/) approach tackles these three challenges by combining advances in aspect-oriented software development and model-driven engineering. The full suite of methods and tools that constitutes this approach is discussed in detail in this edited volume and illustrated using three real-world industrial case studies.
Plato was not present on the day that Socrates drank hemlock in the jail at Athens and died. Phædo, who was, later related that day's conversation to Echecrates in the presence of a gathering of Pythagorean philosophers at Phlius. Once again, Plato was not around to hear what was said. Yet he wrote a dialog, “Phædo,” dramatizing Phædo's retelling of the occasion of Socrates' final words and death. In it, Plato presents to us Phædo and Echecrates' conversation, though what these two actually said he didn't hear. In Plato's account of that conversation, Phædo describes to Echecrates Socrates' conversation with the Thebian Pythagoreans, Simmias and Cebes, though by his own account he only witnessed that conversation and refrained from contributing to it. Plato even has Phædo explain his absence: “Plato,” he tells Echecrates, “I believe, was ill.”
We look to Socrates' death from a distance. Not only by time, but by this doubly embedded narrative, we feel removed from the event. But this same distance draws us close to Socrates' thought. Neither Simmias nor Cebes understood Socrates' words as well as Phædo did by the time he was asked to repeat them. Even Phædo failed to notice crucial details that Plato points out. Had we overheard Socrates' conversation, we would not have understood it. We look to Socrates' death from a distance, but to understand Socrates, we don't need to access him—we need Plato.
Abstract. This paper discusses Tennenbaum's Theorem, which states that there are no recursive nonstandard models of Peano Arithmetic, in its original context of models of arithmetic. We focus on three separate areas: the historical background to the theorem; an understanding of the theorem and its relationship with the Gödel–Rosser Theorem; and extensions of Tennenbaum's theorem to diophantine problems in models of arithmetic, especially problems concerning which diophantine equations have roots in some model of a given theory of arithmetic.
§1. Some historical background. The theorem known as “Tennenbaum's Theorem” was given by Stanley Tennenbaum in a paper at the April meeting in Monterey, California, 1959, and published as a one-page abstract in the Notices of the American Mathematical Society [28]. It is easily stated as saying that there is no nonstandard recursive model of Peano Arithmetic, and is an attractive and rightly often-quoted result.
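Stated formally (this is a standard textbook formulation rather than the wording of the original abstract, with the model's operations taken as functions on $\mathbb{N}$):

```latex
\begin{theorem}[Tennenbaum, 1959]
Let $\mathcal{M} = (M, \oplus, \otimes, \ldots) \models \mathrm{PA}$ be a
nonstandard model whose domain is identified with $\mathbb{N}$. Then neither
the addition $\oplus$ nor the multiplication $\otimes$ of $\mathcal{M}$ is a
recursive function; in particular, $\mathcal{M}$ is not a recursive model.
\end{theorem}
```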
This paper celebrates Tennenbaum's Theorem; we state the result fully and give a proof of it and other related results later. This introduction is in the main historical. The goals of the latter parts of this paper are: to set out the connections between Tennenbaum's Theorem for models of arithmetic and the Gödel–Rosser Theorem and recursively inseparable sets; and to investigate stronger versions of Tennenbaum's Theorem and their relationship to some diophantine problems in systems of arithmetic.
Tennenbaum's theorem was discovered in a period of foundational studies, associated particularly with Mostowski, where it still seemed conceivable that useful independence results for arithmetic could be achieved by a “hands-on” approach to building nonstandard models of arithmetic.
Conversation March 3, 1972. Husserl's philosophy is very different before 1909 from what it is after 1909. At this point he made a fundamental philosophical discovery, which changed his whole philosophical outlook and is even reflected in his style of writing. He describes this as a time of crisis in his life, both intellectual and personal. Both were resolved by his discovery. At this time he was working on phenomenological investigation of time.
There is a certain moment in the life of any real philosopher where he for the first time grasps directly the system of primitive terms and their relationships. This is what had happened to Husserl. Descartes, Schelling, Plato discuss it. Leibniz described it (the understanding or the system?) as being like the big dipper — it leads the ships. It was called understanding the absolute.
The analytic philosophers try to make concepts clear by defining them in terms of primitive terms. But they don't attempt to make the primitive terms clear. Moreover, they take the wrong primitive terms, such as “red”, etc., while the correct primitive terms would be “object”, “relation”, “well”, “good”, etc.
The understanding of the system of primitive terms and their relationships cannot be transferred from one person to another. The purpose of reading Husserl should be to use his experience to get to this understanding more quickly. (“Philosophy As Rigorous Science” is the first paper Husserl wrote after his discovery.)
Perhaps the best way would be to repeat his investigation of time. At one point there existed a 500-page manuscript on the investigation (mentioned in letters to Ingarden, with whom he wished to publish the manuscript).
Abstract. Finite set theory, here denoted ZFfin, is the theory obtained by replacing the axiom of infinity by its negation in the usual axiomatization of ZF (Zermelo-Fraenkel set theory). An ω-model of ZFfin is a model in which every set has at most finitely many elements (as viewed externally). Mancini and Zambella (2001) employed the Bernays-Rieger method of permutations to construct a recursive ω-model of ZFfin that is nonstandard (i.e., not isomorphic to the hereditarily finite sets Vω). In this paper we initiate the metamathematical investigation of ω-models of ZFfin. In particular, we present a new method for building ω-models of ZFfin that leads to a perspicuous construction of recursive nonstandard ω-models of ZFfin without the use of permutations. Furthermore, we show that every recursive model of ZFfin is an ω-model. The central theorem of the paper is the following:
Theorem A. For every graph (A, F), where F is a set of unordered pairs of A, there is an ω-model m of ZFfin whose universe contains A and which satisfies the following conditions:
(1) (A, F) is definable in m;
(2) Every element of m is definable in (m, a)_{a ∈ A};
(3) If (A, F) is pointwise definable, then so is m;
(4) Aut(m) ≅ Aut(A, F).
Theorem A enables us to build a variety of ω-models with special features, in particular:
Corollary 1. Every group can be realized as the automorphism group of an ω-model of ZFfin.
Corollary 2. For each infinite cardinal κ there are 2^κ rigid nonisomorphic ω-models of ZFfin of cardinality κ. […]