Eilenberg and MacLane invented (discovered) category theory in the early 1940s. They were working on Čech cohomology and wanted to separate the routine manipulations from those with more specific content. It turned out that category theory is good at that. Hence its other name, 'abstract nonsense', which is not always used with affection.
Another part of their motivation was to try to explain why certain ‘natural’ constructions are natural, and other constructions are not. Such ‘natural’ constructions are now called natural transformations, a term that was used informally at the time but now has a precise definition. They observed that a natural transformation passes between two gadgets. These had to be made precise, and are now called functors. In turn each functor passes between two gadgets, which are now called categories. In other words, categories were invented to support functors, and these were invented to support natural transformations.
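For the record, the precise definition that these informal 'gadgets' eventually received is short. For functors between the same pair of categories, a natural transformation assigns an arrow to each object, subject to a commuting-square condition:

```latex
% Natural transformation: for functors F, G : \mathcal{C} \to \mathcal{D},
% a natural transformation \eta : F \Rightarrow G assigns to each object
% A of \mathcal{C} an arrow \eta_A : F(A) \to G(A) in \mathcal{D} such
% that for every arrow f : A \to B in \mathcal{C} the naturality square
% commutes:
\eta_B \circ F(f) = G(f) \circ \eta_A
```

The commuting square is exactly what makes a construction 'natural': it works uniformly across all objects, with no arbitrary choices depending on the particular object.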
But why the somewhat curious terminology? This is explained on pages 29 and 30 of Mac Lane (1998).
… the discovery of ideas as general as these is chiefly the willingness to make a brash or speculative abstraction, in this case supported by the pleasure of purloining words from philosophers: “Category” from Aristotle and Kant, “Functor” from Carnap …
That, of course, is the bowdlerized version.
Most of the basic notions were set up in Eilenberg and MacLane (1945) and that paper is still worth reading.
The isolation of the notion of an adjunction is one of the most important contributions of category theory. In a sense adjoints form the first ‘non-trivial’ part of category theory; at least it can seem that way now that all the basic stuff has been sorted out. There are adjunctions all over mathematics, and examples were known before the categorical notion was formalized. We have already met several examples, and later I will point you to them.
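As a reminder of the shape of the notion before the slow walk-through begins: an adjunction between two functors is a bijection of hom-sets, natural in both arguments. The free–forgetful adjunction for monoids is a standard first example.

```latex
% An adjunction F \dashv G, for F : \mathcal{C} \to \mathcal{D} and
% G : \mathcal{D} \to \mathcal{C}, is a bijection
\mathcal{D}\big(F(A), B\big) \;\cong\; \mathcal{C}\big(A, G(B)\big)
% natural in both A and B. For example, the free monoid functor
% F : \mathbf{Set} \to \mathbf{Mon} is left adjoint to the forgetful
% functor G : \mathbf{Mon} \to \mathbf{Set}: a monoid homomorphism
% F(A) \to M corresponds exactly to a function A \to G(M).
```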
In this chapter we go through the various aspects of adjunctions quite slowly. We look at each part in some detail but, I hope, not in so much detail that we lose the big picture.
There is a lot going on in adjunctions, and you will probably get confused more than once. You might get things mixed up, forget which way an arrow is supposed to go, not be able to spell contafurious, and so on. Don't worry. I've been at it for over 40 years and I still can't remember some of the details. In fact, I don't try to. You should get yourself to the position where you can recognize that perhaps there is an adjunction somewhere around, but you may not be quite sure where. You can then look up the details. If you ever have to use adjunctions every day, then the details will become second nature to you.
As it says on the front cover, this book is an introduction to Category Theory. It gives the basic definitions; goes through the various associated gadgetry such as functors, natural transformations, limits and colimits; and then explains adjunctions. This material could be developed in 50 pages or so, but here it takes some 220 pages. That is because there are many examples illustrating the various notions, some rather straightforward, and others with more content. More importantly, there are also over 200 exercises. And perhaps even more importantly, solutions to these exercises are available online.
The book is aimed primarily at the beginning graduate student, but that does not mean that other students or professional mathematicians will not find it useful. I have designed the book so that it can be used by a single student or small group of students to learn the subject on their own. The book will make a suitable text for a reading group. The book does not assume the reader has a broad knowledge of mathematics. Most of the illustrations use rather simple ideas, but every now and then a more advanced topic is mentioned. The book can also be used as a recommended text for a taught introductory course.
Every mathematician should at least know of the existence of category theory, and many will need to use categorical notions every now and then. For those groups this is the book you should have. Other mathematicians will use category theory every day.
All of the methodologies and tools introduced throughout this book rely on the evaluation of appropriate case studies. This chapter introduces three industrial-strength case studies serving as a foundation for all subsequent chapters in this book.
The Sales Scenario case study demonstrates business application engineering in the domain of enterprise software, a rather large domain encompassing, for example, enterprise resource planning (ERP), product life cycle management (PLM) and supply chain management (SCM). Such solutions must be adapted and customised to the particular company in which they are deployed. This is not a trivial task because of the highly varied needs of the respective stakeholders; for this reason business applications often have thousands of configuration settings. To keep this complexity manageable and for the sake of conciseness, the Sales Scenario case study focuses on one specific sub-domain – customer relationship management (CRM) – combined with some parts of the aforementioned solutions.
The previous chapters of this book have presented a number of different techniques that are useful for developing software product lines (SPLs). These techniques can be combined in a variety of ways for different SPLs; each SPL is likely to require its own combination of techniques. To provide some guidance for SPL engineers, this and the next chapter discuss different scenarios for product line development and explain the ways in which the techniques previously presented can be used in these scenarios.
This chapter focuses on product-driven SPL engineering. We begin by explaining what we mean by this term, followed by an identification of requirements for this SPL scenario and a description of an approach for systematically developing such SPLs. The chapter closes by discussing the approach and how it meets the initial requirements as well as the challenges discussed in Chapter 1.
The implementation of a product line consists of a set of reusable components, called core assets, which are composed and configured in different ways to build different concrete products. Because a product line is intended to support multiple products, additional complexity is introduced both into its assets and into the development process. The assets are more complicated because they must accommodate the variations among the concrete products. The development process is more complicated because it must deal not only with the evolution of the common assets, but also with the independent evolution of products and the instantiation of new products.
In order to reduce the complexity of the implementation of a product line and to facilitate independent evolution, it is desirable to modularise the core features of a product line and the specific features of individual products. Considering features as units of variation in a product line, our goal is to support feature-oriented decomposition of software, in which each feature is implemented in a separate module.
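As an illustration of feature-oriented decomposition, a minimal sketch in Python: each feature lives in its own module (here a mixin class), and a concrete product is obtained by composing the selected features with the core asset. All class and feature names are invented for the example.

```python
# Feature-oriented decomposition sketch: features as separate modules
# (mixin classes), products as compositions of selected features.

class BaseEditor:
    """Core asset shared by every product in the line."""
    def describe(self):
        return ["editor"]

class SpellCheck:
    """One feature, implemented in its own module."""
    def describe(self):
        return super().describe() + ["spell-check"]

class SyntaxHighlight:
    """Another independently evolving feature module."""
    def describe(self):
        return super().describe() + ["syntax-highlight"]

def compose_product(*features):
    """Build a concrete product class from the selected feature modules.

    Features are layered left to right on top of the core asset, so each
    feature refines the behaviour of the composition beneath it.
    """
    return type("Product", (*features, BaseEditor), {})

Basic = compose_product()
Pro = compose_product(SpellCheck, SyntaxHighlight)

print(Basic().describe())  # ['editor']
print(Pro().describe())    # ['editor', 'syntax-highlight', 'spell-check']
```

The point of the sketch is that adding or removing a feature touches only the composition, not the other modules, which is exactly the independent evolution the text asks for.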
Traceability is a quality attribute in software engineering that establishes the ability to describe and follow the life of a requirement in both the forward and backward directions (i.e. from its origins throughout its specification, implementation, deployment, use and maintenance, and vice-versa). The IEEE Standard Glossary of Software Engineering Terminology defines traceability as ‘the degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another’ (IEEE, 1999).
According to Palmer (1997), ‘traceability gives essential assistance in understanding the relationships that exist within and across software requirements, design, and implementation’. Thus, trace relationships help in identifying the origin and rationale of artefacts generated during the development lifecycle and the links between these artefacts. Identification of sources helps in understanding requirements evolution and in validating the implementation of stakeholders’ requirements. The main advantages of traceability are: (i) to relate software artefacts and design decisions taken during the software development cycle; (ii) to give feedback to architects and designers about the current state of the development, allowing them to reconsider alternative design decisions, and to track and understand bugs; and (iii) to ease communication between stakeholders.
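A trace model of this kind can be pictured as a directed graph: artefacts are nodes, trace links are edges, and forward tracing (from a requirement to everything derived from it) is a graph traversal. The following Python sketch uses invented artefact names purely for illustration.

```python
# Sketch of a trace model: artefacts as nodes, trace links as directed
# edges. Forward and backward tracing are then simple graph traversals.

from collections import defaultdict

class TraceModel:
    def __init__(self):
        self.forward = defaultdict(set)   # source -> derived artefacts
        self.backward = defaultdict(set)  # artefact -> its origins

    def link(self, source, target):
        """Record that `target` was derived from `source`."""
        self.forward[source].add(target)
        self.backward[target].add(source)

    def trace_forward(self, artefact):
        """All artefacts transitively derived from `artefact`."""
        seen, stack = set(), [artefact]
        while stack:
            for nxt in self.forward[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

tm = TraceModel()
tm.link("REQ-1", "design:checkout")
tm.link("design:checkout", "code:Checkout.java")

# Forward trace from the requirement reaches the design element and the
# code file; the backward map answers "where did this artefact come from?".
print(tm.trace_forward("REQ-1"))
print(tm.backward["code:Checkout.java"])
```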
Variability management is a key challenge in software product line engineering (SPLE), as reflected in challenge 2 (identifying commonalities) introduced in Chapter 1. A software product line (SPL) is all about identifying, modelling, realising and managing the variability between different products in the SPL.
Variability management has two major parts: modelling the variability an SPL should encompass; and designing how this variability is to be realised in individual products. For the former part, different kinds of variability models can be employed: a typical approach is to use feature models (Kang et al., 1990) (or cardinality-based feature models, see Czarnecki et al. (2005b), in some cases), but domain-specific languages (DSLs) have also been used with some success. The latter part – modelling how variability is realised – is less well understood. Some approaches have been defined and will be discussed in Section 4.2, including their limitations. In this chapter, we therefore focus on DSLs for variability management and present a novel approach developed in the AMPLE project that aims at overcoming these limitations.
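To make the first part – modelling the variability an SPL should encompass – concrete, here is a deliberately tiny sketch of a feature model as plain data, with mandatory features, optional features and cross-tree constraints; a product configuration is valid if it satisfies all of them. The feature names and constraints are invented, loosely echoing the CRM case study.

```python
# Illustrative feature model: mandatory and optional features plus
# cross-tree constraints (`requires`, `excludes`). A configuration is a
# set of selected feature names.

MANDATORY = {"crm-core"}
OPTIONAL = {"reporting", "mobile-client", "offline-mode"}
REQUIRES = {"offline-mode": "mobile-client"}  # hypothetical constraint
EXCLUDES = [("offline-mode", "reporting")]    # hypothetical constraint

def is_valid(config):
    """Check a product configuration against the feature model."""
    if not MANDATORY <= config:               # all mandatory features chosen
        return False
    if not config <= MANDATORY | OPTIONAL:    # no unknown features
        return False
    for feature, needed in REQUIRES.items():  # requires-constraints hold
        if feature in config and needed not in config:
            return False
    for a, b in EXCLUDES:                     # excludes-constraints hold
        if a in config and b in config:
            return False
    return True

print(is_valid({"crm-core", "mobile-client", "offline-mode"}))  # True
print(is_valid({"crm-core", "offline-mode"}))                   # False
```

Real feature models (Kang et al., 1990) add feature trees, groups and cardinalities, but the validity check above is the essential idea.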
He sat, in defiance of municipal orders, astride the gun Zam-Zammah on her brick platform opposite the old Ajaib-Gher – the Wonder House, as the natives call the Lahore Museum. Who hold Zam-Zammah, that ‘fire-breathing dragon’, hold the Punjab.
(Rudyard Kipling, Kim)
As the size and complexity of software systems grow, so does the need for effective modularity, abstraction and composition mechanisms to improve the reuse of software development assets during software systems engineering. This need for reusability is dictated by pressures to minimise costs and shorten the time to market. However, such reusability is only possible if these assets are variable enough to be usable in different products. Variability support has thus become an important attribute of modern software development practices. This is reflected by the increasing interest in mechanisms such as software product lines (Clements & Northrop, 2001) and generative programming (Czarnecki & Eisenecker, 2000). Such mechanisms allow the automation of software development, as opposed to the creation of custom ‘one-of-a-kind’ software from scratch. By utilising variability techniques, highly reusable code libraries and components can be created, thus cutting costs and reducing the time to market.
A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. Core assets are produced and reused in a number of products that form a family. These core assets may be documents, models, etc., comprising product portfolios, requirements, project plans, architecture, design models and, of course, software components.
One of the reasons for using variability in the software product line (SPL) approach (see Apel et al., 2006; Figueiredo et al., 2008; Kastner et al., 2007; Mezini & Ostermann, 2004) is to delay a design decision (Svahnberg et al., 2005). Instead of deciding in advance what system to develop, with the SPL approach a set of components and a reference architecture are specified and implemented (during domain engineering, see Czarnecki & Eisenecker, 2000), out of which individual systems are composed at a later stage (during application engineering, see Czarnecki & Eisenecker, 2000). By postponing design decisions in this manner, it is possible to better fit the resultant system to its intended environment, for instance, allowing the selection of the system interaction mode to be made after the customers have purchased particular hardware, such as a PDA vs. a laptop. Such variability is expressed through variation points, which are locations in a software-based system where choices are available for defining a specific instance of a system (Svahnberg et al., 2005). Until recently it had sufficed to postpone committing to a specific system instance until just before system runtime. However, in recent years the use of, and the expectations placed on, software systems in human society have undergone significant changes.
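A variation point can be sketched very simply: a named location where one of several variants is bound when a concrete product is instantiated, not when the reference architecture is designed. The hardware names and handler functions below are hypothetical stand-ins for the PDA-vs-laptop choice mentioned above.

```python
# Sketch of a variation point whose binding is deferred to application
# engineering (product instantiation) rather than fixed in advance.

def touch_ui(event):
    """Interaction-mode variant for touch hardware."""
    return f"touch: {event}"

def keyboard_ui(event):
    """Interaction-mode variant for keyboard hardware."""
    return f"keyboard: {event}"

# The variation point: the reference architecture names the choice but
# does not commit to a variant.
VARIANTS = {"pda": touch_ui, "laptop": keyboard_ui}

def instantiate_product(hardware):
    """Bind the variation point for one concrete product."""
    return VARIANTS[hardware]

handle_input = instantiate_product("laptop")
print(handle_input("open file"))  # keyboard: open file
```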
Today's software systems need to be always available, highly interactive, and able to adapt continuously to varying environmental conditions, user characteristics and the characteristics of other systems that interact with them. Such systems, called adaptive systems, are expected to be long-lived and able to undertake adaptations with little or no human intervention (Cheng et al., 2009). The variability therefore now needs to be present at system runtime as well, which leads to the emergence of a new type of system: adaptive systems with dynamic variability.
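What distinguishes dynamic variability is that the variation point stays rebindable while the system runs, with the system itself driving the rebinding. A minimal sketch, with an invented renderer and a battery-level trigger standing in for real sensed conditions:

```python
# Sketch of dynamic variability: a variation point that the running
# system rebinds itself in response to sensed conditions, with no human
# intervention.

class AdaptiveRenderer:
    def __init__(self):
        self.variant = self._high_quality  # initial binding

    def _high_quality(self):
        return "render: full detail"

    def _low_power(self):
        return "render: reduced detail"

    def sense(self, battery_percent):
        """Monitor the environment and rebind the variation point."""
        if battery_percent < 20:
            self.variant = self._low_power
        else:
            self.variant = self._high_quality

    def render(self):
        return self.variant()

r = AdaptiveRenderer()
print(r.render())              # render: full detail
r.sense(battery_percent=10)    # conditions change at runtime...
print(r.render())              # render: reduced detail
```

In a genuine adaptive system the `sense` step would be an autonomous monitoring loop and the variants would be whole component configurations, but the runtime rebinding shown here is the essential difference from the pre-runtime binding discussed earlier.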