Structural testing is considered more technical than functional testing. It attempts to design test cases from the source code rather than from the specifications. The source code becomes the base document, which is examined thoroughly in order to understand the internal structure and other implementation details. It also gives insight into the source code, which may be used as essential knowledge for the design of test cases. Structural testing techniques are also known as white box testing techniques because they consider the internal structure and other implementation details of the program. Many structural testing techniques are available; some of them, such as control flow testing, data flow testing, slice based testing and mutation testing, are covered in this chapter.
CONTROL FLOW TESTING
This technique is very popular due to its simplicity and effectiveness. We identify paths of the program and write test cases to execute those paths. A path is a sequence of statements that begins at an entry and ends at an exit. As shown in chapter 1, there may be too many paths in a program, and it may not be feasible to execute all of them. As the number of decisions in a program increases, the number of paths increases accordingly.
Every path covers a portion of the program. We define ‘coverage’ as the ‘percentage of source code that has been tested with respect to the total source code available for testing’.
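A minimal sketch in Python may make these ideas concrete (the function and the test values are our own illustration, not taken from this chapter). The function below has two decisions and hence three feasible entry-to-exit paths; each test case has been chosen to execute a particular path:

def classify(marks):
    # Toy function under test: grade a score in the range 0 to 100.
    if marks < 0 or marks > 100:    # decision 1
        return "invalid"
    if marks >= 40:                 # decision 2
        return "pass"
    return "fail"

# One test case per path through the two decisions; together these
# cases achieve full path coverage of this small function.
path_tests = [
    (-5, "invalid"),   # decision 1 true (lower bound violated)
    (105, "invalid"),  # decision 1 true (upper bound violated)
    (75, "pass"),      # decision 1 false, decision 2 true
    (10, "fail"),      # decision 1 false, decision 2 false
]

for marks, expected in path_tests:
    actual = classify(marks)
    assert actual == expected, f"classify({marks}) returned {actual}"
print(f"{len(path_tests)} test cases executed; all paths covered")

Note that the two ‘invalid’ inputs execute the same path but different conditions of the first decision, a distinction that matters for condition coverage rather than path coverage.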
What is object orientation? Why is it becoming important and relevant in software development? How is it improving the life of software developers? Is it a buzzword? Many such questions come to mind whenever we think about object orientation in software engineering. Companies are releasing object oriented versions of existing software products. Customers are also expecting object oriented software solutions. Many developers are of the view that structural programming, modular design concepts and conventional development approaches are old-fashioned activities and may not be able to handle today's challenges. They may also feel that real world situations are handled more effectively by object oriented concepts, using modeling in order to understand those situations clearly. Object oriented modeling may improve the quality of the SRS document and the SDD document, and may help us to produce good quality maintainable software. Software developed using object orientation may require a different set of testing techniques, although a few existing concepts may also be applicable with some modifications.
WHAT IS OBJECT ORIENTATION?
We may model real world situations using object oriented concepts. Suppose we want to send a book to our teacher, who stays in a city 500 km away from ours; we cannot go to his house to deliver the book in person. As we all know, sending a book is not a difficult task.
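This everyday situation can be expressed directly in object oriented terms, as the following Python sketch shows (the class and method names are our own assumptions for illustration). Each real world entity becomes an object, and the delivery becomes a message sent from one object to another:

class Book:
    def __init__(self, title):
        self.title = title

class Address:
    def __init__(self, city, distance_km):
        self.city = city
        self.distance_km = distance_km

class CourierService:
    # The real world entity to which we delegate the delivery.
    def send(self, book, address):
        print(f"Dispatching '{book.title}' to {address.city} "
              f"({address.distance_km} km away)")

# The delivery is modelled as a message between objects:
courier = CourierService()
courier.send(Book("Software Engineering"), Address("Teacher's city", 500))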
Software maintenance is becoming more important and more expensive day by day. Development of software may take a few years (2 to 4 years), but the same software may have to be maintained for many years (10 to 15 years). Software maintenance accounts for as much as two-thirds of the cost of software production [BEIZ90].
Software inevitably changes, however well written and well designed it may initially be. There are many reasons for such changes:
(i) Some errors may have been discovered during the actual use of the software.
(ii) The user may have requested additional functionality.
(iii) Software may have to be modified due to changes in external policies and principles. For example, when European countries decided to adopt a single European currency, the change affected all banking system software.
(iv) Some restructuring work may have to be done to improve the efficiency and performance of the software.
(v) Software may have to be modified due to change in existing technologies.
(vi) Some obsolete capabilities may have to be deleted.
This list is endless, but the message is loud and clear: ‘change is inevitable’. Hence, software always changes in order to address the above mentioned issues. The changed software is required to be re-tested in order to ensure that the changes work correctly and have not adversely affected other parts of the software. This is necessary because small changes in one part of a program may have subtle undesired effects in other, seemingly unrelated parts.
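A minimal sketch of such regression re-testing is given below (the function, the change and the recorded suite are our own assumptions for illustration). The whole recorded suite is re-run after a change: the updated case verifies the change itself, while the older cases guard the seemingly unrelated behaviour:

def discount(price, is_member):
    # The changed function: members now get 10% off instead of 5%.
    return price * 0.90 if is_member else price

regression_suite = [
    # (inputs, expected output) recorded from earlier releases
    ((100.0, False), 100.0),  # old behaviour that must not regress
    ((100.0, True), 90.0),    # updated expectation for the new change
]

failures = [(args, expected, discount(*args))
            for args, expected in regression_suite
            if abs(discount(*args) - expected) > 1e-9]
for args, expected, actual in failures:
    print(f"REGRESSION: discount{args} = {actual}, expected {expected}")
print("suite passed" if not failures else f"{len(failures)} failure(s) found")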
Is it possible to generate test data automatically? Generating test data requires a proper understanding of the SRS document, the SDD document and the source code of the software. We have discussed a good number of techniques in the previous chapters for writing test cases manually. How can we automate the process of writing test cases? How effective is such an automatically generated test suite? Is it really beneficial in practice? Such questions arise whenever we discuss the relevance of automated software test data generation. As we all know, testing software is a very expensive activity and adds nothing to the software in terms of functionality. If we are able to automate test data generation, the cost of testing will be reduced significantly.
Automated test data generation is an activity that generates test data automatically for the software under test. The quality and effectiveness of testing are heavily dependent on the generated test data. Daniel Hoffman and others [DANI99] have rightly observed:
“The assurance of software reliability partially depends on testing. However, it is interesting to note that testing itself also needs to be reliable. Automating the testing process is a sound engineering approach, which can make the testing efficient, cost effective and reliable.”
However, test data generation is not an easy and straightforward process. Many methods are available, each with its proclaimed advantages and limitations, but none of them has gained universal acceptance.
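To make the idea concrete, the sketch below shows one of the simplest possible methods: random test data generation guided by branch coverage (the program under test, its instrumentation and all names are our own illustration; practical tools use far more sophisticated search-based or symbolic techniques). Randomly generated inputs are kept only if they cover a branch that earlier inputs missed:

import random

covered = set()

def triangle(a, b, c):
    # Program under test, instrumented to record which branch executes.
    if a + b <= c or b + c <= a or a + c <= b:
        covered.add("not_a_triangle"); return "not a triangle"
    if a == b == c:
        covered.add("equilateral"); return "equilateral"
    if a == b or b == c or a == c:
        covered.add("isosceles"); return "isosceles"
    covered.add("scalene"); return "scalene"

branches = {"not_a_triangle", "equilateral", "isosceles", "scalene"}
test_data = []
random.seed(1)
attempts = 0
while covered != branches and attempts < 10000:
    attempts += 1
    candidate = tuple(random.randint(1, 10) for _ in range(3))
    already = len(covered)
    triangle(*candidate)
    if len(covered) > already:     # keep only inputs that add coverage
        test_data.append(candidate)

print(f"kept {len(test_data)} inputs covering "
      f"{len(covered)} of {len(branches)} branches")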
All of the methodologies and tools introduced throughout this book rely on the evaluation of appropriate case studies. This chapter introduces three industrial-strength case studies serving as a foundation for all subsequent chapters in this book.
The Sales Scenario case study demonstrates business application engineering in the domain of enterprise software, a rather large domain encompassing, for example, enterprise resource planning (ERP), product life cycle management (PLM) and supply chain management (SCM). Such solutions must be adapted and customised to the particular company in which they are deployed. This is not a trivial task because of the widely differing needs of the respective stakeholders. For this reason, business applications often have thousands of configuration settings. To reduce the complexity for the sake of conciseness, the Sales Scenario case study focuses on one specific sub-domain – customer relationship management (CRM) – combined with some parts of the aforementioned solutions.
The previous chapters of this book have presented a number of different techniques that are useful for developing software product lines (SPLs). These techniques can be combined in a variety of ways for different SPLs; each SPL is likely to require its own combination of techniques. To provide some guidance for SPL engineers, this and the next chapter discuss different scenarios for product line development and explain the ways in which the techniques previously presented can be used in these scenarios.
This chapter focuses on product-driven SPL engineering. We begin by explaining what we mean by this term, followed by an identification of requirements for this SPL scenario and a description of an approach for systematically developing such SPLs. The chapter closes by discussing the approach and how it meets the initial requirements as well as the challenges discussed in Chapter 1.
The implementation of a product line consists of a set of reusable components, called core assets, which are composed and configured in different ways to build different concrete products. Because a product line aims to support multiple products, additional complexity is introduced both in its assets and in the development process. The assets are more complicated because they must deal with the variations of the concrete products. The development process is more complicated because it must deal not only with the evolution of the common assets, but also with the independent evolution of products and the instantiation of new products.
In order to reduce the complexity of the implementation of a product line and to facilitate independent evolution, it is desirable to modularise the core features of a product line and the specific features of individual products. Considering features as units of variation in a product line, our goal is to support feature-oriented decomposition of software, in which each feature is implemented in a separate module.
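The following Python sketch illustrates this goal in miniature (the product, its features and the composition mechanism are our own assumptions; real feature-oriented approaches offer far richer composition). Each feature is implemented in a separate class, and a concrete product is composed by selecting the desired feature modules:

class CoreEditor:
    # Core asset shared by every product in the line.
    def describe(self):
        return ["core editor"]

class SpellCheck:
    # Feature module: implemented separately from the core.
    def describe(self):
        return ["spell check"] + super().describe()

class SyntaxHighlight:
    # Another independently evolving feature module.
    def describe(self):
        return ["syntax highlighting"] + super().describe()

def build_product(*features):
    # Compose a concrete product from the selected feature modules.
    return type("Product", (*features, CoreEditor), {})()

basic = build_product()
pro = build_product(SpellCheck, SyntaxHighlight)
print(basic.describe())  # ['core editor']
print(pro.describe())    # ['spell check', 'syntax highlighting', 'core editor']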
Traceability is a quality attribute in software engineering that establishes the ability to describe and follow the life of a requirement in both the forward and backward directions (i.e. from its origins throughout its specification, implementation, deployment, use and maintenance, and vice-versa). The IEEE Standard Glossary of Software Engineering Terminology defines traceability as ‘the degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another’ (IEEE, 1999).
According to Palmer (1997), ‘traceability gives essential assistance in understanding the relationships that exist within and across software requirements, design, and implementation’. Thus, trace relationships help in identifying the origin of and rationale for artefacts generated during the development lifecycle, and the links between these artefacts. Identifying sources helps in understanding requirements evolution and in validating the implementation of stakeholders’ requirements. The main advantages of traceability are: (i) to relate software artefacts and design decisions taken during the software development cycle; (ii) to give feedback to architects and designers about the current state of the development, allowing them to reconsider alternative design decisions, and to track and understand bugs; and (iii) to ease communication between stakeholders.
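As a minimal illustration, trace links can be stored and queried as simple relations, as in the Python sketch below (the artefact names and the data model are our own assumptions, not a standard API). Forward tracing follows a requirement towards its implementation; backward tracing returns to its origin:

trace_links = [
    # (source artefact, relationship, target artefact)
    ("REQ-12: export report", "refined_by", "DES-4: Exporter design"),
    ("DES-4: Exporter design", "implemented_by", "src/export.py"),
    ("src/export.py", "verified_by", "tests/test_export.py"),
]

def forward(artefact):
    # Follow links from an origin towards implementation and test.
    return [target for source, _, target in trace_links if source == artefact]

def backward(artefact):
    # Follow links back towards the originating requirement.
    return [source for source, _, target in trace_links if target == artefact]

print(forward("REQ-12: export report"))  # ['DES-4: Exporter design']
print(backward("src/export.py"))         # ['DES-4: Exporter design']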
Variability management is a key challenge in software product line engineering (SPLE), as reflected in challenge 2 (identifying commonalities) introduced in Chapter 1. A software product line (SPL) is all about identifying, modelling, realising and managing the variability between different products in the SPL.
Variability management has two major parts: modelling the variability an SPL should encompass; and designing how this variability is to be realised in individual products. For the former part, different kinds of variability models can be employed: a typical approach is to use feature models (Kang et al., 1990) (or cardinality-based feature models, see Czarnecki et al. (2005b), in some cases), but domain-specific languages (DSLs) have also been used with some success. The latter part – modelling how variability is realised – is less well understood. Some approaches have been defined and will be discussed in Section 4.2, including their limitations. In this chapter, we therefore focus on DSLs for variability management and present a novel approach developed in the AMPLE project that aims at overcoming these limitations.
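As a toy illustration of the former part, a feature model can be encoded and a product configuration checked against it, as in the Python sketch below (the features, constraints and encoding are our own assumptions; feature models in the style of Kang et al. (1990) and their cardinality-based variants are considerably richer):

MANDATORY = {"payment"}                      # present in every product
OPTIONAL = {"discounts", "gift_wrap"}
XOR_GROUPS = [{"credit_card", "invoice"}]    # choose exactly one alternative
REQUIRES = {"discounts": "credit_card"}      # cross-tree constraint

def valid(config):
    # Return True if the selected feature set describes a legal product.
    known = MANDATORY | OPTIONAL | set().union(*XOR_GROUPS)
    if not config <= known or not MANDATORY <= config:
        return False
    if any(len(config & group) != 1 for group in XOR_GROUPS):
        return False
    return all(dep in config
               for feature, dep in REQUIRES.items() if feature in config)

print(valid({"payment", "credit_card", "discounts"}))  # True
print(valid({"payment", "invoice", "discounts"}))      # False: needs credit_card
print(valid({"payment", "gift_wrap"}))                 # False: no alternative chosen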
He sat, in defiance of municipal orders, astride the gun Zam-Zammah on her brick platform opposite the old Ajaib-Gher – the Wonder House, as the natives call the Lahore Museum. Who hold Zam-Zammah, that ‘fire-breathing dragon’, hold the Punjab.
(Rudyard Kipling, Kim)
As the size and complexity of software systems grow, so does the need for effective modularity, abstraction and composition mechanisms to improve the reuse of software development assets during software systems engineering. This need for reusability is dictated by pressures to minimise costs and shorten the time to market. However, such reusability is only possible if these assets are variable enough to be usable in different products. Variability support has thus become an important attribute of modern software development practices. This is reflected by the increasing interest in mechanisms such as software product lines (Clements & Northrop, 2001) and generative programming (Czarnecki & Eisenecker, 2000). Such mechanisms allow the automation of software development, as opposed to the creation of custom ‘one of a kind’ software from scratch. By utilising variability techniques, highly reusable code libraries and components can be created, thus cutting costs and reducing the time to market.
A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. Core assets are produced and reused in a number of products that form a family. These core assets may be documents, models, etc., comprising product portfolios, requirements, project plans, architecture, design models and, of course, software components.