Statistical mechanics has long studied how microscopic interaction rules between the elements of a system at equilibrium translate into its macroscopic properties. In particular, many efforts have been devoted to the understanding of the phase transition phenomenon: as an external parameter (for example, the temperature) is varied, a change occurs in the macroscopic behavior of the system under study. For example, a liquid can be transformed into a solid or a gas when pressure or temperature are changed. Another important example of a phase transition is the appearance in various metallic materials of a macroscopic magnetization below a critical temperature. Such spontaneous manifestations of order have been widely studied and constitute an important paradigm of the emergence of global cooperative behavior from purely local rules.
In this chapter, we first recall some generalities and definitions concerning phase transitions and the emergence of cooperative phenomena. For this purpose we introduce the paradigmatic Ising model, which is a cornerstone of the statistical physics approach to complex systems and, as we will see in other chapters, is at the basis of several models used in contexts far from physics, such as the social sciences or epidemiology. After a brief survey of the main properties of the Ising model, we show how the usual scenarios for the emergence of a global behavior are affected by the fact that the interactions between the microscopic elements define a complex network.
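As a concrete flavor of the kind of dynamics discussed here, the following is a minimal illustrative sketch (not part of the original text) of single-spin-flip Metropolis dynamics for the Ising model on an arbitrary network, written with the networkx library; the small-world substrate and the temperature values are arbitrary assumptions chosen only for the demonstration.

```python
import math
import random

import networkx as nx

def ising_metropolis(G, T, steps, seed=0):
    """Single-spin-flip Metropolis dynamics; returns |magnetization| per spin."""
    rng = random.Random(seed)
    spins = {node: rng.choice((-1, 1)) for node in G}
    nodes = list(G)
    for _ in range(steps):
        i = rng.choice(nodes)
        # Energy change for flipping spin i (ferromagnetic coupling J = 1).
        dE = 2 * spins[i] * sum(spins[j] for j in G[i])
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    return abs(sum(spins.values())) / G.number_of_nodes()

# Magnetization on a small-world substrate at low and high temperature.
G = nx.watts_strogatz_graph(n=500, k=4, p=0.1, seed=1)
for T in (1.0, 4.0):
    print(T, ising_metropolis(G, T, steps=100_000))
```

At low temperature the system orders (magnetization close to 1), while at high temperature thermal noise destroys the cooperative behavior.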
Although we have reviewed or mentioned more than 600 scientific works on equilibrium and non-equilibrium processes in complex networks, the present book is by no means intended as an exhaustive account of all research activities in network science. It would have been extremely easy to list more than twice as many scientific papers and, as we have stressed in several passages of this book, we have decided to focus our attention on the subset of network research that deals with dynamical processes in large-scale networks characterized by complex features such as large-scale fluctuations and heterogeneities. This implies the selection of studies and approaches tailored to tackle the large-scale properties and asymptotic behavior of phenomena and, as a consequence, models that usually trade details and realism for generality and a high level of abstraction. This corresponds to a methodological approach that has its roots in statistical physics and has greatly contributed to generating an entire research area labeled in recent years as the “new science of networks” (Watts, 2004). As network science is an interdisciplinary endeavor, however, we do not want to overlook the criticisms raised by several authors against this “new science of networks” and against the statistical physics and complex systems approaches. These criticisms are articulated around several key points that are echoed in several commentaries and essays (see for instance Urry [2004], Keller [2005] and Mitzenmacher [2006]), and at the end of 12 long chapters we believe we are obliged to provide a discussion of some of these key points.
In this chapter we introduce the reader to the basic definitions of network and graph theory. We define metrics such as the shortest path length, the clustering coefficient, and the degree distribution, which provide a basic characterization of network systems. The large size of many networks makes statistical analysis the proper tool for a useful mathematical characterization of these systems. We therefore describe the many statistical quantities characterizing the structural and hierarchical ordering of networks, including multipoint degree correlation functions, the clustering spectrum, and several other local and non-local quantities, as well as hierarchical measures and weighted properties.
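As a brief illustration (not part of the original text), the three basic metrics just mentioned can be computed with the networkx library as follows; the graph size and connection probability are arbitrary assumptions.

```python
from collections import Counter

import networkx as nx

G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=42)

# Average shortest path length, computed on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("average shortest path length:", nx.average_shortest_path_length(giant))

# Average clustering coefficient.
print("average clustering coefficient:", nx.average_clustering(G))

# Degree distribution P(k): the fraction of nodes having degree k.
counts = Counter(d for _, d in G.degree())
P = {k: c / G.number_of_nodes() for k, c in sorted(counts.items())}
print("degree distribution:", P)
```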
This chapter will give the reader a crash course on the basic notions of network analysis, which are prerequisites for understanding later chapters of the book. Needless to say, the expert reader can freely skip this chapter and use it later as a reference if needed.
What is a network?
In very general terms, a network is any system that admits an abstract mathematical representation as a graph whose nodes (vertices) identify the elements of the system and in which the set of connecting links (edges) represents the presence of a relation or interaction among those elements. Clearly, such a high level of abstraction applies to a wide array of systems. In this sense, networks provide a theoretical framework that allows a convenient conceptual representation of interrelations in complex systems, where the system-level characterization implies the mapping of interactions among a large number of individuals.
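To make the abstraction concrete, here is a minimal sketch (illustrative only, not from the text) of the graph representation just described, storing an undirected graph as an adjacency list.

```python
from collections import defaultdict

class Graph:
    """Undirected graph stored as an adjacency list."""

    def __init__(self):
        self.adj = defaultdict(set)     # node -> set of neighbouring nodes

    def add_edge(self, u, v):
        self.adj[u].add(v)              # an edge records a relation
        self.adj[v].add(u)              # between two elements of the system

    def degree(self, u):
        return len(self.adj[u])

g = Graph()
g.add_edge("A", "B")
g.add_edge("B", "C")
print(g.degree("B"))                    # -> 2
```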
The mathematical modeling of epidemics is a very active field of research which crosses different disciplines. Epidemiologists, computer scientists, and social scientists share a common interest in studying spreading phenomena and rely on very similar models for the description of the diffusion of viruses, knowledge, and innovation. Epidemic modeling has developed an impressive array of methods and approaches aimed at describing various spreading phenomena, as well as incorporating many details affecting the spreading of real pathogens. In particular, understanding and predicting an epidemic outbreak requires a detailed knowledge of the contact networks defining the interactions between individuals. The theoretical framework for epidemic spreading must therefore be widened with appropriate models and methods dealing with the intrinsic system complexity encountered in many real situations.
In this chapter, we introduce the general framework of epidemic modeling in complex networks, showing how the introduction of strong degree fluctuations leads to unusual results concerning the basic properties of disease spreading processes. Using some specific examples, we show how incorporating complex networks into epidemic modeling yields new interpretative frameworks for the spread of diseases and provides a quantitative rationalization of general features observed in epidemic spreading. We end the chapter by discussing general issues about modeling the spread of diseases in complex environments, and we describe the metapopulation models that are at the basis of modern computational epidemiology.
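As an illustration of the kind of process studied in this chapter, the following is a minimal sketch (not taken from the text) of a discrete-time susceptible-infected-susceptible (SIS) epidemic on a heterogeneous network; the infection and recovery probabilities and the scale-free substrate are assumed values chosen only for the demonstration.

```python
import random

import networkx as nx

def sis_step(G, infected, beta, mu, rng):
    """One discrete time step of the SIS process."""
    new_infected = set()
    for i in infected:
        if rng.random() >= mu:              # node recovers with probability mu
            new_infected.add(i)
        for j in G[i]:                      # each contact transmits with probability beta
            if j not in infected and rng.random() < beta:
                new_infected.add(j)
    return new_infected

rng = random.Random(0)
G = nx.barabasi_albert_graph(n=2000, m=2, seed=0)   # heterogeneous (scale-free) substrate
infected = set(rng.sample(list(G), 20))
for _ in range(200):
    infected = sis_step(G, infected, beta=0.1, mu=0.1, rng=rng)
print("prevalence after 200 steps:", len(infected) / G.number_of_nodes())
```

On heterogeneous substrates such as this one, the hubs keep sustaining the spreading process, which is at the origin of the unusual threshold behavior discussed in the chapter.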
In Chapter 6, the case of percolation and resilience of undirected graphs was considered. Various authors have tackled the problem of its generalization to directed networks (Newman et al., 2001; Schwartz et al., 2002; Dorogovtsev et al., 2001a; Boguñá and Serrano, 2005).
As seen in Chapter 1, directed networks are topologically richer and more complex than their undirected counterparts. The links attached to a node can be either incoming or outgoing. The number of in-links is the in-degree k_in, and the number of out-links is the out-degree k_out. Since some pairs of nodes can be linked by two directed links pointing in opposite directions, it is also possible to distinguish between strictly incoming, strictly outgoing, and bidirectional links, the total degree being the sum of their respective numbers k_i, k_o, k_b, with k_in = k_i + k_b and k_out = k_o + k_b. The degree distribution is thus the joint distribution of these degrees. Most often, strictly directed networks are considered, with no bidirectional links: k_b = 0. Note that, since every one-way link is strictly outgoing for exactly one node and strictly incoming for exactly one node, a simple conservation rule makes the averages of k_i and k_o equal: 〈k_i〉 = 〈k_o〉. Each weakly connected component (WCC) of a directed graph also has an internal structure, as described in Chapter 1, and is composed of a strongly connected component (SCC), an in-component and an out-component, tendrils, and tubes.
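This bookkeeping is easy to check numerically; the short sketch below (illustrative, on an arbitrary random directed graph) splits each node's links into strictly incoming, strictly outgoing, and bidirectional ones and verifies the conservation rule.

```python
import networkx as nx

G = nx.gnp_random_graph(n=1000, p=0.01, seed=3, directed=True)

def split_degrees(G, v):
    """Return (k_i, k_o, k_b): strictly-in, strictly-out, bidirectional."""
    succ, pred = set(G.successors(v)), set(G.predecessors(v))
    k_b = len(succ & pred)
    return len(pred) - k_b, len(succ) - k_b, k_b

for v in G:
    k_i, k_o, k_b = split_degrees(G, v)
    assert G.in_degree(v) == k_i + k_b and G.out_degree(v) == k_o + k_b

# Each one-way link contributes to k_i for one node and to k_o for another,
# hence the two averages coincide exactly.
mean_k_i = sum(split_degrees(G, v)[0] for v in G) / len(G)
mean_k_o = sum(split_degrees(G, v)[1] for v in G) / len(G)
print(mean_k_i, mean_k_o)
```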
In this chapter we present a review of the network models that will be used to study dynamical processes in the context of computational approaches. These different models will also help to determine the influence that specific network features have on the various phenomena considered in the following chapters. To this end, we will discuss the different modeling approaches and put the activity focused on each of the different models into the proper perspective. Particular emphasis will be devoted to models which have been developed as theoretical examples of the different specific classes of real-world networks empirically observed.
Randomness and network models
Static random graph models and topology generators such as the paradigmatic Erdős–Rényi model (Erdős and Rényi, 1959; 1960; 1961) and the network generation algorithm of Molloy and Reed (1995) are the simplest network models to include stochasticity as an essential element. They are characterized by an absolute lack of knowledge of the principles that guide the creation of edges between nodes. Lacking any such information, the simplest assumption one can make is to connect pairs of nodes at random with a given connection probability p. In its original formulation, an Erdős–Rényi graph G(N, E) is constructed starting from a set of N different vertices joined by E edges whose endpoints are selected at random among the N vertices.
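A minimal sketch of this G(N, E) construction follows (a common variant that, as an assumption, forbids self-loops and multiple edges): E edges are placed between pairs of vertices chosen uniformly at random among the N vertices.

```python
import random

def erdos_renyi_gne(N, E, seed=0):
    """Sample a graph with N vertices and E distinct random edges."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < E:
        u, v = rng.sample(range(N), 2)      # two distinct endpoints (no self-loops)
        edges.add((min(u, v), max(u, v)))   # normalise to avoid duplicate edges
    return edges

print(len(erdos_renyi_gne(N=100, E=300)))   # -> 300
```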
In the past few years, the study of large networked systems has received a boost from the ever-increasing availability of large data sets and computer power for their storage and manipulation. In particular, mapping projects of the World Wide Web and the physical Internet offered the first chance to study the topology of large complex networks. Gradually, other maps followed describing many networks of practical interest in social science, critical infrastructures, and biology. Indeed, large complex networks arise in a vast number of natural and artificial systems. The brain consists of many interconnected neurons; ecosystems consist of species whose interdependency can be mapped into intricate food webs. Social systems may be represented by graphs describing various interactions among individuals. Large networked infrastructures such as power grids and transportation networks are critical to our modern society. Finally, the living cell is not an exception, its organization and function being the outcome of a complex web of interactions among genes, proteins, and other molecules. In the search for the underlying laws governing the dynamics and evolution of these complex systems, researchers have started the systematic analysis and characterization of their network representations. A central result of these activities is that large-scale networks are generally characterized by complex topologies and very heterogeneous structures. These features usually find their signature in connectivity patterns statistically characterized by heavy tails and large fluctuations, scale-free properties, and non-trivial correlations such as high clustering and hierarchical ordering.
The large-scale collapses of the electric distribution grid and Internet outages have shown that the stability of complex networks is a problem of crucial importance. The protection of critical infrastructures and the elaboration of efficient reaction strategies all rely on the identification of crucial (or weak) elements of the network and on the understanding of the progressive damage caused by successive removals or failures of nodes. In this context, it has been observed that complex network models usually display great stability even when confronted with a large number of repeated small failures, while at the same time major damage can be triggered, unpredictably, by apparently small shocks. This is consistent with empirical experience, in which unexpected, small perturbations may sometimes trigger large systemic failures.
In this chapter, we deal with the impact of the removal or failure of nodes in complex networks by studying their percolation behavior. Percolation models and the associated critical phenomena have been extensively studied in statistical physics and, even though they are not endowed with a high level of realism, they can be thought of as a zeroth-order approximation to a wide range of damaging processes. In particular, we focus on heterogeneous networks, which typically display both a large robustness to random failures and a high vulnerability to targeted attacks on the most crucial nodes.
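This robustness/fragility contrast can be seen in a few lines of simulation; the sketch below (illustrative, with assumed network size and removal fraction) compares the giant-component size after random failures and after a degree-targeted attack on a scale-free network.

```python
import random

import networkx as nx

N = 10_000
G = nx.barabasi_albert_graph(n=N, m=2, seed=0)

def giant_fraction(H):
    """Fraction of the original nodes in the largest connected component."""
    return len(max(nx.connected_components(H), key=len)) / N

f = 0.05                          # fraction of nodes to remove
rng = random.Random(0)

H = G.copy()                      # random failures: remove a random 5% of nodes
H.remove_nodes_from(rng.sample(list(G), int(f * N)))
print("random failures:", giant_fraction(H))

H = G.copy()                      # targeted attack: remove the 5% highest-degree nodes
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[: int(f * N)]
H.remove_nodes_from(n for n, _ in hubs)
print("targeted attack:", giant_fraction(H))
```

Removing the hubs fragments the network far more severely than removing the same number of randomly chosen nodes.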
The navigation and exploration of complex networks are obviously affected by the underlying connectivity properties and represent a challenging scientific problem with a large number of practical applications. The most striking example is the World Wide Web (WWW), an immense data repository which acquires a practical interest and value only if it is possible to locate the particular information one is seeking. It is in this context, and with large-scale information and communication technologies (ICTs) in mind, that most of the models and studies have been developed. Because information discovery and retrieval rely on the understanding of the properties of random walks and diffusion phenomena in complex networks, considerable activity has been devoted to the investigation of these basic processes.
In this chapter we will review the main strategies that can be used to explore and retrieve information from the vertices of a network. In particular, we want to show how the topological properties of networks might affect search and retrieval strategies, and how these strategies can effectively take advantage of the network's connectivity structure. We start from basic considerations on diffusion processes taking place on networks and then naturally move on to more applied works that consider well-defined problems encountered in the ICT world.
Diffusion processes and random walks
The discovery process and the associated algorithms in large-scale networks are closely related to the properties of a random walk diffusing on a network of a given topology.
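As a starting point, the following is a minimal sketch (illustrative; the substrate and walk length are arbitrary assumptions) of a simple random walk on a network; in the stationary state the walker visits each node with a frequency proportional to its degree, so the most visited nodes are the hubs.

```python
import random
from collections import Counter

import networkx as nx

def random_walk(G, start, steps, seed=0):
    """Return the sequence of nodes visited by a simple random walk."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(list(G[path[-1]])))   # hop to a uniformly chosen neighbour
    return path

G = nx.barabasi_albert_graph(n=1000, m=2, seed=0)
visits = Counter(random_walk(G, start=0, steps=100_000))
print(visits.most_common(3))   # the hubs dominate the visit statistics
```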
The main role of many networks is to provide the physical substrate for the flow of some physical quantity, data, or information. In particular, this is the case for the transportation and technological infrastructures upon which the everyday functioning of our society relies. In this context, congestion phenomena, failures, and breakdown avalanches can have dramatic effects, as witnessed in major blackouts and transportation breakdowns. Also, in technological networks, if individual elements do not have the capacity to cope with the amount of data to be handled, the network as a whole might be unable to perform efficiently. The understanding of congestion and failure avalanches is therefore a crucial research question when developing strategies aimed at network optimization and protection.
Clearly, the study of network traffic and the emergence of congestion and large-scale failures cannot neglect the specific nature of the system. The huge literature addressing these questions is therefore domain oriented and goes beyond the scope of this book. On the other hand, large-scale congestion and failure avalanches can also be seen as the emergence of collective phenomena and, as such, studied through simple models with the aim of abstracting the general features due to the basic topology of the underlying network. This chapter is devoted to an overview of results concerning traffic, congestion, and avalanches, and is more focused on the general and ubiquitous features of these phenomena than on system-specific properties.
Networks have long been recognized as having a central role in the biological sciences. They are the natural underlying structures for the description of a wide array of biological processes, across scales varying from molecular processes to species interactions. Especially at smaller scales, most genes and proteins do not have a function on their own; rather, they acquire a specific role through the complex web of interactions with other proteins and genes. In recent years this perspective, largely fostered by the abundance of high-throughput experiments and the availability of entire genome sequences and gene co-expression patterns, has led to a stream of activities focusing on the architecture of biological networks.
The abundance of large-scale data sets on biological networks has revealed that their topological properties in many cases depart considerably from the random homogeneous paradigm. This evidence has spurred intense research activity aimed at understanding the origin of these properties as well as their biological relevance. The problem amounts to linking structure and function, in most cases, by understanding the interplay of topology and dynamical processes defined on the network. Empirical observations of heterogeneities have also revamped several areas and landmark problems such as Boolean network models and the issue of stability and complexity in ecosystems.
While concepts and methods of complex network analysis are nowadays standard tools in network biology, it is clear that their relevance and roles have to be critically examined by taking into account the specific nature of the biological problem at hand.
The present chapter is intended to provide a short introduction to the theory and modeling of equilibrium and non-equilibrium processes on networks and to define the basic modeling approaches and techniques used in the theory of dynamical processes. In particular, we define the master equation formalism and distinguish between equilibrium and non-equilibrium phenomena. Unfortunately, while the master equation allows for important conceptual distinctions and categorizations, its complete solution is hardly achievable even for very simple dynamical processes. For this reason we introduce the reader to techniques such as mean-field and continuous deterministic approximations, which usually represent viable approaches to understanding the basic features of the process under study. We also discuss the Monte Carlo and agent-based modeling approaches that are generally implemented in large-scale numerical simulation methods.
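To give a flavor of the continuous deterministic (mean-field) approximation, the sketch below (an illustration, not the book's derivation) integrates with a simple Euler scheme the homogeneous mean-field equation dρ/dt = -μρ + βρ(1 - ρ) for a generic two-state contact process; the parameter values are arbitrary assumptions.

```python
def mean_field_density(rho0, beta, mu, dt=0.01, t_max=50.0):
    """Euler integration of d(rho)/dt = -mu*rho + beta*rho*(1 - rho)."""
    rho, t = rho0, 0.0
    while t < t_max:
        rho += dt * (-mu * rho + beta * rho * (1.0 - rho))  # Euler step
        t += dt
    return rho

# For beta > mu the density approaches the stationary value 1 - mu/beta.
print(mean_field_density(rho0=0.01, beta=0.3, mu=0.1))  # ~ 0.667
```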
These different theoretical methods help to define a general framework to demonstrate how the microscopic interactions between the elements of the system lead to cooperative phenomena and emergent properties of the dynamical processes. This strategy, going from microscopic interaction to emergent collective phenomena, has its roots in statistical physics methodology and population dynamics, and is currently viewed as a general paradigm to bridge the gap between the local and the large-scale properties of complex systems. It is important to stress, however, that the following material is a greatly abbreviated presentation of a huge field of research and by necessity just scratches the surface of the statistical theory of dynamical processes.