This text on the theory and applications of network science is aimed at beginning graduate students in statistics, data science, computer science, machine learning, and mathematics, as well as advanced students in business, computational biology, physics, social science, and engineering who work with large, complex relational data sets. It provides an exciting array of analysis tools, including probability models, graph theory, and computational algorithms, exposing students to ways of thinking about types of data that differ from typical statistical data. Concepts are demonstrated in the context of real applications, such as relationships between financial institutions, between genes or proteins, between neurons in the brain, and between terrorist groups. Methods and models described in detail include random graph models, percolation processes, methods for sampling from huge networks, network partitioning, and community detection. In addition to static networks, the book introduces dynamic processes on networks, such as epidemics, in which time is an essential component.
Functional magnetic resonance imaging (fMRI) was conceived in the early 1990s due to the coincidence of two advances: (1) MRI scanner technology able to support fast echo-planar imaging techniques with the required temporal stability and (2) the scientific knowledge that differences in the magnetic susceptibility of blood may be associated with MRI signal changes based on alterations in blood oxygenation levels. These elements, together with the assumption that changes in blood oxygenation and volume would accompany changes in neural activity in the brain, motivated research groups around the world to develop fMRI.
In the mid-1980s a number of scientists and research bodies conceived the idea of determining the DNA sequence of the entire human genome. Initiated in 1990 and known as the Human Genome Project (HGP), this ambitious, publicly funded project relied on contributions from numerous international laboratories and remains the world’s largest collaborative biological project to date. Its completion thirteen years later, in 2003, allowed scientists to view the human genome in its entirety for the first time [1]. It was thought that this would usher in a new age for biological research, allowing for a more comprehensive understanding of complex human diseases and phenotypes. While this was true to an extent, completion of the project led, as is often the case in research, to a series of new, more complicated questions.
Epilepsy affects approximately 1% of the population [1]. Although generally treatable, up to 30% of patients do not achieve seizure freedom from anticonvulsive medication alone. Due to its relationship with cognitive abilities [2], quality of life [3], and the associated risk of premature death [4], drug-refractory epilepsy should be treated promptly. Temporal lobe epilepsy (TLE) associated with mesiotemporal sclerosis [5] and extra-temporal lobe epilepsy related to focal cortical dysplasia (FCD) [6] constitute the most common refractory epilepsy syndromes. Surgical resection of these lesions remains the treatment of choice [7], with success rates approaching 80% [8]. By allowing the detection of epileptogenic lesions and offering insight into system-level mechanisms of the disease process, MRI has shifted the field from electro-clinical correlations toward a multidisciplinary approach.
Complex systems theory is a nebulous field whose overarching goal is to understand the dynamical behavior of systems consisting of many interconnected component parts. It has attracted widespread interest from researchers in the many domains that study examples of such systems, including ecologists, sociologists, engineers, artificial intelligence researchers, condensed matter physicists, neuroscientists, and many others. The results of these collected, multi-disciplinary efforts have not been so much a comprehensive theory of Complex Systems (capital-C, capital-S) as a set of techniques, analogies, and attitudes toward problem solving that emphasize interactions and dynamics over individual components and their functions. The chapters of this book are written in a complex adaptive systems frame, so it is useful to provide a provisional theoretical description of such systems. Following Holland [1], a generalizable description of complex adaptive systems is that they are collections of relatively simple agents that can aggregate, so that collections of agents form meta-agents (and meta-meta-agents, etc.) with higher-order structure. These aggregates interact nonlinearly, so that the aggregate behavior of a collection of agents is qualitatively different from the behavior of the individual agents. The interactions among agents mediate flows of materials or information. Finally, the agents are typically diverse, with distinct specialties that are optimized through adaptation to selective pressures in their environments.
The genetic underpinnings of epilepsy have come into much clearer focus over the past two decades. Advances in high-throughput molecular techniques have markedly improved our ability to identify potential therapeutic targets in epilepsy. Many of the monogenic effects identified through these methods have resulted in effective therapeutic targets for seizure amelioration [1,2,3]. Currently, around 200 definitively annotated epilepsy genes causing a range of seizure disorders and phenotypes have been identified [4]. Many more genes with putative associations with epilepsy pathways require further study [5]. The expansion of known genetic mechanisms and risk factors presents us with several benefits, including an increased pool of possible drug targets [6], genetic subtyping of seizure disorders [7], and the possibility of integrative analysis across different disorders [8,9]. However, the increasingly rich collection of genetic associations has also revealed the complexity of seizure disorders. Mutations in many different genes can converge on a similar clinical presentation [10], while different mutations in the same gene can have radically divergent outcomes [11,12]. Moreover, while robust data from twin and family studies demonstrate that common epilepsies are highly heritable [13,14], association studies have only detected risk factors that account for a small fraction of risk [15]. Thus, the data on epilepsy suggest a dichotomy. On one side, genetics is critical for describing etiology [16]. On the other, using this information for prognosis or therapeutic development is limited by our current understanding of the complex genetic underpinnings of the disease and by our analytic tools [10,17]. In response to this complexity, researchers have started to shift toward complex systems approaches to genetics, which move the focus from individual mutations to interactions among many mutations.
The purpose of this chapter is to elaborate this ethos and present examples of this approach.
Many trace the earliest articulation of what we may today call the science of complexity to Weaver’s [1] classic essay. In this work, Weaver distinguished between (i) the science of “simplicity,” with phenomena that could be understood when reduced to a few variables, such as classical mechanics in two dimensions, (ii) the science of “disorganized complexity,” concerning systems with large numbers of variables analyzed by a process of averaging, such as statistical thermodynamics, and (iii) an emerging science of “organized complexity,” concerning systems, also with large numbers of variables, that were amenable to neither approach. This third middle region, Weaver wrote, would form the next significant challenge for science, requiring both the power of machines (computers) and large interdisciplinary scientific teams for progress. Today, the field of complex systems, though lacking a universally accepted definition, studies entities – physical, biological, or social – united by the presence of large numbers of nonlinearly interacting agents that yield collective behavior not directly predictable from the laws governing interactions of the individual agents [2]. The thesis of complexity is therefore in direct opposition to the philosophy of reductionism and is the source of an important debate regarding the foundations of science itself [3]. Examples of collective behavior in complex systems include the “emergent” phenomena of macroscopic patterns [4] and phase transitions [5]. These coherent structures occur at scales far removed from those governing the interaction of the individual entities of the system and are due to bifurcation and symmetry breaking [6] involving macroscopic “collective” variables. On the other hand, the large size and nonlinearity of complex systems endow them with a measure of unpredictability – arising from deterministic chaos as well as inherent “fluctuations” – that naturally invokes a probabilistic description.
Complex systems are thus said to have an “open” future that generates information and “surprise” as they evolve [7].
Epilepsy is the most common of the chronic, severe neurological diseases. It affects 65 million people worldwide and is characterized by an augmented susceptibility to seizures. A seizure is a “transient occurrence of signs and/or symptoms due to abnormal excessive or synchronous neuronal activity in the brain” [1]. Current therapeutic strategies aim to suppress or reduce the occurrence of seizures and are thus symptomatic rather than curative. There are no known therapies able to modify the evolution of acquired epilepsy or to prevent its development. Furthermore, 25–40% of patients do not respond to pharmacological treatment, and this proportion remains unchanged with new-generation antiepileptic drugs compared to established ones. For drug-resistant patients with focal epilepsy (an epilepsy in which seizures start in one hemisphere) there exists an alternative to medication: surgical resection of the brain regions involved in the generation of seizures, the epileptogenic zone, under the constraint of limiting post-surgical neurological impairments. Rates of success of brain surgery for epilepsy treatment vary between 34% and 74% as a function of the type of epilepsy. Outcomes are highly variable, depend on the patient’s condition, and can change over time.
The previous chapters have dealt with the complex adaptive nature of the genome. Similar concepts in terms of interacting elements, self-organization and adaptation can be applied at other hierarchical scales. In this chapter we will show how complex adaptive systems (CAS) concepts can be usefully applied at the level of action potential firing patterns of single neurons in terms of seizure generation and of associated morbidities.
Epilepsy is a family of neurological disorders in which patients experience unprovoked spontaneous seizures. Unfortunately, there is currently no cure for epilepsy, and seizure management is the target of most therapies. The first-line treatment of epilepsy is usually antiepileptic drugs. However, depending on the subtype of epilepsy and the individual, drug treatments fail to control the seizures in around one-third of patients. One challenge in the treatment of epilepsy is its heterogeneity. In each patient, seizures are thought to be generated by different mechanisms, processes, and parameters, and treatment outcomes will also depend on these.
Since the early 2000s, the growing field of computational neuroscience has shown remarkable applicability in the study of epilepsy. A number of different and complementary approaches have been applied to brain signals obtained with scalp and invasive electroencephalography (EEG) to address a variety of fundamental and clinical problems. Historically, researchers have focused on overt changes in brain electrical signals, which can be detected using signal processing techniques. More recent advances have also shown that connectivity and network-level effects can provide critical information that complements the classical brain-regional perspective. Thus, the modern toolkit for epilepsy electrophysiology now includes complex systems approaches such as network science (e.g., graph theory), nonlinear signal processing, information theory, and machine learning techniques. These approaches have contributed both to our understanding of epilepsy and to the development of new tools that may improve its diagnosis and treatment.
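As a toy illustration of the connectivity perspective mentioned above, the following sketch builds a small functional-connectivity graph from synthetic multichannel "signals" by thresholding pairwise Pearson correlation. The signals, threshold value, and channel layout are invented for illustration; real EEG pipelines involve filtering, artifact rejection, and more robust coupling measures.

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy "recording": channels 0 and 1 share a common driver; channel 2 is noise.
T = 500
driver = [random.gauss(0, 1) for _ in range(T)]
ch0 = [d + 0.2 * random.gauss(0, 1) for d in driver]
ch1 = [d + 0.2 * random.gauss(0, 1) for d in driver]
ch2 = [random.gauss(0, 1) for _ in range(T)]
signals = [ch0, ch1, ch2]

# Functional connectivity graph: an edge wherever |correlation| exceeds a threshold.
THRESH = 0.5
edges = set()
for i in range(len(signals)):
    for j in range(i + 1, len(signals)):
        if abs(pearson(signals[i], signals[j])) > THRESH:
            edges.add((i, j))

# A basic graph-theoretic summary: the degree of each channel.
degree = {k: sum(k in e for e in edges) for k in range(len(signals))}
```

Only the two driven channels end up connected, so even this crude graph recovers the shared-driver structure that a purely per-channel analysis would miss.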
The epilepsies are devastating neurological disorders for which progress in developing effective new therapies has slowed over recent decades, primarily due to the complexity of the brain at all scales. This reality has shifted the focus of experimental and clinical practice toward complex systems approaches for overcoming current barriers. Organized by scale from genes to whole brain, the chapters of this book survey the theoretical underpinnings and use of network and dynamical systems approaches to interpreting and modeling experimental and clinical data in epilepsy. The emphasis throughout is on the value of the non-trivial, and often counterintuitive, properties of complex systems, and how to leverage these properties to elaborate mechanisms of epilepsy and develop new therapies. In this essential book, readers will learn key concepts of complex systems theory applied across multiple scales and how each of these scales connects to epilepsy.
There is a close relationship between random graphs and percolation. In fact, percolation and random graphs have been viewed as “the same phenomenon expressed in different languages” (Albert and Barabási, ). Early ideas on percolation (although not under that name) in molecular chemistry can be found in the articles by Flory () and Stockmayer ().
Percolation can be defined more generally than as a process on a regular lattice. In this chapter, we motivate the main ideas and theory of percolation on more general graphs through applications to polymer gelation and amorphous computing.
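The identification of random graphs with percolation can be made concrete: retaining each edge of the complete graph on n vertices independently with probability p is exactly the Erdős–Rényi model G(n, p), and a giant connected component emerges as np crosses 1. The sketch below (parameter values chosen only for illustration) uses a union-find structure to track components under edge percolation.

```python
import random

random.seed(1)

def percolate_complete_graph(n, p):
    """Keep each edge of the complete graph K_n independently with
    probability p -- i.e., sample the Erdős–Rényi graph G(n, p) --
    and return the size of the largest connected component."""
    parent = list(range(n))  # union-find forest over the n vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:      # edge survives percolation
                parent[find(i)] = find(j)

    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 2000
sub = percolate_complete_graph(n, 0.5 / n)  # below the threshold p = 1/n
sup = percolate_complete_graph(n, 2.0 / n)  # above the threshold
```

Below the threshold the largest component stays of order log n, while above it a single component spans a constant fraction of all vertices: the same phase-transition phenomenon studied in lattice percolation, expressed in the language of random graphs.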
In this chapter, we discuss various issues that arise when networks increase in size. What does it mean for a network to increase in size, and how would we visualize that process? Can a sequence of networks, increasing in size, converge to a limit, and what would such a limit look like? We discuss the transformation of an adjacency matrix to a pixel picture and what it means for a sequence of pixel pictures to increase in size. If a limit exists, the resulting function is called a limit graphon, but it is not itself a network. Estimation of a graphon is also discussed; the methods described include approximation by a stochastic block model (SBM) and a network histogram.
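To make the pixel-picture and network-histogram ideas concrete, the following sketch (block sizes and densities are invented for illustration) simulates a two-block stochastic block model, orders vertices by degree so the adjacency matrix displays its block structure as a pixel picture, and then averages the pixels over a coarse grid of blocks to obtain a step-function estimate of the underlying graphon.

```python
import random

random.seed(2)

def pixel_picture(adj):
    """Reorder vertices by degree so the 0/1 adjacency matrix, viewed as
    an image, exhibits its block structure."""
    n = len(adj)
    order = sorted(range(n), key=lambda v: sum(adj[v]), reverse=True)
    return [[adj[order[i]][order[j]] for j in range(n)] for i in range(n)]

def block_average(adj, k):
    """Network histogram: average the pixels over a k-by-k grid of blocks,
    giving a piecewise-constant (SBM-style) graphon estimate."""
    n = len(adj)
    h = n // k
    est = [[0.0] * k for _ in range(k)]
    for a in range(k):
        for b in range(k):
            s = sum(adj[i][j]
                    for i in range(a * h, (a + 1) * h)
                    for j in range(b * h, (b + 1) * h))
            est[a][b] = s / (h * h)
    return est

# Toy two-block SBM: one dense block (0.8), one moderate block (0.3),
# sparse connections between blocks (0.1).
n = 100
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if (i < n // 2) and (j < n // 2):
            prob = 0.8
        elif (i >= n // 2) and (j >= n // 2):
            prob = 0.3
        else:
            prob = 0.1
        if random.random() < prob:
            adj[i][j] = adj[j][i] = 1

est = block_average(pixel_picture(adj), 2)
```

The 2 × 2 step function recovered in `est` approximates the three planted densities, illustrating how a network histogram summarizes a large adjacency matrix as a small piecewise-constant graphon estimate.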