This extensive revision of the 2007 book 'Random Graph Dynamics,' covering the current state of mathematical research in the field, is ideal for researchers and graduate students. It considers a small number of types of graphs, primarily the configuration model and inhomogeneous random graphs. However, it investigates a wide variety of dynamics. The author describes results on convergence to equilibrium for random walks on random graphs, as well as topics that have emerged as mature research areas since the publication of the first edition, such as epidemics, the contact process, voter models, and coalescing random walks. Chapter 8 discusses a challenging and largely uncharted new direction: systems in which the graph and the states of its vertices coevolve.
Statistical mechanics employs the power of probability theory to shine a light upon the invisible world of matter's fundamental constituents, allowing us to accurately model the macroscopic physical properties of large ensembles of microscopic particles. This book delves into the conceptual and mathematical foundations of statistical mechanics to enhance understanding of complex physical systems and thermodynamic phenomena, whilst also providing a solid mathematical basis for further study and research in this important field. Readers will embark on a journey through important historical experiments in statistical physics and thermodynamics, exploring their intersection with modern applications, such as the thermodynamics of stars and the entropy associated with the mixing of two substances. An invaluable resource for students and researchers in physics and mathematics, this text provides numerous worked examples and exercises with full solutions, reinforcing key theoretical concepts and offering readers deeper insight into how these powerful tools are applied.
Statistical mechanics is hugely successful when applied to physical systems at thermodynamic equilibrium; however, most natural phenomena occur in nonequilibrium conditions and more sophisticated techniques are required to address this increased complexity. This second edition presents a comprehensive overview of nonequilibrium statistical physics, covering essential topics such as Langevin equations, Lévy processes, fluctuation relations, transport theory, directed percolation, kinetic roughening, and pattern formation. The first part of the book introduces the underlying theory of nonequilibrium physics, the second part develops key aspects of nonequilibrium phase transitions, and the final part covers modern applications. A pedagogical approach has been adopted for the benefit of graduate students and instructors, with clear language and detailed figures used to explain the relevant models and experimental results. With the inclusion of original material and organizational changes throughout the book, this updated edition will be an essential guide for graduate students and researchers in nonequilibrium thermodynamics.
Stochastic thermodynamics has emerged as a comprehensive theoretical framework for a large class of non-equilibrium systems including molecular motors, biochemical reaction networks, colloidal particles in time-dependent laser traps, and bio-polymers under external forces. This book introduces the topic in a systematic way, beginning with a dynamical perspective on equilibrium statistical physics. Key concepts like the identification of work, heat and entropy production along individual stochastic trajectories are then developed and shown to obey various fluctuation relations beyond the well-established linear response regime. Representative applications are then discussed, including simple models of molecular motors, small chemical reaction networks, active particles, stochastic heat engines and information machines involving Maxwell demons. This book is ideal for graduate students and researchers of physics, biophysics, and physical chemistry, with an interest in non-equilibrium phenomena.
Networks can get big. Really big. Examples include web crawls, online social networks, and knowledge graphs. Networks from these domains can have billions of nodes and hundreds of billions of edges. Systems biology is yet another area where networks will continue to grow. As sequencing methods continue to advance, more networks and larger, denser networks will need to be analyzed. This chapter discusses some of the challenges you face and solutions you can try when scaling up to massive networks. These range from implementation details to new algorithms and strategies to reduce the burden of such big data. Various tools, such as graph databases, probabilistic data structures, and local algorithms, are at our disposal, especially if we can accept sampling effects and uncertainty.
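As one illustration of working at scale, the sketch below uses reservoir sampling to keep a uniform random sample of edges from a stream far too large to hold in memory. It is only a minimal example of the sampling strategies the chapter alludes to; the file name, sample size, and edge-list format are assumptions made for the illustration, not details from the chapter.

    # Minimal sketch: uniform reservoir sampling over a huge edge stream.
    # Assumes a plain-text edge list (one "u v" pair per line); path, sample
    # size, and seed below are illustrative choices.
    import random

    def sample_edges(path, k=100_000, seed=42):
        """Keep a uniform random sample of k edges from an arbitrarily long stream."""
        rng = random.Random(seed)
        reservoir = []
        with open(path) as f:
            for i, line in enumerate(f):
                edge = tuple(line.split())
                if i < k:
                    reservoir.append(edge)
                else:
                    j = rng.randrange(i + 1)
                    if j < k:
                        reservoir[j] = edge  # replace an old edge with decreasing probability
        return reservoir

Any statistic computed on the sample (degree distributions, clustering estimates, and so on) then carries sampling uncertainty, which is exactly the trade-off the chapter describes.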
Every network has a corresponding matrix representation. This is powerful. We can leverage tools from linear algebra within network science, and doing so brings great insights. The branch of graph theory concerned with such connections is called spectral graph theory. This chapter will introduce some of its central principles as we explore tools and techniques that use matrices and spectral analysis to work with network data. Many matrices appear when studying networks, including the modularity matrix, the nonbacktracking matrix, and the precision matrix. But one matrix stands out—the graph Laplacian. Not only does it capture dynamical processes unfolding over a network's structure, but its spectral properties also have deep connections to that structure. We show many relationships between the Laplacian's eigendecomposition and network problems, such as graph bisection and optimal partitioning tasks. Combining the dynamical information and the connections with partitioning also motivates spectral clustering, a powerful and successful way to find groups of data in general. This kind of technique is now at the heart of machine learning, which we'll explore soon.
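To make the Laplacian-based bisection idea concrete, here is a minimal sketch, not taken from the chapter, that splits a small benchmark graph using the sign of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue). It assumes networkx and numpy are available; the example graph is an arbitrary choice.

    # Minimal sketch: two-way partition from the Fiedler vector of the graph Laplacian.
    import networkx as nx
    import numpy as np

    G = nx.karate_club_graph()
    L = nx.laplacian_matrix(G).toarray().astype(float)

    # Eigendecomposition of the symmetric Laplacian; eigenvalues come back sorted.
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]  # eigenvector for the second-smallest eigenvalue

    # The sign of each node's Fiedler-vector entry assigns it to one side of the cut.
    part_a = [n for n, x in zip(G.nodes, fiedler) if x >= 0]
    part_b = [n for n, x in zip(G.nodes, fiedler) if x < 0]
    print(len(part_a), len(part_b))

Spectral clustering generalizes this one-eigenvector idea by embedding nodes with several Laplacian eigenvectors and clustering the embedded points.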
The fundamental practices and principles of network data are presented in this book, and the preface serves as an important starting point for readers to understand the goals and objectives of this text. The preface explains how the practical and fundamental aspects of network data are intertwined, and how they can be used to solve real-world problems. It also gives advice on how to use the book, including the boxes that will be featured throughout the book to highlight key concepts and provide practical examples of working with network data. Readers will find this preface to be a valuable resource as they begin their journey into the world of network science.
In this chapter, we begin our dive into the fundamentals of network data. We delve deep into the strange world of networks by considering the friendship paradox, the apparently contradictory finding that most people (nodes) have friends (neighbors) who are more popular than themselves. How can this be? Where are all these friends coming from? We introduce network thinking to resolve this paradox. As we will see, it is due to constraints induced by the network structure: pick a node randomly and you are much more likely to land next to a high-degree node than on a high-degree node, because high-degree nodes have many neighbors. This is unexpected, almost profoundly so; a local (node-level) view of a network will not accurately reflect the global network structure. This paradox highlights the care we need to take when thinking about networks and network data mathematically and practically.
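The paradox is easy to observe numerically. The hedged sketch below compares the average degree with the average degree of a random node's neighbors on a heterogeneous random graph; the Barabási–Albert model and the parameter choices are illustrative, not the chapter's own example.

    # Minimal sketch: the friendship paradox on a heterogeneous random graph.
    import networkx as nx
    import numpy as np

    G = nx.barabasi_albert_graph(10_000, 3, seed=1)
    deg = dict(G.degree())

    mean_degree = np.mean(list(deg.values()))
    # For each node, average its neighbors' degrees, then average over nodes.
    mean_neighbor_degree = np.mean([
        np.mean([deg[v] for v in G[u]]) for u in G if deg[u] > 0
    ])

    print(f"average degree:          {mean_degree:.2f}")
    print(f"average neighbor degree: {mean_neighbor_degree:.2f}")  # noticeably larger

The second number exceeds the first because sampling a neighbor is implicitly degree-biased: high-degree nodes appear in many neighbor lists.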
Network studies follow an explicit form, from framing questions and gathering data, to processing those data and drawing conclusions. And data processing leads to new questions, leading to new data, and so forth. Network studies follow a repeating lifecycle. Yet along the way, many different choices will confront the researcher, who must be mindful of the choices they make with their data and of the tools and techniques they use to study those data. In this chapter, we describe how studies of networks begin and proceed: the lifecycle of a network study.
In this chapter, we introduce visualization techniques for networks, the problems we face, and the solutions we use to make those visualizations as effective as possible. Visualization is an essential tool for exploring network data, revealing patterns that may not be easily inferred from statistics alone. Although network visualization can be done in many ways, the most common approach is through two-dimensional node-link diagrams. Properly laying out nodes and choosing the mapping between network and visual properties is essential to create an effective visualization, which requires iteration and fine-tuning. For dense networks, filtering or aggregating the data may be necessary. Following an iterative, back-and-forth workflow is essential, trying different layout methods and filtering steps to best show the network's structure while keeping the original questions and goals in mind. Visualization is not always the endpoint of a network analysis but can also be a useful step in the middle of an exploratory data analysis pipeline, similar to traditional statistical visualization of non-network data.
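As a minimal illustration of the node-link workflow, the sketch below draws a small classic network with a force-directed layout and maps node degree to node size; the dataset, layout parameters, and output file name are assumptions made for the example and would normally be tuned iteratively.

    # Minimal sketch: a node-link diagram with a force-directed layout.
    import networkx as nx
    import matplotlib.pyplot as plt

    G = nx.les_miserables_graph()
    pos = nx.spring_layout(G, seed=7)          # force-directed (spring) layout

    node_size = [20 * G.degree(n) for n in G]  # map a network property (degree) to a visual one
    nx.draw_networkx_edges(G, pos, alpha=0.2)
    nx.draw_networkx_nodes(G, pos, node_size=node_size)
    plt.axis("off")
    plt.savefig("network_layout.png", dpi=200)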
This chapter covers data provenance, or data lineage: the detailed history of how data were created and manipulated, as well as the process of ensuring the validity of such data by documenting the details of their origins and transformations. Data provenance is a central challenge when working with data. Computing helps but also hinders our ability to maintain records of our work with the data. The best science will result when we adopt strategies to carefully and consistently record and track the origin of data and any changes made along the way. For instance, we want to know where (and by whom) a dataset was created and what process was used to create it. Then, if there were any changes, such as fixing erroneous entries, we need a good record of those changes. With these goals in mind, we discuss best practices for tracking data provenance. While such practices generally take time and effort to implement, making them seem tedious in the short term, over time your research will become more reliable, and you and your collaborators will be grateful.
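One simple habit along these lines, sketched here with only the Python standard library, is to log a checksum, timestamp, and note for each version of a data file so that later changes are detectable and documented. The function name, log format, and file names are hypothetical, not the chapter's prescription.

    # Minimal sketch: append a checksum-and-timestamp record for a data file
    # to a JSON provenance log. All names here are illustrative.
    import datetime
    import hashlib
    import json
    import pathlib

    def record_provenance(data_file, log_file="provenance.json", note=""):
        digest = hashlib.sha256(pathlib.Path(data_file).read_bytes()).hexdigest()
        entry = {
            "file": str(data_file),
            "sha256": digest,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "note": note,  # e.g., "fixed erroneous entries in column 3"
        }
        log = pathlib.Path(log_file)
        records = json.loads(log.read_text()) if log.exists() else []
        records.append(entry)
        log.write_text(json.dumps(records, indent=2))

    # Example (hypothetical file):
    # record_provenance("contacts.csv", note="raw export received from collaborator")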
All fields of science benefit from gathering and analyzing network data. This chapter summarizes a small portion of the ways networks are found across research fields, thanks to increasing volumes of data and the computing resources needed to work with those data. Epidemiology, dynamical systems, materials science, and many more fields than we can discuss here use networks and network data. We'll encounter many more examples during the rest of this book.
While there are cases where it is straightforward and unambiguous to define a network given data, often a researcher must make choices in how they define the network, and those choices, preceding most of the work on analyzing the network, have outsized consequences for that subsequent analysis. Sitting between gathering the data and studying the network is the upstream task: how to define the network from the underlying or original data. Defining the network precedes all subsequent or downstream tasks, tasks we will focus on in later chapters. Often those tasks are the focus of network scientists who take the network as a given and focus their efforts on methods using those data. Envision the upstream task by asking, "What are the nodes?" and "What are the links?", with the network following from those definitions. You will find these questions a useful guiding star as you work, and you can learn new insights by reevaluating their answers from time to time.
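To make the upstream task concrete, here is a small hypothetical sketch: raw records of people attending events are turned into a network by one particular choice of definition (people are nodes; co-attendance is a weighted link). The records and the co-attendance rule are invented for illustration; other, equally defensible definitions would yield different networks.

    # Minimal sketch: defining a network from raw (invented) records.
    from itertools import combinations
    import networkx as nx

    records = [
        ("alice", "seminar"), ("bob", "seminar"),
        ("bob", "workshop"), ("carol", "workshop"), ("dave", "workshop"),
    ]

    # Group attendees by event, then link every pair of co-attendees.
    attendees_by_event = {}
    for person, event in records:
        attendees_by_event.setdefault(event, []).append(person)

    G = nx.Graph()
    for attendees in attendees_by_event.values():
        for u, v in combinations(attendees, 2):
            w = G.get_edge_data(u, v, {"weight": 0})["weight"] + 1
            G.add_edge(u, v, weight=w)  # edge weight counts shared events

    print(list(G.edges(data=True)))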
Networks exhibit many common patterns. What causes them? Why are they present? Are they universal across all networks or only certain kinds of networks? One way to address these questions is with models. In this chapter, we explore in depth the classic mechanistic models of network science. Random graph models underpin much of our understanding of network phenomena, from small-world path lengths to heterogeneous degree distributions and clustering. Mathematical tools help us understand what mechanisms or minimal ingredients may explain such phenomena, from basic heuristic treatments to combinatorial tools such as generating functions.
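A quick numerical illustration of one such phenomenon: for Erdős–Rényi random graphs, the mean shortest-path length grows only logarithmically with the number of nodes, roughly as ln(n)/ln(⟨k⟩). The sketch below checks this heuristic on a few small graphs; the sizes and mean degree are arbitrary choices for the example, not values from the chapter.

    # Minimal sketch: small-world path lengths in Erdos-Renyi random graphs.
    import math
    import networkx as nx

    k = 10  # target mean degree
    for n in (500, 1_000, 2_000):
        G = nx.gnp_random_graph(n, k / n, seed=0)
        # Measure path lengths on the giant connected component.
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        measured = nx.average_shortest_path_length(giant)
        predicted = math.log(n) / math.log(k)   # heuristic estimate
        print(n, round(measured, 2), round(predicted, 2))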
Network science is a broadly interdisciplinary field, pulling from computer science, mathematics, statistics, and more. The data scientist working with networks thus needs a broad base of knowledge, as network data calls for—and is analyzed with—many computational and mathematical tools. One needs a good working knowledge of programming, including data structures and algorithms, to analyze networks effectively. In addition to graph theory, probability theory is the foundation for any statistical modeling and data analysis. Linear algebra provides another foundation for network analysis and modeling because matrices are often the most natural way to represent graphs. Although this book assumes that readers are familiar with the basics of these topics, here we review the computational and mathematical concepts and notation that will be used throughout the book. You can use this chapter as a starting point for catching up on the basics, or as a reference while delving into the book.
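As a tiny example of how these foundations meet, the sketch below represents a four-node graph as an adjacency matrix and recovers basic graph quantities with linear algebra alone; the graph is an arbitrary toy example.

    # Minimal sketch: a graph, its adjacency matrix, and degrees via row sums.
    import networkx as nx
    import numpy as np

    G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3)])
    A = nx.to_numpy_array(G)     # adjacency matrix

    degrees = A.sum(axis=1)      # row sums give node degrees
    n_edges = A.sum() / 2        # each undirected edge is counted twice
    print(A)
    print(degrees, n_edges)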
As we have seen, network data are necessarily imperfect. Missing and spurious nodes and edges can create uncertainty in what the observed data tell us about the original network. In this chapter, we dive deeper into tools that allow us to quantify such effects and probe more deeply into the nature of an unseen network from our observations. The fundamental challenge of measurement error in network data is capturing the error-producing mechanism accurately and then inferring the unseen network from the (imperfectly) observed data. Computational approaches can give us clues and insights, as can mathematical models. Mathematical models can also build up methods of statistical inference, whether in estimating parameters describing a model of the network or estimating the network's structure itself. But such methods quickly become intractable without taking on some possibly severe assumptions, such as edge independence. Yet, even without addressing the full problem of network inference, in this chapter, we show valuable ways to explore features of the unseen network, such as its size, using the available data.
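One way such size estimates can work, sketched here under strong simplifying assumptions (independent, equal-probability observation of nodes), is a capture-recapture calculation: two partial samples of nodes and their overlap yield a Lincoln–Petersen estimate of the total number of nodes. The simulation below is illustrative only and is not the chapter's own method.

    # Minimal sketch: estimating the size of an unseen network by capture-recapture.
    import random

    random.seed(3)
    true_nodes = set(range(5_000))  # nodes of the unseen "true" network

    def observe(p):
        """Each node is seen independently with probability p (a strong assumption)."""
        return {v for v in true_nodes if random.random() < p}

    sample1, sample2 = observe(0.3), observe(0.3)
    overlap = len(sample1 & sample2)
    estimate = len(sample1) * len(sample2) / overlap  # Lincoln-Petersen estimator
    print(f"estimated number of nodes: {estimate:.0f} (true value: {len(true_nodes)})")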