The study of collective behavior in social systems has recently witnessed an increasing number of works relying on computational and agent-based models. These models use deliberately simple schemes for the micro-processes of social influence and focus instead on the emergent macro-level social behavior. Agent-based models for social phenomena are very similar in spirit to the statistical physics approach: the agents update their internal state through interactions with their neighbors, and the emergent macroscopic behavior of the system is the result of a large number of these interactions.
The behavior of all of these models has been extensively studied for agents located on the nodes of regular lattices or possessing the ability to interact homogeneously with each other. But as described in Chapter 2, interactions between individuals and the structure of social systems can generally be represented by complex networks whose topologies exhibit many non-trivial properties, such as the small-world effect, high clustering, and strong heterogeneity of the connectivity pattern. Attention has therefore recently shifted to the study of the effect of more realistic network structures on the dynamical evolution and emergence of social phenomena and organization. In this chapter, we review the results obtained in four prototypical models for social interactions and show the effect of the network topology on the emergence of collective behavior.
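As a concrete illustration of the kind of model discussed here, the sketch below implements the voter model, one of the standard prototypes of opinion dynamics, on a small-world network. The network type, size, and parameters are illustrative assumptions, not choices made in the text; networkx is assumed to be available.

```python
# Minimal sketch of agent-based opinion dynamics on a network: the voter
# model. Each agent holds a binary opinion and, at each step, copies the
# opinion of a randomly chosen neighbor; consensus emerges from many such
# local interactions. (Illustrative parameters, not from the text.)
import random
import networkx as nx

def voter_model(G, steps=100_000, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    # Each agent starts with a random binary opinion.
    state = {node: rng.choice([0, 1]) for node in nodes}
    for _ in range(steps):
        i = rng.choice(nodes)                    # pick a random agent...
        j = rng.choice(list(G.neighbors(i)))     # ...and one of its neighbors
        state[i] = state[j]                      # the agent copies the neighbor
        if len(set(state.values())) == 1:        # global consensus reached
            break
    return state

# Example: a small-world topology, whose structure affects the time
# needed to reach macroscopic consensus.
G = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=1)
final = voter_model(G)
print("fraction holding opinion 1:", sum(final.values()) / len(final))
```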
I am conscious of being only an individual struggling weakly against the stream of time. But it still remains in my power to contribute …
Ludwig Boltzmann
In Chapter 4 we discussed the connection between chaos and equilibrium statistical mechanics, in particular with respect to the ergodic hypothesis. We saw that in systems with many degrees of freedom, chaos (in the sense of at least one positive Lyapunov exponent) is not strictly necessary (nor sufficient) to obtain good agreement between the time average and phase average. This is due, as Boltzmann himself thought and Khinchin proved for an ideal gas, to the fact that in systems with many components, for a large class of observables, the validity of the ergodic hypothesis is basically a consequence of the law of large numbers, and it has a rather weak connection with the underlying dynamics.
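Stated symbolically, in a standard formulation (not quoted from the text), the ergodic hypothesis asserts that for an observable A the time average along a single trajectory equals the phase average:

\[
\lim_{T\to\infty}\frac{1}{T}\int_0^T A(\mathbf{x}(t))\,dt \;=\; \int A(\mathbf{x})\,\rho_{\mathrm{mc}}(\mathbf{x})\,d\mathbf{x},
\]

where \(\rho_{\mathrm{mc}}\) denotes the microcanonical density on the constant-energy surface.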
From a conceptual point of view the ergodic approach (possibly in a “weak” variant, pertaining only to some interesting macroscopic variables) can be seen as a natural way to introduce probabilistic concepts in a deterministic context. In addition, since one deals with a unique system (although with many degrees of freedom), ergodicity is a possible (perhaps the only?) way to found equilibrium statistical mechanics on physical ground, i.e. by exploiting the frequentist interpretation of probability to extract a statistical description from the analysis of a single experimental trajectory. Finally, on the basis of the results in Chapter 4, and not forgetting that thermodynamics, as a physical theory, was developed to describe the properties of single systems made of many microscopic interacting parts, it seems to us quite fair to conclude that the ensemble viewpoint is just a useful mathematical tool.
If we study the history of science we see produced two phenomena which are, so to speak, each the inverse of the other. Sometimes it is simplicity which is hidden under what is apparently complex; sometimes, on the contrary, it is simplicity which is apparent, and which conceals extremely complex realities.
Henri Poincaré
Devoting only a short chapter to the renormalization group (RG) is a challenge owing to the wealth of both fundamental concepts and computational tools associated with this term. The RG is indeed encountered in many different domains of theoretical physics, ranging from quantum electrodynamics to second-order phase transitions to fractal growth and diffusion processes – one should rather speak of renormalization groupS! We refer to Brown (1993) and Fisher (1998) for a historical account, and to Goldenfeld (1992), Lesne (1998), and references therein for an overview of the domains of application and variants of the RG.
We here emphasize that the RG is a way, perhaps the most successful and without doubt the most systematic and constructive, to derive effective low-dimensional descriptions that capture large-scale and/or long-time behavior. The RG can be extended far beyond the specific scope of critical phenomena, to an iterated multiscale approach allowing the construction of robust and minimal macroscopic models describing the universal large-scale features and asymptotics of a complex system. This generalized viewpoint brings out the close logical and even technical connections that bridge, within a unified framework, perturbative RG for singular series expansions, spin-block RG and momentum-shell RG for critical phenomena, RG for the asymptotic analysis of differential and partial differential equations, and probabilistic RG for the derivation of statistical laws and limit theorems.
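As a small concrete instance of real-space RG (an illustrative computation, not one carried out in the text), decimating every other spin of the zero-field one-dimensional Ising chain yields the exact recursion tanh K′ = tanh²K for the reduced coupling K = J/k_BT; iterating it exhibits the flow toward the trivial fixed point K* = 0, i.e. the absence of a finite-temperature transition in one dimension.

```python
# Sketch of a real-space (decimation) RG step for the zero-field 1D Ising
# model: tracing out every second spin renormalizes the coupling K = J/kT
# according to tanh(K') = tanh(K)^2. (Illustrative example.)
import math

def decimate(K):
    """One RG step: eliminate every second spin of the 1D Ising chain."""
    return math.atanh(math.tanh(K) ** 2)

K = 2.0  # a strong initial coupling (low temperature)
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)   # the flow heads to the trivial fixed point K* = 0
```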
Since in the differential equations of mechanics themselves there is absolutely nothing analogous to the Second Law of thermodynamics, the latter can be mechanically represented only by means of assumptions regarding initial conditions.
Ludwig Boltzmann
By irreversibility here we mean a plainly evident fact, part of everyone's everyday experience: there are many phenomena whose reverse evolution we never see, and never expect to see. For instance, if we put a hot coffee on the table, after a while it gets colder, transferring some heat to the environment. If the coffee, now at room temperature, has become too cold and we want to warm it up, we put it in the microwave oven. This is because we can be sure that the required heat will not flow into the coffee from the surroundings, even though the reverse process has just taken place, as it always does. This certainty allows us to judge time ordering: given the two states of the coffee on the table, hot and cold, we know that, without external intervention, the hot state cannot come after the cold state; it always comes before. As a consequence, once the cooling has occurred, we say that something (spontaneously) irreversible has happened. So, irreversibility is the asymmetric time evolution of certain macroscopic systems. The theoretical frame accommodating this kind of irreversibility is thermodynamics, where the Second Principle dictates the prohibitions.
The problem
As a fact within the coherent theoretical description of thermodynamics, irreversibility is not a problem.
Everything should be made as simple as possible, but not simpler.
Albert Einstein
Deterministic systems
Since the Pythagorean attempts to explain the tangible world by means of numerical quantities related to integer numbers, western culture has been characterized by the idea that Nature can be described by mathematics. This idea comes from the explicit or hidden assumption that the world obeys some precise rules. It may appear obvious today, but the systematic application of mathematics to the study of natural phenomena dates from the seventeenth century, when Galileo inaugurated modern physics with the publication of his major work Discorsi e Dimostrazioni Matematiche Intorno a Due Nuove Scienze (Discourses and Mathematical Demonstrations Concerning Two New Sciences) in 1638. The fundamental step toward the mathematical formalization of reality was taken by Newton and his mechanics, explained in Philosophiae Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy), often referred to as the Principia, published in 1687. This was a very important date not only for the philosophy of physics but also for all the other sciences; this great work can be considered to represent the high point of the scientific revolution, in which science as we know it today was born. From the publication of the Principia to the twentieth century, for a large community of scientists the main goal of physics was the reduction of natural phenomena to mechanical laws. A natural phenomenon was considered really understood only when it was explained in terms of mechanical movements.
The meaning of the world is the separation of wish and fact.
Kurt Gödel
In the previous chapter we saw that in deterministic dynamical systems there exist well-established ways to define and measure the complexity of a temporal evolution, in terms of either the Lyapunov exponents or the Kolmogorov–Sinai entropy. This approach is rather successful in deterministic low-dimensional systems. On the other hand, in high-dimensional systems, as well as in low-dimensional cases without a unique characteristic time (as in the example discussed in Section 2.3.3), some interesting features cannot be captured by the Lyapunov exponents or the Kolmogorov–Sinai entropy. In this chapter we will see how an analysis in terms of the finite-size Lyapunov exponent (FSLE) and the ε-entropy, defined in Chapter 2, allows the characterization of non-trivial systems in situations far from asymptotic (i.e. finite time and finite observational resolution). In particular, we will discuss the utility of the ε-entropy and the FSLE for a pragmatic classification of signals, and the use of chaotic systems in the generation of sequences of (pseudo) random numbers. In addition, we will discuss systems containing some randomness.
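For reference, the FSLE admits a standard definition (consistent with the one given in Chapter 2, though not quoted from it): if \(T_r(\delta)\) is the time a perturbation of size \(\delta\) takes to grow by a factor \(r\), then

\[
\lambda(\delta) \;=\; \frac{1}{\langle T_r(\delta)\rangle}\,\ln r ,
\]

which recovers the largest Lyapunov exponent \(\lambda_1\) in the limit \(\delta \to 0\).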
Characterization of complexity and system modeling
Typically in experimental investigations, time records of only a few observables are available, and the equations of motion are not known. From a conceptual point of view, this case can be treated in the same framework that is used when the evolution laws are known. Indeed, in principle, with the embedding technique one can reconstruct the topological features of the phase space and of the dynamics (Takens 1981, Abarbanel et al. 1993, Kantz and Schreiber 1997).
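As an illustration of the embedding technique just mentioned, the sketch below builds time-delay vectors from a single scalar record. The delay, the embedding dimension, and the synthetic signal are illustrative assumptions; in practice both parameters must be estimated from the data.

```python
# Sketch of time-delay embedding (Takens 1981): from a scalar record x(t)
# one builds vectors in an m-dimensional reconstructed phase space.
import numpy as np

def delay_embed(x, dim=3, delay=5):
    """Return the matrix of delay vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

# Example with a noisy oscillation standing in for an experimental record.
t = np.linspace(0, 50, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
vectors = delay_embed(x, dim=3, delay=20)
print(vectors.shape)  # (number of points, embedding dimension)
```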
To develop the skill of correct thinking is in the first place to learn what you have to disregard. In order to go on, you have to know what to leave out: this is the essence of effective thinking.
Kurt Gödel
Almost all the interesting dynamic problems in science and engineering are characterized by the presence of more than one significant scale, i.e. there is a variety of degrees of freedom with very different time scales. Among numerous important examples we can mention protein folding and climate. While the time scale of vibration of covalent bonds is O(10⁻¹⁵ s), the folding time for proteins may be of the order of seconds. Also in the case of climate, the characteristic times of the involved processes vary from days (for the atmosphere) to O(10³ yr) (for the deep ocean and ice shields). In such a situation one says that the system has a multiscale character (E and Engquist 2003).
The necessity of treating the “slow dynamics” in terms of effective equations is both practical (even modern supercomputers are not able to simulate all the relevant scales involved in certain difficult problems) and conceptual: effective equations are able to catch some general features and to reveal dominant ingredients which can remain hidden in the detailed description. The study of multiscale problems has a long history in science (in particular in mathematics): an early important example is the averaging method in mechanics (Arnold 1976).
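A toy illustration of the averaging idea (not an example from the text): a slow variable driven by a fast oscillation is well approximated by the effective equation in which the fast term is replaced by its mean. All parameters below are illustrative.

```python
# Toy illustration of the averaging method: the slow variable x is driven by
# a fast oscillation sin^2(omega t); the effective (averaged) equation
# replaces sin^2 by its mean value 1/2.
import math

eps, omega = 0.05, 50.0        # slow drift rate, fast frequency
dt, T = 1e-4, 40.0
x_full = x_avg = 1.0
for k in range(int(T / dt)):
    t = k * dt
    x_full += dt * eps * (-x_full + math.sin(omega * t) ** 2)  # full dynamics
    x_avg  += dt * eps * (-x_avg + 0.5)                        # averaged dynamics
print(f"full: {x_full:.4f}   averaged: {x_avg:.4f}")           # nearly equal
```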
At any time there is only a thin layer separating what is trivial from what is impossibly difficult. It is in that layer that discoveries are made …
Andrei N. Kolmogorov
An important aspect of the theory of dynamical systems is the formalization and quantitative characterization of the sensitivity to initial conditions. The Lyapunov exponents {λi} are the indicators used to measure the average rate of exponential error growth in a system.
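As a concrete illustration (not an example from the text), the Lyapunov exponent of a one-dimensional map can be estimated as the orbit average of ln|f′(x)|; a minimal sketch for the logistic map, where at r = 4 the exact value is ln 2:

```python
# Sketch: numerical estimate of the Lyapunov exponent of the logistic map
# x -> r x (1 - x) as the orbit average of ln|f'(x)|, with f'(x) = r(1 - 2x).
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, transient=1000):
    x = x0
    for _ in range(transient):                 # discard an initial transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))    # local stretching rate ln|f'(x)|
        x = r * x * (1 - x)
    return s / n

print(lyapunov_logistic())  # close to ln 2 ≈ 0.6931
```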
Starting from Kolmogorov's idea of characterizing dynamical systems by means of entropy-like quantities, following Shannon's work in information theory, another approach to dynamical systems has been developed in the context of information theory, data compression, and algorithmic complexity theory. In particular, the Kolmogorov–Sinai entropy, h_KS, can be defined and interpreted as a measure of the rate of information production of a system. Since the ability to produce information is tightly linked to the exponential diversification of trajectories, it is no surprise that a relation, the Pesin relation, exists between h_KS and {λi}.
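Under suitable conditions (for instance, for systems possessing an SRB measure) the Pesin relation takes the standard form (not quoted from the text):

\[
h_{\mathrm{KS}} \;=\; \sum_{i:\,\lambda_i > 0} \lambda_i .
\]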
One has to note that quantities such as {λi} and h_KS are properly defined only in specific asymptotic limits, that is, very long times and arbitrary accuracy. Since in realistic situations one has to deal with finite accuracy and finite time – as Keynes said, in the long run we shall all be dead – it is important to take these limitations into account. Relaxing the requirement of infinite time, one can investigate the relevance of finite-time fluctuations of the “effective” Lyapunov exponent.
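To illustrate such finite-time fluctuations (again using the logistic map as an illustrative stand-in), one can chop a long orbit into windows of N steps and look at the spread of the window averages:

```python
# Sketch: distribution of finite-time ("effective") Lyapunov exponents of
# the logistic map at r = 4, computed over windows of a fixed length.
import math
import statistics

def finite_time_lyapunovs(r=4.0, x0=0.3, window=50, samples=2000):
    x = x0
    for _ in range(1000):                        # discard an initial transient
        x = r * x * (1 - x)
    lams = []
    for _ in range(samples):
        s = 0.0
        for _ in range(window):
            s += math.log(abs(r * (1 - 2 * x)))  # local stretching rate
            x = r * x * (1 - x)
        lams.append(s / window)                  # effective exponent, one window
    return lams

lams = finite_time_lyapunovs()
print(f"mean = {statistics.mean(lams):.4f}  stdev = {statistics.stdev(lams):.4f}")
```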
Statistical mechanics was founded during the nineteenth century by the seminal work of Maxwell, Boltzmann, and Gibbs, with the main aim of explaining the properties of macroscopic systems from the atomistic point of view. Accordingly, from the very beginning, starting from Boltzmann's ergodic hypothesis, a basic question was the connection between dynamics and statistical properties. This is a rather difficult task and, in spite of the mathematical progress due to Birkhoff and von Neumann, ergodic theory had basically a marginal relevance in the development of statistical mechanics (at least in the physics community). This was partially due to a misinterpretation of a result of Fermi and to a widely spread opinion (based also on the belief of influential scientists such as Landau) about the key role of the many degrees of freedom and the practical irrelevance of ergodicity. This point of view found mathematical support in some results by Khinchin, who was able to show that, in systems with a huge number of particles, statistical mechanics works (independently of ergodicity) just because, on the constant-energy surface, the most meaningful physical observables are nearly constant, apart from regions of very small measure.
On the other hand, the discovery of deterministic chaos (from the anticipating work of Poincaré to the contributions, in the second half of the twentieth century, of Chirikov, Hénon, Lorenz, and Ruelle, to cite just the most famous), beyond its undoubted relevance for many natural phenomena, showed that the typical statistical features observed in systems with many degrees of freedom can also be generated by deterministic chaos in simple systems.
To know that you know when you do know, and know that you do not know when you do not know: that is knowledge.
Confucius
Statistical mechanics was founded by Maxwell, Boltzmann and Gibbs to account for the properties of macroscopic bodies, systems with a very large number of particles, without very precise requirements on the dynamics (except for the assumption of ergodicity).
Since the discovery of deterministic chaos it is now well established that statistical approaches may be useful, and even unavoidable, also in systems with few degrees of freedom, as discussed in Chapter 1. However, even after many years there is no general agreement among the experts about the fundamental ingredients for the validity of statistical mechanics.
It is quite impossible in a few pages to describe the wide spectrum of positions ranging from the belief of Landau and Khinchin in the main role of the many degrees of freedom and the (almost) complete irrelevance of dynamical properties, in particular ergodicity, to the opinion of those, for example Prigogine and his school, who consider chaos as the basic ingredient.
For almost all practical purposes one can say that the whole subject of statistical mechanics consists in the evaluation of a few suitable quantities (for example, the partition function, free energy, correlation functions). The ergodic problem is often forgotten and the (so-called) Gibbs approach is accepted because “it works.” Such a point of view cannot be satisfactory, at least if one believes that understanding the foundations of such a complex issue is no less important than calculating useful quantities.
Since 1990, when the first edition appeared, there have been significant advances in the study of nonequilibrium systems. The centerpiece of the first edition was nonequilibrium molecular-dynamics methods and their theoretical analysis, the connections between linear and nonlinear response theory, and the design of the simulation methods. This is now a mature field, with only one significant addition: the new method for elongational flows.
Chapter 10 in the first edition was called “Towards a thermodynamics of steady states.” This contained an introduction to deterministic chaotic systems. The second edition has the same title for Chapter 10, but the contents are now completely different. The application of the ideas of modern dynamical-systems theory to nonequilibrium systems has grown enormously with all of Chapter 8 devoted to this. However, this still constitutes the barest of introductions with whole books (Gaspard, 1998; Dorfman, 1999; Ott, 2002; and Sprott, 2003) devoted to this theme. The theoretical advances in this area are some of the biggest. The development of methods to study the time evolution using periodic orbits, and the use of periodic orbits to develop SRB measures for nonequilibrium systems are exciting steps forward.
Building on dynamical properties, Lyapunov exponents in particular, great strides have been made in the study of fluctuations in nonequilibrium systems.
Linear response theory can be used to design computer simulation algorithms for the calculation of transport coefficients. There are two types of transport coefficients: mechanical and thermal, and we will show how thermal transport coefficients can be calculated using mechanical methods.
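As a minimal illustration of this route (an illustrative sketch, not the book's own algorithm), a transport coefficient can be obtained from a Green-Kubo formula, i.e. the time integral of an equilibrium correlation function. Here the self-diffusion coefficient is extracted from a synthetic velocity record whose exact answer is known; in a real application the velocities would come from a molecular-dynamics simulation.

```python
# Illustrative Green-Kubo calculation: self-diffusion coefficient as the
# time integral of the velocity autocorrelation function. The "trajectory"
# is an Ornstein-Uhlenbeck velocity process, chosen because the exact
# answer D = sigma^2 / (2 gamma^2) is known.
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma = 1.0, 1.0
dt, n = 0.01, 200_000
v = np.empty(n)
v[0] = 0.0
kicks = sigma * np.sqrt(dt) * rng.normal(size=n - 1)
for k in range(n - 1):
    v[k + 1] = v[k] - gamma * v[k] * dt + kicks[k]

max_lag = 1000  # correlations decay as exp(-gamma t); 10/gamma is ample
vacf = np.array([np.mean(v[: n - lag] * v[lag:]) for lag in range(max_lag)])
D = vacf.sum() * dt  # rectangle-rule estimate of the time integral
print(f"Green-Kubo estimate: D = {D:.3f} (exact: {sigma**2 / (2 * gamma**2):.3f})")
```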
In Nature, nonequilibrium systems may respond essentially adiabatically or, depending upon circumstances, approximately isothermally: the quasi-isothermal response. No natural system can be precisely adiabatic or isothermal. There will always be some transfer of the dissipative heat produced in nonequilibrium systems towards thermal boundaries. This heat may be radiated, convected, or conducted to the boundary reservoir. Provided this heat transfer is slow on a microscopic timescale, and provided that the temperature gradients implicit in the transfer process lead to negligible temperature differences on a microscopic length scale, we call the system quasi-isothermal. We assume that quasi-isothermal systems can be modelled microscopically in computer simulations as isothermal systems.
In view of the robustness of the susceptibilities and equilibrium time-correlation functions to various thermostatting procedures (see Sections 5.2 and 5.4), we expect that quasi-isothermal systems may be modelled using Gaussian or Nosé-Hoover thermostats or ergostats. Furthermore, since heating effects are quadratic functions of the thermodynamic forces, the linear response of nonequilibrium systems can always be calculated by analyzing the adiabatic, isothermal, or isoenergetic response.
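For concreteness, here is a minimal sketch of one of the thermostatting schemes just mentioned: a Nosé-Hoover thermostat coupled to a single harmonic oscillator. All parameters and the simple Euler integrator are illustrative assumptions, and this single-oscillator toy is known not to be fully ergodic, so the time average only approximately matches the target temperature.

```python
# Minimal Nosé-Hoover thermostat on one harmonic oscillator (unit mass and
# force constant; target kT = 1). The friction xi is a dynamical variable
# fed back by the deviation of the kinetic energy from its target.
# CAVEAT: the single NH oscillator is not fully ergodic, so the time
# average of p^2 only roughly approaches kT.
Q, kT = 1.0, 1.0               # thermostat "mass" and target temperature
dt, steps = 1e-3, 500_000
x, p, xi = 1.0, 0.0, 0.0
acc = 0.0
for _ in range(steps):
    x += dt * p
    p += dt * (-x - xi * p)    # harmonic force plus thermostat friction
    xi += dt * (p * p - kT) / Q  # feedback on the kinetic energy
    acc += p * p
print("time average of p^2:", acc / steps, "(target:", kT, ")")
```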
In nonequilibrium statistical mechanics we seek to model transport processes beginning with an understanding of the motion and interactions of individual atoms or molecules. The laws of classical mechanics govern the motion of atoms and molecules, so in this chapter we begin with a brief description of the mechanics of Newton, Lagrange, and Hamilton. It is often useful to be able to treat constrained mechanical systems. We will use a principle due to Gauss to treat many different types of constraint, from simple bond-length constraints to constraints on kinetic energy. As we shall see, kinetic-energy constraints are useful for constructing various constant-temperature ensembles. We will then discuss the Liouville equation and its formal solution. This equation is the central vehicle of nonequilibrium statistical mechanics. We will then need to establish the link between the microscopic dynamics of individual atoms and molecules and the macroscopic hydrodynamic description discussed in the last chapter. We will discuss two procedures for making this connection. The Irving and Kirkwood procedure relates hydrodynamic variables to nonequilibrium ensemble averages of microscopic quantities. A more direct procedure, which we will describe, derives instantaneous expressions for the hydrodynamic field variables.
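For Hamiltonian dynamics, where the phase-space flow is incompressible, the Liouville equation for the distribution function \(f(\Gamma,t)\) and its formal solution can be written in the standard form (consistent with, but not quoted from, the chapter):

\[
\frac{\partial f(\Gamma,t)}{\partial t} = -iL\,f(\Gamma,t),
\qquad iL \equiv \dot{\Gamma}\cdot\frac{\partial}{\partial \Gamma},
\qquad f(\Gamma,t) = e^{-iLt}\,f(\Gamma,0).
\]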
Newtonian mechanics
Classical mechanics (Goldstein, 1980) is based on Newton's three laws of motion.