Spin glasses are statistical mechanics systems with random interactions. The alternating sign of those interactions generates a complex physical behavior whose mathematical structure is still largely uncovered. The approach we follow in this book is that of mathematical physics, aiming at the rigorous derivation of their properties with the help of physical insight.
The book starts with the theoretical physics origins of the spin glass problem. The main models are introduced, and the replica approach is illustrated for the Sherrington–Kirkpatrick model.
Chapters 2 and 3 contain the starting points of the mathematically rigorous approach leading to the control of the thermodynamic limit for spin glass systems. Correlation inequalities are introduced and proved in various settings, including the Nishimori line. They are then used to prove the existence of the large-volume limit in both short-range and mean-field models.
Chapter 4 deals with exact results which belong to the mean-field case. The methods and techniques illustrated span from the Ruelle probability cascades to the Aizenman–Sims–Starr variational principle. In this framework the Guerra upper bound theorem for the pressure is presented and the Talagrand theorem is reported.
Chapter 5 deals with the structural identities characterizing the spin glass phase. These are obtained by an extension of the stochastic stability method, i.e. an invariance property of the system under small perturbations, together with the self-averaging property.
Chapter 6 features some problems which are still out of analytical reach and are investigated with numerical methods: the equivalence among different overlap structures, the hierarchical organization of the states, the decay of correlations, and the energy interface cost.
The availability of large data sets has allowed researchers to uncover complex properties such as large-scale fluctuations and heterogeneities in many networks, leading to the breakdown of standard theoretical frameworks and models. Until recently these systems were considered as haphazard sets of points and connections. Recent advances have generated a vigorous research effort in understanding the effect of complex connectivity patterns on dynamical phenomena. This book presents a comprehensive account of these effects. A vast number of systems, from the brain to ecosystems, power grids and the internet, can be represented as large complex networks. This book will interest graduate students and researchers in many disciplines, from physics and statistical mechanics to mathematical biology and information science. Its modular approach allows readers to readily access the sections of most interest to them, and complicated maths is avoided so the text can be easily followed by non-experts in the subject.
Network science is the key to managing social communities, designing the structure of efficient organizations and planning for sustainable development. This book applies network science to contemporary social policy problems. In the first part, tools of diffusion and team design are deployed to challenges in adoption of ideas and the management of creativity. Ideas, unlike information, are generated and adopted in networks of personal ties. Chapters in the second part tackle problems of power and malfeasance in political and business organizations, where mechanisms in accessing and controlling informal networks often outweigh formal processes. The third part uses ideas from biology and physics to understand global economic and financial crises, ecological depletion and challenges to energy security. Ideal for researchers and policy makers involved in social network analysis, business strategy and economic policy, it deals with issues ranging from what makes public advisories effective to how networks influence excessive executive compensation.
Giving a detailed overview of the subject, this book takes in the results and methods that have arisen since the term 'self-organised criticality' was coined twenty years ago. Providing an overview of numerical and analytical methods, from their theoretical foundation to the actual application and implementation, the book is an easy access point to important results and sophisticated methods. Starting with the famous Bak–Tang–Wiesenfeld sandpile, ten key models are carefully defined, together with their results and applications. Comprehensive tables of numerical results are collected in one volume for the first time, making the information readily accessible to readers. Written for graduate students and practising researchers in a range of disciplines, from physics and mathematics to biology, sociology, finance, medicine and engineering, the book gives a practical, hands-on approach throughout. Methods and results are applied in ways that will relate to the reader's own research.
A number of experiments and observations have been undertaken to test for SOC in the ‘real world’. Ultimately, these observations motivate the analytical and numerical research, although the latter provides the clearest evidence for SOC, whereas experimental evidence is comparatively ambiguous. What evidence suffices to call a system self-organised critical? One might be inclined to say scale invariance without tuning, but as discussed in Sec. 9.4, the class of such systems might be too large and comprise phenomena that traditionally are regarded as distinct from criticality, such as diffusion.
In most cases, systems suspected to be self-organised critical display a form of scaling and a form of avalanching, suggesting a separation of time scales. Because of the early link to 1/f noise (Sec. 1.3.2), some publications regarded this as sufficient evidence for SOC. At the other end of the spectrum are systems that closely resemble those that are studied numerically and whose scaling behaviour is not too far from that observed in numerical studies. Yet it remains debatable whether any numerical model is a faithful representation of any experiment, or at least incorporates the relevant interactions.
At first sight, solid experimental evidence for scaling or even universality is sparse among the many publications that suggest links to SOC. This result is even more sobering as the published evidence for SOC is heavily biased towards positive findings – there are very few publications (e.g. Jaeger, Liu, and Nagel, 1989; Kirchner and Weil, 1998) on failed attempts to identify SOC where it was suspected.
When Bak, Tang, and Wiesenfeld (1987) coined the term Self-Organised Criticality (SOC), it was an explanation for an unexpected observation of scale invariance and, at the same time, a programme of further research. Over the years it developed into a subject area which is concerned mostly with the analysis of computer models that display a form of generic scale invariance. The primacy of the computer model is manifest in the first publication and throughout the history of SOC, which evolved with and revolved around such computer models. That has led to a plethora of computer ‘models’, many of which are not intended to model much except themselves (also Gisiger, 2001), in the hope that they display a certain aspect of SOC in a particularly clear way.
The question whether SOC exists is empty if SOC is merely the title for a certain class of computer models. In the following, the term SOC will therefore be used in its original meaning (Bak et al., 1987), to be assigned to systems
with spatial degrees of freedom [which] naturally evolve into a self-organized critical point.
Such behaviour is to be juxtaposed to the traditional notion of a phase transition, which is the singular, critical point in a phase diagram, where a system experiences a breakdown of symmetry and long-range spatial and, in non-equilibrium, also temporal correlations, generally summarised as (power law) scaling (Widom, 1965a,b; Stanley, 1971).
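For orientation, the homogeneity underlying such scaling can be summarised, in Widom's form, by the behaviour of the singular part of the free energy density under rescaling by a factor b; quoted here only schematically, as a standard result of equilibrium critical phenomena,

\[ f_s(t, h) = b^{-d}\, f_s\!\left(b^{y_t} t,\; b^{y_h} h\right), \]

from which the familiar power laws follow by choosing b such that one of the arguments is of order unity.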
In this chapter, some important analytical techniques and results are discussed. The first two sections are concerned with mean-field theory, which is routinely applied in SOC, and renormalisation, which has had a number of celebrated successes in SOC. As discussed in Sec. 8.3, Dhar (1990a) famously translated the set of rules governing an SOC model into operators, which provides a completely different, namely algebraic, perspective. Directed models, discussed in Sec. 8.4, offer a rich basis of exactly solvable models for the analytical methods discussed in this chapter. In the final section, Sec. 8.5, SOC is translated into the language of the theory of interfaces.
It is interesting to review the variety of theoretical languages that SOC models have been cast in. Mean-field theories express SOC models (almost) at the level of the updating rules and thus more or less explicitly in terms of a master equation. The same applies to some of the renormalisation group procedures (Vespignani, Zapperi, and Loreto, 1997), although Díaz-Guilera (1992) suggested very early on an equation of motion for the local particle density in the form of a Langevin equation. The language of interfaces overlaps with this perspective in the case of Hwa and Kardar's (1989a) surface evolution equations, whereas the absorbing state (AS) approach as well as depinning use a similar formalism but a different physical interpretation – what evolves in the former case is the configuration of the system, while it is the number of charges in the latter.
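Schematically, and leaving aside the precise form of the noise correlator, Hwa and Kardar's equation of motion for the local height h is an anisotropic Langevin equation of the type

\[ \partial_t h = \nu_{\parallel}\, \partial_{\parallel}^2 h + \nu_{\perp}\, \nabla_{\perp}^2 h - \frac{\lambda}{2}\, \partial_{\parallel} h^2 + \eta(\mathbf{x}, t), \]

where the nonlinearity acts along the transport (driving) direction only; this sketch is meant to indicate the type of formalism, not to reproduce their analysis.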
Self-organised criticality (SOC) is a very lively field that in recent years has branched out into many different areas and contributed immensely to the understanding of critical phenomena in nature. Since its discovery in 1987, it has been one of the most active and influential fields in statistical mechanics. It has found innumerable applications in a large variety of fields, such as physics, chemistry, medicine, sociology and linguistics, to name but a few. A lot of progress has been made over the last 20 years in understanding the phenomenology of SOC and its causes. During this time, many of the original concepts have been revised a number of times, and some, such as complexity and emergence, are still very actively discussed. Nevertheless, some if not most of the original questions remain unanswered. Is SOC ubiquitous? How does it work?
As the field matured and reached a widening audience, the demand for a summary or a commented review grew. When Professor Henrik J. Jensen asked me to write an updated version of his book on self-organised criticality six years ago, it struck me as a great honour, but an equally great challenge. His book is widely regarded as a wonderfully concise, well-written introduction to the field. More than 24 years after its conception, self-organised criticality is in a process of consolidation, which an up-to-date review has to appreciate just as much as the many new results discovered and the new directions explored.
In his review of SOC, Jensen (1998) asked four central questions paraphrased here.
Can SOC be defined as a distinct phenomenon?
Are there systems that display SOC?
What has SOC taught us?
Does SOC have any predictive power?
As discussed in the following, the answers are positive throughout, but slightly different from what was expected ten years ago, when the general consensus was that the failure of SOC experiments and computer models to display the expected features was merely a matter of improving the setup or increasing the system size. Firstly, this is not true: larger and purer systems have, in many cases, not improved the behaviour. Secondly, truly universal behaviour is not expected to be so sensitive to tiny impurities or to display such dramatic finite size corrections. If the conclusion is that this is what generally happens in systems studied in SOC over the last twenty years, critical phenomena may not be the most suitable framework to describe them.
Can SOC be defined as a distinct phenomenon?
In the preceding chapters, SOC was regarded as the observation that some systems with spatial degrees of freedom evolve, by a form of self-organisation, to a critical point, where they display intermittent behaviour (avalanching) and (finite size) scaling as known from ordinary phase transitions (Bak et al., 1987, also Ch. 1). This definition makes it clearly distinct from other phenomena, although generic scale invariance has been observed elsewhere.
How does SOC work? What are the necessary and sufficient conditions for the occurrence of SOC? Can the mechanism underlying SOC be put to work in traditional critical phenomena? These questions are at the heart of the study of SOC phenomena. The hope is that an SOC mechanism would not only give insight into the nature of the critical state in SOC and its long-range, long-time correlations, but also provide a procedure to prompt this state in other systems. In the following, SOC is first placed in the context of ordinary critical phenomena, focusing on the extent to which SOC was preceded by phenomena with very similar features. The theories of these phenomena can give further insight into the nature of SOC. In the remainder, the two most successful mechanisms are presented, the second of which, the Absorbing State Mechanism (AS mechanism), is the most recent and most promising development. A few other mechanisms are discussed briefly in the last section.
SOC mechanisms generally fall into one of three categories. Firstly, there are those that show that SOC is an instance of generic scale invariance, by showing that SOC models cannot avoid being scale invariant because of their characteristics, such as bulk conservation and particle transport. The mechanism developed by Hwa and Kardar (1989a), Sec. 9.2, is the most prominent example of this type of explanation. This approach focuses solely on criticality and dismisses any self-organisation.
Most computational physicists try to strike a balance between a number of conflicting objectives. Ideally, a model is quickly implemented, easy to maintain, readily extensible, fast, and demands very little memory. A few general rules can help to get closer to that ideal. Well-written code that uses proper indentation, comments and symmetries (see for example PUSH and POP below) helps to avoid bugs and improves maintainability. How much tweaking and tuning can be done without spoiling readability and maintainability of the code is a matter of taste and experience. Sometimes an obfuscated implementation of an obscure algorithm makes all the difference. Yet, many optimisations have apparent limits, where any reduction of interdependence and any improvement of data capture is compensated by an equal increase in computational complexity and thus runtime. Often a radical rethink is necessary to overcome such an ostensible limit of maximum information per CPU time, as exemplified by the Swendsen–Wang algorithm (Swendsen and Wang, 1987) for the Ising Model, which represents a paradigmatic change from the classic Metropolis algorithm (Metropolis, Rosenbluth, Rosenbluth, et al., 1953).
Nevertheless, one should not underestimate the amount of real time as well as CPU time that can be saved by opting for a slightly less powerful code in favour of one that is more stable and correct from the start. On the same account, it usually pays to follow up even a little hunch that something is not working correctly.
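To illustrate the symmetry mentioned above, the following is a minimal sketch in C of the kind of stack macros used to hold active sites during an avalanche; all names (STACK_SIZE, stack, stack_height) are purely illustrative and not taken from any particular implementation. PUSH and POP are written as mirror images of each other, so that imbalances stand out:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stack of active sites; size and names are placeholders. */
#define STACK_SIZE (1 << 20)
static int stack[STACK_SIZE];
static int stack_height = 0;

/* PUSH and POP deliberately mirror each other. */
#define PUSH(site) \
do { \
    if (stack_height >= STACK_SIZE) { fprintf(stderr, "stack overflow\n"); exit(EXIT_FAILURE); } \
    stack[stack_height++] = (site); \
} while (0)

#define POP(site) \
do { \
    if (stack_height <= 0) { fprintf(stderr, "stack underflow\n"); exit(EXIT_FAILURE); } \
    (site) = stack[--stack_height]; \
} while (0)

A typical use during relaxation is PUSH(site) for every site that becomes active and POP(site) to fetch the next site to be updated, until stack_height vanishes and the avalanche has terminated.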
In broad terms, the aim of the analysis of a supposed self-organised critical system is to determine whether the phenomenon is merely the sum of independent local events, or is caused by interactions on a global scale, i.e. cooperation, which is signalled by algebraic correlations and non-Gaussian event distributions. Self-organised criticality therefore revolves around scaling and scale invariance, as it describes the asymptotic behaviour of large, complex systems and hints at their universality (Kadanoff, 1990). Numerical and analytical work generally concentrates on the scaling features of a model. Understanding their origin and consequences is fundamental to the analysis as well as to the interpretation of SOC models, beginning at the very motivation of a particular model and permeating down to the level of the presentation of data.
During the last fifteen years or so, the understanding of scaling in complex systems has greatly improved and some standard numerical techniques have been established, which allow the comparison of different models, assumptions and approaches. Yet, there is still noticeable confusion regarding the implications of scaling as well as its quantification.
Most concepts, such as universality and generalised homogeneous functions, are taken from or are motivated by the equilibrium statistical mechanics of phase transitions (Stanley, 1971; Privman et al., 1991), and were first applied to SOC in a systematic manner by Kadanoff et al. (1989). Yet, what appears to be rather natural in the context of equilibrium statistical mechanics, might not be so for complex systems.
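In practice, ‘scaling’ usually refers to a simple (finite size) scaling ansatz; for the avalanche size distribution a common form, written here in the standard convention with non-universal metric factors a and b, is

\[ P(s) = a\, s^{-\tau}\, \mathcal{G}\!\left(s / s_c\right), \qquad s_c = b\, L^{D}, \]

valid above a lower cutoff, with universal scaling function \(\mathcal{G}\), avalanche size exponent \(\tau\) and avalanche dimension \(D\).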
This chapter describes a number of (numerical) techniques used to estimate primarily universal quantities, such as exponents, moment ratios and scaling functions. The methods are applied during post-processing, i.e. after a numerical simulation, such as the OFC Model, Appendix A, has terminated. Many methods are linked directly to the scaling arguments presented in Ch. 2, i.e. they either probe for the presence of scaling or derive properties assuming scaling.
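One widely used example of deriving properties assuming scaling is moment analysis: given the ansatz above, the moments of the avalanche size obey

\[ \langle s^n \rangle \propto L^{D(1 + n - \tau)} \qquad \text{for } n > \tau - 1, \]

so that fitting the L dependence of a few low moments yields estimates of D and \(\tau\).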
A time series is the most useful representation of the result of a numerical simulation, because it gives insight into the temporal evolution of the model and provides a natural way to determine the variance of the various observables reliably. Most of the analysis focuses on the stationary state of the model, where the statistics of one instance of the model with one particular initial state is virtually indistinguishable from that with another initial state. The end of the transient can be determined by comparing two or more independent runs, or by comparing one run to exactly known results (such as the average avalanche size) or to results at much later times. The transient can be regarded as past as soon as the observables agree within one standard deviation. It pays to be generous with the transient, in particular when higher moments or complex observables are considered.
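As a sketch of how such comparisons can be quantified (the function and variable names here are illustrative, not from any particular code base), an observable and its standard error can be estimated by splitting the stationary part of a time series into chunks:

#include <math.h>
#include <stddef.h>

/* Estimate the mean of a time series and its standard error by
 * averaging over consecutive chunks (illustrative sketch; requires
 * chunks >= 2 and n >= chunks). */
void chunk_estimate(const double *series, size_t n, size_t chunks,
                    double *mean, double *std_err)
{
    size_t len = n / chunks;            /* samples per chunk */
    double m = 0.0, m2 = 0.0;

    for (size_t c = 0; c < chunks; c++) {
        double sum = 0.0;
        for (size_t i = 0; i < len; i++)
            sum += series[c * len + i];
        double avg = sum / (double)len; /* chunk average */
        m  += avg;
        m2 += avg * avg;
    }
    m  /= (double)chunks;
    m2 /= (double)chunks;
    *mean = m;
    /* standard error of the mean across chunk averages */
    *std_err = sqrt((m2 - m * m) / (double)(chunks - 1));
}

Two runs whose estimates agree within these errors can then be regarded as having left the transient.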
In the stationary state, the ensemble average (taking, at equal times, a sample across a large number of realisations of the model) is strictly time independent.
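In symbols: if A is any observable and \(\langle \cdot \rangle\) denotes the ensemble average, stationarity means

\[ \langle A(t) \rangle = \langle A \rangle \qquad \text{for all } t \text{ beyond the transient}, \]

whereas the time average over a single run, \(\bar{A} = T^{-1} \sum_{t=1}^{T} A(t)\), converges to the same value only if the stationary state is ergodic.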