Traits that allow individuals to forage more efficiently can be expected to be favored by natural selection. The hypothesis that natural selection should drive foraging organisms to maximize their energy intake gave rise to what became known as optimal foraging theory. The idea can be traced to studies undertaken by MacArthur and Pianka [219] and Emlen [109] in 1966.
Optimal foraging theory predicts that foragers will behave to maximize the net caloric gain per unit time of foraging. It assumes differentiated functional classes of predators (grazers, parasites, etc.) and provides insight into correlations between physiological features and predation skills (e.g., digestion and ingestion rates). It also highlights the importance of handling time (e.g., for killing and eating prey) [156, 190, 191, 192, 218, 267].
A large body of theoretical work [162, 171] grew in an attempt to deal with the multitude of determinant factors and in order to identify the relevant parameters involved in the predicted optimization [328]. An important example is the marginal value theorem [75, 76], which states that for the forager to maximize the net energy gain per unit time while foraging in a (more or less uniformly) patchy environment, the forager must leave a given patch when the expected net gain from staying in the patch drops to the expected net gain from traveling to (and starting to search in) the next patch.
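The marginal value theorem lends itself to a quick numerical check. The sketch below is an illustration, not taken from the text: it assumes a hypothetical saturating gain function g(t) = G(1 − e^{−rt}) and a fixed travel time τ between patches, and finds the optimal residence time t* as the root of the marginal-value condition g′(t) = g(t)/(t + τ) by bisection.

```python
import math

def optimal_residence_time(g, dg, travel_time, t_hi=100.0):
    """Bisect for t* where the marginal gain dg(t) equals the
    mean rate g(t) / (t + travel_time), per the marginal value theorem."""
    f = lambda t: dg(t) - g(t) / (t + travel_time)
    lo, hi = 1e-12, t_hi          # f(lo) > 0, f(hi) < 0 for a saturating g
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical parameters: gain saturates at G with rate r; travel time tau.
G, r, tau = 10.0, 1.0, 2.0
g  = lambda t: G * (1.0 - math.exp(-r * t))
dg = lambda t: G * r * math.exp(-r * t)

t_star = optimal_residence_time(g, dg, tau)
rate_star = g(t_star) / (t_star + tau)
```

For these parameters t* ≈ 1.5: leaving the patch either earlier or later than t* yields a lower net gain per unit time, which is exactly the content of the theorem.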
Two classical subjects in statistical physics are kinetic theory and phase transitions. The latter has been traditionally studied in the equilibrium framework, where the goal is to characterize the ordered phases that arise below the critical temperature in systems with short-range interactions. A more recent development has been to investigate how this order is formed dynamically. In the present and following chapter, we focus on this dynamics.
Phenomenology of coarsening
The theory of phase transitions was originally motivated by the goal of understanding ferromagnetism. The Ising model played a central role in this effort and provided a useful framework for investigating dynamics. The basic entity in the Ising model is a spin variable that can take two possible values, s = ±1, at each site of a lattice. A local ferromagnetic interaction between spins promotes their alignment, while thermal noise tends to randomize their orientations. The outcome of this competition is a disordered state for sufficiently high temperature, while below a critical temperature the tendency for alignment prevails and an ordered state arises in which the order parameter – the average magnetization – is non-zero.
Suppose that we start with an Ising model in an equilibrium disordered phase and lower the temperature. To understand how the system evolves toward the final state, we must endow this model with a dynamics and we also need to specify the quenching procedure. Quenching is usually implemented as follows:
• Start at a high initial temperature Ti > Tc, where spins are disordered; here Tc is the critical temperature.
• Instantaneously cool the system to a lower temperature Tf.
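The quench protocol above can be sketched in a few lines of code. The following toy simulation is an illustration only (not the book's own code): it starts a small two-dimensional Ising lattice in a random T = ∞ configuration and then runs single-spin-flip Metropolis dynamics at a temperature Tf below Tc, so that the energy drops as ordered domains form.

```python
import math, random

def quench_ising(L=16, T_f=1.0, sweeps=200, seed=1):
    """Quench a 2D Ising model: random (T = infinity) start,
    then single-spin-flip Metropolis dynamics at T_f."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def local_field(i, j):          # sum of the four neighboring spins
        return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

    def energy():                   # -J * sum over bonds, J = 1, each bond once
        return -sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                    for i in range(L) for j in range(L))

    e_initial = energy()
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        dE = 2 * spins[i][j] * local_field(i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T_f):
            spins[i][j] = -spins[i][j]
    return e_initial, energy()
```

The instantaneous quench appears here as the absence of any intermediate temperatures: the disordered initial state is simply evolved at Tf from the first Monte Carlo step onward.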
Suppose that gas molecules impinge upon and adsorb on a surface, or substrate. If the incident molecules are monomers that permanently attach to single adsorption sites on the surface and there are no interactions between adsorbed monomers, then the density ρ of occupied sites increases with time at a rate proportional to the density of vacancies, namely, dρ/dt = 1 − ρ. Thus ρ(t) = 1 − e^{−t}, and vacancies disappear exponentially in time. However, if each arriving molecule covers more than one site on the substrate, then a vacant region that is smaller than the molecular size can never be filled. The substrate reaches an incompletely filled jammed state that cannot accommodate additional adsorption. What is the filling fraction of this jammed state? What is the rate at which the final fraction is reached? These are basic questions of adsorption kinetics.
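The exponential relaxation for non-interacting monomers is easy to confirm by direct simulation. In this sketch (an illustration, with an assumed normalization of one adsorption attempt per site per unit time), attempts on already-occupied sites are simply wasted:

```python
import math, random

def monomer_coverage(num_sites=100000, t=1.0, seed=2):
    """Monomer adsorption: each attempt lands on a uniformly random site;
    attempts on occupied sites change nothing, so drho/dt = 1 - rho."""
    rng = random.Random(seed)
    occupied = [False] * num_sites
    for _ in range(int(t * num_sites)):   # one attempt per site per unit time
        occupied[rng.randrange(num_sites)] = True
    return sum(occupied) / num_sites

rho = monomer_coverage(t=1.0)
exact = 1.0 - math.exp(-1.0)              # rho(1) from the rate equation
```

At t = 1 the measured coverage agrees with 1 − e^{−1} ≈ 0.632 to within statistical noise of order 1/√N.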
Random sequential adsorption in one dimension
A simple example with non-trivial collective behavior is the random sequential adsorption of dimers – molecules that occupy two adjacent sites of a one-dimensional lattice (Fig. 7.1). We model the steady influx of molecules by adsorption attempts that occur one at a time at random locations on the substrate. An adsorption attempt is successful only if a dimer lands on two adjacent empty sites. If a dimer lands on either two occupied sites or on one occupied and one empty site, the attempt fails. After each successful attempt, the coverage increases.
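A minimal simulation of this process (a sketch, not the book's code) attempts each bond of a ring exactly once, in random order. Since a failed attempt can never later succeed, this single shuffled pass reproduces the jammed state of random sequential adsorption, whose coverage is known to approach 1 − e^{−2} ≈ 0.865 for dimers on a one-dimensional lattice.

```python
import random

def dimer_rsa_coverage(num_sites=200000, seed=3):
    """Random sequential adsorption of dimers on a 1D ring:
    attempt each bond (i, i+1) once, in random order."""
    rng = random.Random(seed)
    occupied = [False] * num_sites
    bonds = list(range(num_sites))     # bond i joins sites i and (i+1) % num_sites
    rng.shuffle(bonds)
    for i in bonds:
        j = (i + 1) % num_sites
        if not occupied[i] and not occupied[j]:
            occupied[i] = occupied[j] = True   # successful dimer adsorption
    return sum(occupied) / num_sites
```

The residual vacancies are isolated empty sites that no dimer can ever fill, which is precisely the jamming phenomenon described above.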
The goal of statistical physics is to study collective behaviors of interacting many-particle systems. In equilibrium statistical physics, the simplest interaction is exclusion – for example, hard spheres that cannot overlap. This model depends on a single dimensionless parameter, the volume fraction; the temperature is irrelevant since the interaction energy is zero when the spheres are non-overlapping and infinite otherwise. Despite its apparent simplicity, the hard-sphere gas is incompletely understood except in one dimension. A similar state of affairs holds for the lattice version of hard spheres; there is little analytical understanding of its unusual liquid–gas transition when the spatial dimension d ≥ 2.
In this chapter we explore the role of exclusion on the simplest non-equilibrium models that are known as exclusion processes. Here particles occupy single lattice sites, and each particle can hop to a neighboring site only if it is vacant (see Fig. 4.1). There are many basic questions we can ask: What is the displacement of a single particle? How does the density affect transport properties? How do density gradients evolve with time? In more than one dimension, exclusion does not qualitatively affect transport properties compared with a system of independent particles. Interestingly, exclusion leads to fundamentally new transport phenomena in one dimension.
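The hopping rule is simple to state in code. This sketch (an illustration with arbitrary parameters) runs a symmetric exclusion process on a ring: at each step a random particle picks a random neighboring site and moves there only if that site is vacant.

```python
import random

def simulate_sep(num_sites=100, num_particles=50, steps=20000, seed=4):
    """Symmetric exclusion process on a ring: a hop is executed
    only when the target site is empty."""
    rng = random.Random(seed)
    positions = rng.sample(range(num_sites), num_particles)
    occupied = set(positions)
    for _ in range(steps):
        k = rng.randrange(num_particles)
        target = (positions[k] + rng.choice((-1, 1))) % num_sites
        if target not in occupied:        # exclusion constraint
            occupied.remove(positions[k])
            occupied.add(target)
            positions[k] = target
    return positions
```

Because particles can never pass one another in one dimension, a tagged particle is confined by its neighbors; this single-file constraint is the origin of the anomalous transport discussed in this chapter.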
Statistical physics is an unusual branch of science. It is not defined by a specific subject per se, but rather by ideas and tools that work for an incredibly wide range of problems. Statistical physics is concerned with interacting systems that consist of a huge number of building blocks – particles, spins, agents, etc. The local interactions between these elements lead to emergent behaviors that can often be simple and clean, while the corresponding few-particle systems can exhibit bewildering properties that defy classification. From a statistical perspective, the large size of a system often plays an advantageous, not deleterious, role in leading to simple collective properties.
While the tools of equilibrium statistical physics are well-developed, the statistical description of systems that are out of equilibrium is less mature. In spite of more than a century of effort to develop a formalism for non-equilibrium phenomena, there still do not exist analogs of the canonical Boltzmann factor or the partition function of equilibrium statistical physics. Moreover, non-equilibrium statistical physics has traditionally dealt with small deviations from equilibrium. Our focus is on systems far from equilibrium, where conceptually simple and explicit results can be derived for their dynamical evolution.
Non-equilibrium statistical physics is perhaps best appreciated by presenting wide-ranging and appealing examples, and by developing an array of techniques to solve these systems. We have attempted to make our treatment self-contained, so that an interested reader can follow the text with a minimum of unresolved methodological mysteries or hidden calculational pitfalls.
Non-equilibrium statistical physics describes the time evolution of many-particle systems. The individual particles are elemental interacting entities which, in some situations, can change in the process of interaction. In the most interesting cases, interactions between particles are strong and hence a deterministic description of even a few-particle system is beyond the reach of any exact theoretical approach. On the other hand, many-particle systems often admit an analytical statistical description when their number becomes large. In that sense they are simpler than few-particle systems. This feature has several different names – the law of large numbers, ergodicity, etc. – and it is one of the reasons for the spectacular successes of statistical physics and probability theory.
Non-equilibrium statistical physics is also quite different from other branches of physics, such as the “fundamental” fields of electrodynamics, gravity, and high-energy physics that involve a reductionist description of few-particle systems, as well as applied fields, such as hydrodynamics and elasticity, that are primarily concerned with the consequences of fundamental governing equations. Some of the key and distinguishing features of non-equilibrium statistical physics include the following:
• there are no basic equations (like Maxwell equations in electrodynamics or Navier–Stokes equations in hydrodynamics) from which the rest follows;
• it is intermediate between fundamental and applied physics;
• common underlying techniques and concepts exist in spite of the wide diversity of the field;
• it naturally leads to the creation of methods that are useful in applications outside of physics (for example the Monte Carlo method and simulated annealing).
The previous chapter focused on population dynamics models, where the reactants can be viewed as perfectly mixed and the kinetics is characterized only by global densities. In this chapter, we study diffusion-controlled reactions, in which molecular diffusion limits the rate at which reactants encounter each other. In this situation, spatial gradients and spatial fluctuations play an essential role in governing the kinetics. As we shall see in this chapter, the spatial dimension plays a crucial role in determining the importance of these heterogeneities.
Role of the spatial dimension
When the spatial dimension d exceeds a critical dimension dc, diffusing molecules tend to remain well mixed. This efficient mixing stems from the transience of diffusion in high spatial dimension, which means that a molecule is almost as likely to react with a distant neighbor as with a near neighbor. Because of this efficient mixing, spatial fluctuations play a negligible role, ultimately leading to mean-field kinetics. Conversely, when d < dc, nearby particles react with high probability. This locality causes large-scale heterogeneities to develop, even when the initial state is homogeneous, that invalidate a mean-field description of the kinetics.
To illustrate the role of the spatial dimension in a simple setting, consider the evolution of a gas of identical diffusing particles that undergo either irreversible annihilation or coalescence (see also Section 1.2). Suppose that each particle has radius R and diffusivity D.
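A direct simulation makes the contrast with mean-field kinetics visible. The sketch below is an illustration only, with time measured simply in hop events: it implements single-species annihilation A + A → 0 on a one-dimensional ring, where particles hop randomly and a hop onto an occupied site removes both particles. In d = 1 the surviving density decays as t^{−1/2}, much more slowly than the mean-field prediction t^{−1}, because depleted regions develop around surviving particles.

```python
import random

def annihilation_1d(num_sites=10000, density=0.2, steps=20000, seed=5):
    """A + A -> 0 on a ring: particles hop randomly; a hop onto
    an occupied site annihilates both particles."""
    rng = random.Random(seed)
    occupied = set(rng.sample(range(num_sites), int(density * num_sites)))
    for _ in range(steps):
        if not occupied:
            break
        x = rng.choice(tuple(occupied))   # pick a random particle (O(n) sketch)
        y = (x + rng.choice((-1, 1))) % num_sites
        occupied.remove(x)
        if y in occupied:
            occupied.remove(y)            # annihilation
        else:
            occupied.add(y)               # ordinary hop
    return len(occupied) / num_sites
```

Tracking the density against elapsed time (rather than running a fixed number of events, as this sketch does for simplicity) would reveal the anomalous t^{−1/2} decay directly.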