The decreasing preference of montmorillonite for K+ relative to Na+ as the clay adsorbs increasing amounts of K+ is shown to be the general rule for the exchange of strongly hydrating ions by weakly hydrating ions. Variability in the mass-action selectivity coefficient is interpreted in terms of a composition-dependent surface entropy, which is a function of the chemical properties of the exchanging ion as well as the nature of the adsorption sites. The generally used mass-action form of the exchange equation may only be applicable to exchange systems in which both ions have solution-like mobility at the exchanger surface. It is suggested that experimental variables such as ionic strength can greatly influence the degree of fit of data to a given ion-exchange equation.
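For reference, the mass-action selectivity coefficient for Na+/K+ exchange discussed above is conventionally written as follows (a standard textbook form, not taken from this abstract; brackets denote adsorbed-phase fractions and parentheses solution activities):

```latex
\mathrm{Na\text{-}clay} + \mathrm{K}^{+} \;\rightleftharpoons\; \mathrm{K\text{-}clay} + \mathrm{Na}^{+},
\qquad
K_{s} \;=\; \frac{[\mathrm{K\text{-}clay}]\,(\mathrm{Na}^{+})}{[\mathrm{Na\text{-}clay}]\,(\mathrm{K}^{+})} .
```

The abstract's central point is that $K_s$ is not constant but varies with the adsorbed-phase composition.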
This chapter begins the final section of the book, which presents both a review and new results of original research on decoherence and measurement theory. In this chapter, it is shown that standard quantum mechanics can lead to irreversible behavior in an open system, in contrast to the Poincaré recurrence theorem, which predicts repeating, cyclical behavior for all closed systems. The quantum Boltzmann equation, which implies the famous H-theorem that underlies all statistical mechanics, is derived.
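For orientation, the classical H-theorem alluded to above can be stated as follows (the standard Boltzmann form, not the chapter's quantum derivation): for a one-particle distribution $f(\mathbf{v},t)$ evolving under the Boltzmann equation,

```latex
H(t) \;=\; \int f(\mathbf{v},t)\,\ln f(\mathbf{v},t)\;\mathrm{d}^{3}v,
\qquad
\frac{\mathrm{d}H}{\mathrm{d}t} \;\le\; 0 ,
```

so that $-H$ plays the role of a monotonically increasing entropy, giving the irreversible approach to equilibrium.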
Quantum field theory (QFT) provides what is arguably the only suitable language (or mathematical tool) for describing not only the motion and interaction of particles but also their “annihilation” and “creation” out of a field posited a priori. This view is well suited to describing dislocations as particles or strings embedded within a crystalline ordered field. This chapter concisely reviews the methods of QFT, emphasizing their distinction from quantum mechanics, conventionally used for single- and many-particle problems, and their equivalence to statistical mechanics. The alternative formalism based on the Feynman path integral and its imaginary-time representation is also reviewed, as the foundation for its use in Chapter 10.
Statistical mechanics (SM) is the third pillar of modern physics, next to quantum theory and relativity theory. It aims to account for the behaviour of macroscopic systems in terms of the dynamical laws that govern their microscopic constituents and probabilistic assumptions about them. In this Element, the authors investigate the philosophical and foundational issues that arise in SM. The authors introduce the two main theoretical approaches in SM, Boltzmannian SM and Gibbsian SM, and discuss how they conceptualise equilibrium and explain the approach to it. In doing so, the authors examine how probabilities are introduced into the theories, how they deal with irreversibility, how they understand the relation between the micro and the macro level, and how the two approaches relate to each other. Throughout, the authors also pinpoint open problems that can be the subject of future research. This title is also available as Open Access on Cambridge Core.
We study two models of discrete height functions, that is, models of random integer-valued functions on the vertices of a tree. First, we consider the random homomorphism model, in which neighbours must have a height difference of exactly one. The local law is uniform by definition. We prove that the height variance of this model is bounded, uniformly over all boundary conditions (both in terms of location and boundary heights). This implies a strong notion of localisation, uniformly over all extremal Gibbs measures of the system. For the second model, we consider directed trees, in which each vertex has exactly one parent and at least two children. We consider the locally uniform law on height functions which are monotone, that is, such that the height of the parent vertex is always at least the height of the child vertex. We provide a complete classification of all extremal gradient Gibbs measures, and describe exactly the localisation-delocalisation transition for this model. Typical extremal gradient Gibbs measures are localised also in this case. Localisation in both models is consistent with the observation that the Gaussian free field is localised on trees, which is an immediate consequence of transience of the random walk.
Exact solutions for infinite Ising systems are rare, specific in terms of the interactions allowed, and limited to one and two dimensions. To study a wider range of models we must resort to various approximation techniques. One of the simplest and most comprehensive of these is the mean-field approximation, the subject of this chapter. Some versions of this approximation rely on a self-consistent requirement, and in this respect the mean-field method for the Ising model is similar to a number of other self-consistent approximation methods in physics, including the Hartree–Fock approximation for atomic and molecular orbitals, the BCS theory of superconductivity, and the relaxation method for determining electric potentials. We will also introduce a somewhat different mean-field approach, the Landau–Ginzburg approximation, which is based on a series expansion of the free energy. One of the drawbacks of all of the mean-field theories, however, is that they predict the same mean-field critical exponents, which, unfortunately, are at odds with the results of exact solutions and experiments.
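The self-consistent requirement mentioned above can be made concrete with the standard mean-field equation for the Ising magnetization, m = tanh(zJm/k_BT) (with k_B = 1 and z the coordination number). The sketch below, with illustrative function and parameter names not taken from the text, solves it by fixed-point iteration:

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, tol=1e-12, max_iter=10000):
    """Solve the mean-field self-consistency equation m = tanh(z*J*m / T)
    by fixed-point iteration, starting from full polarization (k_B = 1)."""
    m = 1.0
    for _ in range(max_iter):
        m_new = math.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# The mean-field critical temperature is T_c = z*J: the iteration converges
# to a nonzero m below T_c and to m = 0 above it.
```

Comparing the fixed point below and above T_c = zJ = 4 illustrates the mean-field phase transition, with its characteristic (and, as noted above, quantitatively incorrect) critical behaviour.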
A useful way to solve a complex problem – whether in physics, mathematics, or life in general – is to break it down into smaller pieces that can be handled more easily. This is especially true of the Ising model. In this chapter, we investigate various partial-summation techniques in which a subset of Ising spins is summed over to produce new, effective couplings among the remaining spins. These methods are useful in their own right and are even more important when used as a part of position-space renormalization-group techniques.
In the chapters so far, we have studied a number of exact methods of calculation for Ising models. These studies culminated in the exact solution for an infinite one-dimensional Ising model, as well as the corresponding solution on a 2 × ∞ lattice. Neither of these systems shows a phase transition, however. In this chapter, we start with Onsager’s exact solution for the two-dimensional lattice, which quite famously does have a phase transition. Next, we explore exact series expansions from low and high temperature, and show how these results can be combined, via the concept of duality, to give the exact location of the phase transition in two dimensions.
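The duality argument mentioned above pins down the two-dimensional critical point: the self-dual condition sinh(2J/k_BT_c) = 1 gives k_BT_c/J = 2/ln(1 + √2). A one-line check (the formula is standard Kramers-Wannier duality; variable names are illustrative):

```python
import math

# Kramers-Wannier self-duality: sinh(2J / (k_B T_c)) = 1, hence
# k_B T_c / J = 2 / ln(1 + sqrt(2)) ≈ 2.269, matching Onsager's exact solution.
Tc = 2.0 / math.log(1.0 + math.sqrt(2.0))
```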
In Chapter 3 we explored transformations where a finite group of Ising spins is summed to produce effective interactions among the remaining spins. In all of these cases a finite sum of Boltzmann factors is sufficient to solve the problem. We turn now to infinite systems, where a straightforward, brute-force summation is not possible. Instead, we develop a number of new techniques that allow us to evaluate an infinite summation in full detail.
In this chapter, we explore Ising systems that consist of just one or a few spins. We define a Hamiltonian for each system and then carry out straightforward summations over all the spin states to obtain the partition function. No phase transitions occur in these systems – in fact, an infinite system is needed to produce the singularities that characterize phase transitions. Even so, our study of finite systems yields a number of results and insights that are important to the study of infinite systems.
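The straightforward summation described above can be sketched directly: for a small open chain of Ising spins the partition function is a finite sum of Boltzmann factors over all 2^N spin states. The function name and conventions below are illustrative, not the author's:

```python
import itertools
import math

def partition_function(N, J=1.0, h=0.0, beta=1.0):
    """Brute-force partition function Z for an open 1-D chain of N Ising spins
    with Hamiltonian H = -J * sum_i s_i s_{i+1} - h * sum_i s_i."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        E -= h * sum(spins)
        Z += math.exp(-beta * E)
    return Z
```

For N = 2 and h = 0 this reproduces the closed form Z = 4 cosh(βJ); since Z is a finite sum of analytic terms, no singularity (and hence no phase transition) can appear at finite N.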
Few models in theoretical physics have been studied for as long, or in as much detail, as the Ising model. It’s the simplest model to display a nontrivial phase transition, and as such it plays a unique role in theoretical physics. In addition, the Ising model can be applied to a wide range of physical systems, from magnets and binary liquid mixtures, to adsorbed monolayers and superfluids, to name just a few. In this chapter, we present some of the background material that sets the stage for a detailed study of the Ising model in the chapters to come.
The Ising model provides a detailed mathematical description of ferromagnetism and is widely used in statistical physics and condensed matter physics. In this Student's Guide, the author demystifies the mathematical framework of the Ising model and provides students with a clear understanding of both its physical significance, and how to apply it successfully in their calculations. Key topics related to the Ising model are covered, including exact solutions of both finite and infinite systems, series expansions about high and low temperatures, mean-field approximation methods, and renormalization-group calculations. The book also incorporates plots, figures, and tables to highlight the significance of the results. Designed as a supplementary resource for undergraduate and graduate students, each chapter includes a selection of exercises intended to reinforce and extend important concepts, and solutions are also available for all exercises.
Causes always seem to come prior to their effects. What might explain this asymmetry? Causation's temporal asymmetry isn't straightforwardly due to a temporal asymmetry in the laws of nature—the laws are, by and large, temporally symmetric. Nor does the asymmetry appear due to an asymmetry in time itself. This Element examines recent empirical attempts to explain the temporal asymmetry of causation: statistical mechanical accounts, agency accounts and fork asymmetry accounts. None of these accounts is yet complete, and a full explanation of the temporal asymmetry of causation will likely require contributions from all three programs.
Climate emulators are a powerful instrument for climate modeling, especially for reducing the computational load of simulating the spatiotemporal processes associated with climate systems. The most important type of emulator is the statistical emulator, trained on the output of an ensemble of simulations from various climate models. However, such emulators often fail to capture the “physics” of a system, which can be detrimental to unveiling the critical processes that lead to climate tipping points. Historically, statistical mechanics emerged as a tool for resolving the constraints of physics using statistics. We discuss how climate emulators rooted in statistical mechanics and machine learning can give rise to new climate models that are more reliable and require fewer observational and computational resources. Our goal is to stimulate discussion on how statistical climate emulators can be further improved with the help of statistical mechanics, which, in turn, may reignite the interest of the statistical community in the statistical mechanics of complex systems.
Anti-evolutionists sometimes use principles of thermodynamics in making their case. Though thermodynamics is normally considered a branch of physics, it has a strongly mathematical character that justifies its inclusion in this book. We discuss the basics of thermodynamics and statistical mechanics, and then explain why the anti-evolutionist version is such a caricature.
In this comparative historical analysis, we will analyze the intellectual tendency that emerged between 1946 and 1956 to take advantage of the popularity of communication theory to develop a kind of informational epistemology of statistical mechanics. We will argue that this tendency results from a historical confluence in the early 1950s of certain theoretical claims of the so-called English School of Information Theory, championed by authors such as Gabor (1956) or MacKay (1969), and from the attempt to extend the profound success of Shannon’s ([1948] 1993) technical theory of sign transmission to the field of statistical thermal physics. As a paradigmatic example of this tendency, we will evaluate the intellectual work of Léon Brillouin (1956), who, in the mid-fifties, developed an information theoretical approach to statistical mechanical physics based on a concept of information linked to the knowledge of the observer.
In addition to his ground-breaking research, Nobel Laureate Steven Weinberg is known for a series of highly praised texts on various aspects of physics, combining exceptional physical insight with his gift for clear exposition. Describing the foundations of modern physics in their historical context and with some new derivations, Weinberg introduces topics ranging from early applications of atomic theory through thermodynamics, statistical mechanics, transport theory, special relativity, quantum mechanics, nuclear physics, and quantum field theory. This volume provides the basis for advanced undergraduate and graduate physics courses as well as being a handy introduction to aspects of modern physics for working scientists.
Chapter 2 introduces the statistical physics description of the rheology of concentrated colloidal suspensions at low Reynolds number. While the solvent is a Newtonian fluid, the suspension exhibits viscoelasticity and non-Newtonian rheology. After explaining the hydrodynamic interactions between Brownian particles mediated by the intervening solvent flow, two theoretical methods for describing the suspension rheology are discussed. In the Langevin equation method, stochastic particle trajectories are considered under the influence of direct and hydrodynamic interactions, and solvent-induced fluctuating forces. This method is fundamental to simulation schemes where the macroscopic suspension stress is calculated by time-averaging the microscopic stress over representative trajectories. The main focus is on the second method, based on the so-called generalized Smoluchowski equation (GSmE), which invokes a many-particle diffusion-advection equation for the configurational probability density of Brownian particles in shear flow. Based on the GSmE, real-space and Fourier-space schemes are discussed for calculating rheological properties including the shear stress relaxation function and the steady-state and dynamic viscosities. Starting from exact Green-Kubo relations for the shear stress relaxation function, the linear mode coupling theory (MCT) and its non-linear extension, termed Integration Through Transients (ITT), are introduced as versatile Fourier-space schemes. They allow for studying the rheology of concentrated suspensions close to glass and gel transitions.
Chapter 7 delves into a handful of combinatorial problems in flat origami theory that are more general than the single-vertex problems considered in Chapter 5. First, we count the number of locally-valid mountain-valley assignments of certain origami tessellations, like the square twist and Miura-ori tessellations. Then the stamp-folding problem is discussed, where the crease pattern is a grid of squares and we want to fold them into a one-stamp pile in as many ways as possible. Next, the tethered membrane model of polymer folding is considered from soft-matter physics, which translates into origami as counting the number of flat-foldable crease patterns that can be made as a subset of edges from the regular triangle lattice. Many of these problems establish connections between flat foldings, graph colorings, and statistical mechanics.
This clear and pedagogical text delivers a concise overview of classical and quantum statistical physics. Essential Statistical Physics shows students how to relate the macroscopic properties of physical systems to their microscopic degrees of freedom, preparing them for graduate courses in areas such as biophysics, condensed matter physics, atomic physics and statistical mechanics. Topics covered include the microcanonical, canonical, and grand canonical ensembles, Liouville's Theorem, Kinetic Theory, non-interacting Fermi and Bose systems and phase transitions, and the Ising model. Detailed steps are given in mathematical derivations, allowing students to quickly develop a deep understanding of statistical techniques. End-of-chapter problems reinforce key concepts and introduce more advanced applications, and appendices provide a detailed review of thermodynamics and related mathematical results. This succinct book offers a fresh and intuitive approach to one of the most challenging topics in the core physics curriculum and provides students with a solid foundation for tackling advanced topics in statistical mechanics.