Phase transitions are defined, and the concepts of order parameter and spontaneously broken symmetry are discussed. Simple models for magnetic phase transitions are introduced, together with some experimental examples. Critical exponents and the notion of universality are defined, and the consequences of the scaling assumptions are derived.
Phase transitions and order parameters
It is a fact of everyday experience that matter in thermodynamic equilibrium exists in different macroscopic phases. Indeed, it is difficult to imagine life on Earth without all three phases of water. A typical sample of matter, for example, has the temperature–pressure phase diagram presented in Fig. 1.1: by changing either of the two parameters the system may be brought into a solid, liquid, or gas phase. The change of phase may be gradual or abrupt. In the latter case, the phase transition takes place at well-defined values of the parameters that determine the phase boundary.
Phase transitions are defined as points in the parameter space where the thermodynamic potential becomes non-analytic. Such a non-analyticity can arise only in the thermodynamic limit, when the size of the system is assumed to be infinite. In a finite system the partition function is a finite sum of analytic functions of its parameters, and is therefore always analytic. A sharp phase transition is thus a mathematical idealization, albeit one that describes reality extremely well. Macroscopic systems typically contain ∼ 10²³ degrees of freedom, and as such are very close to being in the thermodynamic limit. The phase boundaries in Fig. 1.1, for example, therefore represent reproducible physical quantities.
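The sharpening toward a true singularity can be made concrete with a small numerical sketch. The fully connected (Curie–Weiss) Ising model below is an illustrative choice, not a model from the text: for any finite N the partition sum is a smooth function of temperature, and the magnetization curve only develops a sharp kink as N → ∞.

```python
from math import comb, exp

def avg_abs_m(N, T, J=1.0):
    """Exact <|m|> for the fully connected (Curie-Weiss) Ising model,
    E = -J*M**2/(2N) with M the total spin, by direct enumeration.
    For any finite N this is an analytic function of T; only in the
    limit N -> infinity does a sharp transition appear at T = J."""
    beta = 1.0 / T
    Z = 0.0
    num = 0.0
    for k in range(N + 1):            # k = number of down spins
        M = N - 2 * k                 # net magnetization of the configuration
        w = comb(N, k) * exp(beta * J * M * M / (2.0 * N))
        Z += w
        num += w * abs(M) / N
    return num / Z
```

Evaluating `avg_abs_m` over a range of temperatures for N = 10, 100, 1000 shows the crossover near T = J steepening with N while remaining perfectly smooth at every finite size.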
The partition function for interacting bosons is derived as the coherent state path integral and then generalized to magnetic transitions. Phase transitions in the Ginzburg–Landau–Wilson theory for a fluctuating order parameter are discussed in Hartree's and Landau's approximations, and the fundamental limitation of perturbation theory near the critical point is exposed. The concept of upper critical dimension is introduced.
Partition function for interacting bosons
As a prototypical system with a continuous phase transition we will consider the system of interacting bosons. A well studied physical realization is provided by helium (4He), with the pressure–temperature phase diagram depicted in Fig. 2.1. Since the atoms of helium are light and interact via weak dipole–dipole interactions, quantum zero-point motion keeps helium liquid down to the lowest temperatures, at not too high pressures. Instead of solidifying, it suffers a continuous normal liquid–superfluid liquid transition at Tc ≈ 2 K, also called the λ-transition due to the characteristic form of the specific heat in Fig. 1.6. The λ-transition represents the best quantitatively understood critical point in nature. We have already quoted the specific heat exponent α = −0.0127 ± 0.0003, with the power-law behavior being observed over six decades of the reduced temperature! To achieve this accuracy the experiment had to be performed in the space shuttle, so that the small variations in Tc along the height of the sample due to Earth's gravity would be minimized. At higher pressures He eventually solidifies, with the superfluid–solid and the normal liquid–solid phase transitions being discontinuous, the former only weakly so.
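To see why a negative α produces a cusp rather than a divergence, one can evaluate the standard critical scaling form of the specific heat, C(t) = (A/α)(|t|^(−α) − 1) + B, with the measured exponent; the amplitudes A and B below are illustrative placeholders, not experimental values.

```python
def specific_heat(t, alpha=-0.0127, A=1.0, B=0.0):
    """Critical scaling form C(t) = (A/alpha) * (|t|**(-alpha) - 1) + B
    near the lambda point, with the measured alpha = -0.0127.  The
    amplitudes A and B are illustrative placeholders.  Because alpha < 0,
    |t|**(-alpha) -> 0 as t -> 0, so C rises to the finite cusp value
    A/(-alpha) + B instead of diverging."""
    return (A / alpha) * (abs(t) ** (-alpha) - 1.0) + B
```

With α < 0 the specific heat approaches a finite value in a sharp cusp as the reduced temperature t → 0, which is exactly the shape seen in the λ-transition data.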
It has been more than thirty years since the theory of universal behavior of matter near the points of continuous phase transitions was formulated. Since then the principles and the techniques of the theory of such “critical phenomena” have pervaded modern physics. The basic tenets of our understanding of phase transitions, the concepts of scaling and of the renormalization group, have been found to be useful well beyond their original domain, and today constitute some of our basic tools for thinking about systems with many interacting degrees of freedom. When applied to the original problem of continuous phase transitions in liquids, magnets, and superfluids, the theory is in remarkable agreement with measurements, and often even ahead of experiment in precision. For this reason alone the theory of critical phenomena would have to be considered a truly phenomenal physical theory, and ranked as one of the highest achievements of twentieth century physics.
The book before you originated in part from the courses on theory of phase transitions and renormalization group I taught to graduate students at Simon Fraser University. The students typically had a solid prior knowledge of statistical mechanics, and thus had some familiarity with the notions of phase transitions and of the mean-field theory, both being commonly taught nowadays as parts of a graduate course on the subject. In selecting the material and in gauging the technical level of the lectures I had in mind a student who not only wanted to become familiar with the basic concepts of the theory of critical phenomena, but also to learn how to actually use it to explain and compute.
In the hydrodynamic theories, one writes equations which describe the system at hand on long time and length scales, using local conservation laws and the presence of broken symmetries to determine the identity of these slowly varying quantities. However, one would like a way to take account of the effects of the more rapidly varying degrees of freedom in such theories. Implicitly, these more rapid degrees of freedom are present in the integrals which determine the transport coefficients in the hydrodynamic equations by way of Kubo relations, but no method is presented to calculate the required integrands. One way to take account of the other degrees of freedom is to take on the entire many-body problem all the way down to the atomic or electronic level, as one does in molecular dynamics simulations of various sorts. However, it is useful to have some approximate analytical ways to attack this problem as well. Here we review the basis for the most common such approach, the Langevin equation. For most of this discussion, we will assume that the identity of the slow variable or variables is known and that the separation of time scales is extreme, so that the faster variables are essentially instantaneous in a sense we will discuss. These assumptions are often not particularly well justified. However, the resulting formulation has yielded very useful insights and is an important part of the subject.
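As a minimal illustration of this program, the sketch below integrates an overdamped Langevin equation for a single slow coordinate in a harmonic potential, with the eliminated fast degrees of freedom replaced by Gaussian white noise. The model and parameter values are illustrative; the noise strength is fixed by the fluctuation–dissipation relation so that the long-time statistics reproduce equilibrium at temperature kT.

```python
import math
import random

def simulate_langevin(k=1.0, gamma=1.0, kT=1.0, dt=0.01, n_steps=400_000, seed=1):
    """Euler-Maruyama integration of the overdamped Langevin equation
    gamma * dx/dt = -k*x + xi(t) for one slow coordinate in the harmonic
    potential U(x) = k*x**2/2.  The fast degrees of freedom enter only
    through the Gaussian white noise xi, whose per-step strength
    sigma**2 = 2*kT*dt/gamma is fixed by fluctuation-dissipation."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    x = 0.0
    samples = []
    for step in range(n_steps):
        x += -(k / gamma) * x * dt + sigma * rng.gauss(0.0, 1.0)
        if step >= n_steps // 2:          # discard the initial transient
            samples.append(x)
    return samples
```

With the fluctuation–dissipation choice of noise amplitude, the sampled variance of x approaches the equipartition value kT/k, a useful check that the fast variables have been modeled consistently with equilibrium.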
Here we briefly review some aspects of the statistical mechanics of liquids. The distinction between a liquid and a gas is not sharp except in the neighborhood of the transition between them, which we will discuss in Chapter 10. As a working definition we will consider a system to be a liquid if it lacks the geometrical structure associated, for example, with crystals and for which the density expansions discussed in the last chapter do not converge. This distinction can be made somewhat sharper when we have discussed the relevant correlation functions and phase diagrams. It is to be noted that for atomic and molecular systems which can be treated classically and for which the two-body interactions contain a hard core at short distances, there will always be a region of the thermodynamic phase space (for example in the P–T diagram) for which the system will behave as a liquid according to this definition.
We begin the discussion by defining correlation functions which are very useful for characterizing the structure of liquids and also for making measurements and formulating theories to describe them. The considerations here apply equally well to the imperfect gases discussed in the last chapter, but they are particularly useful and necessary for the discussion of liquids. We next describe experimental techniques which directly measure some of these correlation functions. Finally we briefly describe two distinct theoretical approaches to the description of liquids: analytical formulations based on approximate summations of series like those described in the last chapter, and numerical simulation.
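A simple example of such a correlation function is the radial distribution function g(r), which can be estimated directly from a set of particle positions. The following sketch (a hypothetical helper, not code from the text) histograms pair separations in a periodic cubic box and normalizes by the ideal-gas expectation.

```python
import math
import random

def pair_correlation(positions, box, r_max, n_bins=8):
    """Radial distribution function g(r) for particles in a periodic
    cubic box: count pair separations into spherical shells, then divide
    by the count an ideal gas of the same density would give."""
    n = len(positions)
    rho = n / box**3
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = positions[i][a] - positions[j][a]
                dx -= box * round(dx / box)      # minimum-image convention
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2           # count the pair from both ends
    g = []
    for b in range(n_bins):
        shell = 4.0 * math.pi * ((b + 1)**3 - b**3) * dr**3 / 3.0
        g.append(hist[b] / (n * rho * shell))
    return g

# Ideal-gas check: uniformly random points should give g(r) close to 1.
rng = random.Random(0)
points = [(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10))
          for _ in range(400)]
g = pair_correlation(points, box=10.0, r_max=4.0)
```

For an ideal gas g(r) = 1 at all separations, while a liquid shows a peak near the first coordination shell followed by decaying oscillations.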
Historically, the first and most successful case in which statistical mechanics has made the connection between microscopic and macroscopic description is that in which the system can be said to be in equilibrium. We define this carefully later but, to proceed, we may think of the equilibrium state as the one in which the values of the macroscopic variables do not drift in time. The macroscopic variables may have an obvious relation to the underlying microscopic description (as for example in the case of the volume of the system) or a more subtle relationship (as for temperature and entropy). The macroscopic variables of a system in equilibrium are found experimentally (and in simulations) to obey the empirical laws of thermodynamics and equations of state which relate them to one another. For systems at or near equilibrium, statistical mechanics provides the means of deriving these relationships from the underlying microscopic physical description.
We begin by discussing the details of this relation between the microscopic and macroscopic physical description in the case in which the system may be described classically. Later we run over the same ground in the quantum mechanical case. Finally we discuss how thermodynamics emerges from the description and how the classical description emerges from the quantum mechanical one in the appropriate limit.
The problems of statistical mechanics are those which involve systems with a larger number of degrees of freedom than we can conveniently follow explicitly in experiment, theory or simulation. The number of degrees of freedom which can be followed explicitly in simulations has been growing very rapidly as computers and algorithms improve. However, it is important to note that, even if computers continue to improve at their present rate, characterized by Moore's "law," scientists will not, for a very long time, be able to use them to predict many properties of nature by direct simulation of the fundamental microscopic laws of physics. This point is important enough to emphasize.
Suppose that, T years from the present, a calculation requiring computation time t0 at present will require computation time t(T) = t0 2^(−T/2) (Moore's "law," see Figure 1). Currently, state of the art numerical solutions of the Schrödinger equation for a few hundred atoms can be carried out fast enough that the motion of these atoms can be followed long enough to obtain thermodynamic properties. This is adequate if one wishes to predict properties of simple homogeneous gases, liquids or solids from first principles (as we will be discussing later). However, for many problems of current interest, many more atoms need to be studied in order to obtain predictions of properties at the macroscopic level of a centimeter or more.
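The arithmetic behind this point is easy to make explicit. Assuming, as a deliberately optimistic simplification, that the cost of a simulation grows only linearly with the number of atoms, Moore's "law" t(T) = t0 2^(−T/2) gives the waiting time as follows (the function name and parameters are illustrative):

```python
import math

def years_until_feasible(atoms_now, atoms_target, cost_exponent=1.0):
    """Years T before t(T) = t0 * 2**(-T/2) shrinks the cost of simulating
    atoms_target down to today's cost for atoms_now, assuming cost grows
    like N**cost_exponent.  cost_exponent = 1 is already very optimistic
    for first-principles electronic-structure methods."""
    speedup_needed = (atoms_target / atoms_now) ** cost_exponent
    return 2.0 * math.log2(speedup_needed)
```

Going from a few hundred atoms to the ∼ 10²³ of a macroscopic sample would take roughly 140 years even under the linear-cost assumption; with the superlinear scaling of realistic first-principles methods the wait is far longer still, which is the point being emphasized.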
Here we introduce interactions between particles, beginning with the classical case. In practice we will call a system an imperfect gas when it is sufficiently dilute so that an expansion of the pressure in a power series in the density converges reasonably quickly. This series is called the virial series and we will introduce it in this chapter. This definition of an imperfect gas thus can depend on the temperature. If the power series in the density does not converge we may refer loosely to the system as a liquid, as long as it does not exhibit long range order characteristic of various solids and liquid crystals. The experimental distinction between a gas and a liquid will be discussed more precisely in Chapter 10.
We will develop the virial series for a classical gas in two different, but equivalent, ways here. In the first method we develop a series for the partition function Z using the grand canonical distribution. By making a partial summation of this series we get a series in the fugacity. In the second method we study a series for the free energy F = −kBT ln Z and use the canonical ensemble. Though the two methods are equivalent, we discuss them both in order to provide an opportunity to introduce several concepts common in the statistical mechanical literature.
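The leading correction in the virial series is the second virial coefficient, B2 = −2π ∫ (e^(−βu(r)) − 1) r² dr. For a hard-sphere interaction it can be evaluated in a few lines; the sketch below (illustrative, not code from the text) recovers the exact hard-sphere result 2πσ³/3 by simple quadrature.

```python
import math

def b2_hard_sphere(sigma=1.0, n=10_000):
    """Second virial coefficient B2 = -2*pi * integral of
    (exp(-beta*u(r)) - 1) * r**2 dr, by the midpoint rule.  For a hard
    core of diameter sigma the Mayer function exp(-beta*u) - 1 equals
    -1 for r < sigma and 0 beyond, so the exact answer is
    2*pi*sigma**3/3, independent of temperature."""
    r_max = 2.0 * sigma
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        mayer = -1.0 if r < sigma else 0.0   # Mayer f-function, hard sphere
        total += mayer * r * r * dr
    return -2.0 * math.pi * total
```

Replacing the hard-sphere Mayer function with one built from, say, a Lennard-Jones potential turns the same quadrature into a temperature-dependent B2, illustrating how the convergence of the series, and hence the "imperfect gas" label, depends on temperature.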
The classical virial series will clarify more precisely than we were able to do in the last two chapters the conditions under which a gas can be treated as perfect or ideal.
We begin with some thermodynamic considerations and then proceed to a discussion of critical phenomena. In discussing critical phenomena we first describe the phenomenology, then some general considerations concerning Landau–Ginzburg free energy functionals and mean field theory and finally an introduction to the renormalization group.
Thermodynamic considerations
Consider a system at fixed pressure P and temperature T. (We will not be concerned with magnetic properties yet.) We will suppose that the system contains some integer number s of molecular species. We will also suppose that we have a means of distinguishing two or more phases of this system. This is an assumption that requires some examples and discussion. We distinguish between phases by consideration of their macroscopic properties, so a spatial as well as a temporal average is involved. For example, we distinguish gas from liquid by the difference in average density, and magnet from paramagnet by the existence of a finite average magnetization in the former. (We will discuss some more subtle cases later.) But if we wish to consider (as we do here) the possible coexistence of more than one phase, then a problem arises concerning the length scale over which we ought to average spatially. If, in a system containing two coexisting phases, we find the average properties by averaging over the entire system, then we will always get just one number and not two, and will have no means of distinguishing the phases.