Historically, the discipline of statistical physics originated in attempts to describe thermal properties of matter in terms of its constituent particles, but also played a fundamental role in the development of quantum mechanics. More generally, the formalism describes how new behavior emerges from interactions of many degrees of freedom, and as such has found applications in engineering, social sciences, and increasingly in biological sciences. This book introduces the central concepts and tools of this subject, and guides the reader to their applications through an integrated set of problems and solutions.
The material covered is directly based on my lectures for the first semester of an MIT graduate course on statistical mechanics, which I have been teaching on and off since 1988. (The material pertaining to the second semester is presented in a companion volume.) While the primary audience is physics graduate students in their first semester, the course has typically also attracted enterprising undergraduates, as well as students from a range of science and engineering departments. While the material is reasonably standard for books on statistical physics, students taking the course have found my exposition more useful, and have strongly encouraged me to publish this material. Aspects that make this book somewhat distinct are the chapters on probability and interacting particles. Probability is an integral part of statistical physics, yet it is not sufficiently emphasized in most textbooks.
Many physical problems involve calculating sums over paths. Each path could represent one possible physical realization of an object such as a polymer, in which case the weight of the path is the probability of that configuration. The weights themselves could be complex, as in the case of Feynman paths describing the amplitude for the propagation of a particle. Path integral calculations are now a standard tool of the theoretical physicist, with many excellent books devoted to the subject [R.P. Feynman and A.R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965); F.W. Wiegel, Introduction to Path-Integral Methods in Physics and Polymer Science (World Scientific, Singapore, 1986)].
What happens to sums over paths in the presence of quenched disorder in the medium? Individual paths are no longer weighted simply by their length, but are influenced by the impurities along their route. The sum may be dominated by “optimal” paths pinned to the impurities, with these optimal paths usually forming complex hierarchical structures. Physical examples are provided by the interface of the random bond Ising model in two dimensions, and by magnetic flux lines in superconductors. The actual value of the sum naturally depends on the particular realization of randomness and varies from sample to sample. I shall initially motivate the problem in the context of the high-temperature expansion for the random bond Ising model. Introducing the sums over paths for such a lattice model avoids the difficulties associated with short-distance cutoffs.
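As a concrete illustration, here is a minimal sketch of such a sum over directed paths on a square lattice, with quenched random site energies standing in for the impurities. The transfer-matrix recursion is standard, but the lattice size, disorder distribution, and temperature are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Directed paths on a square lattice: at each of T forward steps the path
# shifts one site left or right. Every visited site carries a random
# energy, standing in for impurities in the medium (periodic transverse
# boundary, purely for simplicity).
T, W = 50, 101            # number of steps, transverse width of the lattice
beta = 1.0                # inverse temperature
energies = rng.normal(0.0, 1.0, size=(T, W))   # quenched random energies

# Z[x] holds the sum of Boltzmann weights of all directed paths ending at
# transverse site x -- a transfer-matrix (dynamic-programming) evaluation
# of the sum over paths.
Z = np.zeros(W)
Z[W // 2] = 1.0           # every path starts from the central site
for t in range(T):
    Z = (np.roll(Z, 1) + np.roll(Z, -1)) * np.exp(-beta * energies[t])

print("log Z for this disorder realization:", np.log(Z.sum()))
```

Rerunning with a different seed changes log Z, illustrating the sample-to-sample variation of the sum mentioned above.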
Individual atoms have magnetic moments due to the orbits and spins of the electrically charged particles within them. Interaction with imposed external magnetic fields tends to produce some ordering of these magnetic moments. But this ordering is opposed by thermal motion, which tends to randomize their orientations. It is the balance of these two opposing influences that determines the magnetization of most materials.
Diamagnetism, paramagnetism, and ferromagnetism
Consider what happens when we place a material in an external magnetic field. According to Lenz's law, any change in magnetic field through a current loop produces an electromotive force that opposes the intruding field. On an atomic level, each electron orbit is a tiny current loop. The external field places an extra force on the orbiting electrons, which causes small modifications of their orbits and a slight magnetization of the material in the direction opposite to the external field (homework). This response is called “diamagnetism” and is displayed by all materials.
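For reference, the classical (Langevin) estimate of this diamagnetic response, which is presumably what the homework targets; this is a standard textbook result, not a derivation quoted from this chapter. An electron orbit in a field $B$ picks up the Larmor frequency shift, and the corresponding change in its magnetic moment opposes the field:

```latex
\Delta\omega_L = \frac{eB}{2m}, \qquad
\Delta\mu = -\,\frac{e^{2}\langle \rho^{2}\rangle}{4m}\,B ,
```

where $\langle\rho^{2}\rangle$ is the mean square distance of the electron from the field axis. The minus sign is Lenz's law at work: the induced moment points opposite to the external field.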
In addition, there is a tendency for the tiny atomic magnets to change their orientations to line up with an imposed external field (Figure 17.1). This response is called “paramagnetism.” It gives the material a net magnetic moment in a direction parallel to the imposed external magnetic field. Not all materials are paramagnetic, because in some materials the atoms have no net magnetic moment to begin with, and in others the atomic magnets cannot change their orientations. But most materials are paramagnetic, and their paramagnetism dominates over diamagnetism.
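The competition between field alignment and thermal agitation can be made quantitative with a standard textbook sketch (assumed here, not derived in this chapter): for atomic moments $\mu$ restricted to two orientations along a field $B$, Boltzmann statistics gives the average aligned moment

```latex
\langle \mu_z \rangle = \mu \tanh\!\left(\frac{\mu B}{k_B T}\right)
\;\approx\; \frac{\mu^{2} B}{k_B T} \quad (\mu B \ll k_B T),
```

the high-temperature limit being Curie's law: the paramagnetic response weakens as thermal motion increasingly randomizes the orientations.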
The subject of thermodynamics was being developed on a postulatory basis long before we understood the nature or behavior of the elementary constituents of matter. As we became more familiar with these constituents, we were still slow to place our trust in the “new” field of quantum mechanics, which was telling us that their behaviors could be described correctly and accurately using probabilities and statistics.
The influence of this historical sequence has lingered in our traditional thermodynamics curriculum. Until recently, we continued to teach an introductory course using the more formal and abstract postulatory approach. Now, however, there is a growing feeling that the statistical approach is more effective. It demonstrates the firm physical and statistical basis of thermodynamics by showing how the properties of macroscopic systems are direct consequences of the behaviors of their elementary constituents. An added advantage of this approach is that it is easily extended to include some statistical mechanics in an introductory course. It gives the student a broader spectrum of skills as well as a better understanding of the physical bases.
This book is intended for use in the standard junior or senior undergraduate course in thermodynamics, and it assumes no previous knowledge of the subject. I try to introduce the subject as simply and succinctly as possible, with enough applications to indicate the relevance of the results but not so many as might risk losing the student in details.
Imagine you could shrink into the atomic world. On this small scale, motion is violent and chaotic. Atoms shake and dance wildly, and each carries an electron cloud that is a blur of motion. By contrast, the behavior of a very large collection of atoms, such as a baseball or a planet, is quite sedate. Their positions, motions, and properties change continuously yet predictably. How can the behavior of macroscopic systems be so predictable if their microscopic constituents are so unruly? Shouldn't there be some connection between the two?
Indeed, the behaviors of the individual microscopic elements are reflected in the properties of the system as a whole. In this course, we will learn how to make the translation, either way, between microscopic behaviors and macroscopic properties.
The translation between microscopic and macroscopic behavior
The statistical tools
If you guess whether a flipped coin will land heads or tails, you have a 50% chance of being wrong. But for a very large number of flipped coins, you may safely assume that nearly half will land heads. Even though the individual elements are unruly, the behavior of a large system is predictable (Figure 1.1).
Your prediction could go the other way, too. From the behavior of the entire system, you might predict probabilities for the individual elements.
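A quick numerical check of this claim (a sketch with arbitrary sample sizes): the fraction of heads fluctuates widely for a few coins but settles near 1/2 as the number of coins N grows, with the spread shrinking like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(42)

# For each system size N, repeat the experiment "flip N fair coins" 2000
# times and look at the spread of the heads fraction across repetitions.
for N in (10, 1_000, 100_000):
    heads = rng.binomial(N, 0.5, size=2_000)   # number of heads per repetition
    fractions = heads / N                      # heads fraction per repetition
    print(f"N = {N:>6}: mean fraction = {fractions.mean():.4f}, "
          f"spread = {fractions.std():.4f} (theory: 0.5/sqrt(N) = {0.5/np.sqrt(N):.4f})")
```

For a fair coin the standard deviation of the heads fraction is sqrt(p(1-p)/N) = 0.5/sqrt(N), which is why systems with enormous numbers of elements behave so reproducibly.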
As indicated in Chapter 1, we will begin our studies by considering “small systems” – those with relatively few elements. Small systems are important in many fields, such as microelectronics, thin films, surface coatings, and materials at low temperatures. The elements of small systems may be impurities in semiconductors, signal carriers, vortices in liquids, vibrational excitations in solids, elements in computer circuits, etc. We may wish to study some behavioral characteristic of a small population of plants or people, or to analyze the results of a small number of identical experiments. Besides the importance of small systems in their own right, the pedagogical reason for studying these easily comprehensible systems first is that they give us better insight into the behaviors of larger systems and a better appreciation for the statistical tools we must develop to study them.
The introduction to larger systems will begin in Chapter 4. Each macroscopic system contains a very large number of microscopic elements. A glass of water has more than 10²⁴ identical water molecules, and the room you are in probably has over 10²⁷ identical nitrogen molecules and one quarter that number of identical molecules of oxygen. The properties of large systems are very predictable, even though the behavior of any individual element is not (Figure 2.1). This predictability allows us to use rather elegant and streamlined statistical tools in analyzing them.
By contrast, the behaviors of smaller systems are more erratic and unpredictable, requiring the use of more detailed statistical tools.
Energy can be transferred between systems by the following three mechanisms:
the transfer of heat ΔQ;
the transfer of work ΔW (i.e., one system does work on another);
the transfer of particles ΔN.
These are called thermal, mechanical, and diffusive interactions, respectively (see Figure 5.1). The first three sections of this chapter introduce these interactions in a manner that is intuitive and qualitatively correct, although lacking in the mathematical rigor of the chapters that follow.
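Taken together, these three channels make up the energy balance expressed by the first law of thermodynamics. The sign convention below (ΔW counted as work done by the system, μ the average energy carried in per particle) is a common one and is assumed here rather than quoted from this chapter:

```latex
\Delta E = \Delta Q - \Delta W + \mu\,\Delta N
```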
Heat transfer – the thermal interaction
In the preceding chapter we learned that thermal energy gets distributed equally among all available degrees of freedom, on average. So the energy of interacting systems tends to flow from hot to cold until it is equipartitioned among all degrees of freedom. The energy that is transferred due to such temperature differences is called heat, and it travels via three distinct mechanisms: conduction, radiation, and convection.
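This “distributed equally” statement is the equipartition theorem; as a reminder (a standard result, not restated in this passage), each quadratic degree of freedom carries, on average,

```latex
\langle \varepsilon \rangle = \tfrac{1}{2}\, k_B T .
```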
Conduction involves particle collisions (Figure 5.2a). On average, collisions transfer energy from more energetic particles to less energetic ones. Energy flows from hot to cold.
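A minimal caricature of this mechanism (illustrative parameters, not from the text): particles on a chain repeatedly re-split energy with a random neighbor, and an initial hot/cold division relaxes toward a common average.

```python
import numpy as np

rng = np.random.default_rng(7)

# A chain of particles: the left half starts "hot", the right half "cold".
n = 100
energy = np.concatenate([np.full(n // 2, 2.0), np.full(n // 2, 0.5)])

# Each "collision" picks an adjacent pair and randomly re-splits their
# combined energy -- on average, energy moves from the more energetic
# particle to the less energetic one.
for _ in range(200_000):
    i = rng.integers(0, n - 1)
    total = energy[i] + energy[i + 1]
    split = rng.random()
    energy[i], energy[i + 1] = split * total, (1.0 - split) * total

print("mean energy, left half :", energy[:n // 2].mean())   # falls toward 1.25
print("mean energy, right half:", energy[n // 2:].mean())   # rises toward 1.25
```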
Energy transfer via radiation can be illustrated by toy boats in a tub (Figure 5.2b). If one is jiggled up and down, it sends out waves. Other toy boats will oscillate up and down as these waves pass by. In a similar fashion (but at much higher speeds), electromagnetic waves are generated by accelerating electrical charges, and this energy is absorbed by other electrical charges that these waves encounter.