This appendix summarizes the arguments of the papers by Luttinger and Ward that derive the Luttinger theorem stated in Sec. 3.6. This is not a proof that the theorem applies to all possible states of a crystal; it is an argument that it applies to all states that can be analytically continued from some non-interacting system, i.e., a “normal state of matter” as defined in Sec. 3.4. The derivation is an example of the use of the T ≠ 0 Green's functions of App. D to draw conclusions for T = 0.
The Luttinger theorem is a cornerstone in the theory of condensed matter. As described qualitatively in Sec. 3.6, it requires that the volume enclosed by the Fermi surface is unchanged by interactions, i.e., it is the same as for a system of non-interacting particles. Similarly, the Friedel sum rule is the requirement that the sum of phase shifts around an impurity is determined by charge neutrality, which was derived by Friedel [163] for non-interacting electrons. This section is devoted to a short summary of the original work of Luttinger and Ward and the extension of the arguments to the Friedel sum rule [166]. Here we explicitly indicate the chemical potential μ, since the variation from μ is essential to the arguments.
There are two key points: in the interacting system the wavevector k in the Brillouin zone is conserved, so that excitations can be labeled by k, and the self-energy Σ_k(ω) is purely real at the Fermi energy ω = μ at temperature T = 0. The latter point is an essential feature of a Fermi liquid or a “normal metal,” which is justified by the argument that the phase space for scattering at T = 0 vanishes as ω → μ (see Sec. 7.5). Thus, at the Fermi energy the Green's function as a function of k is the same as for an independent-particle problem with eigenvalues ε_k + Σ_k(μ). (Of course, an interacting system at any other energy cannot be described by independent particles.) In an independent-particle system at T = 0 the occupation numbers jump from 1 to 0 as a function of k at the Fermi surface, and in the interacting system there is still a discontinuity in n_k that defines the surface (Sec. 7.5).
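For reference, the theorem can be stated compactly in the notation of this appendix (a standard form; ε_k is the independent-particle eigenvalue, Σ_k the self-energy, and the factor 2 counts spin):
\[
n \;=\; \frac{N}{V} \;=\; 2\int \frac{d^3k}{(2\pi)^3}\,\theta\big(\mu-\varepsilon_{\mathbf k}-\Sigma_{\mathbf k}(\mu)\big),
\]
i.e., the density fixes the volume in k-space enclosed by the surface where G_k(ω = μ) changes sign, exactly as for non-interacting particles.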
The calculation of a wavefunction took about two afternoons, and five wavefunctions were calculated in the whole ….
Wigner and Seitz, 1933
Summary
In order to explain many important properties of materials and phenomena, it is necessary to go beyond independent-particle approximations and directly account for many-body effects that result from electronic interaction. The many-body problem is a major scientific challenge, but there has been great progress resulting from theoretical developments and advances in computation. This chapter is a short introduction to the interacting-electron problem, with some of the history that has led up to the concepts and methods described in this book.
The many-body interacting-electron problem ranks among the most fascinating and fruitful areas of research in physics and chemistry. It has a rich history, starting from the early days of quantum mechanics and continuing with new intellectual challenges and opportunities. The vitality of electronic structure theory arises in large part from the close connection with experiment and applications. It is spurred on by new discoveries and advances in techniques that probe the behavior of electrons in depth. In turn, theoretical concepts and calculations can now make predictions that suggest new experiments, as well as provide quantitative information that is difficult or not yet possible to measure experimentally.
This book is concerned with the effects of interactions between electrons beyond independent-particle approximations. Some phenomena cannot be explained by any independent-electron method, such as the broadening and lifetime of excited states and the two-particle bound states (excitons) that are crucial for the optical properties of materials. There are many other examples, such as the van der Waals interaction between neutral molecules that arises from the dipole–induced dipole interaction. This force, which is entirely due to correlation between electrons, is an essential mechanism determining the functions of biological systems. Other properties, such as thermodynamically stable magnetic phases, would not exist if there were no interactions between electrons; even though mean-field approximations can describe average effects, they do not account for fluctuations around the average. Ground-state properties, such as the equilibrium structures of molecules and solids, can be described by density functional theory (DFT) and the Kohn–Sham independent-particle equations. However, present approximations are often not sufficient, and for many properties the equations, when used in a straightforward way, do not give a proper description, even in principle.
It is a characteristic of wisdom not to do desperate things.
Henry David Thoreau
Summary
The essence of a mean-field method is to replace an interacting many-body problem with a set of independent-particle problems having an effective potential. It can be chosen either as an approximation for effects of interactions in an average sense or as an auxiliary system that can reproduce selected properties of an interacting system. The effective potential can have an explicit dependence on an order parameter for a system with a broken symmetry such as a ferro- or antiferromagnet. Mean-field techniques are vital parts of many-body theory: the starting points for practical many-body calculations and often the basis for interpreting the results. This chapter provides a summary of the Hartree–Fock approximation, the Weiss mean field, and density functional theory that have significant roles in the methods described in this book.
Mean-field methods denote approaches in which the interacting many-body system is treated as a set of non-interacting particles in a self-consistent field that takes into account some effects of the interactions in an average way. In the literature such methods are often called “one-electron”; however, in this book we use “non-interacting” or “independent-particle” to refer to mean-field concepts and approaches. We reserve the terms “one-electron” or “one-body” to denote quantities that involve quantum mechanical operators acting independently on each body in a many-body system. Mean-field approaches are relevant for the study of interacting, correlated electrons because they lead to approximate formulations that can be solved more easily than more sophisticated approaches; when judiciously chosen, mean-field solutions can yield useful, physically meaningful results, and they can provide the basis and conceptual structure for investigating the effects of correlation. The particles that are the “bodies” in a many-body theory can be the original particles with their bare masses and interactions, or, most often, they may be the solutions of a set of mean-field equations chosen to facilitate the solution of the many-body problem. A large part of many-body theory in condensed matter involves the choice of the most appropriate independent particles. Hence, it is essential to define clearly the particles that are created and annihilated by the operators in terms of which the many-body theory is formulated.
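Schematically (our notation, not a formula taken from the text), all of these mean-field methods share the same structural replacement:
\[
\hat H \;=\; \sum_i \hat h(\mathbf r_i) \;+\; \tfrac12\sum_{i\neq j} v(\mathbf r_i,\mathbf r_j)
\;\;\longrightarrow\;\;
\hat H_{\rm MF} \;=\; \sum_i \big[\hat h(\mathbf r_i) + v_{\rm eff}(\mathbf r_i)\big],
\]
where v_eff stands for the effective one-particle potential (the Hartree–Fock potential, which is non-local; the Kohn–Sham potential; or a Weiss field) and is determined self-consistently from the solutions themselves.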
These two pages conclude a book of many chapters that span an arc from fundamental theory to applications, from concepts to computation. This reflects an approach that the book is meant to promote: not competition between research areas or methods, but the awareness that exchange and combination often lead to the most important advances.
As stated on the first pages of this book, the many-body interacting-electron problem ranks among the most fascinating and fruitful areas of research in science. It combines the intellectual challenges of quantum many-body physics with the opportunities to impact areas of science and technology, from engineering to biology, from archeology to astrophysics. There has been great progress that opens the door for the future, when all these disciplines can be greatly enhanced by quantitative calculations based on the fundamental laws of quantum mechanics.
To describe, understand, and predict the phenomena that are observed in the many-body world requires new concepts and ideas. At the same time, much of the progress in the past has been driven by advances in computers. Very little of what is described in this book for real materials could have been accomplished on the computers of the 1970s, when many of the methods were invented. We expect this trend toward greater processing speed, through parallel processing and larger memory, to continue. Of course, advances in algorithms and software are responsible for progress almost as much as advances in hardware. The field relies on a triangle formed by concepts, techniques, and tools: this triangle should be expanded in all directions in order to make progress.
The methods that translate concepts into feasible approaches and make the best use of available hardware are at the heart of this book. We have described different ways to approach the problem. It is important to recognize that the various methods have different capabilities, so that a more complete picture of a given phenomenon or material can be obtained by using a variety of methods. The methods are complementary but not disjoint: there are many touching points that invite us to strive for possible combinations. Such an attitude has a long tradition in the field: Kohn–Sham DFT is used to construct trial wavefunctions for QMC, and QMC results for the homogeneous electron gas are used as input in DFT.
Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. It is now possible to make quantitative calculations and provide novel understanding of natural and man-made materials and phenomena vital to physics, chemistry, and materials science, as well as to many other fields. Electronic structure is indeed an active, growing field with enormous impact, as illustrated by the more than 10,000 papers published per year.
Much of our understanding is based on mean-field models of independent electrons, such as Hartree–Fock and other approximations, or density functional theory. The latter is designed to treat ground-state properties of the interacting-electron system, but it is often also used to describe excited states in an independent-electron interpretation. Such approaches can only go so far; many of the most interesting properties of materials are a result of interaction between electrons that cannot be explained by independent-electron descriptions. Calculations for interacting electrons are much more challenging than those of independent electrons. However, thanks to developments in theory and methods based on fundamental equations, and thanks to improved computational hardware, many-body methods are increasingly essential tools for a broad range of applications. With the present book, we aim to explain the many-body concepts and computational methods that are needed for the reader to enter the field, understand the methods, and gain a broad perspective that will enable him or her to participate in new developments.
What sets this book apart from others in the field? Which criteria determine the topics included? We want the description to be broad and general, in order to reflect the richness of the field, the generality of the underlying theories, and the wide range of potential applications. The aim is to describe matter all the way from isolated molecules to extended systems. The methods must be capable of computing a wide range of properties of diverse materials, and have promise for exciting future applications. Finally, practical computational methods are an important focus for this book.
Choices have to be made since the number of different approaches, their variations, and applications is immense, and the book is meant to be more than an overview. We therefore cannot focus on such important areas as quantum chemistry methods, e.g. coupled cluster theory and configuration interaction methods, nor do we cover all of the developments in lattice models, or explore the vast field of superconductivity.
Dynamical mean-field theory is designed to treat systems with local effective interactions that are strong compared with the independent-particle terms that lead to delocalized band-like states. Interactions are taken into account by a many-body calculation for an auxiliary system, a site embedded in a dynamical mean field, that is chosen to best represent the coupling to the rest of the crystal. The methods are constructed to be exact in three limits: interacting electrons on isolated sites, a lattice with no interactions, and the limit of infinite dimensions d → ∞ where mean-field theory is exact. This chapter is devoted to the general formulation, the single-site approximation where the calculation of the self-energy is mapped onto a self-consistent quantum impurity problem, and instructive examples for the Hubbard model on a d → ∞ Bethe lattice. Further developments and applications are the topics of Chs. 17–21.
One of the most rewarding features of condensed matter theory is the ability to address difficult problems from different points of view. The preceding Chs. 9–15 present an approach based on perturbation expansions in the Coulomb interaction. In particular, the GW approximation for the self-energy has proven to be extremely successful in describing electronic spectra of many materials, as described in Ch. 13. The methods can be applied to the ordered states of materials with d and f states, for example, ferromagnetic Ni and anti-ferromagnetic NiO, as described in Secs. 13.4 and 20.7. However, the GW and related approximations have difficulties in treating cases with degenerate or nearly degenerate states and low-energy excitations. Present-day methods do not describe phenomena like the fluctuations of local moments in a ferromagnetic material above the Curie temperature or the insulating character of NiO in the paramagnetic phase; more generally, they have difficulty describing strong correlation.
The topic of this chapter and Chs. 17–21 is dynamical mean-field theory, which is also a Green's function method in which the key quantity is the self-energy. However, it is designed to treat strong interactions for electrons in localized atomic-like states, such as the d and f states in transition metals, lanthanide and actinide elements and compounds.
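In the single-site approximation introduced in this chapter, the self-energy is taken to be local (k-independent), and the lattice and the auxiliary impurity problem are tied together by the standard self-consistency condition (written here in Matsubara frequencies, with ε_k the band dispersion):
\[
G_{\rm loc}(i\omega_n) \;=\; \frac{1}{N_k}\sum_{\mathbf k}\frac{1}{i\omega_n+\mu-\varepsilon_{\mathbf k}-\Sigma(i\omega_n)}
\;\;\stackrel{!}{=}\;\; G_{\rm imp}(i\omega_n),
\]
where Σ(iω_n) is computed for the embedded site and the condition fixes the dynamical mean field (hybridization function) seen by that site.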
In no wave function of the type (1) [product of single determinants for each spin] is there a statistical correlation between the positions of the electrons with antiparallel spin. The purpose of the aforementioned generalization of (1) is to allow for such correlations. This will lead to an improvement of the wave function and, therefore, to a lowering of the energy value.
E. Wigner, Phys. Rev. 46, 1002 (1934)
Summary
Although the exact many-body wavefunction cannot be written down in general, we do know some of its properties. For example, there are differences between the wavefunctions of insulators and metals, and the cusp condition gives the behavior as any two charges approach each other. In this chapter we also discuss approximate wavefunctions, ways to judge their accuracy, and how to include electronic correlation. Examples of many-body wavefunctions are the Slater–Jastrow (pair product) wavefunction and its generalization to pairing and backflow wavefunctions.
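As a concrete example of the pair-product form mentioned above (standard notation: D↑ and D↓ are Slater determinants of single-particle orbitals and u is a pair correlation, or Jastrow, function):
\[
\Psi_{\rm SJ}(\mathbf R) \;=\; D^{\uparrow}(\mathbf R)\,D^{\downarrow}(\mathbf R)\,
\exp\Big[-\sum_{i<j}u(r_{ij})\Big],
\]
with u chosen, for example, to satisfy the cusp conditions as r_ij → 0; pairing and backflow wavefunctions generalize the determinantal part.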
In other places in this book, we argue that it is not necessary to look at the many-body wavefunctions explicitly because they are unwieldy; the one- and two-body correlation functions discussed in Ch. 5 are sufficient to determine the energy and give information on the excitation spectra. However, these correlation functions do not always contain all information of interest. In principle, the ground-state energy of a many-electron system is a functional of the density, but the very derivation of the theorem invokes the many-body wavefunction, as expressed explicitly in Eq. (4.16). The effects of antisymmetry are manifest in the correlation functions, but antisymmetry is most simply viewed as a property of the wavefunction; electronic correlation is fundamentally a result of properties of the many-body wavefunction.
Studying many-body wavefunctions provides a point of view of many-body physics that is very useful and different from that of the approaches based on correlation functions. Many of the most important discoveries in physics have come about by understanding the nature of wavefunctions, such as the Laughlin wavefunction for the fractional quantum Hall effect, the BCS wavefunction for superconductors, p-wave pairing in superfluid 3He, and the Heitler–London approach for molecular binding. The role of Berry's phases has brought out the importance of the phase of the wavefunction in determining properties of quantum systems. This has led to new understanding of the classification of insulators, metals, superconductors, vortices, and other states of condensed matter.
… as suggested by Fermi, the time-independent Schrödinger equation … can be interpreted as describing the behavior of a system of particles each of which performs a random walk, i.e., diffuses isotropically and at the same time is subject to multiplication, which is determined by the value of the point function V.
N. Metropolis and S. Ulam, 1949
Summary
In the projector quantum Monte Carlo method, one uses a function of the hamiltonian to sample a distribution proportional to the exact ground-state wavefunction, and thereby computes exact matrix elements of it. An importance sampling transformation makes the algorithm much more efficient. In this chapter we introduce and develop the diffusion Monte Carlo method, which involves drifting, branching random walks. For any excited state, including any system with more than two electrons, one encounters the sign problem, limiting the direct application of these algorithms for most fermion systems. Instead, by using approximate fixed-node or fixed-phase boundary conditions, one can achieve efficiency similar to variational Monte Carlo. We also discuss the application of the projector method in a basis of Slater determinants.
In this chapter, we discuss a different quantum Monte Carlo method, projector Monte Carlo (PMC). This general method was first suggested by Fermi [1049]; see the quote at the start of this chapter by two of the inventors of the Monte Carlo method. An implementation of PMC was tried out in the early days of computing [1050]. Advances in methodology, in particular importance sampling, resulted in a significant large-scale application: the exact calculation of the ground-state properties of 256 hard-sphere bosons by Kalos, Levesque, and Verlet [1051] in 1974. Calculations for electronic systems and the fixed-node approximation were introduced by Anderson [1052, 1053]. One of the most important projector MC algorithms, the diffusion Monte Carlo algorithm with importance sampling for fermions, was used to compute the correlation energy of the homogeneous electron gas by Ceperley and Alder [109] in 1980; the resulting HEG correlation energy was crucial in the development of density functional calculations.
Types and properties of projectors
In this method, a many-body projector G(R, R′) = ⟨R|Ĝ|R′⟩ is repeatedly applied to filter out the exact many-body ground state from an initial state; the operation of the projector is carried out with a random walk, hence the name of this class of methods.
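To make the idea of projection by a random walk concrete, the following is a minimal sketch of diffusion Monte Carlo without importance sampling, for a single particle in a one-dimensional harmonic well. The function names and parameters are illustrative only; a production code would add importance sampling, time-step extrapolation, and more careful population control.

```python
import numpy as np

def diffusion_monte_carlo(nwalkers=500, nsteps=2000, tau=0.01, seed=0):
    """Minimal DMC for one particle in a 1D harmonic well, V(x) = x^2/2.
    The exact ground-state energy is 0.5 (hbar = m = omega = 1)."""
    rng = np.random.default_rng(seed)
    walkers = rng.normal(size=nwalkers)      # initial walker positions
    e_trial = 0.5                            # trial energy for population control
    history = []
    for step in range(nsteps):
        # Diffusion: the kinetic part of exp(-tau*H) is a Gaussian move.
        walkers = walkers + np.sqrt(tau) * rng.normal(size=walkers.size)
        # Branching: the potential part gives each walker a weight.
        weight = np.exp(-tau * (0.5 * walkers**2 - e_trial))
        # Replicate/delete walkers stochastically according to their weights.
        copies = (weight + rng.random(walkers.size)).astype(int)
        walkers = np.repeat(walkers, copies)
        # Feedback on the trial energy keeps the population near its target.
        e_trial += 0.1 * np.log(nwalkers / max(walkers.size, 1))
        history.append(e_trial)
    # Growth estimator: after equilibration, e_trial fluctuates around E_0.
    return np.mean(history[nsteps // 2:])

if __name__ == "__main__":
    print("DMC estimate of E0:", diffusion_monte_carlo())  # exact value: 0.5
```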
An idea that is developed and put into action is more important than an idea that exists only as an idea.
Buddha
Summary
In this chapter we sketch how GW calculations are performed in practice, touching upon approximations and numerical methods. Typical calculations are done in three steps: one has to determine the dynamically screened Coulomb interaction W, build the GW self-energy, and finally solve the quasi-particle or Dyson equation. All steps have their own difficulties. Choices have to be made, and the calculations are challenging for many materials. Computational approaches are constantly evolving, but many of the aspects contained in the chapter are expected to remain topical for quite some time.
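In the notation of Part III, the quantities computed in the three steps above are related, up to conventions for factors of i and time ordering, by the standard GWA/RPA expressions
\[
P = -iGG, \qquad \varepsilon = 1 - vP, \qquad W = \varepsilon^{-1}v, \qquad \Sigma^{\rm GW} = iGW,
\]
and, in the common perturbative scheme that starts from a Kohn–Sham calculation, the quasi-particle energies follow from
\[
E^{\rm QP}_{\mathbf k} \;\approx\; \varepsilon_{\mathbf k} + \big\langle \phi_{\mathbf k}\big|\,\Sigma^{\rm GW}(E^{\rm QP}_{\mathbf k}) - v_{\rm xc}\,\big|\phi_{\mathbf k}\big\rangle .
\]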
GW calculations have become part of the standard toolbox in computational condensed-matter physics. Many details on the foundations and the practical implementation can be found in overviews and reviews such as [287, 334, 347, 408].
What does it mean to do a GWA calculation in practice? The formula for the GWA self-energy is as simple as its name, but GW calculations have a long history of continuous improvements. Modern GWA calculations continue pioneering attempts to include correlation beyond Hartree–Fock using the concept of screening. Already in 1958 [409], correlation energies for the homogeneous electron gas were obtained from the study of the polarization of the gas due to an individual electron, and from the action of this polarization back on the electron, through the self-energy. These calculations, which involved several approximations, were limited to states close to the Fermi level. A GWA-like approach [410, 411] was applied to the electron gas in 1959, although these calculations did not cover the range of densities r_s ≈ 2–5 that is typical of simple metals. Hedin's work [43] is fundamental in that it presented the GWA as the first term of a series in the screened Coulomb interaction, and it contained an extensive description of the homogeneous electron gas at the GWA level. Many more studies of the HEG followed, including detailed investigations of the spectral functions [383, 384, 412], the importance of self-consistency [351, 352, 392], the electron gas in lower dimensions [369] (see also Ch. 11), and vertex corrections beyond the GWA (e.g., [413–415]), as discussed in Ch. 15.
The many-body problem consists of two parts: the first is the non-interacting system in a materials-specific external potential; the second is the Coulomb interaction that makes the problem so hard to solve. The most straightforward idea is to use perturbation theory, with the Coulomb interaction as the perturbation. This is conceptually simple, but it turns out to be difficult in practice: the Coulomb interaction is often not small compared with typical energy differences, it is long-ranged, and in the thermodynamic limit there is an infinite number of particles contributing an infinite number of mutual interaction processes. The present chapter outlines how one can deal with this problem. It contains an overview of facts that can also be found in many standard textbooks on the many-body problem, but that are useful to keep in mind in order to look at later chapters from a sound and well-established perspective.
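Schematically, the split on which the perturbation expansion is built reads (Hartree atomic units; v_ext is the materials-specific external potential, and spin indices are suppressed)
\[
\hat H = \hat H_0 + \hat H_{\rm int}, \qquad
\hat H_0 = \sum_i \Big[-\tfrac12\nabla_i^2 + v_{\rm ext}(\mathbf r_i)\Big], \qquad
\hat H_{\rm int} = \tfrac12\sum_{i\neq j}\frac{1}{|\mathbf r_i-\mathbf r_j|};
\]
in practice Ĥ_0 often also includes a mean-field potential, with a corresponding subtraction in the perturbation.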
The many-body problem is a tough one, and it has many facets. Sorting it out is like putting together a huge puzzle. The eight introductory chapters of this book provide pieces of the puzzle, and ideas on what one might do about it. In the present chapter we choose to go in one of the possible directions, in order to arrive at something tangible. The chapter gives the general framework and the main ideas; specific approximations are the topic of Chs. 10–15.
The idea is to start from an independent-particle problem and add the Coulomb interaction as a perturbation. This is not easy: first, the interaction is responsible for a rich variety of phenomena that are completely absent otherwise, such as the finite lifetime of quasi-particles, or additional structures in spectra that arise because a quasi-particle excitation may transfer its energy to other elementary excitations, for example plasmons. Second, because of the two-body Coulomb interaction, the problem scales badly with the number of electrons, and straightforward perturbation theory for the many-body hamiltonian with the Coulomb interaction as the perturbation rapidly becomes intractable or even useless, especially in large systems.
To get started, Sec. 9.1 recalls why things are not so easy. The following sections try to solve one problem after the other, starting from Sec. 9.2 where the Green's function is reformulated in a way that is appropriate for a perturbation expansion.
It is shown from first principles that, in spite of the large interatomic forces, liquid 4He should exhibit a transition analogous to the transition in an ideal Bose–Einstein gas. The exact partition function is written as an integral over trajectories, using the space-time approach to quantum mechanics.
R.P. Feynman, 1953
Summary
In this chapter we discuss imaginary-time path integrals and the path-integral Monte Carlo method for the calculation of properties of quantum systems at non-zero temperature. We discuss how Fermi and Bose statistics enter, and how to generalize the fixed-node procedure to non-zero temperature. We then discuss an auxiliary-field method for the Hubbard model. The path-integral method can also be used to perform ground-state calculations, allowing properties to be computed with less bias than in the projector Monte Carlo method. We also discuss the problem of estimating real-time response functions using information from imaginary-time correlation functions.
In previous chapters we described two QMC methods, namely variational QMC and projector (diffusion) QMC. Both are zero-temperature methods or, more properly, are formulated for single states. In this chapter we discuss the path-integral Monte Carlo (PIMC) method, which is explicitly formulated at non-zero temperature. Directly including temperature is important because many, if not most, measurements and practical applications involve significant thermal effects. One might think that to do calculations at non-zero temperature we would have to sum explicitly over excited states. Such a summation would be difficult to accomplish once the temperature is above the energy gap, because there are so many possible many-body excitations. In addition, the properties of each excitation are more difficult to calculate than those of the ground state. As we will see, path-integral methods do not require an explicit sum over excitations. As an added bonus, they provide an interesting and enlightening window through which to view quantum systems. However, the sign problem, introduced in the previous chapter, is still present for fermion systems, and the fixed-node approximation is again used to handle it.
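The object the method actually samples can be written compactly (a standard form, for M imaginary-time slices; permutations P of identical particles enter through the closure of the paths and carry the fermion sign):
\[
Z = {\rm Tr}\, e^{-\beta\hat H}
= \frac{1}{N!}\sum_{P}(\pm1)^{P}\!\int d\mathbf R_1\cdots d\mathbf R_M\,
\prod_{m=1}^{M}\langle \mathbf R_m|e^{-\tau\hat H}|\mathbf R_{m+1}\rangle,
\qquad \tau=\beta/M,\;\; \mathbf R_{M+1}=P\mathbf R_1,
\]
with the short-time matrix elements approximated, for example, by a Trotter factorization e^{-τĤ} ≈ e^{-τT̂}e^{-τV̂}, and the resulting integral over discretized paths sampled by Monte Carlo.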
An advantage of PIMC is the absence of a trial wavefunction. As a result, quantum expectation values, including ones not involving the energy, can be computed directly. For the expert, the lack of an importance function may seem a disadvantage; without it one cannot push the simulation in a preferred direction.
Condensed matter is constituted by a huge number of electrons and nuclei interacting with Coulomb potentials. The topic of this chapter is a way to deal with the full, coupled problem by separating it into a part that is tractable and the rest that is approximated. This naturally leads to the appearance of dynamical fields, and to the concept of “quasi-particles” that have the same quantum numbers as non-interacting electrons. The quasi-particles obey equations where the potential and bare interactions in the hamiltonian are replaced by dynamical self-energies and screened interactions that can describe many of the effects of correlation. In this chapter we discuss the intuitive concepts behind this approach in general terms to motivate the more rigorous formulations and approximations used in the Green's function methods of the following chapter and in Part III.
The first equations of this book, Eqs. (1.1) and (1.2), express the fundamental theory of matter in terms of electrons and nuclei that interact with Coulomb potentials. For systems with only a few electrons, a solution of the problem can be obtained by exact diagonalization, using configuration interaction methods. For many-electron systems one has to resort to other methods. For example, QMC stochastic simulations (Chs. 22–25) are among the most accurate methods known to calculate certain expectation values, such as equilibrium thermodynamic properties, total energies, the density, and various static correlation functions like those in Ch. 5. Other properties such as excitation spectra are more difficult to access with QMC. Straightforward perturbation theory is, in general, not appropriate since interactions among the electrons are of the same order of magnitude as the independent-particle terms. Hence, we must develop other approaches.
One strategy that gives access to equilibrium thermodynamic as well as dynamical excitation properties is to separate the interacting many-body problem into a part that is tractable by some means and the rest of the problem that is dealt with more approximately. The present chapter is devoted to this idea. It is a prelude to many-body Green's function methods; the aim is to provide a unified framework for the developments in Chs. 8–21, along with a qualitative description of the most relevant quantities.