Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.
Fast computers enable the solution of quantum many-body problems by Monte Carlo methods. As computing power increased dramatically over the years, similarly impressive advances occurred at the level of the algorithms, so that we are now in a position to perform accurate simulations of large systems of interacting quantum spins, Bosons, and (to a lesser extent) Fermions. The purpose of this book is to present and explain the quantum Monte Carlo algorithms being used today to simulate the ground states and thermodynamic equilibrium states of quantum models defined on a lattice. Our intent is not to review all relevant algorithms – there are too many variants to do so comprehensively – but rather to focus on a core set of important algorithms, explaining what they are and how and why they work.
Our focus on lattice models, such as Heisenberg and Hubbard models, has at least two implications. The first, obviously, is that we are not considering models in the continuum, where quantum Monte Carlo methods have traditionally focused on producing highly accurate ab initio calculations of the ground states of nuclei, atoms, molecules, and solids. Quantum Monte Carlo algorithms for simulating the ground states of continuum and lattice models, however, are very similar. In fact, the lattice algorithms are in many cases derived from the continuum methods. With fewer degrees of freedom, lattice models are compact and insightful representations of the physics in the continuum.
The second implication is a focus on both zero and finite temperature algorithms. On a lattice, it is natural to study phase transitions. In particular, the recent dramatic advances in quantum Monte Carlo lattice methods for the simulation of quantum spin models were prompted by a need for more efficient and effective ways to study finite-temperature transitions. While quantum Monte Carlo is profitably used to study zero temperature phase transitions (quantum critical phenomena), some ground state algorithms have no finite temperature analogs and vice versa. In many respects, the lattice is where the current algorithmic action is.
The book is divided into four parts. The first part is a self-contained discussion of the Monte Carlo method, its use, and its foundations, at a somewhat more advanced level than is typical.
A quantum Monte Carlo method is simply a Monte Carlo method applied to a quantum problem. What distinguishes a quantum Monte Carlo method from a classical one is the initial effort necessary to represent the quantum problem in a form that is suitable for Monte Carlo simulation. It is in making this transformation that the quantum nature of the problem asserts itself not only through such obvious issues as the noncommutativity of the physical variables and the need to symmetrize or antisymmetrize the wave function, but also through less obvious issues such as the sign problem. Almost always, the transformation replaces the quantum degrees of freedom by classical ones, and it is to these classical degrees of freedom that the Monte Carlo method is actually applied. Succeeding chapters present and explain many of the quantum Monte Carlo methods being successfully used on a variety of quantum problems. In Chapters 1 and 2 we focus on discussing what the Monte Carlo method is and why it is useful.
The Monte Carlo method
The Monte Carlo method is not a specific technique but a general strategy for solving problems too complex to solve analytically or too intensive numerically to solve deterministically. Often a specific strategy incorporates several different Monte Carlo techniques. In what is likely the first journal article to use the phrase “Monte Carlo,” Metropolis and Ulam (1949) discuss this strategy. To paraphrase them,
The Monte Carlo method is an iterative stochastic procedure, consistent with a defining relation for some function, which allows an estimate of the function without completely determining it.
This is quite different from the colloquialism, “a method that uses random numbers.” Let us examine the definition piece by piece. A key point will emerge.
Ulam and Metropolis were presenting the motivation and a general description of a statistical approach to the study of differential and integro-differential equations. These equations were their “defining relation for some function.” The “function” was the solution of these equations. This function is of course unknown a priori. Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953) a few years later would propose a statistical approach to the study of equilibrium statistical mechanics. The defining relation there was a thermodynamic average of a physical quantity over the Boltzmann distribution. The function was the physical quantity, and the unknown its average.
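The idea of estimating a quantity "without completely determining it" can be illustrated with a minimal sketch (a hypothetical example, not taken from the text): instead of tabulating a function everywhere, we estimate its average over a domain by evaluating it at randomly chosen points.

```python
import random

def mc_mean(f, n, rng=random):
    """Monte Carlo estimate of the average of f over [0, 1).

    The function f is evaluated at only n random points; its values
    elsewhere are never determined.
    """
    return sum(f(rng.random()) for _ in range(n)) / n

# Example: the average of x^2 on [0, 1) is 1/3; the estimate
# converges to it with a statistical error that shrinks as 1/sqrt(n).
rng = random.Random(12345)
estimate = mc_mean(lambda x: x * x, 100_000, rng)
```

The same pattern underlies the thermodynamic averages mentioned above: the Boltzmann weight replaces the uniform sampling, but the estimate is still a sample mean over a finite set of configurations.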
In this chapter, we discuss the fixed-node and constrained-path Monte Carlo methods for computing the ground state properties of systems of interacting electrons. These methods are arguably the two most powerful ones presently available for doing such calculations, but they are approximate. By sacrificing exactness, they avoid the exponential scaling of the Monte Carlo errors with system size that typically accompanies the simulation of systems of interacting electrons. This exponential scaling is called the Fermion sign problem. After a few general comments about the sign problem, we outline both methods, noting points of similarity and difference, plus points of strength and weakness. We also discuss the constrained-phase method, an extension of the constrained-path method, which controls the phase problem that develops when the ground state wave function cannot be real.
Sign problem
The “sign problem” refers to the exponential increase of the Monte Carlo errors with increasing system size or decreasing temperature (e.g., Loh et al., 1990, 2005; Gubernatis and Zhang, 1994) that often accompanies a Markov chain simulation whose limiting distribution is not everywhere positive. Such a case generally arises in simulations of Fermion and frustrated quantum-spin systems. It seems so inherent to Monte Carlo simulations of Fermion systems that the phrase “the sign problem” to many seems almost synonymous with the phrase “the Fermion sign problem.”
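The exponential growth of the errors can be made concrete with a toy calculation (a schematic sketch; the exponential form of the average sign and the parameter names are assumptions for illustration, not results from the text). If the average sign decays as ⟨s⟩ ~ exp(−βNΔ), then the number of independent samples of s = ±1 needed to resolve ⟨s⟩ to a fixed relative error grows exponentially with βN:

```python
import math

def mean_sign(beta, n_sites, delta_f):
    """Toy model: average sign decaying exponentially with inverse
    temperature beta and system size N, <s> ~ exp(-beta * N * delta_f).
    delta_f is a hypothetical free-energy-density difference."""
    return math.exp(-beta * n_sites * delta_f)

def samples_needed(beta, n_sites, delta_f, target_rel_err=0.1):
    """Independent samples of s = +/-1 needed so that the relative
    statistical error of <s> reaches target_rel_err.
    For s = +/-1, Var(s) = 1 - <s>^2, so the relative error after n
    samples is sqrt((1 - <s>^2) / n) / <s>."""
    s = mean_sign(beta, n_sites, delta_f)
    return math.ceil((1.0 - s * s) / (target_rel_err * s) ** 2)
```

In this toy model, doubling the system size from N = 10 to N = 20 at β = 1 and Δ = 0.1 multiplies the required number of samples by roughly e² ≈ 7.4; each further doubling compounds the cost in the same way.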
Explanations for the cause of the sign problem vary and are still debated. In this chapter, we choose to summarize two explanations that seem to connect the causes in ground state Fermion simulations in the continuum and on the lattice. The sign problem, of course, is not limited to ground state calculations or even to Fermion simulations. While the cause we discuss in Sections 11.2 and 11.3 focuses on the low-lying states of diffusion-like operators, several topological pictures have been proposed (Muramatsu et al., 1992; Samson, 1993; Gubernatis and Zhang, 1994; Samson, 1995). Some of these discussions are done in the context of the zero- and finite-temperature determinant methods (Muramatsu et al., 1992; Gubernatis and Zhang, 1994). Others are done more analytically from a Feynman path-integral point of view (Samson, 1993, 1995). Some are for particles with statistics other than Fermions. The presentation here is tailored to the Monte Carlo methods discussed in this chapter.
The presence of dynamical information is a feature distinguishing a finite-temperature quantum Monte Carlo simulation from a classical one. We now discuss numerical methods for extracting this information that use techniques and concepts borrowed from an area of probability theory called Bayesian statistical inference. The use of these techniques and concepts provided a solution to the very difficult problem of analytically continuing imaginary-time Green's functions, estimated by a quantum Monte Carlo simulation, to the real-time axis. Baym and Mermin (1961) proved that a unique mapping between these functions exists. However, executing this mapping numerically, with a simulation's incomplete and noisy data, transforms the problem into one without a unique solution and thus into a problem of finding a “best” solution according to some reasonable criterion. Instead of executing the analytic continuation between imaginary- and real-time Green's functions, thereby obtaining real-time dynamics, we instead estimate the experimentally relevant spectral density function these Green's functions share. We present three “best” solutions and emphasize that making the simulation data consistent with the assumptions of the numerical approach is a key step toward finding any of these best solutions.
Preliminary comments
The title of this chapter, “Analytic Continuation,” is unusual in the sense that it describes the task we wish to accomplish instead of the method we use to accomplish it. If we used the name of the method, the title would be something like “Bayesian Statistical Inference Using an Entropic Prior.” A shorter title would be “The Maximum Entropy Method.” We hope by the end of the chapter the reader will agree that using the short title is perhaps too glib and the longer one has meaningful content.
The methods to sample from a discrete probability for n events, presented in Algorithms 1 and 2, become inefficient when n is large. To devise an efficient algorithm for large n, we must distinguish the case where the list of probabilities is constantly changing from the case where it remains unchanged. In the latter case, there are several ways to boost efficiency by performing a modestly expensive operation once and then using more efficient operations in subsequent samplings. One such approach is to sort the list of probabilities, which generally requires O(n log n) operations, and then use a bisection method, requiring O(log n) operations, to select the events from the list. What we really want, however, is a method requiring only O(1) operations per sample. Walker's alias algorithm (Walker, 1977) has this property.
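The bisection approach can be sketched as follows (a minimal illustration, assuming the common concrete realization in which one precomputes a cumulative table of the fixed weights and then searches it by bisection for each draw):

```python
import bisect
import random
from itertools import accumulate

def make_cdf(weights):
    """One-time O(n) setup: cumulative sums of the (unchanging) weights."""
    return list(accumulate(weights))

def sample(cdf, rng=random):
    """O(log n) per draw: bisect into the cumulative table to find the
    event whose cumulative interval contains a uniform random point."""
    r = rng.random() * cdf[-1]
    return bisect.bisect_right(cdf, r)

# Example with weights 6, 1, 3, 2, 8: event i is returned with
# probability weights[i] / 20.
cdf = make_cdf([6, 1, 3, 2, 8])
```

The setup cost is paid once; every subsequent draw costs only a binary search, which is why this scheme suits the case where the probability list remains unchanged.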
Suppose we need to repeatedly choose one of five colors randomly – red, blue, yellow, pink, and green – with weights 6, 1, 3, 2, and 8 (Fig. A.1). We first generate five sticks with lengths 6, 1, 3, 2, and 8 and paint each stick with the color that it represents. Next, we define a linked list of the sticks that are longer than the average (which is 4), and another for the sticks that are shorter than the average. In the present case, the long-stick list is (“red”→“green”), and the short-stick list is (“blue” → “yellow” → “pink”). We pick the first item from each list and cut the longer (red) stick in two pieces in such a way that if we join one of the two to the shorter (blue) stick, we obtain an average-length stick. As a result, we are left with a red stick of length 3 and a joint stick of length 4. Since the original red stick was made shorter than the average length, we remove it from the long-stick list and append it to the short-stick list. On the other hand, the original blue stick has become an average-length stick, so we remove it from the short-stick list. Then, we pick the first item from each list and repeat the same operations again and again. When finished, we have five sticks of average length, some with a single color and others with two colors.
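The stick-cutting construction above translates directly into code (a sketch of Walker's alias method; the function and variable names are our own choices, not from the text). Each "stick" i keeps a fraction prob[i] of its average-length slot for itself, and the remainder of the slot is an "alias" piece cut from a longer stick:

```python
import random

def build_alias_table(weights):
    """Walker's alias method: O(n) table construction.

    Scales the weights so the average stick length is 1, then repeatedly
    pairs a short stick with a long one, topping the short stick up to
    average length with a piece of the long one.
    """
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]   # stick lengths, average 1
    prob = [0.0] * n
    alias = [0] * n
    short = [i for i, s in enumerate(scaled) if s < 1.0]
    long_ = [i for i, s in enumerate(scaled) if s >= 1.0]
    while short and long_:
        s, l = short.pop(), long_.pop()
        prob[s] = scaled[s]                     # keep this fraction for s
        alias[s] = l                            # fill the rest with l
        scaled[l] -= 1.0 - scaled[s]            # the long stick shrinks
        (short if scaled[l] < 1.0 else long_).append(l)
    for i in long_ + short:                     # leftovers: whole slots
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """O(1) per draw: pick a slot uniformly, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

For the five colors above, `build_alias_table([6, 1, 3, 2, 8])` yields five slots, each holding at most two colors, and every draw costs one uniform integer and one uniform real, independent of n.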