The nature and origins of renormalization group ideas in statistical physics and condensed matter theory are recounted informally, emphasizing those features of prime importance in these areas of science in contradistinction to quantum field theory, in particular: critical exponents and scaling, relevance, irrelevance and marginality, universality, and Wilson's crucial concept of flows and fixed points in a large space of Hamiltonians.
Contents
Foreword
1 Introduction
2 Whence came renormalization group theory?
3 Where stands the renormalization group?
4 Exponents, anomalous dimensions, scale invariance and scale dependence
5 The challenges posed by critical phenomena
6 Exponent relations, scaling, and irrelevance
7 Relevance, crossover, and marginality
8 The task for renormalization group theory
9 Kadanoff's scaling picture
10 Wilson's quest
11 The construction of renormalization group transformations: the epsilon expansion
12 Flows, fixed points, universality and scaling
13 Conclusions
Acknowledgments
Selected bibliography
Appendix A Asymptotic behavior
Appendix B Unitarity of the renormalization group
Appendix C Nature of a semigroup
Foreword
It was a pleasure to participate in the Colloquium cosponsored by the Departments of Philosophy and of Physics at Boston University “On the Foundations of Quantum Field Theory.” In the full title, this was preceded by the phrase: “A Historical Examination and Philosophical Reflections,” which set the aims of the meeting. Naturally, the participants were mainly high-energy physicists, experts in field theories, and interested philosophers of science.
Quantum field theory was originally thought to be simply the quantum theory of fields. That is, when quantum mechanics was developed physicists already knew about various classical fields, notably the electromagnetic field, so what else would they do but quantize the electromagnetic field in the same way that they quantized the theory of single particles? In 1926, in one of the very first papers on quantum mechanics, Born, Heisenberg and Jordan presented the quantum theory of the electromagnetic field. For simplicity they left out the polarization of the photon, and took space-time to have one space and one time dimension, but that didn't affect the main results. (Comment from audience.) Yes, they were really doing string theory, so in this sense string theory is earlier than quantum field theory. Born et al. gave a formula for the electromagnetic field as a Fourier transform and used the canonical commutation relations to identify the coefficients in this Fourier transform as operators that destroy and create photons, so that when quantized this field theory became a theory of photons. Photons, of course, had been around (though not under that name) since Einstein's work on the photoelectric effect two decades earlier, but this paper showed that photons are an inevitable consequence of quantum mechanics as applied to electromagnetism.
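The mode expansion described above can be made concrete in a small numerical sketch (my own illustration, not from Born, Heisenberg and Jordan's paper): represent a single Fourier mode of the field by truncated harmonic-oscillator matrices, and check the canonical commutation relation that lets one read off the coefficients as operators that destroy and create photons.

```python
import numpy as np

# Truncation dimension for the single-mode Fock space (an assumption of
# this sketch; the true space is infinite-dimensional).
N = 12

# Annihilation operator a: a|n> = sqrt(n)|n-1>, as a matrix in the number basis.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T  # creation operator

# The number operator a†a counts quanta (photons) in the mode.
number = adag @ a

# The canonical commutator [a, a†] = 1 holds exactly except in the last
# diagonal entry, an artifact of truncating the Fock space.
comm = a @ adag - adag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))  # True away from the cutoff
```

The same bookkeeping, one oscillator per Fourier mode, is what turns the quantized electromagnetic field into a theory of photons.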
The quantum theory of particles like electrons was being developed at the same time, and made relativistic by Dirac in 1928-1930.
This commentator format is quite unfamiliar to me, and somewhat uncomfortable. I don't know whether I'm supposed to flatter the speakers, criticize them, grade them, or merely parrot them. Or should I go off on my own? This latter is tempting. If unleashed I could provide you with true answers to all the questions before us, philosophical, historical, physical, mathematical, physical-mathematical, and so on. But I'll bite my tongue and try to stick to the commentator role.
First, a few words on the topic of the opening session this afternoon - why are philosophers interested in quantum field theory? Quantum mechanics undoubtedly abounds in genuine, deep and still unresolved philosophical questions. These are usually posed, at least in popular accounts, in the context of finite systems of nonrelativistic point particles. In preparation for the conference, I asked myself: does relativistic QFT introduce any really distinctive philosophical problems? There are several possibilities, which I can only express in low-brow form. For one thing, a field is really a collection of infinitely many degrees of freedom, one for each point in space. I can well suppose that infinity raises interesting questions for philosophers. Certainly for physicists, especially in the field theory context, it has been a great preoccupation. As we'll hear later from Jackiw, infinity has its useful aspects. Next, for systems of relativistic particles in interaction, it is no longer so easily possible to speak of measurements of position and momentum of individual particles, except perhaps in asymptotic regions.
This volume is the result of a two-tier conference consisting of a two-day symposium followed by a one-day workshop. It was first conceived by a group of philosophers and historians of physics in the Greater Boston area, whose core members were Babak Ashirafi of the Massachusetts Institute of Technology, Ronald Anderson of Boston College, Tian Yu Cao of Boston University, David Kaiser of Harvard University, and Silvan S. Schweber of Brandeis University. Sponsored by the Center for Philosophy and History of Science, Boston University, the conference was held at Boston University on March 1-3, 1996, with financial support provided by the U.S. National Science Foundation and the Boston Philosophy of Science Association.
The intention was to offer an opportunity for a group of leading scholars to present their penetrating, in-depth analyses of various formulations and understandings of the foundations of quantum field theory, and to investigate the philosophical and historical issues associated with these formulations. It was also to provide a forum for the desirable, mutually beneficial, but difficult exchange of views and ideas between physicists and mathematicians on the one side and philosophers and historians on the other. Although the experiment in dialogue was not completely successful, the publication of this volume will make the valuable contributions to this conference, as well as interesting material about the tension between the two groups of scholars, accessible to a much wider audience for further theoretical, philosophical, historical, and sociological analysis.
For centuries we have said that mathematics is the language of physics. In 1960 Eugene Wigner wrote the famous article expounding the ‘unreasonable effectiveness of mathematics in the natural sciences’ [Wig]. He concludes that lecture by stating, ‘… the miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in the future and that it will extend… to wide branches of learning.’
Two basic questions have driven mathematical physics. First, what mathematical framework (basically what equations) describes nature? Once we decide on the equations, one can ask the second question: what are the mathematical properties of their solutions? We would like to know both their qualitative and quantitative properties. While it appears reasonable to answer these questions in order, they may be intimately related.
In that vein, let us inquire about the foundations of quantum electrodynamics. In this area of physics we encounter the most accurate quantitative and qualitative predictions about nature known to man. The Maxwell-Dirac equations are well accepted in their individual domains. But combined as a nonlinear system of equations, they lead to agreement between perturbation-theoretic rules on the one hand and experiment on the other that startles the imagination.
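As a rough numerical illustration of that agreement (my own numbers, not the author's): already Schwinger's leading-order QED correction to the electron's magnetic moment, a_e = α/2π, lands within about 0.15% of the measured anomaly, and higher orders of perturbation theory close the gap to parts per billion.

```python
import math

# Fine-structure constant and measured electron anomaly, both approximate
# values quoted from memory (assumptions of this sketch).
alpha = 1 / 137.035999
a_measured = 0.00115965218

# Schwinger's one-loop result: a_e = alpha / (2*pi).
a_schwinger = alpha / (2 * math.pi)

print(f"leading order : {a_schwinger:.8f}")
print(f"measured      : {a_measured:.8f}")
print(f"relative gap  : {abs(a_schwinger - a_measured) / a_measured:.2%}")
```

The point is qualitative, not a substitute for the full multi-loop computations that underwrite the precision tests.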
The topic I want to address principally is what exactly philosophers can be expected to contribute to discussions of the foundations of QFT. This of course is part of the broader problem of the relation between science and philosophy generally. I will begin, then, with some general points before turning to more detailed issues with specific reference to QFT.
Philosophy is a second-order activity reflecting on the concepts, methods and fundamental presuppositions of other disciplines, art, politics, law, science and so on, but also, most importantly, examining reflexively its own arguments and procedures. To put it crudely, other disciplines may refer to philosophy as a sort of ‘court of appeal’ to resolve foundational disputes (at a sufficient level of generality). But philosophy has to serve as its own court of appeal, to pull itself up by its own bootstraps, so to speak.
To many, science is the paradigmatic example of objective, rational, empirically warranted knowledge. So it is to epistemology, the theory of knowledge, that we must turn to examine the credentials of the scientific enterprise. But the first thing to notice about philosophy as compared with science, and in particular physics, is that there is a striking lack of consensus about what knowledge is and how it can be achieved.
Once upon a time, there was a controversy in particle physics. [Some physicists] searched for a self-consistent interpretation wherein all [particles] were equally elementary. Others... insisted on the existence of a small number of fundamental constituents and a simple underlying force law... Many recent experimental and theoretical developments seem to confirm the latter philosophy and lead toward a unique, unified, and remarkably simple and successful view of particle physics.
Introduction
A consistent description of all observed phenomena of the microworld is at hand! The so-called standard model of elementary particle physics grew by fits and starts from an exceedingly complex interplay between experiment and theory. In recent decades, experimentalists, with immense fortitude and generous funding, have identified and studied what appear to be the basic building blocks of matter and the fundamental forces that govern their interactions. Meanwhile, theorists have laboriously created, refined and reformulated a mathematical framework - quantum field theory - in terms of which the standard model is expressed. Aside from the occasional appearance (and disappearance) of conflicting data, the standard model has met every experimental test. And yet, too many deep questions remain unanswered for the standard model, and hence quantum field theory, to be the last word. Many theoretical physicists believe that an entirely new framework is needed - superstring theory or something like it - that will once again revise our understanding of space, time and matter.
In the course of these developments, the conceptual basis of the present theory has been obscured.
One of Newton's most far-reaching intuitions was to break the ‘mathematical theory of the world’ into two components. One component is given by the various specific force laws, or, in later formulations, specific Lagrangians. The other and more fundamental component is the general theory of motion, which we may denote as ‘Mechanics’. Quantum field theory (QFT) is an example of the second. It is a general scheme for treating physical theories, which is not committed to specific systems or to specific Lagrangians.
As a general scheme for treating physical theories, QFT is extraordinarily successful and remarkably flexible. Its impressive effectiveness has been emphasized in this conference by Jackiw, Shankar, and others. QFT had its periods of ill-fortune, for instance in the sixties, at the time of S-matrix theory, recalled in this conference by Kaiser and by Shankar. But then it had its glorious comebacks ‘to a glory even greater than before’. Today, our understanding of the world at the fundamental level is based on the Standard Model, which is formulated within the framework of QFT, and on classical general relativity. General relativity cannot be seen as a ‘fundamental’ theory since it neglects the quantum behavior of the gravitational field, but many of the directions that are explored with the aim of finding a quantum theory of the gravitational field and/or extending the Standard Model - perhaps to a theory of everything - are grounded in QFT.
Deser: We are about to see what is to me at least a unique ‘five fork’ round table here. That is to say, we have five people who are amongst the most important creators and makers of quantum field theory. Sidney Coleman is of course the conscience of modern field theory and the greatest teacher in the field. He's done a few nifty things himself besides that. I'm going alphabetically here, of course. Shelly Glashow, as we have been told yesterday, is a self-confessed consumer of quantum field theory, but don't let that fool you: the faux-naive attitude hides a very profound knowledge of the field as well. David Gross is of course Mr QCD, and unlike Moses, he's also been able to make the transition over to the promised string land, so that we have it both ways with him. Steve Weinberg has really made much of modern quantum field theory over the decades, and he has used it, in spades. He has now just published a classic textbook on the subject. Finally, in alphabetical order, is Arthur Wightman, who is the father of rigorous quantum field theory, amongst many other things: for example, things which have now passed into the vernacular, such as the use of Euclidean techniques to understand quantum field theory, and, as you heard in his talk, an understanding of how quantum field theory was formed.
Several speakers at this conference have emphasized the conceptual difficulties of quantum gravity (see particularly [1-3]). As they pointed out, when we bring in gravity, some of the basic premises of quantum field theory have to undergo radical changes: we must learn to do physics in the absence of a background space-time geometry. This immediately leads to a host of technical difficulties as well, for the familiar mathematical methods of quantum field theory are deeply rooted in the availability of a fixed space-time metric, which, furthermore, is generally taken to be flat. The purpose of this contribution is to illustrate how these conceptual and technical difficulties can be overcome.
For concreteness, we will use a specific non-perturbative approach and, furthermore, limit ourselves to just one set of issues: exploration of the nature of quantum geometry. Nonetheless, the final results have a certain degree of robustness, and the constructions involved provide concrete examples of ways in which one can analyze genuine field theories, with an infinite number of degrees of freedom, in the absence of a background metric. As we will see, the underlying diffeomorphism invariance is both a curse and a blessing. On the one hand, since there is so little background structure, concrete calculations are harder and one is forced to invent new regularization methods. On the other hand, when one does succeed, the final form of the results is often remarkably simple, since the requirement of diffeomorphism invariance tends to restrict the answers severely.
Very few physicists nowadays would challenge the statement that the idea of the renormalization group occupies a central place in our theoretical understanding of the physical world. Not only are many crucial components of the standard model, such as asymptotic freedom and quark confinement, in part justified by the idea of the renormalization group, but our understanding of quantum field theory itself, from the nature of the parameters that characterize the system it describes and their renormalization, to the justification of effective field theories and nonrenormalizable interactions and their theoretical structures, has been substantially dependent on the idea of the renormalization group. More profoundly, this idea has also shaped our understanding of the relationship between fundamental and effective theories, which in turn has suggested a hierarchical structure of the physical world. Since all these issues have been discussed elsewhere in the past few years, there is no point in repeating them here.
What I am going to do now is to lay out the difficulties that I have felt in my desire to justify the idea of renormalization group.
The renormalization group, as embodied in the Callan-Symanzik equation, has had a profound impact on field theory and particle physics. You've heard from Ramamurti Shankar and Michael about some of its impact on condensed matter physics. I'd like to tell you my view of what the renormalization group has meant to practising condensed matter physicists.
What have we learned from renormalization theory? We learned that the detailed physics of matter at microscopic length scales and high energies is irrelevant for critical phenomena. Many different microscopic theories lead to exactly the same physical laws at a critical point. As Michael Fisher explained, one can even make precise quantitative predictions about certain ‘universal’ critical exponents without getting the microscopic physics right in detail. What is important is symmetry, conservation laws, the range of interactions, and the dimensionality of space. The physics of the diverging fluctuations at a critical point, which take place on scales of a micron or more - that is, 10^4 angstroms - is largely ‘decoupled’ from the physics at angstrom length scales.
This story about scaling laws at a critical point tells us something about the meaning of a ‘fundamental physics’. Fundamental physics is not necessarily the physics of smaller and smaller length scales, to the extent that these length scales decouple from the physics that we're interested in at the moment. To elaborate on this point, I'd like to refer to a short paper, which influenced me a lot as a graduate student, that Ken Wilson wrote in 1972.
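The decoupling of scales described above can be made concrete with a toy renormalization step. The sketch below is my own construction, not taken from Wilson's paper: real-space decimation of the 1d Ising chain. Summing out every other spin maps the dimensionless coupling K = J/kT to K' with tanh(K') = tanh(K)^2, so repeated coarse-graining drives any finite K toward the trivial fixed point K* = 0, and the microscopic value of K washes out.

```python
import math

def decimate(K):
    """One RG step for the 1d Ising chain: tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.5  # an arbitrary microscopic coupling (assumption of the sketch)
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)

# The coupling shrinks monotonically toward the K* = 0 fixed point,
# illustrating an RG flow in the simplest possible setting.
print([round(k, 6) for k in flow])
```

In one dimension there is no nontrivial fixed point, so there is no finite-temperature transition; in higher dimensions the analogous flows possess the unstable fixed points that control critical exponents and universality.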
Aharonov and Bohm (1959) drew attention to the quantum mechanical prediction that an interference pattern due to a beam of charged particles could be produced or altered by the presence of a constant magnetic field in a region from which the particles were excluded. This effect was first experimentally detected by Chambers (1960), and since then has been repeatedly and more convincingly demonstrated in a series of experiments including the elegant experiments of Tonomura et al. (1986).
At first sight, the Aharonov-Bohm effect seems to manifest nonlocality. It seems clear that the (electro)magnetic field acts on the particles, since it affects the interference pattern they produce; and this must be action at a distance, since the particles pass through a region in which that field is absent. Now it is commonly believed that this appearance of nonlocality can be removed by taking it to be the electromagnetic potential Aμ, rather than the field Fμν, that acts locally on the particles: indeed, Bohm and Aharonov themselves took the effect to demonstrate the independent reality of the (electro)magnetic potential. But the nonlocality is real, not apparent, and cannot be removed simply by invoking the electromagnetic potential. While there may indeed be more to electromagnetism than can be represented just by the values of Fμν at all space-time points, acknowledging this fact still does not permit a completely local account of the Aharonov-Bohm effect.
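A short numeric sketch of the effect's magnitude (my own example, not from the text): the relative phase acquired between the two paths depends only on the enclosed flux, Δφ = qΦ/ħ = 2πΦ/(h/q), even though B = 0 everywhere along the paths themselves.

```python
import math

# CODATA-style constants (exact SI definitions).
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

flux_quantum = h / e       # enclosed flux of h/e shifts the pattern by one full fringe
Phi = 0.25 * flux_quantum  # example: a quarter flux quantum through the solenoid

delta_phi = 2 * math.pi * Phi / flux_quantum
print(delta_phi)  # pi/2 radians: the fringes shift by a quarter of a period
```

That the shift is periodic in Φ with period h/e is precisely what the electron-interference experiments of Tonomura et al. confirmed.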