In comparison with STR, which is a static theory of the kinematic structures of Minkowskian spacetime, GTR, as a dynamical theory of the geometrical structures of spacetime, is essentially a theory of gravitational fields. The first step in the transition from STR to GTR, as we discussed in section 3.4, was the formulation of EP, through which the inertial structures of the relative spaces of uniformly accelerated frames of reference can be represented by static homogeneous gravitational fields. The next step was to apply the idea of EP to uniformly rotating rigid systems. Einstein (1912a) then found that the presence of the resulting stationary gravitational fields invalidated Euclidean geometry. In a manner characteristic of his style of theorizing, Einstein (with Grossmann, 1913) immediately generalized this result and concluded that the presence of a gravitational field generally required a non-Euclidean geometry, and that the gravitational field could be mathematically described by a four-dimensional Riemannian metric tensor gμν (section 4.1). With the discovery of the generally covariant field equations satisfied by gμν, Einstein (1915a–d) completed his formulation of GTR.
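The generally covariant field equations that completed this development can be written compactly in modern notation (a sketch; the cosmological term, added by Einstein only later, is omitted):

```latex
% Einstein's field equations: spacetime geometry (left-hand side) is
% sourced by energy-momentum (right-hand side); g_{\mu\nu} is the metric.
R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

Here R_{μν} and R are the Ricci tensor and scalar constructed from gμν, and T_{μν} is the energy-momentum tensor of matter.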
It is tempting to interpret GTR as a geometrization of gravity. But Einstein's interpretation was different. For him, ‘the general theory of relativity formed the last step in the development of the programme of the field theory … Inertia, gravitation, and the metrical behaviour of bodies and clocks were reduced to a single field quality’ (1927).
Modern gauge theory started with Yang and Mills's proposal about isotopic gauge invariance of the strong interactions. The Yang–Mills theory, a non-Abelian gauge theory, emerged totally within the framework of the quantum field programme, in which interactions are transmitted by field quanta and realized through localized coupling between field quanta. Physically, it obtained impetus from the charge independence of the strong nuclear forces, but at the same time was constrained by the short-range character of the same forces. Methodologically, it was driven by the desire to have a universal principle that would fix a unique form of coupling among many possibilities. Physicists took some interest in the Yang–Mills theory, in part because they thought it renormalizable, but soon abandoned it because there seemed to be no gauge-invariant mechanism that could account for the short-range character of the nuclear forces. (Chapter 9)
The difficulty was overcome, first in the early 1960s, by the discovery of the spontaneous breakdown of symmetry (section 10.1), and then in the early 1970s by the discovery of asymptotic freedom (section 10.2). With the proof, by Veltman and 't Hooft, of the renormalizability of non-Abelian gauge theories (section 10.3), a seemingly self-consistent conceptual framework was available to the particle physics community.
Conceptually, the framework is very powerful in describing various fundamental interactions in nature, and in exploring novel, global features of field theories that were supposed to be local, features that have a direct bearing on our understanding of the structure of the vacuum and the quantization of charges (section 10.4).
Quantum field theory (QFT) can be analyzed in terms of its mathematical structure, its conceptual system for physical descriptions, or its basic ontology. The analysis can be done logically or historically. In this chapter, only the genesis of the conceptual foundations of QFT relevant to its basic ontology will be treated; no discussion of its mathematical structures or its epistemological underpinnings will be given. Some conceptual problems, such as those related to probability and measurement, will be discussed, but only because of their relevance to the basic ontology of QFT, rather than their intrinsic philosophical interest. Here, by basic ontology I mean the irreducible entities that QFT is invented to describe. The often mentioned candidates for the basic ontology of QFT, in fact of the physical world, are the discrete particle and the continuous field. Another possible candidate (the spacetime point) has also been suggested recently (Redhead, 1983). Since the aim of this chapter is to analyze the historical process in which the conceptual foundations of QFT were laid down, rather than the logical structure of QFT which philosophers of the present day treat, no discussion of the last possibility will be given.
The content of this chapter involves the formation and interpretation, in a roughly chronological order, of the concepts of the wave function, quantization, quantization of the field, the vacuum, interactions between fields, and renormalization. The first two topics will be discussed in relation to the quantization of the field, with their role being taken as the starting point of the conceptual development of QFT.
Although the developments that I plan to explore began with Einstein's general theory of relativity (GTR), without a proper historical perspective, it would be very difficult to grasp the internal dynamics of GTR and subsequent developments as further stages of a field programme. Such a perspective can be suitably furnished with an adequate account of the rise of the field programme itself. The purpose of this chapter is to provide such an account, in which major motivations and underlying assumptions of the developments that led to the rise of the field programme are briefly outlined.
Physical actions in a mechanical framework
As we mentioned in chapter 1, two intellectual trends, the mechanization and mathematization of the world that occurred in the early modern period, effectively changed people's conceptions of reality and causality. According to mechanical philosophers, such as Descartes and Boyle, the physical world was nothing but matter in motion. According to the Neoplatonists, such as Kepler and Henry More, the physical world was mathematical in its structure. As a synthesis of the two, the inner reality of the physical world appeared as merely material bodies with their motions governed by mathematical laws. Here, matter can take either the form of plenum, as in the case of Descartes, or the form of corpuscles, as in the case of Gassendi, Boyle, and Newton. The difference between the two mechanical systems led to different understandings of physical action, as we shall see in a moment.
The mechanization of the world also implied that the true nature of phenomena, the essence and cause of all changes and effects, can be found in the motion of material bodies in space.
For a gauge invariant system of quantum fields to be a self-consistent framework for describing various interactions, some mechanisms for short-range interactions must be found (sections 10.1 and 10.2) and its renormalizability proved (section 10.3). In addition, non-Abelian gauge theories have exhibited some novel features, which have suggested certain interpretations concerning the structure of the vacuum state and the conditions for the quantization of physical parameters such as charges. Thus a new question, which never appeared in the traditional foundational investigations of (Abelian-gauge-invariant) QED or other non-gauge-invariant local field theories, has posed itself with a certain urgency, attracted intense attention, and become a favorite research topic among a sizable portion of mathematics-oriented physicists in recent years. This is the question of the global features of non-Abelian gauge field theories (section 10.4). This chapter will review the formation of these conceptual foundations of gauge theories, both as a theoretical framework and as a research programme, and will register some open questions that remain to be addressed by future investigators.
Mechanisms for short-range interactions (I): spontaneous symmetry breaking
The original Yang–Mills theory failed to be an improvement on the already existing theories of strong nuclear interactions, because it could not reproduce the observed short-range behavior of the nuclear force without explicitly violating gauge symmetry. A major obstacle to be overcome in the further development of gauge theories was therefore the need for a consistent scheme with massive gauge quanta that retained (in some sense) gauge invariance. One way of solving the problem is provided by so-called spontaneous symmetry breaking (SSB).
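A minimal illustration of the mechanism, in the standard textbook form rather than any historically specific model, is a complex scalar field with a quartic potential:

```latex
% Quartic ('Mexican hat') potential for a complex scalar field \phi:
V(\phi) \;=\; \mu^{2}\,|\phi|^{2} + \lambda\,|\phi|^{4}, \qquad \lambda > 0 .
% For \mu^{2} < 0 the minima lie on the circle |\phi|^{2} = -\mu^{2}/(2\lambda);
% choosing one vacuum breaks the symmetry, and a gauge field coupled to \phi
% acquires a mass proportional to g\,\langle\phi\rangle, so the associated
% force becomes short-ranged.
```

The point of the sketch is that the Lagrangian remains gauge symmetric throughout; only the vacuum state fails to share the symmetry, which is what allows massive gauge quanta without explicit violation of gauge invariance.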
The study of the interactions between electrically charged particles and electromagnetic fields within the framework of QFT is called quantum electrodynamics (QED). QED, and in particular its renormalized perturbative formulation, was modeled by various theories to describe other interactions, and thus became the starting point for a new research programme, the quantum field programme (QFP). The programme has been implemented by a series of theories, whose developments are strongly constrained by some of its characteristic features, which have been inherited from QED. For this reason, I shall start this review of the sinuous evolution of QFP with an outline of these features.
Essential features
QED is a theoretical system consisting of local field operators that obey equations of motion, certain canonical commutation and anticommutation relations (for bosons and fermions, respectively), and a Hilbert space of state vectors that is obtained by the successive application of the field operators to the vacuum state, which, as a Lorentz invariant state devoid of any physical properties, is assumed to be unique. Let us look in greater detail at three assumptions that underlie the system.
First is the locality assumption. According to Dirac (1948), ‘a local dynamical variable is a quantity which describes physical conditions at one point of space-time. Examples are field quantities and derivatives of field quantities’, and ‘a dynamical system in quantum theory will be defined as localizable if a representation for the wave function can be set up in which all the dynamical variables are localizable’.
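The locality assumption can be made concrete, in a minimal sketch for a neutral scalar field (units with ħ = c = 1 assumed here), by the canonical equal-time commutation relations together with the microcausality condition:

```latex
% Canonical equal-time commutators for a scalar field \phi and its
% conjugate momentum \pi:
[\,\phi(\mathbf{x},t),\, \pi(\mathbf{y},t)\,] = i\,\delta^{3}(\mathbf{x}-\mathbf{y}),
\qquad
[\,\phi(\mathbf{x},t),\, \phi(\mathbf{y},t)\,] = 0 .
% Microcausality: field operators commute at spacelike separation,
[\,\phi(x),\, \phi(y)\,] = 0 \quad \text{for } x - y \text{ spacelike.}
```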
Einstein's GTR initiated a new programme for describing fundamental interactions, in which the dynamics was described in geometrical terms. After Einstein's classic paper on GTR (1916c), the programme was carried out by a sequence of theories. This chapter is devoted to discussing the ontological commitments of the programme (section 5.2) and to reviewing its evolution (section 5.3), including some topics (singularities, horizons, and black holes) that began to stimulate a new understanding of GTR only after Einstein's death (section 5.4), with the exception of some recent attempts to incorporate the idea of quantization, which will be addressed briefly in section 11.3. Considering the enormous influence of Einstein's work on the genesis and developments of the programme, it seems reasonable to start this chapter with an examination of Einstein's views of spacetime and geometry (section 5.1), which underlie his programme.
Einstein's views of spacetime and geometry
The relevance of spacetime geometry to dynamics
Generally speaking, a dynamical theory, whether or not it describes fundamental interactions, must presume some geometry of space for the formulation of its laws and for its interpretation. In fact, the choice of a geometry predetermines or summarizes the theory's dynamical foundations, namely its causal and metric structures. For example, in Newtonian (or special relativistic) dynamics, Euclidean (or Minkowskian) (chrono-)geometry with its affine structure, which is determined by the kinematic symmetry group (the Galileo or Lorentz group) as the mathematical description of the kinematic structure of space (time), determines or reflects the inertial law as its basic dynamical law. In these theories, the kinematic structures have nothing to do with dynamics.
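For the special relativistic case, the kinematic structure referred to here is the Minkowski line element, whose invariance group is precisely the Lorentz group:

```latex
% Minkowski line element (the chrono-geometry of STR):
ds^{2} \;=\; -\,c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2} .
% A Lorentz boost along x with velocity v, \gamma = (1 - v^{2}/c^{2})^{-1/2},
% leaves ds^{2} unchanged:
t' = \gamma\!\left(t - \frac{v\,x}{c^{2}}\right), \qquad x' = \gamma\,(x - v\,t) .
```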
The aim of this volume is to give a broad synthetic overview of 20th century field theories, from the general theory of relativity to quantum field theory and gauge theory. These theories are treated primarily as conceptual schemes, in terms of which our conceptions of the physical world are formed. The intent of the book is to give a historico-critical exposition of the conceptual foundations of the theories, and thereby detect a pattern in the evolution of these conceptions.
As an important component of culture, a conception of the physical world involves a model of the constitution and workings of nature, and includes assumptions about the mechanisms for fundamental interactions among the ultimate constituents of matter, and an interpretation of the nature of space and time. That is, the conception involves what philosophers usually call metaphysical assumptions. Talking about metaphysics is out of fashion these days. This is particularly so in the profession of science studies, where the primary concern now is with local and empirical successes, social interests, and power relations. Who would care for the ontological status of curved spacetime or virtual quanta when even the objective status of observed facts is challenged by the social constructivists? However, as we shall see in the text, metaphysical considerations are of crucial importance for path-breaking physicists in their investigations. One reason for this is that these considerations constitute essential ingredients of their conceptual frameworks. Yet the cultural importance of metaphysics goes much deeper and wider than its contribution to professional research. My own experience might be illuminating.
The treatment of the subject in this monograph is selective and interpretive, motivated and guided by some philosophical and methodological considerations, such as those centered around the notions of metaphysics, causality, and ontology, as well as those of progress and research programme. In the literature, however, these notions are often expressed in a vague and ambiguous way, and this has resulted in misconceptions and disputes. The debates over these motivations, concerning their implications for realism, relativism, rationality, and reductionism, have become ever more vehement in recent years, because of a radical reorientation in theoretical discourses. Thus it is obligatory to elaborate as clearly as possible these components of the framework within which I have selected and interpreted the relevant material. I shall begin this endeavor by recounting in section 1.1 my general view on science. After expounding topics concerning the conceptual foundations of physics in sections 1.2–1.4, I shall turn to my understanding of history and the history of science in section 1.5. The introduction ends with an outline of the main story in section 1.6.
Science
Modern science as a social institution emerged in the 16th and 17th centuries as a cluster of human practices by which natural phenomena could be systematically comprehended, described, explained, and manipulated. Among important factors that contributed to its genesis we find crafts (instruments, skills, and guilds or professional societies), social needs (technological innovations demanded by emerging capitalism), magic, and religion. As an extension of everyday activities, science on the practical level aims at solving puzzles, predicting phenomena, and controlling the environment. In this regard, the relevance of crafts and social needs to science is beyond dispute.
The rise of classical field theory had its deep roots in the search for an efficient cause of apparent actions at a distance. In the case of electromagnetism, Thomson and Maxwell in their ether field theory succeeded in explaining the distant actions by introducing a new entity, the electromagnetic field, and a new ontology, the continuous ether. The field possessed energy and thus represented physical reality. But as a state of the mechanical ether, it had no independent existence. In Lorentz's electrodynamics, the field was still a state of the ether. However, since Lorentz's ether was deprived of all material properties and became synonymous with a void space, the field enjoyed an independent ontological status on a par with matter. Thus in physical investigations there emerged a new research programme, the field theory programme based on a field ontology, in contrast with the mechanical programme based on a particle ontology (together with space and force).
The new programme acquired a fresh appearance in Einstein's special theory of relativity (STR), in which the superfluous ontology of the Lorentz ether was removed from the theoretical structure. But in some sense the field theory programme was not yet completed. In Lorentz's electrodynamics as well as in STR, the fields had to be supported by a space (or spacetime). Thus the ultimate ontology of these field theories seemed not to be the fields, but the space (or spacetime), or more exactly the points of spacetime. As we shall see in chapter 4, this hidden assumption concerning the ultimate ontology of a field theory was not without consequences.
The historical study of 20th century field theories in the preceding chapters provides an adequate testing ground for models of how science develops. On this basis I shall argue in this chapter that one of the possible ways of achieving conceptual revolutions is what I shall call ‘ontological synthesis’, and thus propose an argument for a certain kind of scientific realism and for the rationality of scientific growth.
Two views on how science develops
There are many views in contemporary philosophy of science concerning the question of how science develops. I shall consider in particular two of them. According to the first view, science evolves through the progressive incorporation of past results in present theories, or in short, science is a continuing progression. Such a ‘growth by incorporation’ view was taken by the empiricist philosopher Ernest Nagel. Nagel took for granted that knowledge tended to accumulate and claimed that ‘the phenomenon of a relatively autonomous theory becoming absorbed by, or reduced to, some inclusive theory is an undeniable and recurrent feature of the history of modern science’ (1961). Thus he spoke of stable content and continuity in the growth of science, and took this stable content as a common measure for comparing scientific theories. The idea of commensurability was taken to be the basis for a rational comparison of scientific theories.
A more sophisticated version of the ‘growth by incorporation’ view was proposed by Wilfrid Sellars (1965) and Heinz R. Post (1971).
In this part of the book an analysis of the formation of the conceptual foundations of the quantum field programme for fundamental interactions (QFP) will be given, with special concern for the basic ontology and the mechanism for transmitting fundamental interactions posited by QFP. Chapter 6 reconstructs the history of quantum physics up to 1927 along two lines: (i) the quantization of the mechanical motions of atomic systems, and (ii) the quantization of wave fields. It also describes the basic ideas of uncertainty and complementarity, which were suggested by Werner Heisenberg and Niels Bohr to characterize quantum mechanics. Chapter 7 reviews, historically and critically, the positions adopted with respect to the conceptual foundations of quantum field theory (QFT), both by its founders and by later commentators. Its first three sections serve to analyze the ontological shift that occurred in the early history of QFT, namely, a shift from the particle ontology to an ontology of a new kind. Section 7.4 examines the dilemma facing the original ontological commitment of QFT, which was embodied in Dirac's notion of the vacuum. Section 7.5 reconstructs the evolution of the ideas about local coupling, the exchange of virtual quanta, and invariance principles, which were supposed to be obeyed by quantum interactions and thus to impose restrictions on the forms of the interactions. Section 7.6 reviews the recognition of divergences and the formulation of the renormalization programme in the late 1940s and early 1950s. Chapter 8 summarizes the essential features of QFP, its ups and downs, and various attempts to explore alternatives, until its revival, in the form of gauge field theories, in the early 1970s.
Einstein in his formative years (1895–1902) sensed a deep crisis in the foundations of physics. On the one hand, the mechanical view failed to explain electromagnetism, and this failure invited criticisms from the empiricist philosophers, such as Ernst Mach, and from the phenomenalist physicists, such as Wilhelm Ostwald and Georg Helm. These criticisms had a great influence on Einstein's assessment of the foundations of physics. His conclusion was that the mechanical view was hopeless. On the other hand, following Max Planck and Ludwig Boltzmann, who were cautious about the alternative electromagnetic view and also opposed to energeticism, Einstein, unlike Mach and Ostwald, believed in the existence of discrete and unobservable atoms and molecules, and took them as the ontological basis for statistical physics. In particular, Planck's investigations into black body radiation made Einstein recognize a second foundational crisis, a crisis in thermodynamics and electrodynamics, in addition to the one in the mechanical view. Thus it was ‘as if the ground had been pulled out from under one, with no firm foundation to be seen anywhere, upon which one could have built’ (Einstein, 1949).
Einstein's reflections on the foundations of physics were guided by two philosophical trends of the time: critical scepticism of David Hume and Mach, and certain Kantian strains that existed, in various forms, in the works of Helmholtz, Hertz, Planck, and Henri Poincaré. Mach's historico-conceptual criticism of Newton's idea of absolute space shook Einstein's faith in the received principles, and paved for him a way to GTR.
This chapter is devoted to examining the physical and speculative roots of the notion of gauge fields, reviewing early attempts at applying this attractive notion to various physical processes, and explaining the reasons why these heroic attempts failed.
Gauge invariance
The idea of gauge invariance, as we mentioned in section 5.3, originated in 1918, in Weyl's attempt to unify gravity and electromagnetism, based on a geometrical approach in four-dimensional spacetime (1918a, b). Weyl's idea was this. In addition to the requirement of GTR that coordinate systems need only be defined locally, the standard of length, or scale, should also be defined only locally. It is therefore necessary to set up a separate unit of length at every spacetime point. Weyl called such a system of unit-standards a gauge system. In Weyl's view, a gauge system is as necessary for describing physical events as a coordinate system. Since physical events are independent of our choice of descriptive framework, Weyl maintained that gauge invariance, just like general covariance, must be satisfied by any physical theory. However, Weyl's original idea of scale invariance was abandoned soon after its proposal, since its physical implications appeared to contradict experiment. For example, as Einstein pointed out, it implied that spectral lines with definite frequencies could not exist.
Despite the initial failure, Weyl's idea of a local gauge symmetry survived, and acquired new meaning with the emergence of quantum mechanics (QM). As is well known, when classical electromagnetism is formulated in Hamiltonian form, the momentum pμ is replaced by the canonical momentum (pμ − eAμ/c).
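In modern notation the point can be sketched as follows: a local change of the electromagnetic potentials, compensated by a phase change of the wave function, leaves the minimally coupled theory unchanged (Gaussian units assumed here):

```latex
% Gauge transformation in quantum mechanics:
A_{\mu} \;\to\; A_{\mu} + \partial_{\mu}\chi(x), \qquad
\psi \;\to\; e^{\,i e \chi(x)/\hbar c}\,\psi .
% The combination (p_{\mu} - e A_{\mu}/c) acting on \psi transforms
% covariantly, so the equations of motion are gauge invariant.
```

It is in this quantum-mechanical form, with Weyl's scale factor replaced by a phase factor, that gauge invariance became a substantive principle rather than a failed geometrical speculation.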
The origin of the relativity theories was closely bound up with the development of electromagnetic concepts, a development that approached a coherent field-theoretical formulation, according to which all actions may vary in a continuous manner. In contrast, quantum theory arose out of the development of atomic concepts, a development that was characterized by the acknowledgment of a fundamental limitation to classical physical ideas when applied to atomic phenomena. This restriction was expressed in the so-called quantum postulate, which attributed to any atomic process an essential discontinuity that was symbolized by Planck's quantum of action.
Quantum field theory (QFT) is a later phase of the conceptual developments of quantum theory, and has the old quantum theory and non-relativistic quantum mechanics, essentially the preliminary analyses of the interactions between atoms and radiation, as its predecessors. This chapter will review some features of quantum physics that are relevant to the rise of QFT.
The quantization of motion
In solving the problem of the equilibrium between matter and radiation, Max Planck (1900) showed that the laws of heat radiation demanded an element of discontinuity in the description of atomic processes. In the statistical behavior of atoms, represented in Planck's description by linear resonators interacting with radiation, only those states of vibration should be taken into account whose energy is an integral multiple of a quantum hν, where h is Planck's constant and ν is the frequency of the resonator.
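Planck's hypothesis, and the radiation law it yields, can be stated compactly:

```latex
% Allowed resonator energies:
E_{n} = n\,h\nu, \qquad n = 0, 1, 2, \ldots
% Resulting spectral energy density of black-body radiation:
u(\nu, T) \;=\; \frac{8\pi h \nu^{3}}{c^{3}}\,
\frac{1}{e^{\,h\nu/kT} - 1} .
```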
Planck himself believed that the discontinuity of energy was only a property of atoms, and was not ready to apply the idea of energy quantization to radiation itself.