By
Robert R. Wilson, Born Frontier, Wyoming, 1914; Ph.D., 1940 (physics), University of California at Berkeley; first Director of Fermi National Accelerator Laboratory and Professor Emeritus, Cornell University; high-energy physics (experimental).
Adrienne Kolb, Born New Orleans, Louisiana, 1950; B.A., 1972 (history), University of New Orleans; Fermilab Archivist.
Building Fermilab was a many-faceted endeavor; it had scientific, technical, aesthetic, social, architectural, political, conservationist, and humanistic aspects, all of which were interrelated. Because the emphasis of this Symposium is on the history of science, I intend to highlight the scientific and technical aspects of the design and construction of the experimental facilities, but these other considerations were also important in building the experimental areas (Fig. 19.1). Neither the experiments made at the laboratory, nor improvements such as the Tevatron, made under the aegis of succeeding Directors, will be discussed here.
Before becoming director of Fermilab in 1967, I had been a trustee of URA since its formation in 1965. This experience had sensitized me to the growing number of particle physicists throughout the country who, with no accelerator at their home universities, had become dependent on sharing the use of larger accelerators constructed at national laboratories. It was they who started the revolt against the benevolent rule typified by the University of California's Radiation Laboratory at Berkeley and (on a smaller scale) by my own institution, Cornell University. In 1963 that arch-user Leon Lederman expressed the community's sentiments: the next laboratory should be accessible by right to all users, users should have a strong voice in decisions on what was built and how facilities were used, and it should be a place where they would be “at home and loved.”
By
David Gross, Born Washington, D.C., 1941; Ph.D., 1966 (physics), University of California at Berkeley; Professor of Physics at Princeton University; high-energy physics (theory).
The Standard Model is surely one of the major intellectual achievements of the twentieth century. In the late 1960s and early 1970s, decades of path-breaking experiments culminated in the emergence of a comprehensive theory of particle physics. This theory identifies the basic fundamental constituents of matter and describes all the forces of nature relevant at accessible energies – the strong, weak, and electromagnetic interactions.
Science progresses in a much more muddled fashion than is often pictured in history books. This is especially true of theoretical physics, partly because history is written by the victorious. Consequently, historians of science often ignore the many alternate paths that people wandered down, the many false clues they followed, the many misconceptions they had. These alternate points of view are less clearly developed than the final theories, harder to understand and easier to forget, especially as these are viewed years later, when it all really does make sense. Thus reading history one rarely gets the feeling of the true nature of scientific development, in which the element of farce is as great as the element of triumph.
The emergence of quantum chromodynamics, or QCD, is a wonderful example of the evolution from farce to triumph. During a very short period, a transition occurred from experimental discovery and theoretical confusion to theoretical triumph and experimental confirmation. We were lucky to have been young then, when we could stroll along the newly opened beaches and pick up the many beautiful shells that experiment had revealed.
By
Murray Gell-Mann, Born New York City, 1929; Ph.D., 1951 (physics), Massachusetts Institute of Technology; Robert A. Millikan Professor of Physics Emeritus at the California Institute of Technology; Professor, Santa Fe Institute; Nobel Prize in Physics, 1969; elementary particle physics (theory).
By
Donald Perkins, Born Hull, England, 1925; Ph.D., 1948 (physics), University of London; Professor of Physics at the University of Oxford; high-energy physics (experimental).
Several accounts have been written of the discovery of neutral weak currents, mainly by social scientists or theoreticians. Although doubtless well motivated, these authors were themselves not immersed in the experimental situation in neutrino physics in the 1960s and 1970s. There have also been, of course, nonhistorical reviews of the physics of neutral currents. In this chapter I shall present an experimenter's account of the sequence of events in this discovery, based on my own experience and on discussions with other physicists taking part in those experiments. As emphasized earlier in this volume by Leon Lederman (see Chapter 6), discoveries in high-energy physics are frequently stories of false trails, crossed wires, sloppy technique, misconceptions, and misunderstandings, compensated by the occasional incredible strokes of good luck. Certainly this was the case for neutral currents.
It is well known that neutral currents were discovered in 1973 by a collaboration operating with the bubble chamber Gargamelle at the CERN Proton Synchrotron. Gargamelle (shown in Fig. 25.1) was a large (4.8 m long, 1.9 m diameter) heavy-liquid chamber filled with freon (CF₃Br) with 20 tons total mass. For the neutral-current investigation, a relatively small fiducial volume of 3 m³ (4.5 metric tons) was employed. The chamber was conceived and constructed by André Lagarrigue with the help of engineers from Saclay, and funded largely by the French atomic energy commission. Other physicists participating included André Rousset and Paul Musset, who were prominent in the subsequent physics program at CERN.
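The figures quoted here are mutually consistent, as a quick back-of-the-envelope check shows. The sketch below assumes a liquid-freon density of about 1.5 t/m³ (an assumption, not a number from the text) and treats the chamber as a simple cylinder:

```python
import math

# Figures from the text: Gargamelle was a heavy-liquid chamber,
# 4.8 m long and 1.9 m in diameter, filled with freon (CF3Br),
# 20 tons total, with a 3 m^3 (4.5 t) fiducial volume.
# The density below is an assumed value for liquid freon.
length_m = 4.8
diameter_m = 1.9
density_t_per_m3 = 1.5  # assumed CF3Br liquid density

chamber_volume = math.pi * (diameter_m / 2) ** 2 * length_m  # m^3
total_mass = chamber_volume * density_t_per_m3               # tonnes

fiducial_volume = 3.0                                        # m^3, from the text
fiducial_mass = fiducial_volume * density_t_per_m3           # tonnes

print(f"chamber volume ~ {chamber_volume:.1f} m^3")
print(f"total freon mass ~ {total_mass:.0f} t")
print(f"fiducial mass ~ {fiducial_mass:.1f} t")
```

With that assumed density the full cylinder holds roughly 20 t and the 3 m³ fiducial region 4.5 t, matching the quoted masses.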
By
James Bjorken, Born Chicago, Illinois, 1934; Ph.D., 1959 (physics), Stanford University; Staff Physicist at the Stanford Linear Accelerator Center; high-energy physics (theory and experiment).
I begin with a disclaimer: what follows is subjective recollection, with no serious attempt of setting down an objective history. I also limit the scope of my remarks to the period roughly from 1966 to 1971. This period can in turn be divided in two parts – BF (Before Feynman) and AF (After Feynman).
Before Feynman
The climate in the beginning of this period was very different from now. David Gross has quite accurately and eloquently described it in Chapter 11, and I need not elaborate it very much here again. Field theory for the strong and weak interactions was not trusted. The emphasis was on observables, in close analogy to the Heisenberg matrix mechanics that heralded the golden age of quantum mechanics in the late 1920s. Local fields for strongly interacting particles were simply too far away from observations to be regarded as reliable descriptive elements. It was Murray Gell-Mann's great contribution to identify the totality of the matrix elements of electroweak currents between hadron states as operationally defined descriptive elements, upon which one could base a phenomenology with a lot of predictive power.
As did matrix mechanics, Gell-Mann's current algebra allowed the construction of sum rules based upon equal-time commutation relations of the electroweak currents with each other. The idea was picked up by Sergio Fubini and his collaborators, who greatly extended what Gell-Mann had started, and then by Stephen Adler and William Weisberger, who produced one of the most important and celebrated results of the period.
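The algebraic backbone of these sum rules can be sketched in modern notation; this is the standard chiral SU(2) × SU(2) equal-time charge algebra, not a reconstruction of Gell-Mann's original presentation:

```latex
% Q^a are vector charges, Q_5^a axial charges (a = 1, 2, 3):
\begin{aligned}
  [\,Q^a(t),\, Q^b(t)\,]     &= i\,\epsilon^{abc}\, Q^c(t), \\
  [\,Q^a(t),\, Q_5^b(t)\,]   &= i\,\epsilon^{abc}\, Q_5^c(t), \\
  [\,Q_5^a(t),\, Q_5^b(t)\,] &= i\,\epsilon^{abc}\, Q^c(t).
\end{aligned}
```

Saturating the axial–axial commutator between nucleon states with pion–nucleon scattering data is what yields the Adler–Weisberger sum rule, fixing the axial coupling at |g_A| ≈ 1.2, in good agreement with experiment.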
By
James Cronin, Born Chicago, Illinois, 1931; Ph.D., 1955 (physics), University of Chicago; Professor of Physics at the University of Chicago; Nobel Prize in Physics, 1980; high-energy physics (experimental).
This opportunity to discuss the discovery of CP violation has forced me to go back and look at old notebooks and records. It amazes me that they are rather sloppy and very rarely are there any dates on them. Perhaps this is because I was not in any sense aware that we were on the verge of an important discovery. In the first reference I list some of the literature on this subject, which provides different perspectives on the discovery. I begin with a review of some of the important background that is necessary to place the discovery of CP violation in proper context.
Precursors
The story begins with the absolutely magnificent paper of Gell-Mann and Pais published in early 1955. Each time I read it, it gives me goose bumps such as I experience while listening to the first movement of Beethoven's Archduke Trio. They gave the paper a very formal title, “Behavior of Neutral Particles under Charge Conjugation,” but they knew in the end that this was something that concerned experiment. So the last paragraph reads:
At any rate, the point to be emphasized is this: a neutral boson may exist which has a characteristic θ0 mass but a lifetime ≠ τ and which may find its natural place in the present picture as the second component of the θ0 mixture.
One of us, (M. G.-M.), wishes to thank Professor E. Fermi for a stimulating discussion.
By
Laurie M. Brown, Born Brooklyn, New York, 1923; Ph.D., 1951 (physics), Cornell University; Professor Emeritus of Physics and Astronomy at Northwestern University; high-energy physics (theory) and history of physics.
Robert Brout, Born New York City, 1928; Ph.D., 1953 (physics), Columbia University; Professor of Physics at the Université Libre de Bruxelles; statistical mechanics and high-energy physics (theory).
Tian Yu Cao, Born Shanghai, China, 1941; Ph.D., 1987 (history and philosophy of science), University of Cambridge; Assistant Professor of Philosophy, Boston University; history and philosophy of science.
Peter Higgs, Born Newcastle-upon-Tyne, United Kingdom, 1929; Ph.D., 1954 (physics), King's College, London; Professor of Theoretical Physics at the University of Edinburgh; high-energy physics (theory).
Yoichiro Nambu, Born Tokyo, Japan, 1921; Sc.D., 1952 (theoretical physics), University of Tokyo; Professor Emeritus of Physics, Enrico Fermi Institute, University of Chicago; high-energy physics (theory).
This panel was intended to function as a discussion, but instead it emerged as a series of short presentations by the participants Robert Brout, Tian Yu Cao, and Peter Higgs, with an introductory discussion by the chair. The present chapter consists of a revised and edited version of those reports and also includes a later submission by Yoichiro Nambu, who was scheduled to be on the panel originally but was unable to attend.
Introduction
The two sectors of the current Standard Model of particle physics, the strong color and the electroweak sectors, are distinct and are tied together only by ontology. Together, they describe the interactions, other than gravitation, of the three generations of quarks and leptons. The dream of representing the strong and weak “nuclear” interactions (as they were known before the acceptance of the quarks) as quantum field theories (QFT) goes back to the 1930s. The first such QFT, other than quantum electrodynamics, was Enrico Fermi's weak-interaction theory of 1934. This theory was almost immediately extended by Werner Heisenberg in 1935 to include the strong interactions (thus making it the first unified QFT) whose exchanged “quanta” were those of the electron-neutrino “Fermi-field.” In 1935, Hideki Yukawa invented “U-quanta,” now called pions, to represent the field of strong interactions, adjusting their mass to fit the range of nuclear forces. This was again a unified QFT, as the U-quanta were also intended to serve as intermediate bosons of the weak interaction.
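Yukawa's adjustment of the U-quantum mass to fit the range of nuclear forces rests on the mass–range relation R ≈ ħ/(mc). A minimal sketch, using the modern charged-pion mass rather than Yukawa's original estimate:

```python
# Yukawa's mass-range relation: a force mediated by a quantum of
# mass m has range R ~ hbar / (m c).  In natural units this is
# R = (hbar*c) / (m c^2), with hbar*c = 197.327 MeV*fm.
hbar_c_MeV_fm = 197.327
m_pi_MeV = 139.6  # modern charged-pion mass, not Yukawa's 1935 estimate

range_fm = hbar_c_MeV_fm / m_pi_MeV
print(f"range ~ {range_fm:.2f} fm")
```

The result, about 1.4 fm, is indeed the characteristic scale of the nuclear force.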
By
Leon M. Lederman, Born New York City, 1922; Ph.D., 1951 (physics), Columbia University; Pritzker Professor of Physics at the Illinois Institute of Technology in Chicago; Director of Fermi National Accelerator Laboratory 1979–89; Wolf Prize, 1982; Nobel Prize in Physics, 1988; high-energy physics (experimental).
History conferences are designed to set the record straight or, depending on where you stand, make it as crooked as it can possibly be. In this case I intend to personalize the story and the complicated reason is that the discovery of the bottom quark, almost exactly fifteen years ago, was the culmination of a series of events in experimental physics which go back to the discovery of the muon neutrino just thirty years ago, in 1962. I think it's important to emphasize that this story is one of missed opportunities, abysmal judgment, monumental blunders, stupid mistakes, and inoperative equipment. It was leavened only by the incredible luck and incandescent good fortune which you all know is an essential ingredient for any physics career. Lest you sneer that I am displaying false modesty, I beg you to hold your opinion until you've seen the data.
Preamble
In the period Haim Harari called “From the fourth lepton to the fifth quark,” we found the muon neutrino but missed neutral currents. We discovered what became known as the Drell–Yan process but missed the J/ψ. We missed the J/ψ again at the ISR but stumbled on high-transverse-momentum hadrons. We missed the J/ψ at Fermilab in 1973, chasing single-direct-lepton yields that were a red herring. Then we found a false upsilon. But finally Nature, terrified that she would be stuck with us forever, yielded up her secret, the true upsilon (ϒ), hoping this would make us go away.
By
Laurie M. Brown, Born 1923, New York City; Ph.D. (physics), Cornell University, 1951; theoretical physics and history of physics; Northwestern University.
Michael Riordan, Born 1946, Springfield, Mass.; Ph.D. (physics), MIT, 1973; experimental physics, history of physics, and science writing; Stanford Linear Accelerator Center and University of California, Santa Cruz.
Max Dresden, Born 1918, Amsterdam; Ph.D. (physics), University of Michigan, 1946; theoretical physics and history of physics; State University of New York at Stony Brook and Stanford Linear Accelerator Center.
Lillian Hoddeson, Born 1940, New York City; Ph.D. (physics), Columbia University, 1966; history of science and technology; University of Illinois and Fermi National Accelerator Laboratory.
In the late 1970s elementary particle physicists began speaking of the “Standard Model” as the basic theory of matter. This theory is based on sets of fundamental spin-½ particles called “quarks” and “leptons,” which interact by exchanging generalized quanta, particles of spin 1. The model is referred to as “standard,” because it provides a theory of fundamental constituents – an ontological basis for describing the structure and behavior of all forms of matter (gravitation excepted), including atoms, nuclei, strange particles, and so on. In situations where appropriate mathematical techniques are available, it can be used to make quantitative predictions that are completely in accord with experiment. There are no well-established results in particle physics that clearly disagree with this theory.
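The constituent content described here can be summarized as a small data structure. The particle names are standard; the grouping below is only an illustrative sketch, not code from any physics library:

```python
# Standard Model matter content as described above: three generations
# of spin-1/2 quarks and leptons, interacting via spin-1 exchange quanta.
# Names only; masses and couplings are not modeled.
generations = [
    {"quarks": ("up", "down"),       "leptons": ("electron", "electron neutrino")},
    {"quarks": ("charm", "strange"), "leptons": ("muon", "muon neutrino")},
    {"quarks": ("top", "bottom"),    "leptons": ("tau", "tau neutrino")},
]
gauge_bosons = ["photon", "W+", "W-", "Z", "gluon"]  # all spin 1

for i, gen in enumerate(generations, start=1):
    print(f"generation {i}: quarks {gen['quarks']}, leptons {gen['leptons']}")
```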
This pleasing state of affairs is quite new in particle physics. It contrasts markedly with the theoretical situation in the early 1960s, when there were a variety of different ideas about the subatomic realm. For example, in 1964 most particle physicists considered protons, neutrons, pions, kaons, and a host of other strongly interacting particles (i.e., hadrons) to be in a certain sense “elementary.” By 1979 the consensus had emerged that the hadrons were not elementary after all but were composed of more basic building blocks called quarks, held together by the exchange of another kind of particle called the gluon. Or consider the particle interactions. In 1964 almost all physicists thought the strong, weak, and electromagnetic interactions were independent phenomena, perhaps requiring different types of theories for their description.
By
J. L. Heilbron, Born San Francisco, California, 1934; Ph.D., 1964 (history of science), University of California at Berkeley; Professor of History and the History of Science, and Vice-Chancellor, University of California at Berkeley.
Having slipped so far down the chain of being – from physicist to historian to administrator – I was very much flattered by the invitation to contribute this chapter. I shall not abuse the invitation by discussing the Standard Model itself, for you all know much more about it than I do. Instead I shall discuss two earlier physical theories (or, rather, sets of theories) that may be considered standard models of their times. My purpose is not to place the modern version in perspective – for what larger setting is possible for a theory that covers all time and all space? My purpose is to remind you that others have had the same intellectual impulses that drive contemporary particle physicists and cosmologists, and that they could point to persuasive evidence in support of their own standard models.
To qualify as a discarded standard model, a theory must have been deemed fundamental and universal; also, it must have enjoyed a wide consensus among physicists and produced quantitative results testable by experiment. These criteria are satisfied by two, and perhaps only two, previous models, which I'll call the Napoleonic and the Victorian.
The Napoleonic standard model
I call the standard model of the years around 1800 Napoleonic, not because he had anything to do with creating it, but because he patronized its principal architects, because it rose and fell coincidentally with his own career, and because it operated with the same mixture of the aristocratic and the democratic, the chauvinistic and the cosmopolitan, that characterized his regime.
By
Melvin Schwartz, Born New York City, 1932; Ph.D., 1958 (physics), Columbia University; I. I. Rabi Professor of Physics, and Associate Director, Brookhaven National Laboratory; Nobel Prize in Physics, 1988; high-energy physics (experimental).
The experiment that led to the discovery of the muon neutrino was the largest that had ever been mounted at Brookhaven up to that time. The experimental team consisted of only seven people – three professors, three graduate students, and one physicist from the AGS (Alternating Gradient Synchrotron) department. We fashioned the biggest detector yet built, consisting mainly of ten tons of aluminum. It was an experiment in which we ended up having a lot of fun and made some important progress. This chapter will discuss this experiment, the first high-energy neutrino experiment, and mention a few developments that have occurred in neutrino scattering since that time.
What was the state of particle theory in 1959, when planning for this experiment began? In general, theory was in a fairly primitive state: V–A and parity violation were well understood, and there was a general universality among weak interactions involving muons, electrons, nucleons, and neutrinos. And everything was relatively consistent with a simple four-fermion point vertex: the Fermi theory. There had been one prior neutrino experiment, done by Clyde Cowan and Fred Reines – the classic experiment, one of the most beautiful experiments of the 1950s – in which antineutrinos produced in a nuclear reactor gave rise to a reaction in which an antineutrino and a proton yielded a neutron and a positron.
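The Cowan–Reines reaction ν̄ + p → n + e⁺ has a kinematic threshold that follows from the masses alone, which is why reactor antineutrinos (with energies of a few MeV) can drive it. A sketch of the standard relativistic-kinematics calculation, using modern mass values:

```python
# Threshold for inverse beta decay  nu_bar + p -> n + e+,
# for a target proton at rest:
#   E_nu_min = ((m_n + m_e)^2 - m_p^2) / (2 * m_p)
m_p = 938.272  # proton mass, MeV
m_n = 939.565  # neutron mass, MeV
m_e = 0.511    # electron mass, MeV

E_threshold = ((m_n + m_e) ** 2 - m_p ** 2) / (2 * m_p)
print(f"threshold ~ {E_threshold:.2f} MeV")
```

The familiar result is about 1.8 MeV, comfortably below typical reactor antineutrino energies.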
By
Sau Lan Wu, Born Hong Kong; Ph.D., 1970 (physics), Harvard University; Enrico Fermi Professor of Physics at the University of Wisconsin, Madison; high-energy physics (experimental).
By
Alexander Polyakov, Born Moscow, 1945; Ph.D., 1969 (physics), Landau Institute for Theoretical Physics; Professor of Physics at Princeton University; high-energy physics (theory).
By
Kjell Johnsen, Born Meland, Norway, 1921; Ph.D., 1954 (physics), Norwegian Institute of Technology, Trondheim; retired from CERN; accelerator physics.
The history of colliding-beam devices can be traced back to 1956, when a group at the Midwestern Universities Research Association put forward the idea of particle stacking in circular accelerators. Of course, people who worked with particle accelerators had already speculated about the high center-of-mass energies attainable with colliding beams, but such ideas were unrealistic with the particle densities then available in normal accelerator beams. The invention of particle stacking fundamentally changed this situation. It opened up the possibility of making two intense proton beams collide with a sufficiently high interaction rate to enable experimentation in an energy range otherwise unattainable by known techniques.
A group at CERN started investigating this possibility in 1957, first by studying a special two-way fixed-field alternating gradient accelerator and then in 1960 by turning to the idea of two intersecting storage rings that could be fed from the CERN 28 GeV Proton Synchrotron (PS). This change in concept for these initial studies was stimulated by the promising performance of the PS at the very start of its operation in 1959.
In 1961 the Accelerator Research Division at CERN had gained sufficient confidence to present its first proposal for a 2 × 25 GeV storage ring system. This system was intended essentially for protons, but other particles were mentioned in the proposal. This led to a series of important actions. First, in 1962, France offered a site next to the original CERN site.
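The advantage of colliding beams over fixed targets that motivated the ISR is standard relativistic kinematics: two equal beams colliding head-on give √s = 2E, whereas a fixed-target machine gives only √s = √(2Em + 2m²). A minimal sketch for 25 GeV proton beams (the proton mass value is the only input assumed here):

```python
import math

m_p = 0.938  # proton mass, GeV

def sqrt_s_collider(E):
    """Center-of-mass energy for two equal beams of energy E colliding head-on."""
    return 2 * E

def sqrt_s_fixed_target(E):
    """Center-of-mass energy for a beam of energy E on a proton at rest."""
    return math.sqrt(2 * E * m_p + 2 * m_p ** 2)

E = 25.0  # GeV, one beam of the proposed 2 x 25 GeV system
print(f"collider:     sqrt(s) ~ {sqrt_s_collider(E):.1f} GeV")
print(f"fixed target: sqrt(s) ~ {sqrt_s_fixed_target(E):.1f} GeV")
```

Colliding 25 GeV beams reach √s = 50 GeV, while the same beam on a stationary proton yields only about 7 GeV – the energy range "otherwise unattainable by known techniques."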
By
Silvan Schweber, Born Strasbourg, France, 1928; Ph.D., 1952 (physics), Princeton University; Professor of Physics at Brandeis University; high-energy physics (theory) and history of science.
The establishment of the Standard Model marked the attainment of another stage in the attempt to give a unified description of the forces of nature. The program was initiated at the beginning of the nineteenth century by Oersted and Faraday, the “natural philosophers,” who, influenced by Naturphilosophie, gave credibility to the quest and provided the first experimental indication that the program had validity. Thereafter, Maxwell constructed a model for a unified theory of electricity and magnetism, providing a mathematical formulation that was able to explain many of the observed phenomena and to predict new ones. With Einstein the vision became all-encompassing. In addition, Einstein advocated a radical form of theory reductionism. For him the supreme test of the physicist was “to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction.” A commitment to reductionism and a desire for unification animated the quest for the understanding of the subnuclear domain – and success in obtaining an effective representation was achieved by those committed to that vision.
The formulation of the Standard Model is one of the great achievements of the human intellect – one that rivals the genesis of quantum mechanics. It will be remembered – together with general relativity, quantum mechanics, and the unravelling of the genetic code – as one of the outstanding intellectual advances of the twentieth century. But much more so than general relativity and quantum mechanics, it is the product of a communal effort.