The late 1960s were very exciting for experimental high energy physicists. Two new accelerators, the CERN-ISR, in Europe [240], and the National Accelerator Laboratory (NAL, now Fermilab), in the USA [241,242], were under construction. The call for proposals went out in January 1969 for the CERN-ISR [243], with eventual first collisions in January 1971. For NAL, proposals were requested in March 1970 [244] with first operation in March 1972. Everybody who was anybody in experimental high energy physics at that time was involved in proposals at one or both laboratories.
Fermilab's accelerator was a traditional fixed target machine which provided 200 to 400 GeV primary proton beams and a large variety of secondary beams, while the CERN-ISR was the first proton–proton collider. The size and scope of the machines were quite different, although both were destined to make important contributions to high pT physics, a subject that did not exist before these machines operated. Also, Fermilab was a brand new laboratory totally dedicated to the new accelerator, while CERN had been in existence since 1954 with several operating accelerators [248].
The Fermilab program, circa 1970
Fermilab had held summer studies in 1968 and 1969 for users to help design and specify the various beams and facilities at the new laboratory. The layout of the accelerator [249], together with an expanded view of the initial (circa 1973–1975) beam lines and facilities [250], is shown in Figure 6.1. Note the 1 km radius of the main accelerator.
The maximum likelihood method and least squares fits
The likelihood function L is defined as the a priori probability of a given outcome. Since Gaussian probability distributions are common (as a consequence of the central limit theorem) and since there is also an important theorem regarding likelihood ratios for composite hypotheses, it is convenient to use the logarithm of the likelihood, W = -2 ln L.
There are generally two types of fits used for experimental data. (i) The standard maximum likelihood method fits a probability distribution to n independent trials of the same distribution, for example the muon lifetime determined from the decay times ti of individual decays. (ii) The method of least squares is also an extremum method; it fits binned data from histograms to predictions of the expectation value at each data point, which may be functions of the parameters to be determined, assuming Gaussian fluctuations in each bin. This will be discussed following a review of the standard maximum likelihood method and the likelihood ratio test.
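The following minimal sketch (in Python, not taken from the text) contrasts the two approaches on synthetic muon-decay data; the lifetime value, sample size, and binning are chosen purely for illustration, and NumPy and SciPy are assumed to be available.

```python
# Sketch: unbinned maximum likelihood vs. binned least squares on invented data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
tau_true = 2.197                                 # illustrative "muon lifetime" (microseconds)
t = rng.exponential(tau_true, size=5000)         # n independent decay times t_i

# (i) Unbinned maximum likelihood: minimize W = -2 ln L over tau,
#     with L = prod_i (1/tau) exp(-t_i / tau).
def W(tau):
    return -2.0 * np.sum(-np.log(tau) - t / tau)

ml = minimize_scalar(W, bounds=(0.1, 10.0), method="bounded")
print("maximum likelihood estimate of tau:", ml.x)

# (ii) Least squares on binned data: chi^2 between histogram contents and the
#      expected counts in each bin, which are functions of the parameter tau.
counts, edges = np.histogram(t, bins=50, range=(0.0, 15.0))
n = len(t)

def chi2(tau):
    # expected counts per bin from the integrated exponential p.d.f.
    expected = n * (np.exp(-edges[:-1] / tau) - np.exp(-edges[1:] / tau))
    expected = np.clip(expected, 1e-9, None)     # guard against numerically empty bins
    return np.sum((counts - expected) ** 2 / expected)

ls = minimize_scalar(chi2, bounds=(0.1, 10.0), method="bounded")
print("least squares estimate of tau:", ls.x)
```

For the exponential, the unbinned maximum likelihood estimate reduces analytically to the sample mean of the decay times, which the numerical minimization of W reproduces.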
Maximum likelihood fit to a Gaussian, an instructive example
A nice example of the standard maximum likelihood method is a fit to a Gaussian with mean μ and variance σ² from the observation of n independent trials with values xi from this distribution, in other words a sample of n trials from this population.
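Writing the example out explicitly (a standard textbook calculation, using the notation W = -2 ln L introduced above):

```latex
W(\mu,\sigma^2) \;=\; -2\ln L \;=\; n\ln\!\left(2\pi\sigma^2\right) \;+\; \sum_{i=1}^{n}\frac{(x_i-\mu)^2}{\sigma^2},
```

and setting ∂W/∂μ = 0 and ∂W/∂(σ²) = 0 gives the maximum likelihood estimators

```latex
\hat{\mu} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
\hat{\sigma}^{2} \;=\; \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \hat{\mu}\right)^{2}.
```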
Commissioning of the Large Hadron Collider (LHC) in late autumn 2009 opened a new, long-awaited era of high energy particle physics. The scientific quest of the LHC is the completion of the Standard Model (SM) of particles and forces and the search for novel phenomena beyond the SM. Exciting physics questions such as the existence of the Higgs boson, the missing link of the SM, supersymmetric particles, extra dimensions and many others are expected to be answered at the LHC. Another experimental effort is directed at the exploration of the hot and dense nuclear matter created in ultra-relativistic nuclear collisions. It is believed that such an excited nuclear medium forms a soup of deconfined quarks and gluons, known as a Quark Gluon Plasma (QGP) [1], and provides an ideal laboratory to study the many-body aspects of Quantum ChromoDynamics (QCD).
High-pT particle production played a key role in the foundation of QCD as a theory of the strong interaction. Shortly after the discovery of point-like constituents inside the proton in Deeply Inelastic Scattering (DIS) experiments at the Stanford Linear Accelerator Center (SLAC) [2] and the observation of particle production at large transverse momenta in p + p collisions at the Intersecting Storage Rings (ISR) at CERN [3], QCD emerged as a mathematically consistent theory [4].
Why were some people studying “high pT” physics in the 1960s?
The quick answer is that they were looking for a “left handed” intermediate boson W±, the proposed carrier of the weak interaction [35]. The possibility that Fermi's point-like weak interaction of β decay [18,19] is transmitted by a boson field had been discussed by Yukawa [222] and other authors; but the modern concept of the parity-violating intermediate vector bosons W± as the quanta that transmit the weak interaction was introduced by Lee and Yang in 1960 [35, 36], to avoid a breakdown of unitarity in neutrino scattering at high energies if the weak interaction remained point-like [35]. Experiments were then proposed to detect the intermediate bosons with high energy neutrinos [37, 38]. Neutrino beams at the new BNL-AGS and CERN-PS accelerators provided the first opportunity to study weak interactions at high energy, whereas previously weak interactions had only been studied via radioactive decay.
The first high energy neutrino experiment at the BNL-AGS [39] set a limit on the mass of the intermediate boson, excluding masses roughly below that of the proton, MW ≲ Mp. However, much more importantly, this experiment discovered that the neutrinos from charged pion decay produced only muons.
Elementary particle physics is the study of the fundamental constituents of matter and the forces between them. It is also called High Energy Physics (HEP) because probing structure at smaller and smaller distance scales requires probes of shorter and shorter wavelength, which correspond to higher and higher energies.
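To make the scale explicit (a standard relation inserted here for orientation, not a statement from the text), the resolving power of a probe of momentum p is set by its reduced de Broglie wavelength,

```latex
\bar{\lambda} \;=\; \frac{\hbar}{p} \;=\; \frac{\hbar c}{pc} \;\approx\; \frac{0.197\ \mathrm{GeV\,fm}}{pc},
```

so resolving structure at distances of order 0.001 fm, far below the roughly 1 fm size of the proton, requires momenta of order 200 GeV/c.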
The field of high energy physics has proceeded for the past ~60 years in a typical sequence: a new accelerator opened up a new range of available energy (or a new type of accelerated particle, e.g. colliding beams of positrons and electrons) and, coupled with new detector technology – which enabled improved or previously impossible measurements – rapidly yielded discoveries soon after it started up. The hadron accelerators which have had the greatest impact on the modern high energy particle and heavy ion physics discussed in this book are shown in Figure 1.1. The upper branch shows four major generations of p–p (p–p̄) colliders, starting from the CERN Intersecting Storage Rings (ISR), the first hadron collider, while the lower branch depicts the major heavy ion facilities in the USA and Europe. The AGS at Brookhaven National Laboratory (BNL) ran for p–p physics from 1960 to 2002 and is now the injector to RHIC. The fixed target p–p programs of Fermilab (1972–) and the CERN-SPS (1976–) are also not shown.
The roads to RHIC and the LHC are highly intertwined. The great experimental discoveries of the late 1960s and early 1970s – DIS in e–p, hard scattering in p–p collisions, the J/ψ – inspired a surge of proposals for new accelerators to study the new phenomena. Of key importance in this development were two major theoretical discoveries: QCD in 1973 as the theory of the strong interactions; and the unification of the electromagnetic and weak interactions by the Glashow-Weinberg-Salam model [95-97] a few years earlier. This model [100] added two new neutral particles to the unified “electroweak” interaction:
(i) a vector boson, Z0, as the carrier of a neutral current weak interaction, which predicted such previously unobserved reactions as νμ + N → νμ + hadrons, with no final state μ, via the exchange of a Z0;
(ii) a neutral scalar “Higgs” boson which “spontaneously” broke the electroweak symmetry of the gauge theory Lagrangian and gave mass to the Z0 and W± bosons while keeping the photon massless.
Brookhaven (BNL) was first in the accelerator competition [657,658]. In 1971, convinced by the success of the proton–proton collider concept at the CERN-ISR, BNL proposed a 200 × 200 GeV (later 400 × 400 GeV) p–p collider with a high luminosity of L = 10³³ cm⁻² s⁻¹ using superconducting magnets, the Intersecting Storage Accelerator, or ISABELLE.
Shortly after the discovery of the radioactivity of uranium by Becquerel in 1896 [155] and its ability to ionize gases, Rutherford [156] began a study of the rate of discharge of a parallel plate capacitor in gas by placing successive layers of thin aluminum foil over the surface of a layer of uranium oxide on one plate. He concluded that “the uranium radiation is complex, and that there are present at least two distinct types of radiation: one that is readily absorbed which will be termed for convenience the α radiation, and the other of a more penetrative character, which will be termed the β radiation.” In 1906, Rutherford [157] observed that α particles from the decay of radium scattered, i.e. deviated from their original direction of motion, when passing through a thin sheet of mica, but did not scatter in vacuum. He made this observation by passing α particles through narrow slits and making an image on a photographic plate. In vacuum, the edges of the image were sharp while the image of α particles that passed through the mica was broadened and showed diffuse edges. This observation was controversial because it was not expected that α particles would scatter [158]: “Since the atom is the seat of intense electrical forces, the β particle in passing through matter should be much more easily deflected from its path than the massive α particle.”