The second variant of Milgrom’s theory (T1) derives the equations of motion from a Lagrangian formulation, allowing the theory to make predictions for a very general class of systems with arbitrary shapes. A number of confirmed, novel predictions follow from T1 including the “central surface density relation” (CSDR) and a prediction about the form of the vertical force law in the Milky Way. The theory also predicts a very low rate of mergers between galaxies, a prediction that may or may not be consistent with observations. However, the theory’s predictions about the kinematics of the largest bound structures in the universe, the galaxy clusters, appear to be incorrect.
The first variant of Milgrom’s theory (T0) consists simply of his three postulates from 1983. These postulates entail a number of novel predictions, predictions that have subsequently been confirmed by observational astrophysicists. The first is the “baryonic Tully–Fisher relation” (BTFR), a unique relation between the total mass of a galaxy and its asymptotic rotation speed. An even more surprising prediction is the “radial acceleration relation” (RAR), which states that the rotation speed anywhere in a disk galaxy is determined precisely by the observed distribution of matter – but not in the way that Newton’s laws would predict. According to Lakatos’s Methodology, these, and some other, successful novel predictions imply that Milgrom’s postulates constitute a progressive departure from the dark matter hypotheses of the standard cosmological model.
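The BTFR mentioned above can be stated compactly. As a sketch in standard notation (the abstract itself does not display the formula), with M_b denoting the galaxy’s total baryonic mass and a_0 Milgrom’s acceleration scale:

```latex
% Baryonic Tully–Fisher relation: the asymptotic (flat) rotation
% speed v_\infty is fixed by the total baryonic mass alone.
v_{\infty}^{4} = G \, M_{\mathrm{b}} \, a_{0}
```

Newtonian gravity with dark matter provides no a priori reason for so tight a relation between baryonic mass and rotation speed, which is why its confirmation counts as a novel prediction.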
Karl Popper’s logical and epistemological insights are the basis of a widely used methodology for judging the success of scientific theories – or more accurately, of scientific “research programs,” defined as the evolving set of theories that share a common set of assumptions (or “paradigm” in the language of Thomas Kuhn). Imre Lakatos’s Methodology of Scientific Research Programs judges an evolving theory in terms of how it responds to falsifying instances – via ad hoc adjustments (bad) or via content-increasing hypotheses (good) – and how well it predicts facts in advance of their discovery. A theory that evolves in content-increasing ways, and that predicts novel facts in advance of their confirmation, is called “progressive”; a theory that fails to do so is called “degenerating.” Particularly important are predictions that differ from those of a competing theory – which in the case of MOND is the standard cosmological model.
The third variant of Milgrom’s theory (T2) is a relativistic theory. A number of relativistic variants have been proposed. The lack of a unique relativistic theory complicates the generation of testable predictions, but two predictions are only weakly dependent on the form of the relativistic Lagrangian: an early epoch of cosmic reionization, and a particular value for the amplitude ratio of the first and second peaks in the power spectrum of cosmic microwave background (CMB) temperature fluctuations. The first of these two predictions is possibly corroborated, and the second has definitely been confirmed. However, no relativistic version of MOND has yet been proposed that can accurately explain the full CMB spectrum.
Jean Perrin argued in the early twentieth century that the agreement, or “convergence,” of measured values of Avogadro’s number was compelling evidence for the existence of atoms. Max Planck argued, in a similar way, that the convergence of measured values of Planck’s constant was compelling evidence for the quantization of energy. Philosopher John Losee has argued that convergence of the measured value of a new constant of nature is the strongest possible evidence for the correctness of the theory that contains the constant. Milgrom’s theory contains such a constant (the “acceleration scale,” a0, or “Milgrom’s constant”), and this chapter presents the results of observational determinations of the value of that constant. The values are convergent, suggesting, according to Losee’s argument, that Milgrom was justified in postulating a modified acceleration law in place of dark matter.
This chapter summarizes the results from Chapters 4–8 and speculates on a question that is posed at the start of the book: Why is it that Milgrom’s theory, in spite of its remarkable successes, has been so widely ignored by standard-model cosmologists? It is argued that the approach of standard-model cosmologists to theory corroboration differs from that of Milgromian researchers: the former behave like verificationists, the latter like critical rationalists. Unlike Milgrom’s theory, the standard cosmological model has made few if any successful novel predictions, and so it is understandable that standard-model cosmologists have gravitated toward a methodology that favors verification over critical rationalism.
In 1983, the physicist Mordehai Milgrom initiated a new research program in cosmology, called MOND (for MOdified Newtonian Dynamics), or Milgromian dynamics. In three papers, Milgrom proposed a set of postulates describing how Newton’s laws of gravity and motion should be changed in regimes of very low acceleration. Milgrom’s postulates were designed to explain the asymptotic flatness of galaxy rotation curves, without the necessity of postulating the existence of “dark matter”. Milgrom showed that a number of other, novel predictions follow from his three postulates, and proposed these predictions as tests of the theory. Milgrom also proposed a set of guiding principles for how his nascent theory should be developed toward a more complete theory of gravity and cosmology.
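The low-acceleration modification can be sketched as follows (one common formulation; the interpolating function μ is not uniquely fixed by the postulates):

```latex
% Milgrom's modified dynamics: the true acceleration a is related to
% the Newtonian acceleration a_N through an interpolating function
% \mu, which depends on a measured against the scale a_0.
\mu\!\left(\frac{a}{a_{0}}\right) a = a_{N},
\qquad \mu(x) \simeq 1 \ \ (x \gg 1),
\qquad \mu(x) \simeq x \ \ (x \ll 1)
```

In the deep-MOND limit this gives a = \sqrt{a_{N} a_{0}}, which for a point mass yields a circular speed independent of radius, i.e. an asymptotically flat rotation curve.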
The fourth variant of Milgrom’s theory (T3) is due to Justin Khoury and Lasha Berezhiani. These researchers made a change to Milgrom’s postulates in order to address the failings of theory variants T0–T2. They introduced dark matter into the theory, but postulated that, on scales corresponding to galaxies, the dark matter acts like a superfluid condensate. By adjusting the equation of state of the superfluid, they showed that the interaction of the dark matter with normal matter – via Newtonian gravity, but also via coupling to the superfluid phonons – can result in an acceleration that approximates Milgrom’s modified dynamics in the low-acceleration regime. On large scales, the dark matter would behave like the dark matter in the standard model. Although still in an early stage of development, Khoury and Berezhiani’s theory has not yet demonstrated an increase in testable content compared with T0–T2, and in fact much of the content of the earlier theory variants is lost.
Scientific epistemology begins from the idea that the truth of a universal statement, such as a scientific law, can never be conclusively proved. No matter how successful a hypothesis has been in the past, it can always turn out to make incorrect predictions when applied in a new situation. Karl Popper argued that the most important experimental results are those that falsify a theory, and he proposed falsifiability as a criterion for distinguishing science from pseudoscience. Popper argued in addition that scientists should respond to falsifications in a particular way: not by ad hoc adjustments of their theories, but in a way that expands the theory’s explanatory content. Popper argued that the success of a modified theory should be judged in terms of its success at making new predictions. Popper’s view of epistemology, which is shared by many scientists and philosophers of science, is called “critical rationalism.” An epistemology that judges success purely in terms of a theory’s success at explaining known facts is called “verificationism.” Popper argued that verificationism is equivalent to a belief in induction, and that induction is a fallacy.
One of the greatest challenges in fundamental physics is to reconcile quantum mechanics and general relativity in a theory of quantum gravity. A successful theory would have profound consequences for our understanding of space, time, and matter. This collection of essays, written by eminent physicists and philosophers, discusses these consequences and examines the most important conceptual questions that philosophers and physicists confront in the search for a quantum theory of gravity. Comprising three parts, the book explores the emergence of classical spacetime, the nature of time, and important questions of the interpretation, metaphysics, and epistemology of quantum gravity. These essays will appeal to both physicists and philosophers of science working on problems in foundational physics, specifically quantum gravity.
Dark matter is a fundamental component of the standard cosmological model, but in spite of four decades of increasingly sensitive searches, no one has yet detected a single dark-matter particle in the laboratory. An alternative cosmological paradigm exists: MOND (Modified Newtonian Dynamics). Observations explained in the standard model by postulating dark matter are described in MOND by proposing a modification of Newton’s laws of motion. Both MOND and the standard model have had successes and failures – but only MOND has repeatedly predicted observational facts in advance of their discovery. In this volume, David Merritt outlines why such predictions are considered by many philosophers of science to be the “gold standard” when it comes to judging a theory’s validity. In a world where the standard model receives most attention, the author applies criteria from the philosophy of science to assess, in a systematic way, the viability of this alternative cosmological paradigm.
Einstein’s GTR initiated a new programme for describing fundamental interactions, in which the dynamics was described in geometrical terms. After Einstein’s classic paper on GTR (1916c), the programme was carried out by a sequence of theories. This chapter is devoted to discussing the ontological commitments of the programme (Section 5.2) and to reviewing its evolution (Section 5.3), including some topics (singularities, horizons, and black holes) that began to stimulate a new understanding of GTR only after Einstein’s death (Section 5.4), with the exception of some recent attempts to incorporate the idea of quantization, which will be addressed in Chapter 11. Considering the enormous influence of Einstein’s work on the genesis and developments of the programme, it seems reasonable to start this chapter with an examination of Einstein’s views of spacetime and geometry (Section 5.1), which underlie his programme.
In comparison with STR, which is a static theory of the kinematic structures of Minkowskian spacetime, the general theory of relativity (GTR), as a dynamical theory of the geometrical structures of spacetime, is essentially a theory of gravitational fields. The first step in the transition from STR to GTR, as we discussed in Section 3.4, was the formulation of EP, through which the inertial structures of the relative spaces of uniformly accelerated frames of reference can be represented by static homogeneous gravitational fields. The next step was to apply the idea of EP to uniformly rotating rigid systems. Then Einstein (1912a) found that the presence of the resulting stationary gravitational fields invalidated Euclidean geometry. In a manner characteristic of his style of theorizing, Einstein (with Grossmann, 1913) immediately generalized this result and concluded that the presence of a gravitational field generally required a non-Euclidean geometry and that the gravitational field could be mathematically described by a four-dimensional Riemannian metric tensor gμν (Section 4.1). With the discovery of the generally covariant field equations satisfied by gμν, Einstein (1915a–d) completed his formulation of GTR.
This chapter is devoted to examining the mathematical, physical, and speculative roots of nonabelian gauge theory. The early attempts at applying this theoretical framework to various physical processes will be reviewed, and the reasons for their failures explained.