In section 6.2.5 we introduced the strong cosmic censorship hypothesis, which implies that singularities other than the big bang would be unobservable. If this were literally true, then it might be thought that the considerations of this book were physically irrelevant. We shall see, however, that the situation is more complex than this. The main thrust of this book has been the attempt to establish a relation between the curvature strength of singularities (in the sense of section 6.1) and their ‘genuineness’, i.e. whether or not there is an extension through them. The arguments for cosmic censorship suggest that only sufficiently strong singularities might be censored, and so the crucial question becomes whether or not the censored singularities are precisely the genuine ones. In view of the importance of this to the whole study of singularities, I give here a more extended account of the cosmic censorship hypothesis.
The weak hypothesis
The strong cosmic censorship hypothesis was preceded by the weak cosmic censorship hypothesis, first formulated by Penrose (1969), who asked: “does there exist a ‘cosmic censor’ who forbids the appearance of naked singularities, clothing each one in an absolute event horizon?”
This last term was subsequently given more precision in terms of future null infinity J+. A full account of this would take us well beyond the scope of this book. In outline, however, J+ is a boundary attached to a conformal extension of a given space-time (M,g).
In this chapter we introduce the basic idea of our neutrino decay theory. According to this idea (Sciama 1990a) the widespread ionisation of the Milky Way is mainly due to photons emitted by dark matter neutrinos pervading the Galaxy. This idea was proposed because it would immediately solve all the problems described in chapter 5, which arise from the conventional hypothesis that the ionisation sources are bright stars or supernovae. In particular, the ubiquity of the neutrinos could compensate for the small mean free path (≲ 1 pc) of the ionising photons in the intercloud medium, and their halo distribution could account for the large scale height (∼ 670 pc) of the ionised gas in the Reynolds layer.
Of course we can exploit these structural features of the basic idea only if the neutrino decay lifetime τ that would be required is otherwise reasonable. We shall find that we need τ ∼ 2 to 3 × 10²³ sec. This value is (just) compatible with the lower limits derived in chapter 8, and with certain particle physics theories which are described there. Adopting this lifetime would also have major implications for a large variety of phenomena in astronomy and cosmology other than the ionisation of the Galaxy, and would enable several puzzling problems to be solved.
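An order-of-magnitude sketch shows why a lifetime of this size is the natural scale for the idea. The local dark matter density and neutrino mass used below are assumed illustrative values, not figures taken from the text:

```python
# Rough illustration (assumed values, not from the text): the photon
# production rate per unit volume from neutrino decay is n_nu / tau.
RHO_HALO_EV = 0.3e9   # assumed local halo dark matter density, eV per cm^3
M_NU_EV = 28.0        # assumed neutrino rest mass, eV (order of magnitude)
TAU_S = 2.5e23        # decay lifetime quoted in the text, ~2-3 x 10^23 sec

n_nu = RHO_HALO_EV / M_NU_EV   # neutrino number density, ~1e7 per cm^3
rate = n_nu / TAU_S            # ionising photons emitted per cm^3 per sec

print(f"n_nu ~ {n_nu:.1e} cm^-3, photon rate ~ {rate:.1e} cm^-3 s^-1")
```

Even with so long a lifetime, the enormous neutrino number density yields a photon supply of a few times 10⁻¹⁷ cm⁻³ s⁻¹, which is the kind of smoothly distributed source the ionisation problems call for.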
The most remarkable consequence of the resulting theory is that its domain of validity is highly constrained. As we shall see, it can be correct only if the decay photon energy Eγ, the rest mass mν of the decaying neutrinos, and the Hubble constant H0 each has a value specified with a precision of ∼ 1 per cent.
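The fine-tuning of Eγ and mν follows from standard two-body decay kinematics. The sketch below uses the textbook formula; the specific masses are illustrative, not values asserted by the text:

```python
# Standard two-body kinematics (illustrative, not verbatim from the text):
# for nu_1 -> nu_2 + gamma the photon energy in the parent rest frame is
# E_gamma = (m1^2 - m2^2) c^2 / (2 m1), tending to m1 c^2 / 2 for m2 << m1.
def photon_energy_ev(m1_ev, m2_ev):
    """Decay photon energy (eV) in the rest frame of the parent neutrino."""
    return (m1_ev**2 - m2_ev**2) / (2.0 * m1_ev)

RYDBERG_EV = 13.6  # hydrogen ionisation threshold

# If E_gamma must only just exceed 13.6 eV, the parent mass is pinned near
# twice that value (assuming the daughter neutrino is much lighter):
m_nu = 2.0 * RYDBERG_EV  # ~27.2 eV, illustrative
print(f"m_nu ~ {m_nu} eV gives E_gamma ~ {photon_energy_ev(m_nu, 0.0):.1f} eV")
```

Because Eγ ≈ mνc²/2, a requirement that Eγ lie within about 1 per cent of the hydrogen threshold pins mν to similar precision.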
The interstellar medium of our Galaxy contains a widespread component of ionised gas with fairly well-determined average properties. The source of the ionisation has puzzled astrophysicists for many years (e.g. Mathis 1986, Kulkarni and Heiles 1987, Reynolds 1991, 1992, Walterbos 1991, Heiles 1991). There are five major problems which contribute to the mystery. They are the following:
(i) The scale height of the ionised gas is ∼ 670 pc (Nordgren et al. 1992), whereas the sources usually considered (e.g. ionising radiation from O stars or supernovae) have a much smaller scale height (∼ 100 pc).
(ii) The power requirements which the sources must satisfy in order to maintain the ionisation are rather large and probably rule out any conventional source except O stars (Reynolds 1990b).
(iii) The interstellar medium is normally regarded as being highly opaque to hydrogen-ionising radiation, so that it is not clear how this radiation can travel hundreds of parsecs from the parent O stars to produce the diffuse ionised gas (Mathis 1986, Reynolds 1984, 1987, 1992, Heiles 1991).
(iv) The same opacity problem arises when one studies in detail (Reynolds 1990a) the ionisation along the line segments to two pulsars with accurately known parallactic distances.
(v) The mean electron density in opaque intercloud regions within a few hundred parsecs of the sun is remarkably constant in different directions (Reynolds 1990a, Sciama 1990c). If the opaque gas has a sufficiently tortuous distribution to explain problem (iii), it is surprising that the resultant electron density is so uniform.
In this chapter we shall elaborate on these problems.
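The opacity at the heart of problem (iii) can be made quantitative with a one-line estimate. The neutral density adopted below is an assumed illustrative value for the intercloud medium, and the cross-section is the standard hydrogen photoionisation value at threshold:

```python
# Illustration of problem (iii): the mean free path of a photon just above
# the Lyman limit in neutral hydrogen is lambda = 1 / (n_HI * sigma_0).
SIGMA_0 = 6.3e-18  # H photoionisation cross-section at threshold, cm^2
PC_CM = 3.086e18   # one parsec in cm
n_HI = 0.2         # assumed intercloud neutral hydrogen density, cm^-3

mfp_pc = 1.0 / (n_HI * SIGMA_0) / PC_CM
print(f"mean free path ~ {mfp_pc:.2f} pc")  # a fraction of a parsec
```

A mean free path of a fraction of a parsec makes the hundreds of parsecs required of O-star photons look very hard to achieve without a special gas geometry.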
The intergalactic flux of hydrogen-ionising photons plays a crucial role in the neutrino decay theory described in part 3 of this book. Accordingly in the present chapter we shall consider various observational estimates of this flux, evaluated at different cosmic epochs. While these estimates are rather uncertain, we shall find that they generally exceed the most recent determinations of the integrated contribution from quasars, which is usually regarded as the main source of the intergalactic ionising flux. Various other photon sources at high red shifts have been proposed to fill the gap, and these are discussed at the end of the chapter. In chapter 11 we shall find that the neutrino decay theory can account for the unexplained flux, but only if various parameters both of the theory and of the universe possess highly constrained values. These values are in general agreement with other estimates of them, and in some cases this agreement is rather precise.
The Density of Intergalactic Neutral Hydrogen
As soon as quasars of red shift ∼ 2 were identified by Schmidt (1965), various authors pointed out that they could be used to probe the density of intergalactic neutral hydrogen. This density could then be used in turn to constrain the intergalactic flux of ionising photons. Consider a layer of neutral hydrogen lying at a red shift z along the line of sight to a quasar of greater red shift Z. Photons emitted by the quasar and reaching the layer with the wavelength of Lyman α would be able to excite neutral hydrogen in the layer to its first excited state.
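The geometry just described can be sketched numerically. The wavelengths are in Ångström and the example red shifts are arbitrary:

```python
# Sketch of the absorbing-layer geometry: a photon leaving the quasar
# (red shift Z) at wavelength lam_e reaches a layer at red shift z < Z with
# wavelength lam_e * (1 + z) / (1 + Z); it excites Lyman-alpha when this
# equals 1215.67 A, and we then observe the absorption at 1215.67*(1 + z) A.
LYA = 1215.67  # Lyman-alpha rest wavelength, Angstrom

def emitted_wavelength(z_layer, z_quasar):
    """Quasar-frame wavelength absorbed by a layer at z_layer."""
    return LYA * (1.0 + z_layer) / (1.0 + z_quasar)

def observed_wavelength(z_layer):
    """Wavelength at which we see the layer's Lyman-alpha absorption."""
    return LYA * (1.0 + z_layer)

# e.g. a layer at z = 1.5 in front of a quasar at Z = 2:
print(emitted_wavelength(1.5, 2.0), observed_wavelength(1.5))
```

Because each layer along the line of sight imprints its absorption at its own value of 1215.67 (1 + z) Å, the quasar spectrum shortward of its own Lyman α probes the neutral hydrogen at every intervening red shift.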
I started writing Modern Cosmology in 1969, just four years after the discovery of the 3 K cosmic microwave background. The significance of that remarkable discovery was rapidly appreciated by cosmologists, and it naturally dominated a large part of my book. Now, nearly a quarter of a century later, a new topic has come to dominate cosmology, namely, the dark matter problem. This problem, however, is not at all well understood. According to modern estimates some of the dark matter is in the form of ordinary particles — protons, neutrons and electrons — while some of it has a more exotic character. We do not know what form the ordinary dark matter takes, and we do not even know the identity of the exotic dark matter. Yet together they are a pervasive and indeed dominating constituent of the universe, in galaxies, groups and clusters of galaxies and intergalactic space. I therefore thought it desirable to update my book by writing a connected account of what has now become the single most important problem in astronomy and cosmology.
I must confess that I have a second reason for writing this book. In 1990 I proposed the idea that most of the widespread ionisation of hydrogen observed in our Galaxy is produced by photons emitted by decaying dark matter neutrinos of non-zero rest-mass. The original motivation for this proposal was that the observed ionisation was puzzling astronomers because it seemed difficult to account for in terms of known sources of ionisation.
This theory, once proposed, rapidly took on a life of its own.
The detection of dark matter in astronomy has a long history. In past years it was called “the astronomy of the invisible”. The story begins in 1844 when, by chance, two different dark matter problems were identified. In that year it was noted that the planet Uranus had moved away from its calculated position by as much as two minutes of arc. In the same year F. W. Bessell drew attention to the sinuous motion of the star Sirius, the brightest star in the sky.
The subsequent development of the Uranus problem led to one of the most famous stories in the history of astronomy. In 1845 J. C. Adams, who had just ceased to be an undergraduate at Cambridge University, succeeded in calculating fairly accurately the position of a hypothetical planet whose gravitational effect on Uranus might be responsible for its disturbed motion. He attempted unsuccessfully to interest the Astronomer Royal G. B. Airy in this prediction. Apparently Airy had attributed the discrepancy to a departure from Newton's law of gravity. Perhaps also he was unimpressed by the student's youth.
Independently of Adams, in 1846 the Frenchman Le Verrier calculated the position of the hypothetical planet with a precision of 1 degree. (For a much shortened version of the needed calculations see Lyttleton 1958). Le Verrier contacted a German astronomer, Galle, at the Berlin Observatory, who rapidly succeeded in observing a new planet (Neptune) within 1 degree of the predicted position. The discovery of this planet (no longer “dark”) is widely considered to be a triumph of nineteenth century science, and naturally became the subject of chauvinistic controversy.
We saw in the last chapter that the Milky Way contains diffuse ionised gas (DIG) with a large scale height. We also saw that there is strong, but not decisive, evidence that conventional sources in the Galaxy are not adequate to account for the observed ionisation. What seem to be needed are sources which are smoothly distributed, so that the opacity of the neutral hydrogen can be overcome, and which possess a large enough scale height to account for the large scale height of the DIG. Dark matter neutrinos in the Galaxy would be expected to possess both these properties, as we discuss in detail in the next part of this book. If the radiative decay of these neutrinos is to be a serious candidate for the ionisation source of the DIG in our Galaxy, we would expect to find the same ionisation problems in nearby galaxies whose structure is similar to ours. This is the subject of the present chapter.
There is one advantage and one disadvantage in studying the ionisation in other galaxies. The advantage is that by observing from a point outside the galaxies it is easier to discover the global properties of the ionisation. The disadvantage is that pulsars are not observable in other galaxies (except the Magellanic Clouds), so that we cannot use the pulsar dispersion measure to determine the distribution of the electron density, and have to rely on measurements of Hα and other emission lines. As we shall see, it has been possible by these means to observe the DIG in nearby galaxies and to discover that conventional ionisation sources in these galaxies again seem to be inadequate.
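The loss of the pulsar diagnostic matters because dispersion and emission measures weight the electron density differently, so they are not interchangeable. The numbers in the sketch below are purely illustrative:

```python
# The pulsar dispersion measure DM = integral of n_e dl is linear in n_e,
# while the Halpha emission measure EM = integral of n_e^2 dl is quadratic,
# so a clumpy medium gives a larger EM at fixed DM.  Illustrative numbers.
def dm_em(segments):
    """segments: list of (electron density in cm^-3, path length in pc)."""
    dm = sum(n * l for n, l in segments)      # DM in pc cm^-3
    em = sum(n * n * l for n, l in segments)  # EM in pc cm^-6
    return dm, em

uniform = [(0.1, 1000.0)]              # n_e = 0.1 cm^-3 over 1 kpc
clumpy = [(0.5, 200.0), (0.0, 800.0)]  # same column packed into 20% of path

dm_u, em_u = dm_em(uniform)
dm_c, em_c = dm_em(clumpy)
print(dm_u, em_u)  # DM = 100 pc cm^-3, EM = 10 pc cm^-6
print(dm_c, em_c)  # DM = 100 pc cm^-3, EM = 50 pc cm^-6
```

Hα observations alone therefore constrain the mean square density rather than the mean density, which is why the filling factor of the gas enters all inferences about other galaxies.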
In this chapter we study the implications of the neutrino decay theory for the reionisation of the universe and the consequent suppression of fluctuations in the microwave background. We saw on page 48 that we expect the early high temperature universe to have become neutral at a red shift ∼ 1000, when it had cooled down to a temperature ∼ 3000 K. On the other hand we know from considerations of the Gunn-Peterson effect that the intergalactic medium is highly ionised at red shifts between zero and 4.9. The questions then arise, at what red shift between 4.9 and 1000 did the reionisation occur, and by what process?
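The pair of figures quoted above, z ∼ 1000 and T ∼ 3000 K, are tied together by the linear scaling of the radiation temperature with red shift. A minimal consistency check:

```python
# The microwave background temperature scales as T(z) = T0 * (1 + z),
# so z ~ 1000 with T0 = 2.73 K lands near the ~3000 K at which hydrogen
# (re)combines and the universe becomes neutral.
T0 = 2.73  # present microwave background temperature, K

def temperature(z):
    """Radiation temperature at red shift z, in K."""
    return T0 * (1.0 + z)

print(temperature(1000))  # ~2733 K, close to the quoted ~3000 K
```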
These questions, and the general thermal history of the universe, have been much discussed. They are obviously relevant to our understanding of the processes of galaxy formation. In addition it has long been realised that they play a crucial role in determining the present anisotropy ΔT/T of the microwave background on small angular scales. As has often been discussed (e.g. Efstathiou 1988 and references cited therein), if the postrecombination universe had been reionised so early that its optical depth for Thomson scattering exceeded unity, then the ΔT/T induced by fluctuations associated with galaxy formation after recombination at z ∼ 1000 would have been severely attenuated by z = 0. This is an important question because the present stringent observational limits on ΔT/T at small angular scales would impose severe constraints on several theories of galaxy formation in the absence of a scattering screen.
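To see how early the reionisation must occur for such a scattering screen to form, one can sketch the Thomson optical depth of a fully ionised Einstein-de Sitter model, for which τ(z) grows as (1 + z)^{3/2}. The electron density and Hubble constant below are assumed illustrative values, not parameters endorsed by the text:

```python
# Sketch of the Thomson-scattering screen: for a fully ionised
# Einstein-de Sitter universe with n_e(z) = n_e0 (1+z)^3, integrating
# n_e sigma_T c dt gives
#   tau(z) = (2 c sigma_T n_e0 / (3 H0)) * ((1+z)**1.5 - 1).
C = 3.0e10           # speed of light, cm/s
SIGMA_T = 6.652e-25  # Thomson cross-section, cm^2
H0 = 1.62e-18        # assumed Hubble constant, 50 km/s/Mpc in s^-1
N_E0 = 1.4e-7        # assumed present electron density (all baryons ionised)

def tau(z):
    """Thomson optical depth back to red shift z."""
    return (2.0 * C * SIGMA_T * N_E0 / (3.0 * H0)) * ((1.0 + z)**1.5 - 1.0)

# Step outward in red shift until the universe becomes opaque:
z = 0.0
while tau(z) < 1.0:
    z += 1.0
print(f"tau = 1 near z ~ {z:.0f}")  # of order 100 for these parameters
```

With these parameters the screen forms only if the gas was reionised by a red shift of order 100, which is why the epoch of reionisation controls whether primordial ΔT/T fluctuations survive to be observed.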