Astronomy needs statistical methods to interpret data, but statistics is a many-faceted subject that is difficult for non-specialists to access. This handbook helps astronomers analyze the complex data and models of modern astronomy. This second edition has been revised to feature many more examples using Monte Carlo simulations, and now also includes Bayesian inference, Bayes factors and Markov chain Monte Carlo integration. Chapters cover basic probability, correlation analysis, hypothesis testing, Bayesian modelling, time series analysis, luminosity functions and clustering. Exercises at the end of each chapter guide readers through the techniques and tests necessary for most observational investigations. The data tables, solutions to problems, and other resources are available online at www.cambridge.org/9780521732499. Bringing together the most relevant statistical and probabilistic techniques for use in observational astronomy, this handbook is a practical manual for advanced undergraduate and graduate students and professional astronomers.
In the past, Stephen Senn's (2003) analogy may have been apt. Paraphrasing his words for the sake of moderate language: scientists regarded statistics as the one-night stand, the quick fix that avoids long-term entanglement. This analogy is no longer apt. Statistical procedures now drive many if not most areas of current astrophysics and cosmology. In particular, the currently understood nature of our Universe is a product of statistical analysis of large and combined data sets. Here we briefly describe the scene in three areas that dominate the definition of the current model of the Universe and its history. These three areas inextricably tie together the shape and content of the Universe and the formation of structure and galaxies, leading to life as we know it. While these sketches are not reviews, we show by cross-referencing how frequently our preceding discussions play into current research in cosmology.
The galaxy universe
The story of galaxy formation since 1990 is based on two premises. Firstly, it was widely accepted that the matter content in the Universe is primarily cold and dark – CDM prevails. The recognition of dark matter was slow, despite Zwicky (1937) demonstrating its existence via the cosmic virial theorem. The measurements of rotation curves of spiral galaxies (e.g. Rubin et al., 1980) convinced us.
Peter Scheuer started this. In 1977 he walked into JVW's office in the Cavendish Lab and quietly asked for advice on what further material should be taught to the new intake of Radio Astronomy graduate students (that year including the hapless CRJ). JVW, wrestling with simple chi-square testing at the time, blurted out ‘They know nothing about practical statistics …’. Peter left thoughtfully. A day later he returned. ‘Good news! The Management Board has decided that the students are going to have a course on practical statistics.’ ‘Can I sit in?’ JVW asked innocently. ‘Better news! The Management Board has decided that you're going to teach it …’.
So, for us, began the notion of practical statistics. A subject that began with gambling is not an arcane academic pursuit, but it is certainly subtle as well. It is fitting that Peter Scheuer was involved at the beginning of this (lengthy) project; his style of science exemplified both subtlety and pragmatism. We hope that we can convey something of both. If an echo of Peter's booming laugh is sometimes heard in these pages, it is because we both learned from him that a useful answer is often much easier – and certainly much more entertaining – than you at first think.
After the initial course, the material for this book grew out of various further courses, journal articles and the abundant personal experience that results from understanding just a little of any field of knowledge that counts Gauss and Laplace amongst its originators.
Teaching is highly educational for teachers. Teaching from the first edition revealed to us how much students enjoyed Monte Carlo methods, and the ability such methods give to check every derivation, test, procedure or result in the book. Thus, one change in the second edition is to introduce Monte Carlo methods as early as possible (Chapter 2). Teaching also revealed to us areas in which we assumed too much (and too little). We have therefore aimed for some smoothing of learning gradients where slope changes appeared to be too sudden. Chapters 6 and 7 substantially amplify our previous treatments of Bayesian hypothesis testing and modelling, and include much more on model choice and Markov chain Monte Carlo (MCMC) analysis. Our previous chapter on 2D (sky distribution) analysis has been significantly revised. We have added a final chapter sketching the application of statistics to some current areas of astrophysics and cosmology, including galaxy formation and large-scale structure, weak gravitational lensing, and the cosmic microwave background (CMB) radiation.
We received very helpful comments from anonymous referees whom CUP consulted about our proposals for the second edition. These reviewers requested that we keep the book (a) practical and (b) concise – small, or ‘backpackable’, as one of them put it. We have additional colleagues to thank, for further discussions, for finding errata, or because we just plain missed them from our first-edition list: Matthew Colless, Jim Condon, Mike Disney, Alan Heavens, Martin Hendry, Jim Moran, Douglas Scott, Robert Smith and Malte Tewes.
Frustra fit per plura quod potest fieri per pauciora – it is futile to do with more things that which can be done with fewer.
(William of Ockham, c.1285–1349)
Nature laughs at the difficulties of integration.
(Pierre-Simon de Laplace, 1749–1827; Gordon & Sorkin, 1959)
One of the attractive features of the Bayesian method is that it offers a principled way of making choices between models. In classical statistics, we may fit a model, say by least squares, and then use the resulting χ² statistic to decide if we should reject the model. We would do this if the deviations from the model are unlikely to have occurred by chance. However, it is not clear what to do if the deviations are likely to have occurred by chance, and it is even less clear what to do if several models are available. For example, if a model is in fact correct, the significance level derived from a χ² test (or, indeed, any significance test) will be uniformly distributed between zero and one (Exercise 7.1).
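The uniformity claim is easy to check by simulation, in the Monte Carlo spirit of Chapter 2. Here is a minimal Python sketch of ours (assuming numpy and scipy are available; it stands in for Exercise 7.1 rather than reproducing it): repeated data sets are drawn from the true model, and the resulting χ² significance levels spread evenly over (0, 1).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_sims, n_points = 5000, 20

    p_values = []
    for _ in range(n_sims):
        # Data drawn from the model assumed in the test: pure unit-variance noise
        data = rng.normal(0.0, 1.0, n_points)
        chisq = np.sum(data ** 2)  # chi-square statistic against the true model
        p_values.append(stats.chi2.sf(chisq, df=n_points))  # significance level

    # Under the true model the significance levels are uniform on (0, 1)
    counts, _ = np.histogram(p_values, bins=10, range=(0.0, 1.0))
    print(counts)  # roughly 500 per bin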
The problem with model choice by χ² (or any similar classical method) is that these methods do not answer the question we wish to ask. For a model H and data D, a significance level derived from a minimum χ² tells us about the conditional probability prob(D | H); what we actually want is prob(H | D), the probability of the model given the data.
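To make the contrast concrete, here is a hedged sketch of ours (not the book's; the data, the two point hypotheses and the equal priors are illustrative assumptions) in which Bayes' theorem converts the likelihoods prob(D | H) into the posterior probabilities prob(H | D) we actually want:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.normal(1.0, 1.0, 10)  # data secretly drawn from H1

    # Likelihoods prob(D | H) for two fully specified hypotheses about the mean
    like_h0 = np.prod(stats.norm.pdf(data, loc=0.0, scale=1.0))
    like_h1 = np.prod(stats.norm.pdf(data, loc=1.0, scale=1.0))

    # With equal prior probabilities, Bayes' theorem gives prob(H | D)
    post_h1 = like_h1 / (like_h0 + like_h1)
    print(f"prob(D|H1)/prob(D|H0) = {like_h1 / like_h0:.3g}")
    print(f"prob(H1|D) = {post_h1:.3f}")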
The stock market is an excellent economic forecaster. It has predicted six of the last three recessions.
(Paul Samuelson)
The only function of economic forecasting is to make astrology look respectable.
(John Kenneth Galbraith)
In contrast to previous chapters, we now consider data transformation: how to transform data in order to produce improved outcomes in either extracting or enhancing signal.
Many observations consist of sequential data: intensity as a function of position as a radio telescope scans across the sky, signal varying along a row of a CCD detector, single-slit spectra, or time measurements of intensity (or of any other property). What sort of issues might concern us?
(i) trend-finding: can we predict the future behaviour of the data?
(ii) baseline detection and/or assessment, so that signal on this baseline can be analysed;
(iii) signal detection: identification, for example, of a spectral line or source in sequential data for which the noise may be comparable in magnitude to the signal;
(iv) filtering to improve the signal-to-noise ratio (a sketch follows this list);
(v) quantifying the noise;
(vi) period-finding: searching the data for periodicities;
(vii) correlation of time series, to find correlated signal between antenna pairs or to find spectral lines;
(viii) modelling: many astronomical systems give us our data convolved with some more or less known instrumental function, and we need to take this into account to get back to the true data.
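As a concrete instance of item (iv), the following minimal sketch of ours (the Gaussian line shape, its width and the noise level are illustrative assumptions) smooths noisy sequential data with a kernel matched to the expected line width, and compares the peak signal-to-noise ratio before and after:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(512)
    line = np.exp(-0.5 * ((x - 256) / 8.0) ** 2)  # weak spectral line, width 8
    data = line + rng.normal(0.0, 1.0, x.size)    # noise comparable to the signal

    # Gaussian kernel matched to the line width, normalized so that the
    # output noise level equals the input noise level
    u = np.arange(-32, 33)
    kernel = np.exp(-0.5 * (u / 8.0) ** 2)
    kernel /= np.sqrt(np.sum(kernel ** 2))
    smoothed = np.convolve(data, kernel, mode="same")

    # Estimate the noise from signal-free channels; compare the peak S/N
    print(f"raw peak S/N      ~ {data[256] / np.std(data[:200]):.1f}")
    print(f"filtered peak S/N ~ {smoothed[256] / np.std(smoothed[:200]):.1f}")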
Statistics, the most important science in the whole world: for upon it depends the practical application of every other science and of every art.
(Florence Nightingale)
If your experiment needs statistics, you ought to have done a better experiment.
(Ernest Rutherford)
Science is about decision. Building instruments, collecting data, reducing data, compiling catalogues, classifying, doing theory – all of these are necessary tools, techniques or aspects. But we are not doing science unless we are deciding something; only decision counts. Is this hypothesis or theory correct? If not, why not? Are these data self-consistent, or consistent with other data? Are they adequate to answer the question posed? What further experiments do they suggest?
We decide by comparing. We compare by describing properties of an object or sample, because lists of numbers or images do not present us with immediate results enabling us to decide anything. Is the faint smudge on an image a star or a galaxy? We characterize its shape, crudely perhaps, by a property, say the full-width half-maximum, the FWHM, which we compare with the FWHM of the point-spread function. We have represented a data set, the image of the object, by a statistic, and in so doing we reach a decision.
Statistics are there for decision-making; they work because we know a background against which to take a decision.
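The star/galaxy decision above can be made concrete. In this minimal sketch of ours (the profiles and the 1.5 threshold are illustrative assumptions), a one-dimensional image profile is reduced to a single statistic, its FWHM, which is then compared with that of the point-spread function:

    import numpy as np

    def fwhm(profile):
        # Full-width half-maximum of a 1-D profile, in pixels
        above = np.where(profile >= profile.max() / 2.0)[0]
        return above[-1] - above[0]

    x = np.arange(101)
    psf = np.exp(-0.5 * ((x - 50) / 3.0) ** 2)     # point-spread function
    smudge = np.exp(-0.5 * ((x - 50) / 6.0) ** 2)  # the faint smudge

    # Decision: an object much broader than the PSF is resolved - a galaxy
    is_galaxy = fwhm(smudge) > 1.5 * fwhm(psf)     # 1.5 is an assumed threshold
    print(f"PSF FWHM = {fwhm(psf)} px, object FWHM = {fwhm(smudge)} px")
    print(f"galaxy: {is_galaxy}")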
In embarking on statistics we are entering a vast area, enormously developed for the Gaussian distribution in particular. This is classical territory; historically, statistics were developed because the approach now called Bayesian had fallen out of favour. Direct probabilistic inference was thus superseded by an indirect and conceptually different route, one that goes through statistics and is intimately linked to hypothesis testing. The use of statistics is not particularly easy. The alternatives to Bayesian methods are subtle and far from obvious; they are also associated with some fairly formidable mathematical machinery. We will avoid this machinery, presenting only results and showing the use of statistics, while trying to make the conceptual foundations clear.
Statistics
Statistics are designed to summarize, reduce or describe data. The formal definition of a statistic is that it is some function of the data alone. For a set of data X₁, X₂, …, some examples of statistics might be the average, the maximum value or the average of the cosines. Statistics are therefore combinations of finite amounts of data. In the following discussion, and indeed throughout, we try to distinguish particular fixed values of the data, and functions of the data alone, by upper case (except for Greek letters). Possible values, being variables, we will denote in the usual algebraic spirit by lower case.
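In Python, say, each of these example statistics is literally a function of the data alone (a minimal illustration of ours):

    import numpy as np

    X = np.array([0.2, 1.5, -0.7, 2.1, 0.9])  # a fixed set of data

    average = np.mean(X)            # the average
    maximum = np.max(X)             # the maximum value
    cos_mean = np.mean(np.cos(X))   # the average of the cosines
    print(average, maximum, cos_mean)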
(interchange between Peter Scheuer and his then student, CRJ)
(The) premise that statistical significance is the only reliable indication of causation is flawed.
(US Supreme Court, Matrixx Initiatives, Inc. vs. Siracusano, 22 March 2011)
It is often the case that we need to do sample comparison: we have someone else's data to compare with ours, or someone else's model to compare with our data, or even our data to compare with our model. We need to make the comparison and to decide something. We are doing hypothesis testing – are our data consistent with a model, or with somebody else's data? In searching for correlations, as we were in Chapter 4, we were hypothesis testing; in the model-fitting of Chapter 6 we are involved in data modelling and parameter estimation.
A frequentist point of view might be to consider the entire science of statistical inference as hypothesis testing followed by parameter estimation. However, if experiments were properly designed, the Bayesian approach would be right: it answers the sample-comparison questions we wished to pose in the first place, namely, what is the probability, given the data, that a particular model is right? Or: what is the probability, given two sets of data, that they agree? The two-stage process should be unnecessary at best.
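For concreteness, here is what the classical route to sample comparison can look like: a minimal sketch of ours using the two-sample Kolmogorov–Smirnov test (the simulated data sets are illustrative assumptions). Note that it returns a significance level of the prob(D | H) kind, not the prob(H | D) a Bayesian analysis would give:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    ours = rng.normal(0.0, 1.0, 50)    # our measurements
    theirs = rng.normal(0.3, 1.0, 50)  # somebody else's measurements

    # Two-sample KS test: could both data sets come from the same distribution?
    result = stats.ks_2samp(ours, theirs)
    print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")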
It is difficult to understand why statisticians commonly limit their inquiries to Averages, and do not revel in more comprehensive views.
(Francis Galton, 1889)
When we make a set of measurements, it is instinctive to try to correlate the observations with other results. One or more motives may be involved in this instinct. For instance, we might wish (a) to check that other observers' measurements are reasonable, (b) to check that our own measurements are reasonable, (c) to test a hypothesis, perhaps one for which the observations were explicitly made, or (d) in the absence of any hypothesis, any knowledge or anything better to do with the data, to find out whether they are correlated with other results, in the hope of discovering some new and universal truth.
The fishing trip
Take the last point first. Suppose that we have plotted something against something, on a fishing expedition of this type. There are grave dangers on this expedition, and we must ask ourselves the following questions.
Does the eye see much correlation? If not, calculation of a formal correlation statistic – sketched after these questions – is probably a waste of time.
Could the apparent correlation be due to selection effects? Consider, for instance, the beautiful correlation in Figure 4.1, in which Sandage (1972) plotted radio luminosities of sources in the 3CR catalogue as a function of distance modulus. […]
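When the eye does see something, the formal step might look like the following sketch of ours (the simulated data are an illustrative assumption): the Pearson r statistic, with a Monte Carlo (permutation) estimate of its significance.

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(size=40)
    y = 0.5 * x + rng.normal(size=40)  # correlated by construction

    def pearson_r(a, b):
        return np.corrcoef(a, b)[0, 1]

    r_obs = pearson_r(x, y)
    # Scramble y to destroy any real correlation: how often does chance beat r_obs?
    r_null = np.array([pearson_r(x, rng.permutation(y)) for _ in range(5000)])
    p = np.mean(np.abs(r_null) >= abs(r_obs))
    print(f"r = {r_obs:.3f}, permutation significance level = {p:.4f}")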