Einstein concentrated on changing the traditional notions of absolute space and absolute time, replacing them by one absolute: the speed of light.
Anthony J. Adams, Wesleyan '91, essay in Physics 104, fall 1988.
Time dilation
The relativity of simultaneity tells us that observers in different frames of reference may disagree on the time interval between two events. One observer may find the time interval to be zero and hence the events to be simultaneous; another may not.
Let us try to make the comparison of time intervals more quantitative. We suppose that Alice is baking brownies. She puts the tray in the oven and sets the timer for 30 minutes. This is the first event. When the timer rings, Alice takes out the brownies; that is the second event. For Alice, the time interval between the two events is 30 minutes, and the events occur at the same location: the oven, stationary in her frame.
As in chapter 9, Alice moves with speed v relative to Bob, and so do the oven and brownies. What time interval between the two events does Bob perceive?
To answer the question, we need somehow to introduce light, for all that we are sure of is that both Alice and Bob always measure the speed of light to be c, 3 × 10⁸ meters/second. So let us imagine a mirror on the ceiling over the oven, as sketched in figure 10.1.
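Anticipating the standard result of this light-clock argument, the two time intervals differ by the Lorentz factor γ = 1/√(1 − v²/c²). As a numerical sketch (the speed 0.6c is an illustrative choice, not taken from the text):

```python
import math

def gamma(v, c=3.0e8):
    """Lorentz factor 1/sqrt(1 - v^2/c^2) for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Alice's 30-minute bake, as perceived by Bob if Alice moves at 0.6c
t_alice = 30.0                       # minutes, in Alice's frame
t_bob = gamma(0.6 * 3.0e8) * t_alice # → 37.5 minutes in Bob's frame
```

At v = 0.6c the factor is exactly 1.25, so Bob finds the brownies take 37.5 minutes, longer than the 30 minutes Alice measures.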
In making some experiments on the fringes of colors accompanying shadows, I have found so simple and so demonstrative a proof of the general law of the interference of two portions of light, which I have already endeavored to establish, that I think it right to lay before the Royal Society a short statement of the facts which appear to me so decisive.
Thomas Young, Bakerian Lecture, November 24, 1803
Interference defined
In chapter 3, we framed the question, can we account for the behavior of light by supposing that light is wave motion of some kind?
So far, the logic of our response has been of this form: we asked experimentally, do waves have property X that we know light possesses? For “property X,” we took
the existence of reflection
the equality of the angles of reflection and incidence
the existence of refraction, and
Snell's law, in the sense that, in refraction, the ratio of sines is independent of the angle of incidence.
Each time, we could answer, yes, waves do have this property X that we know light possesses.
Beyond these tests, we found that the wave theory's prediction for the index of refraction – namely, a specific ratio of wave speeds – was resoundingly confirmed by Michelson's measurements.
Things are going well for a wave theory of light. The time has come to shift the perspective and to change the logic of the testing.
Albert Einstein, conversation with R. S. Shankland, 4 February 1950
The big picture
Space and time are different from what you thought they were like. This simple statement is what the rest of the book is about. Right now, of course, the statement is enigmatic: how are space and time different? That will emerge by stages. The point here is to alert you to the big picture: when we follow Albert Einstein in developing the special theory of relativity, we are developing a theory of space and time. All of us have some commonsense notions about space and time, but – as we will discover – those ideas are not always valid.
This is sufficient prelude; let us go on.
The two principles
You sip from your coffee and then put the cup down. In the seat on your right, a woman pecks busily at the keyboard of a portable computer held in her lap. From what little you can see of the screen, you judge she is composing a sales report. On your left, music fills the ears of a 14-year-old boy. He leans back, his eyes closed and his cassette player languidly held in one hand. You pick up your coffee and have another sip.
This short book is intended to be a practical guide, providing sets of rules that will help you to analyse the data you collect in your regular experimental sessions in the laboratory. Even more important, explanations and examples are provided to help you understand the ideas behind the formulae. Emphasis is also placed on thinking about the answers that you obtain, and on helping you get a feeling for whether they are sensible.
In contrast, this does not set out to be a text on statistics, and certainly not to be a complete course on the subject. Also, no attempt is made to provide rigorous mathematical proofs of many of the required formulae. These are important, and if required can be consulted in any standard textbook on the subject.
I believe that it will be necessary to read this material more than once. You really need to have understood the ideas involved before you do your first practical; but on the other hand, it would be much easier to absorb the material after you have actually done a couple of experiments and grappled with problems of trying to do the analysis yourself. Thus it is a good idea to read the book quickly, so that you at least discover what topics are covered and where to find them again when you need them. At this stage, you need not worry if not everything is entirely comprehensible.
When performing experiments at school, we usually considered that the job was over once we obtained a numerical value for the quantity we were trying to measure. At university, and even more so in everyday situations in the laboratory, we are concerned not only with the answer but also with its accuracy. This accuracy is expressed by quoting an experimental error on the quantity of interest. Thus a determination of the acceleration due to gravity in our laboratory might yield an answer
g = (9.70 ± 0.15) m/s².
In Section 1.4, we will say more specifically what we mean by the error of ±0.15. At this stage it is sufficient to state that the more accurate the experiment the smaller the error; and that the numerical value of the error gives an indication of how far from the true answer this particular experiment may be.
The reason we are so insistent on every measurement including an error estimate is as follows. Scientists are rarely interested in measurement for its own sake, but more often will use it to test a theory, to compare with other experiments measuring the same quantity, to use this parameter to help predict the result of a different experiment, and so on. Then the numerical value of the error becomes crucial in the interpretation of the result.
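As an illustration of quoting a result together with its error, the following sketch computes the mean of some repeated measurements and its standard error, one common convention for the quoted error (the data values are invented for illustration, not from the text):

```python
import math

# Hypothetical repeated measurements of g in the laboratory (m/s^2)
measurements = [9.62, 9.85, 9.71, 9.55, 9.78, 9.69]

n = len(measurements)
mean = sum(measurements) / n
# Sample standard deviation, with n - 1 in the denominator
s = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
# Standard error of the mean: the error quoted on the final result
error = s / math.sqrt(n)

print(f"g = ({mean:.2f} ± {error:.2f}) m/s^2")  # → g = (9.70 ± 0.04) m/s^2
```

More measurements shrink the standard error (as 1/√n), which matches the statement above that a more accurate experiment has a smaller error.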
In this chapter we are going to discuss the problem of obtaining the best description of our data in terms of some theory, which involves parameters whose values are initially unknown. Thus we could have data on the number of road accidents per year over the last decade; or we could have measured the length of a piece of metal at different temperatures. In either of these cases, we may be interested to see (i) whether the data lie on a straight line, and if so (ii) what are its gradient and intercept (see Fig. 2.1).
These two questions correspond to the statistics subjects known as Hypothesis Testing and Parameter Fitting. Logically, hypothesis testing precedes parameter fitting, since if our hypothesis is incorrect, then there is no point in determining the values of the free parameters (i.e. the gradient and intercept) contained within the hypothesis. In fact, we will deal with parameter fitting first, since it is easier to understand. In practice, one often does parameter fitting first anyway; it may be impossible to perform a sensible test of the hypothesis before its free parameters have been set at their optimum values.
Various methods exist for parameter determination. The one we discuss here is known as least squares. In order to fix our ideas, we shall assume that we have been presented with data of the form shown in Fig. 2.1, and that it corresponds to some measurements y_i^obs of the length of our bar at various known temperatures x_i.
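As a sketch of the standard least-squares formulae for a straight line y = a + b x (the length-temperature data below are made up for illustration, not taken from Fig. 2.1):

```python
# Hypothetical data: bar length (mm) at several known temperatures (deg C)
xs = [10.0, 20.0, 30.0, 40.0, 50.0]
ys = [1000.2, 1000.9, 1001.8, 1002.4, 1003.3]

n = len(xs)
sx = sum(xs)
sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Least-squares gradient b and intercept a for y = a + b*x
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

print(f"gradient = {b:.4f} mm per deg C, intercept = {a:.2f} mm")
```

These closed-form expressions assume equal errors on all the y_i^obs; the weighted version, with each point divided by its error squared, follows the same pattern.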
In Section 1.5 we discussed the Gaussian distribution. In this and the next appendix, we describe the binomial and Poisson distributions.
Let us imagine that we throw an unbiassed die 12 times. Since the probability of obtaining a 6 on a single throw is 1/6, we would expect on average to end up with two 6's. However, we would not be surprised if in fact we obtained a 6 once or three times (or even not at all, or four times). In general, we could calculate how likely we are to end up with any number of 6's, from none to the very improbable 12.
These possibilities are given by the binomial distribution. It applies to any situation where we have a fixed number N of independent trials, in each of which there are only two possible outcomes: success, which occurs with probability p, or failure, for which the probability is 1 − p. Thus, in the example of the previous paragraph, the independent trials were the separate throws of the die, of which there were N = 12; success consisted of throwing a 6, for which the probability is p = 1/6; while failure was obtaining any other number, with probability 5/6.
The requirement that the trials are independent means that the outcome of any given trial is independent of the outcome of any of the others. This is true for a die because what happens on the next throw is completely unrelated to what came up on any previous one.
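The die example can be sketched directly from the binomial formula P(k) = C(N, k) p^k (1 − p)^(N − k):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Twelve throws of a fair die; "success" = throwing a 6, so p = 1/6
n, p = 12, 1 / 6
probs = [binomial_pmf(k, n, p) for k in range(n + 1)]

# The mean number of 6's is n*p = 2, but nearby counts remain likely
mean = sum(k * pk for k, pk in enumerate(probs))
```

The probabilities sum to 1, the mean comes out as n p = 2, and two 6's is indeed the most probable count, while one or three 6's are each only slightly less likely, just as argued above.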
In the chapters which follow, we assume that you are already familiar with the basic mathematics of scalar and vector fields in three dimensions, the properties of the ∇ operator, the integral theorems which hold for these fields, and so forth. In this prologue, we remind you of some basic definitions, and outline (without proof) those mathematical theorems of which we shall make extensive use. We also establish our notation and sign conventions.
We envisage space filled with electromagnetic fields, and at any instant we describe these fields mathematically using functions which may be scalar functions of position (like the potential Φ(r)) or vector functions of position (like the electric field E(r)). We shall assume that the functions which appear in the theory are continuous, and have derivatives existing as required, except perhaps at special points or on special surfaces. Singularities in the mathematics will usually correspond to singularities in the physics. For example, the electrostatic potential of a point charge Q at the origin is Q/4πε₀r, and this function satisfies our conditions except at r = 0, which is the position of the point charge.
We sometimes focus on these fields in limited regions of space, say inside a volume V enclosed by a surface S, or over a surface S(Γ) bounded by a curve Γ.
Volume integrals
Volume integrals will often arise naturally in the theory, for example when we calculate the total charge or total energy in some volume V of space.
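As a sketch of such a calculation, the following evaluates the total charge Q = ∫_V ρ dV numerically for a hypothetical uniform charge density inside a sphere, using a simple midpoint rule on a Cartesian grid (the density, radius, and grid size are illustrative choices, not from the text):

```python
import math

# Hypothetical uniform charge density rho0 inside a sphere of radius R.
# Grid points whose centres lie outside the sphere contribute nothing.
rho0, R, N = 2.0, 1.0, 60
h = 2 * R / N  # grid spacing

Q = 0.0
for i in range(N):
    x = -R + (i + 0.5) * h
    for j in range(N):
        y = -R + (j + 0.5) * h
        for k in range(N):
            z = -R + (k + 0.5) * h
            if x * x + y * y + z * z <= R * R:
                Q += rho0 * h ** 3  # rho * dV for this cell

# For a uniform sphere the exact answer is rho0 * (4/3) * pi * R^3
exact = rho0 * (4 / 3) * math.pi * R ** 3
```

The numerical sum approaches the exact value (4/3)πR³ρ₀ as the grid is refined, which is the content of the volume-integral definition.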