Sir James Jeans' well-known treatise covers the topics in electromagnetic theory required by every non-specialist physicist. It provides the relevant mathematical analysis and is therefore useful to those whose mathematical knowledge is limited, as well as to the more advanced physicists, engineers and applied mathematicians. A large number of examples are given.
Arguably the most influential nineteenth-century scientist for twentieth-century physics, James Clerk Maxwell (1831–1879) demonstrated that electricity, magnetism and light are all manifestations of the same phenomenon: the electromagnetic field. A fellow of Trinity College Cambridge, Maxwell became, in 1871, the first Cavendish Professor of Physics at Cambridge. His famous equations - a set of four partial differential equations that relate the electric and magnetic fields to their sources, charge density and current density - first appeared in fully developed form in his 1873 Treatise on Electricity and Magnetism. This two-volume textbook brought together all the experimental and theoretical advances in the field of electricity and magnetism known at the time, and provided a methodical and graduated introduction to electromagnetic theory. Volume 1 covers the first elements of Maxwell's electromagnetic theory: electrostatics and electrokinematics, including detailed analyses of electrolysis, conduction in three dimensions, and conduction through heterogeneous media. Volume 2 covers magnetism and electromagnetism, including the electromagnetic theory of light, the theory of magnetic action on light, and the electric theory of magnetism.
Data measured as angles or two-dimensional orientations are found almost everywhere in science. They commonly arise in biology, geography, geophysics, medicine, meteorology and oceanography, and many other areas. Examples of such data include departure directions of birds from release points, fracture plane orientations, the directional movement of animals after stimulation, wind and ocean current directions, and biorhythms. Statistical methods for handling such data have developed rapidly in the last twenty years, particularly data display, correlation, regression and the analysis of temporally or spatially structured data. Further, some of the exciting modern developments in general statistical methodology, particularly nonparametric smoothing methods and bootstrap-based methods, have contributed significantly to the solution of relatively intractable data analysis problems. This book provides a unified and up-to-date account of techniques for handling circular data.
Beginning graduate students in mathematics and other quantitative subjects are expected to have a daunting breadth of mathematical knowledge. But few have such a background. This book will help students to see the broad outline of mathematics and to fill in the gaps in their knowledge. The author explains the basic points and a few key results of all the most important undergraduate topics in mathematics, emphasizing the intuitions behind the subject. The topics include linear algebra, vector calculus, differential geometry, real analysis, point-set topology, probability, complex analysis, abstract algebra, and more. An annotated bibliography then offers a guide to further reading and to more rigorous foundations. This book will be an essential resource for advanced undergraduate and beginning graduate students in mathematics, the physical sciences, engineering, computer science, statistics, and economics who need to quickly learn some serious mathematics.
This book is written as a guide for the presentation of experimental data including a consistent treatment of experimental errors and inaccuracies. It is meant for experimentalists in physics, astronomy, chemistry, life sciences and engineering. However, it can be equally useful for theoreticians who produce simulation data: they are often confronted with statistical data analysis for which the same methods apply as for the analysis of experimental data. The emphasis in this book is on the determination of best estimates for the values and inaccuracies of parameters in a theory, given experimental data. This is the problem area encountered by most physical scientists and engineers. The problem area of experimental design and hypothesis testing – excellently covered by many textbooks – is only touched on but not treated in this book.
The text can be used in teaching error analysis, either in conjunction with experimental classes or in separate courses on data analysis and presentation. It is written in such a way – by including examples and exercises – that most students will be able to acquire the necessary knowledge through self-study as well. The book is also meant to be kept for later reference in practical applications. For this purpose a set of “data sheets” and a number of useful computer programs are included.
This book consists of several parts. Part I contains the main body of the text.
This chapter is about the presentation of experimental results. When the value of a physical quantity is reported, the uncertainty in the value must be properly reported too, and it must be clear to the reader what kind of uncertainty is meant and how it has been estimated. Given the uncertainty, the value must be reported with the proper number of digits. But the quantity also has a unit that must be reported according to international standards. Thus this chapter is about reporting your results: this is the last thing you do, but we'll make it the first chapter before more serious matters require attention.
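As an illustration (a minimal sketch, not taken from the book; the function name report and the example numbers are made up), the convention of quoting the uncertainty to two significant digits and rounding the value to the same decimal place can be automated in a few lines of Python:

import math

def report(value, uncertainty):
    # decimal position of the second significant digit of the uncertainty
    ndec = 1 - int(math.floor(math.log10(uncertainty)))
    value = round(value, ndec)
    uncertainty = round(uncertainty, ndec)
    digits = max(ndec, 0)   # negative ndec means rounding left of the decimal point
    return "%.*f +/- %.*f" % (digits, value, digits, uncertainty)

print(report(9.8137, 0.0237))   # prints: 9.814 +/- 0.024

Note that this is only a sketch: conventions differ on whether one or two digits of the uncertainty should be quoted.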
How to report a series of measurements
In most cases you derive a result on the basis of a series of (similar) measurements. In general you do not report all individual outcomes of the measurements, but you report the best estimates of the quantity you wish to “measure,” based on the experimental data and on the model you use to derive the required quantity from the data. In fact, you use a data reduction method. In a publication you are required to be explicit about the method used to derive the end result from the data. However, in certain cases you may also choose to report details of the data themselves (preferably in an appendix or deposited as “additional material”); this enables the reader to check your results or apply alternative data reduction methods.
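A minimal sketch of the most common such data reduction, using NumPy and hypothetical measurement values: report the mean of a series of similar measurements together with the standard error of the mean.

import numpy as np

data = np.array([10.2, 9.9, 10.4, 10.1, 9.8])   # hypothetical measurements
n = len(data)
mean = data.mean()
s = data.std(ddof=1)          # sample standard deviation (n - 1 in the denominator)
sem = s / np.sqrt(n)          # standard error of the mean

print("best estimate: %.2f +/- %.2f (n = %d)" % (mean, sem, n))
# prints: best estimate: 10.08 +/- 0.11 (n = 5)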
This appendix contains programs, functions or code fragments written in Python. Each code is referred to in the text; the page where the reference is made is given in the header.
First some general instructions are given on how to work with these codes. Python is a general-purpose interpreted language, with interpreters available for most platforms, including Windows. Python is open source and freely available. Most applications in this book use the powerful numerical array extension NumPy, which also provides basic tools for linear algebra, Fourier transforms and random numbers. Although Python version 3 is available, at the time of writing NumPy requires Python version 2, the latest being 2.6. In addition, applications may require the scientific tools library SciPy, which relies on NumPy. Importing SciPy automatically implies the import of NumPy.
Users are advised first to download Python 2.6, then the most recent stable version of NumPy, and then SciPy. Further instructions for Windows users can be found at www.hjcb.nl/python.
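The import lines used in such codes typically look as follows (a sketch; the exact imports depend on the application):

import numpy as np            # arrays, linear algebra, Fourier transforms, random numbers
from scipy import optimize    # scientific tools, e.g. least-squares routines; pulls in NumPy as well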
There are several options for producing plots, for example Gnuplot.py, based on the gnuplot package, or rpy, based on the statistical package “R”; there are many more. Since the user may find it difficult to make a choice, we have added yet another, very simple to use, plotting module called plotsvg.py. It can be downloaded from the author's website.
If you want to fit parameters in a functional relation to experimental data, the best method is a least-squares analysis: Find the parameters that minimize the sum of squared deviations of the measured values from the values predicted by your function. In this chapter both linear and nonlinear least-squares fits are considered. It is explained how you can test the validity or effectiveness of the fit and how you can determine the expected inaccuracies in the optimal values of the parameters.
Introduction
Consider the following task: you wish to devise a function y = f(x) such that this function fits as accurately as possible to a number of data points (xi, yi), i = 1, …, n. Usually you have – based on theoretical considerations – a set of functions to choose from, and those functions may still contain one or more as yet undetermined parameters. In order to select the “best” function and parameters you must use some kind of measure for the deviation of the data points from the function. If this deviation measure is a single value, you can then select the function and parameters that minimize this deviation.
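As a concrete sketch (the model, data and initial guess below are made up for illustration), a least-squares fit of a straight line y = ax + b can be carried out with scipy.optimize.leastsq, which minimizes the sum of squared residuals:

import numpy as np
from scipy.optimize import leastsq

def residuals(p, x, y):
    # deviations of the measured y values from the model a*x + b
    a, b = p
    return y - (a * x + b)

x = np.array([0., 1., 2., 3., 4.])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # hypothetical measurements

p0 = [1.0, 0.0]                           # initial guess for (a, b)
(a, b), flag = leastsq(residuals, p0, args=(x, y))
print("a = %.3f, b = %.3f" % (a, b))

For a linear model this reproduces the analytic solution of the normal equations; the appeal of leastsq is that the same call works unchanged for nonlinear model functions.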
This task is not at all straightforward, and you may be lured into pitfalls along the way. For example, your choice of functions and parameters may be so large and your set of data so small that you can choose a function that exactly fits your data. Such a “perfect” fit is meaningless: it reproduces the random scatter in the measurements rather than the underlying relation.
There are errors and there are uncertainties. The latter are unavoidable; ultimately it is the omnipresent thermal noise that causes the results of measurements to be imprecise. After considering how to identify and correct avoidable errors, this chapter concentrates on the propagation and combination of uncertainties in composite functional relations.
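A minimal sketch of such propagation, assuming independent errors and the standard first-order formula sigma_f^2 = (df/dx)^2 sigma_x^2 + (df/dy)^2 sigma_y^2, applied to the product f(x, y) = x*y with made-up numbers:

import math

x, sx = 4.0, 0.1        # hypothetical value and standard uncertainty
y, sy = 2.5, 0.05

f = x * y
# partial derivatives of f = x*y are df/dx = y and df/dy = x
sf = math.sqrt((y * sx) ** 2 + (x * sy) ** 2)
print("f = %.2f +/- %.2f" % (f, sf))   # prints: f = 10.00 +/- 0.32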
Classification of errors
There are several types of error in experimental outcomes:
(i) (accidental, stupid or intended) mistakes
(ii) systematic deviations
(iii) random errors or uncertainties
The first type we shall ignore. Accidental mistakes can be avoided by careful checking and double checking. Stupid mistakes are accidental errors that have been overlooked. Intended mistakes (e.g. selecting data that suit your purpose) purposely mislead the reader and belong to the category of scientific crimes.
Systematic errors
Systematic errors have a non-random character and distort the result of a measurement. They result from erroneous calibration or just from a lack of proper calibration of a measuring instrument, from careless measurements (uncorrected parallax, uncorrected zero-point deviations, time measurements uncorrected for reaction time, etc.), from impurities in materials, or from causes the experimenter is not aware of. The latter are certainly the most dangerous type of error; such errors are likely to show up when results are compared to those of other experimentalists at other laboratories. Therefore independent corroboration of experimental results is required before a critical experiment (e.g. one that overthrows an accepted theory) can be trusted.
It is impossible to measure physical quantities without errors. In most cases errors result from deviations and inaccuracies caused by the measuring apparatus or from inaccurate reading of the displaying device, but even with optimal instruments and digital displays there are always fluctuations in the measured data. Ultimately there is random thermal noise affecting all quantities that are determined at a finite temperature. Any experimentally determined quantity therefore has a certain inaccuracy. If the experiment were to be repeated, the result would be (slightly) different. One could say that the result of a particular experiment is no more than a random sample from a probability distribution. When reporting the result of an experiment, it is important to also report the extent of the uncertainty, e.g. in terms of the best estimate of some measure of the width of the probability distribution. When experimental data are processed and conclusions are drawn from them, knowledge of the experimental uncertainties is essential to assess the reliability of the conclusions.
Ideally, you should specify the probability distribution from which the reported experimental value is supposed to be a random sample. The problem is that you have only one experiment; even if your experiment consists of many observations of which you report the average, you have only one average to report.
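The point can be made concrete with a simulation (a sketch with made-up numbers): repeat the same “experiment” many times and observe that the average itself scatters with the predicted width sigma/sqrt(n), even though a real experimenter sees only one of these averages.

import numpy as np

np.random.seed(1)
true_value, noise = 10.0, 0.5
n_obs, n_repeats = 20, 1000

# each row is one simulated experiment of n_obs noisy observations
experiments = true_value + noise * np.random.randn(n_repeats, n_obs)
averages = experiments.mean(axis=1)

print("spread of the averages:   %.3f" % averages.std(ddof=1))
print("predicted sigma/sqrt(n):  %.3f" % (noise / np.sqrt(n_obs)))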