The generation of random numbers is too important to be left to chance.
Title of an article by Coveyou (1969)
In the preceding chapters, we have discussed ways to estimate various statistics that summarise data and/or hypotheses, such as sample means and variances, parameters of models, their distributions, confidence intervals and p-values from goodness-of-fit tests. We can calibrate these if we know the sampling distribution of the relevant statistic. That is, we can place the observed value in the distribution expected for a given hypothesis and assess whether or not it lies in the expected range. For example, to compute a p-value from a goodness-of-fit test we need to know the distribution of the test statistic, and to find the variance (or bias) of an estimator we need to know its sampling distribution. These follow from the distribution of the data and the mathematical relationship between the data and the statistic. Deriving them analytically is often difficult and sometimes impossible. But the Monte Carlo method makes many of these problems tractable, and it provides a powerful tool for analysing data and for understanding the properties of analysis procedures and experiments.
The core of the Monte Carlo method is to generate random data and use them to compute estimates of derived quantities. We can use the Monte Carlo method to evaluate integrals, to explore the distributions of estimators and to estimate other properties of sampling distributions.
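As a minimal illustration (not code from this chapter), the Python sketch below uses NumPy to carry out both tasks just mentioned: it estimates a one-dimensional integral by averaging the integrand over uniform random draws, and it maps out the sampling distribution of an estimator, here the sample mean, by simulating many data sets from an assumed parent distribution. The particular integrand, parent distribution, sample sizes and random seed are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the example is repeatable

# 1. Monte Carlo integration: estimate the integral of exp(-x**2) over [0, 1]
#    by averaging the integrand over uniform random draws (interval width is 1).
x = rng.uniform(0.0, 1.0, size=100_000)
print(f"integral estimate ~ {np.mean(np.exp(-x**2)):.4f}")   # analytic value ~ 0.7468

# 2. Sampling distribution of an estimator: simulate many data sets of size n
#    from an assumed parent distribution and examine the spread of the sample mean.
n, n_repeats = 20, 10_000
samples = rng.exponential(scale=2.0, size=(n_repeats, n))    # true mean = 2.0
means = samples.mean(axis=1)
print(f"mean of sample means  ~ {means.mean():.3f}")          # close to 2.0
print(f"std. dev. of the mean ~ {means.std():.3f}")           # close to 2.0 / sqrt(n) ~ 0.447
```

The second part is the general pattern: simulate data under a known model many times, compute the statistic of interest each time, and read the bias, variance or confidence interval directly off the resulting collection of values.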
An important difference between the classical and quantum perspectives is their different criteria of distinguishability. Identical particles are classically distinguishable when separated in phase space. On the other hand, identical particles are always quantum mechanically indistinguishable for the purpose of counting distinct microstates. But these concepts and these distinctions do not tell the whole story of how we count the microstates and determine the multiplicity of a quantized system.
There are actually two different ways of counting the accessible microstates of a quantized system of identical, and so indistinguishable, particles. While these two ways were discovered in the years 1924–1926 independently of Erwin Schrödinger’s (1887–1961) invention of wave mechanics in 1926, their most convincing explanation is in terms of particle wave functions. The following two paragraphs may be helpful to those familiar with the basic features of wave mechanics.
A system of identical particles has, as one might expect, a probability density that is symmetric under particle exchange, that is, the probability density is invariant under the exchange of two identical particles. But here wave mechanics surprises the classical physicist. A system wave function may either keep the same sign or change signs under particle exchange. In particular, a system wave function may be either symmetric or antisymmetric under particle exchange.
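The two counting rules alluded to above are the ones now named after Bose and Einstein (symmetric wave functions, any number of particles per single-particle state) and after Fermi and Dirac (antisymmetric wave functions, at most one particle per state). As a rough illustration, and not an excerpt from this text, the Python sketch below enumerates the distinct microstates of a toy system of indistinguishable particles under each rule; the function name and the toy numbers are chosen only for the example.

```python
from itertools import combinations, combinations_with_replacement

def microstates(n_particles, n_states, antisymmetric):
    """Enumerate the distinct occupation patterns of n indistinguishable
    particles distributed over n_states single-particle states.

    antisymmetric=False : symmetric wave function, any occupancy (Bose-Einstein counting)
    antisymmetric=True  : antisymmetric wave function, occupancy 0 or 1 (Fermi-Dirac counting)
    """
    chooser = combinations if antisymmetric else combinations_with_replacement
    # A microstate is labelled only by which single-particle states are occupied
    # (and how many times), never by which particle sits where.
    return list(chooser(range(n_states), n_particles))

# Toy system: 2 identical particles shared among 3 single-particle states.
bose = microstates(2, 3, antisymmetric=False)
fermi = microstates(2, 3, antisymmetric=True)
print(len(bose), bose)    # 6 microstates: (0,0) (0,1) (0,2) (1,1) (1,2) (2,2)
print(len(fermi), fermi)  # 3 microstates: (0,1) (0,2) (1,2)
```

The two multiplicities differ because the antisymmetric rule excludes double occupancy; classical (distinguishable-particle) counting would give yet another answer, nine, for the same toy system.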
The existence of entropy follows inevitably from the first and second laws of thermodynamics. However, our purpose is not to reproduce this deduction, but rather to focus on the concept of entropy, its meaning and its applications. Entropy is a central concept for many reasons, but its chief function in thermodynamics is to quantify the irreversibility of a thermodynamic process. Each term in this phrase deserves elaboration. Here we define thermodynamics and process; in subsequent sections we take up irreversibility. We will also learn how entropy or, more precisely, differences in entropy tell us which processes of an isolated system are possible and which are not.
Thermodynamics is the science of macroscopic objects composed of many parts. The very size and complexity of thermodynamic systems allow us to describe them simply in terms of a mere handful of equilibrium or thermodynamic variables, for instance, pressure, volume, temperature, mass or mole number, internal energy, and, of course, entropy. Some of these variables are related to others via equations of state in ways that differently characterize different kinds of systems, whether gas, liquid, solid, or composed of magnetized parts.
The thermodynamic view of a physical system is the “black box” view. We monitor the input and output of a black box and measure its superficial characteristics with the human-sized instruments available to us: pressure gauges, thermometers, and meter sticks. The laws of thermodynamics govern the relations among these measurements. For instance, the zeroth law of thermodynamics requires that two black boxes each in thermal equilibrium with a third are in thermal equilibrium with each other, the first law that the energy of an isolated black box can never change, and the second law that the entropy of an isolated black box can never decrease. According to these laws and these measurements, each black box has an entropy function S(E, V, …) whose dependence on a small set of variables encapsulates all that can be known of the black box system.
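Stated compactly in symbols (a restatement added here for reference, not a quotation from the text), with $T$ denoting temperature, $E$ the internal energy and $S$ the entropy of an isolated system:

$$T_A = T_C \ \text{and}\ T_B = T_C \;\Longrightarrow\; T_A = T_B, \qquad \Delta E = 0, \qquad \Delta S \ge 0 .$$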
But we are not satisfied with black boxes – especially when they work well! We want to look inside a black box and see what makes it work. Yet when we first look into the black box of a thermodynamic system we see even more thermodynamic systems. A building, for instance, is a thermodynamic system. But so also is each room in the building, each cabinet in each room, and each drawer in each cabinet. But actual thermodynamic systems cannot be subdivided indefinitely. At some point the concepts and methods of thermodynamics cease to apply. Eventually the subdivisions of a thermodynamic system cease to be smaller thermodynamic systems and instead become groups of atoms and molecules.
One of the most important contributions that the science of astronomy has made to human progress is an understanding of the distance and size of celestial objects. After millennia of using our eyes and about four centuries of using telescopes, we now have a very good idea of where we are in the Universe and how our planet fits in among the other bodies in our Solar System, the Milky Way galaxy, and the Universe. Several of the techniques astronomers use to estimate distance and size are based on angles, and the purpose of this chapter is to make sure you understand the mathematical foundation of these techniques. Specifically, the concepts of parallax and angular size are discussed in the first two sections of this chapter, and the third section describes the angular resolution of astronomical instruments.
Parallax
Parallax is a perspective phenomenon that makes a nearby object appear to shift position with respect to more distant objects when the observation point is changed. This section begins with an explanation of the parallax concept and proportionality relationships and concludes with examples of parallax calculations relevant to astronomy.
Parallax concept
You can easily demonstrate the effect of parallax by holding your index finger upright at arm's length and then observing that finger and the background behind it with your left eye open and your right eye closed.
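To see how such a parallax calculation looks in practice, here is a brief Python sketch based on the standard relation that a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. It is an illustrative example added here, not a calculation taken from this chapter; the function names and the Proxima Centauri parallax of roughly 0.768 arcseconds are supplied for the example.

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from an annual parallax angle in arcseconds.

    The parsec is defined so that a baseline of 1 AU subtends 1 arcsecond
    at a distance of 1 parsec, so in the small-angle approximation
    d [pc] = 1 / p [arcsec].
    """
    return 1.0 / parallax_arcsec

def parsecs_to_lightyears(d_pc):
    """Convert parsecs to light-years (1 pc is about 3.26 ly)."""
    return d_pc * 3.2616

# Proxima Centauri, the nearest star to the Sun, shows a parallax of ~0.768 arcsec.
d = parallax_distance_pc(0.768)
print(f"{d:.2f} pc  ~  {parsecs_to_lightyears(d):.2f} light-years")   # ~1.30 pc, ~4.25 ly
```

The reciprocal relationship is the proportionality at the heart of this section: halve the parallax angle and the inferred distance doubles.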
The mathematician John von Neumann once urged the information theorist Claude Shannon to assign the name entropy to the measure of uncertainty Shannon had been investigating. After all, a structurally identical measure with the name entropy had long been an element of statistical mechanics. Furthermore, “No one really knows what entropy really is, so in a debate you will always have the advantage.” Most of us love clever one-liners and allow each other to bend the truth in making them. But von Neumann was wrong about entropy. Many people have understood the concept of entropy since it was first discovered 150 years ago.
Actually, scientists have no choice but to understand entropy because the concept describes an important aspect of reality. We know how to calculate and how to measure the entropy of a physical system. We know how to use entropy to solve problems and to place limits on processes. We understand the role of entropy in thermodynamics and in statistical mechanics. We also understand the parallelism between the entropy of physics and chemistry and the entropy of information theory.
But von Neumann’s witticism contains a kernel of truth: entropy is difficult, if not impossible, to visualize. Consider that we are able to invest the concept of the energy of a rod of iron with meaning by imagining the rod broken into its smallest parts, atoms of iron, and comparing the energy of an iron atom to that of a macroscopic, massive object attached to a network of springs that model the interactions of the atom with its nearest neighbors. The object’s energy is then the sum of its kinetic and potential energies – types of energy that can be studied in elementary physics laboratories. Finally, the energy of the entire system is the sum of the energy of its parts.
Information technologies are as old as the first recorded messages, but not until the twentieth century did engineers and scientists begin to quantify something they called information. Yet the word information poorly describes the concept the first information theorists quantified. Of course specialists have every right to select words in common use and give them new meaning. Isaac Newton, for instance, defined force and work in ways useful in his theory of dynamics. But a well-chosen name is one whose special, technical meaning does not clash with its range of common meanings. Curiously, the information of information theory violates this commonsense rule.
Compare the opening phrase of Dickens’s A Tale of Two Cities, “It was the best of times, it was the worst of times …”, with the sequence of 50 letters, spaces, and a comma, “eon jhktsiwnsho d ri nwfnn ti losabt,tob euffr te …”, taken from the tenth position on the first 50 pages of the same book. To me the first is richly associative; the second means nothing. The first has meaning and form; the second does not. Yet these two phrases could be said to carry the same information content because they have the same source. Each is a sequence of 50 characters taken from English text.
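One way to see how a frequency-based measure treats these two phrases alike is to compute their Shannon entropies directly. The Python sketch below is an illustration added here, not part of the original text; it scores each string by the empirical frequencies of its characters and ignores meaning entirely.

```python
import math
from collections import Counter

def shannon_entropy_bits(text):
    """Shannon entropy per character, in bits, of the empirical
    character-frequency distribution of `text`."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "It was the best of times, it was the worst of times"
meaningless = "eon jhktsiwnsho d ri nwfnn ti losabt,tob euffr te"

# Both scores land at a few bits per character, the range typical of English
# character statistics; the measure is blind to meaning and form.
print(f"{shannon_entropy_bits(meaningful):.2f} bits/char")
print(f"{shannon_entropy_bits(meaningless):.2f} bits/char")
```

The calculation makes the point of the paragraph concrete: information, in the technical sense, is a property of the statistics of a source, not of what a particular message means to a reader.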