This book, by one of the pre-eminent philosophers of science writing today, offers the most comprehensive account available of causal asymmetries. Causation is asymmetrical in many different ways. Causes precede effects; explanations cite causes, not effects. Agents use causes to manipulate their effects; they don't use effects to manipulate their causes. Effects of a common cause are correlated; causes of a common effect are not. This book explains why a relationship that is asymmetrical in one of these regards is asymmetrical in the others. Hausman discovers surprising hidden connections between theories of causation and traces them all to an asymmetry of independence. This is a major book for philosophers of science that will also prove insightful to economists and statisticians.
Isaac Newton's Principia is considered one of the masterpieces in the history of science. The mathematical methods employed by Newton in the Principia stimulated much debate among his contemporaries, especially Leibniz, Huygens, Bernoulli and Euler, who discussed their merits and drawbacks. Among the questions they asked were: How should natural philosophy be mathematized? Is it legitimate to use uninterpreted symbols? Is it possible to depart from the established Archimedean or Galilean/Huygenian tradition of geometrizing nature? What is the value of elegance and conciseness? What is the relation between Newton's geometrical methods and the calculus? This book explains how Newton addressed these issues, taking into consideration the values that directed the research of Newton and his contemporaries. This book will be of interest to researchers and advanced students in departments of history of science, philosophy of science, physics, mathematics and astronomy.
By focusing on the conceptual issues faced by nineteenth-century physicists, this book clarifies the status of field theory, the ether, and thermodynamics in the work of the period, offering a remarkably synthetic account of a difficult and fragmentary period in scientific development.
Quantum mechanics is our most successful physical theory. However, it raises conceptual issues that have perplexed physicists and philosophers of science for decades. This 2004 book develops an approach based on the proposal that quantum theory is not a complete, final theory, but is in fact an emergent phenomenon arising from a deeper level of dynamics. The dynamics at this deeper level are taken to be an extension of classical dynamics to non-commuting matrix variables, with cyclic permutation inside a trace used as the basic calculational tool. With plausible assumptions, quantum theory is shown to emerge as the statistical thermodynamics of this underlying theory, with the canonical commutation/anticommutation relations derived from a generalized equipartition theorem. Brownian motion corrections to this thermodynamics are argued to lead to state vector reduction and to the probabilistic interpretation of quantum theory, making contact with phenomenological proposals for stochastic modifications to Schrödinger dynamics.
Radio astronomy has revolutionized the course of modern astronomy. Marking the fiftieth anniversary of Jansky's discovery in 1933 of extraterrestrial radio emission, Professor Sullivan asked many of the pioneers in the field to set down their versions of events and the people who made them. Each of the score of contributors seeks to give a good 'feeling' for the times to the great majority of readers who will not have experienced them. Over 150 illustrations, mostly historical photographs of men and machines, enliven the various recollections and reflections. The list of contributors includes many of the key personalities and covers all the major laboratories and countries involved in radio astronomy before 1960. In addition to the radio astronomers themselves, there are contributions from optical astronomers and theorists closely related to the field, as well as historians of twentieth century astronomy.
Editors Laurie Brown, Max Dresden, Lillian Hoddeson and Michael Riordan have brought together a distinguished group of elementary particle physicists and historians of science to explore the recent history of particle physics. Based on a conference held at Stanford University, this is the third volume of a series recounting the history of particle physics and offers the most up-to-date account of the rise of the Standard Model, which explains the microstructure of the world in terms of quarks and leptons and their interactions. Major contributors include Steven Weinberg, Murray Gell-Mann, Michael Redhead, Silvan Schweber, Leon Lederman and John Heilbron. The wide-ranging articles explore the detailed scientific experiments, the institutional settings in which they took place, and the ways in which the many details of the puzzle fit together to account for the Standard Model.
This book defines and develops a unifying principle of physics, that of 'extreme physical information'. The information in question is, perhaps surprisingly, not Shannon or Boltzmann entropy but, rather, Fisher information, a simple concept little known to physicists. Both statistical and physical properties of Fisher information are developed. This information is shown to be a physical measure of disorder, sharing with entropy the property of monotonic change with time. The information concept is applied 'phenomenally' to derive most known physics, from statistical mechanics and thermodynamics to quantum mechanics, the Einstein field equations, and quantum gravity. Many new physical relations and concepts are developed, including new definitions of disorder, time and temperature. The information principle is based upon a new theory of measurement, one which incorporates the observer into the phenomenon that he/she observes. The 'request' for data creates the law that, ultimately, gives rise to the data. The observer creates his or her local reality.
While experience tells us that time flows from the past to the present and into the future, a number of philosophical and physical objections exist to this commonsense view of dynamic time. In an attempt to make sense of this conundrum, philosophers and physicists are forced to confront fascinating questions, such as: Can effects precede causes? Can one travel in time? Can the expansion of the Universe or the process of measurement in quantum mechanics define a direction in time? In this book, researchers from both physics and philosophy attempt to answer these questions in an interesting yet rigorous way. This fascinating book will be of interest to physicists and philosophers of science and educated general readers interested in the direction of time.
The Post-Darwinian Controversies offers an original interpretation of Protestant responses to Darwin after 1870, viewing them in a transatlantic perspective and as a constitutive part of the history of post-Darwinian evolutionary thought. The impact of evolutionary theory on the religious consciousness of the nineteenth century has commonly been seen in terms of a 'conflict' or 'warfare' between science and theology. Dr. Moore's account begins by discussing the polemical origins and baneful effects of the 'military metaphor', and this leads to a revised view of the controversies based on an analysis of the underlying intellectual struggle to come to terms with Darwin. The middle section of the book distinguishes the 'Darwinism' of Darwin himself amid the main currents of post-Darwinian evolutionary thought, and is followed by chapters which examine the responses to Darwin of twenty-eight Christian controversialists, tracing the philosophical and theological lineage of their views. The paradox that emerges - that Darwin's theory was accepted in substance only by those whose theology was distinctly orthodox - is explained by the association of Darwinism with Darwin's own theology and of other evolutionary theories with liberal and romantic theological speculation.
This book gives a broad synthesis of conceptual developments of twentieth-century field theories, from the general theory of relativity to quantum field theory and gauge theory. The author gives a historico-critical exposition of the conceptual foundations of the theories, revealing a pattern in the evolution of these conceptions. Theoretical physicists and students of theoretical physics will find in this book an account of the foundational problems of their discipline that will help them understand the internal logic and dynamics of their subject. In addition, the book will provide professional historians and philosophers of science, and especially philosophers of physics, with a conceptual basis for further historical, cultural and sociological analysis of the theories discussed. The book also contains much material for philosophical (metaphysical, methodological and semantical) reflection. Finally, the scientifically qualified general reader will find in this book a deeper analysis of contemporary conceptions of the physical world than can be found in popular accounts of the subject.
Over recent decades, some approaches to non-equilibrium statistical mechanics that differ decidedly in their foundational and philosophical outlook have nevertheless converged in developing a common unified mathematical framework. I will call this framework ‘stochastic dynamics’, since the main characteristic feature of the approach is that it describes the state of a mechanical system as evolving under stochastic maps, rather than under a deterministic and time-reversal invariant Hamiltonian dynamics.
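In the simplest discrete setting (my notation, not the author's, assuming a finite set of states), such a stochastic map acts linearly on probability distributions over the states: a distribution p_t is carried to p_{t+1} = T p_t, where T is a stochastic matrix:

\[
  p_{t+1}(j) = \sum_i T_{ji}\, p_t(i), \qquad T_{ji} \ge 0, \qquad \sum_j T_{ji} = 1 \quad \text{for every } i,
\]

in contrast with a Hamiltonian flow, which carries each micro-state deterministically, and invertibly, to a unique successor.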
The motivations for adopting this stochastic type of dynamics come from at least three different viewpoints.
(1) ‘Coarse graining’ (cf. van Kampen, 1962; Penrose, 1970): In this view one assumes that on the microscopic level the system can be characterized as a (Hamiltonian) dynamical system with deterministic time-reversal invariant dynamics. However, on the macroscopic level, one is only interested in the evolution of macroscopic states, i.e. in a partition (or coarse graining) of the microscopic phase space into discrete cells. The usual idea is that the form and size of these cells are chosen in accordance with the limits of our observational capabilities.
On the macroscopic level, the evolution need no longer be portrayed as deterministic. When only the macro-state of a system at an instant is given, its later macro-state is in general not fixed, even if the underlying microscopic evolution is deterministic. Instead, one can provide transition probabilities that specify how probable the transition from any given initial macro-state to each later macro-state is.
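To make the idea concrete, here is a minimal sketch (my illustration, not drawn from the text): a deterministic micro-dynamics, the doubling map on [0, 1), is coarse-grained into two macro-cells, and the induced macro-level evolution can only be summarized by a matrix of transition probabilities. The map and the two-cell partition are toy choices standing in for Hamiltonian dynamics and an observational partition.

```python
import numpy as np

# Deterministic micro-dynamics: the doubling map x -> 2x mod 1.
# Coarse graining: two macro-cells, A = [0, 0.5) and B = [0.5, 1).
# Each micro-state has a unique successor, but a macro-state does not;
# the best macro-level description is a matrix of transition probabilities.

rng = np.random.default_rng(0)
x = rng.random(100_000)                  # ensemble of micro-states

counts = np.zeros((2, 2))
for _ in range(10):                      # a few steps (avoids float-precision decay)
    before = (x >= 0.5).astype(int)      # macro-cell occupied before the step
    x = (2.0 * x) % 1.0                  # one deterministic micro-step
    after = (x >= 0.5).astype(int)       # macro-cell occupied after the step
    np.add.at(counts, (before, after), 1)

P = counts / counts.sum(axis=1, keepdims=True)
print(P)    # roughly [[0.5, 0.5], [0.5, 0.5]]: a genuinely stochastic macro-map
```

Each row of P is a probability distribution over successor macro-states: at the macroscopic level the system behaves like a Markov chain, even though every micro-trajectory is fully determined.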
A cup of tea, left to itself, cools down while the surrounding air heats up until both have reached the same temperature, and a gas, confined to the left half of a room, spreads uniformly over the entire available space as soon as the confining wall is removed. Thermodynamics (TD) characterizes such processes in terms of an increase of thermodynamic entropy, which attains its maximum value at equilibrium, and the second law of thermodynamics posits that in an isolated system entropy cannot decrease. The aim of statistical mechanics (SM) is to explain the behaviour of these systems, in particular their conformity with the second law, in terms of the dynamical laws governing the individual molecules of which the systems are made up. In what follows these laws are assumed to be those of classical mechanics.
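As a toy version of the gas example (my sketch, under simplifying assumptions of my own: non-interacting particles in a one-dimensional box), one can watch the coarse-grained entropy of the macro-state ‘n particles in the left half’ climb to its equilibrium maximum as the gas spreads:

```python
import numpy as np
from math import lgamma

# Free expansion in a 1-D box [0, 1]: N non-interacting particles start in
# the left half with random velocities and bounce elastically off the walls.
# Macro-state: n_left, the number of particles in the left half.
# Coarse-grained entropy: S = ln(N choose n_left), maximal near n_left = N/2.

rng = np.random.default_rng(1)
N = 1000
x = 0.5 * rng.random(N)                 # all particles start in [0, 0.5)
v = rng.normal(size=N)                  # random ("thermal") velocities
dt = 0.01

def S(n_left, n):
    # log of the number of left/right assignments realizing the macro-state
    return lgamma(n + 1) - lgamma(n_left + 1) - lgamma(n - n_left + 1)

for step in range(301):
    if step % 60 == 0:
        n_left = int((x < 0.5).sum())
        print(f"t = {step * dt:4.2f}   n_left = {n_left:4d}   S = {S(n_left, N):7.2f}")
    x = (x + v * dt) % 2.0              # unfold the bouncing motion onto [0, 2) ...
    x = np.where(x > 1.0, 2.0 - x, x)   # ... then reflect back into [0, 1]
```

Here n_left drifts from 1000 towards about 500 while S rises (up to fluctuations) to its maximum, mirroring the entropic behaviour the second law describes.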
An influential suggestion of how this could be achieved was made by Ludwig Boltzmann (1877), and variants of it are currently regarded by many as the most promising option among the plethora of approaches to SM. Although these variants share a commitment to Boltzmann's basic ideas, they differ widely in how these ideas are implemented and used. These differences become most tangible when we look at how the different approaches deal with probabilities. There are two fundamentally different ways of introducing probabilities into SM, and even within these two groups there are important disparities as regards both technical and interpretational issues.
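For reference, the core of Boltzmann's proposal, stated here in its standard modern form rather than as he wrote it in 1877, identifies the entropy of a macro-state M with the logarithm of the phase-space volume of the micro-states that realize it:

\[
  S_B(M) = k_B \,\ln \lvert \Gamma_M \rvert ,
\]

where \(\Gamma_M\) is the set of micro-states compatible with M, \(\lvert \Gamma_M \rvert\) its Liouville volume, and \(k_B\) Boltzmann's constant. The approaches just mentioned differ over how probabilities are introduced on top of this definition.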
Objective interpretations claim that probability statements are made true or false by physical reality, and not by our state of mind or information. The task is to provide truth conditions for probability statements that are objective in this sense. Usually, two varieties of such interpretations are distinguished and discussed: frequency interpretations and propensity interpretations. Both face considerable problems, the most serious of which I will briefly recall to motivate the search for an alternative.
Consider first the frequency interpretations. Here the central problem is that it is very implausible (to say the least) to postulate a non-probabilistic connection between probabilities and relative frequencies. What a frequency approach claims seems either to be false or to presuppose the notion of probability. Take, for example, the repeated throwing of a fair die that has equal probabilities for each side. All you can say is that it is very probable that upon many repetitions each face will turn up with a relative frequency of approximately 1/6 (weak law of large numbers). Or that, with probability 1, the limiting relative frequency of each face would be 1/6 in an infinite number of repetitions (strong law of large numbers). You cannot drop the clauses ‘very probable’ or ‘with probability 1’ from these statements. There is no particular relative frequency that the die is bound to produce on repeated throwing; it could, with varying probabilities, yield any frequency of a given outcome.
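A quick simulation (my illustration, not the author's) makes the point vivid: independent runs of the same fair-die experiment produce different relative frequencies, each only probably close to 1/6.

```python
import numpy as np

# Relative frequency of a six in several independent runs of 10,000 throws
# of a fair die. No run yields exactly 1/6, and the runs disagree with one
# another: the link between probability and frequency is itself probabilistic.

rng = np.random.default_rng(42)
for run in range(1, 4):
    throws = rng.integers(1, 7, size=10_000)    # fair die: outcomes 1..6
    freq = np.mean(throws == 6)
    print(f"run {run}: relative frequency of six = {freq:.4f}")
```

The printed values hover near 0.1667 without being guaranteed to; only the hedged claims of the two laws of large numbers survive.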
Why does the universe have a thermodynamic arrow of time? The standard reasoning relies on the truism: no asymmetry in, no asymmetry out. If the fundamental laws of Nature are time reversal invariant (that is, time symmetric), then the origin of the thermodynamic asymmetry in time must lie in temporally asymmetric boundary conditions. However, this conclusion can follow even if the fundamental laws are not time reversal invariant. The more basic question is whether the fundamental laws – whether time symmetric or not – entail the existence of a thermodynamic arrow. If not, then the answer must lie in temporally asymmetric boundary conditions. No asymmetry of the right kind in, no asymmetry out. As it happens, as I understand them, none of the current candidates for the fundamental laws of Nature entails the thermodynamic arrow. String theory, canonical quantum gravity, quantum field theory, general relativity, and more all admit solutions lacking a thermodynamic arrow. So a first pass at an answer to our initial question is: the universe has a thermodynamic arrow due in part to its temporally asymmetric boundary conditions.
Merely locating the answer in boundary conditions, however, is not to say much. All it does is rule out understanding thermodynamic phenomena as a corollary of the fundamental laws. But that's true of almost all phenomena. Few events or regularities can be explained directly via the fundamental laws. If we are to have a satisfying explanation, we need to get much more specific.
In this chapter I want to consider the so-called reduction of thermodynamics to statistical mechanics from both historical and relatively contemporary points of view. As is well known, most philosophers not working in the foundations of statistical physics still take this reduction to be a paradigm instance of that type of intertheoretic relation. However, numerous careful investigations by philosophers of physics and by physicists with philosophical tendencies show that this view is by and large mistaken. It is almost surely the case that thermodynamics does not reduce to statistical mechanics according to the received view of the nature of reduction in the philosophical literature. What is interesting is that, while not framing the issue in these terms, J. Willard Gibbs can also be seen as being somewhat sceptical about the possibility of a philosophical reduction of thermodynamics to statistical mechanics. Gibbs' scepticism is, of course, well known. Nevertheless, I think his remarks bear further consideration given certain advances in understanding the foundations of statistical physics.
I will first briefly run over some philosophical ground, outlining the received approach to theory reduction as well as what I take to be a more promising conception of reduction that parallels, to some extent, the way physicists typically speak of theory reduction. Following this I will discuss Gibbs' famous caution in connecting thermodynamical concepts with those from statistical mechanics. This is presented in chapter XIV, ‘Discussion of Thermodynamic Analogies’, of his book Elementary Principles in Statistical Mechanics.