It is a relatively straightforward matter to use freely available computer codes and lists of chemical reactions to compute abundances of molecular species for many types of interstellar or circumstellar region. For example, the UDfA, Ohio, and KIDA websites (see Chapter 9) provide lists of relevant chemical reactions and reaction rate data. Codes to integrate time-dependent chemical rate equations incorporating these data are widely available and provide as outputs the chemical abundances as functions of time. For many circumstances, the codes are fast, and the reaction rate data (from laboratory experiments and from theory) have been assessed for accuracy. The required input data define the relevant physical conditions for the region to be investigated.
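The kind of code described above can be sketched in miniature. The toy network below (a single two-body formation reaction X + Y → XY and a single destruction channel XY → X + Y) and all rate coefficients are illustrative assumptions, not values drawn from the UDfA, Ohio, or KIDA databases; a real network would contain thousands of reactions, but the integration loop has the same shape.

```python
# Minimal sketch of a time-dependent chemical rate-equation solver.
# The two-reaction network and the rate coefficients are assumptions
# chosen for illustration, not database values.

def integrate_network(n, rates, dt, steps):
    """Integrate dn_i/dt for a toy network with forward Euler.

    n     : dict of species abundances (cm^-3)
    rates : dict of rate coefficients (k1 in cm^3 s^-1, k2 in s^-1)
    """
    history = [dict(n)]
    for _ in range(steps):
        # Formation and destruction terms:
        #   X + Y -> XY   (two-body, rate k1)
        #   XY -> X + Y   (e.g. photodissociation, rate k2)
        form = rates["k1"] * n["X"] * n["Y"]
        dest = rates["k2"] * n["XY"]
        n["X"] += dt * (dest - form)
        n["Y"] += dt * (dest - form)
        n["XY"] += dt * (form - dest)
        history.append(dict(n))
    return history

# Example: start with purely atomic gas and evolve toward equilibrium,
# where formation balances destruction (k1 * X * Y = k2 * XY).
result = integrate_network(
    {"X": 100.0, "Y": 100.0, "XY": 0.0},
    {"k1": 1e-4, "k2": 1e-3},
    dt=1.0,
    steps=20000,
)
```

Note that the update conserves the total number of X nuclei (free plus bound), which is a useful sanity check on any chemical integrator.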
These codes and databases are immensely useful achievements that are based on decades of research. However, the results from this approach do not readily provide the insight that addresses some of the questions we posed in Chapter 1: What are the useful molecular tracers for observers to use, and how do these tracers respond to changes in the ‘drivers’ of the chemistry? Observers do not need to understand all the details of the chemical networks (which may contain thousands of reactions), but it is important to appreciate how the choice of the tracer molecule may be guided by, and depend on, the physical conditions in the regions they wish to study.
Molecules pervade the cooler, denser parts of the Universe. As a useful rule of thumb, cosmic gas at temperatures below a few thousand K and with number densities greater than one hydrogen atom per cm³ is likely to contain some molecules; even the Sun's atmosphere is very slightly molecular in sunspots (where the temperature, at about 3200 K, is lower than the average surface temperature). However, if the gas kinetic temperature is much lower, say about 100 K or less, and the gas number density much higher, say more than about 1000 hydrogen atoms per cm³, the gas will usually be almost entirely molecular. The Giant Molecular Clouds (GMCs) in the Milky Way are clear examples of such regions, and they contain a significant fraction of the nonstellar baryonic matter in the Galaxy; counterparts of the GMCs are found in nearby spiral galaxies (see Figure 1.1). Although molecular regions are generally small in volume compared to hot gas in structures such as galactic jets or extended regions of very hot X-ray–emitting gas in interstellar space, their much higher density offsets that disparity, and so compact dense objects may be more massive than large tenuous regions.
This book is about Lagrangians and Hamiltonians. To state it more formally, this book is about the variational approach to analytical mechanics. You may not have been exposed to the calculus of variations, or may have forgotten what you once knew about it, so I am not assuming that you know what I mean by, “the variational approach to analytical mechanics.” But I think that by the time you have worked through the first two chapters, you will have a good grasp of the concept.
We begin with a review of introductory concepts and an overview of background material. Some of the concepts presented in this chapter will be familiar from your introductory and intermediate mechanics courses. However, you will also encounter several new concepts that will be useful in developing an understanding of advanced analytical mechanics.
Kinematics
A particle is a material body having mass but no spatial extent. Geometrically, it is a point. The position of a particle is usually specified by the vector r from the origin of a coordinate system to the particle. We can assume the coordinate system is inertial and for the sake of familiarity you may suppose the coordinate system is Cartesian. See Figure 1.1.
The calculus of variations is a branch of mathematics which considers extremal problems; it yields techniques for determining when a particular definite integral will be a maximum or a minimum (or, more generally, the conditions for the integral to be “stationary”). The calculus of variations answers questions such as the following.
• What is the path that gives the shortest distance between two points in a plane? (A straight line.)
• What is the path that gives the shortest distance between two points on a sphere? (A geodesic or “great circle.”)
• What is the shape of the curve of given length that encloses the greatest area? (A circle.)
• What is the shape of the region of space that encloses the greatest volume for a given surface area? (A sphere.)
The technique of the calculus of variations is to formulate the problem in terms of a definite integral, then to determine the conditions under which the integral will be maximized (or minimized). For example, consider two points (P1 and P2) in the x–y plane. These can be connected by an infinite number of paths, each described by a function of the form y = y(x). Suppose we wanted to determine the equation y = y(x) for the curve giving the shortest path between P1 and P2.
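The shortest-path example can be made concrete numerically. The sketch below evaluates the arc-length integral for a one-parameter family of candidate paths joining P1 = (0, 0) and P2 = (1, 1); the particular family (a straight line plus a sine bump of amplitude a) is an assumption chosen for illustration, but it shows the variational idea: the integral is smallest when the bump vanishes.

```python
# Compare the arc-length integral for several candidate paths y = y(x)
# joining P1 = (0, 0) and P2 = (1, 1). The candidate family is an
# illustrative assumption; the straight line (a = 0) should win.
import math

def path_length(y, a=0.0, n=1000):
    """Approximate the integral of sqrt(1 + y'(x)^2) dx by summing
    the lengths of n straight polyline segments."""
    total = 0.0
    for i in range(n):
        x0, x1 = i / n, (i + 1) / n
        dy = y(x1, a) - y(x0, a)
        total += math.hypot(x1 - x0, dy)
    return total

def candidate(x, a):
    # Straight line from (0,0) to (1,1) plus a bump that vanishes
    # at both endpoints, so every candidate connects P1 to P2.
    return x + a * math.sin(math.pi * x)

lengths = {a: path_length(candidate, a) for a in (0.0, 0.2, 0.5)}
# The a = 0 path (the straight line) has length sqrt(2), the minimum.
```

The calculus of variations replaces this finite comparison with a condition that singles out the minimizing function among all admissible paths at once.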
The purpose of this book is to give the student of physics a basic overview of Lagrangians and Hamiltonians. We will focus on what are called variational techniques in mechanics. The material discussed here includes only topics directly related to the Lagrangian and Hamiltonian techniques. It is not a traditional graduate mechanics text and does not include many topics covered in texts such as those by Goldstein, Fetter and Walecka, or Landau and Lifshitz. To help you to understand the material, I have included a large number of easy exercises and a smaller number of difficult problems. Some of the exercises border on the trivial, and are included only to help you to focus on an equation or a concept. If you work through the exercises, you will be better prepared to solve the difficult problems. I have also included a number of worked examples. You may find it helpful to go through them carefully, step by step.
In the previous chapter it was mentioned that there is no general technique for solving the n coupled second-order Lagrange equations of motion, but that Jacobi had derived a general method for solving the 2n coupled canonical equations of motion, allowing one to determine all the position and momentum variables in terms of their initial values and the time.
There are two slightly different ways to solve Hamilton's canonical equations. One is more general, whereas the other is a bit simpler, but is only valid for systems in which energy is conserved. We will go through the procedure for the more general method, then solve the harmonic oscillator problem by using the second method.
Both methods involve solving a partial differential equation for the quantity S that is called “Hamilton's principal function.” The problem of solving the entire system of equations of motion is reduced to solving a single partial differential equation for the function S. This partial differential equation is called the “Hamilton–Jacobi equation.” Reducing the dynamical problem to solving just one equation is quite satisfying from a theoretical point of view, but it is not of much help from a practical point of view because the partial differential equation for S is often very difficult to solve. Problems that can be solved by obtaining the solution for S can usually be solved more easily by other means.
In this chapter we begin by considering canonical transformations. These are transformations that preserve the form of Hamilton's equations. This is followed by a study of Poisson brackets, an important tool for studying canonical transformations. Finally we consider infinitesimal canonical transformations and, as an example, we look at angular momentum in terms of Poisson brackets.
Integrating the equations of motion
In our study of analytical mechanics we have seen that the variational principle leads to two different sets of equations of motion. The first set consists of the Lagrange equations and the second set consists of Hamilton's canonical equations. Lagrange's equations are a set of n coupled second-order differential equations and Hamilton's equations are a set of 2n coupled first-order differential equations.
The ultimate goal of any dynamical theory is to obtain a general solution for the equations of motion. In Lagrangian dynamics this requires integrating the equations of motion twice. This is often quite difficult because the Lagrangian (and hence the equations of motion) depends not only on the coordinates but also on their derivatives (the velocities). There is no known general method for integrating these equations. You might wonder if it is possible to transform to a new set of coordinates in which the equations of motion are simpler and easier to integrate. Indeed, this is possible in some situations.
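When no analytic integration is available, Hamilton's 2n first-order equations lend themselves directly to numerical solution. The sketch below treats the simplest case, n = 1, for a unit-mass harmonic oscillator with H = p²/2 + q²/2, so that q̇ = ∂H/∂p = p and ṗ = −∂H/∂q = −q. The symplectic-Euler update is an illustrative choice on my part, not a method prescribed in the text.

```python
# Numerically integrate Hamilton's canonical equations for a unit-mass
# harmonic oscillator, H = p^2/2 + q^2/2, using symplectic Euler.
# The method choice is an illustrative assumption.
import math

def integrate_hamilton(q, p, dt, steps):
    """Integrate q' = dH/dp = p and p' = -dH/dq = -q."""
    for _ in range(steps):
        p -= dt * q      # "kick":  p' = -dH/dq
        q += dt * p      # "drift": q' = dH/dp (using the updated p)
    return q, p

# After one full period (T = 2*pi for this oscillator) the trajectory
# should return close to the initial state (q, p) = (1, 0).
dt = 1e-4
q, p = integrate_hamilton(1.0, 0.0, dt, round(2 * math.pi / dt))
```

Updating p before q (rather than both from the old values) is what makes the scheme symplectic, so it respects the phase-space structure of Hamilton's equations far better over long times than naive Euler would.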
The harmonic oscillator plays a loftier role in physics than one might guess from its humble origin: a mass bouncing at the end of a spring. The harmonic oscillator underlies the creation of sound by musical instruments, the propagation of waves in media, the analysis and control of vibrations in machinery and airplanes, and the time-keeping crystals in digital watches. Furthermore, the harmonic oscillator arises in numerous atomic and optical quantum scenarios, in quantum systems such as lasers, and it is a recurrent motif in advanced quantum field theories. In short, if there were a competition for a logo for the universality of physics, the harmonic oscillator would make a pretty strong contender.
We encountered simple harmonic motion—the periodic motion of a mass attached to a spring—in Chapter 3. The treatment there was highly idealized because it neglected friction and the possibility of a time-dependent driving force. It turns out that friction is essential for the analysis to be physically meaningful and that the most interesting applications of the harmonic oscillator generally involve its response to a driving force. In this chapter we will look at the harmonic oscillator including friction, a system known as the damped harmonic oscillator, and then examine how the system behaves when driven by a periodic applied force, a system called the driven harmonic oscillator.
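The damped, driven oscillator described above, m x″ + b x′ + k x = F₀ cos(ωt), is easy to explore numerically. In the sketch below all numerical parameter values are illustrative assumptions; once the transient has died away, the steady-state amplitude should approach the standard result F₀ / √((k − mω²)² + (bω)²).

```python
# Simulate the damped, driven harmonic oscillator
#   m x'' + b x' + k x = F0 * cos(w * t)
# and estimate the steady-state amplitude. Parameter values are
# illustrative assumptions.
import math

def steady_state_amplitude(m, b, k, F0, w, dt=2e-4, t_end=60.0):
    """Integrate with semi-implicit Euler from rest; return the peak
    |x| over the final drive period (an estimate of the steady-state
    amplitude, valid once transients have decayed)."""
    x, v, t = 0.0, 0.0, 0.0
    period = 2 * math.pi / w
    peak = 0.0
    for _ in range(int(t_end / dt)):
        a = (F0 * math.cos(w * t) - b * v - k * x) / m
        v += dt * a
        x += dt * v
        t += dt
        if t > t_end - period:      # sample only the last drive cycle
            peak = max(peak, abs(x))
    return peak

m, b, k, F0, w = 1.0, 0.5, 4.0, 1.0, 1.5
amp = steady_state_amplitude(m, b, k, F0, w)
analytic = F0 / math.sqrt((k - m * w**2)**2 + (b * w)**2)
```

Sweeping the drive frequency w in this sketch traces out the resonance curve whose peak, width, and phase behavior are the main subjects of the analysis that follows.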