The initial layout of an optical instrument is usually made using simple Gaussian optics, where all apertures and pupils are infinitesimal and all angles are considered small, so that the approximations sin θ = tan θ = θ and cos θ = 1 are adequate.
Geometrical optics, however, is a non-linear discipline and ray-tracing, using the proper values of sines and cosines, quickly reveals that rays do not arrive in the expected places to make sharp images. The quality of an image can be tested by modifying the Gaussian optics to the approximations sin θ = θ − θ³/6, cos θ = 1 and tan θ = θ + θ³/3, and adjustments can then be made to the positions, curvatures and refractive indices of the various elements of a system to produce better images at the focal surface. Except in rare instances, perfection is not to be expected, and the art of the optical designer is to adjust the values to produce an acceptable result. The problem confronting the designer is that the adjustments all interfere with each other, making the problem miserably complicated.
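The gap between the Gaussian and third-order approximations is easy to quantify. The following Python sketch, with an illustrative ray angle of my own choosing, compares the two forms of sin θ against the exact value:

```python
import math

def paraxial_sin(theta):
    # First-order (Gaussian) approximation: sin θ ≈ θ
    return theta

def third_order_sin(theta):
    # Third-order approximation used in aberration theory: sin θ ≈ θ − θ³/6
    return theta - theta**3 / 6

theta = math.radians(10)   # a modest 10-degree ray angle (illustrative)
exact = math.sin(theta)
print(f"exact           : {exact:.6f}")
print(f"paraxial error  : {abs(paraxial_sin(theta) - exact):.2e}")
print(f"3rd-order error : {abs(third_order_sin(theta) - exact):.2e}")
```

At 10° the first-order error is several hundred times larger than the third-order error, which is why third-order theory is adequate for assessing image quality while Gaussian optics is not.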
The problem was first analysed by Ludwig von Seidel of Munich (1821–1896), who codified the five so-called Seidel aberrations of a monochromatic system and derived formulae for computing them.
The historically acknowledged function of a telescope is to produce an enlarged virtual image of a distant object for inspection by the observer's eye. Both object and image are at −∞ in normal focusing mode, but by shortening the tube a little the image can be brought nearer to the observer's eye to allow for myopia. In effect the objective forms a real image of a distant object and the eyepiece re-collimates the light so that the final image, like the object, is at −∞. The lens of the observer's eye then focuses this virtual image on to the retina.
An unusual but equally valid alternative is to regard a telescope as a device which puts an enlarged, real image of the observer's eye-pupil at the telescope's objective. If the telescope is properly designed, the image of the eye-pupil will exactly correspond in position and diameter to the clear aperture of the objective. The enlarged, virtual eye would then see the distant object exactly as the actual observer sees it, but in appropriately greater detail.
What all this implies is that the observer's eye-pupil and the telescope objective are conjugate points in the complete optical system of telescope and eye.
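This conjugacy can be checked with elementary Gaussian optics. In the sketch below (the focal lengths and pupil diameter are invented for illustration, not taken from the text), the observer's eye pupil imaged back through an afocal telescope is enlarged by the angular magnification M = f_obj/f_eye:

```python
# Afocal telescope: objective (f_obj) and eyepiece (f_eye) separated by
# f_obj + f_eye. The eye pupil at the exit pupil is imaged onto the
# objective, enlarged by M = f_obj / f_eye. Values are illustrative.

f_obj = 800.0        # objective focal length, mm (assumed)
f_eye = 20.0         # eyepiece focal length, mm (assumed)
d_eye_pupil = 2.5    # observer's eye-pupil diameter, mm (assumed)

M = f_obj / f_eye                    # angular magnification
d_image_of_pupil = d_eye_pupil * M   # eye-pupil image diameter at objective

print(f"magnification M    = {M:.0f}x")
print(f"image of eye pupil = {d_image_of_pupil:.1f} mm")
```

With these numbers a 100 mm clear aperture would exactly correspond to the 2.5 mm eye pupil, the matching condition described above.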
Silver halide photography, from its invention in the 1830s, relied almost exclusively on finely ground silver bromide crystals suspended in a gelatine emulsion. These, it was discovered, were altered by the absorption of light so that they could be reduced chemically to black, colloidal silver by a variety of sensitive reducing agents to produce a negative image in the emulsion. This could then be changed, usually by copying, to positive images in transparent emulsion or on sensitized paper. The sensitivity was restricted at first to light of short wavelengths in the blue to ultra-violet region of the spectrum until a chemist, H. W. Vogel, at the University of Berlin, discovered the technique of dye-sensitization to make orthochromatic emulsions sensitive to yellow and orange light. Later improvements in this technique eventually made red-sensitive panchromatic emulsions available – at a price – in the early years of the twentieth century. S. M. Prokudin-Gorskii, in particular, led the way by making simultaneous exposures through red, green and blue filters to give negatives, from which diapositives could be made for simultaneous projection with three projectors and the same colour filters, on to a screen, thus producing the first colour photography slide shows.
Which device is used to detect radiation depends very much on the kind of radiation. Heat, that is to say the far infra-red, needs a bolometer; the UVOIR region relies on the interaction of radiation with electric charge; and the far ultra-violet relies on fluorescence or the photoelectric effect.
The geometry of radiation measurement
Two things are fundamental.
(1) There is no power in a parallel beam: power depends on divergence into a finite solid angle.
(2) There is no power from a point source. The power radiated is proportional to the source area if the radiation is incoherent, and is proportional to the square of the source area if the radiation is coherent across the area.
The two useful ideas – a point source and a parallel beam – are, like light rays themselves, among the ‘convenient fictions’ of physics.
Extended sources
As we can see from the remarks above, all sources are extended to some degree. Even a single atom radiates a spectral line from an ‘area’ of about λ², because you cannot ‘localize’ the emitter to better than this using this radiation. Light sources, then, are distinguished by their surface brightness: the power emitted per unit area per unit solid angle.
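Surface brightness makes points (1) and (2) concrete: for an incoherent extended source, the collected power is brightness × emitting area × solid angle subtended by the collecting aperture, and it vanishes as either the area or the solid angle goes to zero. A minimal sketch with invented numbers:

```python
import math

# Power collected from an extended incoherent source:
#     P = B * A_source * Omega
# B is surface brightness (W m^-2 sr^-1), A_source the emitting area,
# Omega the solid angle of the collecting aperture. Values are invented.

B = 1.0e4         # surface brightness, W m^-2 sr^-1
A_source = 1e-6   # 1 mm^2 emitting area, in m^2
D = 0.05          # collector (lens) diameter, m
R = 1.0           # source-to-collector distance, m

Omega = math.pi * (D / 2)**2 / R**2   # small-angle solid angle of aperture
P = B * A_source * Omega

print(f"solid angle     = {Omega:.3e} sr")
print(f"collected power = {P:.3e} W")
```

Setting A_source or Omega to zero in this formula recovers the two ‘convenient fictions’: a point source delivers no power, and a strictly parallel beam carries none.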
Optical instruments come in all shapes and sizes, from fly-on-the-wall surveillance cameras to 10 metre segmented astronomical reflecting telescopes, and in shapes from microscopes to sextants to periscopes to spectrographs to cine-projectors.
Whatever their purpose, they all have two things in common.
(1) They are image-forming devices, intended to make a picture, to form an image of a luminous source. The image may be on a cinema screen, on a photographic emulsion, on a CCD surface or on the retina of an eye.
(2) They are, with one important exception, centred systems. That is to say they comprise a series of curved surfaces of transparent materials or reflecting materials or both. The centres of curvature of the various elements all lie on a straight line called the optic axis. Light passes from the object, through successive elements until it emerges to form an image.
This is a slight over-simplification of course. There are occasional plane reflectors along the path, as in a periscope for example, but these are for convenience rather than for any peculiar optical properties they possess.
Optical elements
There are four basic optical elements: the lens, the mirror, the diffraction grating and the prism. What follows now concerns the first two of these. The others have chapters of their own.
For the purposes of observational astronomy and celestial navigation, it is adequate to assume that the Sun, stars and planets all lie on the surface of a transparent sphere of ‘immeasurably’ large diameter with the Earth, or sometimes the observer, at its centre. The positions of the stars on this sphere are given by coordinates similar to those of latitude and longitude on Earth, but known generally as declination (latitude) and celestial longitude, measured westward from ‘the first point of Aries’. This is the place on the sky where the Sun crosses the equator going northwards each (northern) Spring and was once in the constellation of Aries. (Because of the ‘precession of the equinoxes’ it is now in the constellation of Pisces.)
Formerly, longitude was measured eastward from the first point of Aries. It was called ‘right ascension’ and was measured in hours and minutes rather than degrees. These hours were sidereal hours, not solar hours, with the understanding that Earth rotates through 360° in 24 sidereal hours, or 23 hours 56 minutes of mean solar time. Sidereal clocks gain 4 minutes each day on ordinary clocks, since the Earth rotates through approximately 361° from one midday to the next.
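The 4-minutes-a-day gain follows directly from the 361° figure. A back-of-envelope check in Python (a sketch, not an ephemeris calculation):

```python
# Earth turns ~360° relative to the stars in one sidereal day, but ~361°
# between successive solar middays, having moved ~1° along its orbit.
# From this, the sidereal day length and the daily sidereal-clock gain:

solar_day_min = 24 * 60                       # 1440 min of mean solar time
sidereal_day_min = solar_day_min * 360 / 361  # time to turn 360° of the 361°

gain_per_day = solar_day_min - sidereal_day_min
print(f"sidereal day ≈ {sidereal_day_min:.1f} min "
      f"({int(sidereal_day_min // 60)} h {sidereal_day_min % 60:.0f} min)")
print(f"sidereal clock gains ≈ {gain_per_day:.1f} min per solar day")
```

The result, about 23 hours 56 minutes with a gain of roughly 4 minutes per day, matches the figures quoted above.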
Microscopes and projectors have this in common: they illustrate the universal requirement that the passage of light through an instrument must be closely controlled. Neither is likely to be improvised by the reader, but one should understand the way they work.
Projectors
In this section we deal chiefly with the diapositive projector, the old-time ‘magic lantern’ as our Victorian forefathers knew it, which projects images of transparencies on to a large screen for public viewing. Two things are necessary for its design:
(1) it must project a clear sharp image of the transparency on to a distant flat surface, and
(2) it must illuminate the transparency uniformly with light – there must be no dark corners to the picture.
The ingredients for this are a bright source of light, ideally an extended source, but failing that either an array of white LEDs or a filament lamp with a series of coiled filaments as shown in Figure 9.1, arranged so that the space between parallel coils is equal to the width of the coil.
Behind the filament is a spherical mirror placed so that the filament is at its centre of curvature and slightly offset laterally so that the images of each coil fall between the coils themselves.
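The offset condition can be checked with the unit-magnification imaging of a mirror whose centre of curvature lies in the filament plane: each point x is imaged to 2x₀ − x, where x₀ is the lateral position of the mirror axis. A sketch with invented dimensions:

```python
# Coils of width w centred at -2w, 0, +2w, so the gaps between them lie
# at ±w. A spherical mirror with the filament plane at its centre of
# curvature images each point x to 2*x0 - x (magnification -1 about the
# mirror axis at x0). Choosing x0 = w/2 drops each coil image into a gap;
# the image of one end coil falls just beyond the array, which is harmless.
# All dimensions here are invented for illustration.

w = 2.0                        # coil width (equal to the gap width), mm
coils = [-2 * w, 0.0, 2 * w]   # coil centre positions, mm
x0 = w / 2                     # lateral offset of the mirror axis, mm

images = [2 * x0 - x for x in coils]
print("coil centres :", coils)   # [-4.0, 0.0, 4.0]
print("image centres:", images)  # [6.0, 2.0, -2.0]; the gaps are at ±2.0
```

The interior coil images land exactly in the gaps, so the filament plane is filled uniformly, which is the point of the arrangement.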
When you look through a telescope or binocular you focus the distant scene by first pulling out then slowly retracting the tube, or by turning the appropriate screw. For visual observation you should always extend the tube beyond the point of good focus and then draw it in until the scene is seen to be sharp and well focused. This is to avoid the eye strain which comes from a prolonged accommodation of the eye to a close object. (The same is true when using a hand-lens. Bring the object in from a distance; do not start with it too close because, although it will be in focus well enough, the image will be close to the eye instead of at −∞ where it belongs.)
The telescope field of view is bounded by a circular stop which is usually sharply in focus. There may be an eyepiece adjustment to make it so. It is a stop placed at the prime focus, and it is there to prevent scattered light from the outer, unusable part of the field from entering the eye, thereby reducing the contrast of the scene. A second such stop, with the same purpose, is at the intermediate pupil of an erecting telescope. The two stops are optically conjugate.
Covering a broad range of topics in modern optical physics and engineering, this textbook is invaluable for undergraduate students studying laser physics, optoelectronics, photonics, applied optics and optical engineering. This new edition has been re-organized, and now covers many new topics such as the optics of stratified media, quantum well lasers and modulators, free electron lasers, diode-pumped solid state and gas lasers, imaging and non-imaging optical systems, squeezed light, periodic poling in nonlinear media, very short pulse lasers and new applications of lasers. The textbook gives a detailed introduction to the basic physics and engineering of lasers, as well as covering the design and operational principles of a wide range of optical systems and electro-optic devices. It features full details of important derivations and results, and provides many practical examples of the design, construction and performance characteristics of different types of lasers and electro-optic devices.
Playing a prominent role in communications, quantum science and laser physics, quantum nonlinear optics is an increasingly important field. This book presents a self-contained treatment of field quantization and covers topics such as the canonical formalism for fields, phase-space representations and the encompassing problem of quantization of electrodynamics in linear and nonlinear media. Starting with a summary of classical nonlinear optics, it then explains in detail the calculation techniques for quantum nonlinear optical systems and their applications, quantum and classical noise sources in optical fibers and applications of nonlinear optics to quantum information science. Supplemented by end-of-chapter exercises and detailed examples of calculation techniques in different systems, this book is a valuable resource for graduate students and researchers in nonlinear optics, condensed matter physics, quantum information and atomic physics. A solid foundation in quantum mechanics and classical electrodynamics is assumed, but no prior knowledge of nonlinear optics is required.
The Dutch physicist Hendrik Antoon Lorentz (1853–1928) was educated at the University of Leiden, where he later became a Professor of Theoretical Physics. A leading figure in his field, he established the basic mathematical principles that were later used by Albert Einstein for his theory of relativity. Lorentz and his colleague Pieter Zeeman won the Nobel Prize in Physics in 1902 for their researches into the influence of magnetism upon radiation phenomena (the Zeeman effect). In 1905 Lorentz was also elected a Fellow of the Royal Society, which awarded him the Rumford and Copley Medals. Contributing to the discussion of the theory of a luminiferous ether - soon to be superseded by special relativity - this work, first published in 1895, looks at electromagnetic phenomena (the propagation of light) in relation to moving bodies and optics.
In Chapter 1, we treated the electromagnetic field as classical. Henceforth, we will want to treat it as a quantum field. In order to do so, we will first present some of the formalism of quantum field theory. This formalism is very useful in describing many-particle systems and processes in which the number of particles changes. Why this is important to a quantum description of nonlinear optics can be seen by considering a parametric amplifier of the type discussed in the last chapter. A pump field, consisting of many photons, amplifies idler and signal fields by means of a process in which a pump photon splits into two lower-energy photons, one at the idler frequency and one at the signal frequency. Therefore, what we would like to do in this chapter is to provide a discussion of some of the basics of quantum field theory that will be useful in the treatment of the quantization of the electromagnetic field.
In particular, we will begin with a summary of quantum theory notation and a discussion of many-particle Hilbert spaces. These provide the arena in which all of the action takes place. We will then move on to a treatment of the canonical quantization procedure for fields. This will allow us to develop a scattering theory for fields, which is ideally suited to our needs: it relates the properties of a field entering a medium to those of the field leaving it, and this corresponds to what is done in an experiment.
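The parametric amplifier mentioned above already shows why a photon-number-changing formalism is needed: with an undepleted classical pump, the signal mode grows even from the vacuum. A minimal numerical sketch of the standard textbook result ⟨n_s⟩ = sinh²(κt); the gain κ and the frequencies below are invented for illustration:

```python
import math

# Parametric down-conversion: one pump photon splits into a signal photon
# and an idler photon, so the frequencies satisfy omega_p = omega_s + omega_i.
omega_s, omega_i = 2.0, 1.0        # illustrative frequencies, arbitrary units
omega_p = omega_s + omega_i        # energy conservation

# With an undepleted classical pump, the mean signal photon number grown
# from the vacuum is <n_s> = sinh^2(kappa * t); kappa is set by the pump
# amplitude and the nonlinearity, and is invented here.
kappa = 0.5
for t in (0.0, 1.0, 2.0):
    n_s = math.sinh(kappa * t) ** 2
    print(f"t = {t:.1f}:  <n_s> = {n_s:.3f}")
```

A classical field with no input would stay at zero; the growth from vacuum is a purely quantum effect, and describing it is exactly what the field-quantization machinery of this chapter is for.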
Quantum information, which studies the representation of information by quantum mechanical systems and the type of information processing this makes possible, is a relatively new field. Its roots, however, go back to early discussions of the interpretation of the quantum mechanical formalism. In 1935 Einstein, Podolsky and Rosen suggested that there were interpretational issues with quantum mechanics, having to do with local realism. Einstein was puzzled by an apparent lack of locality in quantum mechanical descriptions of reality, and suggested that quantum mechanics was not a complete theory. This stimulated a subsequent series of theoretical and experimental investigations. In particular, John Bell realized that Einstein's proposal for a ‘completion’ of quantum mechanics by the addition of more variables – called ‘hidden’ variables – was not consistent with quantum predictions, and could not be carried out.
While these interpretational issues set the stage for what followed, the modern field of quantum information arose in the 1980s, stimulated by the ever-decreasing size of the elements in information-processing circuits. It was realized that at some point a threshold would be crossed and the devices would start to exhibit quantum mechanical behavior. It was then determined that, if they did, this need not present a problem and could even be advantageous. This led to the idea of a quantum computer, which stores information in quantum states, or ‘qubits’, rather than as binary digits, or ‘bits’, as in a standard digital computer.
The field took off when, in 1994, Peter Shor found an efficient quantum algorithm for finding the prime factors of an integer.
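The quantum speed-up lies entirely in period-finding; the classical reduction from factoring to order-finding can be sketched in a few lines. In the sketch below the period is found by brute force, which is exactly the step Shor's quantum algorithm replaces with an efficient procedure:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r ≡ 1 (mod n), found by brute force.
    This is the step a quantum computer performs efficiently."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's algorithm for a chosen base a."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2:
        return None              # odd period: try another base
    y = pow(a, r // 2, n)        # modular exponentiation
    if y == n - 1:
        return None              # trivial square root: try another base
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15, 7))     # order of 7 mod 15 is 4 -> factors (3, 5)
```

For n = 15 and base 7 the period is 4, and gcd(7² ± 1, 15) yields the factors 3 and 5; the whole difficulty of factoring large integers is hidden in finding that period.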