Roland Shack invented the device now known as the Shack–Hartmann wavefront sensor in the early 1970s. This sensor, which in recent years has been commercialized, measures the phase distribution over the cross-section of a given beam of light without relying on interference and, therefore, does not require a reference beam.
The standard method of wavefront analysis is interferometry, where one brings together on an observation plane the beam under investigation (hereinafter the test beam) and a reference beam in order to form tell-tale fringes. The trouble with interferometry is that it requires a reference beam, which is not always readily available. Moreover, the coherence length of the light used in these measurements must be long compared with the path-length difference between the reference and test beams. Thus, when the available light source happens to be broad-band, it becomes difficult (though by no means impossible) to produce high-contrast fringes. The Shack–Hartmann instrument solves these problems by eliminating altogether the need for the reference beam.
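Although the details of the Shack–Hartmann sensor are taken up later in the chapter, the flavor of its data reduction can be conveyed with a minimal numerical sketch. The sensor delivers local wavefront slopes (from the displacements of the lenslet focal spots), and a common way to recover the phase is a least-squares integration of those slopes. The Python fragment below assumes slope maps sx and sy sampled on a square grid with lenslet spacing pitch; the function name and all parameters are illustrative, not taken from this chapter.

import numpy as np

def reconstruct_wavefront(sx, sy, pitch):
    # Least-squares (zonal) integration of measured x- and y-slopes, sampled
    # on an N x N grid of lenslets with spacing 'pitch'; returns the wavefront
    # with the piston term removed.  Illustrative sketch only.
    n = sx.shape[0]
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(n):                      # finite-difference equations along x
        for j in range(n - 1):
            rows += [eq, eq]; cols += [i * n + j + 1, i * n + j]
            vals += [1.0, -1.0]; rhs.append(pitch * sx[i, j]); eq += 1
    for i in range(n - 1):                  # finite-difference equations along y
        for j in range(n):
            rows += [eq, eq]; cols += [(i + 1) * n + j, i * n + j]
            vals += [1.0, -1.0]; rhs.append(pitch * sy[i, j]); eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    w, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return (w - w.mean()).reshape(n, n)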
Wavefront analysis by interferometry
Before embarking on a discussion of the Shack–Hartmann wavefront sensor, it will be instructive to describe the operation of a conventional interferometer. Consider, for instance, the system of Figure 45.1, where a spherical mirror is under investigation. While grinding and polishing the glass blank, the optician frequently performs this type of test to determine departures of the surface from the desired figure. A point source reflected from a 50/50 beam-splitter is used to illuminate the test mirror.
When a beam of light enters a material medium, it sets in motion the resident electrons, whether these electrons are free or bound. The electronic oscillations in turn give rise to electromagnetic radiation which, in the case of linear media, possesses the frequency of the exciting beam. Because Maxwell's equations are linear, one expects the total field at any point in space to be the sum of the original (exciting) field and the radiation produced by all the oscillating electrons. However, in practice the original beam appears to be absent within the medium, as though it had been replaced by a different beam, one having a shorter wavelength and propagating in a different direction. The Ewald–Oseen theorem resolves this paradox by showing how the oscillating electrons conspire to produce a field that exactly cancels out the original beam everywhere inside the medium. The net field is indeed the sum of the incident beam and the radiated field of the oscillating electrons, but the latter field completely masks the former.
Although the proof of the Ewald–Oseen theorem is fairly straightforward, it involves complicated integrations over dipolar fields in three-dimensional space, making it a brute-force exercise in calculus that is devoid of physical insight. It is possible, however, to prove the theorem using plane waves interacting with thin slabs of material, while invoking no physics beyond Fresnel's reflection coefficients. (These coefficients, which date back to 1823, predate Maxwell's equations.)
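For reference, the short sketch below evaluates the standard Fresnel amplitude reflection coefficients at a single interface; the function name and the convention (light incident from a medium of index n1 onto a medium of index n2) are simply illustrative choices.

import numpy as np

def fresnel_coefficients(n1, n2, theta_i):
    # Standard Fresnel amplitude reflection coefficients r_s and r_p for a
    # plane wave incident from a medium of index n1 onto a medium of index n2
    # at angle theta_i (radians).  n1 and n2 may be complex; the complex square
    # root handles total internal reflection and absorbing media.
    cos_i = np.cos(theta_i)
    cos_t = np.sqrt(1 - (n1 * np.sin(theta_i) / n2)**2 + 0j)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_s, r_p

# Example: light going from glass (n = 1.5) into air at 30 degrees of incidence.
print(fresnel_coefficients(1.5, 1.0, np.deg2rad(30.0)))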
Despite its scary name, a surface plasmon is simply an inhomogeneous plane-wave solution to Maxwell's equations. Typically, a medium with a large but negative dielectric constant ε is a good host for surface plasmons. Because in an isotropic medium having refractive index n and absorption coefficient κ we have ε = (n + iκ)², whenever κ ≫ n the above criterion, large but negative ε, is approximately satisfied; as a result, most common metals such as aluminum, gold, and silver can exhibit resonant absorption by surface plasmon excitation. In order to excite, within a metal, a plane wave that has a large enough amplitude to carry away a significant fraction of the incident optical energy, one must create a situation whereby the metal is “forced” to accept such a wave; otherwise, as normally occurs, the wave within the metal ends up having a small amplitude, causing nearly all of the incident energy to be reflected, diffracted, or scattered from the metallic surface, depending upon the condition of that surface.
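A quick numerical check of the "large but negative ε" criterion is straightforward; in the snippet below the values of n and κ are rough, order-of-magnitude figures for a metal in the visible, not tabulated optical constants.

# Illustrative check of the surface-plasmon criterion: epsilon = (n + i*kappa)^2
# acquires a large negative real part when kappa >> n.  The numbers are rough
# order-of-magnitude values for a metal in the visible, not measured constants.
n, kappa = 0.2, 3.0
epsilon = complex(n, kappa) ** 2
print(epsilon.real, epsilon.imag)    # approximately -8.96 and 1.2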
In this chapter several practical situations in which surface plasmons play a role will be presented. We begin by describing the results of an experiment that can be readily set up in any optics laboratory, and we give an explanation of the observed phenomenon by scrutinizing the well-known Fresnel's reflection formula at a metal-to-air interface.
A typical single-mode silica glass fiber has a mode profile that is well approximated by a Gaussian beam. At λ = 1.55 μm, this Gaussian mode has a (1/e² intensity) diameter of ∼10 μm. One method of launching light into a fiber calls for placing the polished end of the fiber in contact with (or in close proximity to) the polished end of another, signal-carrying fiber that has a matching mode profile. Alternatively, a coherent beam of light may be focused directly onto the polished end of the fiber. If the focused spot is well aligned with the fiber's core and has the same amplitude and phase distribution as the fiber's mode profile, then the launched mode will carry the entire incident optical power into the fiber. In general, however, the focused spot is neither perfectly matched to the fiber's mode nor completely aligned with the core. Under these circumstances, only a certain fraction of the incident optical power will be launched into the fiber. The numerical value of this fraction, commonly referred to as the coupling efficiency, will be denoted by η throughout this chapter.
It is well-known that the strength of the launched mode may be computed by evaluating the overlap integral between the mode profile and the (complex) light amplitude distribution that arrives at the polished facet of the fiber. The problem of computing the coupling efficiency η is thus reduced to determining the light amplitude distribution immediately in front of the fiber.
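In discretized form the overlap-integral calculation takes only a few lines; the sketch below assumes the incident field and the fiber mode are available as sampled complex arrays on a common grid, and the Gaussian example at the end (5 μm mode radius, 2 μm lateral misalignment) uses purely illustrative numbers.

import numpy as np

def coupling_efficiency(e_incident, e_mode, dx):
    # Coupling efficiency eta from the overlap integral of the incident field
    # and the fiber mode, both given as sampled complex 2-D arrays on a grid
    # of spacing dx.  Illustrative sketch of the standard overlap formula.
    overlap = np.sum(e_incident * np.conj(e_mode)) * dx**2
    p_incident = np.sum(np.abs(e_incident)**2) * dx**2
    p_mode = np.sum(np.abs(e_mode)**2) * dx**2
    return np.abs(overlap)**2 / (p_incident * p_mode)

# Example: a Gaussian spot laterally offset from a Gaussian mode of ~10 um
# (1/e^2 intensity) diameter; the spot size and the 2 um misalignment are
# arbitrary illustrative numbers.
x = np.linspace(-20e-6, 20e-6, 256)
X, Y = np.meshgrid(x, x)
w0 = 5e-6                                          # 1/e^2 intensity radius
mode = np.exp(-(X**2 + Y**2) / w0**2)
spot = np.exp(-((X - 2e-6)**2 + Y**2) / w0**2)
print(coupling_efficiency(spot, mode, x[1] - x[0]))   # ~0.85 for this offset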
A variety of methods exist for temporally compressing (shortening) optical pulses. These methods typically start with pulses in the picosecond or femtosecond range, and end up with pulses that can be as short as a few optical cycles. The optical bandwidth of the initial pulse is usually increased using a nonlinear interaction such as self-phase modulation; this leads to a chirped pulse, which sometimes ends up being longer than the original pulse. A well-known technique for generating sub-100 fs pulses is nonlinear compression in a fiber, where the fiber's nonlinearity is used to broaden the optical spectrum. Thereafter, the pulse duration is reduced using linear dispersive compression, which removes the chirp by flattening the spectral phase. This is accomplished by sending the pulse through an optical element with a suitable amount of dispersion, such as a prism pair, an optical fiber, a grating compressor, or a chirped mirror.
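The linear compression step can be illustrated numerically: an ideal compressor simply flattens the spectral phase, so the pulse duration collapses to the transform limit set by the broadened spectrum. In the sketch below the chirped Gaussian pulse and its chirp rate are arbitrary, illustrative choices.

import numpy as np

# Illustrative sketch of linear dispersive compression: an ideal compressor
# flattens the spectral phase of a chirped pulse, leaving the transform-limited
# pulse whose duration is set only by the (broadened) optical spectrum.
t = np.linspace(-2e-12, 2e-12, 8192)                  # time axis, seconds
tau = 100e-15                                         # Gaussian envelope width
chirp = 2e26                                          # linear chirp rate, s^-2
chirped = np.exp(-t**2 / (2 * tau**2) + 1j * chirp * t**2)

spectrum = np.fft.fft(chirped)
compressed = np.fft.fftshift(np.fft.ifft(np.abs(spectrum)))   # flat spectral phase

def fwhm(intensity):
    above = t[intensity > intensity.max() / 2]        # full width at half maximum
    return above.max() - above.min()

print(fwhm(np.abs(chirped)**2), fwhm(np.abs(compressed)**2))  # roughly 166 fs and 40 fs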
In the 1960s, Gires and Tournois and Giordmaine et al. independently proposed the shortening of optical pulses using compression techniques analogous to those used at microwave frequencies. Fisher et al. suggested that femtosecond optical pulses could be obtained by first passing a short pulse through an optical Kerr liquid in order to impress a frequency sweep or “chirp” on the pulse's carrier. Pulse compression was then to be achieved by compensating the frequency sweep in the pulse frequency spectrum using a dispersive delay line. In 1982, Shank et al.
Antoni van Leeuwenhoek (1632–1723), a fabric merchant from Delft, the Netherlands, used tiny glass spheres to study various microscopic objects at high magnification with surprisingly good resolution. A contemporary of Sir Isaac Newton, Christiaan Huygens, and Robert Hooke, he is said to have made over 400 microscopes and bequeathed 26 of them to the Royal Society of London. (A handful of these microscopes are extant in various European museums.) Using his single-lens microscope, van Leeuwenhoek observed what he called animalcules – or micro-organisms, to use the modern terminology – and made the first drawing of a bacterium in 1683. He kept detailed records of what he saw and wrote about his findings to the Royal Society of London and the Paris Academy of Science. His contributions have made him the father of scientific microscopy.
Van Leeuwenhoek was an amateur in science and lacked formal training. He seems to have been inspired to take up microscopy by Robert Hooke's illustrated book, Micrographia, which depicted Hooke's own observations with the microscope. In basic design, van Leeuwenhoek's instruments were simply powerful magnifying glasses, not compound microscopes of the type used today. An entire instrument was only 3–4 inches (8–10 cm) long, and had to be held up close to the eye; its use required good lighting and great patience. Van Leeuwenhoek devised tiny, double-convex lenses to be mounted between brass plates.
George Nomarski invented the method of differential interference contrast for the microscopic observation of phase objects in 1953. The features on a phase object typically modulate the phase of an incident beam without significantly affecting the beam's amplitude. Examples include unstained biological samples having differing refractive indices from their surroundings, and reflective (as well as transmissive) surfaces containing digs, scratches, bumps, pits, or other surface-relief features that are smooth enough to reflect specularly the incident rays of light. A conventional microscope image of a phase object is usually faint, showing at best the effects of diffraction near the corners and sharp edges but revealing little information about the detailed structure of the sample.
Nomarski's method creates two slightly shifted, overlapping images of the same surface. The two images, being temporally coherent with respect to one another, optically interfere, producing contrast variations that contain useful information about the phase gradients across the sample's surface. In particular, a feature that has a slope in the direction of the imposed shear appears with a specific level of brightness that is distinct from other, differently sloping regions of the same sample.
The Nomarski microscope uses a Wollaston prism in the illumination path to produce two orthogonally polarized, slightly shifted bright spots at the sample's surface. Upon reflection from (or transmission through) the sample, the two beams are collected by the objective lens, then sent through the same (or, in the case of a transmission microscope, a similar) Wollaston prism, which recombines the two beams by sliding them back over each other.
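The origin of the contrast can be mimicked with a simple shear-interference model: two copies of the field returned by a phase object, displaced by a small shear and given a relative phase bias, are superposed, so that the recorded intensity follows the phase gradient along the shear direction. The sketch below is only illustrative of this idea (the test object, shear, and bias are arbitrary) and ignores the polarization optics of the actual instrument.

import numpy as np

# Illustrative shear-interference model of DIC contrast: a smooth phase object
# phi(x, y), two laterally sheared copies of the reflected field, and a phase
# bias between them.  The polarization optics (Wollaston prism, analyzer) are
# not modeled; the sketch only captures the sensitivity to the phase gradient.
n, shear_pixels, bias = 512, 2, np.pi / 2
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phi = 2.0 * np.exp(-((X - 0.2)**2 + Y**2) / 0.05)   # a smooth "bump", in radians

field = np.exp(1j * phi)
sheared = np.roll(field, shear_pixels, axis=1)      # copy displaced along x
image = np.abs(field + sheared * np.exp(1j * bias))**2
# 'image' is brighter or darker according to d(phi)/dx, giving the familiar
# shadow-cast appearance of DIC micrographs.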
The Talbot effect, also referred to as self-imaging or lensless imaging, was originally discovered in the 1830s by H. F. Talbot. Over the years, investigators have come to understand different aspects of this phenomenon, and a theory of the Talbot effect based on classical diffraction theory has emerged which is capable of explaining the various observations. For a detailed description of the Talbot effect and related phenomena, as well as a historical perspective on the subject, the reader may consult references 3 and 4 and further references cited therein. Since many of the standard optics textbooks do not even mention the Talbot effect, it is worthwhile to bring to the reader's attention the essential features of this phenomenon.
Lensless imaging of a periodic pattern
The Talbot effect is observed when, under appropriate conditions, a beam of light is reflected from (or transmitted through) a periodic pattern. The pattern may have one-dimensional periodicity (as in traditional gratings), or it may exhibit periodicity in two dimensions (e.g., a surface relief structure or a photographic plate imprinted with identical features on a regular lattice).
In what follows we shall present the diffraction patterns obtained from a periodic array of cross-shaped apertures in an otherwise opaque screen. Because the diffraction pattern of a single aperture differs markedly from that of a periodic array of such apertures, we begin by examining the behavior of an individual aperture under coherent illumination.
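Diffraction patterns of this kind are conveniently computed with the angular-spectrum (plane-wave) method. As a simple aside, the sketch below propagates a one-dimensional amplitude grating to the Talbot distance zT = 2d²/λ, at which a periodic object reproduces its own image; the grating parameters are arbitrary.

import numpy as np

# Illustrative angular-spectrum propagation of a one-dimensional amplitude
# grating to the Talbot distance z_T = 2 d^2 / lambda, at which the periodic
# pattern reproduces itself.  Period, wavelength, and window size are arbitrary.
wavelength, d = 0.5e-6, 10e-6
n, width = 4096, 400e-6
x = np.linspace(-width / 2, width / 2, n, endpoint=False)
grating = (np.cos(2 * np.pi * x / d) > 0).astype(float)     # 50% duty cycle

def propagate(field, z):
    fx = np.fft.fftfreq(n, x[1] - x[0])
    kz = 2 * np.pi * np.sqrt((1 / wavelength**2 - fx**2).astype(complex))
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * z))

z_talbot = 2 * d**2 / wavelength
self_image = propagate(grating, z_talbot)       # intensity mimics the grating
half_image = propagate(grating, z_talbot / 2)   # image shifted by half a period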
The phenomenon of conical refraction was predicted by Sir William Rowan Hamilton in 1832 and its existence was confirmed experimentally two months later by Humphrey Lloyd. (James Clerk Maxwell was only a toddler at the time.) The success of this experiment contributed greatly to the general acceptance of Fresnel's wave theory of light.
Conical refraction has been known for nearly 170 years now, and a complete explanation based on Maxwell's electromagnetic theory has emerged, which is accessible through the published literature. The complexity of the physics involved, however, is such that it prevents us from attempting to give a simple explanation. We shall, therefore, confine our efforts to presenting a descriptive picture of internal and external conical refraction by way of computer simulations based on Maxwell's equations.
Overview
To observe internal conical refraction one must obtain a slab of biaxial birefringent crystal, such as aragonite, that has been cut with one of its optic axes perpendicular to the polished parallel surfaces of the slab (see Figure 29.1). When a collimated beam of light (say, from a HeNe laser) is directed at normal incidence towards the front facet of the slab, the beam enters the crystal and spreads out in the form of a hollow cone of light. Upon reaching the opposite facet, the beam emerges as two concentric hollow cylinders, propagating in the same direction as the original, incident beam.
The electromagnetic fields within a waveguide or a resonator cannot have arbitrary distributions. The requirements of satisfying Maxwell's equations as well as the boundary conditions specific to the waveguide (or the resonator) confine the distribution to certain shapes and forms. The electromagnetic field distributions that can be sustained within a device are known as its stable modes of oscillation.
When the device and its geometry are simple, the stable modes can be determined analytically. For complex systems and complicated geometries, however, numerical methods must be used to solve Maxwell's equations in the presence of the relevant boundary conditions. The method of Fox and Li is an elegant numerical technique that can be applied to certain waveguides and resonators in order to obtain the operating mode of the device. Instead of solving Maxwell's equations explicitly, the method of Fox and Li uses the Fresnel–Kirchhoff diffraction integral to mimic the physical process of wavefront propagation within the device, thus arriving at its stable mode of operation after several iterations.
To illustrate the method of Fox and Li we focus our attention on the confocal resonator shown in Figure 31.1(a). Let us assume that the two mirrors are aberration-free parabolas with an effective numerical aperture NA = 0.01 and focal length f = 62 500λ0 (λ0 is the vacuum wavelength of the light confined within the cavity). The clear aperture of each mirror will therefore have a diameter of 1250λ0.
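To convey the structure of the algorithm, a one-dimensional sketch in the Fresnel approximation is given below: start from an arbitrary field on one mirror, propagate it to the other mirror, apply that mirror's aperture and focusing phase, renormalize, and repeat until the profile settles down. The cavity parameters in the sketch are arbitrary and are not those of the confocal resonator of Figure 31.1.

import numpy as np

# Illustrative one-dimensional Fox-Li iteration in the Fresnel approximation:
# propagate the field from one mirror to the other, apply the mirror's aperture
# and focusing phase, renormalize, and repeat until the profile settles into
# the dominant (lowest-loss) mode.
wavelength, L, a = 1.0e-6, 1.0, 2.0e-3     # wavelength, mirror spacing, half-aperture
n = 4096
x = np.linspace(-5 * a, 5 * a, n, endpoint=False)
fx = np.fft.fftfreq(n, x[1] - x[0])
propagator = np.exp(-1j * np.pi * wavelength * L * fx**2)           # Fresnel kernel
mirror = (np.abs(x) <= a) * np.exp(-2j * np.pi * x**2 / (wavelength * L))

field = np.random.default_rng(0).standard_normal(n) + 0j            # arbitrary start
for _ in range(300):
    field = np.fft.ifft(np.fft.fft(field) * propagator)             # to the far mirror
    field = mirror * field                                          # truncate and refocus
    field /= np.sqrt(np.sum(np.abs(field)**2))                      # renormalize
# 'field' now approximates the transverse profile of the resonator's dominant mode.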
Michael Faraday (1791–1867) was born in a village near London into the family of a blacksmith. His family was too poor to keep him at school and, at the age of 13, he took a job as an errand boy in a bookshop. A year later he was apprenticed as a bookbinder for a term of seven years. Faraday was not only binding the books but was also reading many of them, which excited in him a burning interest in science.
When his term of apprenticeship in the bookshop was coming to an end, he applied for the job of assistant to Sir Humphry Davy, the celebrated chemist, whose lectures Faraday was attending during his apprenticeship. When Davy asked the advice of one of the governors of the Royal Institution of Great Britain about the employment of a young bookbinder, the man said: “Let him wash bottles! If he is any good he will accept the work; if he refuses, he is not good for anything.” Faraday accepted, and remained with the Royal Institution for the next fifty years, first as Davy's assistant, then as his collaborator, and finally, after Davy's death, as his successor. It has been said that Faraday was Davy's greatest discovery.
In 1823 Faraday liquefied chlorine and in 1825 he discovered the substance known as benzene. He also did significant work in electrochemistry, discovering the laws of electrolysis. However, his greatest work was with electricity.
When a light field interacts with structures that have complex geometric features comparable in size to the wavelength of the light, it is not permissible to invoke the assumptions of the classical diffraction theory, which simplify the problem and allow for approximate solutions. For such cases, direct numerical solutions of the governing equations are sought through approximating the continuous time and space derivatives by the appropriate difference operators. The Finite Difference Time Domain (FDTD) method discretizes Maxwell's equations by using a central difference operator in both the time and space variables. The E- and B-fields are then represented by their discrete values on the spatial grid, and are advanced in time in steps of Δt. The numerical solution thus obtained to Maxwell's equations (in conjunction with the relevant constitutive relations) provides a highly reliable representation of the electromagnetic field distribution in the space-time region under consideration.
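The skeleton of such a discretization is shown below for the simplest possible case, a one-dimensional free-space grid in normalized units with a leapfrog update of Ey and Bz; it is a toy illustration of the central-difference scheme, not the three-dimensional code used for the simulations of this chapter.

import numpy as np

# Illustrative one-dimensional FDTD skeleton in free space (normalized units,
# c = 1): Maxwell's curl equations are discretized with central differences on
# a staggered grid and advanced in time in leapfrog fashion.
nx, nt = 400, 600
dx = 1.0
dt = 0.5 * dx                              # satisfies the Courant stability limit
ey = np.zeros(nx)                          # E_y sampled at integer grid points
bz = np.zeros(nx - 1)                      # B_z sampled at half-integer points

for n in range(nt):
    bz -= (dt / dx) * (ey[1:] - ey[:-1])             # dB/dt = -curl E
    ey[1:-1] -= (dt / dx) * (bz[1:] - bz[:-1])       # dE/dt = c^2 curl B
    ey[nx // 4] += np.exp(-((n - 60) / 20.0)**2)     # soft Gaussian-pulse source
# 'ey' now holds a snapshot of the propagating pulse; the grid edges in this
# sketch simply reflect (no absorbing boundary condition is implemented).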
This chapter presents examples of application of the FDTD method to problems involving the interaction between a focused beam of light and certain subwavelength structures of practical interest. A few general remarks concerning the nature of the FDTD method appear in the next section. This is followed by a description of the simulated system and two examples in which comparison is possible between the FDTD method and an alternative method of calculation. We then present simulation results for the case of a focused beam interacting with small pits and apertures in a thin film supported by a transparent substrate.
The Sagnac effect pertains to the relative phase shift between two beams of light that travel on an identical path in opposite directions within a rotating frame. Modern fiber-optic gyroscopes (Sagnac interferometers) used for navigation are based on this effect, allowing highly accurate measurements of rotation rates down to about 10⁻⁴–10⁻⁵ degrees per hour. Georges Sagnac (1869–1926) was the first to perform a ring interferometry experiment in 1913 aimed at observing the correlation of angular velocity and optical phase-shift. (An experiment conducted in 1911 by Francis Harress, attempting to measure the Fresnel drag of light propagating through rotating glass, was later recognized as actually constituting a Sagnac experiment; Harress had ascribed the observed “unexpected bias” to some other factor.) An ambitious ring interferometry experiment was set up by Albert Michelson and Henry Gale in 1926 to determine whether the Earth's rotation has an effect on the propagation of light in its vicinity. The Michelson–Gale interferometer with a 1.9 km perimeter was large enough to detect the rotation of the Earth, confirming its known value of angular velocity (obtained from astronomical observations). The Michelson–Gale ring interferometer was not calibrated by comparison with an outside reference, an impossible task given that the setup was fixed to the Earth.
Figure 14.1 shows the general design of a triangular Sagnac interferometer consisting of a light source, a beam-splitter S, mirrors M1, M2, and an observation plane, mounted on a base that rotates at a constant angular velocity Ω around a fixed axis.
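The magnitude of the effect follows from the Sagnac formula Δφ = 8πAΩ/(λc), where A is the area enclosed by the light path and Ω the component of the angular velocity normal to the loop; the numbers in the short evaluation below are arbitrary, illustrative choices.

import math

# Illustrative evaluation of the Sagnac phase shift for a single loop,
# delta_phi = 8 * pi * A * Omega / (lambda * c), with A the enclosed area and
# Omega the rotation-rate component normal to the loop.
c = 2.998e8                        # speed of light, m/s
wavelength = 633e-9                # HeNe wavelength, m
area = 0.1                         # enclosed area, m^2 (arbitrary example)
omega = 7.29e-5                    # Earth's rotation rate, rad/s
delta_phi = 8 * math.pi * area * omega / (wavelength * c)
print(delta_phi)                   # ~1e-3 radian; hence large areas or many fiber turns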
In the 1920s Vasco Ronchi developed the well-known method of testing optical systems now named after him. The essential features of the Ronchi test may be described by reference to Figure 44.1. A lens (or more generally, an optical system consisting of a number of lenses and mirrors) is placed in the position of the “object under test”. The lens is then illuminated with a beam of light, which, for the purposes of the present chapter, will be assumed to be coherent and quasi-monochromatic. These restrictions on the beam may be substantially relaxed in practice.
The lens brings the incident beam to a focus in the vicinity of a diffraction grating, which is placed perpendicular to the optical axis, i.e., the Z-axis. The grating, also referred to as a Ronchi ruling, may be as simple as a low-frequency wire grid or as sophisticated as a modern short-pitched, phase/amplitude grating. The position of the grating should be adjustable in the vicinity of focus, so that it may be shifted back and forth along the optical axis. The grating breaks up the incident beam into multiple diffracted orders, which will subsequently propagate along Z and reach the lens labeled “pupil relay” in Figure 44.1.
The pupil relay may simply be the lens of the eye, which projects the exit pupil of the object under test onto the retina of the observer.
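When the zeroth and first diffracted orders overlap in the pupil, the Ronchi test behaves much like a lateral shearing interferometer, with the fringes tracing the slope of the aberration function along the shear direction. The fragment below illustrates this two-beam picture for an assumed aberration W(x, y); the shear fraction and the aberration coefficient are arbitrary.

import numpy as np

# Illustrative two-beam model of a Ronchigram: interfere two laterally sheared
# copies of the pupil function exp(i*2*pi*W), where W(x, y) is the wavefront
# aberration in units of the wavelength.  Shear and aberration are arbitrary.
n, shear_fraction = 512, 0.1
x = np.linspace(-1, 1, n)                       # normalized pupil coordinates
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1).astype(float)
W = 0.8 * (X**2 + Y**2)**2                      # e.g., spherical aberration (in waves)

field = pupil * np.exp(2j * np.pi * W)
shear = int(shear_fraction * n / 2)
sheared = np.roll(field, shear, axis=1)
ronchigram = np.abs(field + sheared)**2         # fringes map dW/dx across the pupil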
The possibility of self-trapping of optical beams due to an intensity-dependent refractive index was recognized in the early days of nonlinear optics. However, it was soon realized that in a three-dimensional medium, in which light diffracts in two transverse dimensions, self-trapping is not stable and leads to catastrophic collapse and filamentation. Stable self-trapping was then found to be feasible in two-dimensional media, in which the optical beam diffracts only in one transverse direction. Subsequently, the connection between self-trapping and soliton theory, and a complete analogy between spatial and temporal solitons were established. Whereas the formation of temporal solitons requires a balance between dispersion and nonlinear phase modulation, spatial solitons owe their existence to the balancing of diffraction with wavefront curvature induced by the nonlinear refractive index profile of the propagation medium.
To observe a spatial soliton one must limit diffraction to one transverse direction, which can be achieved in a planar optical waveguide. The first experiments of this type were conducted using a multimode liquid waveguide (CS₂ confined between a pair of glass slides). Formation of spatial optical solitons in single-mode planar glass waveguides was reported shortly afterwards.
Kerr nonlinearity
The simplest nonlinearity capable of producing self-trapping (leading to soliton formation in a planar waveguide) is a Kerr nonlinearity, obtained when the refractive index of the medium has an intensity-dependent term of the form Δn = n₂I, where I = |E|² is the electric field intensity of the optical beam.
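In suitably normalized units the slowly varying envelope then obeys the (1+1)-dimensional nonlinear Schrödinger equation, i ∂u/∂z + (1/2) ∂²u/∂x² + |u|²u = 0, whose fundamental bright soliton is u(x, 0) = sech(x). The split-step sketch below (arbitrary grid and step sizes) simply checks that this profile propagates without spreading, illustrating the balance between diffraction and the Kerr term.

import numpy as np

# Split-step Fourier sketch of the (1+1)-dimensional nonlinear Schrodinger
# equation in normalized units,  i u_z + (1/2) u_xx + |u|^2 u = 0.  The
# fundamental soliton u(x, 0) = sech(x) should propagate without spreading.
n, half_width, dz, steps = 1024, 20.0, 0.01, 2000
x = np.linspace(-half_width, half_width, n, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(n, x[1] - x[0])
u = 1.0 / np.cosh(x)                               # fundamental soliton profile

half_step = np.exp(-0.25j * kx**2 * dz)            # half step of diffraction
for _ in range(steps):
    u = np.fft.ifft(np.fft.fft(u) * half_step)     # diffraction, half step
    u = u * np.exp(1j * np.abs(u)**2 * dz)         # Kerr phase, full step
    u = np.fft.ifft(np.fft.fft(u) * half_step)     # diffraction, half step
# np.max(np.abs(u)) stays close to 1: the beam neither spreads nor collapses.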
The common threads that run through this book are the classical phenomena of diffraction, interference, and polarization. Although the reader is expected to be generally familiar with these electromagnetic phenomena, the book does cover some of the principles of classical optics in the early chapters. The basic ideas of diffraction and Fourier optics are introduced in chapters 1 through 4; this introduction is followed by a detailed discussion of spatial and temporal coherence and of partial polarization in chapters 5 through 8. These concepts are then used throughout the book to explain phenomena that are either of technological import or significant in their own right as natural occurrences that deserve attention.
Each chapter is concerned with a single topic (e.g., surface plasmons, diffraction gratings, evanescent coupling, photolithography) and attempts to develop an understanding of this subject through the use of pictures, examples, numerical simulations, and logical argument. The reader already familiar with a particular topic is likely to learn more about its applications, to appreciate better the physics behind some of the formulas he or she may have previously encountered, and perhaps even to learn a thing or two about the nuances of the subject. For the reader who is new to the field, our presentation aims to provide an introduction, an intuitive feel for the physical and/or technological issues involved, and, hopefully, motivation for digging deeper by consulting the cited references.
The goal of ellipsometry is to determine the optical and structural constants of thin films and flat surfaces from the measurements of the ellipse of polarization in reflected or transmitted light. In the absence of birefringence and optical activity a flat surface, a single-layer film, or a thin-film stack may be characterized by the complex reflection coefficients rp = |rp|exp(iφrp) and rs = |rs|exp(iφrs) for p- and s-polarized incident beams, as well as by the corresponding transmission coefficients tp = |tp|exp(iφtp) and ts = |ts|exp(iφts).
Strictly speaking, an ellipsometer is a device that measures the complex ratios rp/rs and/or tp/ts. The amplitude ratios are usually deduced from the angles ψr and ψt, which are defined by tan ψr = |rp|/|rs| and tan ψt = |tp|/|ts|. In practice, measuring the individual reflectivities Rp = |rp|², Rs = |rs|² or transmissivities Tp = |tp|², Ts = |ts|² does not require much additional effort. Measuring the individual phases, of course, is difficult, but the relative phase angles φrp − φrs and φtp − φts can be readily obtained by ellipsometric methods. The values of Rp, Rs, φrp − φrs, ψr, Tp, Ts, φtp − φts, and ψt may be measured as functions of the angle of incidence, θ, or as functions of the wavelength of the light, λ, or both.
The results of ellipsometric measurements are fed to a computer program that searches the space of unknown parameters to find agreement between the measured data points and theoretical calculations.
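As a minimal numerical illustration (a bare substrate rather than a film stack, with an assumed complex index), the snippet below computes rp and rs from Fresnel's formulas and extracts the ellipsometric angle ψ and the relative phase Δ = φrp − φrs.

import numpy as np

# Illustrative single-interface ellipsometry calculation: given an assumed
# complex refractive index for the substrate and an angle of incidence, compute
# rp and rs from Fresnel's formulas, then tan(psi) = |rp|/|rs| and
# Delta = phi_rp - phi_rs.
n_ambient, n_substrate = 1.0, 3.87 + 0.02j        # silicon-like index (assumed value)
theta = np.deg2rad(70.0)                          # angle of incidence
cos_i = np.cos(theta)
cos_t = np.sqrt(1 - (n_ambient * np.sin(theta) / n_substrate)**2)
rs = (n_ambient * cos_i - n_substrate * cos_t) / (n_ambient * cos_i + n_substrate * cos_t)
rp = (n_substrate * cos_i - n_ambient * cos_t) / (n_substrate * cos_i + n_ambient * cos_t)
psi = np.degrees(np.arctan(np.abs(rp) / np.abs(rs)))
delta = np.degrees(np.angle(rp / rs))             # relative phase phi_rp - phi_rs
print(psi, delta)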