Light embraces the most fascinating spectrum of electromagnetic radiation. This is mainly due to the fact that the energy of light quanta (photons) lies within the energy range of electronic transitions in matter. This gives us the beauty of color and is the reason why our eyes adapted to sense the optical spectrum.
Light is also fascinating because it manifests itself in the forms of waves and particles. In no other range of the electromagnetic spectrum are we more confronted with the wave-particle duality than in the optical regime. While long wavelength radiation (radiofrequencies, microwaves) is well described by wave theory, short wavelength radiation (X-rays) exhibits mostly particle properties. The two worlds meet in the optical regime.
To describe optical radiation in nano-optics it is mostly sufficient to adopt the wave picture. This allows us to use classical field theory based on Maxwell's equations. Of course, in nano-optics the systems with which the light fields interact are small (single molecules, quantum dots), which necessitates a quantum description of the material properties. Thus, in most cases we can use the framework of semiclassical theory, which combines the classical picture of fields and the quantum picture of matter. However, occasionally, we have to go beyond the semiclassical description. For example, the photons emitted by a quantum system can obey non-classical photon statistics in the form of photon antibunching (no two photons arriving simultaneously).
A key problem in nano-optics is the determination of electromagnetic field distributions near nanoscale structures and the associated radiation properties. A solid theoretical understanding of field distributions holds promise for new, optimized designs of near-field optical devices, in particular by exploitation of field-enhancement effects and favorable detection schemes. Calculations of field distributions are also necessary for image-reconstruction purposes. Fields near nanoscale structures often have to be reconstructed from experimentally accessible far-field data. However, most commonly the inverse scattering problem cannot be solved in a unique way, and calculations of field distributions are needed in order to provide prior knowledge about source and scattering objects and to restrict the set of possible solutions.
Analytical solutions of Maxwell's equations provide a good theoretical understanding, but can be obtained for simple problems only. Other problems have to be strongly simplified. A pure numerical analysis allows us to handle complex problems by discretization of space and time, but computational requirements (usually given by CPU time and memory) limit the size of the problem, and the accuracy of the results is often unknown. The advantage of pure numerical methods, such as the finite-difference time-domain (FDTD) method and the finite-element (FE) method, is the ease of implementation. We do not review these pure numerical techniques since they are well documented in the literature. Instead we review two commonly used semi-analytical methods in nano-optics: the multiple-multipole method (MMP) and the volume-integral method.
The interaction of metals with electromagnetic radiation is largely dictated by their free conduction electrons. According to the Drude model, the free electrons oscillate 180° out of phase relative to the driving electric field. As a consequence, most metals possess a negative dielectric constant at optical frequencies, which causes, for example, a very high reflectivity. Furthermore, at optical frequencies the metal's free-electron gas can sustain surface and volume charge-density oscillations, called plasmons, with distinct resonance frequencies. The existence of plasmons is characteristic of the interaction of metal nanostructures with light at optical frequencies. Similar behavior cannot be simply reproduced in other spectral ranges using the scale invariance of Maxwell's equations since the material parameters change considerably with frequency. Specifically, this means that model experiments with, for instance, microwaves and correspondingly larger metal structures cannot replace experiments with metal nanostructures at optical frequencies.
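The negative dielectric constant predicted by the Drude model can be sketched in a few lines. The snippet below is illustrative only: the plasma frequency and collision rate are rough, gold-like values assumed for the example, not parameters taken from this text.

```python
import numpy as np

# Drude model dielectric function:
#   eps(w) = 1 - wp^2 / (w^2 + i*gamma*w)
# Parameter values are assumed, roughly gold-like, for illustration.
wp = 1.37e16      # plasma frequency, rad/s (assumed)
gamma = 1.0e14    # collision rate, rad/s (assumed)

def drude_eps(omega):
    """Drude dielectric function at angular frequency omega (rad/s)."""
    return 1.0 - wp**2 / (omega**2 + 1j * gamma * omega)

# Green light, lambda = 532 nm  ->  omega = 2*pi*c/lambda
c = 2.998e8
omega_opt = 2 * np.pi * c / 532e-9
print(drude_eps(omega_opt))   # real part is negative at optical frequencies
print(drude_eps(1e17))        # well above wp (X-ray regime) eps approaches 1
```

Below the plasma frequency the real part of ε is negative, which is the origin of the high reflectivity mentioned above; far above it, ε tends to 1, consistent with the predominantly particle-like behavior of X-rays.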
The surface charge-density oscillations associated with surface plasmons at the interface between a metal and a dielectric can give rise to strongly enhanced optical near-fields, which are spatially confined near the metal surface. Similarly, if the electron gas is confined in three dimensions, as in the case of a small particle, the overall displacement of the electrons with respect to the positively charged lattice leads to a restoring force, which in turn gives rise to specific particle-plasmon resonances depending on the geometry of the particle. In particles of suitable (usually pointed) shape, localized charge accumulations that are accompanied by strongly enhanced optical fields can occur.
In the history of science, the first applications of optical microscopes and telescopes to investigate nature mark the beginnings of new eras. Galileo Galilei used a telescope to see for the first time craters and mountains on a celestial body, the Moon, and also discovered the four largest satellites of Jupiter. With this he opened the field of optical astronomy. Robert Hooke and Antony van Leeuwenhoek used early optical microscopes to observe certain features of plant tissue that were called "cells," and to observe microscopic organisms, such as bacteria and protozoans, thus marking the beginning of optical biology. The newly developed instrumentation enabled the observation of fascinating phenomena not directly accessible to human senses. Naturally, the question was raised of whether structures not detectable within the range of normal vision should be accepted as reality at all. Today, we have accepted that, in modern physics, scientific proofs are verified by indirect measurements, and the underlying laws have often been established on the basis of indirect observations. It seems that as modern science progresses it withholds more and more findings from our natural senses. In this context, the use of optical instrumentation excels among ways to study nature. This is because our ability to perceive electromagnetic waves at optical frequencies means that our brains are accustomed to interpreting phenomena associated with light, even if the structures that are observed are magnified a thousandfold. This intuitive understanding is among the most important features that make light and optical processes so attractive as a means to reveal physical laws and relationships.
As early as 1619 Johannes Kepler suggested that the mechanical effect of light might be responsible for the deflection of the tails of comets entering our Solar System. The classical Maxwell theory showed in 1873 that the radiation field carries with it momentum and that "light pressure" is exerted on illuminated objects. In 1905 Einstein introduced the concept of the photon and showed that energy transfer between light and matter occurs in discrete quanta. Momentum and energy conservation was found to be of great importance in microscopic events. Discrete momentum transfer between photons (X-rays) and other particles (electrons) was experimentally demonstrated by Compton in 1925 and the recoil momentum transferred from photons to atoms was observed by Frisch in 1933 [1]. Important studies on the action of photons on neutral atoms were carried out in the 1970s by Letokhov and other researchers in the USSR and by Ashkin's group at the Bell Laboratories in the USA. The latter group proposed bending and focusing of atomic beams and trapping of atoms in focused laser beams. Later work by Ashkin and coworkers led to the development of "optical tweezers." These devices allow optical trapping and manipulation of macroscopic particles and living cells with typical sizes in the range of 0.1–10 μm [2, 3]. Milliwatts of laser power produce piconewtons of force. Owing to the high field gradients of evanescent waves, stronger forces are to be expected in optical near-fields.
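The claim that milliwatts of laser power produce piconewtons of force follows from momentum conservation alone: a beam of power P that is fully absorbed transfers momentum at the rate F = P/c. A minimal back-of-the-envelope check:

```python
# Radiation force of a fully absorbed beam: F = P / c.
# (A perfectly reflecting object would feel twice this force.)
c = 2.998e8    # speed of light, m/s
P = 1e-3       # laser power, 1 mW
F = P / c      # radiation force, N
print(F)       # ~3.3e-12 N, i.e. a few piconewtons
```

This order-of-magnitude estimate matches the milliwatt-to-piconewton correspondence quoted in the text; gradient forces in tightly focused beams or evanescent fields can exceed this simple pressure estimate.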
Localization refers to the precision with which the position of an object can be defined. Spatial resolution, on the other hand, is a measure of the ability to distinguish two separated point-like objects from a single object. The diffraction limit implies that optical resolution is ultimately limited by the wavelength of light. Before the advent of near-field optics it was believed that the diffraction limit imposes a hard boundary and that physical laws strictly prohibit resolution significantly better than λ/2. It was then found that this limit is not as strict as assumed and that access to evanescent modes of the spatial spectrum offers a direct route to overcome the diffraction limit. However, further critical analysis of the diffraction limit revealed that “super-resolution” can also be obtained by pure far-field imaging under certain constraints. In this chapter we analyze the diffraction limit and discuss the principles of different imaging modes with resolutions near or beyond the diffraction limit.
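The λ/2 boundary discussed above is usually quantified by the Abbe limit, Δx = λ/(2 NA), where NA is the numerical aperture of the imaging system. A minimal sketch, with illustrative values chosen here (green light and an assumed oil-immersion objective of NA = 1.4):

```python
# Abbe diffraction limit: minimum resolvable separation for a
# conventional far-field microscope.
def abbe_limit(wavelength, na):
    """Delta_x = lambda / (2 * NA), in the same units as wavelength."""
    return wavelength / (2.0 * na)

# Green light (532 nm) with an assumed high-NA objective (NA = 1.4)
d = abbe_limit(532e-9, 1.4)
print(d)   # ~190 nm
```

Even with the highest practical numerical apertures, far-field resolution at visible wavelengths remains limited to roughly 200 nm, which is the motivation for the near-field and "super-resolution" approaches discussed in this chapter.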
The point-spread function
The point-spread function is a measure of the resolving power of an optical system. The narrower the point-spread function the better the resolution will be. As the name implies, the point-spread function defines the spread of a point source. If we have a radiating point source then the image of that source will appear to have a finite size. This broadening is a direct consequence of spatial filtering. A point in space is characterized by a delta function that has an infinite spectrum of spatial frequencies kx and ky.
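The spatial-filtering picture above can be sketched numerically: a delta function has a flat spectrum of spatial frequencies, and discarding everything beyond a cutoff |k| ≤ k_max spreads the point over a finite spot. Grid sizes and the cutoff below are arbitrary illustrative choices.

```python
import numpy as np

# Point-spread function as spatial filtering: low-pass filter the flat
# spectrum of a point source and observe the broadened image.
n = 256
x = np.arange(n) - n // 2
kx, ky = np.meshgrid(x, x)        # spatial-frequency grid (arb. units)
k_max = 16                        # cutoff set by the system's aperture (assumed)

delta = np.zeros((n, n))
delta[n // 2, n // 2] = 1.0       # point source

spectrum = np.fft.fftshift(np.fft.fft2(delta))
spectrum[kx**2 + ky**2 > k_max**2] = 0.0    # discard high spatial frequencies
image = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# The image of the point is now spread over many pixels.
print(np.sum(image > image.max() / 2))      # rough area of the central spot
```

The filtered image is an Airy-like pattern whose central-spot size scales as 1/k_max: the smaller the passband of spatial frequencies, the broader the point-spread function and the worse the resolution.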
The problem of dipole radiation in or near planar layered media is of significance to many fields of study. It is encountered in antenna theory, single-molecule spectroscopy, cavity quantum electrodynamics, integrated optics, circuit design (microstrips), and surface-contamination control. The relevant theory was also applied to explain the strongly enhanced Raman effect of adsorbed molecules on noble metal surfaces, and in surface science and electrochemistry for the study of optical properties of molecular systems adsorbed on solid surfaces. Detailed literature on the latter topic is given in Ref. [1]. In the context of nano-optics, dipoles close to a planar interface have been considered by various authors to simulate tiny light sources and small scattering particles [2]. The acoustic analog is also applied to a number of problems such as seismic investigations and ultrasonic detection of defects in materials [3].
In his original paper [4], in 1909, Sommerfeld developed a theory for a radiating dipole oriented vertically above a planar and lossy ground. He found two different asymptotic solutions: space waves (spherical waves) and surface waves. The latter had already been investigated by Zenneck [5]. Sommerfeld concluded that surface waves account for long-distance radio-wave transmission because of their slower radial decay along the Earth's surface compared with that of space waves. Later, when space waves were found to be reflected by the ionosphere, the contrary was established. Nevertheless, Sommerfeld's theory formed the basis for all subsequent investigations.
In this appendix we state the asymptotic far-field Green functions for a planarly layered medium. It is assumed that the source point r0 = (x0, y0, z0) is in the upper half-space (z > 0). The field is evaluated at a point r = (x, y, z) in the far-zone, i.e. r ≫ λ. The optical properties of the upper half-space and the lower half-space are characterized by ε1, μ1 and εn, μn, respectively. The planarly layered medium in between the two half-spaces is characterized by the generalized Fresnel reflection and transmission coefficients. We choose a coordinate system with origin on the topmost surface of the layered medium with the z-axis perpendicular to the interfaces. In this case, z0 denotes the height of the point source relative to the topmost layer. In the upper half-space, the asymptotic dyadic Green function is defined as
where p is the dipole moment of a dipole located at r0 and G0 and Gref are the primary and reflected parts of the Green function. In the lower half-space we define
with Gtr being the transmitted part of the Green function. The asymptotic Green functions can be derived by using the far-field forms of the angular spectrum representation.
The primary Green function in the far-zone is found to be
The reflected part of the Green function in the far-zone is
where the potentials are determined in terms of the generalized reflection coefficients of the layered structure as
The transmitted part of the Green function in the far-zone is
where δ denotes the overall thickness of the layered structure.
This book provides an elementary introduction to the subject of quantum optics, the study of the quantum mechanical nature of light and its interaction with matter. The presentation is almost entirely concerned with the quantized electromagnetic field. Topics covered include single-mode field quantization in a cavity, quantization of multimode fields, quantum phase, coherent states, quasi-probability distribution in phase space, atom-field interactions, the Jaynes-Cummings model, quantum coherence theory, beam splitters and interferometers, dissipative interactions, nonclassical field states with squeezing, 'Schrödinger cat' states, tests of local realism with entangled photons from down-conversion, experimental realizations of cavity quantum electrodynamics, trapped ions, decoherence, and some applications to quantum information processing, particularly quantum cryptography. The book contains many homework problems and an extensive bibliography. This text is designed for upper-level undergraduates taking courses in quantum optics who have already taken a course in quantum mechanics, and for first and second year graduate students.
Covering a number of important subjects in quantum optics, this textbook is an excellent introduction for advanced undergraduate and beginning graduate students, familiarizing readers with the basic concepts and formalism as well as the most recent advances. The first part of the textbook covers the semi-classical approach where matter is quantized, but light is not. It describes significant phenomena in quantum optics, including the principles of lasers. The second part is devoted to the full quantum description of light and its interaction with matter, covering topics such as spontaneous emission, and classical and non-classical states of light. An overview of photon entanglement and applications to quantum information is also given. In the third part, non-linear optics and laser cooling of atoms are presented, where using both approaches allows for a comprehensive description. Each chapter describes basic concepts in detail, and more specific concepts and phenomena are presented in 'complements'.
Holographic and speckle interferometry are optical techniques which use lasers to make non-contacting, full-field measurements with a sensitivity of the order of the wavelength of light on optically rough (i.e. non-mirrored) surfaces. They may be used to measure static or dynamic displacements, the shape of objects, and refractive index variations of transparent media. As such, these techniques have been applied to the solution of a wide range of problems in strain and vibrational analysis, non-destructive testing (NDT), component inspection, design analysis and fluid-flow visualisation. This book provides a self-contained, unified, theoretical analysis of the basic principles and associated opto-electronic techniques (for example Electronic Speckle Pattern Interferometry). In addition, a detailed discussion of experimental design and practical application to the solution of physical problems is presented. In this new edition, the authors have taken the opportunity to include a much more coherent description of more than twenty individual case studies that are representative of the main uses to which the techniques are put. The Bibliography has also been brought up to date.
Inverse problems are of interest and importance across many branches of physics, mathematics, engineering and medical imaging. In this text, the foundations of imaging and wavefield inversion are presented in a clear and systematic way. The necessary theory is gradually developed throughout the book, progressing from simple wave equation based models to vector wave models. By combining theory with numerous MATLAB based examples, the author promotes a complete understanding of the material and establishes a basis for real world applications. Key topics of discussion include the derivation of solutions to the inhomogeneous and homogeneous Helmholtz equations using Green function techniques; the propagation and scattering of waves in homogeneous and inhomogeneous backgrounds; and the concept of field time reversal. Bridging the gap between mathematics and physics, this multidisciplinary book will appeal to graduate students and researchers alike. Additional resources including MATLAB codes and solutions are available online at www.cambridge.org/9780521119740.
The “solutions” of boundary-value problems presented in Section 2.8 are merely formal in that the appropriate Green functions must be found, and finding these Green functions is extremely difficult except in certain special cases such as the Rayleigh-Sommerfeld problem. Moreover, even when the appropriate Green functions can be found they are often expressed in the form of superpositions of elementary solutions of the homogeneous Helmholtz equation that are especially suited for dealing with boundaries of a specific shape. One example of a set of such elementary solutions is the plane waves which arise from applying the method of separation of variables to the homogeneous Helmholtz equation using a Cartesian coordinate system and, as we will see below, form a complete set of basis functions for fitting boundary-value data specified on plane surfaces; e.g., for the RS problems. However, the plane waves have limited utility in solving boundary-value problems involving non-planar boundaries such as spherical boundaries. In that case the method of separation of variables is applied to the Helmholtz equation using a spherical polar coordinate system and the so-called multipole fields arise as a set of elementary solutions that form a basis for fitting boundary-value data specified on spherical boundaries. In this chapter we will briefly review the method of separation of variables for the Helmholtz equation and obtain the resulting eigenfunctions for the important cases of Cartesian, spherical polar and cylindrical coordinate systems.
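That plane waves are elementary solutions of the homogeneous Helmholtz equation can be checked numerically with a finite-difference Laplacian. The wavevector, step size and test point below are arbitrary choices for this sketch.

```python
import numpy as np

# Sanity check: u = exp(i k . r) satisfies  laplacian(u) + |k|^2 u = 0.
kx, ky = 3.0, 4.0                 # assumed wavevector, |k|^2 = 25
k2 = kx**2 + ky**2
h = 1e-3                          # finite-difference step

def u(x, y):
    """Plane-wave solution of the 2D homogeneous Helmholtz equation."""
    return np.exp(1j * (kx * x + ky * y))

x0, y0 = 0.3, 0.7                 # arbitrary test point
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2    # 5-point Laplacian stencil
residual = abs(lap + k2 * u(x0, y0))
print(residual)                   # ~0 up to discretization error
```

The same separation-of-variables logic in spherical polar coordinates produces the multipole fields discussed in this chapter, which play the role for spherical boundaries that plane waves play for planar ones.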
The formulas Eq. (1.33) of Chapter 1 represent the solution to the radiation problem in a non-dispersive medium governed by the wave equation; i.e., they give the radiated field u+(r, t) in terms of a known source q(r, t). These formulas were generalized to dispersive media in Chapter 2, where the radiation problem was solved directly in the frequency domain for a known source embedded in a uniform dispersive background medium. The inverse source problem (ISP), as its name indicates, is the inverse to the radiation problem, and in this problem one seeks the source q(r, t) from knowledge of its radiated field u+(r, t). The question of what applications require a solution to an inverse source problem naturally arises. There are basically two such applications that consist of (i) imaging (reconstructing) the interior of a volume source from observations of the field radiated by the source and (ii) designing a volume source to act as a multi-dimensional antenna to radiate a prescribed field. In the first application actual field measurements are employed, thereby generating data that are then used to “solve” the ISP and thus “reconstruct” the interior of the source, whereas in the second application desired field data are used to “design” a source that will generate those data. Regarding the ISP, the two applications are essentially identical, differing only in emphasis; in application (i) we have to contend with measurement error and noisy data, whereas in application (ii) we have to contend with inconsistencies between the desired data and the constraints required of the source (antenna).
In the radiation problem treated in Chapters 1 and 2 a “source” q(r, t) in the time domain or Q(r, ω) in the frequency domain radiated a wavefield that satisfied either the inhomogeneous wave equation in the time domain or the inhomogeneous Helmholtz equation in the frequency domain. In either case the solution to the radiation problem was easily obtained in the form of a convolution of the given source function with the causal Green function of the wave or Helmholtz equation. A key point concerning the radiation problem is that the source to the radiated field is assumed to be known (specified) and is assumed to be independent of the field that it radiates. Such sources are sometimes referred to as “primary” sources since the mechanism or process that created them is unknown or, at least, unimportant as regards the field that they radiate.
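The convolution structure of the radiation problem can be sketched in the frequency domain, where the outgoing Green function of the Helmholtz equation is G(r, r′) = exp(ik|r − r′|)/(4π|r − r′|) and the radiated field is a superposition of Green-function contributions weighted by the source. The source geometry and strengths below are arbitrary illustrations, not data from the text.

```python
import numpy as np

# Frequency-domain radiation problem: field of a known, discretized
# volume source as a superposition over the outgoing Green function.
k = 2 * np.pi / 500e-9            # wavenumber for an assumed lambda = 500 nm

def green(r, rp):
    """Outgoing free-space Green function of the Helmholtz equation."""
    d = np.linalg.norm(np.asarray(r, dtype=float) - np.asarray(rp, dtype=float))
    return np.exp(1j * k * d) / (4 * np.pi * d)

# Discretized "primary" source: a few point contributions with weights
sources = [((0.0, 0.0, 0.0), 1.0),
           ((100e-9, 0.0, 0.0), 0.5)]

def radiated_field(r):
    """Superpose the Green-function contribution of every source point."""
    return sum(q * green(r, rp) for rp, q in sources)

u = radiated_field((0.0, 0.0, 1e-6))   # field 1 micron above the source
print(abs(u))
```

The key point of the text is visible in the code: the source weights are specified in advance and do not depend on the field they radiate, which is what distinguishes this "primary"-source problem from the scattering problems with induced sources treated next.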
In this chapter we will also encounter the radiation problem, but with sources that are created by the interaction of a propagating wave incident on a physical obstacle or inhomogeneous region of space. These new types of sources are referred to as “induced” or “secondary” sources and the problem of computing the field that they radiate given the incident wave and a model for the field-obstacle interaction is called the scattering problem. We deal with two classes of scattering problem in this book: (i) scattering from so-called “penetrable” scatterers, where the incident wave penetrates into the interior of the obstacle so that the resulting induced source radiates as a conventional volume source of the type treated in earlier chapters; and (ii) scattering from non-penetrable scatterers, where the interaction of the incident wave with the obstacle occurs only over the object's surface.