As mentioned in Chapter 1, holographic imaging was originally developed in an attempt to obtain higher resolution in microscopy. Equations (3.20) and (3.21) show that it is possible to obtain a magnified image if different wavelengths are used to record a hologram and reconstruct the image, or if the hologram is illuminated with a wave having a different curvature from the reference wave used to record it. However, neither of these techniques has found much use, in the first instance because of the limited range of coherent laser wavelengths available, and, in the second, because of problems with image aberrations [Leith & Upatnieks, 1965; Leith, Upatnieks & Haines, 1965].
The most successful applications of holography to microscopy have been with systems in which holography is combined with conventional microscopy. In one approach, a hologram is recorded of the magnified real image of the specimen formed by the objective of a microscope, and the reconstructed image is viewed through the eyepiece [van Ligten & Osterberg, 1966]. While this technique offers no advantages for ordinary subjects, it is extremely useful for phase and interference microscopy [Snow & Vandewarker, 1968]. In another, a hologram is recorded of the object, and the reconstructed real image is examined with a conventional microscope. This technique is particularly well adapted to the study of dynamic three-dimensional particle fields, as described in the next section.
Of the three phenomena that result from the wavelike nature of light – polarization, interference, and diffraction – the third is the most puzzling. It does not lend itself to intuitive explanation, since intuition suggests that light propagates in straight lines. Diffraction, however, allows light, under certain conditions, to travel “around corners.” Because of this effect, light may be detected at points that could not be reached by straight rays. This effect also prevents the indefinite propagation of collimated beams: after a certain distance, a collimated beam invariably appears to diverge. Similarly, when a focusing lens designed using considerations of geometrical optics is employed to focus radiation, the spot size at the focus cannot be reduced below a well-defined limit. In these examples, diffraction is seen to limit the range of application of many optical devices. Thus, imaging resolution is reduced by the diffraction limits of lenses, power delivery by collimated laser beams is limited by their divergence, and the use of masks for processing semiconductor chips with photolithographic techniques is limited by diffraction from the minute patterns of the masks.
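The focusing limit mentioned above can be illustrated with the familiar Airy-disk result for an ideal lens, d = 2.44 λ f / D. The following sketch is not from the text; the numbers (a He-Ne wavelength and a hypothetical f = 100 mm lens) are purely illustrative.

```python
import math

def diffraction_limited_spot(wavelength_m, focal_length_m, aperture_m):
    """Airy-disk diameter at the focus of an ideal lens.

    Uses the standard far-field result d = 2.44 * lambda * f / D,
    i.e. the diameter out to the first dark ring of the Airy pattern.
    """
    return 2.44 * wavelength_m * focal_length_m / aperture_m

# Example: a 633 nm He-Ne beam focused by an f = 100 mm lens
# filling a 10 mm aperture.
spot = diffraction_limited_spot(633e-9, 0.100, 0.010)
print(f"diffraction-limited spot diameter: {spot * 1e6:.1f} um")
```

However well corrected the lens, geometrical optics alone cannot predict this floor on the spot size; it is purely a diffraction effect.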
However, there exist numerous applications where diffraction is an advantage. One example is the diffraction grating used for spectral separation of radiation (see Section 7.3). Another is Fourier optics, a relatively new technology based on the diffraction-limited imaging properties of lenses.
When confronted with a hologram for the first time, most people react with disbelief. They look through an almost clear piece of film to see what looks like a solid object floating in space. Sometimes, they even reach out to touch it and find their fingers meet only thin air.
A hologram is a two-dimensional recording but produces a three-dimensional image. In addition, making a hologram does not involve recording an image in the usual sense. To resolve these apparent contradictions and understand how a hologram works, we have to start from first principles.
The concept of holographic imaging
In all conventional imaging techniques, such as photography, a picture of a three-dimensional scene is recorded on a light-sensitive surface by a lens or, more simply, by a pinhole in an opaque screen. What is recorded is merely the intensity distribution in the original scene. As a result, all information on the relative phases of the light waves from different points or, in other words, information about the relative optical paths to different parts of the scene is lost.
The unique characteristic of holography is the idea of recording the complete wave field, that is to say, both the phase and the amplitude of the light waves scattered by an object. Since all recording media respond only to the intensity, it is necessary to convert the phase information into variations of intensity.
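This conversion of phase into intensity is accomplished by interference with the reference wave: the recorded quantity |O + R|² contains a cross term whose value depends on the phase of the object wave. A minimal numerical sketch (not from the text; amplitudes are arbitrary):

```python
import numpy as np

def recorded_intensity(O, R):
    """Intensity of the superposed object wave O and reference wave R,
    given as complex amplitudes.

    |O + R|^2 = |O|^2 + |R|^2 + 2|O||R|cos(phase difference):
    the cosine term encodes the object phase as an intensity variation.
    """
    return abs(O + R) ** 2

# Sweep the object phase while holding both amplitudes fixed: the
# recorded intensity varies with phase, so the phase is not lost.
R = 1.0
for phase in (0.0, np.pi / 2, np.pi):
    O = 0.5 * np.exp(1j * phase)
    print(f"phase {phase:.2f} rad -> intensity {recorded_intensity(O, R):.3f}")
```

Without the reference wave, the recording would reduce to |O|² and all phase information would vanish, exactly as in ordinary photography.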
A typical optical system for recording transmission holograms of a diffusely reflecting object is shown in fig. 5.1; one for recording a reflection hologram is shown in fig. 5.2.
A simpler system for making reflection holograms is shown in fig. 5.3. This arrangement is essentially the same as that described originally by Denisyuk [1965] in which, instead of using separate object and reference beams, the portion of the reference beam transmitted by the photographic plate is used to illuminate the object. It gives good results with specular reflecting objects and with a recording medium, such as dichromated gelatin, which scatters very little light.
Making a hologram involves recording a two-beam interference pattern. The principal factors that must be taken into account in a practical setup to obtain good results are discussed in the next few sections.
Stability requirements
Any change in the phase difference between the two beams during the exposure will result in a movement of the fringes and reduce modulation in the hologram [Neumann, 1968].
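The scale of this effect can be estimated with the standard result for a linear phase drift during the exposure, which reduces the recorded modulation by a |sinc| factor. This sketch assumes a linear drift for simplicity (the text cites Neumann [1968] for the general analysis):

```python
import numpy as np

def fringe_modulation(delta_phi):
    """Relative fringe modulation remaining when the phase difference
    between the two beams drifts linearly by delta_phi (radians)
    during the exposure.

    Averaging the fringe term cos(phi) over a linear drift gives
    M = |sin(delta_phi / 2) / (delta_phi / 2)|.
    """
    x = delta_phi / 2.0
    return 1.0 if x == 0 else abs(np.sin(x) / x)

# Even a half-wavelength path change (delta_phi = 2*pi) during the
# exposure washes the fringes out completely.
for dphi in (0.0, np.pi / 2, np.pi, 2 * np.pi):
    print(f"drift {dphi:.2f} rad -> modulation {fringe_modulation(dphi):.3f}")
```

This is why path-length changes during the exposure must be held to a small fraction of a wavelength.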
In some situations, the effects of object movement can be minimized by means of an optical system in which the reference beam is reflected from a mirror mounted on the object [Mottier, 1969]. Alternatively, if the consequent loss in resolution can be tolerated, a portion of the laser beam can be focused to a spot on the object, producing a diffuse reference beam [Waters, 1972].
The emission and absorption of radiation, as well as the conversion of radiation into other modes of energy such as heat or electricity, all involve interaction between electromagnetic waves and atoms, molecules, or free electrons. Such daily phenomena as the radiative emission by the sun, the shielding of earth from harmful UV radiation by the ozone layer, the blue color of the sky, and red sunsets are all – despite their celestial magnitude – generated by microscopic particles. Most lasers depend on emission by excited atoms (e.g., the He-Ne laser), ionized atoms (the Ar+ laser), molecules (CO or CO2 lasers), impurities trapped in crystal structures (Nd:YAG or Ti:sapphire lasers), or semiconductors (GaAs diode lasers). Similarly, many scattering processes of interest (e.g., Rayleigh or Mie scattering) result from the exchange of energy and momentum between incident radiation and atomic or molecular species. In the previous chapter we saw that the energy of microscopic particles is quantized: their energy can be acquired, stored, or released only in fixed lumps called quanta. The example of the “particle in the box” (eqn. 8.19) illustrated that these energy quanta are specific not only to the particle itself but to the system to which it belongs. Thus, in the box, the energy of the particle is specified by its own mass and by the dimension of the box; in a different box, the same particle will have an entirely different system of energy levels and the quanta will have different magnitudes.
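The dependence of the quanta on both particle and system can be made concrete with the standard one-dimensional particle-in-a-box result, E_n = n²h²/(8mL²) (the form of eqn. 8.19). The electron mass and 1 nm box length below are illustrative choices, not values from the text:

```python
import math

H = 6.62607015e-34      # Planck constant [J s]
M_E = 9.1093837015e-31  # electron rest mass [kg]
EV = 1.602176634e-19    # 1 eV in joules

def box_energy(n, mass, length):
    """Energy of level n for a particle of the given mass confined in a
    one-dimensional box of the given length: E_n = n^2 h^2 / (8 m L^2)."""
    return n**2 * H**2 / (8 * mass * length**2)

# An electron in a 1 nm box: the level spacing depends both on the
# particle (its mass) and on the system (the box dimension).
for n in (1, 2, 3):
    print(f"n={n}: {box_energy(n, M_E, 1e-9) / EV:.3f} eV")
```

Doubling the box length reduces every level by a factor of four, illustrating that the same particle in a different system has entirely different quanta.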
Until now, our discussion of the interaction between radiation and matter has concentrated only on the spectral aspects of radiation. The results could determine the wavelengths for absorption and emission or the selection rules for such transitions, but could not be used to determine the actual extent of emission or absorption. These too are important considerations which are needed to fully quantify radiative energy transfer. Unfortunately, none of the classical theories can predict the extent of emission from an excited medium, or even the extent of absorption. Although the discussion in Section 4.9 (on the propagation of electromagnetic waves through lossy media) touched briefly on the concept of attenuation by absorption, it failed to show the reasons for the spectral properties of the absorption or to accurately predict its extent. We will see later that the classical results are useful only as a benchmark against which the actual absorber is compared. The objective of this chapter is therefore to present an introduction to quantum mechanical processes that control the emission and absorption by microscopic systems consisting of atoms and molecules. The results will then be used to predict the extent of emission by media when excited by an external energy source and to evaluate the absorption of incident radiation.
It is now well recognized that all emission or absorption processes are the result of transitions between quantum mechanical energy levels.
The discussion in Section 4.10 on the scattering by gas molecules and by submicron particles illustrated the rules of superposition of radiation from several sources. Without much detail, the analysis there showed that the irradiance resulting from such superposition depends on the coherence properties of the sources: when the radiation emanating from several sources is coherent, the fields are additive; if incoherent, only the energies are additive. The distinction between these two modes of addition is important in view of the quadratic dependence (eqn. 4.42) between the irradiance and the electric field. Thus, the analysis of the superposition of radiation emitted by incoherent sources requires only the summation of the irradiance from all sources at a point. No consideration of the frequencies or the phases of the interacting fields is needed. On the other hand, the irradiance that results from the superposition of radiation from a multitude of sources that are coherent with each other depends on the spatial and temporal distribution of the interacting fields, on their phases, and on their frequencies. Thus, before such irradiance can be determined, the distribution of the combined fields must be found. The spatial and temporal distribution of the irradiance is then obtained from the field distribution using (4.42). Here we discuss the details of the superposition of coherent electromagnetic fields. Such detailed analysis can be simplified when considering the superposition of only two beams obtained by splitting one beam emitted by a single source.
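The two modes of addition can be contrasted numerically. In the coherent case the complex fields are summed first and the irradiance is the squared magnitude of the sum (the quadratic dependence of eqn. 4.42); in the incoherent case the individual irradiances are simply added. A minimal sketch with two equal-amplitude fields (amplitudes are arbitrary):

```python
import numpy as np

E0 = 1.0
delta = np.linspace(0, 2 * np.pi, 5)  # relative phase between the fields

# Coherent addition: fields add, then the irradiance is the squared
# magnitude of the resultant field; it depends on the phase difference.
I_coherent = np.abs(E0 + E0 * np.exp(1j * delta)) ** 2

# Incoherent addition: only the irradiances add, independent of phase.
I_incoherent = np.abs(E0) ** 2 + np.abs(E0) ** 2

print("coherent:  ", np.round(I_coherent, 3))   # swings between 0 and 4*E0^2
print("incoherent:", I_incoherent)              # constant at 2*E0^2
```

The coherent sum ranges from zero to four times the single-beam irradiance as the phase difference varies, while the incoherent sum is fixed at twice that irradiance; this is precisely the fringe structure exploited in interferometry.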
The classical description of radiation and optics provides two alternative approaches. In the first and more rigorous approach, radiation is viewed as waves of electric and magnetic fields propagating in space. In the second approach, radiation is modeled by thin rays traveling from a source to a target while neglecting all aspects of its wave nature. Rigorous considerations show that the second approach, geometrical optics, is merely a class within the broader picture described by the first approach, which is called physical optics or electromagnetic theory. Electromagnetic theory is normally used to describe the propagation characteristics of electromagnetic waves. It is a very general theory that can depict most effects associated with the propagation of light. Many effects – such as the interference between several waves and diffraction – can be explained only by electromagnetic theory. Electromagnetic theory can also be used to design imaging and illuminating optical devices such as telescopes, microscopes, projectors, and mirrors. However, many of the wave characteristics of radiation are irrelevant for the successful design of these devices; only higher-order corrections require electromagnetic wave considerations. Therefore, in applications where the wave nature of radiation can be neglected, the alternative description of radiation and optics – geometrical optics – can be used. Although the information generated by geometrical optics is less detailed than results of electromagnetic theory, it is far less complex and yet provides a remarkable prediction of the performance of imaging and projecting optical devices.
Many of the characteristics of laser beams are determined by properties of their gain medium and by the loss and gain characteristics of the laser cavity. The previous chapter discussed factors that determine the wavelength and spectral bandwidth of laser beams, the characteristics of their longitudinal modes, gain requirements for steady-state oscillation, the ultimate power (or energy) of laser beams, the duration of a laser pulse when Q-switched or mode-locked, and so on. However, this wealth of information is insufficient for design applications where the spatial pattern of the energy delivery must be well defined. To illustrate this, recall that when a laser is used for illumination (such as in PLIF), a relatively uniform distribution of the energy may be required; for material processing, the beam energy may need to be concentrated into a narrow well-defined spot; and for holography or interferometry, the shape of the incident wavefronts may need to be geometrically simple. Furthermore, in all applications, the distribution of the energy passing through any optical element must be carefully controlled to prevent laser-induced damage by localized high-energy concentration. Popular belief has it that laser beams are always collimated and that their wavefronts are planar. But this is true only in the limit, when the beam diameter approaches infinity. Because of diffraction, the beam cannot remain collimated indefinitely when the diameter is finite; with the exception of a narrow range where the beam may be considered as nearly collimated, it must either converge or diverge.
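The finite extent of the "nearly collimated" range can be quantified with the standard TEM00 Gaussian-beam results: Rayleigh range z_R = πw₀²/λ and spot radius w(z) = w₀√(1 + (z/z_R)²). The sketch below is a generic illustration (the wavelength and waist are hypothetical, not values from the text):

```python
import math

def gaussian_beam_radius(wavelength, w0, z):
    """Spot radius w(z) of a TEM00 Gaussian beam with waist radius w0.

    Standard results: Rayleigh range z_R = pi * w0**2 / wavelength,
    w(z) = w0 * sqrt(1 + (z / z_R)**2); the far-field divergence
    half-angle is approximately wavelength / (pi * w0).
    """
    z_r = math.pi * w0**2 / wavelength
    return w0 * math.sqrt(1 + (z / z_r) ** 2)

# A 633 nm beam with a 0.5 mm waist: nearly collimated within the
# Rayleigh range (~1.24 m), visibly diverging well beyond it.
for z in (0.0, 1.0, 10.0):
    w = gaussian_beam_radius(633e-9, 0.5e-3, z)
    print(f"z = {z:5.1f} m -> w = {w * 1e3:.3f} mm")
```

Far beyond the Rayleigh range the radius grows essentially linearly with z, which is the divergence referred to above; only as the waist diameter tends to infinity does the divergence tend to zero.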
Holograms generated by means of a computer can be used to produce wavefronts with any prescribed amplitude and phase distribution; they are therefore extremely useful in applications such as laser-beam scanning and optical spatial-filtering (see sections 13.3 and 14.2) as well as for testing optical surfaces.
The production of holograms using a digital computer has been discussed in detail by Lee [1978], Yaroslavskii and Merzlyakov [1980], and Dallas [1980], and involves two principal steps.
The first step is to calculate the complex amplitude of the object wave at the hologram plane; for convenience this is usually taken to be the Fourier transform of the complex amplitude in the object plane. It can be shown, by means of the sampling theorem, that if the object wave is sampled at a sufficiently large number of points (see Appendix 2), this can be done with no loss of information. Thus, if an image consisting of N × N resolvable elements is to be reconstructed, the object wave is sampled at N × N equally spaced points, and the N × N complex coefficients of its discrete Fourier transform are evaluated. This can be done quite easily with a computer program using the fast Fourier transform algorithm [Cochran et al., 1967] for arrays containing as many as 1024 × 1024 points.
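This first step can be sketched with a discrete Fourier transform computed by an FFT routine. The object below (a bright rectangle on an N × N grid) is purely illustrative; the sketch also verifies the sampling-theorem point that no information is lost, since the inverse transform recovers the object field exactly:

```python
import numpy as np

# Hypothetical object: an N x N complex amplitude in the object plane
# (here a simple off-axis bright rectangle).
N = 256
obj = np.zeros((N, N), dtype=complex)
obj[96:160, 64:128] = 1.0

# Step 1: the complex amplitude of the object wave at the hologram
# plane, taken (as in the text) to be the discrete Fourier transform
# of the complex amplitude in the object plane.
hologram_field = np.fft.fft2(obj)

# Sampling at N x N points preserves all information: inverting the
# transform recovers the object field to within rounding error.
recovered = np.fft.ifft2(hologram_field)
print("max reconstruction error:", np.max(np.abs(recovered - obj)))
```

The N × N complex coefficients in `hologram_field` are the values that the second step must encode as a physical transparency.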
The second step involves using the computed values of the discrete Fourier transform to produce a transparency (the hologram), which reconstructs the object wave when it is suitably illuminated.