Photoinduced anisotropy in silver-halide materials
The appearance of dichroism in silver-halide emulsions exposed to linearly polarized light was first observed by F. Weigert [1–3] in 1919. This effect is called the Weigert effect (W-effect). Weigert discovered that the effect was most pronounced if the emulsion layer was first exposed to unpolarized short-wavelength light to form print-out silver and afterwards to long-wavelength linearly polarized light. Photodichroism was observed in the layers only before photographic development. Later it was found that the effect could be reversible [4]. When a silver chloride (AgCl) emulsion layer was exposed to polarized red light and the polarization direction was then rotated by 90°, the degree of dichroism first diminished, then disappeared completely, and, with continued exposure, reappeared with the opposite sign. This reversal could be repeated several times by rotating the polarization direction. Photoinduced dichroism was later also observed in single crystals of AgCl. Hilsch and Pohl [5] and Cameron and Taylor [6] found that, when AgCl crystals are exposed first to unpolarized UV light and then to linearly polarized red light, they become dichroic. The effect was also observed by Zocher and Coper [7].
All these early investigators of the W-effect also tried to give a theoretical explanation. They all assumed that the first exposure created anisotropic particles oriented in all directions.
In this chapter, we shall first examine the principles of interference and holography and extend them to the concept of polarization holography. We shall study the specific character of the periodic anisotropic structures obtained by the holographic method, which we call polarization holograms. We shall show how their efficiency and their polarization properties depend on the choice of recording geometry and on the photoinduced anisotropy in the materials. Since the formation of linear anisotropy is more common and more pronounced in the photoanisotropic materials known at present, we first consider polarization holography in materials with linear anisotropy only. We then extend the treatment to materials with both linear and circular anisotropy. The appearance of relief gratings on the surface of polarization holograms is also taken into account.
Plane-wave interference and holography
The holographic method was first proposed by Dennis Gabor in 1948 [1] as a method for the reconstruction of wavefronts. Gabor proposed a two-stage process. The first stage is a two-dimensional photographic recording of the intensity distribution in the interference pattern of the signal wave (S) with a reference wave (R). In the second stage the reference wave illuminates the recorded photograph and reconstructs both the amplitude and the phase of the signal wavefront. The method was called “holography”, that is, “whole writing”. Later it was shown by Denisyuk [2] that, if three-dimensional (volume) materials are used to record the interference field, the wavelength of the signal wave can also be restored.
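In the standard scalar description (a textbook summary, not specific to Gabor's original formulation), the recorded intensity and the reconstruction step can be written as follows, with R and S the complex amplitudes of the reference and signal waves:

```latex
I = |R + S|^2 = |R|^2 + |S|^2 + R^{*} S + R S^{*} .
```

If the developed photograph has an amplitude transmittance t proportional to I, illuminating it with the reference wave gives a transmitted field tR ∝ (|R|² + |S|²)R + |R|²S + R²S*, whose middle term reproduces the signal wavefront S up to a constant factor.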
We shall consider here light propagation through anisotropic materials or polarizing optical elements and systems, and shall describe in brief the methods used to find the changes in light intensity and polarization introduced by these anisotropic elements.
Materials with optical anisotropy
In anisotropic materials the velocity of light propagation depends on the propagation direction. The anisotropy is connected with the structure of the material. Typical materials with optical anisotropy are transparent crystals, and the theory of light propagation through anisotropic media is usually called crystal optics [1–3]. Optical anisotropy is also observed in liquid crystals, and in some amorphous materials subjected to external influences such as mechanical stress or electric fields. Stretched polymer films provide a good example. In this book we deal mainly with photoinduced anisotropy. In some materials illumination with polarized light causes selective destruction of absorbing molecules or centers, reordering of these absorbing centers, or other changes that depend on the light polarization. This results in polarization-dependent changes in the absorption coefficient and/or the refractive index of the material, that is, in optical anisotropy. The dependence of the absorbance on light polarization is called dichroism, and the dependence of the refractive index on light polarization is called birefringence.
When a material is anisotropic, its dielectric permittivity is a tensor, and as a consequence the wave surfaces in it are not spherical but ellipsoidal.
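In the standard notation of crystal optics (consistent with the treatments cited above), the tensor relation and the associated index ellipsoid read:

```latex
\mathbf{D} = \varepsilon_0\, \hat{\varepsilon}\, \mathbf{E},
\qquad
\frac{x^{2}}{n_x^{2}} + \frac{y^{2}}{n_y^{2}} + \frac{z^{2}}{n_z^{2}} = 1,
```

where ε̂ is the symmetric relative permittivity tensor and n_x = √ε_x, n_y = √ε_y, n_z = √ε_z are the principal refractive indices along its principal axes.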
The basic elements of an imaging system are shown in Figure 5.1. The light from a source, either coherent (e.g., a laser) or incoherent (e.g., an incandescent lamp or an arc lamp), is collected by the illumination optics (e.g., a condenser lens) and projected onto the object. An image is then formed by an objective lens upon a screen, a photographic plate, a CCD camera, the retina of an eye, etc. Assuming that the objective lens is free from aberrations, the resolution and the contrast of the image are determined not only by the numerical aperture of the objective lens but also by the properties of the light source and the illumination optics.
The source and the illumination optics
Three types of illumination will be considered. For collimated and coherent illumination we assume a monochromatic laser beam brought to focus at the plane of the object by a condenser lens having a very small numerical aperture (NA). Figure 5.2(a) shows the logarithmic intensity distribution at the object plane produced by a condenser of 0.03 NA. This distribution has the shape of an Airy pattern, with a central-lobe diameter of 1.22λ/NA ≈ 41λ, where λ is the wavelength of the light source. Since the objects of interest will be small compared with the Airy disk diameter, and since they will be placed near the center of the Airy disk, this illumination qualifies as coherent, fairly uniform, and nearly collimated.
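The quoted numbers are easy to verify numerically. The following Python sketch uses the standard Airy-pattern formula; only the 0.03 NA value comes from the text, everything else is illustrative:

```python
import numpy as np
from scipy.special import j1

wavelength = 1.0          # work in units of the wavelength λ
NA = 0.03                 # numerical aperture of the condenser (from the text)

r = np.linspace(1e-6, 100.0, 2000)          # radial coordinate, in units of λ
v = 2.0 * np.pi * NA * r / wavelength
intensity = (2.0 * j1(v) / v) ** 2          # normalized Airy intensity profile

# The first zero of J1 is at v ≈ 3.8317, giving a central-lobe radius of
# 0.61 λ/NA and hence a diameter of 1.22 λ/NA ≈ 41 λ for NA = 0.03.
radius = 3.8317 * wavelength / (2.0 * np.pi * NA)
print(f"central-lobe radius ≈ {radius:.1f} λ, diameter ≈ {2 * radius:.1f} λ")
```

Running this prints a diameter of about 40.7λ, in agreement with the ≈ 41λ quoted above.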
The characteristics of a beam of light emanating from a source in uniform motion with respect to an observer differ from those measured when the source is stationary. In general, it is irrelevant whether the source is stationary and the observer in motion or vice versa; the observed characteristics depend only on the relative motion. The observed frequency of the light, for example, has been known to depend on this relative motion since the Austrian physicist Christian Doppler (1842) showed the effect to exist both for sound waves and light waves.
The perceived direction of propagation of a light beam also depends on the relative motion of its source and the observer. The English astronomer James Bradley (1727) was the first to argue that the motion of the Earth in its orbit around the Sun causes a periodic shift of the apparent position of fixed stars as observed from the Earth; a telescope viewing a star must be tilted in the direction of the Earth's motion. Although this so-called stellar aberration could be explained on the basis of the corpuscular theory of light accepted at the time, certain features of it remained poorly understood until the advent of Einstein's special theory of relativity in 1905.
The mid-nineteenth-century measurements of the speed of light in moving media could be made to agree with the theories prevailing at the time only if one assumed that the moving medium partially carried along the luminiferous ether, the hypothetical medium that filled the Universe and in which light waves propagated.
I started writing the Engineering column of Optics & Photonics News (OPN) in early 1997. Since then nearly forty articles have appeared, covering a broad range of topics in classical optical physics and engineering. My original goal was to introduce students and practising engineers to some of the most fascinating topics in classical optics. This I planned to achieve with minimal use of the mathematical language that pervades the literature of the field. I had met many bright students and practitioners who either did not know or did not fully appreciate some of the major concepts of classical optics such as the Talbot effect, Abbe's sine condition, the Goos–Hänchen effect, Hamilton's internal and external conical refraction, Zernike's method of phase contrast, Michelson's stellar interferometer, and so on. My columns were going to have little mathematics but an abundance of pictures and pedagogical arguments, to bring forth the essence of the physics involved in each phenomenon. In the process, I hoped, the readers would appreciate the beauty of the subject and, if they found it interesting, would dig deeper by searching the cited literature.
A unique set of tools available to me for this purpose was the computer programs DIFFRACT™, MULTILAYER™, and TEMPROFILE™, which I have developed in the course of my research over the past 20 years. The first of these programs simulates the propagation of light through optical systems consisting of discrete elements such as lasers, lenses, mirrors, prisms, phase/amplitude masks, diffraction gratings, polarizers, wave-plates, multilayer stacks, birefringent crystals, and optically active materials.
The beam of light emanating from a quasi-monochromatic point source (or a sufficiently distant extended source) is said to be spatially coherent: the reason is that, at any two points on a given cross-section of the beam, the oscillating electromagnetic fields maintain their relative phase at all times. If an opaque screen with two pinholes is placed at such a cross-section, Young's interference fringes will form, and the observed fringe contrast will be 100% (at and around the center of the fringe pattern). This is the sense in which the fields at two points are said to be spatially coherent relative to each other. If the relative phase of the fields at the two points varies randomly with time, the pair of point sources will fail to produce Young's fringes and, therefore, the fields are considered to be incoherent. In practice there is a continuum of possibilities between the aforementioned extremes, and the resulting fringe contrast may fall anywhere between zero and 100%. The fields at the two points are then said to be partially coherent with respect to one another, and the properly defined fringe contrast in Young's experiment is used as the measure of their degree of coherence.
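For reference, the fringe contrast (visibility) mentioned here is conventionally defined, and related to the complex degree of coherence γ₁₂, as follows:

```latex
V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},
\qquad
I = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\lvert\gamma_{12}\rvert \cos\phi ,
```

so that for pinholes of equal intensity V = |γ₁₂|, ranging from 0 for incoherent fields to 1 (i.e., 100%) for fully coherent fields.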
Optical systems involving partially coherent illumination are explored in several other chapters of this book; see, for example, “Coherent and incoherent imaging” (Chapter 5), “Michelson's stellar interferometer” (Chapter 35), “Zernike's method of phase contrast” (Chapter 38), and “Polarization microscopy” (Chapter 39).
The classical theory of diffraction, according to which the distribution of light at the focal plane of a lens is the Fourier transform of the distribution at its entrance pupil, is applicable to lenses of moderate numerical aperture (NA). The incident beam, of course, must be monochromatic and coherent, but its polarization state is irrelevant since the classical theory is a scalar theory (see Chapter 2, “Fourier optics”). If the incident beam happens to be a plane wave and the lens is free from aberrations then the focused spot will have the well-known Airy pattern. When the incident beam is Gaussian the focused spot will also be Gaussian, since this particular profile is preserved under Fourier transformation. In general, arbitrary distributions of the incident beam, with or without aberrations and defocus, can be transformed numerically, using the fast Fourier transform (FFT) algorithm, to yield the distribution in the vicinity of the focus.
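The numerical route just described is easy to sketch in Python; the grid and aperture sizes below are illustrative choices, not values from the text:

```python
import numpy as np

# Scalar Fourier-optics recipe: the focal-plane field is the Fourier
# transform of the pupil distribution (moderate NA, no aberrations).
N = 512
x = np.linspace(-1.0, 1.0, N)                # normalized pupil coordinates
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 0.25).astype(float)  # circular aperture, radius 0.5

# Aberration-free plane-wave illumination: uniform amplitude over the pupil.
focal_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
intensity = np.abs(focal_field) ** 2
intensity /= intensity.max()                 # Airy-like pattern, peak at center
print(intensity[N // 2, N // 2])             # 1.0 at the center of the pattern
```

Aberrations or defocus would enter as a phase factor multiplying the pupil function before the transform.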
There are two basic reasons for the applicability of the classical scalar theory to systems of moderate NA. The first is that the bending of the rays by the focusing element(s) is fairly small, so that the electromagnetic field vectors (E and B) before and after the lens have more or less the same orientations. A scalar amplitude assigned to each point on the wavefront emerging from a system having low to moderate NA is thus sufficient to describe its electromagnetic state, whereas in the high-NA regime one can no longer ignore the vectorial nature of light.
In isotropic media the rays of geometrical optics are usually obtained from the surfaces of constant phase (i.e., wavefronts) by drawing normals to these surfaces at various points of interest. It is also possible to find the rays from the eikonal equation, which is derived from Maxwell's equations in the limit when the wavelength λ of the light is vanishingly small. Both methods provide a fairly accurate picture of beam propagation and electromagnetic-energy transport in situations where the concepts of geometrical optics and ray-tracing are applicable. The concept of rays, however, breaks down near caustics and focal points and in the vicinity of sharp boundaries, where diffraction effects and the vectorial nature of the field can no longer be ignored.
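For completeness, the eikonal equation mentioned above has the standard form (S is the eikonal, or optical path function, and n the local refractive index):

```latex
\lvert \nabla S(\mathbf{r}) \rvert^{2} = n^{2}(\mathbf{r}),
```

the rays being the curves everywhere tangent to ∇S, that is, normal to the wavefronts S = const.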
It is possible, however, to define the rays in a rigorous manner (consistent with Maxwell's electromagnetic theory) such that they remain meaningful even in those regimes where the notions of geometrical optics break down. Admittedly, in such regimes the rays are no longer useful for ray-tracing; for instance, the light rays no longer propagate along straight lines even in free space. However, the rays continue to be useful as they convey information about the magnitude and direction of the energy flow, the linear momentum of the field (which is the source of radiation pressure), and the angular momentum of the field. Such properties of light are currently of great practical interest, for example, in developing optical tweezers, where focused laser beams control the movements of small objects.
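The rigorous definition alluded to here is the one based on the time-averaged Poynting vector; for time-harmonic fields with complex amplitudes E and H,

```latex
\langle \mathbf{S} \rangle = \tfrac{1}{2}\,\operatorname{Re}\!\left( \mathbf{E} \times \mathbf{H}^{*} \right),
```

and the rays are taken along ⟨S⟩, a prescription that remains meaningful near foci and caustics, where the geometrical construction fails.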
The beam propagation method (BPM) is a simple numerical algorithm for simulating the propagation of a coherent beam of light through a dielectric waveguide (or other structure). Figure 32.1 shows the split-step technique used in the BPM, in which the diffraction of the beam and the phase-shifting action of the guide are separated from each other in repeated sequential steps of size Δz. One starts a BPM simulation by defining an initial cross-sectional beam profile in the XY-plane. The beam is then propagated (using classical diffraction formulas) a short distance Δz along the Z-axis before being sent through a phase/amplitude mask. The properties of the mask are derived from the cross-sectional profile of the waveguide (or other structure) in which the beam resides. The above steps of diffraction followed by transmission through a mask are repeated until the beam reaches its destination or until one or more excited modes of the guide become stabilized.
Instead of propagating continuously along the length of the guide, the beam in the BPM travels for a short distance in a homogeneous isotropic medium, which has the average refractive index of the guide but lacks the guide's features (core, cladding, etc.). After this diffraction step, a phase/amplitude mask is introduced in the beam path. To account for the refractive index profile of the guide, the mask must phase-shift certain regions of the beam relative to others.
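A minimal split-step BPM sketch in Python for a 1D slab guide follows; the wavelength, indices, step size, and guide width are illustrative placeholders, not parameters from the text:

```python
import numpy as np

wavelength = 1.55e-6                 # vacuum wavelength [m] (placeholder)
n_avg, n_core = 1.50, 1.52           # average / core refractive index
k0 = 2 * np.pi / wavelength

N, width = 1024, 100e-6              # transverse samples, window width [m]
x = np.linspace(-width / 2, width / 2, N)
dx = x[1] - x[0]
kx = 2 * np.pi * np.fft.fftfreq(N, dx)

dz = 1e-6                            # propagation step Δz [m]
# Paraxial diffraction transfer function for the homogeneous average medium
H = np.exp(-1j * kx**2 * dz / (2 * k0 * n_avg))
# Phase mask encoding the index profile relative to the average index
core = np.abs(x) < 4e-6              # 8 µm-wide core region
mask = np.exp(1j * k0 * (np.where(core, n_core, n_avg) - n_avg) * dz)

field = np.exp(-(x / 3e-6) ** 2)     # initial Gaussian beam profile
for _ in range(1000):                # propagate 1 mm in steps of Δz
    field = np.fft.ifft(H * np.fft.fft(field))   # diffraction half-step
    field *= mask                                # phase shift by the guide
# Fraction of power confined to the core after propagation
print(np.sum(np.abs(field[core]) ** 2) / np.sum(np.abs(field) ** 2))
```

Each loop iteration is one split step: free-space diffraction through the averaged medium, then the mask that imposes the guide's index profile as a phase shift.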
The classical theory of diffraction originated in the work of the French physicist Augustin Jean Fresnel, in the first quarter of the nineteenth century. Fresnel's ideas were subsequently expanded and elaborated by, among others, William Rowan Hamilton, Gustav Kirchhoff, George Biddell Airy, John William Strutt (Lord Rayleigh), Ernst Abbe, and Arnold Sommerfeld, leading to a complete understanding of light in its wave aspects.
The Fourier-transform operation occurs naturally in any formulation of the theory of diffraction, giving rise to a body of literature that has come to be known as Fourier optics. The prominence of Fourier transforms in physical optics is rooted in the fact that any spatial distribution of the complex amplitude of light can be considered a superposition of plane waves. (Plane waves, of course, are eigenfunctions of Maxwell's equations for the propagation of electromagnetic fields through homogeneous media.)
Many students of Fourier optics are intimidated by the approximations involved in deriving its basic formulas, but it turns out that the majority of these approximations are in fact unnecessary: by starting from a plane-wave expansion of the light amplitude distribution, rather than the traditional Huygens' principle, one can readily arrive at the fundamental results of the classical theory either directly or after applying the stationary-phase approximation. (For a detailed discussion of the stationary-phase method see the appendix to this chapter.)
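In standard notation, the plane-wave (angular-spectrum) expansion invoked here reads, for a scalar field of wavenumber k = 2π/λ:

```latex
u(x, y, z) = \iint A(k_x, k_y)\,
e^{\,i (k_x x + k_y y + k_z z)}\,\mathrm{d}k_x\,\mathrm{d}k_y ,
\qquad
k_z = \sqrt{k^{2} - k_x^{2} - k_y^{2}} ,
```

where A(k_x, k_y) is the Fourier transform of the field in the plane z = 0; propagation to any other plane then amounts to multiplying the spectrum by exp(i k_z z).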
Laser beams can deliver controlled doses of optical energy to specific locations on an object, thereby creating hot spots that can melt, anneal, ablate, or otherwise modify the local properties of a given substance. Applications include laser cutting, micro-machining, selective annealing, surface texturing, biological tissue treatment, laser surgery, and optical recording. There are also situations, as in the case of laser mirrors, where the temperature rise is an unavoidable consequence of the system's operating conditions. In all the above cases the processes of light absorption and heat diffusion must be fully analyzed in order to optimize the performance of the system and/or to avoid catastrophic failure.
The physics of laser heating involves the absorption of optical energy and its conversion to heat by the sample, followed by diffusion and redistribution of this thermal energy through the volume of the material. When the sample is inhomogeneous (as when it consists of several layers having different optical and thermal properties) the absorption and diffusion processes become quite complex, giving rise to interesting temperature profiles throughout the body of the sample. This chapter describes some of the phenomena that occur in thin-film stacks subjected to localized irradiation. We confine our attention to examples from the field of optical data storage but the selected examples have many features in common with problems in other areas, and it is hoped that the reader will find this analysis useful in understanding a variety of similar situations.
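The absorption-plus-diffusion picture described above is governed by the standard heat-conduction equation (a general statement, not tied to any particular layer structure in this chapter):

```latex
\rho\, C \,\frac{\partial T}{\partial t}
= \nabla \cdot \left( \kappa\, \nabla T \right) + Q(\mathbf{r}, t),
```

where ρ is the density, C the specific heat, and κ the thermal conductivity (each of which may differ from layer to layer), and Q is the local rate of heat generation by optical absorption.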