Although optical fiber communication has been an active area of research since the early 1970s, and optical transmission facilities have been widely deployed since the 1980s, serious activity in optical networking did not reach beyond the laboratory until the 1990s. It was in the early 1990s that a number of ambitious optical network testbed projects were initiated in the United States, Europe, and Japan. Although the testbeds were largely government financed, they planted the seeds for subsequent commercial developments, many of which were spin-offs of the testbed activities. The commercial ventures benefited from the knowledge accumulated in the testbeds as well as from the burgeoning worldwide demand for bandwidth. As a result, multiwavelength optical networks are deployed today in metropolitan and wide area applications, with increasing activity in local access as well. In this chapter we give an overview of current developments in metropolitan and wide area networks. Recent developments in access networks were discussed in detail in Chapter 5. The chapter begins with a brief discussion of the role of business drivers and relative costs in creating the current trends. This is followed by a summary of the early testbed projects in the United States and Europe, which provides the context for a description of current commercial activity in multiwavelength metro and long-haul networks. We continue with a discussion of new applications and services made possible by the unique features of intelligent optical networks, and conclude with some thoughts on the future.
In this chapter we explore the structure, design, and performance of purely optical networks with electronically switched overlays. These are the logically-routed networks (LRNs) that were introduced in Section 3.5. Typical examples of LRNs are networks of SONET digital cross-connects (DCSs), networks of IP/MPLS routers, and ATM networks carried on a SONET DCS layer. To provide maximum flexibility, the LRN should be carried on top of a reconfigurable optical network. Although we generally refer to the underlying infrastructure as “purely optical” (that is, transparent), we shall, from time to time, relax that requirement to include optical networks having some degree of opacity on their transmission links.
Introduction: Why Logically-Routed Networks?
The rationale for using logical switching on top of a purely optical infrastructure has been discussed at various points throughout the book. The number of stations in a purely optical network cannot be increased indefinitely without running into a connectivity bottleneck. The sources of the bottleneck are the resource limitations within the network (fibers and optical spectrum) and within the access stations (optical transceivers).
Figure 7.1(a) illustrates the bottleneck in a purely optical network. Network access station (NAS) A has established logical connections (LCs), shown as dashed lines in the figure, that fan out to stations B, C, and D. If this is a wavelength-routed network (WRN), each LC is carried on a separate point-to-point optical connection; that is, three optical transceivers and three distinct wavelengths are required (assuming that the stations have single fiber pair access links).
The multiwavelength network architecture described in Section 2.1 contains several layers of connections. By exploiting the various alternatives in each layer, it is possible to produce a rich set of transport network configurations. This chapter explores how a desired connectivity pattern can be established using the combined functionality contained in the various layers. The approach is to examine the properties of different classes of networks through a sequence of simple illustrative examples. The design objective in each example is to provide a prescribed connectivity to a set of end systems. Each of the network classes illustrated in this chapter is discussed in more detail in later chapters, as is the issue of optical network control.
Our first example is shown in Figure 3.1. Five geographically dispersed end systems are to be fully interconnected by a transport network, which is to be specified. The end systems might correspond to physical devices such as supercomputers that interact with each other, or they may be gateways (interfaces) to local access subnets (LASs) serving industrial sites, university campuses, or residential neighborhoods.
In all of these cases, a dedicated set of connections is desired (shown as dashed lines in the figure), providing full connectivity among all the sites. Figure 3.2(a) shows one possible transport network, whose physical topology (PT) is a star, in which the central node is a star coupler of the type shown in Figure 2.7(a). Each end system is connected to the star through its own network access station.
In Chapter 2 we proposed a layered view of the connections in an optical network, focusing primarily on issues associated with optical layer transport but including a discussion of transport in logical network (e.g., IP network) overlays as well. Then in Section 3.1 we encountered a different way of “slicing” the functionality of an optical network, distinguishing three planes: transport, control, and management. In general terms, the transport plane is responsible for the physical transfer of data across an optical network, the control plane provides the intelligence required for the provisioning and maintenance (e.g., failure recovery operations) of a connection, and the management plane provides management services such as performance monitoring, fault and configuration management, and accounting and security management. This chapter provides a summary of the current state of optical network control, which is a broad and rapidly evolving subject. The reader is referred to texts completely devoted to the subject of control (e.g., [Bernstein+04]) for a more comprehensive treatment.
The line between management and control is not clearly defined. But roughly speaking, management functions deal with long-term issues and operate on slow timescales, whereas control functions are associated with rapid changes in network configurations and operate on short timescales. For example, the repair of a network fault such as a cut cable would be a management function. It might require days or weeks. On the other hand, “point-and-click” provisioning, where a network user controls the provisioning and configuration of a connection, is a control function.
Graph and hypergraph terminology has evolved over the years. The following definitions are adapted from [Berge89, Bermond+97, Chartrand+96]. Some of the material in this appendix is found in other parts of the book. It is repeated here for convenience.
Graphs
A graph G consists of a set of vertices V(G) and a set of edges E(G), where each edge e is a pair of distinct vertices (u, v). (If the two vertices are the same, then the edge is a loop. We rule out these cases.) A graph with vertex set V and edge set E is typically denoted by G(V, E). If e = (u, v), then u and v are adjacent vertices and e is incident on u and v. Two edges are adjacent if they are incident on the same vertex. Nonadjacent edges or nonadjacent vertices are called independent. A set of pairwise independent vertices of a graph G that is of maximal cardinality is called a maximal independent set. Figure A.1 shows an example of a maximal independent set of vertices (outlined in dashed circles).
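As an illustration of these definitions, independence and maximality can be checked directly from the edge list. A minimal sketch in Python (the graph, a four-vertex path, is hypothetical, as are the helper names `is_independent` and `is_maximal_independent`):

```python
def is_independent(vertices, edges):
    """True if no edge joins two vertices of the set (pairwise independent)."""
    return not any(u in vertices and v in vertices for (u, v) in edges)

def is_maximal_independent(vertices, all_vertices, edges):
    """Independent, and no outside vertex can be added without
    destroying independence (maximality)."""
    if not is_independent(vertices, edges):
        return False
    return all(not is_independent(vertices | {w}, edges)
               for w in all_vertices - vertices)

# Graph G(V, E): a path a-b-c-d
V = {"a", "b", "c", "d"}
E = [("a", "b"), ("b", "c"), ("c", "d")]

print(is_independent({"a", "c"}, E))             # True: a and c are nonadjacent
print(is_maximal_independent({"a", "c"}, V, E))  # True: neither b nor d can be added
```

Note that {a} alone is independent but not maximal, since c or d could still be added.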
A graph in which every two vertices are adjacent is called a complete or fully connected graph. The complete graph with n vertices is denoted by Kn. Figure A.2 shows K5.
A graph G is called bipartite if its vertices can be partitioned into two subsets, V1 and V2, (called partite sets) such that every edge of G joins a vertex in V1 to one in V2.
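Bipartiteness can be tested by attempting to 2-color the graph with a breadth-first search; the attempt succeeds exactly when the graph contains no odd cycle. A minimal sketch (the function name `is_bipartite` and the example graphs are illustrative):

```python
from collections import deque

def is_bipartite(vertices, edges):
    """Try to 2-color the graph by BFS. Returns the color assignment
    (the partite sets V1 and V2) if bipartite, or None otherwise."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for start in vertices:          # handle disconnected graphs
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return None     # odd cycle found: not bipartite
    return color

# A 4-cycle is bipartite; a triangle (odd cycle) is not.
print(is_bipartite({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)]) is not None)  # True
print(is_bipartite({1, 2, 3}, [(1, 2), (2, 3), (3, 1)]) is None)                 # True
```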
Image formation means different things to different groups and individuals; in geometrical optics, however, its definition is clear: it refers to the formation of a light pattern that replicates a scene. The light (radiant power) pattern so formed resembles the scene or object and is called an image. In geometrical optics, an image-forming optical system creates a two-dimensional radiant pattern that resembles the scene a human eye would perceive as the object. There are two general classes of images in geometrical optics: those formed by lenses and those formed by projections. In present-day cameras, lenses are by far the most common means of obtaining an image.
Pinhole camera
One example of a projection system is the pinhole camera, also referred to as camera obscura (Latin for “dark chamber”), which uses a tiny pinhole to collect light without the use of a lens. Figure 3.1 illustrates this simple concept. You may recall, as a child, sitting inside a box while viewing an image projected through a pinhole onto the inside wall. The light from an object passes through a small aperture along a ray, to form an image on a surface. This image may either be projected onto a translucent screen for viewing through the camera, or onto an opaque surface for viewing in reflection.
An imaging system will necessarily have limited field of view and spatial resolution. This limitation is imposed by such factors as pixel size, detector-array format, the number of data collected, etc. The corresponding instantaneous field of view (IFOV) is therefore likely to encompass several “patches” of materials that possess different reflectance and/or emissivity properties. If we are lucky, the combined signal from each IFOV is a linear mixture of weighted radiances from each “pure” material within the IFOV.
Given sufficient signal-to-noise ratio (SNR), sub-pixel traces of a particular material may be detected from the presence of distinctive spectral features in the combined signature. A simpler technique relies on a single spectral channel within which the target and the background exhibit different radiance. For example, the 3–5 µm window may be used to detect smoldering fires in a natural background, and the detection of narrow concrete roads surrounded by vegetation is accomplished in the 0.6–0.7 µm band. Compare the spectra of healthy vegetation and soil with those of concrete or asphalt in this spectral region to see why.
Take a look at Figures B.1(a) and B.1(b) for some examples of a geometric interpretation of linearly mixed pixels.
A data cube from a space-borne spectrometer provides spectral data on spatial locations of interest, as shown in Figure B.2. The signal measured in each IFOV is a radiometric mixture of the constituents, i.e. [A(x) + B(x) + C(x) + D(x)], each weighted by its fractional area.
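Under this linear mixing model, the measured pixel spectrum is the sum of the endmember spectra weighted by their fractional areas, so in the noise-free case the fractions can be recovered by least squares. A minimal sketch with NumPy, using made-up endmember spectra (all numerical values are illustrative, not real material data):

```python
import numpy as np

# Hypothetical endmember spectra: rows are 5 spectral bands,
# columns are the four materials A, B, C, D. Values are illustrative only.
E = np.array([
    [0.10, 0.40, 0.70, 0.20],
    [0.15, 0.45, 0.60, 0.25],
    [0.30, 0.35, 0.50, 0.60],
    [0.50, 0.30, 0.40, 0.80],
    [0.60, 0.25, 0.30, 0.70],
])

# True fractional areas of A, B, C, D within one IFOV (they sum to 1).
f_true = np.array([0.5, 0.2, 0.2, 0.1])

# Measured pixel spectrum under the linear mixing model: x = E @ f
x = E @ f_true

# Recover the fractions by unconstrained least squares.
f_est, *_ = np.linalg.lstsq(E, x, rcond=None)
print(np.allclose(f_est, f_true))  # True: fractions recovered in the noise-free case
```

With sensor noise present, the estimate is only approximate, and practical unmixing adds non-negativity and sum-to-one constraints on the fractions.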
The act of image formation, in our present understanding, consists of reformatting diverging wavefronts from a source (object) into converging spherical wavefronts moving toward image points in the image plane. The transfer of wavefronts through an optical system is most easily followed, as we have done so far, by ray tracing. The tracing of rays through an optical system is determined purely by geometrical considerations and trigonometry. The assumptions made in ray tracing through an optical system are:
(1) Rays travel at a constant velocity in homogeneous media.
(2) Rays travel in straight lines.
(3) Rays follow Snell's law at the interface between media.
(4) At an interface, the reflected and refracted rays lie in the plane of incidence.
(5) Object and image surfaces are opaque.
Ray tracing through an optical system is best accomplished by a moving coordinate system using simple geometrical considerations and trigonometric functions, totally ignoring diffraction effects.
Thus far, only paraxial rays have been used to find the image location, size and brightness. The small angle approximation describes the optical system to first order; however, for object points at large distances from the optical axis, corresponding image points are clearly aberrated and not correctly predicted by paraxial ray tracing. Real ray tracing uses vectors starting from a point with direction cosines for the ray in each space (segment) as it is traced from the object point to the image point.
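Snell's law (assumption 3 above) has a convenient vector form for real ray tracing: given the unit incident direction and the unit surface normal, the refracted direction follows directly from the refractive indices on the two sides, and total internal reflection falls out naturally when the transmitted angle has no real solution. A minimal sketch (the function name `refract` and the numerical example are illustrative):

```python
import math

def refract(d, n, n1, n2):
    """Refract unit ray direction d at a surface with unit normal n
    (n pointing toward the incident side), using Snell's law in vector
    form. Returns the refracted unit direction, or None for total
    internal reflection."""
    mu = n1 / n2
    cos_i = -(d[0]*n[0] + d[1]*n[1] + d[2]*n[2])   # cosine of incidence angle
    sin2_t = mu * mu * (1.0 - cos_i * cos_i)       # sin^2 of transmitted angle
    if sin2_t > 1.0:
        return None                                # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    k = mu * cos_i - cos_t
    # Refracted ray stays in the plane of incidence (assumption 4).
    return tuple(mu * d[i] + k * n[i] for i in range(3))

# Ray hitting an air-to-glass surface (n1 = 1.0, n2 = 1.5) at 30 deg incidence.
theta_i = math.radians(30.0)
d = (math.sin(theta_i), -math.cos(theta_i), 0.0)   # heading down into the surface
n = (0.0, 1.0, 0.0)                                # normal, toward the incident side
t = refract(d, n, 1.0, 1.5)
theta_t = math.degrees(math.asin(t[0]))            # about 19.47 deg, per Snell's law
```

The same direction-cosine bookkeeping, surface by surface, is what carries a real ray from the object point to the image point.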
Our tour through paraxial optics has considered only perfect images of scenes, in which a point source in object space was mapped to a point in image space using paraxial rays. Gaussian optics also produces perfect images outside the paraxial region. Paraxial optics, however, is only a first-order approximation to a real optical system. Realizable optical systems do not produce perfect point images from point sources (represented mathematically as delta functions); in real optical systems there is some blur, or spreading, of the point image.
Diffraction
The complex propagation of light passing through an aperture stop of a lens system will form a less than perfect image (for a detailed explanation of Huygens' wavefronts and propagation see Mahajan (2001)). In fact, the best one can do is to make the system “diffraction-limited.” Diffraction occurs when a wavefront (radiant beam) impinges upon the edge of an opaque screen or aperture. Light appears outside the perfect geometrical shadow because the light has been diffracted by the edge of the aperture. The effect this has on our simple rotationally symmetric optical systems is that a point does not map to a point, but is blurred or smeared. You may have observed the effect of the diffraction of light from a portal where there is light beyond what would be defined as the geometrical shadow boundary. If a wavefront, as shown in Figure 11.1, passes through a circular aperture, it does not continue as a circular disc.
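The scale of the diffraction blur for a circular aperture can be estimated from the Airy pattern: the first dark ring lies at an angular radius of roughly 1.22λ/D, which corresponds to a spot radius of roughly 1.22λ·(f/D), i.e. 1.22λ times the f-number, at the focal plane. A minimal sketch of this small-angle estimate (the wavelength and f-number are illustrative values):

```python
def airy_first_minimum_angle(wavelength, aperture_diameter):
    """Angular radius (radians) of the first dark ring of the Airy
    pattern for a circular aperture: theta ~ 1.22 * lambda / D
    (small-angle form)."""
    return 1.22 * wavelength / aperture_diameter

def diffraction_limited_spot_radius(wavelength, f_number):
    """Radius of the diffraction-limited blur spot at the focal plane:
    r ~ 1.22 * lambda * (f / D) = 1.22 * lambda * F#."""
    return 1.22 * wavelength * f_number

# Illustrative values: green light (550 nm) through an f/8 lens.
wavelength = 550e-9                                   # meters
r = diffraction_limited_spot_radius(wavelength, 8.0)  # about 5.4 micrometers
```

Even an aberration-free ("diffraction-limited") system therefore smears each point into a spot of this size, which is why a point never maps exactly to a point.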
In Chapter 2 we introduced the concept of a plane mirror and its effect on the handedness of an image. The law of reflection follows as a special case of Snell's law: the angle of incidence equals the angle of reflection, with a sign change relative to the normal of the surface (see Equation (2.14)). Rays from any point on an object are reflected according to Snell's law in the plane of incidence. The plane of incidence is the plane containing the incident ray and the surface normal, as shown in Figure 8.1, in which the plane of the paper contains the ray and the normal (η).
As discussed in Chapter 2, the image of point P is located as far behind the mirror as the point is in front of the mirror. For an extended object made up of a continuum of points, as shown in Figure 8.2, the image is located by tracing rays backward in the plane of incidence. The image of the arrow has been inverted upon reflection, and point A′ is below point B′ on this image. What we have been doing is ray tracing in the plane of incidence. If we look at the object directly, we see a different orientation in the plane of incidence than we do if we look at the object via the mirror. An observer looking at the object and image, as shown in Figure 8.3, sees an inverted image.
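The reflection just described can be written compactly in vector form: a ray with direction d reflected off a surface with unit normal n leaves with direction d − 2(d·n)n, which keeps the reflected ray in the plane of incidence and makes the angles of incidence and reflection equal. A minimal sketch (the function name `reflect` and the numerical example are illustrative):

```python
def reflect(d, n):
    """Reflect ray direction d off a surface with unit normal n:
    d' = d - 2 (d . n) n.  The reflected ray lies in the plane of
    incidence, with angle of reflection equal to angle of incidence."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray traveling down and to the right hits a horizontal mirror (normal +y):
d = (0.6, -0.8, 0.0)
n = (0.0, 1.0, 0.0)
print(reflect(d, n))  # (0.6, 0.8, 0.0): the vertical component flips sign
```

Tracing each object point this way, backward through the mirror, is exactly how the virtual image in Figure 8.2 is located.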