The use of transmission lines has increased considerably since the author began lecturing on them at the University of Kent at Canterbury in October 1968. Today the mighty internet involves huge lengths of optical fibre, estimated at over 750 million miles, and similar lengths of copper cable. Ubiquitous mobile phones and personal computers contain circuits using microstrip, coplanar waveguide and stripline. Despite all these widespread modern applications, however, the basic principles of transmission lines have remained the same, so much so that the classic textbooks on this subject have been essential reading for nearly a hundred years. It is not the purpose of this book to repeat the content of these standard works but to present the material in a form which students may find more digestible. Moreover, in an age when mathematical calculations are relatively simple to perform on personal computers, there is less need for much of the advanced mathematics of earlier years.

The aim of this book is to introduce the reader to a wide range of transmission line topics using a straightforward mathematical treatment, linked to a large number of graphs illustrating the text. Although the professional worker in this field would use a computer program to solve most transmission line problems, the value of this book is that it provides exact solutions to many simple problems, which can be used to verify the more sophisticated computer solutions. The treatment of the material will also encourage ‘back-of-envelope’ calculations which may save hours of computer usage.

The author is aware of the hundreds of books published on every aspect of transmission lines and the myriad scientific publications appearing in an ever-increasing number of journals. To help the reader get started on exploring any topic in greater depth, this book contains comments on many of these specialist books at the end of each chapter. Beyond that, the reader faces the daunting task of searching the scientific literature for further information. It is the author's hope that this book will establish some of the basic principles of this extensive subject and so make the use of these scientific papers more profitable.
In the preceding five chapters the topic of attenuation has been largely omitted. One reason for this was to simplify the text, as the ‘loss-less’ or ‘loss-free’ theory is much easier than that for ‘lossy’ lines. Another is that attenuation is not the dominant characteristic of many transmission lines, particularly over short lengths. This means that the discussions in the previous five chapters are sufficient if the losses are small. However, no account of transmission lines would be complete without a discussion of the main causes and effects of attenuation. This chapter will begin with a return to the equivalent circuit method, as this enables the two main mechanisms of attenuation to be introduced in a straightforward manner. After that, the concepts will be extended to those transmission lines that require electromagnetic waves for their solution. Finally, some other aspects of attenuation will be discussed, including dispersion and pulse distortion.
Attenuation in two conductor transmission lines
At the beginning of Chapter 1, an equivalent circuit for a short length of loss-less transmission line was shown in Figure 1.1. In order to introduce the two main sources of attenuation, this diagram now needs amending. Firstly, any conductor will have some electrical resistance (curiously, even superconducting wires exhibit some resistance at microwave frequencies if there are still some unpaired electrons!), and this can be represented as a series distributed resistance, R, with units of Ω m⁻¹. The other source of attenuation is the loss that occurs because a dielectric has a small conductance. This can be represented by a parallel distributed conductance, G, with units of S m⁻¹. The effect of both of these resistive elements is to remove energy from the wave in proportion to the square of its amplitude. Not surprisingly, this results in an exponential decay of a sine wave.
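As a concrete illustration, the standard low-loss approximation gives an attenuation constant α ≈ R/(2Z₀) + GZ₀/2 in nepers per metre, and the wave amplitude then decays as e^(−αz). The short Python sketch below performs this ‘back-of-envelope’ calculation; the numerical values of R, G and Z₀ are illustrative assumptions, not data for any particular line:

```python
import math

# Illustrative (assumed) distributed parameters for a low-loss line
R = 0.1     # series resistance per unit length, ohm/m
G = 1e-6    # shunt conductance per unit length, S/m
Z0 = 50.0   # characteristic impedance, ohm (approximately real when losses are small)

# Standard low-loss approximation for the attenuation constant, Np/m
alpha = R / (2 * Z0) + G * Z0 / 2

# The amplitude after a length z decays exponentially as exp(-alpha * z)
z = 100.0   # line length, m
amplitude_ratio = math.exp(-alpha * z)

# The same loss expressed in decibels (1 Np = 8.686 dB)
loss_db = 8.686 * alpha * z

print(f"alpha           = {alpha:.3e} Np/m")
print(f"amplitude ratio = {amplitude_ratio:.3f} after {z:.0f} m")
print(f"total loss      = {loss_db:.2f} dB")
```

For these assumed values the series resistance dominates, and the line loses under 1 dB over 100 m; a quick check of this kind can confirm whether the loss-free theory of the earlier chapters remains an adequate approximation.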
This chapter will complete the discussion of photons on transmission lines. In the first three chapters, the equivalent circuit technique was used to develop many aspects of transmission lines. This involved a one-dimensional analysis which has stood the test of time and still produces a useful, although incomplete, picture. In the second section, however, the analysis using electromagnetic waves was given, revealing the three-dimensional nature of transmission lines. This showed features that the first approach could not capture: in particular, the velocity and the characteristic impedance of the waves, as well as the propagation of higher order modes and the causes of attenuation. The electromagnetic wave approach is outstanding in its accurate prediction of the complex behaviour of transmission lines. In Chapter 7, a third method was considered, using only plane waves travelling at the velocity of light; it was assumed that these plane waves were made up of linearly polarised photons. In this chapter, some further aspects of photons will be considered to see what other properties of transmission lines can be revealed. The use of photons is sufficiently new that it is less well established and accepted than the other two approaches used so far. One problem lies in the separation between classical and quantum electrodynamics. The classical theory can be used quite satisfactorily for large numbers of photons; however, if individual photons are being considered, only the quantum theory will describe the phenomena adequately. The full treatment of photons using quantum mechanics is beyond the scope of this book but, where appropriate, the results of such a treatment will be quoted. The other problem lies in the somewhat conflicting theories that currently exist. This chapter should therefore be read with some caution, as the picture presented may well change in the near future. Fortunately, the contents of the first seven chapters are sufficiently uncontroversial not to need changing within the lifetime of this book.
Optical magnetometry, in which a magnetic field is measured by observing changes in the properties of light interacting with matter immersed in the field, is not a new field. It has its origins in Michael Faraday's discovery in 1845 of the rotation of the plane of linearly polarized light as it propagated through a dense glass in the presence of a magnetic field. Faraday's historic discovery marked the first experimental evidence relating light and electromagnetism.
A century later, atomic magnetometers based on optical pumping were introduced and gradually perfected by such giants as Alfred Kastler, Hans Dehmelt, Jean Brossel, William Bell, Arnold Bloom, and Claude Cohen-Tannoudji, to name but a few of the pioneers. Recent years have seen a revolution in the field related to the development of tunable diode lasers, efficient antirelaxation wall coatings, techniques for the elimination of spin-exchange relaxation, and, most recently, the advent of optical magnetometers based on color centers in diamond. Today, optical magnetometers are pushing the boundaries of sensitivity and spatial resolution, and, in contrast to their able competitors, superconducting quantum interference device (SQUID) magnetometers, they do not require cryogenic temperatures. Numerous novel applications of optical magnetometers have flourished, from detecting signals in microfluidic nuclear-magnetic-resonance chips to measuring magnetic fields of the human brain to observing single nuclear spins in a solid matrix.
Nuclear magnetic resonance (NMR) is a powerful analytical tool for the elucidation of molecular form and function, finding application in disciplines including medicine (magnetic resonance imaging), materials science, chemistry, biology, and tests of fundamental symmetries [1–6]. Conventional NMR relies on a Faraday pickup coil to detect nuclear spin precession. The voltage induced in a pickup coil is proportional to the rate of change of the magnetic flux through the coil; hence, for a given nuclear spin polarization, the signal increases linearly with the Larmor precession frequency of the nuclear spins. Since the thermal nuclear spin polarization is also linear in the field strength, the overall signal is roughly proportional to B², motivating the development of ever stronger magnetic fields. Additionally, an important piece of information in NMR is the so-called chemical shift, which effectively modifies the gyromagnetic ratios of the nuclear spins depending on their chemical environment. This produces different precession frequencies for identical nuclei on different sites of a molecule, and the separation in precession frequencies is linear in the magnetic field. For these reasons, tremendous expense has gone into the development of stronger magnets. Typical spectrometers feature 9.4 T superconducting magnets, corresponding to 400 MHz proton precession frequencies, and state-of-the-art NMR facilities may feature 24 T magnets, corresponding to a 1 GHz proton precession frequency. While the performance of such machines is impressive, there are a number of drawbacks: superconducting magnets are immobile and expensive (roughly $500 000 for a 9.4 T magnet and console) and require a constant supply of liquid helium.
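These field-to-frequency figures follow directly from the proton gyromagnetic ratio, γ/2π ≈ 42.58 MHz T⁻¹. As a minimal illustration (not part of the original text), the short Python sketch below checks the quoted numbers:

```python
# Proton Larmor frequency: f = (gamma / 2*pi) * B,
# where gamma/2pi is approximately 42.577 MHz/T for the proton.
GAMMA_P_MHZ_PER_T = 42.577

for B in (9.4, 24.0):  # field strengths quoted in the text, in tesla
    f_mhz = GAMMA_P_MHZ_PER_T * B
    print(f"B = {B:5.1f} T  ->  f = {f_mhz:7.1f} MHz")

# Output: 9.4 T gives ~400 MHz and 24 T gives ~1022 MHz (~1 GHz), matching
# the proton precession frequencies quoted above. Since the thermal spin
# polarization and the induced pickup-coil voltage are each linear in B,
# the detected signal scales roughly as B squared, as noted in the text.
```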
At present, we know of four fundamental forces, three of which (electromagnetism, the strong force, and the weak force) are well described by what has come to be known as the Standard Model, a theory developed in the 1960s by Glashow, Weinberg, Salam, and others [1–3]. The fourth, gravity, is well understood at macroscopic scales in terms of Einstein's theory of general relativity [4, 5]. In spite of the spectacular agreement between these theoretical descriptions and numerous experimental measurements, it has been exceedingly challenging to develop a consistent theory of gravitation at the quantum scale, primarily because of the extreme difference between the mass and distance scales at which experimental tests of the two theories are performed. Furthermore, there are a variety of observations that have defied satisfactory explanation within this framework, prominent among them the matter–antimatter asymmetry of the universe [6], evidence for dark matter [7], and the accelerating expansion of the universe, attributed to a mysterious “dark energy” permeating spacetime [8]. It is always of great interest to carry out experiments testing the agreement between theory and measurement beyond the frontier of present precision, and the abundant mysteries confronting our modern understanding of fundamental particles and interactions make the present era an especially auspicious time for the discovery of new physics.
The techniques of optical magnetometry are ideally suited for experimental tests of fundamental physical laws involving atomic spins. For example, a variety of optical magnetometry techniques are being used to search for heretofore undiscovered spin-dependent forces that would indicate the existence of new fundamental interactions.
Along with electromagnetic (EM), gravity, and radiation detection methods, magnetometry is a basic method of geophysical exploration for resources including minerals, diamonds, and oil. Fixed-wing and helicopter-borne magnetometers and gradiometers are generally used for assessment explorations, with ground and marine methods providing follow-up mapping of interesting areas.
Magnetometers have been towed by or mounted on airborne platforms for resource exploration since the 1940s [1]. Mapping of the Earth's magnetic field can illuminate structural geology relating to rock contacts, intrusive bodies, basins, and bedrock. Susceptibility contrasts associated with differing amounts of magnetite in the subsurface can identify areas that are good candidates for base and precious metal mineral deposits or diamond pipes. Existing magnetic anomalies associated with known mineralization are often extrapolated to extend drilling patterns and mining activities into new areas.
After World War II, fluxgate sensors, originally employed for submarine detection, replaced dipping-needle and induction-coil magnetic field sensing systems as the airborne magnetometer of choice. While fluxgate and induction magnetometers could measure the components of the Earth's field rapidly (at 100 Hz or faster), their sensitivity to orientation made them a poor choice for installation on moving platforms. Experiments on nuclear magnetic resonance by Packard and Varian in 1953 led to the invention of the orientation-independent total-field proton precession magnetometer, and total-field magnetometers subsequently replaced vector magnetometer systems on mobile platforms.