Shannon's 1948 paper, which presented information theory in a form that already included most of the essential concepts, brought about a fundamental change in electronic communication. Today digital formats have almost entirely replaced earlier forms of signaling, and coding has become a universal feature of communication and data storage.
Information theory has developed into a sophisticated mathematical discipline, and at the same time it has almost disappeared from textbooks on digital communication and coding methods. This book is a result of the authors' desire to teach information theory to students of electrical engineering and computer science, and to demonstrate its continued relevance. We have also chosen to mix source coding and error-correcting codes, since both are components of the systems we focus on.
Early attempts to apply information-theoretic concepts to a broad range of subjects met with limited success. The development of the subject has mostly been fuelled by advances in the design of transmitters and receivers for digital transmission, such as modem design, and by other related applications. More recently, however, the extensive use of digitized graphics has made possible a vast range of applications, and we have chosen to draw most of our examples from this area.
The first five chapters of the book can be used for a one-semester course at the advanced-undergraduate or beginning-graduate level. Chapter 6 serves as a transition from the basic subjects to the more complex environments covered by current standards.
In the previous chapters we have presented the basic concepts of information theory, source coding, and channel coding. In Chapters 1–3 we followed traditional information-theory terminology in distinguishing between sources, which produce information, and channels, which are used for transmission (or storage) of information. In many current forms of communication, information passes through multiple steps of processing and assembly into composite structures. Since in such cases it can be difficult to make a distinction between sources and channels, we use the neutral term information medium to refer to structures, whether physical or conceptual, that are used for storing and delivering information. For brevity we use the shorter terms medium and media. The diverse forms of electronic media may serve as examples of the composite objects we have in mind and of the range of meanings of the term. As a specific case, one can think of a two-dimensional (2-D) barcode printed on an advertising display so that it can be read by a cell-phone camera and used as a way of accessing the business's website.
In the case of highly structured composite objects we shall make no attempt to apply concepts like entropy or capacity directly. Instead we limit our applications of information-theory tools to well-defined components of such objects in digital form. The present chapter discusses how 2-D media can be described in the light of these concepts, and how the various tools can be used in such applications.
This chapter describes fields of discrete symbols over finite alphabets A (with ∣A∣ symbols). Such fields can serve as models of graphics media or of other forms of two-dimensional (2-D) storage media. By treating the storage medium as a surface on which an image array may be written, the storage density may be increased. Besides optical storage, new storage techniques based on holography and nanotechnology have been demonstrated. Two-dimensional barcodes have also been designed for coding small messages while increasing the capacity compared with conventional barcodes, which carry information in one dimension only; the Datamatrix is an example of such a code. We focus on 2-D constrained coding for storage applications, but the models presented are more general and may be used for other applications as well.
Two-dimensional fields with a finite constraint
The symbols are placed in a regular grid (commonly referred to as a lattice) and indexed a(i, j), where i and j are integers. The first index indicates the row, and the symbols are spaced at equal intervals within each row. We usually consider rectangular grids, on which the symbols are aligned in columns indicated by the second index. However, the symbols in row i + 1 can be shifted a fraction of a symbol interval relative to row i; in particular, a shift of half a symbol interval gives a hexagonal grid. The mutual dependency of symbols is often described in terms of neighbors.
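To make the notion of a finite constraint on such a grid concrete, the following sketch (our own illustration; the constraint choice and function names are not from the text) checks the well-known hard-square constraint, which forbids two 1s in horizontally or vertically adjacent positions of a binary field.

```python
import numpy as np

# A small sketch (ours) of checking a 2-D finite constraint on a
# rectangular grid: the "hard-square" constraint forbids two 1s in
# horizontally or vertically adjacent positions a(i, j).

def satisfies_hard_square(field: np.ndarray) -> bool:
    horiz = (field[:, :-1] & field[:, 1:]).any()  # adjacent 1s within a row
    vert = (field[:-1, :] & field[1:, :]).any()   # adjacent 1s within a column
    return not (horiz or vert)

good = np.array([[1, 0, 1],
                 [0, 1, 0]])
bad = np.array([[1, 1, 0],
                [0, 0, 0]])
print(satisfies_hard_square(good), satisfies_hard_square(bad))  # True False
```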
In Chapter 3 we introduced linear error-correcting codes, and discussed how long codes can be obtained from short codes by the product construction. Reed–Solomon (RS) codes over larger alphabets were presented in Chapter 4, where we discussed the Datamatrix format as an example of how RS codes can be used to protect binary data. In this chapter we continue the analysis of these themes insofar as they relate to error correction in video and other applications.
We describe constructions of long error-correcting codes that are suitable for encoding two-dimensional (2-D) media. The size of the pages makes relatively long codes desirable, and 2-D constructions are often used to obtain them. However, there is not necessarily a link between the 2-D structure of the media and that of the code. In the last section we suggest that very long codes could be given a structure that would allow them to be connected with the 2-D structure of the media, and that such a code could be partially decoded in cases in which only a subset of the data has to be retrieved.
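As a reminder of how the product construction yields long codes from short ones, here is a minimal sketch (our own; the book's constructions are more elaborate) in which both the row code and the column code are simple single-parity-check codes, so every row and column of the codeword array has even parity.

```python
import numpy as np

# A minimal sketch (ours) of the product construction: arrange the data
# in a rectangle, encode every row with one short code and every column
# with another. Both components here are single-parity-check codes.

def product_encode(data: np.ndarray) -> np.ndarray:
    """Encode a k1 x k2 binary array into a (k1+1) x (k2+1) codeword."""
    rows, cols = data.shape
    code = np.zeros((rows + 1, cols + 1), dtype=int)
    code[:rows, :cols] = data
    code[:rows, cols] = data.sum(axis=1) % 2        # row parity bits
    code[rows, :] = code[:rows, :].sum(axis=0) % 2  # column parity bits
    return code

data = np.array([[1, 0, 1],
                 [0, 1, 1]])
print(product_encode(data))  # every row and column has even parity
```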
Binary images of Reed–Solomon codes
The RS codes that are used in applications are always based on the fields F(2^m). Here m = 8 is the traditional choice, but future applications are likely to use larger fields. As discussed in Section 4.6, the field is often constructed from a so-called primitive polynomial, p(z).
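As a hedged illustration of such a field construction, the sketch below implements arithmetic in F(2^8) using the primitive polynomial z^8 + z^4 + z^3 + z^2 + 1 (bit pattern 0x11D, a common choice in RS applications; the polynomial of Section 4.6 may differ).

```python
# A minimal sketch of arithmetic in F(2^8), assuming the primitive
# polynomial p(z) = z^8 + z^4 + z^3 + z^2 + 1 (0x11D); other choices
# of p(z) give isomorphic fields with different bit representations.

PRIM = 0x11D  # bit pattern of p(z), including the z^8 term

def gf_add(a: int, b: int) -> int:
    """Addition in F(2^m) is bitwise XOR (characteristic 2)."""
    return a ^ b

def gf_mul(a: int, b: int) -> int:
    """Multiply two field elements, reducing modulo p(z)."""
    result = 0
    while b:
        if b & 1:        # add a * z^k for each set bit of b
            result ^= a
        a <<= 1          # multiply a by z
        if a & 0x100:    # degree reached 8: reduce modulo p(z)
            a ^= PRIM
        b >>= 1
    return result

# Since p(z) is primitive, z (the element 0x02) generates all
# 255 nonzero elements of the field.
elem, seen = 1, set()
for _ in range(255):
    seen.add(elem)
    elem = gf_mul(elem, 0x02)
assert len(seen) == 255
```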
Finite fields are algebraic structures that attract much research interest and have a wide range of applications. These proceedings give a state-of-the-art account of the area of finite fields and their applications in communications (coding theory, cryptology), combinatorics, design theory, quasirandom points, algorithms and their complexity. Typically, theory and application are tightly interwoven in the survey articles and original research papers included here. These also demonstrate interconnections with other branches of pure mathematics such as number theory, group theory and algebraic geometry. This volume is an invaluable resource for any researcher in finite fields or related areas.
This book provides a comprehensive description of the methodologies and the application areas, throughout the range of digital communication, in which individual signals and sets of signals with favorable correlation properties play a central role. The necessary mathematical background is presented to explain how these signals are generated, and to show how they satisfy the appropriate correlation constraints. All the known methods to obtain balanced binary sequences with two-valued autocorrelation, many of them only recently discovered, are presented in depth. The authors treat important application areas including: Code Division Multiple Access (CDMA) signals, such as those already in widespread use for cell-phone communication, and planned for universal adoption in the various approaches to 'third-generation' (3G) cell-phone use; systems for coded radar and sonar signals; communication signals to minimize mutual interference ('cross-talk') in multi-user environments; and pseudo-random sequence generation for secure authentication and for stream cipher cryptology.
The design of code and cipher systems has undergone major changes in modern times. Powerful personal computers have resulted in an explosion of e-banking, e-commerce and e-mail, and as a consequence the encryption of communications to ensure security has become a matter of public interest and importance. This book describes and analyses many cipher systems ranging from the earliest and elementary to the more recent and sophisticated, such as RSA and DES, as well as wartime machines such as the ENIGMA and Hagelin, and ciphers used by spies. Security issues and possible methods of attack are discussed and illustrated by examples. The design of many systems involves advanced mathematical concepts and this is explained in detail in a major appendix. This book will appeal to anyone interested in codes and ciphers as used by private individuals, spies, governments and industry throughout history.
It's been over 40 years since Abraham Sinkov wrote this wonderful little book. As he mentions in his introduction (Preface to the first edition, above), this is a book concerning elementary cryptographic systems. Much has happened in the cryptographic world since this book first appeared. Notably, public-key systems have changed the landscape entirely. In bringing this book up to date, I've included the RSA method (Chapter 6), reasoning that understanding its underpinnings requires relatively elementary number theory and so would be a useful addition. The difficulty in breaking RSA leads to the question of what is a perfectly secure system, and so I've also added a chapter on one-time pads.
Otherwise, I've tried to change very little in the original text. Some terminology I've brought up to date: “direct standard” alphabets and ciphers are now “additive”, and “decimated” is now “multiplicative”. Sinkov's original exercises I've left unaltered. Their subjects are rather dated, reflecting the cold-war era (there are references to nuclear testing and communists), but I think leaving them does no harm to the material being studied, and they might now be thought of as “quaint”. In any case, decrypting them presents the same challenge as if more modern messages were used.
It's hard to find discussion of some of these topics anymore, methods you can think of as paper-and-pencil methods. I feel Sinkov still presents the best discussion available of how to break columnar ciphers of unequal length.
In past chapters we have looked at a series of encryption methods. Some ciphers, such as a simple shift cipher, are easy to break and so are not very secure. Other systems take more effort to break, but it is still reasonable to expect to break them, although you may need longer messages or an indication of some portion of the message. Even in the case of RSA, there is a small chance of a breakthrough in factoring, or someone might be incredibly lucky and be able to factor n, so even a message encrypted using RSA is not 100% secure. In the methods we have seen so far, the more secure systems require more effort on the part of the sender and receiver, which is what you'd expect. The less secure methods, however, have the advantage of being easy to implement and fast to use.
Is perfect security possible? That is, when given a ciphertext, is it impossible to find the plaintext, even if you are able to use an incredible amount of computing power and are incredibly lucky? Here, we want to be a little careful about what we mean by “finding the plaintext”. For instance, suppose we are using monoalphabetic substitution and our ciphertext is the message ABCD.
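One system that does achieve this kind of security is the one-time pad taken up later in the book. A minimal sketch follows (our own illustration, using bytes and XOR rather than letter arithmetic): with a truly random key as long as the message and never reused, every plaintext of the same length is equally consistent with the ciphertext.

```python
import secrets

# A minimal one-time-pad sketch (our illustration): XOR the plaintext
# with a fresh, uniformly random key of the same length, used only once.

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))  # fresh random key
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"ATTACK AT DAWN")
assert otp_decrypt(ct, key) == b"ATTACK AT DAWN"
```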
The preceding chapter has indicated how a monoalphabetic cipher can be solved. Even if the original word lengths are concealed and the substitution alphabet is random, it is possible to find a solution by using frequency data, repetition patterns and information about the way letters combine with one another. What makes the solution possible is the fact that a given plain language letter is always represented by the same cipher letter. As a consequence, all the properties of plain language such as frequencies and combinations are carried over into the cipher and may be utilized for solution. In effect we could say that all such properties are invariant except that the names of the letters have been changed.
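The frequency count at the heart of that attack is simple to carry out; here is a small sketch (our own illustration, not Sinkov's) that tabulates cipher-letter frequencies, whose profile matches that of the plaintext language with the letters merely relabeled.

```python
from collections import Counter

# A small sketch (ours) of the frequency count behind the attack: in a
# monoalphabetic cipher the plaintext language's frequency profile
# survives intact, only with the letters renamed.

def letter_frequencies(ciphertext: str) -> list[tuple[str, int]]:
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    return counts.most_common()

# The most frequent cipher letters are candidates for E, T, A, ...
print(letter_frequencies("WKH TXLFN EURZQ IRA"))
```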
It would seem then that one way to obtain greater security would be to use more than one alphabet in enciphering a message. The general system could be one that uses a number of different alphabets for encipherment, with an understanding between the correspondents of the order in which the alphabets are to be used.
As an illustration of a classic procedure, consider the method devised by the French cryptographer Vigenère. It utilizes the encipherment square known by his name, the Vigenère square, described in Chapter 1 (Figure 1.2). This square, whose successive rows consist of the normal alphabet shifted by 1 place, 2 places, etc., can easily be constructed whenever it is needed.
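In fact the square need not be written out at all, since row k is simply the alphabet shifted by k places. The following sketch (our own; the function name is illustrative) enciphers in the usual way, repeating the keyword along the message to choose the shift for each letter.

```python
# A compact sketch (ours) of Vigenère encipherment: the keyword selects
# which shifted alphabet (which row of the square) enciphers each letter.

def vigenere_encrypt(plaintext: str, keyword: str) -> str:
    A = ord('A')
    key = [ord(k) - A for k in keyword.upper()]
    out, i = [], 0
    for ch in plaintext.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - A + key[i % len(key)]) % 26 + A))
            i += 1  # only letters consume key positions
    return ''.join(out)

print(vigenere_encrypt("ATTACK AT DAWN", "LEMON"))  # LXFOPVEFRNHR
```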
In the previous chapters, we looked at many cipher systems and spent quite a bit of effort developing techniques (1) to determine what method might have been used to encrypt a given ciphertext and, once that was known, (2) to “crack” the cipher and come up with the key. In all these methods, an enemy knowing the encryption key compromises the system, as it is an easy step to figure out the decryption key. Likewise, knowing the decryption key allows one to easily recover the encryption key. Such systems are sometimes called symmetric ciphers or secret-key ciphers. Furthermore, all these methods are what we might term pencil-and-paper methods, as they can be implemented using only paper and pencil. The methods we are about to examine require the power of computers in order to perform the necessary computations.
In contrast to a secret-key cipher, a public-key cipher is a cipher where the encryption key is made public while the decryption key is kept secret. This allows many people to encrypt a message to the holder of the decryption key. Of course, knowledge of the encryption key in a public-key cipher must not allow someone to recover the decryption key, at least not without a tremendous amount of effort. (Thus all our previous ciphers fail miserably in this regard.) Public-key ciphers are also known as asymmetric ciphers, to distinguish them from symmetric ciphers.
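A toy numeric example (our own, with absurdly small primes, purely for illustration) shows the asymmetry: anyone can encrypt with the public pair (n, e), but decrypting requires the exponent d, which is easy to compute only if one can factor n.

```python
# A toy RSA sketch (ours; the primes are far too small for real use).
# The pair (n, e) is public; d stays secret and is derived from the
# factorization of n, which is what an attacker lacks.

p, q = 61, 53            # secret primes (illustration only)
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # Euler's totient, 3120
e = 17                   # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)      # secret exponent (Python 3.8+), 2753

m = 65                   # a message block, 0 <= m < n
c = pow(m, e, n)         # anyone can encrypt with (n, e)
assert pow(c, d, n) == m # only the holder of d can decrypt
```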