In Chapter 7, A Survey of Chosen-Prefix Collision Attacks, Marc Stevens surveys the technical advances, impact and usage of collision attacks for the most widely used cryptographic hash functions. Cryptographic hash functions are the Swiss army knives of cryptography and are used in many applications, including digital signature schemes, message authentication codes, password hashing, cryptocurrencies and content-addressable storage. Stevens was one of the driving forces in turning initial weaknesses in the compression function into practical attacks against various widely deployed protocols that rely on hash-function collision resistance for their security. In Chapter 7 he explains how each scenario required the development of slightly different ideas to exploit weaknesses, especially in MD5.
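To pin down the attack model: an identical-prefix collision produces two messages that differ only in computed blocks, whereas a chosen-prefix collision lets the attacker commit to two arbitrary, distinct prefixes first. The sketch below merely states that definition as a check in Python (hashlib is in the standard library); actually finding suffixes that make it return True is what the attacks surveyed in the chapter accomplish for MD5:

```python
import hashlib

def is_chosen_prefix_collision(p1, s1, p2, s2):
    # Chosen-prefix collision: the attacker first picks two *distinct*
    # prefixes p1, p2, then computes suffixes s1, s2 such that the two
    # full messages hash to the same MD5 digest.
    return (p1 != p2 and
            hashlib.md5(p1 + s1).digest() == hashlib.md5(p2 + s2).digest())
```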
In Chapter 9, Arithmetic Software Libraries, Victor Shoup provides a peek under the hood of NTL, a software library for doing number theory, as well as its relation to a few other software libraries. NTL supports data structures and arithmetic operations for manipulating signed, arbitrary-length integers, as well as extensions dealing with vectors, matrices, finite fields and polynomials over the integers. These mathematical objects are essential building blocks for any efficient cryptologic implementation. As Shoup explains, he started developing NTL around 1990 in order to implement novel number-theoretic algorithms and determine at what point theoretical, asymptotic gains become practical. NTL has become an essential tool for number-theoretic cryptanalysis, answering how well algorithms perform in practice, and it contains several algorithms for lattice-basis reduction, including LLL. As explained in Chapter 9, NTL was initially based on FreeLIP, but it continues to evolve to make the best use of GMP and modern hardware to provide cutting-edge performance.
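NTL itself is a C++ library; purely to illustrate the flavour of the objects it manipulates, the toy Python sketch below uses Python's built-in arbitrary-precision integers as a stand-in for NTL's ZZ type and a coefficient list as a stand-in for its integer polynomials. This is not NTL's API, only an analogy:

```python
# Arbitrary-precision integer arithmetic (what NTL's ZZ type provides in C++).
a = 2**521 - 1                     # a Mersenne prime
print(pow(3, a - 1, a) == 1)       # Fermat test to base 3: True, since a is prime

def polymul(f, g):
    # Schoolbook product of integer polynomials given as coefficient lists,
    # lowest degree first (the kind of operation NTL's ZZX supports, fast).
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

print(polymul([1, 2, 1], [1, 1]))  # (1 + 2x + x^2)(1 + x) -> [1, 3, 3, 1]
```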
In Chapter 11, History of Cryptographic Key Sizes, Nigel P. Smart and Emmanuel Thomé provide an overview of improvements over the years in estimating the security level of the main cryptographic hardness assumptions. These improvements rely heavily on the algorithmic innovations discussed in Chapters 2 to 5, but combining all those ideas into concrete key-size recommendations is not immediate. Smart and Thomé have been at the forefront of cryptanalytic research and of recommending cryptographic key sizes, and they conclude Chapter 11 with current recommendations.
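The starting point of any such estimate for RSA is the heuristic complexity of the number field sieve, L_N[1/3, (64/9)^(1/3)]. The sketch below evaluates its logarithm for common modulus sizes; the o(1) term is ignored and no calibration against real computations is attempted (exactly the calibration questions the chapter addresses), so only the growth shape is meaningful:

```python
import math

def log2_nfs_cost(modulus_bits):
    # log2 of L_N[1/3, c] = exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) with
    # c = (64/9)^(1/3): the heuristic asymptotic NFS run time, o(1) omitted.
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for bits in (512, 768, 1024, 2048, 3072):
    print(f"RSA-{bits}: ~2^{log2_nfs_cost(bits):.0f} NFS operations (uncalibrated)")
```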
In Chapter 2, Lattice Attacks on NTRU and LWE: A History of Refinements, Martin R. Albrecht and Léo Ducas provide an overview of the advances and techniques used in the field of lattice-reduction algorithms. Four decades after its invention, the LLL algorithm still plays a significant role in cryptography, not least because it has become one of the main tools to assess the security of a new wave of lattice-based cryptosystems intended for the new post-quantum cryptographic standard. The runtime of the LLL algorithm was always well understood, but the quality of its output, i.e., how short its output vectors are, could be hard to predict, even heuristically. Yet an important aspect in the evaluation of the new lattice schemes is accurate prediction of the hardness of the underlying lattice problems, which crucially relies on estimating the 'shortness' of the vectors that can be efficiently found using lattice reduction and enumeration. Albrecht and Ducas have been at the forefront of improving such estimators and build upon their expertise in Chapter 2.
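Two quantities anchor such estimates: the Gaussian heuristic for the length of the shortest vector in a random lattice, and the root Hermite factor achieved by a reduction algorithm. The sketch below contrasts the two; the constant 1.0219 is the empirical root Hermite factor commonly reported for LLL in the literature and is an illustrative assumption here, not a guarantee:

```python
import math

def gaussian_heuristic_log2(n, log2_vol):
    # Expected log2-length of the shortest vector in a "random" n-dim lattice:
    # lambda_1 ~ sqrt(n / (2*pi*e)) * vol^(1/n).
    return 0.5 * math.log2(n / (2 * math.pi * math.e)) + log2_vol / n

def lll_output_log2(n, log2_vol, root_hermite=1.0219):
    # Empirical model: LLL's first output vector has length about
    # delta_0^n * vol^(1/n), with root Hermite factor delta_0 ~ 1.02.
    return n * math.log2(root_hermite) + log2_vol / n

# Example: a 180-dimensional lattice of volume q^90 with q = 2^13.
n, log2_vol = 180, 90 * 13
print(f"shortest vector (Gaussian heuristic): ~2^{gaussian_heuristic_log2(n, log2_vol):.1f}")
print(f"LLL output (empirical model):         ~2^{lll_output_log2(n, log2_vol):.1f}")
```

The gap between the two printed lengths is precisely the 'shortness' question the chapter's estimators refine.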
In Chapter 10, XTR and Tori, Martijn Stam discusses how to work efficiently and compactly in certain multiplicative subgroups of finite fields. The emphasis is on XTR, which works in the cyclotomic subgroup of F_{p^6}. Stam worked extensively on speeding up XTR and related systems based on algebraic tori, and uses his first-hand knowledge to provide a historical perspective of XTR in Chapter 10.
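XTR's trace recurrences over F_{p^2} are somewhat involved, but the underlying trick already appears one level down: an element g of norm 1 in F_{p^2} can be represented compactly by its trace t in F_p, and Tr(g^n) can be computed from t alone via a Lucas sequence (the degree-2 analogue of XTR, underlying the LUC cryptosystem). The sketch below, with purely illustrative parameters, computes V_n with a ladder and cross-checks it against explicit F_{p^2} arithmetic:

```python
def lucas_v(t, n, p):
    # V_n of the Lucas sequence V_0 = 2, V_1 = t, V_k = t*V_{k-1} - V_{k-2} mod p.
    # If t = Tr(g) for a norm-1 element g of F_{p^2}, then V_n = Tr(g^n):
    # exponentiation entirely on traces, one F_p element instead of two.
    u, v = 2, t % p                                  # ladder pair (V_k, V_{k+1})
    for bit in bin(n)[2:]:
        if bit == '1':
            u, v = (u * v - t) % p, (v * v - 2) % p  # -> (V_{2k+1}, V_{2k+2})
        else:
            u, v = (u * u - 2) % p, (u * v - t) % p  # -> (V_{2k}, V_{2k+1})
    return u

def f2mul(x, y, p):
    # Multiplication in F_{p^2} = F_p(i) with i^2 = -1 (valid since p % 4 == 3).
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def f2pow(x, n, p):
    r = (1, 0)
    while n:
        if n & 1:
            r = f2mul(r, x, p)
        x, n = f2mul(x, x, p), n >> 1
    return r

p = 10007                          # prime with p % 4 == 3 (illustrative)
g = f2pow((3, 4), p - 1, p)        # h^(p-1) has norm 1, so its order divides p + 1
t = 2 * g[0] % p                   # trace of g = a + bi is 2a
print(lucas_v(t, 12345, p) == 2 * f2pow(g, 12345, p)[0] % p)   # True
```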
In Chapter 3, History of Integer Factorisation, Samuel S. Wagstaff, Jr gives a thorough overview of the hardness of one of the cornerstones of modern public-key cryptography. The history starts with the early efforts of Eratosthenes and his sieve and leads up to the modern number field sieve, currently the asymptotically fastest general-purpose integer-factorisation method known. Also covered are 'special' integer-factorisation methods, such as the elliptic-curve method, whose running time depends mainly on the size of the unknown prime divisor. Modern factorisation efforts often involve a gradual escalation of different methods, so it is essential to be familiar with a wide range of them; the essence of every relevant algorithm is explained clearly.
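Pollard's rho method is perhaps the shortest illustration of a 'special' method whose cost is governed by the smallest prime factor p (roughly p^(1/2) steps) rather than by the size of n itself. A minimal sketch, assuming n is composite:

```python
from math import gcd

def pollard_rho(n, c=1):
    # Pollard's rho with Floyd cycle finding: iterate x -> x^2 + c mod n and
    # look for a collision modulo an unknown factor, detected via a gcd.
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n              # tortoise: one step
        y = (y * y + c) % n              # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d < n else pollard_rho(n, c + 1)    # retry on rare failure

print(pollard_rho(2**64 + 1))            # -> 274177, a factor of the Fermat number F_6
```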
In Chapter 8, Efficient Modular Arithmetic, Joppe W. Bos, Thorsten Kleinjung and Dan Page discuss how to realise efficient modular arithmetic in practice on a range of platforms. They cover generic modular-multiplication routines, such as the popular Montgomery multiplication, as well as special routines for primes of a specific shape. Moreover, they outline especially fast methods that may produce errors with a very small probability; such faster 'sloppy reduction' techniques are especially beneficial in cryptanalytic settings. Bos, Kleinjung and Page have all been involved in various cryptographic and cryptanalytic record-setting software implementations. In Chapter 8 they explain how to select the most efficient modular-arithmetic routines for the cryptologic implementation task at hand.
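The core of Montgomery multiplication is the REDC step, which trades the division by n for cheap arithmetic modulo a power of two. A minimal Python sketch; the modulus and operands are arbitrary examples, and pow(n, -1, R) needs Python 3.8+:

```python
def redc(T, n, k, n_prime):
    # Montgomery reduction: for 0 <= T < n * 2^k, return T * 2^(-k) mod n
    # using only multiplications, shifts and masks -- no division by n.
    R_mask = (1 << k) - 1
    m = ((T & R_mask) * n_prime) & R_mask     # m = (T mod R) * n' mod R
    t = (T + m * n) >> k                      # exact division by R = 2^k
    return t - n if t >= n else t

n, k = 2**255 - 19, 256                       # odd modulus, R = 2^256 > n
n_prime = -pow(n, -1, 1 << k) % (1 << k)      # n' = -n^(-1) mod R
a, b = 123456789, 987654321
aR, bR = (a << k) % n, (b << k) % n           # map into the Montgomery domain
abR = redc(aR * bR, n, k, n_prime)            # Montgomery product = a*b*R mod n
print(redc(abR, n, k, n_prime) == a * b % n)  # map back out: True
```

Keeping operands in Montgomery form amortises the two domain conversions over an entire exponentiation, which is why the technique dominates in practice.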
In Chapter 5, Computing Discrete Logarithms, Robert Granger and Antoine Joux discuss the question “how hard is it to compute discrete logarithms in various groups?”. The chapter details the key ideas and constructions behind the most efficient algorithms for solving the discrete logarithm problem (DLP), with a focus on the recent advances related to finite fields of extension degree >1. A highlight is the rapid development, in the period 2012–2014, of quasi-polynomial-time algorithms to solve the DLP in finite fields of fixed characteristic. Both Granger and Joux contributed significantly to this development, albeit on competing teams. For this book, in Chapter 5, they join forces and explain how different ideas eventually led to the fall of the fixed-characteristic finite-field discrete logarithm problem.
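In a generic group the best known methods take about the square root of the group order; baby-step giant-step is the standard deterministic way to reach that baseline, which the index-calculus and quasi-polynomial algorithms for finite fields improve upon dramatically. A minimal sketch (parameters illustrative; pow(g, -m, p) needs Python 3.8+):

```python
from math import isqrt

def bsgs(g, h, p):
    # Baby-step giant-step: find x with g^x = h (mod p) in O(sqrt(p)) time
    # and memory by writing x = i*m + j and meeting in the middle.
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}      # baby steps g^j
    giant = pow(g, -m, p)                           # g^(-m) mod p
    y = h
    for i in range(m):
        if y in baby:
            return i * m + baby[y]                  # x = i*m + j
        y = y * giant % p
    return None                                     # h is not in <g>

p = 2**31 - 1                     # a Mersenne prime; 7 generates the full group
print(bsgs(7, pow(7, 123456789, p), p))            # -> 123456789
```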
In Chapter 4, Lattice-Based Integer Factorisation: An Introduction to Coppersmith’s Method, Alexander May investigates the use of LLL to factor integers, as pioneered by Coppersmith. Conceptually, Coppersmith’s method can be deceptively simple: given additional information about an integer to factor (e.g., the knowledge that an RSA key pair (N, e) has a small corresponding private exponent d), derive a system of equations with a small root that reveals the factorisation, and use LLL to find that small root. As a result, it becomes possible to explore exponentially sized search spaces while preserving polynomial time, by using the famous LLL lattice-reduction algorithm. Yet exploiting Coppersmith’s method optimally in a cryptographic context often involves a number of clever choices about which system of equations to consider. At first this is a tantalisingly annoying problem, where the choice may appear obvious only in retrospect. May uses his extensive experience in improving the state of the art to explain the reasoning behind various applications in Chapter 4.
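The mechanics can be shown end-to-end on a toy 'stereotyped message' instance: a message m = B + x0 with known part B and small unknown x0 is cubed modulo N, and x0 is recovered by LLL-reducing a four-dimensional lattice built from f(x) = (B + x)^3 - C, with column i scaled by X^i so that a short vector yields a polynomial with x0 as a root over the integers. Everything below (the modulus, the bounds and the textbook LLL with exact rational arithmetic) is illustrative rather than optimised:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    # Exact Gram-Schmidt over the rationals: returns B* and the mu coefficients.
    Bstar, mu = [], []
    for i, b in enumerate(B):
        v = [Fraction(x) for x in b]
        mu.append([Fraction(0)] * len(B))
        for j in range(i):
            mu[i][j] = dot(b, Bstar[j]) / dot(Bstar[j], Bstar[j])
            v = [x - mu[i][j] * y for x, y in zip(v, Bstar[j])]
        Bstar.append(v)
    return Bstar, mu

def lll(B, delta=Fraction(3, 4)):
    # Textbook LLL; recomputes Gram-Schmidt after every change (simple, not fast).
    B = [list(b) for b in B]
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < len(B):
        for j in range(k - 1, -1, -1):              # size-reduce b_k
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bstar, mu = gram_schmidt(B)
        if dot(Bstar[k], Bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1                                  # Lovasz condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]         # swap and step back
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Toy stereotyped-message instance: recover the small unknown x0 from
# C = (B_known + x0)^3 mod N. All parameters are illustrative.
N = 2**80 + 13                         # the method works for any modulus N
B_known, x0 = 2**40 + 12345, 2024
C = pow(B_known + x0, 3, N)
X = 2**11                              # bound on |x0|; X^6 is well below N

# f(x) = (B_known + x)^3 - C has the small root x0 modulo N.
f = [B_known**3 - C, 3 * B_known**2, 3 * B_known, 1]

# Lattice rows: coefficients of N, N*x, N*x^2 and f(x), column i scaled by X^i.
rows = [[N, 0, 0, 0],
        [0, N * X, 0, 0],
        [0, 0, N * X**2, 0],
        [f[0], f[1] * X, f[2] * X**2, f[3] * X**3]]

h = [c // X**i for i, c in enumerate(lll(rows)[0])]  # unscale shortest vector
print([x for x in range(-X, X + 1)                   # h vanishes at x0 over Z
       if sum(c * x**i for i, c in enumerate(h)) == 0])   # -> should print [2024]
```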
The area of computational cryptography is dedicated to the development of effective methods in algorithmic number theory that improve the implementation of cryptosystems or further their cryptanalysis. This book is a tribute to Arjen K. Lenstra, one of the key contributors to the field, on the occasion of his 65th birthday, covering his best-known scientific achievements in the field. Students and security engineers will appreciate this no-nonsense introduction to the hard mathematical problems used in cryptography, on which cybersecurity is built, as well as the overview of recent advances in how to solve these problems from both theoretical and practical applied perspectives. Beginning with polynomials, the book moves on to the celebrated Lenstra–Lenstra–Lovász lattice-reduction algorithm, and then progresses to integer factorisation and the impact of these methods on the selection of strong cryptographic keys for use in widely used standards.