In many applications, integer computations must be performed modulo some given constant. One such area is cryptology, where multiplications, inversions, and exponentiations often have to be performed modulo some very large integer. We shall therefore investigate algorithms for such operations in their own right, but also because they can serve as primitives for implementing multiple-modulus systems, also known as residue number systems (RNSs). Here an integer is represented by a set of residues (the values of the integer modulo a set of given integer moduli, often chosen to be prime numbers).
In such systems computations in a large integer domain can be performed truly in parallel on completely independent processors (often called “channels”), one for each modulus from the set of moduli, and thus operating in a much smaller domain. Due to this independence, additions can be performed “carry-free” in the sense that there is no interaction between the computations of the channels, each of which is operating on integers from a smaller domain. The same applies to multiplication, and as we have pointed out in Chapter 3, such arithmetic is one way to minimize the hj-separable sets, and thus to decrease the computation time by exploiting parallelism. Addition, subtraction, and multiplication in particular can thus be efficiently implemented, whereas other operations like division and comparisons are much more difficult.
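To make the channel structure concrete, here is a minimal Python sketch (the moduli 5, 7, 9 and all function names are illustrative choices of ours, not taken from the text): integers are represented by their residues, addition and multiplication are performed channel by channel with no interaction between channels, and the ordinary integer result is recovered with the Chinese remainder theorem.

    from math import prod

    MODULI = (5, 7, 9)           # pairwise coprime; dynamic range is 5*7*9 = 315

    def to_rns(x, moduli=MODULI):
        """Represent x by its residues, one per channel."""
        return tuple(x % m for m in moduli)

    def rns_add(a, b, moduli=MODULI):
        """Channel-wise addition: no carries cross channel boundaries."""
        return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

    def rns_mul(a, b, moduli=MODULI):
        """Channel-wise multiplication, likewise independent per channel."""
        return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

    def from_rns(r, moduli=MODULI):
        """Chinese remainder reconstruction back to an ordinary integer."""
        M = prod(moduli)
        x = 0
        for ri, m in zip(r, moduli):
            Mi = M // m
            x += ri * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse (Python 3.8+)
        return x % M

    a, b = to_rns(17), to_rns(12)
    assert from_rns(rns_add(a, b)) == (17 + 12) % 315
    assert from_rns(rns_mul(a, b)) == (17 * 12) % 315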
The product of two radix polynomials is a radix polynomial whose digits can be expressed as sums of digit products from the operand polynomials. The digits thus expressed are in general from some much larger digit set, and a conversion must be performed in some way to deliver the result in the same digit set as that of the operands. The order in which digit products are generated, and later accumulated, is algorithm specific.
Often intermediate products (called partial products) are formed by multiplying a digit from one operand (the multiplier) by the complete other operand (the multiplicand), followed by accumulation of shifted versions of these, to account for the different weights of the multiplier digits. Hence there are three recognizable steps in forming the product:
(i) formation of digit or partial products,
(ii) summation of digit or partial products,
(iii) final digit set conversion,
where these steps individually or in combination can be performed in parallel or sequentially in various ways.
The order in which digit products or partial products are formed and how accumulation is performed thus characterize the algorithms. Some implicitly perform a base and digit set conversion, utilizing a higher radix to reduce the number of terms that have to be added. The different algorithms can also be characterized according to the way operands are delivered/consumed, and how the result is produced, whether word parallel or digit serial, and in the last case in which order.
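As an illustration of the three steps, here is a small radix-10 Python sketch (digit lists are least-significant first, and all names are our own): step (i) forms the digit products, step (ii) accumulates them column by column according to weight, and step (iii) converts the oversized column sums back to the ordinary decimal digit set by propagating carries.

    def multiply(a_digits, b_digits, radix=10):
        # (i) form digit products and (ii) accumulate them by weight
        columns = [0] * (len(a_digits) + len(b_digits))
        for i, ai in enumerate(a_digits):
            for j, bj in enumerate(b_digits):
                columns[i + j] += ai * bj          # sums may exceed the digit set
        # (iii) convert back to the operand digit set by propagating carries
        carry = 0
        result = []
        for c in columns:
            carry, digit = divmod(c + carry, radix)
            result.append(digit)
        while carry:
            carry, digit = divmod(carry, radix)
            result.append(digit)
        return result

    # 321 * 46 = 14766, digits stored least-significant first
    assert multiply([1, 2, 3], [6, 4]) == [6, 6, 7, 4, 1]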
From the earliest cultures, humans have used methods of recording numbers (integers), by cutting notches in wooden sticks or by collecting pebbles in piles or rows. Conventions for replacing a larger group or pile, e.g., five, ten, or twelve objects, by another object or marking are also found in some early cultures. Number representations like these are examples of positional number systems, in which objects carry different weights according to their relative positions in the number. The weights associated with different positions need not be related by a constant ratio between the weights of neighboring positions. In time, distance, old currency, and other measuring systems we find varying ratios between the different units of the same system, e.g., for time: 60 minutes to the hour, 24 hours to the day, and 7 days to the week.
Systems with a constant ratio between the position weights are called radix systems; each position has a weight which is a power of the radix. Such systems can be traced back to the Babylonians, who used radix 60 for astronomical calculations, however without a specific notation for the position of the unit, so their notation can be considered a kind of floating-point notation. Manipulating numbers in such a notation is fairly convenient for multiplication and division, as anyone who has used a slide rule knows. Our decimal notation with its fixed radix point seems to have been developed in India about 600 CE, but without decimal fractions.
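A small Python sketch (ours, purely illustrative) shows how a value is recovered from digits whose position weights are given by arbitrary unit ratios, with a fixed-radix system appearing as the special case of a constant ratio.

    def positional_value(digits, ratios):
        """digits: most-significant first; ratios[i]: how many units of
        position i+1 make one unit of position i (e.g. 24 hours per day)."""
        value = digits[0]
        for d, r in zip(digits[1:], ratios):
            value = value * r + d
        return value

    # 2 days, 5 hours, 30 minutes expressed in minutes (24 hours/day, 60 min/hour)
    assert positional_value([2, 5, 30], [24, 60]) == 3210
    # the decimal number 305 as the fixed-radix special case (constant ratio 10)
    assert positional_value([3, 0, 5], [10, 10]) == 305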
Floating-point number systems evolved from the process of expressing numbers in “scientific decimal notation,” e.g., 2.734 × 10⁶ and −5.23 × 10⁻⁴. In practice, the display of a value in scientific notation generally employed the length (precision) of the display as a measure of the accuracy of the display. Values such as α = 6.0247 × 10²³ for Avogadro's number or π = 3.14159 × 10⁰ for the natural constant π were subject to various interpretations regarding the final digit with respect to the correspondence to the presumed infinitely precise specification of the real number for which the scientific notation was an approximation. Terms such as correct, rounded, and significant were often explicitly or implicitly associated with the display.
Specifically, the digits were termed correct if they agreed with the initial digits of an infinitely precise specification, such as 3.1415 or 3.141592 for π. This interpretation is now popularly termed the round-towards-zero result. The notions of rounded and significant digits did not convey such an unambiguous interpretation. Saying that 2.734 × 10⁻⁶ was a rounded value of x did not necessarily mean that |x − 2.734 × 10⁻⁶| was guaranteed to be at most 0.5 × 10⁻⁹. More particularly, disregarding inherent measurement error, the rounding of the exact “midpoint” value x = 2.7345 × 10⁻⁶ to four places was subject to different interpretations. Most ambiguous was the statement that the digits 6.0247 × 10²³ were significant in representing a number such as Avogadro's constant. This probably meant only that the relative error was less than one-half part in 6024 but not necessarily less than one-half part in 60247.
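The midpoint ambiguity can be seen directly with Python's decimal module (our choice of tool; the text itself prescribes no particular rule): rounding the significand 2.7345 to four digits gives different results under the half-up and half-even conventions.

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    x = Decimal("2.7345")
    # round half away from zero: the midpoint goes up
    print(x.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))    # 2.735
    # round half to even: the midpoint goes to the even last digit
    print(x.quantize(Decimal("0.001"), rounding=ROUND_HALF_EVEN))  # 2.734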
This book builds a solid foundation for finite precision number systems and arithmetic, as used in present-day general-purpose computers and special-purpose processors for applications such as signal processing, cryptology, and graphics. It is based on the thesis that a thorough understanding of number representations is a necessary foundation for designing efficient arithmetic algorithms.
Although computational performance is enhanced by the ever-increasing clock frequencies of VLSI technology, selection of the appropriate fundamental arithmetic algorithms remains a significant factor in the realization of fast arithmetic processors. This is true whether for general-purpose CPUs or for specialized processors performing complex and time-critical calculations. With faster computers the solution of ever larger problems becomes feasible, implying a need for greater precision in numerical problem solving, as well as for larger domains for the representation of numerical data. Where 32-bit floating-point representations used to be the standard precision employed for routine scientific calculations, with 64-bit double precision only occasionally required, the standard today is 64-bit precision, supplemented by 128-bit precision in special situations. This is becoming mainstream for commodity microprocessors as the revised IEEE standard of 2008 extends the scalable hierarchy of floating-point formats to include 128-bit formats. To address large memories, the standard width of registers and buses in modern CPUs is now 64 bits, since 32 bits cannot support the larger address spaces needed for growing memory sizes. It follows that basic address calculations may require larger-precision operands.
Combining a concrete perspective with an exploration-based approach, Exploratory Galois Theory develops Galois theory at an entirely undergraduate level. The text grounds the presentation in the concept of algebraic numbers with complex approximations and assumes of its readers only a first course in abstract algebra. The author organizes the theory around natural questions about algebraic numbers, and exercises with hints and proof sketches encourage students' participation in the development. For readers with Maple or Mathematica, the text introduces tools for hands-on experimentation with finite extensions of the rational numbers, enabling a familiarity never before available to students of the subject. Exploratory Galois Theory includes classical applications, from ruler-and-compass constructions to solvability by radicals, and also outlines the generalization from subfields of the complex numbers to arbitrary fields. The text is appropriate for traditional lecture courses, for seminars, or for self-paced independent study by undergraduates and graduate students.
Computing environments that furnish a large set of tools (such as editors, mail programs and language processors) are difficult to use, primarily because there is no means of organizing the tools so that they are at hand when needed. Because of the dearth of knowledge of how users behave when issuing commands to general purpose computer systems, user support facilities are ad-hoc designs that do not support natural work habits. The Computer User as Toolsmith, first published in 1993, describes several empirical studies from which the author has developed a computer version of a handyman's workbench that would help users with their online activities. For the practitioner and interface designer, the guidelines and principles offered here are directly applicable to the rational design of new systems and the modernization of old ones. For the researcher and graduate student, the book offers a wealth of analysis and interpretation of data, as well as a survey of research techniques.
This volume is a collection of articles based on the plenary talks presented at the 2005 meeting in Santander of the Society for the Foundations of Computational Mathematics. The talks were given by some of the foremost world authorities in computational mathematics. The topics covered reflect the breadth of research within the area as well as the richness and fertility of interactions between seemingly unrelated branches of pure and applied mathematics. As a result this volume will be of interest to researchers in the field of computational mathematics and also to non-experts who wish to gain some insight into the state of the art in this active and significant field.
Workplace studies are of growing significance to people in a broad range of academic disciplines and professions, in particular those involved in the development of new technologies. This groundbreaking book brings together key researchers in Europe and the US to discuss critical issues in the study of the workplace and to outline developments in the field. The collection is divided into two parts. Part I contains a number of detailed case studies that not only provide an insight into the issues central to workplace studies but also some of the problems involved in carrying out such research. Part II focuses on the interrelationship between workplace studies and the design of new technologies. This book provides a valuable, multidisciplinary synthesis of the key issues and theoretical developments in workplace studies and a guide to the implications of such research for new technology design and the workplace.
This book by one of the world's foremost philosophers in the fields of epistemology and logic offers an account of suppositional reasoning relevant to practical deliberation, explanation, prediction and hypothesis testing. Suppositions made 'for the sake of argument' sometimes conflict with our beliefs, and when they do, some beliefs are rejected and others retained. Thanks to such belief contravention, adding content to a supposition can undermine conclusions reached without it. Subversion can also arise because suppositional reasoning is ampliative. These two types of nonmonotonic logic are the focus of this book. A detailed comparison of nonmonotonicity appropriate to both belief contravening and ampliative suppositional reasoning reveals important differences that have been overlooked.
Semiotics is the science of signs: graphical, such as pictures; verbal (writing or sounds); or others such as body gestures and clothes. Computer semiotics studies the special nature of computer-based signs and how they function in use. This 1991 book is based on ten years of empirical research on computer usage in work situations and contains material from a course taught by the author. It introduces basic traditional semiotic concepts and adapts them so that they become useful for analysing and designing computer systems in their symbolic context of work. It presents a novel approach to the subject, rich in examples, in that it is both theoretically systematic and practical. The author refers to and reinterprets techniques already used so that readers can deepen their understanding. In addition, it offers new techniques and a consistent perspective on computer systems that is particularly appropriate for new hardware and software (e.g. hypermedia) whose main functions are presentation and communication. This is a highly important work whose influence will be wide and long-lasting.
On-line learning is one of the most powerful and commonly used techniques for training large layered networks and has been used successfully in many real-world applications. Traditional analytical methods have been recently complemented by ones from statistical physics and Bayesian statistics. This powerful combination of analytical methods provides more insight and deeper understanding of existing algorithms and leads to novel and principled proposals for their improvement. This book presents a coherent picture of the state-of-the-art in the theoretical analysis of on-line learning. An introduction relates the subject to other developments in neural networks and explains the overall picture. Surveys by leading experts in the field combine new and established material and enable non-experts to learn more about the techniques and methods used. This book, the first in the area, provides a comprehensive view of the subject and will be welcomed by mathematicians, scientists and engineers, whether in industry or academia.
The core of scientific computing is designing, writing, testing, debugging and modifying numerical software for application to a vast range of areas: from graphics, meteorology and chemistry to engineering, biology and finance. Scientists, engineers and computer scientists need to write good code, for speed, clarity, flexibility and ease of re-use. Oliveira and Stewart's style guide for numerical software points out good practices to follow, and pitfalls to avoid. By following their advice, readers will learn how to write efficient software, and how to test it for bugs, accuracy and performance. Techniques are explained with a variety of programming languages, and illustrated with two extensive design examples, one in Fortran 90 and one in C++; other examples in C, C++, Fortran 90 and Java are scattered throughout the book. This manual of scientific computing style will be an essential addition to the bookshelf and lab of everyone who writes numerical software.
Information retrieval (IR), the science of extracting information from any potential source, can be viewed in a number of ways: logical, probabilistic and vector space models are some of the most important. In this book, the author, one of the leading researchers in the area, shows how these views can be reforged in the same framework used to formulate the general principles of quantum mechanics. All the usual quantum-mechanical notions have their IR-theoretic analogues, and the standard results can be applied to address problems in IR, such as pseudo-relevance feedback, relevance feedback and ostensive retrieval. The relation with quantum computing is also examined. To keep the book self-contained, appendices with background material on physics and mathematics are included. Each chapter ends with bibliographic remarks that point to further reading. This is an important, ground-breaking book, with much new material, for all those working in IR, AI and natural language processing.
Significant amounts of our time and energy are devoted to creating, managing, and avoiding information. Computers and telecommunications technology have extended our regard for information and are driving changes in how we learn, work, and play. One result of these developments is that skills and strategies for storing and retrieving information have become more essential and more pervasive in our culture. This book considers how electronic technologies have changed these skills and strategies and augmented the fundamental human activity of information seeking. The author makes a case for creating new interface designs that allow the information seeker to choose what strategy to apply according to their immediate needs. Such systems may be designed by providing information seekers with alternative interface mechanisms for displaying and manipulating multiple levels of representation for information objects. Information Seeking in Electronic Environments is essential reading for researchers and graduate students in information science, human-computer interaction, and education, as well as for designers of information retrieval systems and interfaces for digital libraries and archives.