Book contents
- Frontmatter
- Dedication
- Content
- List of Figures
- List of Tables
- Preface
- Acknowledgments
- 1 Model and Analysis
- 2 Basics of Probability and Tail Inequalities
- 3 Warm-up Problems
- 4 Optimization I: Brute Force and Greedy Strategy
- 5 Optimization II: Dynamic Programming
- 6 Searching
- 7 Multidimensional Searching and Geometric Algorithms
- 8 String Matching and Finger Printing
- 9 Fast Fourier Transform and Applications
- 10 Graph Algorithms
- 11 Maximum Flow and Applications
- 12 NP Completeness and Approximation Algorithms
- 13 Dimensionality Reduction*
- 14 Parallel Algorithms
- 15 Memory Hierarchy and Caching
- 16 Streaming Data Model
- Appendix A Recurrences and Generating Functions
- Bibliography
- Index
14 - Parallel Algorithms
Published online by Cambridge University Press: 27 April 2019
Summary
Models of Parallel Computation
There is a perpetual need for faster computation that is unlikely ever to be satisfied. With device technologies hitting physical limits, alternative computational models are being explored. The Big Data phenomenon precedes the coinage of the term by many decades. One of the earliest and most natural directions for speeding up computation was to deploy multiple processors, rather than a single processor, to run the same program. The ideal objective is to speed up a program p-fold by using p processors simultaneously. A common caveat is that an egg cannot be boiled faster by employing multiple cooks! Analogously, a program cannot be executed indefinitely faster by using more and more processors. This is not just because of physical limitations, but also because of dependencies between various fragments of the code, imposed by precedence constraints.
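To make this limit concrete, here is a minimal Python sketch illustrating it via Amdahl's law (an illustration on our part, not a result developed in this chapter): a fixed sequential fraction of the running time caps the achievable speedup, no matter how many processors are deployed.

```python
def amdahl_speedup(p, sequential_fraction):
    """Upper bound on p-processor speedup when a fixed fraction
    of the running time is inherently sequential (Amdahl's law)."""
    s = sequential_fraction
    return 1.0 / (s + (1.0 - s) / p)

# With a 10% sequential part, the speedup can never exceed 10x,
# regardless of the number of processors.
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(p, 0.10), 2))
```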
At a lower level, namely in digital hardware design, parallelism is inherent: any circuit can be viewed as a parallel computational model. Signals travel along different paths and through different components, and combine to yield the desired result. In contrast, a program is written in a very sequential manner, and its data flows often depend on one another; just think of a loop whose iterations execute one after another. Moreover, for a given problem, one may have to re-design a sequential algorithm to extract more parallelism. In this chapter, we focus on designing fast parallel algorithms for fundamental problems.
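As a hypothetical illustration of such data dependence (the variable names are ours, not the book's), compare a loop whose iterations are independent with one whose iterations are chained:

```python
a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]

# Independent iterations: each b[i] depends only on a[i], so all
# iterations could in principle run on separate processors at once.
b = [2.0 * x for x in a]

# Loop-carried dependency: c[i] needs c[i-1] first, so the loop,
# as written, must execute its iterations in sequence.
c = [0.0] * len(a)
c[0] = a[0]
for i in range(1, len(a)):
    c[i] = c[i - 1] + a[i]
```

The second loop computes a prefix sum; it can be parallelized, but only by re-designing the algorithm rather than by naively distributing its iterations.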
A very important facet of parallel algorithm design is the underlying architecture of the computer, viz., how the processors communicate with each other and access data concurrently. Moreover, is there a common clock against which we can measure the actual running time? Synchronization is an important property that makes parallel algorithm design somewhat more tractable. In more general asynchronous models, there are additional issues like deadlock and even convergence, which are very challenging to analyze.
In this chapter, we will consider synchronous parallel models (sometimes called SIMD) and look at two important ones: the parallel random access machine (PRAM) and the interconnection network model. The PRAM model is the parallel counterpart of the popular sequential RAM model, in which p processors can simultaneously access a common memory called shared memory.
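As a rough sketch of the PRAM style of computation (a simulation written for illustration; the book's formal model treats processor allocation and concurrent access more carefully), n values in shared memory can be summed in O(log n) synchronous rounds:

```python
def pram_sum(values):
    """Simulate a synchronous PRAM summing n values in shared memory.
    Each round halves the number of active cells by adding in a
    partner a fixed stride away; after ceil(log2 n) rounds the total
    sits in cell 0. Within a round, all reads and writes touch
    disjoint cells, so no concurrent access to a cell is needed
    (EREW-style)."""
    shared = list(values)   # the shared memory
    n = len(shared)
    stride = 1
    while stride < n:
        # One synchronous step: processors at i = 0, 2*stride, ...
        # act simultaneously; the inner loop merely emulates them.
        for i in range(0, n - stride, 2 * stride):
            shared[i] += shared[i + stride]
        stride *= 2
    return shared[0]

print(pram_sum(range(1, 17)))  # 136, computed in 4 rounds
```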
Design and Analysis of Algorithms: A Contemporary Perspective, pp. 277–307. Publisher: Cambridge University Press. Print publication year: 2019.