Book contents
- Frontmatter
- Dedication
- Content
- List of Figures
- List of Tables
- Preface
- Acknowledgments
- 1 Model and Analysis
- 2 Basics of Probability and Tail Inequalities
- 3 Warm-up Problems
- 4 Optimization I: Brute Force and Greedy Strategy
- 5 Optimization II: Dynamic Programming
- 6 Searching
- 7 Multidimensional Searching and Geometric Algorithms
- 8 String Matching and Finger Printing
- 9 Fast Fourier Transform and Applications
- 10 Graph Algorithms
- 11 Maximum Flow and Applications
- 12 NP Completeness and Approximation Algorithms
- 13 Dimensionality Reduction*
- 14 Parallel Algorithms
- 15 Memory Hierarchy and Caching
- 16 Streaming Data Model
- Appendix A Recurrences and Generating Functions
- Bibliography
- Index
15 - Memory Hierarchy and Caching
Published online by Cambridge University Press: 27 April 2019
Summary
Models of Memory Hierarchy
Designing the memory architecture is an important component of computer organization: it tries to achieve a balance between computational speed and memory speed, viz., the time to fetch operands from memory. Computation is much faster since the processing happens within the chip, whereas a memory access may involve off-chip memory units. To bridge this disparity, a modern computer has several layers of memory, called cache memories, that provide faster access to operands. Because of technological and cost limitations, cache memories offer a range of speed–cost tradeoffs. For example, the L1 cache, the fastest level, is usually also the smallest. The L2 cache is larger, say by a factor of ten, but also considerably slower. The secondary memory, e.g., the disk, is the largest in size but could be 10,000 times slower than the L1 cache. For any large application, most of the data resides on disk and is transferred to the faster levels of cache when required.
This movement of data is usually beyond the control of the programmer and is managed by the operating system and the hardware. Guided by empirical principles called temporal and spatial locality of memory access, various replacement policies are used to maximize the chances of keeping the operands in the faster cache levels (see the sketch below). However, there will inevitably be occasions when the required operand is not present in L1; one then has to reach out to L2 and beyond, and pay the penalty of a higher access cost. In other words, memory access cost is not uniform; for simplicity of analysis, we had pretended in the beginning of this book that it remains the same.
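To make the idea of a replacement policy concrete, here is a minimal sketch (not from the book) of the classic least-recently-used (LRU) policy, which exploits temporal locality by evicting the block whose last access lies furthest in the past. The class and method names are our own, chosen for illustration.

```python
# Illustrative sketch of an LRU replacement policy (not the book's code).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity     # number of blocks the fast memory holds
        self.blocks = OrderedDict()  # insertion order tracks recency of use

    def access(self, block):
        """Return True on a hit; on a miss, load the block and return False."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh recency on a hit
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block] = True
        return False
```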
In this chapter, we do away with this assumption; however, for a simpler exposition, we deal with only two levels of memory, slow and fast, where the slower memory has infinite size while the faster one is limited, say, of size M, and significantly faster. Consequently, we can pretend that the faster memory has zero (negligible) access cost and the slower memory has access cost 1.
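Under this cost model, the total cost of a sequence of memory references is simply the number of accesses not served from fast memory, i.e., the number of misses. A hypothetical helper, building on the LRUCache sketch above, makes this explicit:

```python
# Cost of a reference sequence in the two-level model: fast accesses are
# free, each slow access (miss) costs 1. Assumes the LRUCache sketch above.
def access_cost(trace, M):
    """Total cost of the reference sequence 'trace' with fast memory of M blocks."""
    cache = LRUCache(M)
    return sum(0 if cache.access(b) else 1 for b in trace)
```

For example, with M = 2 the trace a, b, a, c, b costs 4 under LRU: a, b, and c are cold misses, and b is evicted by c before its second use.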
Design and Analysis of Algorithms: A Contemporary Perspective, pp. 308–322. Publisher: Cambridge University Press. Print publication year: 2019.