Book contents
- Frontmatter
- Contents
- Preface
- Part I Judgments and Rules
- Part II Statics and Dynamics
- Part III Function Types
- Part IV Finite Data Types
- Part V Infinite Data Types
- Part VI Dynamic Types
- Part VII Variable Types
- Part VIII Subtyping
- Part IX Classes and Methods
- Part X Exceptions and Continuations
- Part XI Types and Propositions
- Part XII Symbols
- Part XIII State
- Part XIV Laziness
- Part XV Parallelism
- 39 Nested Parallelism
- 40 Futures and Speculations
- Part XVI Concurrency
- Part XVII Modularity
- Part XVIII Equational Reasoning
- Part XIX Appendix
- Bibliography
- Index
39 - Nested Parallelism
from Part XV - Parallelism
Published online by Cambridge University Press: 05 February 2013
Summary
Parallel computation seeks to reduce the running times of programs by allowing many computations to be carried out simultaneously. For example, if we wish to add two numbers, each given by a complex computation, we may consider evaluating the addends simultaneously, then computing their sum. The ability to exploit parallelism is limited by the dependencies among parts of a program. Obviously, if one computation depends on the result of another, then we have no choice but to execute them sequentially so that we may propagate the result of the first to the second. Consequently, the fewer dependencies among subcomputations, the greater the opportunities for parallelism. This argues for functional models of computation, because the possibility of mutation of shared assignables imposes sequentialization constraints on imperative code.
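To make the addition example concrete, here is a minimal fork-join sketch, assuming OCaml 5's Domain module; the chapter's own parallel binder is a language construct with its own dynamics, not a library call, so this is only an illustration of the idea, and the helpers par and fib are ours, not the book's.

```ocaml
(* Naive Fibonacci: a stand-in for "a complex computation". *)
let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

(* par e1 e2 evaluates e1 and e2 simultaneously and joins their results. *)
let par (e1 : unit -> 'a) (e2 : unit -> 'b) : 'a * 'b =
  let t = Domain.spawn e1 in   (* fork: run e1 on another domain  *)
  let v2 = e2 () in            (* meanwhile, run e2 here          *)
  (Domain.join t, v2)          (* join: wait for e1, pair results *)

(* The addends have no dependency on one another, so they may be
   evaluated in parallel; the sum must wait for both (a join). *)
let () =
  let (a, b) = par (fun () -> fib 32) (fun () -> fib 33) in
  Printf.printf "%d\n" (a + b)
```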
In this chapter we discuss nested parallelism in which we nest parallel computations within one another in a hierarchical manner. Nested parallelism is sometimes called fork-join parallelism to emphasize the hierarchical structure arising from forking two (or more) parallel computations, then joining these computations to combine their results before proceeding. We consider two forms of dynamics for nested parallelism. The first is a structural dynamics in which a single transition on a compound expression may involve multiple transitions on its constituent expressions. The second is a cost dynamics (introduced in Chapter 7) that focuses attention on the sequential and parallel complexity (also known as the work and depth) of a parallel program by associating a series-parallel graph with each computation.
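The work/depth accounting can be sketched directly, assuming the usual cost rules: sequential composition adds both work and depth, while parallel composition adds work but takes the maximum of the two depths. The names cost, seq, par, and psum_cost below are illustrative, not the book's notation.

```ocaml
type cost = { work : int; depth : int }

(* Sequential composition: both charges accumulate. *)
let seq c1 c2 = { work = c1.work + c2.work; depth = c1.depth + c2.depth }

(* Parallel composition: work accumulates, depth is the longer branch. *)
let par c1 c2 = { work = c1.work + c2.work; depth = max c1.depth c2.depth }

(* Cost of summing n numbers by balanced parallel divide-and-conquer:
   the work is O(n), but the depth is only O(log n). *)
let rec psum_cost n =
  if n <= 1 then { work = 1; depth = 1 }
  else
    let half = psum_cost (n / 2) in
    seq (par half half) { work = 1; depth = 1 }  (* one combining addition *)
```

For example, psum_cost 8 yields { work = 15; depth = 4 }: fifteen unit-cost steps in total, but a critical path of only four, which is the sense in which the series-parallel graph exposes parallelism.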
Type: Chapter
Information: Practical Foundations for Programming Languages, pp. 325-336
Publisher: Cambridge University Press
Print publication year: 2012