Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- 1 Introduction
- 2 Preliminary
- 3 Fundamental Theory and Algorithms of Edge Learning
- 4 Communication-Efficient Edge Learning
- 5 Computation Acceleration
- 6 Efficient Training with Heterogeneous Data Distribution
- 7 Security and Privacy Issues in Edge Learning Systems
- 8 Edge Learning Architecture Design for System Scalability
- 9 Incentive Mechanisms in Edge Learning Systems
- 10 Edge Learning Applications
- Bibliography
- Index
8 - Edge Learning Architecture Design for System Scalability
Published online by Cambridge University Press: 14 January 2022
Summary
With the growth of model complexity and computational overhead, modern ML applications are usually handled by distributed systems, where the training procedure is conducted in parallel. Broadly, the dataset is partitioned across workers in data parallelism, while the model itself is partitioned across workers in model parallelism.
In this chapter, we present the details of these two schemes. Moreover, considering recent research that handles distributed training via multiple primitives, we also discuss extensions of training parallelism, i.e., learning frameworks and efficient synchronization mechanisms over a hierarchical architecture.
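The data-parallel scheme described above can be illustrated with a minimal sketch (the function names and the toy linear model are assumptions for illustration, not the book's implementation): each worker holds one shard of the dataset, computes a local gradient for a shared model, and a parameter server averages the gradients before applying a single synchronous update.

```python
# Hypothetical sketch of synchronous data parallelism: the dataset is
# partitioned into shards, one per worker; the model (a single scalar
# weight w here) is replicated on every worker.

def local_gradient(w, shard):
    # Gradient of the mean squared error 0.5*(w*x - y)^2 over one shard,
    # as a worker would compute it on its local data.
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    # Each worker computes a gradient on its own shard; the parameter
    # server averages them and applies one synchronous update.
    grads = [local_gradient(w, s) for s in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Partition a toy dataset y = 2x across two workers and train.
data = [(x, 2.0 * x) for x in range(1, 9)]
shards = [data[:4], data[4:]]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, shards)
print(round(w, 2))  # converges to the true weight 2.0
```

Model parallelism would instead split the parameters themselves across workers (e.g., different layers on different devices), with activations exchanged between them during the forward and backward passes.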
- Type
- Chapter
- Information
- Edge Learning for Distributed Big Data Analytics: Theory, Algorithms, and System Design, pp. 131 - 158
- Publisher: Cambridge University Press
- Print publication year: 2022