Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- 1 Introduction
- 2 Preliminary
- 3 Fundamental Theory and Algorithms of Edge Learning
- 4 Communication-Efficient Edge Learning
- 5 Computation Acceleration
- 6 Efficient Training with Heterogeneous Data Distribution
- 7 Security and Privacy Issues in Edge Learning Systems
- 8 Edge Learning Architecture Design for System Scalability
- 9 Incentive Mechanisms in Edge Learning Systems
- 10 Edge Learning Applications
- Bibliography
- Index
3 - Fundamental Theory and Algorithms of Edge Learning
Published online by Cambridge University Press: 14 January 2022
Summary
In this chapter, we first provide convergence results for Stochastic Gradient Descent (SGD), the method commonly adopted to solve machine learning problems. We then introduce advanced training algorithms, including momentum SGD, hyper-parameter-based algorithms, and optimization algorithms for deep learning models. Finally, we present theoretical frameworks for handling the stale gradients incurred by asynchronous parallel (ASP) or stale synchronous parallel (SSP) schemes.
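As a rough illustration of two of the techniques named above, the Python sketch below shows the standard heavy-ball form of momentum SGD; all names (`momentum_sgd`, `grad_fn`, `beta`) are illustrative and not taken from the book's notation.

```python
import numpy as np

def momentum_sgd(grad_fn, w0, lr=0.01, beta=0.9, num_steps=100):
    """Heavy-ball momentum SGD; beta = 0 recovers vanilla SGD."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)      # velocity: accumulated past gradients
    for _ in range(num_steps):
        g = grad_fn(w)        # stochastic gradient estimate at w
        v = beta * v + g      # accumulate momentum
        w = w - lr * v        # step along the velocity
    return w

# Example: minimize f(w) = ||w||^2 / 2, whose gradient is w.
w_star = momentum_sgd(lambda w: w, w0=np.array([5.0, -3.0]))
```

For the stale gradients produced by asynchronous execution, one common heuristic from the staleness-aware asynchronous SGD literature, not necessarily the framework developed in this chapter, is to damp each update by its staleness:

```python
def staleness_aware_step(w, grad, staleness, base_lr=0.01):
    """Apply one asynchronous update, down-weighting stale gradients.

    `staleness` counts how many global updates occurred since the worker
    read its copy of w; dividing the learning rate by (1 + staleness) is
    only one of several possible corrections.
    """
    return w - (base_lr / (1.0 + staleness)) * grad
```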
- Type: Chapter
- Information: Edge Learning for Distributed Big Data Analytics: Theory, Algorithms, and System Design, pp. 24–41
- Publisher: Cambridge University Press
- Print publication year: 2022