Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- 1 Determinacy in a synchronous π-calculus
- 2 Classical coordination mechanisms in the chemical model
- 3 Sequential algorithms as bistable maps
- 4 The semantics of dataflow with firing
- 5 Kahn networks at the dawn of functional programming
- 6 A simple type-theoretic language: Mini-TT
- 7 Program semantics and infinite regular terms
- 8 Algorithms for equivalence and reduction to minimal form for a class of simple recursive equations
- 9 Generalized finite developments
- 10 Semantics of program representation graphs
- 11 From Centaur to the Meta-Environment: a tribute to a great meta-technologist
- 12 Towards a theory of document structure
- 13 Grammars as software libraries
- 14 The Leordo computation system
- 15 Theorem-proving support in programming language semantics
- 16 Nominal verification of algorithm W
- 17 A constructive denotational semantics for Kahn networks in Coq
- 18 Asclepios: a research project team at INRIA for the analysis and simulation of biomedical images
- 19 Proxy caching in split TCP: dynamics, stability and tail asymptotics
- 20 Two-by-two static, evolutionary, and dynamic games
- 21 Reversal strategies for adjoint algorithms
- 22 Reflections on INRIA and the role of Gilles Kahn
- 23 Can a systems biologist fix a Tamagotchi?
- 24 Computational science: a new frontier for computing
- 25 The descendants of Centaur: a personal view on Gilles Kahn's work
- 26 The tower of informatic models
- References
21 - Reversal strategies for adjoint algorithms
Published online by Cambridge University Press: 06 August 2010
Abstract
Adjoint algorithms are a powerful way to obtain the gradients that are needed in scientific computing. Automatic differentiation can build adjoint algorithms automatically by source transformation of the direct algorithm. The specific structure of adjoint algorithms relies strongly on reversing the sequence of computations made by the direct algorithm. This reversal problem is at the same time difficult and interesting. This paper surveys the reversal strategies employed in recent tools and describes some of the more abstract formalizations used to justify these strategies.
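To make the reversal idea concrete, here is a minimal sketch (not taken from the paper, and with illustrative function and variable names) of a direct algorithm and its adjoint: the direct sweep stores the intermediate values it computes, and the adjoint sweep visits the statements of the direct algorithm in reverse order to propagate derivatives.

```python
import math

def direct(x):
    """Direct algorithm: y = sin(x**2), computed in two steps."""
    tape = []
    v = x * x            # step 1
    tape.append(v)       # store the intermediate value needed by the adjoint
    y = math.sin(v)      # step 2
    return y, tape

def adjoint(x, tape, ybar=1.0):
    """Adjoint algorithm: propagates derivatives in *reverse* step order."""
    v = tape.pop()
    vbar = ybar * math.cos(v)   # reverse of step 2: d(sin v)/dv = cos(v)
    xbar = vbar * 2.0 * x       # reverse of step 1: d(x*x)/dx = 2x
    return xbar                 # the gradient dy/dx

y, tape = direct(1.3)
print(adjoint(1.3, tape))       # matches 2 * 1.3 * cos(1.3**2)
```

The need to recover intermediate values of the direct sweep, here kept on a simple tape, is what makes the reversal problem costly and interesting in practice.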
Why build adjoint algorithms?
Gradients are a powerful tool for mathematical optimization. The Newton method, for example, uses the gradient to find a zero of a function iteratively, with an accuracy that improves quadratically with the number of iterations. In the context of optimization, the optimum is a zero of the gradient itself, so the Newton method needs second derivatives in addition to the gradient. In scientific computing, the most popular optimization methods, such as BFGS, also perform best when provided with gradients.
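As a hedged illustration of this point (an assumption for exposition, not an example from the chapter), the sketch below applies the Newton method to optimization by searching for a zero of the gradient, which requires both the gradient and the second derivative.

```python
def newton_minimize(grad, hess, x0, steps=10):
    """Iterate x <- x - f'(x)/f''(x); converges quadratically near the optimum."""
    x = x0
    for _ in range(steps):
        x = x - grad(x) / hess(x)
    return x

# Minimize f(x) = (x - 3)**2 + 1: gradient is 2*(x - 3), second derivative is 2.
print(newton_minimize(lambda x: 2.0 * (x - 3.0), lambda x: 2.0, x0=0.0))  # ~3.0
```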
In real-life engineering, the systems that must be simulated are complex: even when they are modeled by classical mathematical equations, an analytic solution is out of reach. The equations must therefore be discretized on the simulation domain and then solved, for example iteratively, by a computer algorithm.
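A minimal sketch of this discretize-then-solve pattern, under assumptions chosen purely for illustration (the chapter does not use this example): a one-dimensional Poisson problem -u'' = 1 on [0, 1] with u(0) = u(1) = 0, discretized by finite differences and solved iteratively with Jacobi sweeps.

```python
def solve_poisson_1d(n=50, sweeps=5000):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)        # includes the boundary values u[0] = u[n+1] = 0
    f = [1.0] * (n + 2)        # constant right-hand side
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n + 1):
            # Jacobi update from (-u[i-1] + 2u[i] - u[i+1]) / h**2 = f[i]
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

u = solve_poisson_1d()
print(max(u))  # close to the analytic maximum u(0.5) = 1/8
```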
From Semantics to Computer Science: Essays in Honour of Gilles Kahn, pp. 489-506. Publisher: Cambridge University Press. Print publication year: 2009.