Book contents
- Frontmatter
- Dedication
- Contents
- List of Figures
- Preface and Acknowledgments
- Introduction: Abstract Models of Learning
- 1 Consistency and Symmetry
- 2 Bounded Rationality
- 3 Pattern Learning
- 4 Large Worlds
- 5 Radical Probabilism
- 6 Reflection
- 7 Disagreement
- 8 Consensus
- Appendix A Inductive Logic
- Appendix B Partial Exchangeability
- Appendix C Marley's Axioms
- Bibliography
- Index
Preface and Acknowledgments
Published online by Cambridge University Press: 25 October 2017
Summary
The work presented here develops a comprehensive probabilistic approach to learning from experience. The central question I try to answer is: “What is a correct response to some new piece of information?” This question calls for an evaluative analysis of learning, one that tells us whether, or when, a learning procedure is rational. At its core, this book embraces a Bayesian approach to rational learning, which is prominent in economics, philosophy of science, statistics, and epistemology. Bayesian rational learning rests on two pillars: consistency and symmetry. Consistency requires that beliefs are probabilities and that new information is incorporated consistently into one's old beliefs. Symmetry leads to tractable models of how to update probabilities. I will endorse this approach to rational learning, but my main objective is to extend it to models of learning that seem to fall outside the Bayesian purview – in particular, to models of so-called “bounded rationality.” While these models often cannot be reconciled with Bayesian decision theory (maximization of expected utility), I hope to show that they are governed by consistency and symmetry; as it turns out, many bounded learning models can be derived from first principles in the same way as Bayesian learning models.
This project is a continuation of Richard Jeffrey's epistemological program of radical probabilism. Radical probabilism holds that a proper Bayesian epistemology should be broad enough to encompass many different forms of learning from experience besides conditioning on factual evidence, the standard form of Bayesian updating. The fact that boundedly rational learning can be treated in a Bayesian manner, by using consistency and symmetry, allows us to bring such models under the umbrella of radical probabilism; in a sense, a broadly conceived Bayesian approach provides us with “the one ring to rule them all” (copyright Jeff Barrett). As a consequence, the difference between high-rationality models and bounded-rationality models of learning is not as large as it is sometimes thought to be; rather than residing in the core principles of rational learning, it originates in the type of information used for updating.
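For readers unfamiliar with the contrast drawn above, the two updating rules at issue can be stated briefly. Standard Bayesian updating conditions on a proposition $E$ learned with certainty, while Jeffrey's probability kinematics, the paradigm case of radical probabilism, only requires that experience redistribute probability over a partition $\{E_1,\dots,E_n\}$. These are the textbook formulations, not notation taken from the book itself:

```latex
% Standard Bayesian updating: conditioning on evidence E learned with certainty
P_{\mathrm{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \cap E)}{P(E)},
\qquad P(E) > 0.

% Jeffrey conditioning (probability kinematics): experience shifts the
% probabilities of a partition E_1, ..., E_n without making any E_i certain
P_{\mathrm{new}}(A) \;=\; \sum_{i=1}^{n} P(A \mid E_i)\, P_{\mathrm{new}}(E_i).
```

Conditioning is the special case of Jeffrey conditioning in which some $E_i$ receives probability one, which is why radical probabilism counts it as one learning rule among others rather than the only rational one.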
- Type: Chapter
- Book: The Probabilistic Foundations of Rational Learning, pp. xi–xiv
- Publisher: Cambridge University Press
- Print publication year: 2017