Book contents
- Frontmatter
- Dedication
- Contents
- List of Figures
- Preface and Acknowledgments
- Introduction: Abstract Models of Learning
- 1 Consistency and Symmetry
- 2 Bounded Rationality
- 3 Pattern Learning
- 4 Large Worlds
- 5 Radical Probabilism
- 6 Reflection
- 7 Disagreement
- 8 Consensus
- Appendix A Inductive Logic
- Appendix B Partial Exchangeability
- Appendix C Marley's Axioms
- Bibliography
- Index
5 - Radical Probabilism
Published online by Cambridge University Press: 25 October 2017
Summary
Radical Probabilism doesn't insist that probabilities be based on certainties; it can be probabilities all the way down, to the roots.
Richard Jeffrey, Radical Probabilism

In Chapter 1, we introduced the two main aspects of Bayesian rational learning: dynamic consistency and symmetry. The preceding chapters focused on symmetries. In particular, we have seen that learning models other than Bayesian conditioning agree that updating on new information should be consistent with one's overall inductive assumptions about the learning situation. In this chapter we return to the issue of dynamic consistency. Recall that Bayesian conditioning is the only dynamically consistent rule for updating probabilities in a special, and particularly important, class of learning situations in which an agent learns the truth of a factual proposition. The basic rationale for dynamic consistency is that an agent's probabilities are best estimates prior and posterior to the learning experience only if they cohere with one another.
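For reference, the updating rule in question is ordinary Bayesian conditioning. A standard statement (the notation P, A, and E is ours, not necessarily the book's):

```latex
% Bayesian conditioning: upon learning that proposition E is true
% (with P(E) > 0), the new probability of any proposition A is the
% old conditional probability of A given E.
P_{\mathrm{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \cap E)}{P(E)}
```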
This chapter explores dynamic consistency in the context of learning models of bounded rationality. Since these models depart rather sharply from Bayesian conditioning, it is not immediately clear how, or even whether, they can be dynamically consistent. The relevant insights come from Richard Jeffrey's epistemological program of radical probabilism, which holds that Bayesian conditioning is just one among many legitimate forms of learning. After introducing Jeffrey's main ideas, we will see that his epistemology provides a large enough umbrella to include the probabilistic models of learning we have encountered in the preceding chapters, and many more. Two principles of radical probabilism, in particular, will assume a decisive role: Bas van Fraassen's reflection principle and its generalization, the martingale principle, which is due to Brian Skyrms. The two principles extend dynamic consistency to generalized learning processes and thereby allow us to say when such a process updates consistently on new information, even if the content of the information cannot be expressed as an observational proposition.
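Stated formally, in a standard rendering (the time subscripts are ours and may differ from the book's notation), the two principles read:

```latex
% Reflection principle (van Fraassen): one's current probability for A,
% conditional on one's later probability for A being p, should equal p.
P_t\bigl(A \mid P_{t'}(A) = p\bigr) = p, \qquad t' > t

% Martingale principle (Skyrms): one's current probability for A should
% equal the current expectation of one's later probability for A.
P_t(A) = \mathbb{E}_t\bigl[P_{t'}(A)\bigr], \qquad t' > t
```

Reflection implies the martingale condition by averaging over the possible values of the later probability, so the martingale principle is the weaker, and hence more general, of the two requirements.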
- Type: Chapter
- Information: The Probabilistic Foundations of Rational Learning, pp. 102-125
- Publisher: Cambridge University Press
- Print publication year: 2017