Published online by Cambridge University Press: 03 January 2022
Evaluative studies of inductive inferences have been pursued extensively with mathematical rigor in many disciplines, such as statistics, econometrics, computer science, and formal epistemology. Attempts have been made in those disciplines to justify many different kinds of inductive inferences, to varying extents. But somehow those disciplines have said almost nothing to justify a most familiar kind of induction, an example of which is this: “We’ve seen this many ravens and they all are black, so all ravens are black.” This is enumerative induction in its full strength. For it does not settle for a weaker conclusion (such as “the ravens observed in the future will all be black”); nor does it proceed from any additional premise (such as the statistical IID assumption). The goal of this paper is to take some initial steps toward a justification for the full version of enumerative induction, against counterinduction, and against the skeptical policy. The idea is to explore various epistemic ideals, mathematically defined as different modes of convergence to the truth, and to look for one that is weak enough to be achievable and strong enough to justify a norm that governs both the long run and the short run. So the proposal is learning-theoretic in essence, but a Bayesian version is developed as well.
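The contrast among the three policies mentioned above can be sketched as toy learning methods that map a finite sequence of raven observations to a verdict on the hypothesis “all ravens are black.” This is an illustrative sketch only, not the paper’s formalism; the function names, the Boolean encoding of observations, and the verdict strings are all assumptions made for the example.

```python
# Toy learning methods for the hypothesis "all ravens are black".
# Observations are encoded as booleans: True = a black raven was observed.
# (Illustrative encoding; not the paper's formal framework.)

def enumerative_induction(data):
    # Conjecture the universal hypothesis iff every observed raven is black.
    return "all ravens are black" if all(data) else "not all ravens are black"

def counterinduction(data):
    # Conjecture the opposite of what enumerative induction would conjecture.
    return "not all ravens are black" if all(data) else "all ravens are black"

def skeptical_policy(data):
    # Suspend judgment no matter what the data say.
    return "suspend judgment"

# On a data stream in which every observed raven is black (so the universal
# hypothesis is true in this toy world), enumerative induction outputs the
# truth at every stage, while the other two policies never do.
stream = [True] * 10
for n in range(1, len(stream) + 1):
    assert enumerative_induction(stream[:n]) == "all ravens are black"
    assert counterinduction(stream[:n]) == "not all ravens are black"
    assert skeptical_policy(stream[:n]) == "suspend judgment"
```

The sketch illustrates the sense in which only the first policy converges to the truth on such a stream; which mode of convergence singles it out among all possible methods is the question the paper pursues.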