One of the salient features of Carnap's systems of inductive logic is a conditionalization learning model, which also plays a fundamental role in the orthodox Bayesian account of rationality. This learning model does not allow for revision of previously accepted evidence. It is, therefore, not adequate to represent all rational learning by an agent who accepts corrigible propositions as evidence. Recently I used a generalization of the concept of conditional belief to extend the model so that rational revision of previously accepted evidence can be accommodated. My generalization of conditional belief carries with it a natural way of representing an agent's conceptual framework. In this paper I exploit this representation to produce learning models that can accommodate rational conceptual change.
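For concreteness, the conditionalization rule at issue can be stated in its standard textbook form (summarized here, not quoted from the paper): when an agent learns evidence E with certainty, her new credence in any proposition A is her old credence in A conditional on E,

\[
P_{\text{new}}(A) \;=\; P_{\text{old}}(A \mid E) \;=\; \frac{P_{\text{old}}(A \wedge E)}{P_{\text{old}}(E)}, \qquad P_{\text{old}}(E) > 0.
\]

Since this rule sets the credence in E to 1, and conditionalization can never lower a credence of 1, evidence once accepted can never afterwards be revised, which is the inadequacy noted above.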
The idea of conceptual change has received considerable attention in recent work by philosophers of science. The following quotation from Hilary Putnam is instructive.