Since Ramsey, much discussion of the relation between probability and belief has taken for granted that there are degrees of belief, i.e., that there is a real-valued function, B, that characterizes the degree of belief an agent has in each statement of his language. It is then supposed that B is a probability, and often further supposed that as the agent accumulates evidence E, this function should be updated by conditioning: $B^{E}(\cdot)$ should be $B(\cdot \wedge E) / B(E)$.
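As a minimal numerical sketch of this update rule (the die example is my illustration, not the paper's), conditioning a uniform belief function on a finite outcome space works as follows:

```python
# Illustrative sketch: Bayesian conditioning on a finite outcome space.
# Degrees of belief B are modeled as a probability over the faces of a
# fair die; conditioning on evidence E yields B^E(A) = B(A & E) / B(E).
from fractions import Fraction

outcomes = range(1, 7)
B = {w: Fraction(1, 6) for w in outcomes}  # uniform prior degrees of belief

def conditioned(B, E):
    """Return B^E, the belief function updated on evidence E (a set of outcomes)."""
    b_e = sum(B[w] for w in E)
    return lambda A: sum(B[w] for w in A if w in E) / b_e

even = {2, 4, 6}
E = {4, 5, 6}          # evidence: the outcome exceeds 3
B_E = conditioned(B, E)

print(B_E(even))       # B(even & E) / B(E) = (2/6) / (3/6) = 2/3
```

The prior belief that the die shows an even face is 1/2; after conditioning on the evidence that the outcome exceeds 3, it rises to 2/3.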
Probability is also central to classical statistics, where it is generally supposed that probabilities are frequencies and that inference proceeds by controlling error rather than by conditioning. I will focus on the tension between these two approaches to probability, and in the main part of the paper show where and when Bayesian conditioning conflicts with error-based statistics and how to resolve these conflicts.