Published online by Cambridge University Press: 28 February 2022
Whether it is due to the incompleteness of information, inaccuracies of measurement, or the stochastic nature of phenomena, a great deal of scientific inference requires probabilistic considerations. In carrying out such inferences, the statistical methods predominantly used are those of the Neyman-Pearson Theory of statistics (NPT). Nevertheless, NPT has been the target of such severe criticisms that nearly all philosophers of induction and statistics have rejected it as inadequate for statistical inference in science. If these criticisms do in fact demonstrate the inadequacy of NPT, then a good portion of statistical inference in science will lack justification. Because of the seriousness of such a conclusion, it is important to consider carefully whether critics of NPT have succeeded in demonstrating its inadequacy.
I would like to thank Ronald Giere and Isaac Levi for very helpful comments on an earlier draft of this paper. I am also grateful to Henry Kyburg and Teddy Seidenfeld for sharing their responses to a former paper (Mayo 1981a) with me; they were extremely useful in clarifying the key points of contention underlying the present paper.