
Use-Novelty, Severity, and a Systematic Neglect of Relevant Alternatives

Published online by Cambridge University Press: 01 April 2022

Tetsuji Iseda
Affiliation: University of Maryland
Correspondence: Department of Philosophy, University of Maryland, College Park, Maryland 20742, USA.

Abstract

This paper analyzes Deborah Mayo's recent criticism of the use-novelty requirement. She claims that her severity criterion captures actual scientific practice better than use-novelty does, and that use-novelty is not a necessary condition for severity. Even though there are cases in which evidence used in the construction of a hypothesis can nonetheless test that hypothesis severely, I do not think that her severity criterion fits our intuitions about good tests better than use-novelty does. I argue for this by showing a parallelism, in terms of severity, between the confidence interval case and what she calls ‘gellerization’. To account for the difference between these cases, we need to take into account additional considerations such as a systematic neglect of relevant alternatives.
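As a rough illustration of the severity notion at issue, here is a minimal sketch in Python (not drawn from the paper: the normal-mean setting, the function names, and all numbers are hypothetical). On Mayo's account, the severity with which data pass a hypothesis can be read as the probability that the test would have produced a result less accordant with that hypothesis, were the hypothesis false:

    import math

    def normal_cdf(z):
        # Standard normal CDF, computed via the error function.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def severity(x_bar, mu1, sigma, n):
        # Severity with which an observed sample mean x_bar passes the
        # claim "mu > mu1": the probability of a result less accordant
        # with the claim (a sample mean at most x_bar) if mu were only mu1.
        z = (x_bar - mu1) / (sigma / math.sqrt(n))
        return normal_cdf(z)

    # Hypothetical numbers: sigma = 2, n = 100, observed mean 0.4.
    print(severity(0.4, 0.0, 2.0, 100))  # "mu > 0" passes severely (~0.98)
    print(severity(0.4, 0.3, 2.0, 100))  # "mu > 0.3" passes with low severity (~0.69)

The same observed data thus pass different inferences with different degrees of severity, which is the kind of assessment the paper compares against the use-novelty requirement.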

Type: Probability and Statistical Inference
Copyright © 1999 by the Philosophy of Science Association


Footnotes

I am very thankful to Professor Mayo for her detailed comments on earlier versions of this paper, and for her patience in replying to my questions. I would also like to thank the faculty members and graduate students at the University of Maryland, especially Rob Skipper and Nancy Hall, for their comments.

References

Brush, Stephen G. (1989), “Prediction and Theory Evaluation: The Case of Light Bending”, Science 246: 1124–1129.
Earman, John (1992), Bayes or Bust?: A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.
Giere, Ronald N. (1983), “Testing Theoretical Hypotheses”, in John Earman (ed.), Testing Scientific Theories. Minnesota Studies in the Philosophy of Science, vol. X. Minneapolis: University of Minnesota Press, 269–298.
Howson, Colin and Urbach, Peter (1993), Scientific Reasoning: The Bayesian Approach, 2nd ed. La Salle: Open Court.
Lakatos, Imre (1970), “Falsification and the Methodology of Scientific Research Programmes”, in Imre Lakatos and Alan Musgrave (eds.), Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press, 91–196.
Mayo, Deborah G. (1996), Error and the Growth of Experimental Knowledge. Chicago: University of Chicago Press.
Musgrave, Alan (1978), “Evidential Support, Falsification, Heuristics, and Anarchism”, in Gerard Radnitzky and Gunnar Andersson (eds.), Progress and Rationality in Science. Boston Studies in the Philosophy of Science, vol. LVIII. Dordrecht: Reidel, 181–201.
Popper, Karl (1962), Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books.
Worrall, John (1978a), “The Ways in Which the Methodology of Scientific Research Programmes Improves on Popper's Methodology”, in Gerard Radnitzky and Gunnar Andersson (eds.), Progress and Rationality in Science. Boston Studies in the Philosophy of Science, vol. LVIII. Dordrecht: Reidel, 45–70.
Worrall, John (1978b), “Research Programmes, Empirical Support, and the Duhem Problem: Replies to Criticism”, in Gerard Radnitzky and Gunnar Andersson (eds.), Progress and Rationality in Science. Boston Studies in the Philosophy of Science, vol. LVIII. Dordrecht: Reidel, 321–338.
Worrall, John (1989), “Fresnel, Poisson and the White Spot: The Role of Successful Predictions in the Acceptance of Scientific Theories”, in David Gooding, Trevor Pinch, and Simon Schaffer (eds.), The Uses of Experiment: Studies in the Natural Sciences. Cambridge: Cambridge University Press, 135–157.
Zahar, Elie G. (1973), “Why Did Einstein's Programme Supersede Lorentz's?”, The British Journal for the Philosophy of Science 24: 93–123 and 223–262.