
Predictive Accuracy as an Achievable Goal of Science

Published online by Cambridge University Press:  01 January 2022

Malcolm R. Forster
Affiliation: University of Wisconsin-Madison
Send requests for reprints to the author, Department of Philosophy, 5185 Helen C. White Hall, 600 North Park Street, Madison, WI 53706; [email protected]; homepage http://philosophy.wisc.edu/forster/.

Abstract

What has science actually achieved? A theory of achievement should (1) define what has been achieved, (2) describe the means or methods used in science, and (3) explain how such methods lead to such achievements. Predictive accuracy is one truth-related achievement of science, and there is an explanation of why common scientific practices (of trading off simplicity and fit) tend to increase predictive accuracy. Akaike's explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the general framework, for it also provides a clear formulation of many open problems of research.
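An illustrative aside, not part of the published abstract: the trade-off between simplicity and fit can be made concrete with Akaike's criterion. For least-squares fits with Gaussian errors, AIC reduces (up to a constant) to n ln(RSS/n) + 2k, where k is the number of adjustable parameters, and the model with the lowest score is the one estimated to have the greatest predictive accuracy. The Python sketch below uses invented data (a noisy quadratic signal) and a hypothetical range of polynomial models purely for illustration.

# A minimal sketch of AIC-based model selection, assuming Gaussian errors and
# least-squares fitting; the data-generating curve, noise level, and candidate
# degrees are invented for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.2, size=x.size)  # noisy quadratic signal

def aic(degree):
    """AIC for a least-squares polynomial fit: n*ln(RSS/n) + 2k, where k counts
    the fitted coefficients (constants common to all candidate models are dropped)."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1
    return x.size * np.log(rss / x.size) + 2 * k

scores = {d: round(aic(d), 2) for d in range(1, 8)}
print(scores)
print("Degree favored by AIC:", min(scores, key=scores.get))

Higher-degree polynomials always fit the sample at least as well, but the 2k penalty means the extra parameters must earn their keep; this is the sense in which trading off simplicity against fit tends to increase predictive accuracy.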

Type: Research Article
Copyright: © The Philosophy of Science Association


References

Akaike, H. (1973), “Information Theory and an Extension of the Maximum Likelihood Principle”, in Petrov, B. N. and Csaki, F. (eds.), 2nd International Symposium on Information Theory. Budapest: Akademiai Kiado, 267–281.
Akaike, H. (1994), “Implications of the Informational Point of View on the Development of Statistical Science”, in Bozdogan, H. (ed.), Engineering and Scientific Applications, Vol. 3, Proceedings of the First US/Japan Conference on the Frontiers of Statistical Modeling: An Informational Approach. Dordrecht: Kluwer, 27–38.
Browne, Michael (2000), “Cross-validation Methods”, Journal of Mathematical Psychology 44:108–132.
Busemeyer, J. R., and Wang, Yi-Min (2000), “Model Comparisons and Model Selections Based on Generalization Test Methodology”, Journal of Mathematical Psychology 44:177–189.
Cramér, H. (1946a), Mathematical Methods of Statistics. Princeton, N.J.: Princeton University Press.
Cramér, H. (1946b), “A Contribution to the Theory of Statistical Estimation”, Skandinavisk Aktuarietidskrift 29:85–94.
Forster, Malcolm R. (2000), “Key Concepts in Model Selection: Performance and Generalizability”, Journal of Mathematical Psychology 44:205–231.
Forster, Malcolm R., and Sober, Elliott (1994), “How to Tell when Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions”, British Journal for the Philosophy of Science 45:1–35.
Kieseppä, I. A. (1997), “Akaike Information Criterion, Curve-fitting, and the Philosophical Problem of Simplicity”, British Journal for the Philosophy of Science 48:21–48.
Kruse, Michael (1997), “Variation and the Accuracy of Predictions”, British Journal for the Philosophy of Science 48:181–193.
Kruse, Michael (1999), “Beyond Bayes: Comments on Hellman”, Philosophy of Science 66:165–174.
Kruse, Michael (2000), “Invariance, Symmetry and Rationality”, Synthese 122:337–357.
Kuhn, Thomas (1970), The Structure of Scientific Revolutions, Second Edition. Chicago: University of Chicago Press.
Mulaik, Stanley A. (2001), “The Curve-Fitting Problem: An Objectivist View”, Philosophy of Science 68:218–241.
Niiniluoto, Ilkka (1998), “Verisimilitude: The Third Period”, British Journal for the Philosophy of Science 49:1–29.
Popper, Karl (1968), Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books.
Porter, Theodore (1986), The Rise of Statistical Thinking 1820–1900. Princeton, N.J.: Princeton University Press.
Rosenkrantz, Roger D. (1977), Inference, Method, and Decision. Dordrecht: D. Reidel.
Shamos, Morris H. (ed.) (1959), Great Experiments in Physics: Firsthand Accounts from Galileo to Einstein. New York: Dover Publications.
Sober, Elliott (1988), “Likelihood and Convergence”, Philosophy of Science 55:228–237.
Sober, Elliott (2002), “Instrumentalism, Parsimony, and the Akaike Framework”, Philosophy of Science (forthcoming).
Stewart, Ian (1989), Does God Play Dice? The Mathematics of Chaos. Oxford: Basil Blackwell.
Stone, M. (1977), “An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike’s Criterion”, Journal of the Royal Statistical Society, Series B 39:44–47.
van Fraassen, Bas (1980), The Scientific Image. Oxford: Oxford University Press.
Wasserman, Larry (2000), “Bayesian Model Selection and Model Averaging”, Journal of Mathematical Psychology 44:92–107.
Whewell, William ([1858] 1967), Novum Organon Renovatum. Originally published as Part II of the 3rd edition of The Philosophy of the Inductive Sciences. London: Cass.
Zucchini, Walter (2000), “An Introduction to Model Selection”, Journal of Mathematical Psychology 44:41–61.