
On Transmitted Information as a Measure of Explanatory Power

Published online by Cambridge University Press:  01 April 2022

Joseph F. Hanna*
Affiliation:
Michigan State University

Abstract

This paper contrasts two information-theoretic approaches to statistical explanation: (1) an analysis, originating in my earlier research on testing stochastic models of learning, based on an entropy-like measure of expected transmitted information (referred to here as the Expected-Information Model), and (2) an analysis, proposed by James Greeno (and closely related to Wesley Salmon's Statistical Relevance Model), based on the information transmitted by a system. The substantial differences between these analyses can be traced to one basic difference. On Greeno's view, the essence of explanation lies in the relevance relations expressed by the conditional probabilities relating the explanans variables to the explanandum variables; on my view, in contrast, it lies in theories viewed as hypothetical structures which deductively entail the conditional probability distributions linking the explanans variables to the explanandum variables. The explanatory power of a stochastic theory is identified with the information (regarding the values of the explanandum variables) that is “absorbed from” the explanans variables, while other information that is “absorbed from” the explanandum variables themselves (through the process of parameter estimation, for example) reflects the descriptive power of the theory. I prove that Greeno's measure of transmitted information is a limiting special case of the Expected-Information Model, but that the former, unlike the latter, makes no distinction between explanatory power and descriptive power.
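The transmitted-information measure underlying Greeno's analysis is, in modern terms, the mutual information I(S; M) = H(M) − H(M|S) between an explanans variable S and an explanandum variable M. The following is a minimal illustrative sketch of that quantity only, not of the paper's Expected-Information Model; the variable names and the example joint distribution are hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def transmitted_information(joint):
    """Mutual information I(S; M) = H(M) - H(M|S), where the joint
    distribution is given as a dict {(s, m): probability}."""
    s_marginal, m_marginal = {}, {}
    for (s, m), p in joint.items():
        s_marginal[s] = s_marginal.get(s, 0.0) + p
        m_marginal[m] = m_marginal.get(m, 0.0) + p
    h_m = entropy(m_marginal.values())
    # Conditional entropy H(M|S): expectation of H(M | S = s) over S.
    h_m_given_s = 0.0
    for s, ps in s_marginal.items():
        conditional = [p / ps for (s2, _), p in joint.items() if s2 == s]
        h_m_given_s += ps * entropy(conditional)
    return h_m - h_m_given_s

# Hypothetical example: the explanans perfectly determines the explanandum,
# so all of H(M) = 1 bit is transmitted.
joint = {("s1", "m1"): 0.5, ("s2", "m2"): 0.5}
print(transmitted_information(joint))  # 1.0
```

When S and M are statistically independent, the same function returns 0: on such a view, no relevance relations hold, and hence (on Greeno's account) nothing is explained.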

Type
Research Article
Copyright
Copyright © Philosophy of Science Association 1978

Footnotes

I wish again to express my indebtedness to my teacher, Ernest Adams, for a conversation in the Spring of 1964 during which he suggested the possibility of using transmitted information as a measure of predictive success for stochastic theories. His suggestions were elaborated and extended as part of my Ph.D. dissertation [10], directed by Adams and submitted to the faculty of the program in Logic and Methodology of Science at the University of California, Berkeley. Further applications of transmitted information as a measure of predictive, descriptive, and explanatory power were reported in [6], [7], [9], and [11].

References

[1] Goldman, S. Information Theory. New York: Prentice-Hall, 1953.
[2] Greeno, J. G. “Evaluation of Statistical Hypotheses Using Information Transmitted.” Philosophy of Science 37 (1970): 279–294. (As reprinted in [18], pp. 89–104.)
[3] Greeno, J. G. “Theoretical Entities in Statistical Explanation.” In Boston Studies in the Philosophy of Science, Vol. VIII. Edited by Roger C. Buck and Robert S. Cohen. Dordrecht: D. Reidel, 1971. pp. 3–26.
[4] Grier, B. “Prediction, Explanation, and Testability as Criteria for Judging Statistical Theories.” Philosophy of Science 42 (1975): 273–283.
[5] Kemeny, J. “A Logical Measure Function.” Journal of Symbolic Logic 18 (1953): 289–308.
[6] Hanna, J. F. “A New Approach to the Formulation and Testing of Learning Models.” Synthese 16 (1966): 344–380.
[7] Hanna, J. F. “Explanation, Prediction, Description, and Information Theory.” Synthese 20 (1969): 308–344.
[8] Hanna, J. F. “Falsifiability, Simplicity, and Free Parameters.” Manuscript. (Read at the First Biennial Philosophy of Science Association Convention, Pittsburgh, October 1968.)
[9] Hanna, J. F. “Information-Theoretic Techniques for Evaluating Simulation Models.” In Computer Simulation of Human Behavior. Edited by John M. Dutton and William H. Starbuck. New York: John Wiley & Sons, 1971. pp. 682–692.
[10] Hanna, J. F. “The Methodology of the Testing of Learning Models; with Applications to a New Stimulus Discrimination Model of Two-Choice Behavior.” Ph.D. dissertation, University of California, Berkeley, 1965. Xerox University Microfilms Publication Number 663608.
[11] Hanna, J. F. “Some Information Measures for Testing Stochastic Models.” Journal of Mathematical Psychology 6 (1969): 294–311.
[12] Hempel, C. “Aspects of Scientific Explanation.” In [13]. pp. 331–496.
[13] Hempel, C. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: The Free Press, 1965.
[14] Hempel, C. G. “The Theoretician's Dilemma.” In Minnesota Studies in the Philosophy of Science, Vol. II. Edited by Herbert Feigl, Michael Scriven, and Grover Maxwell. Minneapolis: University of Minnesota Press, 1958. pp. 37–98. Also in [13], pp. 173–226.
[15] Hempel, C. and Oppenheim, P. “Studies in the Logic of Explanation.” Philosophy of Science 15 (1948): 135–175. (As reprinted in [13], pp. 245–290.)
[16] Popper, K. R. The Logic of Scientific Discovery. London: Hutchinson & Co., 1959.
[17] Salmon, W. C. “A Third Dogma of Empiricism.” In Basic Problems in Methodology and Linguistics. Edited by R. Butts and J. Hintikka. Dordrecht and Boston: D. Reidel, 1977. pp. 149–166.
[18] Salmon, W. C. “Statistical Explanation.” In Nature and Function of Scientific Theories. Edited by R. G. Colodny. Pittsburgh: University of Pittsburgh Press, 1970. pp. 173–231. (As reprinted in [19], pp. 29–87.)
[19] Salmon, W. C. Statistical Explanation and Statistical Relevance. Pittsburgh: University of Pittsburgh Press, 1971.
[20] Shannon, C. E. and Weaver, W. The Mathematical Theory of Communication. Urbana: University of Illinois Press, 1949.
[21] Shuford, E. H.; Albert, A.; and Massengill, H. C. “Admissible Probability Measurement Procedures.” Psychometrika 31 (1966): 125–146.