
Neither neural networks nor the language-of-thought alone make a complete game

Published online by Cambridge University Press: 28 September 2023

Iris Oved
Affiliation:
Independent Scholar, 911 Central Ave, San Francisco, CA, USA [email protected], [email protected]
Nikhil Krishnaswamy
Affiliation:
Department of Computer Science, Colorado State University, Fort Collins, CO, USA [email protected], https://www.nikhilkrishnaswamy.com/
James Pustejovsky
Affiliation:
Department of Computer Science, Brandeis University, Waltham, MA, USA [email protected], https://jamespusto.com/
Joshua K. Hartshorne
Affiliation:
Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, USA [email protected], http://l3atbc.org/index.html

Abstract

Cognitive science has evolved since early disputes between radical empiricism and radical nativism. The authors are reacting to the revival of radical empiricism spurred by recent successes in deep neural network (NN) models. We agree that language-like mental representations (languages-of-thought [LoTs]) are part of the best game in town, but they cannot be understood independently of the other players.

Type
Open Peer Commentary
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Quilty-Dunn et al. have done a service in summarizing major lines of empirical data supporting a role for symbolic, language-like representations (a language-of-thought [LoT], construed broadly) in theories of cognition. This overview is particularly pressing for audiences of the overhyped popular-press coverage of deep neural networks (NNs). However, Quilty-Dunn et al. have done a disservice to the LoT by setting unfavorable terms for the debate. In particular, they (1) overlook the fact that an LoT is necessarily part of a larger system and thus its effects should rarely be cleanly observed, and (2) do not address well-known concerns about LoTs.

Quilty-Dunn et al. note that statements in an LoT are formed of discrete constituents and denote functions from possible worlds to truth values. Consider a situation in which Alice beats Bart at tug-of-war. This might be represented in an LoT as BEAT(ALICE, BART, TUG-OF-WAR). Fodor (1975) argued that constituents (BEAT, ALICE, BART, TUG-OF-WAR) are “atomic” (unstructured) pointers to metaphysically real entities: Events (beating), properties (Aliceness, Bartness), kinds (tug-of-war), and so on.
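
To make the structure concrete, such a statement can be sketched as a discrete data structure, as below. This is a minimal illustration of the atomic-symbol view only; none of the names or types come from Quilty-Dunn et al. or Fodor.

    # A minimal sketch of an LoT statement on the Fodorian "atomic symbols" view:
    # constituents are unstructured tokens, and a statement is their structured
    # combination. Purely illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Symbol:
        name: str  # an atomic, unstructured pointer; no internal features

    @dataclass(frozen=True)
    class Statement:
        predicate: Symbol
        arguments: tuple  # ordered arguments: (winner, loser, activity)

    BEAT, ALICE, BART, TUG_OF_WAR = (Symbol(n) for n in
                                     ("BEAT", "ALICE", "BART", "TUG-OF-WAR"))

    # BEAT(ALICE, BART, TUG-OF-WAR)
    statement = Statement(BEAT, (ALICE, BART, TUG_OF_WAR))
    print(statement)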

Unfortunately, such entities do not appear to exist; nature is not so easily carved at its joints. An alternative – implicit in many Bayesian models – is to treat the symbols as reifications of some distribution in the world: There are some features that are reliably (if probabilistically) encountered in combination, and we use, for example, “Alice” to refer to one such combination (or a posited essence that explains the combination; see Oved, 2015). This straightforwardly allows for recognition, for example through an NN classifier (Pustejovsky & Krishnaswamy, 2022; Wu, Yildirim, Lim, Freeman, & Tenenbaum, 2015). Thus, the LoT sentence BEAT(ALICE, BART, TUG-OF-WAR) means that in observing the referred-to scene, we would recognize (our NN classifier would identify) an Alice, a Bart, a beating, and tug-of-war, and that these entities would be arranged in the appropriate way (see also Pollock & Oved, 2005). (For readers familiar with possible worlds semantics, the proposition picks out the set of possible worlds where all those recognitions would happen.)
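
To see how such grounding might work mechanically, here is a hedged sketch in which each constituent symbol is checked by a recognizer standing in for a trained NN classifier. The scene encoding, the recognize function, and its Boolean output are all hypothetical simplifications of ours (a real classifier would return a graded score):

    # Sketch: an LoT statement holds of a scene iff a recognizer fires for each
    # constituent and the scene's event matches the predicate. All stand-ins.
    def recognize(concept, region):
        # Placeholder for an NN classifier applied to part of a scene.
        return concept in region.get("labels", set())

    def holds(statement, scene):
        pred, args = statement
        # Every argument symbol must be recognized in some region of the scene...
        matched = {a for a in args for r in scene["regions"] if recognize(a, r)}
        if matched != set(args):
            return False
        # ...and the overall event must be recognized as the predicate.
        return recognize(pred, scene["event"])

    scene = {
        "regions": [{"labels": {"ALICE"}}, {"labels": {"BART"}},
                    {"labels": {"TUG-OF-WAR"}}],
        "event": {"labels": {"BEAT"}},
    }
    print(holds(("BEAT", ("ALICE", "BART", "TUG-OF-WAR")), scene))  # True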

This approach explains, for instance, why we tie ourselves in knots trying to decide whether a cat with the brain of a skunk is a cat or a skunk, or whether the first chicken egg preceded or followed the first chicken. In the LoT, SKUNK, CAT, and CHICKEN are reified abstractions tied to recognition procedures. The world is messier, and the recognition procedures sometimes gum up. Note further that different methods for identifying skunks and cats, and so on (NNs, prototypes, inverse graphics, etc.) have characteristic imprecisions if not outright hallucinations. The predictions of any LoT theory cannot be separated from the manner in which the symbols map onto the world.

Reasoning presents additional complications. Most people infer from Alice beat Bart at tug-of-war that Alice is stronger, that both are humans, not platypodes, are not quadriplegic, and played tug-of-war in a gym or field, not while flying. Although none of these inferences necessarily hold, keeping a completely open mind about them requires willful obtuseness. Critically, such graded, probabilistic inferences have been the bane of symbolic reasoning theories, including LoTs. A promising avenue is to treat LoT statements as conditions on probable worlds generated from a generative model of the world (Goodman, Tenenbaum, & Gerstenberg, 2014; Hartshorne, Jennings, Gerstenberg, & Tenenbaum, 2019). That is, one considers all possible worlds in which Alice beat Bart at tug-of-war. Because the prior probability of aerial quadriplegic tadpoles playing tug-of-war is low, we discount those possibilities (barring additional evidence).
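
This conditioning story can be sketched with toy rejection sampling: draw worlds from a generative model, keep only those in which the LoT statement holds, and read graded inferences off the survivors. Every prior and likelihood below is invented purely for illustration:

    # Sketch of "LoT statements as conditions on probable worlds" via rejection
    # sampling. All probabilities are made-up toy numbers.
    import random

    def sample_world():
        return {
            "alice_stronger": random.random() < 0.5,  # no prior idea who is stronger
            "both_human": random.random() < 0.99,     # players are almost always human
            "airborne": random.random() < 0.001,      # matches are almost never aerial
        }

    def alice_beats_bart(world):
        # Toy likelihood: the stronger player usually wins.
        return random.random() < (0.9 if world["alice_stronger"] else 0.1)

    worlds = [w for w in (sample_world() for _ in range(100_000))
              if alice_beats_bart(w)]
    # Conditioning pulls "Alice is stronger" well above its 0.5 prior...
    print(sum(w["alice_stronger"] for w in worlds) / len(worlds))
    # ...while "airborne" stays near its tiny prior: discounted, barring evidence.
    print(sum(w["airborne"] for w in worlds) / len(worlds))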

Because we cannot do a census of possible worlds, this process requires an internal model of the world. Thus, the exact inferences one gets depend not just on the LoT but on what one believes about the world. They also depend on the nature of the model. In some domains, symbolic generative models seem to capture human intuitions, whereas in others we seem to use analog simulations (Jara-Ettinger, Gweon, Schulz, & Tenenbaum, 2016; Ullman, Spelke, Battaglia, & Tenenbaum, 2017). For example, when imagining Alice beating Bart at tug-of-war, we might use abstract causal beliefs about tug-of-war (Hartshorne et al., 2019), or we might simulate Alice pulling the rope and Bart dragging along the ground in her direction; the latter is more sensitive to the physical properties of the players and the field. Moreover, as a practical matter, one must marginalize out (“average over”) irrelevant parts of one's world model (e.g., who Bart's parents are and what he plans to eat after the match). Determining what is relevant is tricky and substantially affects inferences. Indeed, Bass, Smith, Bonawitz, and Ullman (2021) show that some “cognitive illusions” may be explained by biases in how relevance is determined.
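
The marginalization point in particular can be sketched by enumerating a toy joint distribution that includes an irrelevant variable (here, Bart's dinner plans; all numbers invented) and summing it out of the query:

    # Sketch of marginalizing out an irrelevant world-model variable.
    from itertools import product

    P_STRONGER = 0.5
    P_DINNER = {"pasta": 0.6, "salad": 0.4}  # irrelevant to the match outcome

    def p_win(stronger):
        return 0.9 if stronger else 0.1  # same toy likelihood as above

    numer = denom = 0.0
    for stronger, dinner in product([True, False], P_DINNER):
        p = ((P_STRONGER if stronger else 1 - P_STRONGER)
             * P_DINNER[dinner] * p_win(stronger))
        denom += p              # sums ("averages") dinner out of the evidence
        if stronger:
            numer += p
    print(numer / denom)        # 0.9, exactly as if dinner were never modeled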

Note that if the above approach is right, the categorical behavior often taken as emblematic of LoTs is likely to be masked by the probabilistic, graded natures of the grounding procedure and the model of the world.

So far, we've followed the Fodorian atomic treatment of constituents, but this is controversial. Linguists note that words tend to have many distinct meanings: One can throw a book (the physical object) or like a book (usually the content conveyed by the book, not the physical object). One can beat Bart or the bell, but in fundamentally different ways. There are many reasons not to treat these different meanings as homophones (a single word form that refers to many unrelated concepts), one of the most obvious being that you end up needing an enormous (potentially unbounded) conceptual library. Perhaps we do, but linguists have noted that there are systematic correspondences between the various meanings, and that this can only be explained if the symbols Fodor takes to be atomic in fact have structure that contributes to meaning and governs their resulting conceptual combination and composition (Jackendoff, 1990; Pustejovsky, 1995). These solutions can be debated, but the problems have to be solved somehow.
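
As one way to picture what structured (non-atomic) symbols buy you, here is a loose sketch in which a lexical entry carries multiple facets and a verb selects the facet it needs. It is only an illustration in the spirit of such proposals, not an implementation of Jackendoff's or Pustejovsky's systems:

    # Sketch: one structured entry for "book", with verbs selecting facets, so
    # "throw a book" and "like a book" need not be two homophones. Toy data only.
    LEXICON = {
        "book": {"physical": "a bound volume", "content": "the conveyed text"},
    }
    VERB_SELECTS = {
        "throw": "physical",  # throwing requires a physical object
        "like": "content",    # liking usually targets the conveyed content
    }

    def interpret(verb, noun):
        facet = VERB_SELECTS[verb]
        return f"'{verb} a {noun}' selects the {facet} facet: {LEXICON[noun][facet]}"

    print(interpret("throw", "book"))
    print(interpret("like", "book"))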

Quilty-Dunn et al. provide a useful description of LoTs. Testing LoT theories, however, requires looking beyond the LoT to how it is used within a larger cognitive system. This, in almost all cases, will involve complex trade-offs and interactions with graded, distributed, and analog systems of representation and processing.

Acknowledgments

We thank members of the BabyBAW team, Mengguo Jing, and Wei Li for valuable discussion.

Financial support

Funding was provided by NSF 2033938 and NSF 2238912 to J. K. H., NSF 2033932 to J. P., and ARO W911NF-23-1-0031 to N. K.

Competing interest

None.

References

Bass, I., Smith, K. A., Bonawitz, E., & Ullman, T. D. (2021). Partial mental simulation explains fallacies in physical reasoning. Cognitive Neuropsychology, 38(7–8), 413–424.
Fodor, J. A. (1975). The language of thought (Vol. 5). Harvard University Press.
Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of thought. Center for Brains, Minds and Machines (CBMM).
Hartshorne, J. K., Jennings, M. V., Gerstenberg, T., & Tenenbaum, J. (2019). When circumstances change, update your pronouns. Cognitive Science (p. 3472).
Jackendoff, R. S. (1990). Semantic structures. MIT Press.
Jara-Ettinger, J., Gweon, H., Schulz, L. E., & Tenenbaum, J. B. (2016). The naïve utility calculus: Computational principles underlying commonsense psychology. Trends in Cognitive Sciences, 20(8), 589–604.
Oved, I. (2015). Hypothesis formation and testing in the acquisition of representationally simple concepts. Philosophical Studies, 172(1), 227–247.
Pollock, J., & Oved, I. (2005). Vision, knowledge, and the mystery link. Philosophical Perspectives, 19, 309–351.
Pustejovsky, J. (1995). The generative lexicon. MIT Press.
Pustejovsky, J., & Krishnaswamy, N. (2022). Multimodal semantics for affordances and actions. In Human–Computer Interaction. Theoretical Approaches and Design Methods: Thematic Area, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26–July 1, 2022, Proceedings, Part I (pp. 137–160). Cham: Springer International Publishing.
Ullman, T. D., Spelke, E., Battaglia, P., & Tenenbaum, J. B. (2017). Mind games: Game engines as an architecture for intuitive physics. Trends in Cognitive Sciences, 21(9), 649–665.
Wu, J., Yildirim, I., Lim, J. J., Freeman, B., & Tenenbaum, J. (2015). Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. Advances in Neural Information Processing Systems, 28.