
Could Bayesian cognitive science undermine dual-process theories of reasoning?

Published online by Cambridge University Press:  18 July 2023

Mike Oaksford*
Affiliation:
Department of Psychological Sciences, Birkbeck College, University of London, London, UK [email protected] https://www.bbk.ac.uk/our-staff/profile/8009448/mike-oaksford

Abstract

Computational-level models proposed in recent Bayesian cognitive science predict both the “biased” and correct responses on many tasks. So, rather than possessing two reasoning systems, people can generate both possible responses within a single system. Consequently, although an account of why people make one response rather than another is required, dual processes of reasoning may not be.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Wim De Neys makes a compelling case that recent evidence showing that system 1 can make both incorrect, or "biased", and correct responses raises problems for the switching mechanism that moves between system 1 and system 2. In this commentary, I argue that recent work in the new paradigm in human reasoning (Oaksford & Chater, 2020) or, more generally, Bayesian cognitive science (Chater & Oaksford, 2008) shows that the so-called biased response can be correct, given the right background beliefs or in the right environment. Consequently, rather than requiring two reasoning systems, the evidence Wim cites may instead suggest that people consider more than one possible correct response.

Is it surprising that system 1 can compute the correct response? Other animals, which likely possess only a putative system 1, are capable of rational decision making (Monteiro, Vasconcelos, & Kacelnik, 2013; Oaksford & Hall, 2016; Stanovich, 2013). Moreover, the unconscious inferences underpinning perception and action are widely believed to be the product of the same rational Bayesian inferences (Clark, 2013; Friston, 2010) that underpin new paradigm approaches to human verbal reasoning (Oaksford & Chater, 1994, 2012, 2020; Oaksford & Hall, 2016). Within a single model (reasoning system?), these approaches can predict both the "biased" and the correct responses. For example, optimal data selection predicts not only so-called confirmation bias in Wason's selection task but also, depending on the model's parameters, the reflective, falsification response (Oaksford & Chater, 1994; see also Coenen, Nelson, & Gureckis, 2019). These different possibilities can be unconsciously simulated by varying these parameters. The possibility that becomes the focus of attention in working memory (WM), and hence which response is made first, will depend on which is best supported by environmental cues or prior knowledge.
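
The parameter dependence described here can be illustrated with a toy expected-information-gain calculation in the spirit of optimal data selection. This is a simplified sketch, not the published model: it assumes a deterministic dependence hypothesis (if p then always q) against an independence hypothesis with the same marginals, a flat prior over the two hypotheses, and illustrative values for P(p) and P(q).

```python
import math

def entropy(ps):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_info_gain(outcome_probs):
    """Expected reduction in uncertainty about two hypotheses (flat prior)
    from turning a card. outcome_probs maps each possible hidden face to
    (P(outcome | dependence), P(outcome | independence))."""
    prior = (0.5, 0.5)
    h_prior = entropy(prior)
    eig = 0.0
    for p_d, p_i in outcome_probs.values():
        p_outcome = 0.5 * p_d + 0.5 * p_i
        if p_outcome == 0:
            continue
        posterior = (0.5 * p_d / p_outcome, 0.5 * p_i / p_outcome)
        eig += p_outcome * (h_prior - entropy(posterior))
    return eig

def card_gains(pa, pb):
    """Expected information gain of each selection-task card.
    pa = P(p), pb = P(q); the dependence hypothesis is deterministic."""
    return {
        "p":     expected_info_gain({"q": (1.0, pb),
                                     "not-q": (0.0, 1 - pb)}),
        "not-p": expected_info_gain({"q": ((pb - pa) / (1 - pa), pb),
                                     "not-q": ((1 - pb) / (1 - pa), 1 - pb)}),
        "q":     expected_info_gain({"p": (pa / pb, pa),
                                     "not-p": (1 - pa / pb, 1 - pa)}),
        "not-q": expected_info_gain({"p": (0.0, pa),
                                     "not-p": (1.0, 1 - pa)}),
    }

# Rarity (P(p) and P(q) low): the p and q cards dominate,
# i.e., the "confirmatory" selection pattern.
print(card_gains(0.1, 0.2))
# When q is common, the q card's informativeness collapses and the
# not-q (falsifying) card matches the p card: the reflective pattern.
print(card_gains(0.1, 0.9))
```

Under rarity, the first call ranks the q card well above the not-q card; in the second, that ordering reverses, so the same model yields both response patterns depending only on its environmental parameters.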

This pattern, whereby both the "biased" and the correct response can arise from the same computational-level model of the reasoning process, is common across Bayesian cognitive science. A further example is Oaksford and Hall's (2016) model of the base-rate neglect task, on which Wim comments approvingly. This model is related to models of categorisation in which categories are causally related to their features (cues) (Rehder, 2017). Both responses arise from sampling a posterior distribution when the base rate of being female in a sample is updated by the cues to femininity in the description of a person randomly drawn from that sample. Whether the prior (respond "male") is washed out (respond "female") depends on the perceived strength of the cues in the description of the person sampled. So, both responses can be considered correct depending on other background knowledge.
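
The core of this updating step can be sketched in odds form. This is a minimal illustration, not the published sampling model: the base rate and cue strengths below are made-up numbers chosen only to show how either response can be the posterior-consistent one.

```python
def posterior_female(prior_female, cue_strength):
    """Posterior probability that the sampled person is female, given a
    description whose cues favour 'female' by likelihood ratio
    cue_strength = P(description | female) / P(description | male)."""
    prior_odds = prior_female / (1 - prior_female)
    posterior_odds = prior_odds * cue_strength
    return posterior_odds / (1 + posterior_odds)

# Illustrative base rate: 5 females in a sample of 1,000.
prior = 0.005

# Weak cues: the prior dominates, so "male" is the correct response.
print(posterior_female(prior, 3))
# Strong cues: the prior is washed out, so "female" is the correct response.
print(posterior_female(prior, 500))
```

Both the "base-rate" and the "stereotype" response thus fall out of one Bayesian computation; which is correct depends on the perceived diagnosticity of the cues.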

Further examples abound. In deductive reasoning, a similar variation in endorsing conditional inferences is predicted by the same probabilistic factors as in data selection (Oaksford & Chater, 2007, Fig. 5.5; Vance & Oaksford, 2021). In computational-level theories of the conjunction fallacy (Tentori, Crupi, & Russo, 2013) and argumentation (Hahn & Oaksford, 2007), responses may be based on the probability of the conclusion (Pr(C)) or on a Bayesian confirmation-theoretic relation between premises and conclusion (e.g., Pr(C|P) − Pr(C), the likelihood ratio, etc.). These can lead to conflicting possible responses regarding the strength of an argument and to endorsing the conjunction fallacy. In argumentation, the same Bayesian model explains when an informal argument fallacy, for example, an ad hominem or circular argument, is fallacious and when it is not. "Biased" responses may also arise from how the brain estimates probabilities by sampling (e.g., Dasgupta, Schulz, & Gershman, 2017; Zhu, Sanborn, & Chater, 2020). Small samples may be combined with priors to produce initially "biased" responses that move towards the correct response, given more sampling time.
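
The conflict between a probability-based and a confirmation-based response can be shown with a small, coherent set of numbers. These values are illustrative (not drawn from Tentori et al.'s materials): H1 is the single conjunct ("bank teller"), H2 the conjunction ("bank teller and feminist"), and E a Linda-style description.

```python
def confirmation(prior, posterior):
    """Difference measure of Bayesian confirmation: Pr(H|E) - Pr(H)."""
    return posterior - prior

# Illustrative, probabilistically coherent values:
# the conjunction is never more probable than its conjunct.
p_h1, p_h1_given_e = 0.05, 0.06    # single conjunct
p_h2, p_h2_given_e = 0.005, 0.05   # conjunction

# Probability ranks the single conjunct above the conjunction ...
assert p_h1_given_e > p_h2_given_e
# ... but the evidence confirms the conjunction far more strongly, so a
# confirmation-based response endorses the "fallacy".
assert confirmation(p_h2, p_h2_given_e) > confirmation(p_h1, p_h1_given_e)

print(confirmation(p_h1, p_h1_given_e), confirmation(p_h2, p_h2_given_e))
```

Both responses are available within the one framework; which is made depends on whether Pr(C) or the premise-conclusion confirmation relation drives the judgment.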

For many of these tasks, it is doubtful that people can explicitly calculate the appropriate responses without formal training and pencil and paper, although, given more time or a second chance to respond, they may produce the alternative possibilities generated by their unitary reasoning system. Even for tasks where explicit computation is possible, like the bat-and-ball task, the "biased" response is a necessary step in computing the reflective response. This task involves solving the simultaneous equations (a) x + y = $1.10 and (b) x − y = $1 for y (the cost of the ball). The first step involves taking (b) from (a): (x + y) − (x − y) = $1.10 − $1. The next step requires an understanding that y − (−y) = 2y, which is beyond many UG psychology students in the UK. Yet they may realise that the difference, $1.10 − $1, is on the way to the solution. Getting this far may also lead to their maths tutor awarding them more than 50% of the marks in a classroom test. But giving this answer may leave a feeling that something is not right because the process was not completed. Given a second chance, people respond with a figure less than 10 cents, indicating an understanding that y − (−y) = 2y is greater than y (Bago, Raoelison, & De Neys, 2019). So, the intuitive response arises as part of computing the correct solution, suggesting that heuristics are unnecessary. The two possible responses emerge from the same rational cognitive process.
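
The algebra above can be made fully explicit; the point of interest is that the "biased" quantity, $0.10, appears as an intermediate term on the way to the reflective answer, $0.05. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Bat and ball: (a) bat + ball = $1.10, (b) bat - ball = $1.00.
total, difference = Fraction(110, 100), Fraction(1)

# Taking (b) from (a) eliminates the bat: 2y = $1.10 - $1.00 = $0.10.
partial = total - difference   # the intuitive "10 cents" intermediate
ball = partial / 2             # completing the step: y = $0.05
bat = ball + difference

print(float(partial))  # 0.1  -- the "biased" response
print(float(ball))     # 0.05 -- the reflective response
assert bat + ball == total and bat - ball == difference
```

The biased response is thus not the output of a separate heuristic system but a truncation of the very computation that yields the correct solution.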

In summary, so-called biases may often be a function of the same processes that lead to the reflective, rational response (see also Kruglanski & Gigerenzer, 2011). The response depends on how prior knowledge or cues in the task materials set the parameters of the computational models. Because the environment can change, and cues are not always present, people may unconsciously simulate more than one possibility. That people do so and record the results may be the core insight of mental models theory (Oaksford, 2022). The bat-and-ball task shows that algebraic tasks, usually requiring pencil and paper, can be automatised and at least partially solved unconsciously. Dual-process theories and the new paradigm in reasoning were once in lockstep (Elqayam & Over, 2013). However, the specific computational-level theories developed within the new paradigm that predict both the "biased" and the correct response on many tasks may be better interpreted as undermining the distinction on which dual-process theory depends.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Bago, B., Raoelison, M., & De Neys, W. (2019). Second-guess: Testing the specificity of error detection in the bat-and-ball problem. Acta Psychologica, 193, 214–228. https://doi.org/10.1016/j.actpsy.2019.01.008
Chater, N., & Oaksford, M. (Eds.) (2008). The probabilistic mind: Prospects for Bayesian cognitive science. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199216093.001.0001
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral & Brain Sciences, 36, 181–253. https://doi.org/10.1017/S0140525X12000477
Coenen, A., Nelson, J. D., & Gureckis, T. M. (2019). Asking the right questions about the psychology of human inquiry: Nine open challenges. Psychonomic Bulletin & Review, 26, 1548–1587. https://doi.org/10.3758/s13423-018-1470-5
Dasgupta, I., Schulz, E., & Gershman, S. J. (2017). Where do hypotheses come from? Cognitive Psychology, 96, 1–25. https://doi.org/10.1016/j.cogpsych.2017.05.001
Elqayam, S., & Over, D. E. (2013). New paradigm psychology of reasoning: An introduction to the special issue edited by Elqayam, Bonnefon, and Over. Thinking & Reasoning, 19, 249–265. https://doi.org/10.1080/13546783.2013.841591
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138. https://doi.org/10.1038/nrn2787
Hahn, U., & Oaksford, M. (2007). The rationality of informal argumentation: A Bayesian approach to reasoning fallacies. Psychological Review, 114, 704–732.
Kruglanski, A. W., & Gigerenzer, G. (2011). Intuitive and deliberate judgments are based on common principles. Psychological Review, 118, 97–109. https://doi.org/10.1037/a0020762
Monteiro, T., Vasconcelos, M., & Kacelnik, A. (2013). Starlings uphold principles of economic rationality for delay and probability of reward. Proceedings of the Royal Society B, 280, 20122386. https://doi.org/10.1098/rspb.2012.2386
Oaksford, M. (2022). Mental models, computational explanation, and Bayesian cognitive science: Commentary on Knauff and Gazzo Castañeda (2022). Thinking & Reasoning. https://doi.org/10.1080/13546783.2021.2022531
Oaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608–631.
Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198524496.001.0001
Oaksford, M., & Chater, N. (2012). Dual processes, probabilities, and cognitive architecture. Mind & Society, 11, 15–26. https://doi.org/10.1007/s11299-011-0096-3
Oaksford, M., & Chater, N. (2020). New paradigms in the psychology of reasoning. Annual Review of Psychology, 71, 305–330. https://doi.org/10.1146/annurev-psych-010419-051132
Oaksford, M., & Hall, S. (2016). On the source of human irrationality. Trends in Cognitive Sciences, 20, 336–344. https://doi.org/10.1016/j.tics.2016.03.002
Rehder, B. (2017). Concepts as causal models: Categorization. In Waldmann, M. R. (Ed.), The Oxford handbook of causal reasoning (pp. 347–375). Oxford University Press.
Stanovich, K. E. (2013). Why humans are (sometimes) less rational than other animals: Cognitive complexity and the axioms of rational choice. Thinking & Reasoning, 19, 1–26. https://doi.org/10.1080/13546783.2012.713178
Tentori, K., Crupi, V., & Russo, S. (2013). On the determinants of the conjunction fallacy: Probability versus inductive confirmation. Journal of Experimental Psychology: General, 142, 235–255.
Vance, J., & Oaksford, M. (2021). Explaining the implicit negations effect in conditional inference: Experience, probabilities, and contrast sets. Journal of Experimental Psychology: General, 150, 354–384. https://doi.org/10.1037/xge0000954
Zhu, J.-Q., Sanborn, A. N., & Chater, N. (2020). The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments. Psychological Review, 127, 719–748. https://doi.org/10.1037/rev0000190