Despite the huge amount of experimental work done under the dual-process banner, dual-process theory still lacks an agreed theoretical framework. One issue concerns implementation. Are the neural resources involved in supporting each type of processing discrete, or do they overlap? Another issue is coordination. How are the two types of processing related functionally, and how are their activities coordinated? Most dual-process theorists adopt a default-interventionist view, according to which system 1 generates default responses, and system 2 is activated only occasionally, generating a more considered response, which overrides the default one. (I use “system 1” as a label for the suite of intuitive processes, without assuming that they form a unified neural system; similarly for “system 2.”) However, this prompts the further question of exactly how the switch between intuitive system 1 processing and deliberate system 2 processing is managed.
It is this question – the “switch issue” – that occupies De Neys. He proposes that switching is controlled by system 1: System 1 monitors its own responses, calculates a measure of their uncertainty, and initiates system 2 when the measure exceeds a certain threshold. System 1 also monitors the outputs of system 2 and terminates system 2 processing when a response with a suitably low uncertainty is generated. De Neys argues that this requires us to give up the assumption that system 2 responses are beyond the reach of system 1 (the “exclusivity assumption”).
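To fix ideas, the switching scheme can be sketched in a few lines of Python. The helper names (generate_intuition, deliberate, uncertainty_of), the threshold value, and the cycle cap are illustrative placeholders of mine, not details of De Neys's proposal.

```python
# A minimal sketch of switching under system 1 control. The helper names,
# threshold, and cycle cap are illustrative, not part of De Neys's model.
def respond(problem, generate_intuition, deliberate, uncertainty_of,
            threshold=0.3, max_cycles=10):
    response = generate_intuition(problem)         # system 1's fast default response
    for _ in range(max_cycles):
        if uncertainty_of(response) <= threshold:  # system 1 monitors its uncertainty
            break                                  # confident enough: no (further) switch
        response = deliberate(problem, response)   # system 2 produces a considered response
    return response                                # system 1 accepts the current response
```

On this picture, deliberation is both called in and dismissed by the same uncertainty check; nothing outside system 1 decides when slow thinking begins or ends.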
I like De Neys's revision of dual-process theory (a form of “dual-process theory 2.0”), but I am going to suggest that system 1 has an even bigger role in system 2 processing than De Neys recognizes. First, however, I want to make a comment about exclusivity.
De Neys argues that if switching is under system 1 control, then exclusivity cannot hold, because system 1 initiates a switch when it generates both the intuitive and the deliberate response and is uncertain which to select (target article, sect. 2.2). This is too strong, however. For as De Neys acknowledges, system 1 can also initiate a switch when it has generated just one response or no response at all (target article, sect. 4.4). So some switching could occur even if exclusivity held. However, I think we should deny exclusivity all the same. De Neys presents empirical evidence against it, and he may be right that switching is often triggered by conflict within system 1. Moreover, as De Neys notes, system 1 may include automatized versions of system 2 processes, and if it does, then exclusivity will not hold. The upshot is that while we should reject the strong exclusivity claim that no system 2 response can be generated by system 1, we should not endorse the strong inclusivity claim that every system 2 response can be generated by system 1.
On now to the larger issue. I agree with De Neys that system 1 plays a role in controlling system 2, but I think we need to go further – much further. System 1, I propose, does not only initiate and monitor system 2 processes; it also generates them. I have developed this idea in previous work (e.g., Frankish, 2009, 2018, 2021), so I shall merely sketch it here.
The core idea is that system 2 processing involves the conscious manipulation of culturally transmitted symbols – words, numerals, diagrams, and so on – either external or, more often, mentally imaged. The manipulations are generated by system 1, and they serve to break down the original problem into simpler subproblems which system 1 can solve. I have described the process as one of deliberative mastication. If all goes well, it culminates in a solution to the original problem.
As an example, take division. We can solve simple division problems intuitively, but we deal with more complex ones by executing a procedure for long division, writing down dividend and divisor in a certain format, solving the simpler problems the format highlights, writing down the answers to these problems, and so on, till we have our answer. This is, I suggest, an example – albeit an unusually explicit one – of slow, effortful, system 2 reasoning, and it is under continuous system 1 control. System 1 initiates the actions involved (writing and manipulating the numerals), receives relevant perceptual inputs, recognizes the subproblems posed, solves these subproblems, and so on – all the while monitoring to see if a solution to the overall problem has been reached.
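The decomposition can be written out explicitly in a toy form. The code below is my own illustration of the point, not a claim about how people actually execute the pen-and-paper routine; each pass through the loop poses only a small "how many times does the divisor go into this?" question of the kind system 1 can answer intuitively.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Long division as a chain of small subproblems (an illustrative toy,
    not a model of the pen-and-paper procedure itself)."""
    quotient_digits = []
    carried = 0
    for digit in str(dividend):               # work through the written numeral, left to right
        carried = carried * 10 + int(digit)   # "bring down" the next digit
        q = carried // divisor                # small, intuitively solvable subproblem
        quotient_digits.append(str(q))        # write the partial answer down
        carried -= q * divisor                # carry the leftover into the next step
    return int("".join(quotient_digits)), carried

# For example, long_division(8754, 6) returns (1459, 0).
```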
All system 2 processes, I suggest, are similar, being constituted by activities that decompose a problem into intuitively solvable chunks, though these activities are usually internalized ones involving the manipulation of inner speech or other mental imagery rather than external symbols (for examples, see the works cited above).
This proposal explains why system 2 processing places heavy demands on attention and working memory (which are required for imagery manipulation) and why its processes are transparent (the images can be recalled and reported). Moreover, it offers an economical answer to the implementation question I mentioned at the start. In this view, the core cognitive resources driving system 2 processing are those of system 1, though further resources, including those of working memory, language, and perception, are employed as well. Thus, system 2 is not a separate neural system but a virtual system, realized in activities generated by system 1.
This view extends the approach De Neys proposes, and his speculations about how system 1 controls system 2 could be elaborated to reflect system 1's expanded role. At each stage in a system 2 process, system 1 will calculate what activity to generate next, receive perceptual or imagistic feedback, generate responses to the subproblem presented, and calculate whether and how to continue the process, using techniques of the sort De Neys describes, including uncertainty monitoring and calculation of opportunity costs.
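Again, a rough sketch may help. In the following loop, all the method names and the particular stopping rule are my own stand-ins; it is meant only to indicate how the calculations De Neys describes (uncertainty monitoring weighed against mounting opportunity costs) might drive a system 2 process that system 1 itself generates step by step.

```python
def run_system2(problem, system1, cost_per_step=0.05, max_steps=20):
    """A 'virtual' system 2 episode: a sequence of system-1-generated
    manipulations of an imaged or external symbol structure."""
    state = system1.initial_representation(problem)   # e.g. the problem written down or imaged
    response = system1.read_off(state)                # current best answer, if any
    for step in range(1, max_steps + 1):
        # System 1 decides whether continuing is worth the mounting effort.
        if system1.uncertainty(response) <= cost_per_step * step:
            break
        action = system1.next_manipulation(state)     # what to write, say, or image next
        state = system1.apply(action, state)          # perceptual or imagistic feedback
        response = system1.read_off(state)            # system 1 solves the latest subproblem
    return response
```

The deliberative episode here is nothing over and above a trajectory of system 1 activity, which is exactly what the virtual-system picture claims.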
In conclusion, De Neys's proposal not only advances theorizing about fast-and-slow thinking but also points to how it might be advanced still further, moving us toward a dual-process theory 3.0 in which system 1 not only initiates system 2 thinking but generates and sustains it as well.
Financial support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Competing interest
None.