
Challenging some common beliefs: Empirical work within the adaptive toolbox metaphor

Published online by Cambridge University Press:  01 January 2023

Arndt Bröder*
Affiliation:
University of Bonn and Max Planck Institute for Research on Collective Goods
Ben R. Newell
Affiliation:
University of New South Wales
*
* Corresponding author: Arndt Bröder, Dept. of Psychology, University of Bonn, Kaiser-Karl-Ring 9, D-53111 Bonn, Germany. Email: [email protected].

Abstract

The authors review their own empirical work inspired by the adaptive toolbox metaphor. The review examines factors influencing strategy selection and execution in multi-attribute inference tasks (e.g., information costs, time pressure, memory retrieval, dynamic environments, stimulus formats, intelligence). An emergent theme is the re-evaluation of contingency model claims about the elevated cognitive costs of compensatory in comparison with non-compensatory strategies. Contrary to common assertions about the impact of cognitive complexity, the empirical data suggest that manipulated variables exert their influence at the meta-level of deciding how to decide (i.e., which strategy to select) rather than at the level of strategy execution. An alternative conceptualisation of strategy selection, namely threshold adjustment in an evidence accumulation model, is also discussed and the difficulty in distinguishing empirically between these metaphors is acknowledged.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2008] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Over (2003) points out that many evolutionary psychologists have used tools as vivid metaphors for characterising the mind as comprising a range of specific modules. For example, Cosmides and Tooby (1994) suggested that the mind be viewed like a Swiss army knife, with individual blades specialised for particular “survival-related” tasks. In a similar vein, Gigerenzer, Todd and the ABC Group (1999) proposed an “adaptive toolbox” containing a variety of special tools for different tasks. Their idea is that the mind has evolved mechanisms or heuristics that are suited to particular tasks, such as choosing between alternatives, categorising items, estimating quantities, selecting a mate, judging habitat quality, even determining how much to invest in one’s children. Gigerenzer and Todd argue that just as a car mechanic uses specific wrenches, pliers and spanners in maintaining a car engine rather than hitting everything with a hammer, so too the mind relies on unique one-function devices to provide serviceable solutions to individual problems.

To illustrate the basic idea we describe the operation of two of the heuristics contained in the toolbox. Imagine you are facing a choice between two alternatives — such as two companies to invest in — and your task is to pick the one that is better with regard to some criterion (e.g., future returns on investments). “Take-the-Best” (TTB) is designed for just such a situation. TTB operates according to two principles. The first — the recognition principle — states that for any decision made under uncertainty, if only one amongst a range of alternatives is recognised, then the recognised alternative will be chosen. When this first principle can be relied on, people are said to be using the Recognition Heuristic (RH) — i.e., choosing objects that they recognise (Goldstein & Gigerenzer, 2002). The second principle is invoked when more than one alternative is recognised, and the recognition principle cannot provide discriminatory information. In such cases, people are assumed to have access to a reference class of cues or features, which are searched in descending order of feature validity (search rule) until one that discriminates between alternatives is discovered. Search then stops (stopping rule) and this single best discriminating feature is used to make the choice (decision rule). The algorithm is thus non-compensatory because, rather than using all discriminatory pieces of information (as a compensatory model like linear regression would), it bases its choice on a single piece (Gigerenzer & Goldstein, 1996).
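
To make the three building blocks concrete, the search, stopping and decision rules can be sketched in a few lines of code. This is our own illustrative sketch, not the original authors’ implementation; the cue names and values are invented, and cues are coded as binary (1 = positive, 0 = negative).

```python
# Illustrative sketch of Take-the-Best (TTB) for a two-alternative choice.
# Each option is a dict of binary cue values; cue names are invented examples.

def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'A', 'B', or 'guess'."""
    for cue in cues_by_validity:          # search rule: highest validity first
        a, b = option_a[cue], option_b[cue]
        if a != b:                        # stopping rule: first discriminating cue
            return 'A' if a > b else 'B'  # decision rule: follow that single cue
    return 'guess'                        # no cue discriminates

choice = take_the_best(
    {'turnover_growth': 1, 'new_products': 0},
    {'turnover_growth': 1, 'new_products': 1},
    ['turnover_growth', 'new_products'],
)
# 'turnover_growth' ties, so the single cue 'new_products' decides for B
```

Note that later cues are never inspected once a discriminating cue is found, which is exactly what makes the heuristic non-compensatory.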

These simple steps for searching, stopping and deciding might seem rather trivial, but Gigerenzer and Goldstein (1996) showed convincingly that the TTB algorithm is as accurate as — and sometimes even slightly more accurate than — more computationally complex and time-consuming algorithms. These initial results, from a task in which the goal was to decide which of two cities had the higher population, were replicated in a variety of real-world environments ranging from predicting professorial salaries to the amount of sleep engaged in by different mammals (Czerlinski, Gigerenzer, & Goldstein, 1999). The toolbox, however, is only one of several metaphors used to characterize intelligent decision making. On the one hand, the toolbox with its incorporation of the modularity assumption challenges the idea of the mind as containing a “master tool” that serves as a general problem solver. On the other hand, the toolbox idea itself has been challenged by theoretical arguments. For example, some authors claim that simple heuristics may not be so simple in the first place because they require a vast amount of pre-computation (e.g., for constructing a cue-search hierarchy; Juslin & Persson, 2002). Others conjecture that compensatory strategies may not be as costly as the toolbox and common wisdom in decision research presuppose (e.g., Chater, Oaksford, Nakisa, & Redington, 2003). Theoretical objections to the toolbox are summarized and discussed in Newell and Shanks (2007). Another challenge is empirical: Do people use different tools adaptively, and more specifically, do they use simple heuristics like RH and TTB? In this article, we review empirical work from our labs that addresses this latter question and asks which factors affect the strategies people select.
Although the goal is of course not new, we are convinced that our results have some new implications for the toolbox metaphor as well as for multi-attribute decision research in general.

2 Organization of the review

Newell and Bröder (2008) mentioned several facts and topics about human cognition that have to be addressed by theories of decision making, namely (1) capacity limitation, (2) automatic vs. controlled processing, (3) learning, (4) categorization, and (5) metacognition. These areas of interest, and the question of whether people adaptively choose strategies, constitute one dimension of our review. Our empirical work predominantly covers the question of adaptivity and areas (1), (2), and (5), which are closely interconnected. Whereas capacity limitations mainly concern controlled, effortful, and perhaps serial processes, any degree of automatization will unburden the limited capacity (Schneider & Shiffrin, 1977). Metacognition — deciding how to decide — is concerned with allocating capacity to decision tasks and almost certainly consumes cognitive capacity itself. However, this latter aspect has hitherto been neglected in decision research and the toolbox approach. The second dimension around which we examine the empirical evidence is the “target” of the respective studies: different studies focus on the search rule, the stopping rule, or the decision rule people use. Although these aspects are closely intertwined empirically (e.g., Bröder, 2003), most studies focus on one or two aspects for methodological reasons. We will first report studies concerning adaptivity and the use of simple heuristics and then turn to results relevant to the question of capacity limitations, automatization, and metacognition.

3 Do people select simple and less simple heuristics adaptively?

Payne, Bettman, and Johnson (1993) report many results that suggest adaptive strategy changes contingent on task demands. For example, time pressure or the dispersion of attribute weights clearly influenced information search behavior in a preferential choice task (Payne, Bettman, & Johnson, 1988). Rieskamp and Hoffrage (1999) confirmed these results in a multi-attribute inference task. Under time pressure, participants search for less information and do so more attribute-wise (rather than option-wise), which is similar to the search rule predicted by lexicographic heuristics like TTB. Being forced into simple processing by time pressure may not be a strong argument in favour of adaptive strategy selection, however, so other investigators varied the nominal costs of information purchases in a hypothetical stock market game (Bröder, 2000; Newell & Shanks, 2003). In this task, participants make repeated stock purchase decisions between hypothetical companies that are described by four binary cues (e.g., Turnover growth in last months — yes vs. no). Typically, cue values are hidden and have to be actively uncovered by clicking the fields with the computer mouse. Participants are free to uncover as much information as they want in any sequence. This MouseLab-like procedure (see Payne et al., 1988) allows for outcome-based strategy assessment based on the choices as well as monitoring of the information acquisition process. Newell and Shanks (2003) found that raising the costs of information search led to fewer purchases, but participants on average still bought more cue information than was “necessary” for performing a simple lexicographic strategy. Hence, participants did not generally adhere to the stopping rule dictated by TTB.
In another study, by Newell, Weston, and Shanks (2003, Exp. 2), 38% of the participants even continued purchasing a cue that was costly but objectively useless, hence using a clearly maladaptive stopping rule. These studies, and additional asymmetries in favor of compensatory decision making (see below), suggest that there is an initial preference for being “well-informed” before making a decision, at least as long as information is easy to obtain and the task is not too complex with respect to the number of options and/or attributes. There is converging evidence from studies testing the assumed noncompensatory nature of the RH which show that participants rarely ignore information that is available in addition to the recognition cue (Bröder & Eichler, 2006; Newell & Fernandez, 2006; Newell & Shanks, 2004; Pohl, 2006; Richter & Späth, 2006). The process model of the RH clearly states that “if one object is recognized and the other is not, the inference is determined; no other information about the recognized object is searched for and, therefore, no other information can reverse the choice determined by recognition” (Goldstein & Gigerenzer, 2002, p. 82); however, even under conditions that are ideal for the RH (high recognition validity, natural recognition knowledge, inferences from memory), the decisions of 50% of the participants were affected by additional cue knowledge in a study by Pachur, Bröder, & Marewski (in press). These results suggest that lexicographic stopping rules may be the exception rather than the rule in decision making.

Bröder (2000) focused on the decision rule people used and also manipulated the nominal costs of information purchase. An outcome-based classification procedure suggested that the choices of about 65% of participants were compatible with TTB under high search cost conditions. A subsequent experiment confirmed that this high percentage (which contrasted with low TTB percentages in other studies) was in fact caused by the information costs and not by other factors such as outcome feedback or successive information retrieval. In addition, search behavior corresponded well to the decision rule participants used. Hence, both our labs showed that stopping and/or decision rules were sensitive to search costs to a certain degree, probably reflecting adaptivity. However, several criticisms can be raised. First, there was no formal assessment of expected payoffs in these studies, and hence strategy changes might not have been “adaptive” but rather caused by stinginess. That is, high nominal costs of information may simply have deterred participants from purchasing information despite its potential value for good decisions. This would demonstrate sensitivity to costs, but not necessarily adaptive behavior. Second, in Bröder’s (2000) study, information about cues could only be purchased in the order of their validities, probably boosting the use of TTB-like strategies.
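
The logic of such an outcome-based classification can be illustrated with a toy example. The published procedures use maximum-likelihood fits of choice vectors to strategy predictions; the simple agreement count below, with invented trial data and strategy labels, is only a stand-in for that idea.

```python
# Simplified outcome-based strategy classification: compare a participant's
# observed choices against the choices each candidate strategy predicts,
# and assign the participant to the best-matching strategy.

def classify(choices, predictions_by_strategy):
    """choices: observed picks per trial; predictions_by_strategy maps a
    strategy label to its predicted picks. Returns the best-fitting label."""
    scores = {s: sum(c == p for c, p in zip(choices, preds))
              for s, preds in predictions_by_strategy.items()}
    return max(scores, key=scores.get)

observed = ['A', 'A', 'B', 'A']
preds = {'TTB': ['A', 'A', 'B', 'B'], 'WADD': ['B', 'A', 'B', 'B']}
best = classify(observed, preds)   # TTB matches 3 of 4 trials, WADD only 2
```

A maximum-likelihood version would additionally weight each trial by an error probability, but the principle of comparing observed choices with strategy predictions is the same.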

Limiting participants to searching information in one particular order overlooks a crucial yet under-researched issue: how people learn cue validities and construct cue-search hierarchies. As noted earlier, Juslin and Persson (2002) argued that a good deal of the “simplicity” inherent in simple heuristics comes from the massive amounts of precomputation required to construct cue hierarchies. Newell, Rakow, Weston and Shanks (2004) sought to gain some insight into how people learn cue validities and search rules by using experimental designs in which participants could purchase cues in any, rather than a fixed, order. Following Martignon and Hoffrage (1999), we noted that the overall usefulness of a cue must take account of both its validity and its redundancy — or ability to discriminate between two options in a two-alternative forced choice task. More useful cues are those that can frequently be used to make an inference (i.e., have a high discrimination rate) and, when used, usually point in the correct direction (i.e., have a high validity).

In support of this assertion, Newell et al. (2004) found that, in a simulated stock market environment involving a series of predictions about pairs of companies, participants’ pre-decisional search strategies conformed to a pattern that revealed sensitivity to both the validity and the discrimination rate of cues. Given sufficient practice in the environment, participants searched through cues according to how “successful” they were for predicting the correct outcome (see Martignon & Hoffrage, 1999, for a detailed discussion and definition of “success” — it is a function of the validity and discrimination rate of cues). Thus, rather than using a “validity” search rule — as prescribed by TTB and enforced in some experimental tests — participants tended to use a “success” search rule (see also Rakow, Newell, Fayers, & Hersby, 2005). This initial work on cue search needs to be supplemented by more extensive explorations of potential mechanisms for learning and implementing cue hierarchies.
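
One common formalisation of a cue’s “success”, in the spirit of Martignon and Hoffrage (1999), combines validity and discrimination rate while crediting chance-level accuracy on pairs where the cue does not discriminate. The sketch below uses invented cue statistics to show how a success ordering can diverge from a pure validity ordering.

```python
# "Success" of a cue: expected accuracy when it is always consulted first.
# On the fraction d of pairs where the cue discriminates it is correct with
# probability v; on the remaining 1 - d pairs one must guess (accuracy 0.5).

def success(validity, discrimination_rate):
    return discrimination_rate * validity + (1 - discrimination_rate) * 0.5

cues = {'cue1': (0.9, 0.1),   # (validity, discrimination rate) - invented values
        'cue2': (0.7, 0.8)}

by_validity = sorted(cues, key=lambda c: cues[c][0], reverse=True)
by_success = sorted(cues, key=lambda c: success(*cues[c]), reverse=True)
# cue1 tops the validity order, but cue2's much higher discrimination rate
# gives it the higher success score, so the two orderings reverse
```

This is why a highly valid but rarely discriminating cue can be a poor first stop in a search hierarchy.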

We noted earlier that in experiments with explicit costs participants might be deterred from further search through and acquisition of information simply because of high nominal costs. To overcome this possibility Bröder and colleagues kept the nominal search costs identical in different conditions of their experiments but varied the payoff functions to yield different expected payoffs in different experimental conditions: some environments were compensatory, meaning that the costs spent on additional cues were compensated by better accuracy and increased payoff; and some were noncompensatory, so that the costs for additional cues would in the long run exceed their utility for making better decisions. The empirical question of adaptivity was now whether people would be able to figure out the appropriate strategies in the respective environments. The filled circles in Figure 1 summarize the proportions of participants classified as using TTB’s decision rule across a range of 11 experimental conditions from several studies (Bröder, 2003; Bröder & Eichler, 2001; Bröder & Schiffer, 2003a; 2006a) as a function of the expected payoff of TTB relative to that of a compensatory strategy known as “Franklin’s rule” (FR), which is a weighted additive rule. It is easy to see that there is an adaptive trend (r = .83) which shows that the majority of people tend to use appropriate strategies in compensatory (left of “1”) and noncompensatory (right of “1”) environments. However, adaptivity is not perfect since in all cases, there is a significant percentage of people not using the appropriate strategy.

Figure 1: Adaptive strategy selection demonstrated by the percentage of participants classified as TTB users in the stock market game as a function of the expected payoff of TTB relative to a compensatory strategy. Filled circles are experimental conditions in which the task was new to participants, and they show a clear adaptive trend. Open squares depict the maladaptive routines after the environmental payoff structure had changed (Bröder & Schiffer, 2006a), and the triangle shows the high cognitive load condition of Bröder and Schiffer (2003a).

Hence, the results of both labs converge on similar conclusions: there is a certain extent of adaptivity in strategy choice concerning search, stopping, and decision rules. Participants are not merely deterred by costs; they seem able to figure out payoff structures (even if differences are subtle — see Figure 1) and to select their strategy accordingly. However, there are large individual differences in strategy selection. The attempt to find personality dimensions as correlates of strategy preferences has not been successful so far, even though we tried 15 plausible dimensions (see Bröder, in press, for an overview). It is, however, still an open question whether the different strategy preferences diagnosed in a one-shot assessment of an experiment will turn out to be stable across tasks and situations. If not, then states rather than traits should be investigated as variables causing the individual differences, for example mind-sets or spillover effects from routines established in similar tasks.

Recently, Bröder and Schiffer (2006a) reported results which qualify the optimistic notion of adaptivity documented by the filled circles in Figure 1. Three of the open squares in the figure do not fit into the picture. These experimental conditions have in common that the payoff structure of the environment changed after participants had become accustomed to a different environment. That is, the low percentage of TTB users in the noncompensatory environment reflects the fact that this group had been exposed to a compensatory payoff structure before. Obviously, most participants adhered to a decision strategy established as a routine earlier. These maladaptive routine effects were only marginally alleviated by a hint about the change or even by a switch to a similar but different task. This observation contrasts with most participants’ obvious ability to adapt flexibly to a new task. We conclude that different mechanisms for strategy selection may be at work when people are confronted with a new task than when they routinely use a strategy. Inertia effects like these are predicted by Rieskamp’s (2006) reinforcement learning model.

One additional observation was made repeatedly in the stock market paradigm: there was an initial preference for compensatory decision making and deep information search (Bröder, 2000; 2003; Newell & Shanks, 2003; Newell, Weston & Shanks, 2003; Rieskamp & Otto, 2006). Compensatory strategies were even somewhat more subject to maladaptive routines than TTB (Bröder & Schiffer, 2006a). We conjecture that participants feel on the “safe” side if they use all information, and that they have to learn actively whether information can safely be ignored. Many learn to adapt their stopping and/or decision rule; others keep on buying information even when it is of no use (Newell, Weston & Shanks, 2003).

To summarize the adaptivity results: the toolbox idea is corroborated in principle because many participants adapt to payoff schemes. This supplements Payne et al.’s (1993) work, which showed that strategy selection is contingent on task demands in the domain of preferential choice. In addition to the formal similarity between multi-attribute preferential choice and multiple-cue probabilistic inference, these empirical similarities support the idea of similar cognitive processes (or at least similar principles) in both domains. Note, however, that the observation that people appear to choose among heuristics of varying complexity could also be reinterpreted as threshold adjustment in an evidence accumulation metaphor (e.g., Lee & Cummins, 2004; Newell, 2005). Evidence accumulation models assume individual decision thresholds of evidence. Information search continues until a threshold in favour of one option has been crossed and a decision is made. Thresholds can be set along a continuum from strict to lenient. Lenient criteria imply fast and frugal information search, whereas strict criteria demand more information before a decision is made. Hence, “strategies” like TTB or WADD can also be viewed as endpoints of a continuum that defines one general process of decision making. Rather than selecting strategies, the decision maker might adjust thresholds. At the moment, the data do not allow a clear decision between the model classes because apparent strategy switches can be reinterpreted as criterion shifts, or vice versa (see Hausmann & Läge, 2008). Large individual differences in adaptivity remain, and the general preference for compensatory deciding observed in studies on TTB and the RH casts doubt on the assumption that simple heuristics are the default mode of probabilistic inference — at least in tasks with cue information that is easily accessible.
Furthermore, recent work suggests that a “unified model” which treats TTB and more compensatory strategies as special cases of the same sequential sampling process provides an interpretable account of individual differences in participants’ judgments. Although such a threshold model is more complex than “parameter-free” models like TTB, it is preferred to simpler models on the grounds of model fit criteria (e.g., minimum description length) (Newell, Collins, & Lee, 2007).
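
The threshold-adjustment idea can be illustrated with a minimal sketch. The evidence increments and threshold values below are invented for illustration; the point is only that a lenient threshold reproduces TTB-like early stopping while a strict threshold integrates all cues, as a compensatory strategy would.

```python
# Minimal evidence accumulation sketch: one process, different thresholds.
# Positive increments favour option A, negative ones favour option B;
# increments are assumed to arrive in search-priority order.

def accumulate(cue_evidence, threshold):
    """Return (choice, number_of_cues_inspected)."""
    total = 0.0
    for step, ev in enumerate(cue_evidence, start=1):
        total += ev
        if abs(total) >= threshold:       # stop as soon as the criterion is met
            return ('A' if total > 0 else 'B'), step
    # evidence exhausted without reaching the threshold: go with the balance
    return ('A' if total > 0 else 'B' if total < 0 else 'guess'), len(cue_evidence)

evidence = [0.8, -0.5, -0.6]              # first cue favours A, later cues favour B
lenient = accumulate(evidence, 0.7)       # stops after one cue, TTB-like: ('A', 1)
strict = accumulate(evidence, 1.5)        # integrates all cues, WADD-like: ('B', 3)
```

The same evidence stream yields opposite choices under the two thresholds, which is exactly why a strategy switch and a criterion shift are hard to tell apart empirically.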

4 Capacity limitations, automaticity, and metacognition

In accordance with the multiple-strategy assumption in decision research, Beach and Mitchell (1978) formulated an early attempt to define criteria that might govern strategy selection. In their contingency model, “strategy selection is viewed as a compromise between the press for more decision accuracy as the demands of the decision task increase and the decision maker’s resistance to the expenditure of his or her personal resources” (Beach & Mitchell, 1978, p. 447). They classified compensatory strategies as “analytic” and noncompensatory ones as less analytic and assumed that the “use of a less analytic strategy requires, on the average, less expenditure of personal resources than does use of a more analytic strategy” (p. 448). This intuitively plausible assumption has guided a significant part of research, for example Payne et al.’s (1993) systematic analysis of adaptive decision making. Christensen-Szalanski (1978; 1980) as well as Chu and Spires (2003) supported the assumption by showing that it fits people’s intuitions. Payne et al. (1993) extended and specified the model further by deriving a measure of the cognitive costs caused by strategies: they counted the elementary information processing steps necessary to perform a decision rule and proposed that “the cognitive effort needed to reach a decision using a particular strategy is a function of the number and type of operators (productions) used by that strategy, with relative effort levels of various strategies contingent on task environments” (Payne et al., 1993, p. 14). Both Beach and Mitchell (1978) and Payne et al. (1993) admitted that the exact nature of the deliberation process is unknown and subject to further research, and the latter authors speculated about different degrees of sophistication of this process.
This reasoning about the apparent costs of compensatory strategies is explicitly incorporated in the adaptive toolbox metaphor and its rhetoric, in which compensatory strategies are associated with theories that “assume the human mind has essentially unlimited demonic or supernatural reasoning power” (Gigerenzer & Todd, 1999, p. 7). This image is contrasted with the fast and frugal heuristics.
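
The operator-counting idea can be made concrete with a toy tally. The operator set and per-cue counts below are our simplified assumptions, not Payne et al.’s (1993) published production system; the sketch only shows why a weighted additive rule (WADD) accumulates more elementary steps than a lexicographic rule that stops early.

```python
# Toy elementary-information-process (EIP) tally for a two-option choice.
# The per-cue operator counts are simplified assumptions for illustration.

def eip_count_ttb(cues_inspected):
    # per cue inspected: read two cue values + one comparison = 3 EIPs
    return cues_inspected * 3

def eip_count_wadd(n_cues):
    # per cue: read two values, two multiplications by the cue weight,
    # two additions to the running sums = 6 EIPs; plus one final comparison
    return n_cues * 6 + 1

# with four cues, TTB stopping at the first discriminating cue needs
# far fewer steps than WADD, which always processes every cue
effort_ttb = eip_count_ttb(1)     # 3 EIPs
effort_wadd = eip_count_wadd(4)   # 25 EIPs
```

On counts like these, the contingency model’s prediction follows directly; the empirical results reviewed next are striking precisely because they contradict it.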

The emphasis on the execution costs of various decision strategies promoted by the contingency model and the adaptive toolbox leads to a simple and straightforward prediction: these relative costs should decrease with increased cognitive capacity. In other words, greater cognitive capacity should reduce the pressure to use simplifying strategies like TTB (e.g., Beach & Mitchell, 1978, pp. 445–446). To our great surprise, in a first study on that topic, our results were opposite to this prediction and suggest a re-evaluation of the contingency model. In the study by Bröder and Eichler (2001), participants invested in the stock market game and subsequently completed an intelligence test. After classifying participants’ decision strategies, results showed that TTB users were slightly more intelligent than compensatory decision makers! This was opposite to the expectation from the contingency logic, which predicts that simpler strategies will be associated with less capacity. Only after a post-hoc analysis of the game’s payoff structure did we realize that there had been a relatively subtle (10%) advantage in the expected payoff of TTB as compared to the compensatory strategy WADD in this task. In two subsequent experiments, we replicated the small but consistent superiority of TTB users with respect to intelligence in environments with noncompensatory payoff structures (Bröder, 2003). This suggests that cognitive capacity — as indexed by intelligence — is not consumed by strategy execution, but rather by strategy selection.
Since intelligence can be related to many other causal variables, we also manipulated cognitive capacity experimentally in a subsequent experiment by imposing a very attention-demanding secondary task on half of the participants during their decisions (they had to count the occurrences of the number “nine” in a stream of digits and were probed at random intervals; Bröder & Schiffer, 2003a). In the environment used, there was a very subtle payoff advantage for TTB, and results showed 60% TTB users in the condition without cognitive load, whereas only 26% used TTB in the condition with heavy cognitive load. The others were classified as using compensatory strategies. This is again contrary to the expectation of the contingency model and again supports another conclusion: at least in our paradigm, the costs of strategy execution do not seem to differ much between TTB and compensatory strategies, and participants were able to use compensatory strategies even under conditions of heavy cognitive load. Rather, the cognitive load impaired participants’ ability to figure out the payoff structure of the environment and to choose the appropriate heuristic.

This interpretation is also compatible with results reported by Bröder and Schiffer (2006a) demonstrating massive routine effects in the use of decision strategies. Routine effects have long been known as “Einstellung” effects in the psychology of thinking (Luchins & Luchins, 1959). Although Betsch and co-workers have demonstrated routines in repeated decisions before (see Betsch and Haberstroh, 2005, for a review), these demonstrations concerned the choice of routine options rather than strategies. Bröder and Schiffer (2006a) based their research on these observations, but they demonstrated that routines are retained at the level of strategies as well, even in a changing environment where they become maladaptive (but see Rieskamp, 2008, for an alternative interpretation). The combination of quick adaptation to new environments but slow adaptation to changing environments suggests that strategy execution can become routinized. Strategy selection, on the other hand, may require a costly re-examination of the environment in order to adjust the strategy accordingly. This selection process cannot become routinized; it always requires deliberate processes. The apparently routinized strategy execution was reflected in the time needed for each decision, which was much shorter for later trials in the task than for the first 10 to 20 trials. In the first phase of the experiments, most participants adaptively chose the appropriate strategy. When the environment changed after 80 trials, the reaction times did not increase again (even after a hint about the change), and the result was a maladaptive tendency to stick to one’s established strategy. This stickiness was even more pronounced for compensatory strategies.
We hypothesize that the meta-decision about how to decide was only executed at the beginning of the experiment (consuming time and capacity), whereas the execution was routinized after a few trials, probably without consuming further cognitive capacity. This routinization, however, comes at the expense of flexibility (Schneider & Shiffrin, 1977).

Hence, the contingency model’s and the toolbox’s rhetorical emphasis on the processing costs of strategies and heuristics may be mistaken, since the actual capacity-consuming process is apparently the meta-decision rule that selects strategies. Note, however, that these conclusions may be valid only for situations no more complex than our experiments, in which we used at most three options with up to six cues. Furthermore, the cues used in our experiments were almost exclusively binary, and it is conceivable that multi-valued cues are harder to process. Perhaps processing capacity limits were not reached, and strategy execution costs may become a severe factor only in very complex decision situations.

However, after having developed this interpretation, one other result, from Bröder and Schiffer (2006b), qualifies this conclusion. In this study, participants again had to work on various capacity-demanding secondary tasks during decision making. In contrast to the study mentioned before, this boosted the use of a TTB heuristic at the expense of compensatory strategies. The important difference here was that all cue information had to be retrieved from memory rather than from the computer screen. In a virtual criminal case, participants had learned various details about suspects which they later used for decisions about the probability of being the perpetrator. For example, they learned about aspects of the suspects’ clothing and later received information about witness reports that established a clear cue validity hierarchy. In a series of paired comparisons, they had to decide which suspect was the perpetrator with the higher probability. Earlier studies in this memory-search paradigm had already shown that TTB is much more prevalent here than with screen-based information presentation. This suggests that memory retrieval is costly and promotes early stopping rules in the same way as high explicit costs promote early stopping of information search in screen-based tasks (Bröder, 2000; 2003; Newell & Shanks, 2003). Furthermore, the costs of retrieval are apparently less severe when information is stored in a pictorial rather than a verbal format (Bröder & Schiffer, 2003b; 2006b), which is compatible with knowledge from cognitive psychology (Paivio, 1991). However, recent work comparing judgments made on the basis of pictorial and verbal information in screen-based tasks found no evidence for a difference in TTB use as a function of format.
It appears, then, that the format effect depends on inducing memory retrieval costs (Newell, Collins, & Lee, 2007).

Bröder and Gaissmaier (2007) reanalyzed response times from published studies and found evidence that people who were classified as TTB users on the basis of decision outcomes apparently also used TTB’s stopping rule: response times increased monotonically with the number of cues that had to be retrieved to perform a lexicographic strategy. Alternative explanations (similarity of options, difficulty of decisions) accounted for less variance in decision times than the assumption of this simple stopping rule. In one experiment, several participants apparently used an even simpler strategy, called “Take The First,” retrieving cues in the order of retrieval ease (as defined by the learning procedure) and showing response times consistent with a stopping rule that terminates search once one discriminating cue has been found.
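As an illustration (not the authors’ implementation), the lexicographic stopping rule just described can be sketched in a few lines of Python. The function name, the binary cue coding, and the return of the number of cues inspected (the quantity that should track response time) are our assumptions for the sketch:

```python
def take_the_best(cues_a, cues_b):
    """Lexicographic TTB sketch: inspect binary cues in descending
    validity order and stop at the first cue that discriminates
    between options A and B. Returns the choice and the number of
    cues inspected; per the reanalysis described above, this count
    should predict response times for TTB users."""
    for n, (a, b) in enumerate(zip(cues_a, cues_b), start=1):
        if a != b:
            return ("A" if a > b else "B"), n
    # No cue discriminates: the decision maker must guess.
    return None, len(cues_a)

# Cue profiles ordered from most to least valid cue.
# The first cue ties, the second discriminates in favour of B,
# so search stops after two cues.
choice, cues_inspected = take_the_best([1, 0, 1], [1, 1, 0])
```

“Take The First” would be the same loop with the cues ordered by ease of retrieval rather than by validity.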

To conclude: If information is available on the screen without overly burdening working memory, cognitive processing costs for strategies are not a serious factor, the format of the stimulus materials has little effect, and cost differences between strategies like TTB and WADD are negligible. Only if search costs are explicit are stricter stopping (and decision) rules employed. Information integration also does not seem to be costly, a conclusion demonstrated by the performance of participants in the high cognitive load condition (counting nines) of Bröder and Schiffer’s (2003a) experiment, described earlier, in which 60% of participants probably used a compensatory rule. Memory retrieval, on the other hand, does appear to impose cognitive costs and promotes early stopping rules, but these costs can be reduced by the use of integrated, pictorial stimuli. Whereas the distinction between inferences from givens and inferences from memory is clear in controlled laboratory experiments, it may be less so in applied everyday contexts, where we often confront situations involving both kinds of information. For instance, consumer choices may depend on an attribute matrix provided in a consumer report magazine as well as on facts we remember about the options. Actual decisions therefore probably involve a mixture of information sources and hence a mixture of different cognitive costs.

5 Conclusions

In this review, we focused on our own empirical work that was stimulated by, and took place within, the adaptive toolbox metaphor. Given that we did not even report on the numerous other studies conducted within this framework, it is fair to conclude that the toolbox has been extremely fruitful in reanimating interest in adaptive multi-attribute decision making, supplementing Payne et al.’s (1993) work on preferences with work on inferences. Because metaphors are not “correct” or “wrong” per se (they are all wrong, as Ebbinghaus [1885] already noted), they have to be evaluated by their fruitfulness. In this respect, the toolbox fares quite well. Whether the box crammed with disparate tools is a more adequate metaphor than an “adjustable spanner” (Newell, 2005) remains to be seen. However, the success of evidence accumulation models in many other areas of cognition leads us to be optimistic that they can perhaps also be fruitfully applied to the more “controlled” processes in the decision making domain (Busemeyer & Townsend, 1993; Wallsten & Barton, 1982). Techniques for specifying and empirically testing evidence accumulation thresholds are arguably more advanced and established than are models of “strategy selection” (e.g., Vickers, 1979). Thus, although the two model classes may currently be difficult to distinguish at the data level, future investigations may determine the superior metaphor.
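The threshold metaphor can be illustrated with a minimal sketch in the spirit of Lee and Cummins (2004), though this is our simplified rendering, not their model: evidence for each option accumulates cue by cue (in validity order, weighted by log odds), and search stops as soon as the accumulated evidence crosses an adjustable threshold. A low threshold mimics TTB; a high threshold mimics full compensatory integration:

```python
import math

def accumulate(cues_a, cues_b, validities, threshold):
    """Sequential evidence accumulation sketch. Cues are binary and
    ordered by validity. Each discriminating cue adds log-odds
    evidence toward A (positive) or B (negative); search stops when
    |evidence| reaches the threshold. With a low threshold the first
    discriminating cue decides (TTB-like); with a high threshold all
    cues are integrated (WADD-like)."""
    evidence = 0.0
    for a, b, v in zip(cues_a, cues_b, validities):
        if a != b:
            step = math.log(v / (1 - v))  # evidence weight of this cue
            evidence += step if a > b else -step
        if abs(evidence) >= threshold:
            break
    if evidence == 0:
        return None  # guess
    return "A" if evidence > 0 else "B"

# The most valid cue favours B, but the two remaining cues together
# favour A: a low threshold yields the one-reason answer, a high
# threshold yields the compensatory answer.
ttb_like  = accumulate([0, 1, 1], [1, 0, 0], [0.7, 0.65, 0.6], threshold=0.5)
wadd_like = accumulate([0, 1, 1], [1, 0, 0], [0.7, 0.65, 0.6], threshold=10.0)
```

On this view, “strategy selection” reduces to setting a single continuous parameter, which is what makes the two metaphors so hard to separate at the level of choice data.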

One main result emerging from the synopsis of the work reported here is a fundamental re-evaluation of the contingency model and its successors (Beach & Mitchell, 1978; Christensen-Szalanski, 1978, 1980; Chu & Spires, 2003; Payne et al., 1993). The widespread credo is that compensatory strategies are cognitively more costly than noncompensatory ones but are, in turn, more accurate; hence, there is a conflict demanding a compromise between the two. The second conviction (higher accuracy) has already been called into question by the toolbox proponents, who showed via simulations that noncompensatory rules can be as accurate as compensatory ones (Gigerenzer et al., 1999). This clearly came as a surprise and has since been replicated and investigated more thoroughly (Hogarth & Karelaia, 2005, 2007). Interestingly, the toolbox rhetoric, on the other hand, relies heavily on the assumption that compensatory strategies are cognitively costly. As all of our results suggest, this does not seem to be the case: compensatory strategies were performed under high cognitive load, and they were subject to “thoughtless” routines. Hence, multiple pieces of information can be combined compensatorily without the “unlimited resources” postulated for “rational demons” (see Gigerenzer & Todd, 1999). Whether this is done sequentially in a simple random walk process (e.g., Lee & Cummins, 2004), by simultaneous constraint satisfaction in a network model (e.g., Glöckner & Betsch, 2008), or in some other way remains an open question. What can be costly is information search, where the costs are determined by either extrinsic (time pressure) or intrinsic (memory retrieval) factors.
We do not want to suggest that the execution of compensatory strategies is never costly: several studies have shown that the order of presentation, the presentation format (numerical vs. verbal), or the similarity of alternatives strongly influence the way people assess information (e.g., Schkade & Kleinmuntz, 1994; Stone & Schkade, 1991), presumably reflecting different levels of processing ease. Furthermore, there will certainly be costs in very complex situations with many alternatives and attributes. However, our results suggest that, in moderately complex situations, the cognitive costs of strategy execution may have been overestimated relative to the costs of strategy selection.

A closely related result is that enhanced capacity increased the proportion of people using simple heuristics — in environments in which they were appropriate! This was true for intelligence (Bröder, 2003) as well as for free working memory capacity (Bröder & Schiffer, 2003a). Since these factors had no direct effects on strategy execution but rather on the adaptivity of strategy use, we conclude that the decision of how to decide (i.e., which strategy to select) is the most demanding task in a new decision situation. Although there has been some speculation about this deliberation process (Payne et al., 1993), it has been neglected as a target of research, probably because the empirical investigation of the decision rules themselves is challenging enough and researchers have avoided adding another level of complexity. We argue that the selection process is the crux of the matter, since it is this process that consumes cognitive resources. Without modeling this demanding process, any theory of tool selection or threshold adjustment remains incomplete.

Footnotes

Ben Newell acknowledges the support of the Australian Research Council (Grant: DP 0558181) and the University of New South Wales for awarding him the John Yu Fellowship to Europe. Both authors would also like to thank the Max Planck Institute for Research on Collective Goods for hosting Ben Newell’s visit and the symposium.

References

Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.
Betsch, T., & Haberstroh, S. (2005). Preface. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. ix–xxv). Mahwah, NJ: Lawrence Erlbaum Associates.
Bröder, A. (2000). Assessing the empirical validity of the “Take The Best” heuristic as a model of human probabilistic inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1332–1346.
Bröder, A. (2003). Decision making with the “adaptive toolbox”: Influence of environmental structure, intelligence, and working memory load. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 611–625.
Bröder, A. (in press). The quest for take the best - Insights and outlooks from experimental research. To appear in P. Todd, G. Gigerenzer, & the ABC Research Group, Ecological rationality: Intelligence in the world. New York: Oxford University Press.
Bröder, A., & Eichler, A. (2001). Individuelle Unterschiede in bevorzugten Entscheidungsstrategien [Individual differences in preferred decision strategies]. In A. C. Zimmer, K. Lange, et al. (Eds.), Experimentelle Psychologie im Spannungsfeld von Grundlagenforschung und Anwendung (pp. 68–75) [CD-ROM].
Bröder, A., & Eichler, A. (2006). The use of recognition and additional cues in inferences from memory. Acta Psychologica, 121, 275–284.
Bröder, A., & Gaissmaier, W. (2007). Sequential processing of cues in memory-based multi-attribute decisions. Psychonomic Bulletin and Review, 14, 895–900.
Bröder, A., & Schiffer, S. (2003a). Bayesian strategy assessment in multi-attribute decision research. Journal of Behavioral Decision Making, 16, 193–213.
Bröder, A., & Schiffer, S. (2003b). “Take The Best” versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132, 277–293.
Bröder, A., & Schiffer, S. (2006a). Adaptive flexibility and maladaptive routines in selecting fast and frugal decision strategies. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 904–918.
Bröder, A., & Schiffer, S. (2006b). Stimulus format and working memory in fast and frugal strategy selection. Journal of Behavioral Decision Making, 19, 361–380.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459.
Chater, N., Oaksford, M., Nakisa, R., & Redington, M. (2003). Fast, frugal, and rational: How rational norms explain behavior. Organizational Behavior and Human Decision Processes, 90, 63–86.
Christensen-Szalanski, J. J. J. (1978). Problem solving strategies: A selection mechanism, some implications and some data. Organizational Behavior and Human Performance, 22, 307–323.
Christensen-Szalanski, J. J. J. (1980). A further examination of the selection of problem solving strategies: The effects of deadlines and analytic aptitudes. Organizational Behavior and Human Performance, 25, 107–122.
Chu, P. C., & Spires, E. E. (2003). Perceptions of accuracy and effort of decision strategies. Organizational Behavior and Human Decision Processes, 91, 203–214.
Cosmides, L., & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41–77.
Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In G. Gigerenzer, P. M. Todd, & the ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 97–118). Oxford: Oxford University Press.
Ebbinghaus, H. (1885). Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie [On memory: Investigations in experimental psychology]. Leipzig: Duncker & Humblot [Reprint 1966, Amsterdam: E. J. Bonset].
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650–669.
Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal heuristics: The adaptive toolbox. In G. Gigerenzer, P. M. Todd, & the ABC Research Group, Simple heuristics that make us smart (pp. 3–34). New York: Oxford University Press.
Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. Oxford: Oxford University Press.
Glöckner, A., & Betsch, T. (2008). Modelling option and strategy choices with connectionist networks: Towards an integrative model of automatic and controlled decision making. Judgment and Decision Making, 3, 215–228.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.
Hausmann, D., & Läge, D. (2008). Sequential evidence accumulation in decision making: The individual desired level of confidence can explain the extent of information acquisition. Judgment and Decision Making, 3, 229–243.
Hogarth, R. M., & Karelaia, N. (2005). Ignoring information in binary choice with continuous variables: When is less “more”? Journal of Mathematical Psychology, 49, 115–124.
Hogarth, R. M., & Karelaia, N. (2007). Heuristic and linear models of judgment: Matching rules and environments. Psychological Review, 114, 733–758.
Juslin, P., & Persson, M. (2002). PROBabilities from EXemplars (PROBEX): A “lazy” algorithm for probabilistic inference from generic knowledge. Cognitive Science, 26, 563–607.
Lee, M. D., & Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the “take the best” and the “rational” models. Psychonomic Bulletin and Review, 11, 343–352.
Luchins, A. S., & Luchins, E. H. (1959). Rigidity of behavior: A variational approach to the effect of Einstellung. Eugene: University of Oregon Press.
Martignon, L., & Hoffrage, U. (1999). Why does one-reason decision making work? A case study in ecological rationality. In G. Gigerenzer, P. M. Todd, & the ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 119–140). Oxford: Oxford University Press.
Newell, B. R. (2005). Re-visions of rationality? Trends in Cognitive Sciences, 9, 11–15.
Newell, B. R., & Bröder, A. (2008). Cognitive processes, models and metaphors in decision research. Judgment and Decision Making, 3, 195–204.
Newell, B. R., Collins, P., & Lee, M. D. (2007). Adjusting the spanner: Testing an evidence accumulation model of decision making. In D. McNamara & G. Trafton (Eds.), Proceedings of the 29th Annual Conference of the Cognitive Science Society (pp. 533–538). Austin, TX: Cognitive Science Society.
Newell, B. R., & Fernandez, D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: Two critical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 333–346.
Newell, B. R., Rakow, T., Weston, N. J., & Shanks, D. R. (2004). Search strategies in decision-making: The success of success. Journal of Behavioral Decision Making, 17, 117–137.
Newell, B. R., & Shanks, D. R. (2003). Take-the-best or look at the rest? Factors influencing ‘one-reason’ decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 53–65.
Newell, B. R., & Shanks, D. R. (2004). On the role of recognition in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 923–935.
Newell, B. R., & Shanks, D. R. (2007). Perspectives on the tools of decision making. In M. Roberts (Ed.), Integrating the mind (pp. 131–151). Hove, UK: Psychology Press.
Newell, B. R., Weston, N. J., & Shanks, D. R. (2003). Empirical tests of a fast and frugal heuristic: Not everyone “takes-the-best”. Organizational Behavior and Human Decision Processes, 91, 82–96.
Over, D. E. (2003). From massive modularity to metarepresentation: The evolution of higher cognition. In D. E. Over (Ed.), Evolution and the psychology of thinking: The debate (pp. 121–144). Hove: Psychology Press.
Pachur, T., Bröder, A., & Marewski, J. (in press). The recognition heuristic in memory-based inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision Making.
Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal of Psychology, 45, 255–287.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge: Cambridge University Press.
Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 251–271.
Rakow, T., Newell, B. R., Fayers, K., & Hersby, M. (2005). Evaluating three criteria for establishing cue-search hierarchies in inferential judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1088–1104.
Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 150–162.
Rieskamp, J. (2006). Perspectives of probabilistic inferences: Reinforcement learning and an adaptive network compared. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 1355–1370.
Rieskamp, J. (2008). The importance of learning when making inferences. Judgment and Decision Making, 3, 261–277.
Rieskamp, J., & Hoffrage, U. (1999). When do people use simple heuristics and how can we tell? In G. Gigerenzer, P. M. Todd, & the ABC Research Group, Simple heuristics that make us smart (pp. 141–167). New York: Oxford University Press.
Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of how people learn to select strategies. Journal of Experimental Psychology: General, 135, 207–236.
Schkade, D. A., & Kleinmuntz, D. N. (1994). Information displays and choice processes: Differential effects of organization, form, and sequence. Organizational Behavior and Human Decision Processes, 57, 319–337.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66.
Stone, D. N., & Schkade, D. A. (1991). Numeric and linguistic information representation in multiattribute choice. Organizational Behavior and Human Decision Processes, 49, 42–59.
Vickers, D. (1979). Decision processes in visual perception. New York: Academic Press.
Wallsten, T. S., & Barton, C. (1982). Processing probabilistic multidimensional information for decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 361–384.

Figure 1: Adaptive strategy selection demonstrated by the percentage of participants classified as TTB users in the stock market game as a function of the expected payoff of TTB relative to a compensatory strategy. Filled circles are experimental conditions in which the task was new to participants; they show a clear adaptive trend. Open squares depict the maladaptive routines after the environmental payoff structure had changed (Bröder & Schiffer, 2006a), and the triangle shows the high cognitive load condition of Bröder and Schiffer (2003a).