
The limited value of precise tests of the recognition heuristic

Published online by Cambridge University Press:  01 January 2023

Thorsten Pachur*
Affiliation:
University of Basel, Department of Psychology, Missionsstrasse 60–62, 4055, Basel, Switzerland

Abstract

The recognition heuristic models the adaptive use and dominant role of recognition knowledge in judgment under uncertainty. Of the several predictions that the heuristic makes, empirical tests have predominantly focused on the proposed noncompensatory processing of recognition. Some authors have emphasized that the heuristic needs to be scrutinized based on precise tests of the exclusive use of recognition. Although precise tests have clear merits, I critically evaluate the value of such tests as they are currently employed. First, I argue that using precise measures of the exclusive use of recognition has to go beyond showing that the recognition heuristic—like every model—cannot capture reality completely. Second, I illustrate how precise tests based on response times can lead to unsubstantiated conclusions if the fact that the recognition heuristic does not model the recognition judgment itself is ignored. Finally, I highlight two key but so far neglected aspects of the recognition heuristic: (a) the connection between recognition memory and the recognition heuristic; and (b) the mechanisms underlying the adaptive use of recognition.

Type: Research Article

The authors license this article under the terms of the Creative Commons Attribution 3.0 License.

Copyright © The Authors [2011]. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

“When I complain of my memory, they seem not to believe I am in earnest, and presently reprove me as though I accused myself for a fool; not discerning the difference between memory and understanding. [E]xperience rather daily showing us […] that a strong memory is commonly coupled with an infirm judgment.” (de Montaigne, 1595/2003, p. 22)

1 Introduction

In his Essays, the French philosopher Michel de Montaigne suggested that a good memory is not necessarily coupled with good decision making. In fact, he seems to imply that decisions can sometimes even benefit from deficits in memory. How could this be possible? One answer is that because structures in the mind often reflect meaningful regularities in the world (e.g., Anderson & Schooler, 1991; Pachur, Schooler, & Stevens, in press), blanks in memory can be exploited for making inferences about the world.

The notion that judgments feed on dynamics in memory has been taken up in several models of decision making. For instance, Tversky and Kahneman (1973) proposed that the ease with which instances or occurrences can be brought to mind “is an ecologically valid clue” (p. 209) about the world and that an availability heuristic based on this ease might operate when people judge probability or frequency. More recently, Goldstein and Gigerenzer (2002) described the recognition heuristic as a model of how people recruit recognition memory when making inferences more generally. In contrast to the availability heuristic, the recognition heuristic is a clearly specified computational model with precise search, stopping, and decision rules. Moreover, the recognition heuristic was proposed as an adaptive mental tool with specific boundary conditions (Gigerenzer, Todd, & the ABC Research Group, 1999).¹

The recognition heuristic makes several testable predictions about recognition and its use in decision making. First, as the recognition heuristic is assumed to be ecologically rational (i.e., exploiting a regularity in the environment), recognition should be frequently correlated with quantities in the world. Second, people’s use of the recognition heuristic should be sensitive to the structure of the environment. Third, the recognition heuristic predicts that recognition is processed in a noncompensatory fashion—that is, recognition should supersede further cue knowledge. Finally, the heuristic predicts, under certain conditions, a counterintuitive less-is-more effect, where recognizing fewer objects can lead to more accurate inferences than recognizing more objects. (For an overview of tests of these predictions, see Pachur, Todd, Gigerenzer, Schooler, & Goldstein, 2011.)

The precise definition of the recognition heuristic and its assumed role as an adaptive mental tool have made it an attractive object of study. Perhaps not surprisingly, not all empirical investigations have found evidence supporting the heuristic. Of the several predictions that the heuristic makes, it seems fair to say that the assumed noncompensatory processing of recognition has received the greatest attention so far—and has generated the strongest objections (Bröder & Eichler, 2006; Glöckner & Bröder, 2011; Hilbig & Pohl, 2008, 2009; Hilbig, Erdfelder, & Pohl, 2010; Hilbig, Pohl, & Bröder, 2009; Hochman, Ayal, & Glöckner, 2010; Newell & Fernandez, 2006; Newell & Shanks, 2004; Oeusoonthornwattana & Shanks, 2010; Oppenheimer, 2003; Pachur, Bröder, & Marewski, 2008; Pohl, 2006; Richter & Späth, 2006).

Some authors have emphasized the need for precise tests of the recognition heuristic and have (a) developed precise measures of the exclusive use of recognition, arguing that “precise models deserve precise measures”, and (b) conducted precise tests, based on response times, of the information processing in recognition-based inference (Glöckner & Bröder, 2011; Hilbig, 2010a, 2010b; Hilbig, Erdfelder, et al., 2010; Hilbig & Pohl, 2008, 2009; Hilbig & Richter, 2011; Hilbig, Scholl, & Pohl, 2010). In the following, I discuss the value of such precise tests as they are currently used and argue that they have done little to advance our understanding of recognition-based inference. In addition, I highlight two key issues underlying the use of recognition in decision making that seem to have been neglected as a result of the strong focus on testing the noncompensatory processing of recognition. First, we need to better understand the relationship between recognition as studied in the memory literature and the recognition memory tapped by the recognition heuristic. Second, I summarize proposals of how people might adaptively adjust their reliance on recognition across different situations. Importantly, I do not argue that the development of precise measures or demonstrations of the recognition heuristic’s failure to predict data should be ignored. Rather, I call for a more constructive way to use these findings for refining models of memory-based decision making.

2 Why precise tests of the recognition heuristic are not always useful

2.1 Precise measures of the exclusive use of recognition

The key factor enabling the recognition heuristic’s ecological rationality is that recognizing an object is often correlated with other properties of the object and can thus be used to infer these properties (Goldstein & Gigerenzer, 2002). Moreover, recognition often correlates with other cues (Marewski & Schooler, 2011). To illustrate, a recognized city is often more populous than an unrecognized one, and it is also more likely to have a university or an international airport (both of which also predict city size). This collinearity between cues is a common situation in the real world, and it is also key to Brunswik’s (1952) notion of vicarious functioning. Moreover, Davis-Stober, Dana, and Budescu (2010) have shown that under conditions of collinearity, restricting search to only one cue (as proposed by the recognition heuristic) can actually represent the optimal strategy for making inferences.

However, the fact that recognition is often correlated with other cues also makes it difficult to rigorously test the recognition heuristic. Specifically, Hilbig and colleagues pointed out that high adherence rates—that is, that people often infer a recognized object to have a higher criterion value than an unrecognized one—do not necessarily mean that people use the recognition heuristic (Hilbig & Pohl, 2008; Hilbig, Erdfelder, et al., 2010). As recognition and other cues often hint at the same object, people might have considered these cues as well (inconsistent with the heuristic’s predicted noncompensatory processing of recognition). To address this problem, measures were developed that reflect the exclusive reliance on recognition more precisely than the adherence rate. For instance, Hilbig and Pohl’s (2008) discrimination index (DI) expresses the degree to which the probability that the decision maker chooses a recognized object differs between cases where recognition leads to a correct (C) and cases where recognition leads to a false (F) response (for a similar approach, see Pachur & Hertwig, 2006; Pachur, Mata, & Schooler, 2009). The index is defined as DI = p(chooseR|C) − p(chooseR|F). As the recognition heuristic predicts that the decision maker ignores further cue knowledge when making inferences within a particular environment, DI should be zero. In various investigations, however, Hilbig and colleagues showed that for most participants DI is larger than zero—even when adherence rates are rather high.
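
To make the definition concrete, the following minimal sketch computes DI from trial-level data for one participant; the function name and the arrays are hypothetical and serve only to illustrate the formula given above, restricted to pairs in which exactly one object is recognized.

```python
import numpy as np

def discrimination_index(chose_recognized, recognition_correct):
    """Hilbig and Pohl's (2008) DI = p(chooseR|C) - p(chooseR|F).

    chose_recognized: per-trial flags, True if the recognized object was chosen.
    recognition_correct: per-trial flags, True if choosing the recognized object
        would have been the correct response (C cases), False otherwise (F cases).
    Pass in only trials on which exactly one of the two objects was recognized.
    """
    chose = np.asarray(chose_recognized, dtype=bool)
    correct = np.asarray(recognition_correct, dtype=bool)
    p_choose_given_correct = chose[correct].mean()
    p_choose_given_false = chose[~correct].mean()
    return p_choose_given_correct - p_choose_given_false

# Hypothetical data: the recognized object is always chosen when recognition
# points to the correct answer, but only half the time when it does not.
chose = [1, 1, 1, 1, 1, 0, 1, 0]
correct = [1, 1, 1, 1, 0, 0, 0, 0]
print(discrimination_index(chose, correct))  # 0.5
```

A DI of zero is what strict use of the recognition heuristic would produce; values above zero indicate that choices discriminate between cases in which recognition is valid and cases in which it is not.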

In a further development, Hilbig, Erdfelder, et al. (2010) proposed a multinomial measurement model (r-model) that allows one to estimate the probability with which the decision maker applies the recognition heuristic (i.e., processes recognition in a noncompensatory fashion) as well as the probability that further cues are inspected.² The model also makes it possible to disentangle systematic and unsystematic (i.e., use of further cues vs. guessing) factors underlying nonreliance on the recognition heuristic. In applications of the r-model, Hilbig, Erdfelder, et al. showed that the probability that participants strictly follow recognition is often considerably lower than adherence rates alone would suggest. They concluded that, inconsistent with the prediction of the recognition heuristic, “information integration beyond recognition plays a vital role” (p. 123).
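
The general logic of such a measurement model can be illustrated with a deliberately simplified processing tree, fitted by maximum likelihood to the four observable outcome categories (chose recognized/unrecognized × correct/false) for pairs in which exactly one object is recognized. This is a sketch under my own simplifying assumptions, not the published r-model, and the counts are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical outcome counts: [chose recognized & correct, chose recognized & false,
#                               chose unrecognized & correct, chose unrecognized & false]
counts = np.array([62, 10, 8, 20])

def category_probs(r, a, b):
    """Simplified tree: with probability r the recognition heuristic is applied
    (the recognized object is chosen regardless of other knowledge); otherwise
    further knowledge is used, pointing to the correct object with probability b.
    Recognition itself points to the correct object with probability a."""
    return np.array([
        a * (r + (1 - r) * b),              # chose recognized, correct
        (1 - a) * (r + (1 - r) * (1 - b)),  # chose recognized, false
        (1 - a) * (1 - r) * b,              # chose unrecognized, correct
        a * (1 - r) * (1 - b),              # chose unrecognized, false
    ])

def neg_log_likelihood(params):
    p = category_probs(*params)
    return -np.sum(counts * np.log(p + 1e-12))

fit = minimize(neg_log_likelihood, x0=[0.7, 0.7, 0.6], bounds=[(0.001, 0.999)] * 3)
r_hat, a_hat, b_hat = fit.x
print(f"Estimated probability of noncompensatory use of recognition: r = {r_hat:.2f}")
```

The point of such a model is precisely the one made in the text: the estimate of r can be substantially lower than the raw adherence rate, because some choices of the recognized object are attributed to knowledge rather than to the heuristic.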

Clearly, these results demonstrate that people do not always strictly adhere to the recognition heuristic and that this is not merely due to unsystematic factors (i.e., guessing or inattention). Rather, violations of the heuristic’s predictions are often systematic, indicating that at least some people do not always ignore useful information beyond recognition. This may suggest that the noncompensatory recognition heuristic is a less adequate model than a compensatory strategy, which integrates several cues; and some authors have concluded that “any theory of comparative judgment must allow for use of further knowledge of information in recognition cases” (Hilbig, Erdfelder, et al., 2010, p. 132). However, a comparison of the recognition heuristic with various compensatory models showed that, although the recognition heuristic does not predict the data perfectly, it still provides the best account currently available (Marewski, Gaissmaier, Schooler, Goldstein, & Gigerenzer, 2010).

How then should we evaluate the violations of the recognition heuristic’s predictions as revealed by precise measures of the exclusive use of recognition (Hilbig & Pohl, 2008; Hilbig & Richter, 2011)? In my view, Marewski et al.’s (2010) results underline the limited value of using highly precise measures as applied in tests by Hilbig and Pohl (2009) and Hilbig, Erdfelder, et al. (2010). In fact, one way to interpret Hilbig, Erdfelder, et al.’s critical results for the recognition heuristic is that they remind us that the recognition heuristic merely models, and therefore simplifies, reality. But such an insight is not very useful if it remains unclear how exactly the recognition heuristic fails to capture the decision-making process—and how to model the cognitive process instead. (In the next section, I discuss candidate mechanisms that might underlie people’s decision to suspend the recognition heuristic.) Without doubt, high precision in measurement is a useful goal to advance understanding of a phenomenon. The proposed precise measures of the use of the recognition heuristic (i.e., DI and the r-model, as well as Pachur & Hertwig’s, 2006, d′) therefore represent important progress over simple adherence rates and clearly should be used when investigating, for instance, adaptive changes or individual differences in the use of the heuristic (e.g., Pachur & Hertwig, 2006; Pachur et al., 2009). However, the development of more precise measures should go hand in hand with the development of more precise and accurate models and should not stop with demonstrations that a model somehow fails to predict some data.

Note that this issue is not restricted to the recognition heuristic. At least for models in the behavioral sciences, given sufficiently precise measures, violations can probably be found for every model ever proposed. For instance, prospect theory (Tversky & Kahneman, 1992), one of the most prominent models of risky choice, has clearly been rejected by some data (e.g., Birnbaum & Chavez, 1997; Brandstätter, Gigerenzer, & Hertwig, 2006; for an overview, see Birnbaum, 2008). Nevertheless, prospect theory still proves useful for investigating and quantifying risky choice (e.g., Pachur, Hanoch, & Gummerum, 2010) and continues to stimulate new challenges (Brandstätter et al., 2006). Similarly, in classification research, I am not aware of a model that does not fail to account for some data given sufficiently precise measures (for an overview, see Rouder & Ratcliff, 2004). Nevertheless, exemplar models, prototype models, and rule-based models (as well as combinations thereof) still offer useful frameworks for understanding how people structure objects in the world.

To summarize: developing precisely formulated cognitive models is an important goal for understanding behavior, and a precise computational model like the recognition heuristic is easier to test than a vaguely described model like the availability heuristic. Nevertheless, higher precision in modeling also exacts a price: a precise model will be easier to falsify than a vague model, and falsification becomes more likely the more precise the measures used. Therefore, to preserve the purpose of modeling, refinement in measurement should be accompanied by advances in model development. Refuting a model does not automatically confirm alternative but unspecified and untested models. Importantly, once an alternative model has been proposed, its descriptive superiority has to be demonstrated in a comparative test against the “null” model (see Brighton & Gigerenzer, 2011; Gigerenzer & Goldstein, 2011; Marewski et al., 2010). Although—as I have argued—precise measures can be of only limited value in isolated tests of a model, precise measures may be more useful in the context of such comparative tests.

Finally, let us not forget that descriptive adequacy, though a central dimension for model evaluation, is not the only one. For instance, Shiffrin, Lee, Kim, and Wagenmakers (2008) highlighted that, in addition to achieving a “basic level of descriptive adequacy” (p. 1249), a good model should provide insight, facilitate generalization, direct new empirical explorations, and foster theoretical progress. Demonstrations that the recognition heuristic cannot capture reality perfectly scarcely impair the achievements of the heuristic on these dimensions (e.g., predicting the less-is-more effect, modeling ecological rationality), though theory development should not stop there.

2.2 Response time tests of the recognition heuristic

The recognition heuristic models inferences from memory, that is, inferences for which cue values have to be retrieved from memory. Although search processes in memory are not amenable to direct observation, it has been proposed that they are nevertheless reflected in response time patterns (e.g., Bergert & Nosofsky, 2007; Pachur & Hertwig, 2006; Sternberg, 1966). Accordingly, one could argue that precise tests of the recognition heuristic should test the implications of the assumed limited information search for response times. However, as criticized by some (e.g., Dougherty, Franco-Watkins, & Thomas, 2008), when proposing the recognition heuristic, Goldstein and Gigerenzer (2002) did not provide a model of the recognition process and its temporal dynamics. As I illustrate next, this omission not only misses an opportunity for theory integration (Dougherty et al., 2008; Katsikopoulos, 2010; Pachur, 2010; Pleskac, 2007; Schooler & Hertwig, 2005); neglecting the dynamics of the recognition process can also limit the value of precise response time tests of the recognition heuristic.

Based on Goldstein and Gigerenzer’s (2002) description of the recognition heuristic, Hilbig and Pohl (2009; see also Glöckner & Bröder, 2011) derived and tested several response time predictions of the recognition heuristic. For instance, response times should be faster when a recognized object is compared to an unrecognized object than when two recognized objects are compared. Further, when a recognized object is compared to an unrecognized object, the response time should be unaffected by (a) the amount of cue knowledge available for the recognized object, and (b) whether recognition leads to a correct or an incorrect decision. These predictions are based on the premise that response times in recognition-based inference provide a pure measure of the amount of processed cue information. Contradicting the derived predictions, in empirical tests Hilbig and Pohl did not find that people’s response times were consistently faster when only one rather than both objects were recognized. Moreover, response times were faster for recognized objects for which additional knowledge was available than for recognized objects for which no additional knowledge was available. From these results, the authors concluded that “support was obtained for the integration of information and the impact of differences in evidence between objects. Decision times … supported the notion that the (speed of the) decision process is determined by the degree to which one object is superior and thus by the degree of conflict rather than by recognition alone” (p. 1303). They argued that the observed patterns are more in line with compensatory, “evidence accumulation” models.³

But does it make sense to derive and test response-time predictions from a model that does not account for the recognition process? It is well established that the temporal dynamics of the recognition process itself are sensitive to various factors, such as word frequency and word length (e.g., O’Regan & Jacobs, 1992). Moreover, the amount of time required for a recognition judgment—i.e., fluency—might depend strongly on the decision maker’s certainty in the recognition judgment. Erdfelder, Küpper-Tetzel, and Mattern (2011) showed that a model that integrates the dynamics of the recognition process with the recognition heuristic can account for response time patterns that Hilbig and Pohl (2009) interpreted as evidence against the recognition heuristic. To model the recognition process, Erdfelder et al. used a two-high-threshold model (Snodgrass & Corwin, 1988; see also Bröder & Schütz, 2009). According to the model, fluency is mainly a function of the decision maker’s “memory state”—that is, how certain she is that the object was encountered before. Fluency is highest under certainty and lowest under uncertainty, where the recognition judgments concerning the recognized and the unrecognized objects are based on guessing.
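
For readers unfamiliar with this model class, here is a generic two-high-threshold sketch; the parameter names are mine, and the link to fluency follows the verbal description above rather than Erdfelder et al.'s exact equations.

```python
def two_high_threshold(d_old, d_new, g):
    """Generic two-high-threshold recognition model.

    d_old: probability that an old (previously encountered) item is detected as old
    d_new: probability that a new item is detected as new
    g:     probability of guessing "old" when neither threshold is exceeded
    Returns the predicted hit and false-alarm rates.
    """
    p_hit = d_old + (1 - d_old) * g  # "old" response to an old item
    p_false_alarm = (1 - d_new) * g  # "old" response to a new item
    return p_hit, p_false_alarm

print(two_high_threshold(d_old=0.7, d_new=0.6, g=0.5))  # (0.85, 0.2)

# In the account sketched above, "recognized" judgments reached from the
# certainty state (probability d_old) are fast, whereas "recognized" judgments
# produced by guessing (probability (1 - d_old) * g) are slow, so fluency can
# differ across objects even when no cue knowledge beyond recognition is used.
```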

How could Erdfelder et al.’s (2011) model—integrating the recognition heuristic with a two-high-threshold memory model—account for the finding that the time people take to choose a recognized object varies as a function of whether they have additional knowledge or not, even if this additional knowledge is ignored? The main reason is that, in the real world, people’s memory state for an object—and, by implication, the fluency of the object’s name—is strongly correlated with the availability of further knowledge about the object (Marewski & Schooler, 2011). Moreover, fluency is often correlated with the criterion (Hertwig, Herzog, Schooler, & Reimer, 2008). As a result, a person is more likely to recognize swiftly those objects (a) about which she can retrieve further knowledge and (b) that score high on the criterion. People may thus decide faster because they recognize the object faster—and not because of less conflict during the inference process (as argued by Hilbig & Pohl, 2009). In other words, the observation that recognition-based responses are faster when they are correct or when additional knowledge about the recognized object is available does not necessarily mean that recognition was used in a compensatory fashion. Finally, Erdfelder et al.’s model can also account for the finding that response times in cases in which only one object is recognized are not consistently faster than in cases in which both objects are recognized.

Taken together, combining the recognition heuristic with an established model of the recognition process reveals that response time patterns that have been interpreted as supporting compensatory processes can be fully consistent with a noncompensatory use of recognition (see Erdfelder et al., 2011, pp. 18–19). Precise tests of the recognition heuristic can thus be misleading if the precision of the test is not matched to the precision (or completeness) of the model. For a derivation of response time predictions for the recognition heuristic based on the ACT-R architecture, see Marewski and Mehlhorn (in press).

Admittedly, Hilbig and Pohl (2009, Experiment 3) attempted to control for possible differences in fluency in one experiment and repeated their analyses based on residual response times (after regressing response times on fluency). They found that, on the aggregate level, similar patterns emerged as when fluency was not controlled for. It is well known, however, that analyses on the aggregate level can hide substantial individual differences in strategy use. Several studies have shown that, even if only a small proportion of participants choose systematically differently from what the recognition heuristic predicts, the pattern on the aggregate level can contradict the recognition heuristic (Gigerenzer & Goldstein, 2011; Pachur et al., 2008). This also holds for response time data. As a reanalysis of data reported by Pachur et al. reveals, response time patterns can differ considerably between different strategy users. As shown in Figure 1, for the 51 of the 105 participants included in the analysis who were classified as not following the recognition heuristic (for details see Pachur et al., 2008, pp. 203–204), response times (controlling for fluency) were considerably faster when there was less conflicting knowledge (i.e., many cues supporting recognition) than when there was more conflicting knowledge (in the experiments, participants always had three additional cues, which either supported or contradicted recognition). For the 54 participants classified as following the recognition heuristic (because they always chose the recognized object), by contrast, this trend was considerably attenuated (although it did not disappear completely). Focusing on the aggregate level only might thus lead to the erroneous conclusion that the response-time patterns of all participants were strongly affected by the amount of conflicting knowledge.
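
As an illustration of the residual-response-time control described above, here is a minimal sketch; the data are simulated and the variable names are hypothetical, but the analysis step (regress response times on fluency, then compare the residuals) is the one at issue.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Simulated per-trial data for one participant (all values hypothetical)
fluency = rng.normal(600, 80, size=n_trials)                 # recognition speed in ms
rt = 900 + 0.8 * fluency + rng.normal(0, 60, size=n_trials)  # decision time in ms
knowledge_available = rng.integers(0, 2, size=n_trials).astype(bool)

# Regress decision times on fluency and keep the residuals
slope, intercept = np.polyfit(fluency, rt, 1)
residual_rt = rt - (intercept + slope * fluency)

# Compare residual response times between trials with and without additional knowledge
print(residual_rt[knowledge_available].mean(), residual_rt[~knowledge_available].mean())
```

The same comparison can be run separately for participants classified as compensatory versus noncompensatory users of recognition, which is the individual-level analysis that Figure 1 summarizes.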

Figure 1: Response times in Pachur, Bröder, and Marewski (2008; Experiments 1–3 collapsed), separately for participants classified as compensatory users or noncompensatory users of recognition. Shown are the estimated marginal means (based on response times z-standardized for each participant), controlling for the fluency of the recognized and unrecognized objects.

3 Neglected issues in studying recognition-based inference

Without doubt, the thesis that recognition supersedes additional cue knowledge is a strong prediction. It may therefore not seem too surprising that a large proportion of empirical tests have focused on this aspect of the recognition heuristic. However, the recognition heuristic offers a much richer conceptual framework for studying adaptive decision making. Moreover, it provides a great opportunity for bridging memory and decision-making research (e.g., Dougherty, Gronlund, & Gettys, 2003; Weber, Goldstein, & Barlas, 1995; see also Tomlinson, Marewski, & Dougherty, 2011). In the following, I highlight two important aspects of the recognition heuristic that seem to have been overlooked as a result of the overwhelming attention to the predicted noncompensatory processing: (a) the connection between research on the recognition heuristic and research on recognition memory, and (b) the mechanisms underlying people’s adaptive use of the recognition heuristic.

3.1 Different types of recognition memory

Above I have illustrated how ignoring the processes underlying the recognition judgment can make precise tests of the recognition heuristic based on response times rather uninformative. But the need to better understand the contribution of recognition memory to recognition-based inference goes further. For instance, Pleskac (2007) found in mathematical analyses that the accuracy of recognition memory should play a crucial role in the performance of the recognition heuristic (i.e., the recognition validity). In a study comparing recognition-based inferences by young and older adults, however, Pachur et al. (2009) found no association between the accuracy of people’s recognition memory and their individual recognition validity. This discrepancy suggests that the type of recognition memory usually studied in the memory literature might differ from the type of recognition memory tapped by the recognition heuristic. In common measures of recognition memory, participants are first asked to study a list of known words and are later asked to discriminate these studied words from other known words that were not studied. This episodic recognition thus requires the recollection of contextual information (such as source, time and place, or feelings) about previous encounters (see Neely & Payne, 1983; Tulving, 1972). Semantic recognition, by contrast, which is crucial for tasks such as lexical decision (e.g., Scarborough, Cortese, & Scarborough, 1977), relies on context-independent features. It is possible that the two types of recognition memory play different roles in the use of recognition in decision making. For instance, semantic recognition might be crucial for distinguishing previously seen from novel objects, whereas the ability for episodic recognition might be key for evaluating whether using recognition in a particular situation is appropriate or not (Hertwig et al., 2008; Marewski, Gaissmaier, Schooler, Goldstein, & Gigerenzer, 2009; Volz et al., 2006; for a discussion, see Pachur et al., 2009). A stronger connection to concepts in the memory literature could thus be helpful for research on the recognition heuristic, leading to a better understanding of the role of recognition in recognition-based inference and, in particular, to better explanations of individual differences in the use of the recognition heuristic.

3.2 The adaptive use of recognition

How do people decide whether to follow the recognition heuristic or not? Although this question is central to the notion that the recognition heuristic is an adaptive tool, it has received relatively little attention so far. In one of the few studies examining the mechanisms underlying the adaptive use of recognition directly, Pachur and Hertwig (2006) tested three different hypotheses. According to the threshold hypothesis, people’s reliance on the recognition heuristic in a particular environment depends on whether the recognition validity exceeds a certain threshold or not. According to the matching hypothesis, people follow the heuristic with a probability that matches their individual recognition validity. According to the suspension hypothesis, the nonuse of the recognition heuristic results from object-specific knowledge, rather than being directly linked to the recognition validity (which is the same for all objects in an environment). Pachur and Hertwig found that individual adherence rates were uncorrelated with individual recognition validities (see also Pohl, 2006), inconsistent with both the matching and the threshold hypotheses. Supporting the suspension hypothesis, however, the degree to which participants followed recognition varied considerably across the different objects (focusing, of course, on those cases where the object was recognized). This suggests that the decision of whether to use the recognition heuristic or not is made for each individual pair of objects rather than for an entire environment.

Nevertheless, there is clear evidence that, across different environments, people follow the recognition heuristic more when the recognition validity in an environment is high than when it is low (Gigerenzer & Goldstein, 2011; Pachur et al., 2011). It thus seems that the question of adaptivity can be posed on two levels: First, within an environment, is it useful to follow recognition for a particular pair of objects? Second, is the recognition heuristic an appropriate tool in a particular environment? Pachur et al. (2009) referred to these two levels as item adaptivity and environment adaptivity, respectively. In the following, I discuss what mechanisms might give rise to item and environment adaptivity.

3.2.1 Item adaptivity

The results of Pachur and Hertwig (2006) indicated that reliance on the recognition heuristic is based on object-specific information. What information might people recruit to evaluate the appropriateness of using the recognition heuristic? One possibility is recognition speed (i.e., fluency). There are at least two reasons why fluency might be a useful indicator of the appropriateness of following recognition. First, as fluency is often correlated with the criterion (e.g., Hertwig et al., 2008), following recognition when the recognized object was recognized swiftly should, ceteris paribus, lead to more correct decisions than when the recognized object was recognized slowly (see Marewski et al., 2010). Second, as outlined by Erdfelder et al. (2011), fluency might indicate the certainty (and thus accuracy) of the recognition judgment—that is, whether the object was indeed previously encountered or not.

An alternative possibility is that additional cue knowledge—rather than being used directly to make an inference—is used as a “meta-cue” to decide whether to use or to suspend the recognition heuristic. For illustration, consider a person who is asked to judge whether Chernobyl or an unrecognized Russian city is larger. Because the person knows that Chernobyl is well known due to a nuclear disaster, she might suspend the recognition heuristic in that particular case and choose the unrecognized city (see Oppenheimer, 2003). A third possibility is that processes of source monitoring (Johnson, Hashtroudi, & Lindsay, 1993; Lindsay & Johnson, 1991) influence the decision of whether to follow recognition or not. Specifically, one might infer simply from one’s ability to retrieve specific knowledge about the source of an object’s recognition—for instance, that a city is recognized from a friend’s description of a trip—that recognition is an unreliable cue in this case. Why? One indication that recognition is a potentially valid predictor is when an object is recognized after encountering it multiple times in many different contexts (e.g., hearing a name in several conversations with different people, or across various media), rather than through one particular, possibly biased source. Thus, being able to easily think of one particular source could indicate unreliability. Conversely, if an object has appeared in many different contexts, retrieving information about any specific context is more difficult and associated with longer retrieval times than when an object has appeared in only one particular context (known as the “fan effect”; Anderson, 1974). As a consequence, difficulty in retrieving detailed information concerning a particular context in which an object was encountered could indicate that recognition has been produced by multiple sources and is therefore an ecologically valid cue (see Pachur et al., 2011).

3.2.2 Environment adaptivity

As mentioned above, the average adherence rate in an environment usually follows the average recognition validity rather closely (Gigerenzer & Goldstein, 2011; Pachur et al., 2009, 2011). How do people achieve this apparent adaptive use of the recognition heuristic? Given that, within an environment, individual recognition validities are uncorrelated with individual adherence rates (Pachur & Hertwig, 2006; Pohl, 2006), individual learning seems an unlikely factor. What are the alternatives? One possibility is that the mechanisms underlying item adaptivity and environment adaptivity are closely connected. For instance, if the fluency of recognized objects tends to be lower and discrediting cue or source knowledge is more likely to be prevalent in environments with a low than in those with a high recognition validity, item adaptivity might lead to environment adaptivity. Another possibility is that people have subjective theories about the predictive power of recognition in different environments and adjust their reliance based on these beliefs (e.g., Wright & Murphy, 1984). Although these theories may not always be correct, they could nevertheless capture relative differences in recognition validity between environments rather well.

Taken together, because tests of the recognition heuristic have been primarily concerned with testing the predicted noncompensatory processing, we know relatively little about the principles underlying people’s decision to use or suspend the recognition heuristic. Nevertheless, from the little we know, the emerging picture suggests that there are actually many different reasons—rather than only one reason—for people to discard the recognition heuristic and use alternative strategies instead.

4 Conclusion

Consistent with other models of decision heuristics, the recognition heuristic assumes limited search and noncompensatory processing. Clever empirical tests based on precise measures of noncompensatory processing have shown that this assumption is sometimes violated. Should we therefore retire the recognition heuristic, as some have demanded? I have argued for a more cautious and constructive approach to testing the recognition heuristic. In fact, it is not surprising that the recognition heuristic cannot capture all the data. Like every model, it is a simplification of reality and thus wrong. Mere demonstrations that a model deviates from reality are not very helpful for advancing science. What is required in addition is a new (or modified) model that can accommodate the violations of the rejected model. Moreover, given that the recognition heuristic as proposed by Goldstein and Gigerenzer (2002) does not provide a complete account of cognition (e.g., it does not model the recognition process), highly precise tests can yield rather ambiguous results. Although it is violated by some data, the recognition heuristic is, in my view, currently still the best model we have available to predict people’s recognition-based inferences. And having an imperfect model is clearly better than having no model at all (or only a vague one). When considering possible alternative models, it should also not be overlooked that recognition-based inference can probably only be understood if we continue to focus on the close link between the mind and the environment. Only then can we further refine our understanding of why, as Montaigne observed, failures in memory can actually be beneficial for making good judgments.

Footnotes

I thank Jonathan Baron, Benjamin Hilbig, Julian Marewski, Rüdiger Pohl, and Oliver Vitouch for comments on an earlier draft of this paper, and Laura Wiles for editing the manuscript.

1 The availability heuristic, by contrast, has often been criticized as being only vaguely defined (e.g., Fiedler, 1983; Wallsten, 1983). Moreover, neither its boundary conditions nor its relationship to other heuristics (such as representativeness; see Sherman & Corty, 1984) have been specified (Gigerenzer, 1996).

2 For a comparison of various approaches to measure the use of the recognition heuristic, such as adherence rates, DI, and the r-model, see Hilbig (2010a).

3 Note that compensatory models can differ considerably in their processing assumptions and predicted decisions, making it rather uninformative to collapse them all (e.g., Rieskamp, 2008).

References

Anderson, J. R. (1974). Retrieval of propositional information from long-term memory. Cognitive Psychology, 5, 451–474.
Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2, 396–408.
Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.
Birnbaum, M. H. (2008). New paradoxes of risky decision making. Psychological Review, 115, 463–501.
Birnbaum, M. H., & Chavez, A. (1997). Tests of theories of decision making: Violations of branch independence and distribution independence. Organizational Behavior and Human Decision Processes, 71, 161–194.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113, 409–432.
Brighton, H., & Gigerenzer, G. (2011). Towards competitive instead of biased testing of heuristics: A reply to Hilbig and Richter (2011). Topics in Cognitive Science, 3, 197–205.
Bröder, A., & Eichler, A. (2006). The use of recognition information and additional cues in inferences from memory. Acta Psychologica, 121, 275–284.
Bröder, A., & Schütz, J. (2009). Recognition ROCs are curvilinear-or are they? On premature arguments against the two-high-threshold model of recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 587–606.
Brunswik, E. (1952). The conceptual framework of psychology. Chicago: University of Chicago Press.
Davis-Stober, C. P., Dana, J., & Budescu, D. V. (2010). Why recognition is rational: Optimality results on single-variable decision rules. Judgment and Decision Making, 5, 216–229.
Dougherty, M. R. P., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics. Psychological Review, 115, 199–213.
Dougherty, M. R. P., Gronlund, S. D., & Gettys, C. F. (2003). Memory as a fundamental heuristic for decision making. In Schneider, S. L., & Shanteau, J. (Eds.), Emerging perspectives on judgment and decision research (pp. 125–164). Cambridge, MA: Cambridge University Press.
Erdfelder, E., Küpper-Tetzel, C. E., & Mattern, S. D. (2011). Threshold models of recognition and the recognition heuristic. Judgment and Decision Making, 6, 7–22.
Fiedler, K. (1983). On the testability of the availability heuristic. In Scholz, R. W. (Ed.), Decision making under uncertainty: Cognitive decision research, social interaction, development and epistemology (pp. 109–119). Amsterdam: North-Holland.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychological Review, 103, 592–596.
Gigerenzer, G., & Goldstein, D. G. (2011). The recognition heuristic: A decade of research. Judgment and Decision Making, 6, 100–121.
Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. New York: Oxford University Press.
Glöckner, A., & Bröder, A. (2011). Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time. Judgment and Decision Making, 6, 23–42.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.
Hertwig, R., Herzog, S. M., Schooler, L. J., & Reimer, T. (2008). Fluency heuristic: A model of how the mind exploits a by-product of information retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1191–1206.
Hilbig, B. E. (2010a). Precise models deserve precise measures: A methodological dissection. Judgment and Decision Making, 5, 272–284.
Hilbig, B. E. (2010b). Reconsidering “evidence” for fast and frugal heuristics. Psychonomic Bulletin and Review, 17, 923–930.
Hilbig, B. E., & Pohl, R. F. (2008). Recognizing users of the recognition heuristic. Experimental Psychology, 55, 394–401.
Hilbig, B. E., & Pohl, R. F. (2009). Ignorance- versus evidence-based decision making: A decision time analysis of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1296–1305.
Hilbig, B. E., & Richter, T. (2011). Homo heuristicus outnumbered: Comment on Gigerenzer and Brighton (2009). Topics in Cognitive Science, 3, 187–196.
Hilbig, B. E., Pohl, R. F., & Bröder, A. (2009). Criterion knowledge: A moderator of using the recognition heuristic? Journal of Behavioral Decision Making, 22, 510–522.
Hilbig, B. E., Erdfelder, E., & Pohl, R. F. (2010). One-reason decision making unveiled: A measurement model of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 123–134.
Hilbig, B. E., Scholl, S. G., & Pohl, R. F. (2010). Think or blink—Is the recognition heuristic an “intuitive” strategy? Judgment and Decision Making, 5, 300–309.
Hochman, G., Ayal, S., & Glöckner, A. (2010). Physiological arousal in processing recognition information: Ignoring or integrating cognitive cues? Judgment and Decision Making, 5, 285–299.
Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28.
Katsikopoulos, K. V. (2010). The less-is-more effects: Predictions and tests. Judgment and Decision Making, 5, 244–257.
Lindsay, D. S., & Johnson, M. K. (1991). Recognition memory and source monitoring. Bulletin of the Psychonomic Society, 29, 203–205.
Marewski, J. N., & Mehlhorn, K. (in press). Using the ACT-R architecture to specify 39 quantitative process models of decision making. Judgment and Decision Making.
Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2009). Do voters use episodic knowledge to rely on recognition? In Taatgen, N. A., & van Rijn, H. (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2232–2237). Austin, TX: Cognitive Science Society.
Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2010). From recognition to decisions: Extending and testing recognition-based models for multi-alternative inference. Psychonomic Bulletin and Review, 17, 287–309.
Marewski, J. N., & Schooler, L. J. (2011). Cognitive niches: An ecological model of strategy selection. Psychological Review, 118, 393–437.
de Montaigne, M. (1595/2003). The complete essays. London: Penguin.
Neely, J. H., & Payne, D. G. (1983). A direct comparison of recognition failure rates for recallable names in episodic and semantic memory tests. Memory and Cognition, 11, 161–171.
Newell, B. R., & Fernandez, D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: Two critical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 333–346.
Newell, B. R., & Shanks, D. R. (2004). On the role of recognition in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 923–935.
Oeusoonthornwattana, O., & Shanks, D. R. (2010). I like what I know: Is recognition a non-compensatory determiner of consumer choice? Judgment and Decision Making, 5, 310–325.
Oppenheimer, D. M. (2003). Not so fast! (and not so frugal!): Rethinking the recognition heuristic. Cognition, 90, B1–B9.
O’Regan, J. K., & Jacobs, A. M. (1992). Optimal viewing position effect in word recognition: A challenge to current theory. Journal of Experimental Psychology: Human Perception and Performance, 18, 185–197.
Pachur, T. (2010). Recognition-based inference: When is less more in the real world? Psychonomic Bulletin and Review, 17, 589–598.
Pachur, T., Bröder, A., & Marewski, J. N. (2008). The recognition heuristic in memory-based inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision Making, 21, 183–210.
Pachur, T., Hanoch, Y., & Gummerum, M. (2010). Prospects behind bars: Analyzing decisions under risk in a prison population. Psychonomic Bulletin and Review, 17, 630–636.
Pachur, T., & Hertwig, R. (2006). On the psychology of the recognition heuristic: Retrieval primacy as a key determinant of its use. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 983–1002.
Pachur, T., Mata, R., & Schooler, L. J. (2009). Cognitive aging and the adaptive use of recognition in decision making. Psychology and Aging, 24, 901–915.
Pachur, T., Schooler, L. J., & Stevens, J. R. (in press). When will we meet again? Regularities in the dynamics of social contact reflected in memory and decision making. In Hertwig, R., Hoffrage, U., & the ABC Research Group, Simple heuristics in a social world. New York: Oxford University Press.
Pachur, T., Todd, P. M., Gigerenzer, G., Schooler, L. J., & Goldstein, D. G. (2011). The recognition heuristic: A review of theory and tests. Frontiers in Cognitive Science, 2, article 147, 1–14.
Pleskac, T. J. (2007). A signal detection analysis of the recognition heuristic. Psychonomic Bulletin and Review, 14, 379–391.
Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 251–271.
Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 150–162.
Rieskamp, J. (2008). The probabilistic nature of preferential choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1446–1465.
Rouder, J. N., & Ratcliff, R. (2004). Comparing categorization models. Journal of Experimental Psychology: General, 133, 63–82.
Scarborough, D. L., Cortese, C., & Scarborough, H. S. (1977). Frequency and repetition effects in lexical memory. Journal of Experimental Psychology: Human Perception and Performance, 3, 1–17.
Schooler, L. J., & Hertwig, R. (2005). How forgetting aids heuristic inference. Psychological Review, 112, 610–628.
Sherman, S. J., & Corty, E. (1984). Cognitive heuristics. In Wyer, R. S., & Srull, T. K. (Eds.), Handbook of social cognition (Vol. 1, pp. 189–286). Hillsdale, NJ: Erlbaum.
Shiffrin, R. M., Lee, M. D., Kim, W. J., & Wagenmakers, E.-J. (2008). A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32, 1248–1284.
Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50.
Sternberg, S. (1966). High-speed scanning in human memory. Science, 153, 652–654.
Tomlinson, T., Marewski, J. N., & Dougherty, M. (2011). Four challenges for cognitive research on the recognition heuristic and a call for a research strategy shift. Judgment and Decision Making, 6, 89–99.
Tulving, E. (1972). Episodic and semantic memory. In Tulving, E., & Donaldson, W. (Eds.), Organization of memory (pp. 381–403). New York, NY: Academic Press.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.
Volz, K. G., Schooler, L. J., Schubotz, R. I., Raab, M., Gigerenzer, G., & von Cramon, D. Y. (2006). Why you think Milan is larger than Modena: Neural correlates of the recognition heuristic. Journal of Cognitive Neuroscience, 18, 1924–1936.
Wallsten, T. S. (1983). The theoretical status of judgmental heuristics. In Scholz, R. W. (Ed.), Decision making under uncertainty: Cognitive decision research, social interaction, development and epistemology (pp. 21–39). Amsterdam: Elsevier.
Weber, E. U., Goldstein, W. M., & Barlas, S. (1995). And let us not forget memory: The role of memory processes and techniques in judgment and choice. In Busemeyer, J. R., Hastie, R., & Medin, D. L. (Eds.), Decision making from the perspective of cognitive psychology (pp. 33–81). New York: Academic Press.
Wright, J. C., & Murphy, G. L. (1984). The utility of theories in intuitive statistics: The robustness of theory-based judgments. Journal of Experimental Psychology: General, 113, 301–322.