
Investigating intuitive and deliberate processes statistically: The multiple-measure maximum likelihood strategy classification method

Published online by Cambridge University Press:  01 January 2023

Andreas Glöckner*
Affiliation:
Max Planck Institute for Research on Collective Goods
*
* Address: Andreas Glöckner, Max Planck Institute for Research on Collective Goods, Kurt Schumacher Str. 10, D-53113 Bonn, Germany. Email: [email protected].

Abstract

One of the core challenges of decision research is to identify individuals’ decision strategies without influencing decision behavior by the method used. Bröder and Schiffer (2003) suggested a method to classify decision strategies based on a maximum likelihood estimation, comparing the probability of individuals’ choices given the application of a certain strategy and a constant error rate. Although this method was shown to be unbiased and practically useful, it obviously does not allow differentiating between models that make the same predictions concerning choices but different predictions for the underlying process, which is often the case when comparing complex to simple models or when comparing intuitive and deliberate strategies. An extended method is suggested that additionally includes decision times and confidence judgments in a simultaneous Multiple-Measure Maximum Likelihood estimation. In simulations, it is shown that the method is unbiased and able to differentiate between strategies if the effects on times and confidence are sufficiently large.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2009] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Methods for strategy classification

In different situations, people might use different strategies to decide. These strategies might sometimes be based entirely on conscious processes, such as comparing the available options on the most important attribute and choosing the option that is better on this attribute (e.g., Beach & Mitchell, 1978; Fishburn, 1974; Payne, Bettman, & Johnson, 1988), or people might rely more or less on automatic processes that integrate information unconsciously (e.g., Busemeyer & Townsend, 1993; Dougherty, Gettys, & Ogden, 1999; Glöckner & Betsch, 2008b). Decision researchers are often interested in which strategy was (most likely) used by each person. Several methods have been suggested to identify decision strategies. The three predominant approaches are structural modeling, process tracing, and comparative model fitting (for overviews see Bröder & Schiffer, 2003a; Glöckner & Witteman, in press; Harte & Koele, 2001).

1.1 Structural modeling

Structural modeling uses a multiple regression approach to identify how cues or attributes are utilized in making judgments (Brehmer, 1994; Doherty & Brehmer, 1997; Doherty & Kurz, 1996). Specifically, a set of judgments (criterion) is predicted by cue values (predictors). Regression weights can be interpreted as indicators of individuals’ usage of cues in their judgments. Structural modeling usually does not aim to analyze processes (as in the paramorphic approach; see Hoffman, 1960) but only input-output relations between cues and judgments (i.e., as-if models; but see Bröder, 2000, Exp. 1). Although the method has been tremendously useful in showing that people integrate information in a weighted compensatory manner when making more or less intuitive judgments (Doherty & Brehmer, 1997; Hammond, Hamm, Grassia, & Pearson, 1987), its focus on outcomes limits its applicability for differentiating among process models.

1.2 Process tracing

Process tracing methods record and analyze parameters of information search before judgments or decisions and aim to infer decision strategies from the amount, distribution and order of information search. For instance, information boards are often used in which information is provided behind hidden information cards, which are opened on request or by mouse-click (e.g., Payne et al., 1988; Rieskamp & Hoffrage, 1999).

This method allows differentiating between decision strategies because some of them differ in their predictions concerning information search. A simple take-the-best strategy (TTB; Gigerenzer & Goldstein, 1996; cf. Fishburn, 1974), for example, assumes that persons first look up the predictions of the most predictive (valid) cue for all options. The option with the best cue value is selected. If options are tied, the second most valid cue is considered, and so on. In contrast, according to an equal weight strategy (EQW; Payne et al., 1988), individuals look up all cue information for the first option and sum it up. Then they do the same for the second option and so on, and select the option with the highest sum of cue values. Hence, a cue-wise information search (i.e., search along cues) and a strong focus on the most valid cue are used as indicators of the usage of TTB (or similar non-compensatory strategies), whereas an option-wise information search and an equal inspection of all cues indicate the usage of EQW (or other compensatory strategies).

Despite being highly useful for investigating deliberate strategies, standard process-tracing methods such as Mouselab (Payne et al., 1988) influence strategy selection and hinder people from applying intuitive-automatic decision strategies (Glöckner & Betsch, 2008c; see also Norman & Schulte-Mecklenbeck, in press). One reason for this is that classic process tracing methods induce a serial information search and prevent quick comparisons between options and the formation of holistic impressions. One might argue that intuitive-automatic processes cannot be captured by the analysis of information search at all. This conclusion, however, seems too strong, considering the successful use of less intrusive methods such as eye-tracking technology to investigate intuitive processes (e.g., Glöckner & Herbold, 2008). Eye-tracking methods even provide further dependent measures (e.g., eye-fixation duration and physiological arousal; Horstmann, Ahlgrimm, & Glöckner, under review), which could be included in strategy classification.

1.3 Comparative model fitting

The more recently developed comparative model fitting approach uses a maximum likelihood method to compare choices with the predictions of a set of decision strategies (Bröder & Schiffer, 2003a; Bröder, in press; Wasserman, 2000). For instance, assume one observes choices in 10 decisions, of which 6 are in line with the predictions of TTB and 8 are in line with the predictions of EQW. An obvious scheme would be to classify persons according to the number of strategy-compatible choices: with 8/10 choices in line with EQW, and only 6/10 in line with TTB, we would classify this person as consistent with EQW. However, this simple counting method leads to biased results if strategies predict random choices for some of the tasks (Bröder & Schiffer, 2003a; Bröder, in press).

Maximum likelihood estimation provides a more elegant means of performing strategy classification that is not prone to this source of bias. The basic idea is simple: comparative model fitting determines the strategy that would most likely have produced the observed choice pattern under the assumption of a constant error rate in applying the strategy. For the example above, the best estimates for the error rates in strategy application would be .40 (i.e., 4/10 “errors” in applying a TTB strategy) and .20, respectively. According to the binomial equation (see also equation 1, below), the likelihood that exactly the observed number of strategy-conforming choices (6 out of 10 correct under the assumption of an error rate of .40) was produced by TTB is .25, whereas the respective likelihood for EQW is .30. Hence, it is more likely that the choices were produced by application of EQW than by TTB.
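To make the arithmetic concrete, here is a minimal sketch of the binomial computation for the example above, written in Python (the implementation accompanying this paper is in STATA, described later):

```python
# Minimal sketch: binomial likelihoods for the 6/10 (TTB) vs. 8/10 (EQW) example.
from scipy.stats import binom

n = 10                    # number of observed choices
k_ttb, k_eqw = 6, 8       # choices conforming to TTB and to EQW
eps_ttb = 1 - k_ttb / n   # ML estimate of the error rate for TTB: .40
eps_eqw = 1 - k_eqw / n   # ... and for EQW: .20

L_ttb = binom.pmf(k_ttb, n, 1 - eps_ttb)  # approx. .25
L_eqw = binom.pmf(k_eqw, n, 1 - eps_eqw)  # approx. .30
print(L_ttb, L_eqw)       # EQW is the more likely data-generating strategy
```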

In contrast to classic process-tracing methods, the comparative model fitting approach avoids influencing decision behavior by the measuring method and nevertheless allows process models to be tested. However, the method is applicable only when strategies make different choice predictions. If strategies make the same predictions for choices, the likelihoods for the strategies will obviously always be equal.

Unfortunately, strategies often make exactly the same choice predictions. This is due to the fact that the most commonly investigated decision strategies (e.g., TTB and EQW) are special cases of a weighted additive strategy (WADD). According to a WADD strategy, for each option the cue information is weighted by its importance (or validity) and added up. The option with the highest weighted sum is chosen (Payne et al., 1988). Although this strategy sounds quite different from TTB and EQW, it can easily be shown that WADD predicts the same choices as TTB. This is always the case if the validity of each cue is higher than the sum of the validities of all less valid cues (Bröder, 2000; Lee & Cummins, 2004). Similarly, WADD predicts the same choices as EQW if the validities of all cues are similar or equal. Hence, in a strict sense, a classification as EQW or TTB based on choices only never rules out that a more complex WADD strategy was used. A person could have used WADD with a specific cue weighting scheme instead. Therefore, in all studies relying on the choice-based strategy classification method (e.g., Bröder & Schiffer, 2003b; Bröder & Gaissmaier, 2007), the estimated proportions of TTB and EQW users are upper limits for the usage of these simple strategies, whereas the usage of WADD is likely to be underestimated.
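This equivalence can be illustrated with a small sketch, assuming binary cue values coded 1/0 and a hypothetical non-compensatory set of weights (the concrete numbers are not taken from the paper): whenever each weight exceeds the sum of all smaller weights, WADD reproduces every TTB choice.

```python
# Sketch: with non-compensatory weights, WADD and TTB choose identically.
import itertools

weights = [0.8, 0.4, 0.2, 0.1]  # hypothetical; each weight > sum of smaller ones

def ttb_choice(cues_a, cues_b):
    for ca, cb in zip(cues_a, cues_b):          # cues ordered by validity
        if ca != cb:
            return 'A' if ca > cb else 'B'
    return 'A:B'                                # no cue discriminates: guess

def wadd_choice(cues_a, cues_b):
    sa = sum(w * c for w, c in zip(weights, cues_a))
    sb = sum(w * c for w, c in zip(weights, cues_b))
    return 'A' if sa > sb else 'B' if sb > sa else 'A:B'

# exhaustive check over all binary cue patterns for the two options
for cues_a in itertools.product([0, 1], repeat=4):
    for cues_b in itertools.product([0, 1], repeat=4):
        assert ttb_choice(cues_a, cues_b) == wadd_choice(cues_a, cues_b)
```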

The problem of similar choice predictions becomes even more severe if one considers that people might also use intuitive decision strategies. Intuitive decision strategies often also predict choices that follow weighted additive information integration, without the assumption that individuals calculate weighted sums (Busemeyer & Johnson, 2004; Glöckner & Betsch, 2008b; Hammond, Hamm, Grassia, & Pearson, 1987). Therefore, based on the analysis of choices only, they cannot be distinguished from WADD nor, in a strict sense, from TTB and EQW.

In this article I aim to show that the problem can be solved by extending the method for comparative model fitting based on maximum likelihood estimation of choices suggested by Bröder and Schiffer (2003a) to include additional dependent measures such as decision time and confidence (for earlier approaches, see Bergert & Nosofsky, 2007; Glöckner, 2006). A Multiple-Measure Maximum Likelihood (MM-ML) strategy classification method is suggested that allows identifying decision strategies even if they make the same choice predictions and differ only in their predictions concerning one of the other dependent variables (i.e., decision time, confidence). Further advantages of the inclusion of additional dependent measures will be discussed.

2 Examples for strategy classification

To apply a strategy classification method, it is necessary to select a set of strategies that allows for deriving predictions concerning choices, decision time, and confidence. Furthermore, item types have to be selected for which the strategies make different predictions on as many dependent variables as possible. These types have to be presented repeatedly (e.g., 10 times; Bröder & Schiffer, 2003a). In this analysis I focus on choices in probabilistic inference tasks, in which persons select the better of two goods based on recommendations of four advisors (cues) whose recommendations differ in reliability (i.e., cue validity). The considered strategies are WADD, TTB, EQW, a random choice strategy (RAND), and an intuitive parallel constraint satisfaction strategy (PCS; Glöckner & Betsch, 2008b). The choice predictions of PCS and WADD are essentially equal (considering different cue-validity transformation functions), and hence the strategies cannot be differentiated based on choices only. The steps to derive the strategies’ predictions concerning choices, decision times and confidence are explained in detail elsewhere (Glöckner, in press); the most important aspects are summarized below. The resulting predictions for six types of items are shown in Table 1. It is assumed that an experiment was conducted in which each item type was presented 10 times (in individually randomized order), and choices, decision times and confidence were recorded.

Table 1: Item types and predictions of strategies

Note. Item types and predictions of decision strategies. In the upper part of the table, the item types are presented. The cue validities v are provided beside each cue. Below, the predictions concerning choices are shown. A and B stand for the predicted option. “A:B” indicates random choices between A and B. The lower part of the table shows predictions for decision times and confidences, expressed as contrast weights that add up to zero and have a range of 1. Contrast values represent relative weights comparing different cue patterns for one strategy.

2.1 Specification of WADD

In order to derive predictions from WADD, it has to be determined how cue validities are used in calculating weighted sums. In this paper it is assumed that persons correct their weights for the fact that binary cues with a validity of .50 have no predictive power (w = v − .50; cf. Glöckner & Betsch, 2008c). Although sometimes stated otherwise, choice predictions (as well as time and confidence predictions) of WADD are not invariant to this transformation. In the following I use the label WADDcorrected when referring to the predictions of such a WADD strategy with corrected weights.

2.2 Decision Time Predictions

Decision-time predictions for the deliberate strategies TTB, EQW, and WADD are determined according to the number of elementary information processes necessary to apply the strategy (Payne et al., 1988). For instance, according to TTB, for item types 1 to 5 (see Table 1) only one cue has to be considered, whereas for item type 6 two cues have to be considered, which necessitates applying more elementary information processes. For statistical reasons, decision time predictions are transformed to contrast weights which add up to zero and have a range of 1. For WADD and EQW, no differences in decision times are predicted and all contrast weights are set to zero. For PCS, decision time predictions were derived from a simulation of the underlying network model (i.e., based on the number of iterations the PCS algorithm needs to find a consistent solution; Glöckner & Betsch, 2008b).
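As an illustration of the rescaling step, the following sketch (with hypothetical counts of elementary information processes, not the exact counts used in the paper) turns interval-scaled time predictions into contrast weights that sum to zero and have a range of 1:

```python
# Sketch: rescaling predicted processing steps to contrast weights.
import numpy as np

steps = np.array([4, 4, 4, 4, 4, 7], dtype=float)  # hypothetical steps per item type
contrast = (steps - steps.mean()) / (steps.max() - steps.min())
print(contrast.sum(), contrast.max() - contrast.min())  # 0.0 and 1.0
```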

2.3 Confidence Predictions

Confidence predictions of TTB were derived based on the validity of the differentiating cue (Gigerenzer, Hoffrage, & Kleinbölting, 1991). For WADD and EQW, the difference between the weighted (respectively unweighted) sums of cue values for the two options was calculated and used as the prediction for confidence. For PCS, the predictions were derived from model simulations (i.e., based on the difference between the activations of the options after the consistent solution was found).

3 The maximum likelihood strategy classification method for choices

The maximum likelihood strategy classification method for choices calculates the conditional likelihood of an observed set of choices for different types of tasks j given the application of a certain decision strategy k and a constant error rate ε_k. The likelihood values of the different strategies are compared, and individuals are classified as users of the strategy that most likely produced the observed choices. For each of the choices and each strategy, it is determined whether the choice was in line with the prediction of the strategy or not. Let n_j be the number of tasks of type j that are presented and let n_jk be the number of correct predictions of strategy k. The likelihood of observing a certain number of correct predictions n_jk given a constant error rate follows a binomial distribution. Hence, the likelihood of observing a set of choices given a strategy k and a constant error rate ε_k can be calculated by:

$$L_k(C) = \prod_{j} \binom{n_j}{n_{jk}} (1-\epsilon_k)^{n_{jk}}\, \epsilon_k^{\,n_j - n_{jk}} \qquad (1)$$

The single free parameter ε_k can be estimated using standard statistical software packages such as STATA or, in this simple case, by:

$$\hat{\epsilon}_k = 1 - \frac{\sum_j n_{jk}}{\sum_j n_j} \qquad (2)$$

where the sums run over the item types for which strategy k predicts a specific option.

Individuals are classified as users of the strategy with the highest likelihood value L_k(C). If a strategy does not differentiate between options for a specific type of items, individuals are assumed to guess and ε_k is assumed to be .5 for this type. Bröder and Schiffer (2003a) showed in simulations that up to an error rate of 25% the method differentiates well between strategies which make different predictions concerning choices (i.e., classification error below 20%; see footnote 1). In decision research, the method has been successfully applied to judgments and choices based on probabilistic inference (e.g., Bröder, 2003; for an overview see Bröder, in press; Bröder & Gaissmaier, 2007; Bröder & Schiffer, 2003b, 2006; Glöckner, 2006, 2007) and decisions under risk (Glöckner & Betsch, 2008a).
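For illustration, a minimal Python sketch of this choice-based likelihood (equations 1 and 2) is given below; the counts are hypothetical, and item types for which the strategy does not discriminate are handled with a fixed probability of .5, as described above.

```python
# Sketch: choice-based maximum likelihood for one strategy (hypothetical counts).
import numpy as np
from scipy.stats import binom

def choice_likelihood(n_correct, n_total, discriminates):
    """n_correct[j]: strategy-conforming choices for item type j (for types where
    the strategy does not discriminate, the count of either option may be used);
    discriminates[j]: whether the strategy predicts a specific option for type j."""
    n_correct, n_total = np.asarray(n_correct), np.asarray(n_total)
    d = np.asarray(discriminates, dtype=bool)
    # ML estimate of the error rate from the discriminating types only (equation 2)
    eps = 1 - n_correct[d].sum() / n_total[d].sum()
    p = np.where(d, 1 - eps, 0.5)               # guessing types get p = .5
    return binom.pmf(n_correct, n_total, p).prod(), eps

L_ttb, eps_ttb = choice_likelihood([9, 8, 9, 10, 8, 7], [10] * 6, [True] * 6)
```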

An earlier publication (Glöckner, 2006) highlighted the limitations of this method and made a first attempt to use decision times in individual strategy classification. To differentiate between intuitive and deliberate decision strategies with equal choice predictions, paired t-tests were used to compare individuals’ decision times in choices for different item types, for which one strategy predicts no difference and the other does. A similar method was used in a recent work by Bergert and Nosofsky (2007). This method can be criticized in several respects: (a) it does not take into account the fit of decision times to the total set of predictions of the strategies but is based on pair-wise comparisons of only two types of items, (b) it gives a certain strategy the advantage of the null hypothesis without controlling for the beta-error (see footnote 2), and (c) the results of the choice-based strategy classification (i.e., L_k(C)) and the t-test(s) (i.e., t and p value) cannot easily be integrated into one single measure of fit for the strategy. While the first two problems might be circumvented by using correlation measures and estimating the beta-error based on the expected effect size and number of observations, the third problem is harder to tackle (Glöckner, in press). The Multiple-Measure Maximum Likelihood strategy classification method, which is introduced next, solves the first and the third problem and reduces the second by using a single maximum likelihood measure for choices, decision times and confidence.

4 Multiple-Measure Maximum Likelihood (MM-ML) strategy classification

Maximum likelihood estimation is, of course, not limited to dichotomous outcomes (i.e., choices) but can also be applied to continuous variables such as decision times. However, estimation of the likelihood of a set of observations necessitates assumptions about the distribution underlying the data generation process for the variable. One standard assumption is that log-transformed decision times are normally distributed (Bergert & Nosofsky, 2007, Appendix C). Under this assumption, the likelihood value of observing a log-transformed decision time x given N[µ, σ] can be derived from the density function of the normal distribution:

$$f(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \qquad (3)$$

and for a set of independent observations x_i drawn from the same distribution by:

$$L(T) = \prod_{i} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} \qquad (4)$$

The density function of the normal distribution (equation 3) contains two parameters. The mean is represented by µ and the standard deviation by σ (π and e are, of course, constants). The variable x indicates the value for which the likelihood value should be determined. According to the properties of a normal distribution, the likelihood value of x decreases with increasing distance from µ (because the exponent of e becomes a more strongly negative number), and, for values sufficiently far from µ, it also decreases with decreasing σ. The total likelihood of independent events is the product of the single likelihoods of these events. Therefore, in equation 4 the total likelihood for all observed decision times results from multiplying the likelihoods of all single observations (as indicated by the product sign).

Under the assumption that choices and decision times are independent (for a more detailed discussion of the issue of independence, see below), the likelihood of observing a set of choices and decision times given the application of a strategy k, a constant error rate for choices ε_k, and decision times that are drawn from a single normal distribution N[µ, σ] is:

$$L_k(C, T) = \prod_{j} \binom{n_j}{n_{jk}} (1-\epsilon_k)^{n_{jk}}\, \epsilon_k^{\,n_j - n_{jk}} \cdot \prod_{i} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} \qquad (5)$$

Equation 5 should obviously be applied only for decision strategies such as WADD and EQW, which predict equal decision times for all considered types of tasks. The strategies TTB and PCS make differential (interval-scaled) decision time predictions. Let us denote these predictions t_i and assume that they are scaled as contrast weights which add up to 0 and have a range of 1. Let us further assume that decision times for item type i are drawn from different normal distributions with means

$$\mu_i = \mu + R\, t_i \qquad (6)$$

in which R represents a non-negative scaling parameter to be estimated. The likelihood value for observing a set of choices and decision times drawn from different normal distributions (with equal σ; see footnote 3) can then be calculated by inserting equation 6 into equation 5:

$$L_k(C, T) = \prod_{j} \binom{n_j}{n_{jk}} (1-\epsilon_k)^{n_{jk}}\, \epsilon_k^{\,n_j - n_{jk}} \cdot \prod_{i} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x_i-(\mu + R\, t_i))^2}{2\sigma^2}} \qquad (7)$$

Furthermore, assuming that confidence judgments are independent of choices and decision times and normally distributed, confidence can be added to equation 7 in the same manner as decision time (see footnote 4). Extending equation 7 and adding subscripts T and C for parameters referring to decision time and confidence, respectively, results in equation 8:

$$L_k(C, T, Conf) = \prod_{j} \binom{n_j}{n_{jk}} (1-\epsilon_k)^{n_{jk}}\, \epsilon_k^{\,n_j - n_{jk}} \cdot \prod_{i} \frac{1}{\sigma_T\sqrt{2\pi}}\, e^{-\frac{(x_{Ti}-(\mu_T + R_T\, t_i))^2}{2\sigma_T^2}} \cdot \prod_{i} \frac{1}{\sigma_C\sqrt{2\pi}}\, e^{-\frac{(x_{Ci}-(\mu_C + R_C\, c_i))^2}{2\sigma_C^2}} \qquad (8)$$

in which x_Ti and x_Ci denote the observed (log-transformed) decision time and the confidence judgment for observation i, and t_i and c_i the corresponding contrast weights.

This equation contains seven free parameters. For decision strategies that make differential predictions for decision times and confidence for the considered item types (i.e., PCS, TTB), all seven parameters will be estimated. For strategies that predict equal decision times (i.e., EQW, WADD), the parameter R_T is not necessary, and hence only six parameters have to be estimated. Similarly, R_C can be omitted if a strategy makes equal predictions for confidence across item types. For a RAND strategy, R_T and R_C can be omitted, as can the error parameter ε_k, which is set to .50 (indicating random choices). Hence, for a RAND strategy only 4 parameters are estimated.
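To make the structure of equation 8 concrete, here is an illustrative sketch of the joint negative log-likelihood and its numerical maximization in Python/SciPy. It is not the STATA implementation provided with the paper; the variable names, bounds, and optimizer are assumptions, with the R parameters constrained to be non-negative as stated above.

```python
# Sketch: joint MM-ML likelihood (equation 8) for one participant and one strategy.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, norm

def neg_log_lik(params, n_correct, n_total, t_contrast, c_contrast,
                log_times, confidences, item_type):
    eps, mu_T, sig_T, R_T, mu_C, sig_C, R_C = params
    # choice part: binomial per item type with constant error rate eps
    ll = binom.logpmf(n_correct, n_total, 1 - eps).sum()
    # decision-time part: normal around mu_T + R_T * contrast weight of the item type
    ll += norm.logpdf(log_times, mu_T + R_T * t_contrast[item_type], sig_T).sum()
    # confidence part: normal around mu_C + R_C * contrast weight of the item type
    ll += norm.logpdf(confidences, mu_C + R_C * c_contrast[item_type], sig_C).sum()
    return -ll

def fit(data, start):
    """data: tuple of the arguments of neg_log_lik after params."""
    bounds = [(0.001, 0.499), (None, None), (1e-3, None), (0.0, None),
              (None, None), (1e-3, None), (0.0, None)]
    return minimize(neg_log_lik, start, args=data, bounds=bounds,
                    method='L-BFGS-B')
```

For strategies that predict no differences in decision times or confidence, the corresponding R parameter would simply be dropped (i.e., fixed at zero), and for RAND the error rate would additionally be fixed at .50, as described above.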

Likelihood values L_k should be corrected for the different numbers of free parameters N_p using the Bayesian Information Criterion (BIC), which also takes into account the number of observations N_obs (Schwarz, 1978):

$$BIC_k = -2 \ln(L_k) + N_p \ln(N_{obs}) \qquad (9)$$

Individuals should be classified as users of the strategy which has the lowest BIC value. The number of independent observations N_obs, which is used to calculate the BIC, is not always equal to the total number of observations. According to the STATA 10.0 online manual, the number of independent categories (i.e., types of tasks) should be used if it can be assumed that the instances of these categories are highly correlated. This is the case for our data because responses to the repeated presentations of one type of items should be similar. I compared results using the total number of observations per person (N_obs = 60 [tasks] × 3 [choice, decision time, confidence] = 180) and the number of independent categories (i.e., types; N_obs = 6 × 3 = 18) in the simulation reported below and found that the latter seems preferable.
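A small sketch of the BIC correction (equation 9) and the resulting classification step is shown below; the log-likelihood values are hypothetical, and N_obs = 18 follows the choice of independent categories described above.

```python
# Sketch: BIC correction and classification across strategies (hypothetical values).
import numpy as np

def bic(log_lik, n_params, n_obs=18):
    return -2 * log_lik + n_params * np.log(n_obs)

log_liks = {'TTB': -155.2, 'EQW': -170.4, 'WADD': -160.1,
            'PCS': -150.3, 'RAND': -180.0}        # hypothetical maximized log-likelihoods
n_params = {'TTB': 7, 'EQW': 6, 'WADD': 6, 'PCS': 7, 'RAND': 4}
bics = {k: bic(v, n_params[k]) for k, v in log_liks.items()}
best = min(bics, key=bics.get)                    # lowest BIC wins
```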

The simulations investigated whether choices, decision times and confidence data generated by different strategies, with certain error rates for choices and different effect sizes for decision time and confidence, are correctly classified by the MM-ML method. I expected that this method (a) is capable of identifying the decision strategy that generated the data, (b) is not biased in favor of one or the other strategy, (c) differentiates appropriately between strategies which make identical choice predictions (if the effect size for time and confidence is sufficiently large), and (d) leads to fewer misclassifications than the choice-based strategy classification.

5 Simulation

5.1 Method

The simulation used the 5 decision strategies and 6 types of tasks from Table 1. I assumed that these tasks were presented 10 times each, resulting in a total of 60 choices. In the simulation, choices, decision times and confidence were generated by the 5 strategies TTB, EQW, WADD, PCS, and RAND. The error rate for choices varied from 5% to 25% in steps of 5%. I also manipulated the size of the differences between decision times and confidences for different types of items in relation to the standard deviation. To do this, I drew data from normal distributions N(µ = contrast weight, σ = sd), in which the mean was the contrast weight defined in Table 1 and the standard deviation sd was varied across the levels 0.8, 1, 1.33, and 2. Remember that the contrast weights are scaled to a range of 1. Hence, sd = 1 means that, comparing the fastest with the slowest item type, the effect size is 1. The maximum effect sizes produced by the manipulation of sd are consequently 1.25, 1, 0.75 and 0.5. For simplicity, sd was manipulated jointly for decision times and confidences. For each combination, 100 data sets were generated per strategy and the MM-ML strategy classification was applied. Hence, the simulation used a 5 (data-generating decision strategy) x 5 (error rate) x 4 (standard deviation) x 100 (repetitions) design. Simulations were run using a BIC correction with N_obs = 18 and with N_obs = 180. Only the results for N_obs = 18 are reported, because they were consistently better (i.e., less biased in favor of strategies with fewer parameters; see footnote 5).
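The data-generating step of the simulation can be sketched as follows (in Python; the predicted options and contrast weights would come from Table 1, so the function arguments here are placeholders):

```python
# Sketch: generating choices, times and confidences for one simulated participant.
import numpy as np

rng = np.random.default_rng(0)

def simulate(pred_choice, t_contrast, c_contrast, eps, sd, reps=10):
    """pred_choice[j] in {0, 1, None}: option predicted for item type j (None = guess)."""
    choices, times, confs, types = [], [], [], []
    for j, p in enumerate(pred_choice):
        for _ in range(reps):
            if p is None:                        # strategy predicts guessing
                choices.append(rng.integers(2))
            else:                                # apply the prediction with error eps
                choices.append(p if rng.random() > eps else 1 - p)
            times.append(rng.normal(t_contrast[j], sd))   # contrast weight as the mean
            confs.append(rng.normal(c_contrast[j], sd))
            types.append(j)
    return (np.array(choices), np.array(times), np.array(confs), np.array(types))
```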

5.2 Results

Figure 1 shows the classification results by data-generating strategy and maximal effect size (i.e., the inverse of sd), aggregated over the manipulation of error rate. The classifications for data generated by TTB and EQW were almost perfect. The classification of data generated by WADD was very good as well, although there was a small, constant amount of misclassification in favor of PCS. Remember that WADD and PCS make equal choice predictions. Hence, the method generated very few misclassifications in favor of the more complex strategy (with one additional parameter). On the other hand, the accuracy of the classification of data produced by PCS depended crucially on the effect size. As one would expect, with decreasing effect size the number of misclassifications in favor of the strategy not predicting a difference (i.e., WADD) increased. For this small number of observations, maximal effect sizes (i.e., measured between the most extreme items only) of 1.25 and 1 led to acceptable results. Below that, misclassifications prevailed. Finally, data produced by a RAND strategy were to a certain degree misclassified as being produced by EQW. Note that this misclassification was likely due to the fact that, for the selected item types, EQW predicts random choice for 4 out of the 6 considered types. These misclassifications could be reduced by imposing an error-rate limit of .30 for all systematic strategies and not classifying participants with higher error rates (see discussion below).

Figure 1: Strategy classification results by data generating strategy for 60 observations.

The manipulation of the error rate in strategy application had only a minor influence on strategy classification results at all levels of sd. The results concerning the influence of error rate on strategy classification are shown in Figure 2. The left part shows the results aggregated for strong effects (sd ≤ 1) and the right part for weaker effects (sd > 1). As one could have expected, for strong effects, strategy classification also worked quite well between PCS and WADD. The classification between these strategies with equal choice predictions was considerably worse if the effect was weaker. There was a strong tendency towards misclassification in favor of the strategy that does not predict differences in response times (i.e., WADD) as compared to the strategy that does (i.e., PCS). Hence, with weaker effects the method is biased in favor of the strategy that predicts no difference (i.e., the null hypothesis). In the high effect-size conditions, an increasing error rate had almost no effect on misclassifications (Figure 2, left), whereas an increasing error rate led to increasing misclassifications in the lower effect-size conditions (Figure 2, right).

Figure 2: Strategy classification results by error rate in strategy application for strong effects (sd≤1, left) and weaker effects (sd>1, right).

To investigate whether the MM-ML method leads to fewer strategy misclassifications than the classic choice-based strategy classification by Bröder and Schiffer (2003a), I reran the same analysis using the choice-based strategy classification method and excluding PCS (because it obviously could not be differentiated from WADD based on choices only). In line with findings by Bröder and Schiffer, the analysis worked very well, but revealed an increasing misclassification rate with an increasing error rate (Figure 3).

Figure 3: Strategy classification based on choices only by data generating strategy and error rate in strategy application.

A considerable number of choices that were produced by RAND were wrongly classified as being produced by EQW. This bias was stronger than the one observed for the MM-ML method (see Figure 1). In Table 2 the classification results for both methods are directly compared for ε = 0.25. It can be seen that, over all strategies, the MM-ML method leads to a higher level of correct classifications than the choice-based strategy classification (cf. bold numbers in the diagonals of Table 2).

Table 2: Comparison of strategy classification methods

Note. Numbers represent percentages of strategy classifications for the respective data-generating strategy (in columns) for an error rate of ε = .25. For comparability, WADD and PCS are combined, both as data-generating strategies and as classification outcomes, in the MM-ML method.

A final simulation investigated the influence of the number of observations on the quality of the strategy classification with the MM-ML method. To this end, the number of observations used in the analysis was raised from 60 to 120 (i.e., by using 20 instead of 10 decisions per item type). Doubling the number of observations increased the quality of classification (Figure 4). With 120 observations, the classifications of TTB, EQW, WADD, and RAND were close to perfect. With the higher number of observations, the classification of PCS was also satisfactory for a lower maximum effect size (i.e., 0.75), but for the lowest maximum effect size (i.e., 0.5) there was still a considerable number of misclassifications in favor of WADD.

Figure 4: Strategy classification results by data generating strategy for 120 observations.

5.3 Discussion

The simulations revealed that the inclusion of decision times and confidences in the analysis generally improves strategy classification. This is particularly the case if the effects for both variables are strong. If the effects are strong, the method also allows differentiating reliably between strategies which make the same choice predictions. In cases with weaker effects, the method is increasingly biased in favor of the strategies which predict no differences concerning decision times and confidences (i.e., which have fewer free parameters). This problem obviously occurs with any statistical test because the latter strategies have the advantage of being the null hypothesis.

The simplest way to circumvent the problem is to include items for which particularly strong differences in confidence and decision time are expected. Additionally, more items could be used to increase power by increasing the within-subjects sample size. An increase from 10 to 20 choices per item type reduces the bias in favor of the null hypothesis considerably. Finally, one might consider using a different correction of the likelihood than the BIC correction (similar to setting a compromise alpha level). However, to the best of my knowledge there is no simple method to find the correct adjustment (although it could, of course, be derived from simulations). Hence, including items with expected large differences and increasing the number of items seem to be preferable. Note that previous studies found strong effects for confidence and time in probabilistic inference tasks (Glöckner, 2006; Glöckner & Betsch, 2008c), as well as in gambling tasks (Glöckner & Betsch, 2008a; Glöckner & Herbold, 2008). These findings indicate that biased classification due to weak effect size might not be too much of a problem. However, researchers should check the size of the effect in pre-tests, or they should at least calculate it before interpreting their results.

6 Applying the MM-ML method in research practice

Applying the MM-ML method obviously necessitates the use of a statistical package that allows for calculating complex maximum likelihood estimations. I have programmed the necessary estimation routines in STATA. The estimation programs are described in the supplementary material (http://journal.sjdm.org/vol4.3.html). Applying the method mainly requires bringing data into a specific format and defining predictions. The overall estimation program (which can be applied to any number of item types, choices per item, participants, and strategies) provides per-individual estimates of the parameters for each strategy (Figure 5, top), as well as an aggregated matrix of fit values (i.e., BIC scores) indicating how likely it is that the data of each individual (in rows) were produced by a particular strategy (in columns) (Figure 5, bottom).

Figure 5: Example output of the STATA implementation of the Multiple-Measure Maximum Likelihood strategy classification method for parameters per individual (top) and for the overall estimation (bottom). The individual output contains estimates for all coefficients and the overall fit of the individual data to the prediction of the considered strategy. The overall estimation shows BIC scores for each individual (rows) and each of the five considered strategies (columns). Lower scores indicate a better fit.

The STATA output is now explained in more detail. The individual output (Figure 5, top) shows the results of comparing the data of subject 1 with the predictions of strategy 4 (see the last line of the output). The total number of observations is 126 (6 choice frequencies for the task types, 60 decision times, 60 confidence ratings). The resulting parameter estimates are listed as constant coefficients. In the example, the choice error rate (epsilon) was .167, the log-transformed mean decision time (corrected for order effects) was 8.57 (mu_Time), and the mean confidence (mu_Conf) was 53.98. The provided significance tests (which test whether the estimated constant coefficient differs from zero) are mainly informative for the rescaling factors R. In this example, R_T (R_Time) and R_C (R_Conf) were both significantly different from zero. This indicates that the specific predictions for time and confidence (reflected in the contrast weights) contribute significantly to explaining the data. The tests for R produce results similar to correlations between data and contrast weights (i.e., the correlations of observed decision times with the contrast weights for the respective task types; see footnote 6). The BIC score (last line) indicates the overall fit of the data to the strategy predictions for the specific participant. More precisely, it gives the corrected log-transformed likelihood for the data given the application of a certain strategy under the assumptions of a constant error rate ε, normally distributed decision times and confidence ratings, and independence between observations.

The lower part of Figure 5 shows the output for the results of all individual comparisons. It presents the resulting BIC scores for each subject (in rows) and each considered strategy (in columns; i.e., TTB, EQW, WADD, PCS, RAND). Lower values indicate a better fit. The example result for subject 1 and strategy 4 is consequently presented in row 1, column 4 of the matrix. The strategy that best explains a subject’s data can be easily determined by identifying the lowest number in the person’s row (e.g., the data of subject 1 were most likely generated by strategy 1).

The MM-ML method has been successfully applied to empirical data (i.e., Figure 5 is based on real data), and it appears that for the types of items considered here (using only 60 observations) the method can be applied well. Additional practical suggestions on the application of the method are given in Glöckner (in press).

7 Limitations of the method and suggested solutions

7.1 Exhaustive Set of Strategies

The quality of strategy classification depends crucially on the set of competing strategies that are considered. As pointed out by Bröder and Schiffer (2003a), one has to assume that (optimally) an exhaustive set of strategies is investigated. Although the selection of an exhaustive set of strategies is not possible in practice, the suggested statistical method (and also the provided STATA program) can be used to compare any number of strategies. Pragmatically, researchers should nevertheless aim to consider only plausible strategies to keep the analysis manageable.

7.2 Criterion for classification of strategy use

Absolute value. One of the frequently raised questions for ML strategy classification is whether there should be a criterion of fit that has to be reached in order to classify a person as having used a systematic strategy. Could an absolute likelihood or BIC value (e.g., BIC < 500) be defined that has to be reached for a classification? Considering equation 8, the answer has to be no. The total likelihood and also the BIC score crucially depend on the number of observations considered. The BIC score increases (and the likelihood decreases) with the number of observations because more likelihood values (which are usually smaller than 1) are multiplied with each other.

Error in choices. The simplest (and advisable) criterion for non-classification is a maximum acceptable error rate in choices for systematic strategies. The lowest useful criterion is .50 (chance level). Considering the simulation results, a somewhat stricter criterion of ε < .30 could be advisable for low numbers of observations. If a researcher has good reason to believe that participants make only a few errors (e.g., in an incentivized environment with clearly structured information), even stricter criteria might be used. The desired error-rate limit can easily be changed in the STATA estimation program (see supplementary material). Setting a stricter error-rate limit for all systematic strategies increases the number of cases in which persons are classified as users of RAND. Note that a RAND strategy should always be included in the strategy classification. This ensures that a person is classified as a user of a systematic strategy only if that strategy fits better than random choice after correcting for the additional free parameters.

Bayes ratio. Another possibility is to compare likelihoods by determining the Bayes ratio. The Bayes ratio is calculated by dividing the likelihood value of the most likely strategy by the likelihood of the second most likely strategy (Wasserman, 2000). The reliability of a classification increases with an increasing Bayes ratio. According to Wasserman (2000), ratios larger than 3 can be considered moderate evidence for the model, which might be considered a lower limit for strategy classification. Note, however, that the Bayes ratio is calculated on likelihoods that are not corrected for the number of free parameters. Hence, comparing a strategy which makes differential predictions for choices, times and confidence (e.g., PCS) with a strategy which does not (e.g., RAND) would lead to biased Bayes ratios. The application of Bayes ratios as a criterion for classification vs. non-classification is therefore often not possible in the MM-ML method.
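As a small illustration (with hypothetical, uncorrected likelihood values):

```python
# Sketch: Bayes ratio of the best over the second-best strategy.
liks = {'PCS': 2.1e-40, 'WADD': 5.5e-41, 'TTB': 3.0e-55}   # hypothetical likelihoods
ranked = sorted(liks.values(), reverse=True)
bayes_ratio = ranked[0] / ranked[1]   # values > 3: moderate evidence for the best model
```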

7.3 Independence of observations assumption

As mentioned above, the multiplication of likelihoods used in the MM-ML method (i.e., equation 8) relies on the assumption that the likelihood values for choices, decision times and confidence are independent of each other. Considering the finding that decision time and confidence are often negatively correlated (e.g., Glöckner & Betsch, 2008c), this assumption might seem questionable at first glance. A closer look, however, reveals that such a correlation does not challenge the independence assumption underlying the MM-ML method, because the method takes into account correlations that are predicted by the strategies. In Table 1 it can be seen that the correlation between confidence and time should be r = −1 according to TTB and r = −.85 according to PCS (assuming that all 6 cue combinations are equally likely). Remember that likelihoods are calculated based on the assumption that values (for time and confidence) are normally distributed around the predicted mean for the respective item type (see equation 6). Therefore, likelihoods are based on deviations from the mean after correcting for systematic differences in means (i.e., on correlations of residuals after partialling out the systematic effect of item types).

Furthermore, note that in the simulations reported above these systematic correlations between time and confidence were also induced in the data by generating them from the predictions of the strategies (Table 1). The size of the correlation was implicitly manipulated by adding relatively small or large error terms to these systematic components (i.e., increasing correlation with increasing maximal effect size). Hence, the simulations also show that these correlations do not lead to biases in strategy classification.

7.4 Correction for learning effects

It can be expected that decision times decrease over time, particularly if persons have to repeat each item type 10 or more times. This could harm strategy classification based on decision time. To avoid systematic biases induced by order, an individually randomized presentation order should be used. Furthermore, it is advisable to reduce error variance by partialling out the effect of order on log-transformed decision times and to use the resulting residuals instead of the raw values in the MM-ML method.
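A minimal sketch of this order correction, assuming a simple linear effect of presentation order on log-transformed decision times:

```python
# Sketch: partial out a linear order effect and keep the residuals for MM-ML.
import numpy as np

def order_corrected(log_times, order):
    log_times = np.asarray(log_times, dtype=float)
    order = np.asarray(order, dtype=float)
    slope, intercept = np.polyfit(order, log_times, deg=1)
    return log_times - (intercept + slope * order)   # residuals
```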

7.5 Dependence of parameters and estimation of strategies with additional parameters

Another possible caveat for the method might be that the different estimated parameters are not mutually independent. For instance, it might be argued that σ increases with increasing µ (i.e., heteroscedasticity). This can, however, be easily handled in STATA by including one parameter as a predictor for the other in the estimation program (Gould, Pitblado, & Sribney, 2006). Before doing so, however, it is necessary to have a good hypothesis about the relation between the parameters. Furthermore, one might want to test strategies that have free parameters themselves (e.g., Bergert & Nosofsky, 2007; Busemeyer & Johnson, 2004; Busemeyer & Townsend, 1993; Nosofsky & Bergert, 2007). This can, of course, also be handled by including these strategy parameters in the estimation. Finally, if strategies make predictions on only one of the continuous variables, the method can be applied in a simplified version (as indicated in equation 7).

8 General discussion

The Multiple-Measure Maximum Likelihood strategy classification method allows for identifying individuals’ decision strategies by taking into account choices, decision times and confidence judgments at the same time. In contrast to earlier approaches for including decision time in strategy classification (Bergert & Nosofsky, 2007; Glöckner, 2006; for an overview, see Glöckner, in press), it allows estimating the overall likelihood of the data given the application of a strategy and comparing any number of strategies based on the whole set of observations. The method allows differentiating between strategies that make the same choice predictions as long as their effects on decision time and confidence are sufficiently large. With decreasing effect sizes and small numbers of observations, the method is biased towards the strategy with fewer parameters. It is therefore advisable to use types of items for which large differences in times and confidences are expected, to use a sufficient number of items per item type, and to check the effect size before interpreting the results concerning strategies which make equal choice predictions.

It could be shown that including decision times and confidences reduces the proportion of misclassifications compared to the choice-based strategy classification by Bröder and Schiffer (2003a). Therefore, it is advisable to use the MM-ML method even in cases where different choice predictions can be derived from different strategies.

Besides providing an overall maximum likelihood measure, the MM-ML method provides a tool to improve our understanding of the processes underlying decision making. It allows investigating the fit for the different dependent variables separately for each strategy and each individual. As can be seen in Figure 5 (top), the method provides a significance test for the scaling parameters R_C and R_T, which indicates whether the specific prediction for confidence or decision time of the respective strategy was in line with the data or not. Process models might be improved based on this knowledge by simply counting the number of significant predictions for each dependent variable.

8.1 Further applications and extensions

The MM-ML method is very general and is limited neither to probabilistic inference tasks nor to decision research in general. It can be applied to test any kind of model of cognitive processes that makes predictions concerning dichotomous behavior, response times and confidences (e.g., gambling tasks, recognition tasks, multi-attribute decisions). For example, recent research on strategy selection in gambling decisions (e.g., Glöckner & Betsch, 2008a) could be extended by recording choices, decision times and confidence and comparing the full data set with the predictions of different models (e.g., prospect theory, decision field theory, PCS, the priority heuristic). The method can, of course, also be extended to further (normally distributed) dependent measures, which can be included in the analysis in basically the same way as decision time and confidence (see above). A possible extension of particular interest to those investigating intuition could be measures of the distribution of eye-fixations and physiological arousal, which can both be captured by recent eye-tracking technology (Glöckner & Herbold, 2008; see also Hochman, Glöckner, & Yechiam, in press). For the investigation of deliberate strategies only, classic Mouselab measures for information search such as the Payne index (Payne et al., 1988) could also potentially be included in the analysis.

In sum, the MM-ML method is an easy-to-use and reliable method for individual-level strategy classification, which can be applied to intuitive and deliberate strategies. Application of the method, however, necessitates well specified models (Glöckner, in press). The MM-ML method has the major advantage of testing strategies or models against all of their predictions at the same time. Furthermore, it will help to improve process models by providing much more detailed information on the fit of predictions for different dependent measures.

Footnotes

*

I am grateful to Joseph G. Johnson, Christoph Engel, Arndt Bröder, Jonathan Baron, Benjamin Hilbig and Nina Horstmann for insightful comments on earlier manuscript drafts. I thank Philipp Weinschenk and Andreas Nicklisch for their help with the math and the equations. Parts of this article were realized during a working stay organized by Edoardo Leva.

1 Note, however, that the exact estimates of classification errors depend crucially on the number of item types and items per type (n_j).

2 The strategy that predicts no difference in decision time is often given the advantage of being the null hypothesis. Only if a significant difference in the direction predicted by the other strategy is found is the null hypothesis rejected and the person classified as a user of the alternative strategy. As discussed elsewhere (Glöckner, in press), with small n this leads to over-classification for strategies that predict no differences because the beta-error is bigger than conventional alpha levels.

3 Alternatively, it could be assumed that σ differs between item types and increases with increasing t_i. Although this relation might also be modeled in the ML calculation, for simplicity a constant σ should be assumed.

4 The assumption that confidence judgments are normally distributed is rather common (e.g., Merkle, Sieck, & Van Zandt, 2008). For a discussion of the independence assumption, see below.

5 Following Schwarz (1978), we used BIC instead of the alternative Akaike information criterion, AIC = −2 ln(Likelihood) + 2 N_p. Note that using AIC in the simulations led to results similar to those using N_obs = 18, except that strategies with more parameters were classified somewhat more often (because 2 < ln(18)).

6 Differences result from the fact that the parameter is estimated jointly with the other parameters in the MM-ML method.

References

Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.
Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.
Brehmer, B. (1994). The psychology of linear judgement models. Acta Psychologica, 87, 137–154.
Bröder, A. (2000). Assessing the empirical validity of the “Take-the-best” heuristic as a model of human probabilistic inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1332–1346.
Bröder, A. (2003). Decision making with the “adaptive toolbox”: Influence of environmental structure, intelligence, and working memory load. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 611–625.
Bröder, A. (in press). Outcome-based strategy classification. In A. Glöckner & C. L. M. Witteman (Eds.), Tracing intuition: Recent methods in measuring intuitive and deliberate processes in decision making. London: Psychology Press & Routledge.
Bröder, A., & Gaissmaier, W. (2007). Sequential processing of cues in memory-based multiattribute decisions. Psychonomic Bulletin & Review, 14, 895–900.
Bröder, A., & Schiffer, S. (2003a). Bayesian strategy assessment in multi-attribute decision making. Journal of Behavioral Decision Making, 16, 193–213.
Bröder, A., & Schiffer, S. (2003b). Take The Best versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132, 277–293.
Bröder, A., & Schiffer, S. (2006). Stimulus format and working memory in fast and frugal strategy selection. Journal of Behavioral Decision Making, 19, 361–380.
Busemeyer, J. R., & Johnson, J. G. (2004). Computational models of decision making. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 133–154). Malden, MA: Blackwell Publishing.
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459.
Doherty, M. E., & Brehmer, B. (1997). The paramorphic representation of clinical judgment: A thirty-year retrospective. In W. M. Goldstein & R. M. Hogarth (Eds.), Research on judgment and decision making: Currents, connections, and controversies (pp. 537–551). New York: Cambridge University Press.
Doherty, M. E., & Kurz, E. M. (1996). Social judgment theory. Thinking & Reasoning, 2, 109–140.
Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review, 106, 180–209.
Fishburn, P. C. (1974). Lexicographic orders, utilities, and decision rules: A survey. Management Science, 20, 1442–1472.
Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506–528.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650–669.
Glöckner, A. (2006). Automatische Prozesse bei Entscheidungen [Automatic processes in decision making]. Hamburg, Germany: Kovac.
Glöckner, A. (2007). Does intuition beat fast and frugal heuristics? A systematic empirical analysis. In H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment and decision making (pp. 309–325). Mahwah, NJ: Lawrence Erlbaum.
Glöckner, A. (in press). Multiple measure strategy classification: Outcomes, decision times and confidence. In A. Glöckner & C. L. M. Witteman (Eds.), Tracing intuition: Recent methods in measuring intuitive and deliberate processes in decision making. London: Psychology Press & Routledge.
Glöckner, A., & Betsch, T. (2008a). Do people make decisions under risk based on ignorance? An empirical test of the Priority Heuristic against Cumulative Prospect Theory. Organizational Behavior and Human Decision Processes, 107, 75–95.
Glöckner, A., & Betsch, T. (2008b). Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making. Judgment and Decision Making, 3, 215–228.
Glöckner, A., & Betsch, T. (2008c). Multiple-reason decision making based on automatic processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1055–1075.
Glöckner, A., & Herbold, A.-K. (2008). Information processing in decisions under risk: Evidence for compensatory strategies based on automatic processes. MPI Collective Goods Preprint, No. 42. Available at SSRN: http://ssrn.com/abstract=1307664.
Glöckner, A., & Witteman, C. L. M. (in press). Foundations for tracing intuition: Models, findings, categorizations. In A. Glöckner & C. L. M. Witteman (Eds.), Tracing intuition: Recent methods in measuring intuitive and deliberate processes in decision making. London: Psychology Press & Routledge.
Gould, W., Pitblado, J., & Sribney, W. (2006). Maximum likelihood estimation with Stata (3rd ed.). College Station, TX: Stata Press.
Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on Systems, Man, & Cybernetics, 17, 753–770.
Harte, J. M., & Koele, P. (2001). Modelling and describing human judgement processes: The multiattribute evaluation case. Thinking and Reasoning, 7, 29–49.
Hochman, G., Glöckner, A., & Yechiam, E. (in press). Physiological measures in identifying decision strategies. In A. Glöckner & C. L. M. Witteman (Eds.), Tracing intuition: Recent methods in measuring intuitive and deliberate processes in decision making. London: Psychology Press & Routledge.
Hoffman, P. J. (1960). The paramorphic representation of clinical judgment. Psychological Bulletin, 57, 116–131.
Horstmann, N., Ahlgrimm, A., & Glöckner, A. (under review). How distinct are intuition and deliberation? An eye-tracking analysis of instruction-induced decision modes.
Lee, M. D., & Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the “take the best” and the “rational” models. Psychonomic Bulletin & Review, 11, 343–352.
Merkle, E. C., Sieck, W. R., & Van Zandt, T. (2008). Response error and processing biases in confidence judgment. Journal of Behavioral Decision Making, 21, 428–448.
Norman, E., & Schulte-Mecklenbeck, M. (in press). Take a quick click at that! Mouselab and eye-tracking as tools to measure intuition. In A. Glöckner & C. L. M. Witteman (Eds.), Tracing intuition: Recent methods in measuring intuitive and deliberate processes in decision making. London: Psychology Press & Routledge.
Nosofsky, R. M., & Bergert, F. B. (2007). Limitations of exemplar models of multi-attribute probabilistic inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 999–1019.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.
Rieskamp, J., & Hoffrage, U. (1999). When do people use simple heuristics, and how can we tell? In Simple heuristics that make us smart (pp. 141–167). New York, NY: Oxford University Press.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
Wasserman, L. (2000). Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44, 92–107.
