
Attentional shifts and preference reversals: An eye-tracking study

Published online by Cambridge University Press:  01 January 2023

Carlos Alós-Ferrer*
Affiliation:
Zurich Center for Neuroeconomics (ZNE), Department of Economics, University of Zurich. Blümlisalpstrasse 10, 8006, Zurich, Switzerland
Alexander Jaudas
Affiliation:
Department of Political and Social Sciences, Zeppelin University Friedrichshafen, Germany
Alexander Ritschel
Affiliation:
Zurich Center for Neuroeconomics (ZNE), Department of Economics, University of Zurich

Abstract

The classic preference reversal phenomenon, where monetary evaluations contradict risky choices, has been argued to arise due to a focus on outcomes during the evaluation of alternatives, leading to overpricing of long-shot options. Such an explanation makes the implicit assumption that attentional shifts drive the phenomenon. We conducted an eye-tracking study to causally test this hypothesis by comparing a treatment based on cardinal, monetary evaluations with a different treatment avoiding a monetary frame. We find a significant treatment effect in the form of a shift in attention toward outcomes (relative to probabilities) when evaluations are monetary. Our evidence suggests that attentional shifts resulting from the monetary frame of evaluations are a driver of preference reversals.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2021] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Attention matters. A growing literature is concentrating on the role of attention in human decision making. In the consumer behavior literature, there is little doubt that consumers’ attention is limited, and one of the main objectives of marketing campaigns is simply to attract and direct it (e.g., Roberts & Lattin, 1991; De Los Santos & Wildenbeest, 2012). Recent contributions in decision and game theory have shown how differences in attention and information processing correlate with decision-making styles and biases (e.g., Knoepfle et al., 2009; Reutskaja et al., 2011; Polonio et al., 2015). Prominent models from cognitive psychology conceive of decision values as the result of evidence accumulation processes (e.g., Ratcliff, 1978; Ratcliff & Rouder, 1998; Usher & McClelland, 2001). A key insight of these models is that the construction (or discovery) of value is directed by visual attention; that is, evidence accumulates only if the alternative (or a corresponding attribute) is attended to. This is the essence, for instance, of the attentional drift-diffusion model (Krajbich et al., 2010; Krajbich & Rangel, 2011; Krajbich et al., 2012). This is in agreement with evidence from decision neuroscience suggesting that decision values (neural correlates of choices) are constructed by aggregating inputs from different decision processes or attribute evaluations (Shadlen & Kiani, 2013; Shadlen & Shohamy, 2016).

In this paper, we provide direct empirical evidence substantiating the role of attention for an important anomaly in decision making under risk, maybe one of the most famous and wide-ranging ones: the classic preference reversal phenomenon (Lichtenstein & Slovic, 1971; Grether & Plott, 1979; see Seidl, 2002, for a detailed survey). The phenomenon refers to a pattern of decisions under risk where decision makers explicitly provide monetary values for long-shot lotteries which are above those of more moderate ones, but then choose the latter, in contradiction with Expected Utility Theory and any value-based theory such as Cumulative Prospect Theory. We focus on eye-tracking measurements during a preference reversal experiment with two different treatments (varying the mode of evaluation of lotteries) to provide direct evidence on the role of attention.

A large literature has demonstrated the robustness of this preference reversal phenomenon and postulated different, sometimes competing, explanations (e.g., Tversky et al., 1988; Tversky et al., 1990; Tversky & Thaler, 1990; Casey, 1994; Fischer et al., 1999; Cubitt et al., 2004; Schmidt & Hey, 2004; Butler & Loomes, 2007). The phenomenon is typically demonstrated in paradigms involving pairs of lotteries consisting of a relatively safe lottery, called the P-bet (for “probability”), and a riskier lottery offering a larger prize (a long shot), called the $-bet. Individual preferences over such pairs are then elicited both through pairwise choices and by comparing valuations obtained separately for each lottery through (typically) stated minimal selling prices (Willingness To Accept, WTA). Decision makers often choose the P-bet in the direct choice task, but explicitly value the $-bet above the P-bet, in contradiction with the most basic tenets of decision theories under risk, and specifically with the indifference between a lottery and its certainty equivalent. This phenomenon reveals an inconsistency between elicitation methods which should be equivalent. In turn, this inconsistency is both highly relevant and consequential for applied economic analysis, because individual preferences are in practice often estimated on the basis of monetary valuations and related constructs (see Bateman et al., 2002, for an overview).

Of course, a number of reversals are to be expected simply because choices and evaluations are noisy, but the fundamental observation which needs to be explained is the asymmetry. That is, the reversal pattern described above, where P-bets are chosen but $-bets are valued above them (often called “predicted reversals”), occurs much more frequently (often above 50% of the time, conditional on the P-bet being chosen) than the opposite pattern, in which $-bets are chosen but P-bets receive a higher valuation (often called “unpredicted reversals”).

A prominent argument on the origins of the reversal phenomenon is the Compatibility Hypothesis (Tversky et al., 1988; Tversky et al., 1990). Essentially, it states that, when an evaluation is elicited, attributes that naturally map onto the evaluation scale are given predominant weight. That is, eliciting a monetary evaluation (willingness to accept) makes the monetary outcomes of lotteries more salient and might anchor valuations, giving rise to an overpricing of the $-bets, where the associated monetary outcomes are large. It is not difficult to see how, in a noisy environment, such a phenomenon might give rise to preference reversals as found in the literature. If the elicited evaluations of $-bets are systematically biased upward with respect to their true certainty equivalents, it is likely that part of the choices where a P-bet is chosen are associated with overpriced $-bets, resulting in many predicted reversals. In contrast, for choices where the $-bet is chosen, the same overpricing makes it unlikely that the P-bet is valued above the $-bet, resulting in few unpredicted reversals. A formal model based on evaluation noise, which also makes predictions for the associated decision times, was proposed and tested in Alós-Ferrer et al. (2016).
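This noise-plus-bias mechanism can be illustrated with a back-of-the-envelope simulation. The sketch below is a minimal illustration, not the formal model of Alós-Ferrer et al. (2016); the certainty equivalents, bias, and noise parameters are arbitrary assumptions chosen only to make the asymmetry visible.

```python
import random

random.seed(7)

# Hypothetical pair with equal true certainty equivalents (CEs), so the
# asymmetry below is driven purely by the pricing bias on the $-bet.
TRUE_CE_P = 3.0   # e.g., a P-bet such as (0.9, 3.30 Euro) -- assumption
TRUE_CE_D = 3.0   # e.g., a $-bet such as (0.3, 10.00 Euro) -- assumption
BIAS = 1.0        # assumed upward pricing bias on the $-bet (compatibility)
NOISE = 1.0       # half-width of uniform evaluation noise (assumption)

def noisy(ce, bias=0.0):
    return ce + bias + random.uniform(-NOISE, NOISE)

predicted = unpredicted = chose_p = chose_d = 0
for _ in range(100_000):
    # Choice phase: compare noisy CEs without any monetary-scale bias.
    if noisy(TRUE_CE_P) > noisy(TRUE_CE_D):
        chose_p += 1
        # Predicted reversal: P-bet chosen, but $-bet priced higher.
        if noisy(TRUE_CE_D, BIAS) > noisy(TRUE_CE_P):
            predicted += 1
    else:
        chose_d += 1
        # Unpredicted reversal: $-bet chosen, but P-bet priced higher.
        if noisy(TRUE_CE_P) > noisy(TRUE_CE_D, BIAS):
            unpredicted += 1

print(f"predicted reversal rate:   {predicted / chose_p:.2f}")    # high
print(f"unpredicted reversal rate: {unpredicted / chose_d:.2f}")  # low
```

Although both lotteries are equally attractive by construction, the upward pricing bias alone produces many predicted and few unpredicted reversals, matching the asymmetry described above.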

The Compatibility Hypothesis and related explanations of the preference reversal phenomenon essentially rest on the assumption that asking for a monetary valuation results in the overweighting of (salient) monetary outcomes. It is then reasonable to assume that eliciting valuations through a monetary scale shifts visual attention toward monetary outcomes, compared to evaluation methods not relying on a monetary scale. We hence hypothesize a link between the overweighting of monetary outcomes and visual attention to outcomes. Specifically, overweighting should be observable through an attentional shift. It should be noted, however, that a failure to find supportive evidence for this hypothesis would not undermine the Compatibility Hypothesis itself, while finding supportive evidence would be in line both with the Compatibility Hypothesis and with our added hypothesis that overweighting is reflected in a shift in visual attention.

In this study, we want to explicitly test this hypothesis by examining gaze data obtained through eye tracking during decisions under risk in the framework of the preference reversal phenomenon. To establish the link between monetary valuations and attentional shifts, we conduct a preference-reversal experiment with two treatments, one using a standard monetary valuation, and the other relying on an ordinal-based evaluation task without reference to monetary scales. This allows us to test whether the monetary valuation, relative to other evaluation methods, results in a shift in attention toward monetary outcomes.

A previous study by Kim et al. (2012) investigated visual fixations in a preference-reversal experiment, but relied on a single treatment with monetary valuations. They observed that monetary amounts were fixated more than probabilities during evaluations but the opposite was true during choices, which can be taken as initial evidence in favor of a role of attention in preference reversals. However, their experiment departed from standard implementations in several ways. First, the description used to elicit monetary valuations by Kim et al. (2012) (“bidding”) corresponds to Willingness To Pay, while the standard in preference reversal experiments is Willingness To Accept (experiments using WTP do not always find the preference reversal phenomenon; see Casey, 1991, and Schmidt & Hey, 2004). Second, lottery choices were repeated twice, which leads to different definitions of preference reversals. Our first treatment can be seen as a (conceptual) replication of this work, while relying on a standard implementation of preference reversal experiments. In particular, we will also compare the number of fixations on outcomes and probabilities within this treatment. However, without an additional treatment, it remains unclear whether the effect reported by Kim et al. (2012) is due only to the presence of a monetary scale for evaluations (which is absent for choices), or whether it is confounded by the differences between evaluations of single lotteries, where a numerical estimate needs to be provided, and actual binary choices. For instance, people have notorious difficulty dealing with probabilities, so it is to be expected that the default (in the absence of a monetary scale) is a larger number of fixations on probabilities than on the easier-to-understand outcomes, rather than an equal distribution, and these differences could interact with whether a choice or an evaluation is being made.

We aim to provide additional evidence in the form of a direct comparison across different evaluation methods, while also confirming the results of Kim et al. (2012). That is, our hypotheses are that monetary amounts should be fixated more than probabilities in an evaluation phase where a monetary scale is used, compared to the choice phase (as in Kim et al., 2012), but also compared to a different evaluation phase where a monetary scale is absent. Confirming both hypotheses (evaluations vs. choices and monetary vs. non-monetary evaluations) would provide concurrent evidence for the link between overweighting and visual attention. For this purpose, we chose a second treatment where the monetary evaluation is replaced with the elicitation of an ordinal ranking within a small subset of lotteries. The reason is twofold. On the one hand, this treatment is a straightforward implementation of a ranking (as opposed to a monetarily-framed rating) which requires no reference to monetary values at all. On the other hand, it has been previously shown that this evaluation method shuts down the preference reversal phenomenon and, instead, elicits a “reversal of the preference reversal phenomenon” (Casey, 1991; Bateman et al., 2007; Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020) where the rate of unpredicted reversals exceeds the rate of predicted ones. Thus, this is a natural choice for a comparison treatment where overweighting can be assumed to be less relevant or nonexistent (see Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020, for details).

Our treatment comparison is related to the study of Rubaltelli et al. (2012), who analyzed fixations on gambles which were evaluated according to two different methods (within subjects). Their study did not include a choice phase (and hence preference reversals cannot be observed), and the evaluations were not incentivized. However, their evaluation treatments conceptually parallel ours. The first was a pricing-based evaluation similar to ours, with the difference that they used Willingness To Pay instead of Willingness To Accept. The second asked subjects to evaluate gambles using levels of attractiveness (−5=“very unattractive” to 5=“very attractive”). Although this is not a purely-ordinal ranking task like ours, the abstract rating, though numerical, is in principle devoid of monetary content. Subjects fixated more on outcomes than probabilities when pricing the lotteries, but there were no differences when rating them according to attractiveness. Although the tasks are very different and not representative of the ones used in the literature on the standard reversal phenomenon, this is in line with the hypothesis that the overweighting predicted by the Compatibility Hypothesis should correspond to increased visual attention on outcomes during monetary evaluations.

Finally, we complement the demonstration of an attentional shift across different evaluation modes at the aggregate level with evidence for the role of attention at the level of individual decisions. Following Alós-Ferrer et al. (2020), we included an additional, independent block of decisions under risk in the experiment, allowing for an out-of-sample estimation of individual utilities and certainty equivalents. For the treatment with monetary evaluations, we then obtain a quantitative measure of overpricing, in the form of the difference between the certainty equivalent and the elicited valuation. We then relate this measure of overpricing to visual attention by examining the effect of fixations on each lottery on the corresponding overpricing. The effects are admittedly modest, but we do find that attention on $-bets is associated with their overpricing, in line with the basic interpretation that increased attention boosts value. In contrast, attention on P-bets has no effect.

Our study belongs to the growing literature directly examining eye-tracking measurements in the social sciences. This technique is relatively common in psychology and neuroscience, but has only recently gained popularity for the study of individual decisions under risk (e.g., Glöckner & Herbold, 2011; Ludwig et al., 2020). Most of the recent studies in this and related fields target gaze and fixation data to study search patterns or processes of information acquisition (e.g., Knoepfle et al., 2009; Reutskaja et al., 2011; Polonio et al., 2015; Devetag et al., 2016; Polonio & Coricelli, 2019). Exceptions are Wang et al. (2010), who (in addition to fixation patterns) examined pupil dilation in sender-receiver games and found larger pupil dilation when deceiving messages were sent, and Alós-Ferrer et al. (2019b), who used pupil dilation as an indicator of cognitive effort in a Bayesian updating task with varying incentives.

There are, of course, many other types of preference reversals in the literature, where two different choices stand in contradiction with a normative prediction. Other prominent examples are the asymmetric dominance or “decoy” effect (Huber et al., 1982; Pettibone, 2012), the compromise effect (Simonson, 1989), and the similarity effect (Tversky, 1972). In an eye-tracking experiment, Noguchi & Stewart (2014) investigated these effects in consumer choice tasks and concluded that they might be compatible with choices arising from a series of single-attribute comparisons. This view is conceptually aligned with ours in the sense that the relative weight of comparisons along different attributes is at the root of the respective effects.

The paper is structured as follows. Section 2 presents the experimental design in detail. Section 3 discusses the behavioral and eye-tracking results for the treatment comparisons. Section 4 discusses the utility estimation, the derivation of an overpricing measure, and its relation to attention data. Section 5 concludes. The Appendix includes a detailed description of the random utility model estimation procedure (Appendix A), a list of all lottery pairs used in the experiment (Appendix B), translated instructions (Appendix C), and example screenshots of the experiment (Appendix D).

2 Experimental Design and Procedures

Our dataset encompasses a total of 59 subjects (31 females, average age 22.6 years), who were measured in individual sessions.Footnote 1 Individual sessions lasted 48 minutes on average, and subjects earned an average of 15.86 Euro (SD=9.86), plus a 4 Euro show-up fee. Subjects were recruited from the student population of the University of Cologne using ORSEE (Greiner, 2015), excluding students majoring in psychology and economics (who could have been taught about the preference reversal phenomenon) and subjects who had previously participated in similar experiments (involving lottery choice). The experiment was programmed in PsychoPy (Peirce, 2007). There were two treatments, Price and Rank, with 30 and 29 subjects, respectively.

2.1 Design

The experiment closely followed the general setup of behavioral experiments on preference reversals (e.g., Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020), with suitable modifications to accommodate eye-tracking measurements. Our intention was to establish attentional shifts within the classical paradigm without adding any potential confounds, and specifically to compare it to the ranking design where the reversal of the preference reversal phenomenon has been elicited. We now describe the paradigm, the treatments, and the adjustments needed for eye-tracking measurements.

The experiment comprised three phases. The first and shortest one consisted of choices between 36 lottery pairs, unrelated to the P- and $-bets used in the subsequent two parts.Footnote 2 Of these, 32 choices were used for the out-of-sample estimation of individual preferences, which is relevant for the analysis in Section 4 below, and the remaining 4 choices were used to check for dominance violations.Footnote 3

The main part of the experiment consisted of the second and third phases, which taken together correspond to a standard preference reversal experiment, except for the fact that eye-tracking data was collected. The second was the evaluation phase, in which we elicited the subjects’ valuations for 60 P-bets and 60 $-bets. In the Price treatment, subjects stated their willingness-to-accept (WTA) valuations for each lottery. Specifically, they were asked to state their minimal selling price for each of the 120 lotteries. Each lottery was presented on a separate screen. All lotteries were of the form A = (p, x), that is, A pays an amount x with probability p and zero otherwise. Subjects’ WTAs were limited to the range [0, x]. In the Rank treatment, we aimed to obtain ordinal evaluations as in Bateman et al. (2007) or Alós-Ferrer et al. (2016). The same 120 lotteries used in the Price treatment were presented in blocks of six, and subjects assigned ranks to them from their most (rank 1) to their least preferred option (rank 6) according to how much they desired to play each lottery. Each block contained three P-bets and three $-bets. To ensure comparability between treatments, the lotteries in the Price treatment were also presented in 20 “rounds,” separated by screens announcing the next round. Each such round consisted of six lotteries presented sequentially, with the set of lotteries in a round corresponding to one block in the Rank treatment.Footnote 4

The last phase was a choice task, identical across treatments.Footnote 5 Subjects faced again the lotteries from the evaluation phase, now presented in 60 pairs, each consisting of a $-bet and a P-bet. For each of the 60 pairs, subjects were asked to choose which lottery they preferred to play. Pairs were constructed in such a way that a block in the second phase contained exactly three of the pairs used in the third phase, but the order of presentation of the pairs was randomized (for ease of implementation, each subject was randomly assigned to one of four different, pre-randomized sequences of lottery pairs).

In all parts of the experiment, lotteries were presented in the form of two framed boxes stacked vertically, one showing the outcome and one showing the probability of the lottery. The position of the two boxes (i.e., whether the monetary amount was on top or not) was counterbalanced across subjects. This presentation ensured a physical separation of the different dimensions of a lottery allowing us to clearly distinguish the areas of interest for the eye-tracking analysis. Appendix C and Appendix D show the instructions and screenshots of the experiment, respectively. The screen position (left or right) of lotteries within pairs was also counterbalanced within subjects, with half of the pairs displaying the $-bet on the right.

After the three phases described above, subjects were asked to complete a short questionnaire eliciting various demographics (gender, age, field of studies) and numerical literacy (Reference Lipkus, Samsa and RimerLipkus et al., 2001). There was no feedback during the course of the experiment, that is, subjects did not receive any information regarding their earnings until the very end of the experiment. All decisions were made independently and at a subject’s individual pace.

After the questionnaire, for each subject, one lottery from each phase was randomly selected, played, and paid. For the first and third phases, one of the lottery pairs in the corresponding phase was randomly selected and the lottery chosen by the subject was played out. The second phase used the (incentive-compatible) Ordinal Payment Method (Goldstein & Einhorn, 1987; Tversky et al., 1990; Cubitt et al., 2004). Specifically, the computer selected one block at random, and then randomly selected two of the six lotteries in the block. The one that the subject had priced or ranked higher was then played out. We opted for this incentive scheme instead of the Becker-DeGroot-Marschak procedure because the latter is often found to be noisier (see, e.g., Alós-Ferrer et al., 2016). The total payoff from the experiment was the sum of the amounts received in each phase, plus a lab-mandated show-up fee.
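The selection logic of the Ordinal Payment Method can be sketched in a few lines. This is a minimal illustration with hypothetical lottery labels and WTA values; the function `higher_ranked` stands in for the subject's stated prices or ranks.

```python
import random

def ordinal_payment(blocks, higher_ranked):
    """Pick the lottery to be played out under the Ordinal Payment Method.

    blocks: list of blocks, each a list of six lottery labels.
    higher_ranked: function (a, b) -> whichever of the two lotteries the
    subject priced or ranked higher (hypothetical interface).
    """
    block = random.choice(blocks)       # one block selected at random
    a, b = random.sample(block, 2)      # two of its six lotteries
    return higher_ranked(a, b)          # the preferred one is played out

# Illustration with hypothetical WTA statements (in Euro):
wta = {"L1": 2.50, "L2": 4.00, "L3": 1.00, "L4": 3.20, "L5": 0.80, "L6": 5.00}
blocks = [list(wta)]
winner = ordinal_payment(blocks, lambda a, b: a if wta[a] > wta[b] else b)
print(winner in wta)  # → True
```

Because the subject's own stated ranking decides which of the two drawn lotteries is played, reporting prices or ranks truthfully is optimal, which is what makes the method incentive-compatible.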

2.2 Eye-tracking Setup

Visual fixations were measured using an SMI RED500 remote eye tracker. The subject’s head was supported by a chin rest to minimize random movement. Subjects were placed 55 cm in front of a 22” screen which showed the stimuli at a resolution of 1680×1050 pixels. The pupil was recorded at 250 Hz using iView X software, version 2.8.43. The eye tracker was calibrated at the beginning of each part (after the instructions) using a 5-point calibration routine. Blinks were removed after data collection using the tools provided by SMI. The raw data files were converted to fixations using the SMI IDF converter tool 3.0.16. To identify fixations, the SMI IDF converter uses a dispersion-based algorithm with a minimum fixation duration of 50 ms (Glöckner & Herbold, 2011; Glöckner et al., 2012) and a maximum dispersion of 85 pixels (see Salvucci & Goldberg, 2000, for a comparison of different methods). Non-overlapping Areas of Interest (AOIs) were pre-defined around every piece of information (160×95 pixels per AOI).Footnote 6 After data collection, fixations were corrected using an algorithm similar to Vadillo et al. (2015), and the number and duration of fixations were computed and recorded.
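For illustration, a dispersion-threshold (I-DT) detector in the spirit of Salvucci & Goldberg (2000) can be sketched as follows. This is not the SMI converter's proprietary implementation; the parameters simply mirror those reported above (250 Hz, 50 ms minimum duration, 85 px maximum dispersion).

```python
def detect_fixations(samples, hz=250, max_disp=85, min_dur_ms=50):
    """Minimal I-DT sketch. samples: list of (x, y) gaze points at rate hz."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    min_len = int(min_dur_ms * hz / 1000)   # 50 ms at 250 Hz -> 12 samples
    fixations, i = [], 0
    while i + min_len <= len(samples):
        if dispersion(samples[i:i + min_len]) <= max_disp:
            j = i + min_len
            # Grow the window while dispersion stays under the threshold.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            xs, ys = zip(*samples[i:j])
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            fixations.append((centroid, (j - i) * 1000 / hz))  # (pos, ms)
            i = j
        else:
            i += 1
    return fixations

# Two stable gaze clusters separated by an instantaneous "saccade":
gaze = [(100, 100)] * 30 + [(400, 300)] * 30
print(len(detect_fixations(gaze)))  # → 2
```

Each detected fixation is summarized by its centroid and duration, which is the information aggregated within the AOIs in the analysis below.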

3 Results: Attentional Shifts

We first present the purely-behavioral results to establish the presence of the preference reversal phenomenon (in the Price treatment) or its opposite (in the Rank treatment) as expected. Then we turn to our actual variables of interest, and examine attentional processes through eye-tracking data in two different subsections. Our main result in this section demonstrates an attentional shift toward outcomes (relative to probabilities) for monetary evaluations (Price treatment), compared to rankings (Rank treatment). This is accompanied by an attentional shift toward $-bets (relative to P-bets) for monetary evaluations, which is natural as $-bets involve larger outcomes.

The analysis in this section is based on subject averages (e.g., average number of fixations on the lotteries’ outcomes, computed at the subject level). All between-subject comparisons (across treatments) are made with Mann-Whitney-Wilcoxon (MWW) tests. All within-subject comparisons (differences between the choice and evaluation phases) are made with Wilcoxon-Signed-Rank (WSR) tests.
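Both test families are available in SciPy. The snippet below applies them to synthetic subject averages; the data are random placeholders, not the experimental values, and serve only to show the between- and within-subject test setup.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
# Hypothetical subject-level averages (e.g., fixations per lottery):
price_subj_avg = rng.normal(15.0, 3.0, 30)   # 30 subjects, Price treatment
rank_subj_avg = rng.normal(19.0, 3.0, 29)    # 29 subjects, Rank treatment

# Between-subject (treatment) comparison: Mann-Whitney-Wilcoxon test.
u, p_between = mannwhitneyu(price_subj_avg, rank_subj_avg)

# Within-subject (phase) comparison: Wilcoxon signed-rank test on paired data.
choice_phase = rng.normal(6.5, 1.0, 30)
eval_phase = choice_phase + rng.normal(8.0, 2.0, 30)  # evaluations: more fixations
w, p_within = wilcoxon(choice_phase, eval_phase)

print(p_between < 0.05, p_within < 0.05)
```

The signed-rank test exploits the pairing of a subject's own phase averages, while the MWW test compares the two independent treatment groups.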

3.1 Behavioral Results

Figure 1 illustrates the behavioral results. In the Price treatment, the $-bet was evaluated higher than the P-bet in 68.78% of cases, but it was only chosen 29.17% of the time (WSR test, N = 30, z = 4.711, p < .0001; Figure 1, left panel, left).Footnote 7 This is a first reflection of the preference reversal phenomenon. In contrast, but as expected, in the Rank treatment the $-bet was ranked better than the paired P-bet in 24.89% of cases, but was chosen over the P-bet in 33.22% of cases. This difference is highly significant (WSR test, N = 29, z = −3.691, p = .0002; Figure 1, left panel, right) and goes in the opposite direction of the Price treatment, reflecting the “reversal of the preference reversal phenomenon” which is characteristic of ordinal treatments (Bateman et al., 2007; Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020).

Figure 1: Left: Proportion of $-Bets preferred over the paired P-bets for both treatments and both phases. Right: Proportion of predicted and unpredicted reversals for both treatments.

Overall, we found 36.00% reversals (of both types) in the Price treatment, and 18.45% in the Rank treatment. That is, ranking evaluations reduced the overall amount of preference reversals (MWW test, N = 59, z = 3.689, p = .0002). Crucially, and also as expected, it changed the dominant type of reversals. This is illustrated in the right-hand panel of Figure 1, which displays the reversal rates classified as predicted or unpredicted reversals.Footnote 8 In the Price treatment, predicted reversals (46.80%) were far more frequent than unpredicted ones (5.00%; WSR test, N = 27, z = 4.324, p < .0001). The opposite pattern was observed in the Rank treatment, where predicted reversals were far less frequent (8.61%) than unpredicted ones (46.42%; WSR test, N = 29, z = −4.227, p < .0001). The first observation reflects the well-established preference reversal phenomenon, while the second reflects its reversal as in Alós-Ferrer et al. (2016). Of course, we also observe more predicted reversals in the Price than in the Rank treatment (MWW, N = 59, z = 5.573, p < .0001), and fewer unpredicted ones (MWW, N = 56, z = −5.652, p < .0001).

In summary, our behavioral data reflect the well-established preference reversal phenomenon and the previously-observed fact that this phenomenon is reversed if evaluations involve rankings instead of pricing. We now turn to eye-tracking data to study the attentional processes underlying preference reversals.

3.2 Attention Across Attributes

Consider the Price treatment first. Figure 2 (left-hand panel) displays the individual-level average number of fixations (across lotteries) per attribute (outcome and probability) in each phase of this treatment.Footnote 9 In the choice phase there were fewer fixations on outcomes (on average, 6.18 fixations on outcomes per lottery) than on probabilities (7.28; WSR, N = 30, z = −2.643, p = .0082). This suggests that the default level of attention is larger for probabilities than for outcomes, which is compatible with the view that human beings generally find the former less intuitive than the latter. This is also illustrated in the heatmap in Figure 3. However, this difference disappears in the evaluation phase, where there was no significant difference between the number of fixations on outcomes (15.12) and probabilities (13.71; WSR, N = 30, z = 1.090, p = .2756). To show that the difference across phases is significant, we computed the individual-level difference between the average number of fixations on outcomes and on probabilities. This difference was significantly different across phases (WSR, N = 30, z = 2.705, p = .0068). This is consistent with the Compatibility Hypothesis, which suggests that the level of attention to outcomes should increase in the (monetarily-framed) evaluation phase compared to the choice phase. It is also aligned with the results of Kim et al. (2012), who however used a different experimental implementation. Since choices and evaluations differ in fundamental ways, though, to test the Compatibility Hypothesis we now compare the results to those of the Rank treatment, where the monetary scale was absent during evaluation.

Figure 2: Average number of fixations on outcomes and probabilities in the choice and evaluation phases, for the Price treatments (left-hand panel) and the Rank treatment (center panel). The right-hand panel presents violin plots for the outcome/probability ratios for the number of fixations in the evaluation phases of both treatments (one outlier outside the picture).

Figure 3: Heatmap for the choice phase (Treatment Price). Red spots represent the most visually salient areas of the screen. The least salient areas (dark blue spots) were eliminated from the heatmap for better visualization. The heatmap is obtained by convolving the fixations (of all individuals and lotteries) with an isotropic bidimensional Gaussian function. The standard deviation of the Gaussian function was set according to Le Meur & Baccino (2013). In the actual choice screen, the lotteries were further apart and not labeled, and both the left-right position of lotteries and the top-bottom alignment of outcomes and probabilities were counterbalanced. Actual screenshots are depicted in the Appendix. The figure illustrates that, in general, more attention is devoted to probabilities than to outcomes. The analogous picture for Treatment Ranking displays similar features for the choice phase.

In particular, the enhanced focus on outcomes should be absent in the Rank treatment. This is indeed borne out by the data (Figure 2, center panel). In this treatment, subjects fixated more on probabilities than on outcomes both in the choice phase (outcomes, 6.56; probabilities, 8.18; WSR, N = 29, z = −3.708, p = .0002) and in the ranking phase (outcomes, 16.24; probabilities, 22.30; WSR, N = 29, z = −4.444, p < .0001). That is, there is no attentional shift across phases in this treatment. Rather, probabilities are attended to more in both phases. The difference between the average number of fixations on outcomes and on probabilities was different across phases (WSR, N = 29, z = −4.444, p < .0001), which is not surprising: since there are many more fixations overall in the ranking phase, the gap in fixations between probabilities and outcomes is even larger in the evaluation phase.

Our main result in this section, though, concerns the comparison across treatments. The very different setups of the evaluation phases of the two treatments make a direct comparison of the number of fixations difficult. We therefore consider the outcome/probability ratios for the number of fixations in the evaluation phases of both treatments. The ratio indicates how visual attention in each evaluation phase was allocated to the two attributes, i.e., ratios above 1 indicate a stronger focus on outcomes and ratios below 1 indicate a stronger focus on probabilities. This approach allows a simple, intuitive comparison of attention allocation across the Price and Rank treatments. The ratios show a strong shift in attention across treatments (Figure 2, right-hand panel). The outcome/probability ratio was 1.42 in the Price treatment and only .73 in the Rank treatment. That is, there was a significant shift in attention toward the outcome in the Price treatment (MWW test, N = 59, z = 5.003, p < .0001) compared to the Rank treatment.Footnote 10 These results confirm that pricing-based evaluations induce a stronger attentional focus on monetary outcomes.
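As an illustration, the ratio measure underlying this comparison can be computed as in the following minimal Python sketch; the per-subject fixation counts below are hypothetical, not the actual data.

```python
# Hypothetical per-subject average fixation counts (outcomes, probabilities)
# in the evaluation phase; the numbers are illustrative only.
price_subjects = [(15.1, 13.7), (16.0, 11.0)]  # monetary (pricing) evaluations
rank_subjects = [(16.2, 22.3), (14.0, 19.0)]   # ordinal (ranking) evaluations

def fixation_ratio(outcome_fixations, probability_fixations):
    """Outcome/probability fixation ratio: values above 1 indicate a
    stronger focus on outcomes, values below 1 a stronger focus on
    probabilities."""
    return outcome_fixations / probability_fixations

price_ratios = [fixation_ratio(o, p) for o, p in price_subjects]
rank_ratios = [fixation_ratio(o, p) for o, p in rank_subjects]

# With these illustrative numbers, attention tilts toward outcomes under
# pricing (ratios > 1) and toward probabilities under ranking (ratios < 1);
# the treatment comparison in the text then applies a Mann-Whitney test
# to the two distributions of individual ratios.
```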

For our purposes, it was important that our Price treatment reproduced standard preference reversal experiments as carried out in the extensive literature on this phenomenon. Unfortunately, this means that the visual layout in the evaluation phase of treatment Rank must differ from the one in the other treatment, simply because in the latter several lotteries are presented at once. This criticism could also be leveled at the comparison between the choice and the evaluation phase, and hence also at the analysis in Kim et al. (2012). To ameliorate this difficulty, however, the presentation of each individual lottery was identical across treatments, including stimuli size.Footnote 11 Still, it could be argued that the differences in layout beyond the individual lotteries might lead to potential confounds (Orquin & Holmqvist, 2018). In particular, the presence of multiple lotteries might have increased cognitive load and led to more dispersed attention, resulting in fixations being more uniformly distributed across the attributes. However, the opposite is true (see Figure 2), and thus we can rule out this alternative explanation. As reported above, in the Rank treatment, there were significantly more fixations on probabilities than on outcomes (22.30 vs. 16.24), while the difference in the Price treatment (13.71 vs. 15.12) was not significant.

Some lotteries have single-digit outcomes only (mostly P-bets), which could possibly be perceived through peripheral vision.Footnote 12 As a robustness check, we reran the analysis using only the $-bets. Since this still includes a few $-bets with single-digit outcomes and excludes a few P-bets with two-digit outcomes, we also ran further robustness checks excluding all single-digit-outcome lotteries. All results remained qualitatively the same and some were strengthened. Outcomes were now fixated significantly more often than probabilities in the evaluation phase of the Price treatment. We also conducted a further robustness analysis classifying fixations differently, namely counting consecutive fixations in the same AOI as one. The results remained qualitatively unchanged.
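The alternative classification just mentioned, counting consecutive fixations in the same AOI as one, amounts to collapsing runs of identical AOI labels in the fixation sequence. A minimal Python sketch (the AOI labels are hypothetical):

```python
from itertools import groupby

def collapse_consecutive(fixation_aois):
    """Count consecutive fixations on the same AOI as a single fixation
    by collapsing runs of identical AOI labels."""
    return [aoi for aoi, _run in groupby(fixation_aois)]

# Hypothetical fixation sequence for one trial:
sequence = ["outcome", "outcome", "probability", "probability", "outcome"]
collapsed = collapse_consecutive(sequence)
# collapsed == ["outcome", "probability", "outcome"]
```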

Although the analysis above focuses on fixations, it has to be acknowledged that the differences in layouts increase the number of transitions in the evaluation phase of the Rank treatment simply because there are more areas of interest (multiple lotteries), which then also leads to an overall increase in fixations. Indeed, 37.94% of all transitions across AOIs in this phase were across lotteries. This might raise the concern that fixations arising from across-lottery comparisons might differ from other fixations and create a confound in our results. A similar point affects the comparison of fixations between phases (choices vs. evaluations), both in our treatments and in Kim et al. (2012). To address this concern, we carried out a robustness analysis as follows. In the evaluation phase of the Rank treatment, 38.51% of all transitions start and end within the same AOI (either the outcome or the probability of a lottery). In the evaluation phase of the Price treatment, the corresponding number is 58.63%. Thus, we repeated the entire analysis using these AOI-internal transitions instead of fixations. For this analysis, we also used AOI-internal transitions for the choice phases. This dependent variable ignores all transitions across lotteries and should more closely reflect attention within a lottery, relying only on transitions unaffected by the different screen layouts across treatments and across phases within a treatment.
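The AOI-internal transition count described above can be sketched as follows; a transition is taken to be a pair of consecutive fixations, and only pairs starting and ending in the same AOI are counted (the AOI labels are hypothetical):

```python
def internal_transitions(fixation_aois):
    """Count transitions (pairs of consecutive fixations) that start and
    end within the same AOI; transitions across AOIs, including those
    across lotteries, are ignored."""
    counts = {}
    for start, end in zip(fixation_aois, fixation_aois[1:]):
        if start == end:
            counts[start] = counts.get(start, 0) + 1
    return counts

# Hypothetical sequence mixing the AOIs of two lotteries A and B:
sequence = ["out_A", "out_A", "prob_A", "prob_A", "prob_A", "out_B"]
counts = internal_transitions(sequence)
# counts == {"out_A": 1, "prob_A": 2}; the across-lottery transition
# prob_A -> out_B is ignored.
```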

All tests reported above remained qualitatively unchanged with this new analysis, both for the comparison of phases within the treatments and for the comparison of evaluations across treatments. In the Price treatment, there were fewer internal transitions on outcomes (average 1.28) than on probabilities (1.66; WSR, N = 30, z = −2.479, p = .0132) in the choice phase, but there was no significant difference in the evaluation phase (4.93 vs. 4.26; WSR, N = 30, z = 0.956, p = .3388). The individual-level difference in the average number of internal transitions on outcomes and on probabilities was significantly different across phases (WSR, N = 30, z = 2.314, p = .0207). In the Rank treatment, there were fewer internal transitions on outcomes than on probabilities both in the choice phase (1.39 vs. 1.95; WSR, N = 29, z = −3.503, p = .0005) and in the ranking phase (3.07 vs. 4.60; WSR, N = 29, z = −4.152, p < .0001), and the difference between the average number of internal transitions on outcomes and on probabilities was significantly different across phases (WSR, N = 29, z = −4.357, p < .0001). For the treatment difference, we computed the outcome/probability ratio for the number of internal transitions in the evaluation phases of both treatments, which was 1.71 in the Price treatment and only .693 in the Rank treatment. That is, as in the case of fixations, there was a significant shift in attention toward the outcome in the Price treatment (MWW test, N = 59, z = 4.427, p < .0001) compared to the Rank treatment.

We further investigated the effect of attribute values (actual outcomes and probabilities) on attention by conducting random effects panel regressions with robust standard errors for the (log-transformed) outcome/probability fixation ratios. Again, we interpret this variable as the level of attention on outcomes compared to probabilities. The regressions use the individual-level fixations for the evaluations of all lotteries during the evaluation phase for both treatments. Table 1 displays the results. The Rank treatment dummy is negative and highly significant in all three models, indicating an attentional shift toward probabilities compared to outcomes in that treatment, in agreement with the results reported above. The $-bet dummy is positive and highly significant, indicating a shift toward outcomes for $-bets compared to P-bets in the Price treatment (since the interaction term is included). The effect is negative and significant in all three models for the Rank treatment (linear combination test, β=−0.1084, −0.1850, and −0.1246, respectively), indicating a shift toward probabilities for $-bets in this treatment. In addition, we observe that the Outcome coefficient (monetary amount of the non-zero outcome of the lottery) is positive and highly significant in Models 2 and 3, demonstrating that a larger outcome results in a stronger shift toward outcomes compared to probabilities. Analogously, the Probability coefficient is positive, but misses significance at the 5% level (Model 3, p = .0796).

Table 1: Random Effects Panel Regression of the (log-transformed) Outcome/Probability Fixation Ratios.

Standard errors in parentheses

* p < 0.05

** p < 0.01

*** p < 0.001

3.3 Attention Across Lottery Types

Since $-bets involve larger outcomes than P-bets, an attentional shift across treatments should also be reflected in attentional differences between lottery types. In this subsection, we focus on this comparison. Figure 4 (left-hand panel) displays the individual-level average number of fixations per lottery in the Price treatment, separately for the evaluation and choice phases. For this treatment, when subjects were asked to generate a price (WTA) for a lottery, they fixated $-bets more (16.38) than P-bets (12.44; WSR, N = 30, z = 4.094, p < .0001). This can also be seen in the heatmap in Figure 5. In contrast, there were no significant differences in the number of fixations in the choice phase ($-bets, 6.82; P-bets, 6.64; WSR, N = 30, z = 0.391, p = .6959). To show that the difference across phases is significant, we computed the individual-level difference in the average number of fixations between $-bets and P-bets, which was larger for the pricing phase than for the choice phase (WSR, N = 30, z = 4.001, p < .0001). In summary, during the pricing phase subjects fixated more on the $-bets than on the P-bets, while in the choice phase both lotteries were given similar levels of attention. This is in agreement with Kim et al. (2012), who found more fixations on $-bets than on P-bets in their bidding phase, and the opposite in their choice phase. It would be tempting to interpret these results as evidence for the Compatibility Hypothesis. This, however, would be unwarranted. The reason is that similar effects are obtained in the Rank treatment, whose evaluation phase involved no monetary scale.

Figure 4: Number of Fixations on the $-bet and P-bet in the choice and evaluation phase for the Price treatment (left-hand panel) and the Rank treatment (center panel). The right-hand panel presents violin plots for the $-bet/P-bet ratios of fixations in the evaluation phases of both treatments.

Figure 5: Heatmap for the evaluation phase (Treatment Price). Red spots represent the most visually salient areas of the screen. The least salient areas (dark blue spots) were eliminated from the heatmap for better visualization. Lotteries were evaluated individually and are presented here side-by-side for ease of comparison only. Below the lottery was the input field for the monetary evaluation (not part of AOIs for the analysis). Actual screenshots are depicted in the Appendix. The figure illustrates that, in this treatment, more attention was devoted to $-bets than to P-bets during monetary evaluation.

Indeed, in the Rank treatment (Figure 4, center panel), there were also significant differences in fixations between $-bets (20.79) and P-bets (17.74) in the ranking phase (WSR, N = 29, z = 4.508, p < .0001). Subjects fixated slightly more on $-bets (7.57) than on P-bets (7.17) in the choice phase, but, as in the Price treatment, the comparison was not significant (WSR, N = 29, z = 1.838, p = .0661). The individual-level difference in the average number of fixations between $-bets and P-bets was significantly different across phases (WSR, N = 29, z = 4.098, p < .0001). That is, there are also significant differences in attention across lottery types in the ranking treatment, despite the absence of the monetary scale that the Compatibility Hypothesis invokes. To test the latter, we need to focus on the comparison across treatments.

Therefore, we consider the $-bet/P-bet ratios of fixations in the evaluation phases of both treatments. The ratio indicates how visual attention in each evaluation phase was allocated, i.e., ratios above 1 mean a stronger focus on $-bets and ratios below 1 a stronger focus on the P-bets. There was indeed a significant shift in attention toward the $-bet in the Price treatment (Figure 4, right-hand panel): the average $-bet/P-bet ratio was 1.30 in the Price treatment, and only 1.17 in the Rank treatment (MWW test, N = 59, z = 2.108, p = .0351).Footnote 13 These results confirm that monetary valuations (WTA) lead to a stronger attentional focus on the $-bets, the lotteries whose predominant feature is the large monetary outcome.

4 Results: Attention and Overpricing

In the previous section, we relied on comparisons across treatments to demonstrate the existence of an attentional shift toward outcomes brought about by a monetary focus during certain evaluation tasks. In this section, we report a complementary analysis examining the relation between overpricing of lotteries and gaze data as measured by fixations. The argument is that, by focusing attention on higher outcomes during monetary evaluations, lotteries with particularly large outcomes ($-bets) will become overpriced, resulting in a larger number of instances where the $-bet is valued above the P-bet but the latter is chosen. Thus, the objective here is to link overpricing in the Price treatment at the lottery level with attention as revealed by fixation data. However, we view this analysis as a proof of concept, since it is unlikely that the effects of attention on the exact, monetary overpricing of lotteries can be reduced merely to the number of fixations.Footnote 14

To accomplish this, a measure of overpricing is needed. For this reason, our design included an initial phase with 32 lottery pairs, independent of the preference reversal design, which served the purpose of providing an out-of-sample estimation of the subjects’ individual preferences (following Alós-Ferrer et al., 2020). The main goal of this estimation was to obtain individual utility functions and certainty equivalents which can be used to quantify the overpricing of lotteries in the evaluation phase of the Price treatment, and relate it to attention as measured by fixation data. In our opinion, the natural choice is to conduct this estimation out of sample, using (unrelated) binary choices. This is precisely what we chose to do, using an independent set of lotteries that covers the entire range relevant for the preference reversal experiment. In the first subsection below, we briefly describe the estimation. We then turn to a regression analysis using the overpricing measures derived from the estimated certainty equivalents as dependent variable and fixation data as regressors.

4.1 Utility Estimation

The choices in the first part of the experiment were used for the estimation of individual preferences out of sample, in the sense that the estimation relied exclusively on the choices in this first part, but was used as an external measure to analyze the data in the following two parts. The set of lotteries used in the first phase (see Appendix B) was constructed to maximize the precision of the estimated risk attitudes, relying on optimal design theory (Silvey, 1980) in the context of nonlinear (binary) models (Ford et al., 1992; Atkinson, 1996), and following Moffatt (2015).Footnote 15 We assume that the structure of errors follows an additive random utility model (e.g., Thurstone, 1927; Luce, 1959; McFadden, 2001) with normally-distributed noise. The estimation procedure employs well-established techniques as used in many recent contributions (Von Gaudecker et al., 2011; Conte et al., 2011; Moffatt, 2015). We refer the interested reader to Appendix A for a more detailed description of the estimation procedure.

For the functional form of the utilities, we assumed a constant relative risk aversion (CRRA) power utility function given by

u(x | r) = x^r,

with r > 0. The average of the estimated individual risk propensities in our data set is r̄ = 0.508 (median 0.440, SD 0.290).Footnote 16

4.2 Overpricing

For each of the 30 subjects that participated in the Price treatment, we collected pricing decisions for 120 lotteries in the evaluation phase. We now use the individual preferences estimated from the first part of the experiment to calculate for each individual the certainty equivalent (CE) for each lottery. For a lottery A and subject i, let EU_i(A) be the corresponding expected utility of A for i. The certainty equivalent is defined as CE_i(A) = u_i^{−1}(EU_i(A)), derived from subject i’s utility function u_i estimated in the first part of the experiment. The certainty equivalent is the formal translation of monetary evaluation questions, namely the amount of money for sure that leaves the decision maker indifferent between accepting it and playing out the lottery.

For each lottery A and each subject i, define overpricing by

O_i(A) = price_i(A) − CE_i(A),

where price_i(A) is the price stated by subject i for lottery A. That is, O_i(A) is the difference between the stated price and the certainty equivalent for that lottery, and is hence measured in monetary units (Euros) and thus fully comparable across lotteries and subjects. To examine overpricing differences in a straightforward way at the population level, consider the average overpricing for each lottery across all subjects in the Price treatment. Average overpricing for $-bets was € 4.414, compared to only € 1.181 for P-bets (MWW test, N = 120, z = 9.127, p < .0001). This result documents a systematic overpricing of $-bets, in line with the predictions of Tversky et al. (1990).
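Under the CRRA power utility of Section 4.1, the certainty equivalent and the overpricing measure can be computed as in the following Python sketch; the lottery, stated price, and risk parameter in the example are illustrative, not estimates from the data.

```python
def certainty_equivalent(p, x, r):
    """CE of a lottery paying x with probability p (and 0 otherwise)
    under the power utility u(x) = x**r: CE = (p * x**r) ** (1/r)."""
    expected_utility = p * x ** r
    return expected_utility ** (1.0 / r)

def overpricing(stated_price, p, x, r):
    """O_i(A): stated price minus the model-implied certainty
    equivalent, measured in Euros."""
    return stated_price - certainty_equivalent(p, x, r)

# Hypothetical $-bet: € 36 with probability 0.25, priced at € 12 by a
# subject with estimated r = 0.5, so CE = (0.25 * 6) ** 2 = 2.25.
ce = certainty_equivalent(0.25, 36.0, 0.5)   # 2.25
o = overpricing(12.0, 0.25, 36.0, 0.5)       # 12 − 2.25 = 9.75
```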

Our design also allowed us to relate overpricing to visual attention by comparing the quantities O i(A) to visual fixations in panel regressions. Of course, this paints an incomplete picture, since the effects of attention on elicited prices are likely to be more subtle than a direct, linear relation between number of fixations on a lottery and reported price for that lottery. However, a significant effect would serve as a further direct demonstration of the link between visual attention and overpricing.

Table 2 reports random effects panel regressions for overpricing, using the number of visual fixations on outcomes and probabilities of the corresponding lottery as regressors. The regression makes use of the individual, trial-level data for all subjects in the Price treatment, i.e., 60 different lotteries of each type ($-bets and P-bets) × 30 subjects. For simplicity, we analyzed the two types of lotteries separately, with Models 1–3 focusing on $-bets and Models 4–6 focusing on P-bets.

Table 2: Random Effects Panel Regression of Overpricing on Fixations.

Standard errors in parentheses

* p < 0.05

** p < 0.01

*** p < 0.001

Model 1 regresses normalized overpricing for $-bets on the number of fixations on outcomes. The coefficient is positive and highly significant (p = .0049), confirming the attentional effect, that is, increased attention on the high outcomes of $-bets is associated with larger overpricing. The model suggests that each fixation is directly responsible for an increase of 3.4 Eurocents in the evaluation of $-bets, relative to the true certainty equivalent. The average number of fixations on the outcomes of $-bets for the Price treatment was 8.96, thus the results suggest that, roughly speaking, the direct effect of fixations on $-bets accounts for around 8.96 × 3.4 ≈ 30.46 Eurocents per lottery. This is a very modest effect compared to the actual magnitude of overpricing for $-bets, suggesting that (unsurprisingly) overpricing cannot be mechanically reduced to an additive effect on the price each time that the lottery is attended to. However, the very existence of the effect serves as an additional, basic proof of concept substantiating the link between visual attention and overpricing.

Model 2 adds the number of fixations on probabilities, which is also significant. Importantly, visual attention on outcomes remains positive and significant (p = .0136). That is, any kind of visual attention on $-bets is related to overpricing. Note that recent cognitive models such as the attentional drift-diffusion model mentioned in the introduction (Krajbich et al., 2010) essentially postulate that increased attention boosts valuations, which would naturally provide a link between increased attention on (any attribute of) $-bets and their overpricing.

Model 3 shows that the effects remain positive and significant when adding controls such as age, gender, and the numerical literacy test score. Taken together, Models 1–3 demonstrate the link between visual attention and overpricing for $-bets. Models 4–6 reproduce the same analysis for P-bets. In contrast to $-bets, fixations on outcomes or probabilities have no significant effect on overpricing, independently of the addition of further controls.

In summary, our design allowed us to estimate individual certainty equivalents with an out-of-sample estimation procedure and directly show that $-bets are more overpriced than P-bets. This also enables us to confirm the link between visual attention and overpricing, and reveals that increased attention (more fixations) on $-bets leads to higher overpricing for those lotteries.

5 Conclusion

The classic preference reversal phenomenon is historically one of the most important behavioral anomalies in the study of decision making under risk. It casts doubt on fundamental assumptions that underlie the analysis of human decisions. It has accordingly received considerable attention across the disciplines. One of the most important components of explanations of the phenomenon is that, if a monetary evaluation is asked for, the focus on a monetary scale produces an overpricing when the lottery involves a large monetary amount, resulting in an incorrect evaluation of long-shot options compared to moderate ones.

This argument entails an attentional component which can now be tested directly by means of visual attention data. We conducted an experiment with two treatments, one containing a standard “pricing” evaluation which should shift attention toward outcomes, and another relying on an ordinal “ranking” evaluation which should not have such an attentional effect. The treatments correspond to standard experiments in the literature on the preference reversal phenomenon and have been shown to elicit this phenomenon and its reversal, respectively. By testing across treatments, we confirm that the monetary evaluation results in an attentional shift toward outcomes compared to probabilities, and toward long-shots compared to moderate lotteries. This provides direct evidence on the attentional foundations of preference reversals.

Although we kept the stimuli as comparable as possible, it should be remarked that implementing the standard experiments from the literature results in a layout difference (number of evaluated lotteries) across the treatments. Potentially, this could lead to confounds for the comparison of fixations across evaluation phases, and our results for this comparison should be interpreted carefully. However, we found the same effects when analyzing transitions (saccades) which start and finish on the same area of interest (outcome or probability). This alternative analysis naturally excludes additional transitions across lotteries in the Rank treatment. Overall, our results comparing treatments concur with the within-treatment analysis showing increased attention on outcomes compared to probabilities for the pricing treatment (confirming previous results of Kim et al., 2012), and showing greater attention placed on probabilities than on outcomes for both phases of the Rank treatment. When restricted to differences in evaluations, our results are also in alignment with those of Rubaltelli et al. (2012), who showed that subjects fixated more on outcomes than on probabilities in a pricing task, but that the difference vanished for an abstract attractiveness rating.

Additionally, by enriching the experiment with an independent block of lottery choices, we are able to estimate utilities and certainty equivalents out of sample, and hence quantify overpricing for each lottery and each subject in the treatment using pricing evaluations. This enables a panel-regression analysis confirming that increased visual fixations on the long-shot lotteries results in increased overpricing of those, while such an effect is absent for the valuations of moderate lotteries.

Together with previous contributions such as Kim et al. (2012) and Rubaltelli et al. (2012), our evidence suggests that attentional shifts due to evaluations employing a monetary scale (pricing) are at the root of the classic preference reversal phenomenon. More generally, our results demonstrate that the analysis of behavioral anomalies in decisions under risk can greatly benefit from explicitly taking the role of attention into account. We suggest that future research in decision making should consider attentional aspects (as well as possible bottom-up visual factors) even when relying on well-established behavioral tasks.

Appendix A: Description of RUM Estimation

We now describe the details of the estimation procedure used in the main text, which follows the approach described in Moffatt (2015), Chapter 13. We index the N = 59 subjects by i = 1,…,N, and the T = 32 trials used for utility estimation by t = 1,…,T. In trial t, subjects faced the binary choice between A_t = (p_t, x_t), which pays x_t with probability p_t and zero otherwise, and B_t = (q_t, y_t), which pays y_t with probability q_t and zero otherwise. We assume the following constant relative risk aversion (CRRA) utility function

(1) u(x | r) = x^r,

with r > 0. Under the assumption of Expected Utility maximization, subject i with utility function u(x | r_i) chooses A_t over B_t if the difference in expected utilities is positive, that is,

(2) p_t u(x_t | r_i) − q_t u(y_t | r_i) > 0.

Following a standard Random Utility Model (RUM), we postulate normally-distributed noise. That is, each subject is characterized by a fixed risk parameter r_i, but utility is perturbed by an error term ε_it ∼ N(0, σ²) with σ² > 0. Thus, A_t is chosen if

(3) p_t x_t^{r_i} − q_t y_t^{r_i} + ε_it > 0.

Define the binary choice indicator for trial t as γ_it = 1 if subject i chose A_t, and γ_it = 0 otherwise.

Then the probability of a choice conditional on the risk parameter r_i is given by

(4) Pr(γ_it = 1 | r_i) = Φ((p_t x_t^{r_i} − q_t y_t^{r_i}) / σ),

where Φ is the standard normal cumulative distribution function.
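The probit choice probability in equation (4) can be sketched in Python using only the standard library (the parameter values in the example are illustrative):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF Φ(z), via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def choice_probability(p, x, q, y, r, sigma):
    """Pr(choose A_t = (p, x) over B_t = (q, y)) for a subject with
    risk parameter r and noise scale sigma, as in equation (4)."""
    delta_eu = p * x ** r - q * y ** r
    return std_normal_cdf(delta_eu / sigma)

# With identical lotteries the expected-utility difference is zero, so
# the choice probability is exactly 1/2:
prob_equal = choice_probability(0.5, 4.0, 0.5, 4.0, 0.5, 1.0)  # 0.5
```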

To account for individual heterogeneity, we assume that the risk parameter is distributed over the population and we estimate the parameters of this distribution (e.g., Harless & Camerer, 1994; Moffatt, 2005; Moffatt, 2015; Harrison & Rutstrom, 2008; Bellemare et al., 2008; Von Gaudecker et al., 2011; Conte et al., 2011). This approach greatly reduces the degrees of freedom compared to individual-level estimates, avoiding possible overfitting problems (see Conte et al., 2011, for a more detailed discussion). In particular, we assume that the individual risk attitudes in the population are distributed log-normally in our subject pool according to

ln r_i ∼ N(µ, η²).

Hence, the log-likelihood of a sample given by the matrix Γ = (γ_it) consisting of T trials and N subjects is

(5) log L(µ, η, σ | Γ) = Σ_{i=1}^{N} log ∫_0^∞ [ Π_{t=1}^{T} Φ((p_t x_t^r − q_t y_t^r)/σ)^{γ_it} (1 − Φ((p_t x_t^r − q_t y_t^r)/σ))^{1−γ_it} ] f(r | µ, η) dr,

where f(r | µ, η) is the density function of the risk parameter r.

In order to evaluate the integral in (5) we use the method of maximum simulated likelihood (MSL) (see Train, 2003, for details), which approximates the integral above by an average using Halton draws (Halton, 1960; Moffatt, 2015).
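As an illustration of this step, the following Python sketch generates Halton draws and averages a subject-level likelihood over lognormal risk-parameter draws. The subject_likelihood callable is a placeholder for the product of choice probabilities in (5); this is a sketch of the approximation, not the actual estimation code.

```python
from math import exp
from statistics import NormalDist

def halton(n, base=2):
    """First n points of the base-`base` Halton sequence in (0, 1)."""
    points = []
    for i in range(1, n + 1):
        fraction, value, k = 1.0, 0.0, i
        while k > 0:
            fraction /= base
            value += fraction * (k % base)
            k //= base
        points.append(value)
    return points

def simulated_likelihood(subject_likelihood, mu, eta, n_draws=100):
    """Approximate the integral over r in equation (5) for one subject:
    E_r[ prod_t Pr(choice_t | r) ] with ln r ~ N(mu, eta^2), averaging
    over Halton draws mapped through the lognormal quantile function."""
    normal = NormalDist()
    total = 0.0
    for u in halton(n_draws):
        r = exp(mu + eta * normal.inv_cdf(u))  # lognormal draw from u in (0,1)
        total += subject_likelihood(r)
    return total / n_draws

# Sanity check: a constant likelihood of 1 integrates to 1.
approx = simulated_likelihood(lambda r: 1.0, mu=0.0, eta=0.5)
```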

Applying maximum likelihood to the resulting approximation yields the estimates (µ̂, η̂, σ̂). Given those estimates, we compute the posterior expectation of each subject’s risk attitude conditional on their T choices, and obtain

u(x | r̂_i)

(with r̂_i = E[r | γ_i1, …, γ_iT]) as the estimated utility function of subject i.

Appendix B: List of Lotteries

Table B1 contains the 32 lottery pairs used for the utility estimation in the first part. Table B2 contains the 4 lottery pairs involving a dominated lottery, which were used to check for violations of dominance in the first part. Table B3 contains all 60 lottery pairs ($-bets and P-bets) used for the preference reversal experiment in the second and third parts.

Table B1. Lottery pairs used for the utility estimation, first part.

Prob = Probability, Outc = Outcome, EV = Expected Value.

Table B2. Lottery pairs with a dominated lottery, first part.

Prob = Probability, Outc = Outcome, EV = Expected Value.

Table B3. (P,$) lottery pairs used in the evaluation (second part) and choice (third part) phases.

Prob = Probability, Outc = Outcome, EV = Expected Value.

Appendix C: Translated Instructions

[These are the written instructions given to subjects before the experiment. The original instructions were in German. Text in brackets […] was not displayed to subjects.]

General Instructions

Welcome! In this experiment you will be asked to make a series of decisions that will determine your earnings at the end of the experiment. The total duration of the experiment is about 1 hour. If you have a question, please let us know and we will answer your question. It is important that you read the instructions carefully before you make your decisions.

We now explain the general course of the experiment. The experiment consists of three parts. In each part you have to make multiple decisions. At the end of the experiment you will be asked to answer a questionnaire.

In each part, you can earn money. How much money you earn will depend on your decisions in that part and on chance. Your earnings in one part of the experiment are independent of your earnings and decisions in the other parts. Your earnings in each part will be added up and you will be paid the total amount anonymously and in cash at the end of the experiment. In addition to this amount you will receive € 4 for your participation in the experiment.

Below you will find further general information for the experiment. The specific instructions for each part will be shown on screen directly before the beginning of that part.

Instructions: Lotteries

In the three parts of the experiment you will be asked to make decisions about lotteries. Hence, we will now explain in detail what a lottery is.

A lottery consists of two potential outcomes, each of which will occur with a given probability. One of the two outcomes is always € 0 (zero). The other outcome will differ from lottery to lottery. If a lottery is played out, this means that you will receive exactly one of the two possible outcomes (in Euro).

In the experiment lotteries will be represented by tables as in the example below. The bottom cell shows the probability of the corresponding outcome in the top cell. The remaining probability, with which the outcome of € 0 occurs, will not be displayed.

Example: The table depicted above is an example of how we present a lottery. In this example, the lottery pays € 10 with a probability of 75%. Accordingly, the lottery pays € 0 with a probability of 25%. The second outcome is always € 0 and occurs with the remaining probability. Please note that this information is not repeated numerically on screen.

If a lottery is played out, this means that it will pay exactly one of the two outcomes. In the example above, the lottery pays € 10 with a probability of 75% and € 0 with the remaining probability of 25%. Please note that the lottery shown above is only an example. The lotteries in the experiment will have different outcomes and probabilities. If you have a question, please raise your hand. If you have no further questions, you may proceed to the comprehension questions on the next page.
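The play-out rule described above can be sketched in code (a minimal illustration; the function name is hypothetical and not part of the experiment software):

```python
import random

def play_out(outcome, probability):
    """Play out a two-outcome lottery: pay `outcome` euro with the given
    probability (in percent), and 0 euro with the remaining probability."""
    return outcome if random.random() < probability / 100 else 0.0

# The example lottery from the instructions: 10 euro with 75%, 0 euro otherwise.
payoff = play_out(10.0, 75)
```

Over many repetitions, `play_out(10.0, 75)` pays € 10 in roughly 75% of the draws and € 0 in the remainder.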

Comprehension questions: Lotteries

Below you see examples of two lotteries, similar to the ones you will face later on in the experiment. Please note that these lotteries are only examples.

Please answer the following comprehension questions:

  1. What is the probability that Lottery A pays € 10?

  2. What is the probability that Lottery B pays € 0?

  3. Which amount does Lottery A pay with a probability of 25%?

  4. Which amount does Lottery B pay with a probability of 55%?

Once you have answered all comprehension questions, please raise your hand. An experimenter will then check your answers.

Translated onscreen instructions

[These are the instructions for each part, which were presented separately on screen, at the beginning of each part. The original instructions were in German. Text in brackets […] was not displayed to subjects.]

Welcome to this economic experiment. Thank you for supporting our research. Please note the following rules:

  1. If you have questions, please raise your hand.

  2. Please refrain from using any features of the computer that are not part of the experiment.

Instructions for part 1

Your decisions: In this part of the experiment you will be presented with a series of lottery pairs. Your task is to choose one of the two lotteries from each pair.

On the screen you will see a lottery pair (consisting of two lotteries) represented by two tables. One of the lotteries will be shown on the left and the other will be shown on the right. You choose one of the lotteries by pressing the left or right arrow key on your keyboard. These keys are marked with a yellow sticker. To choose the lottery on the left, press the left arrow key “←.” To choose the lottery on the right, press the right arrow key “→.” Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

There are no wrong or correct decisions. When you choose one of the lotteries, this simply shows that you prefer to play this lottery over the other lottery.

After you have made your decision, you will see the next lottery pair. In part 1 you will be presented with a total of 36 lottery pairs. After you have made a decision for each of the pairs, this part ends and we will start with the next part of the experiment.

Your earnings for part 1

After you have made a decision for each of the lottery pairs, the computer will randomly select one of the 36 lottery pairs. The computer then checks which of the two lotteries you have chosen for this randomly selected pair. The lottery you have chosen will be played out. The outcome of the lottery determines your earnings for part 1 of the experiment.

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please let us know.

Instructions for part 2 [Price treatment]

Your decisions: In this part of the experiment you will be presented with a series of lotteries. When a lottery is presented to you on screen, you may simply assume that you own that lottery and are asked to sell it.

Your task is to state the lowest price at which you are still willing to sell the presented lottery instead of keeping the lottery and playing it out.

There is no wrong or correct answer when stating the lowest price at which you are still willing to sell the lottery. When you enter your selling price for the lottery, simply ask yourself: “Is this really the lowest price at which I am still willing to sell the lottery instead of playing it?” Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

Please enter the lowest price at which you are still willing to sell the lottery in the form “EURO.CENTS.” Please note that you cannot enter a selling price that is larger than the highest outcome of the lottery.

After you have entered your selling price, the next lottery will be presented. In this part of the experiment you will see a total of 120 lotteries, presented in 20 rounds of 6 lotteries each. All rounds are independent. Once you have entered a selling price for each lottery in a round, the next round will start. Once you are done with all 20 rounds, you can continue with the next part of the experiment.

Your earnings for part 2 [Price treatment]

After you have entered your lowest selling price for each of the lotteries, the computer will randomly draw one of the 20 rounds. From this round, the computer will then randomly select two of the six lotteries. The computer then checks for which of the two lotteries you have entered the higher selling price (in case both prices are the same, the computer will randomly select one of the two lotteries with equal probability). This lottery will be played out and the outcome of that lottery determines your earnings for part 2 of the experiment.
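This payment rule can be sketched as follows (a minimal illustration; the function name and data layout are hypothetical and do not come from the experiment software):

```python
import random

def part2_price_payoff(selling_prices, rng=None):
    """Determine part-2 earnings in the Price treatment.
    `selling_prices` is a list of 20 rounds; each round is a list of six
    (lottery, price) pairs, where a lottery is an (outcome, prob_percent) tuple."""
    rng = rng or random.Random()
    round_lotteries = rng.choice(selling_prices)   # randomly draw one round
    a, b = rng.sample(round_lotteries, 2)          # randomly draw two of its lotteries
    if a[1] == b[1]:
        chosen = rng.choice([a, b])                # equal prices: pick one at random
    else:
        chosen = a if a[1] > b[1] else b           # otherwise: higher selling price wins
    outcome, prob = chosen[0]                      # play out the chosen lottery
    return outcome if rng.random() < prob / 100 else 0.0
```

The comparison by stated selling price is what makes overpricing of one lottery relative to another payoff-relevant in this treatment.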

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please raise your hand and remain seated.

Instructions for part 2 [Rank treatment]

Your decisions: In this part of the experiment you will be presented with a series of lotteries. When a lottery is presented to you on screen, you may simply assume that you own that lottery and may play it.

Your task is to order different lotteries according to your preference, that is, according to how much you would like to play them. In each round you will see six different lotteries on screen. Please order the lotteries as follows:

  • First, choose your first-ranked lottery, that is, the one of the six lotteries that you would like to play out the most.

  • Second, choose your second-ranked lottery, that is, the second one that you would like to play out the most.

  • Third, choose your third-ranked lottery, that is, the third one that you would like to play out the most.

  • Fourth, choose your fourth-ranked lottery, that is, the fourth one that you would like to play out the most.

  • Fifth, choose your fifth-ranked lottery, that is, the fifth one that you would like to play out the most.

  • Sixth, choose your sixth-ranked lottery, that is the one that you would like to play out the least.

To select a lottery simply click on the button below the lottery that you want to select. As soon as you assign a rank to a lottery, the corresponding rank (from 1 to 6) will be shown below that lottery.

In case you want to change the rank of the lotteries, please press the “Reset” button. This resets the ranking. After you have ranked the lotteries from rank 1 to rank 6, please press the “Continue” button to confirm your ranking and proceed to the next round.

Please note that there is no wrong or correct ranking. When ranking the lotteries, simply ask yourself which lottery you would like to play out the most, which one you would like the second, and so on. Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

In this part of the experiment you will see a total of 120 lotteries, presented in 20 rounds of 6 lotteries each. All rounds are independent, that is, you will have to submit 20 rankings of 6 lotteries by assigning ranks from 1 to 6. Once you are done with all 20 rounds, you can continue with the next part of the experiment.

Your earnings for part 2 [Rank treatment]

After you have ranked all lotteries, the computer will randomly draw one of the 20 rounds. From this round, the computer will then randomly select two of the six lotteries. The computer will then check which of the two lotteries you have ranked higher (that is, which one you want to play out more). This lottery will be played out and the outcome of that lottery determines your earnings for part 2 of the experiment.
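The Rank-treatment payment rule can be sketched analogously (a minimal illustration; the function name and data layout are hypothetical and do not come from the experiment software):

```python
import random

def part2_rank_payoff(rankings, rng=None):
    """Determine part-2 earnings in the Rank treatment.
    `rankings` is a list of 20 rounds; each round is a list of six
    (lottery, rank) pairs with rank 1 = most preferred, where a lottery
    is an (outcome, prob_percent) tuple."""
    rng = rng or random.Random()
    round_lotteries = rng.choice(rankings)   # randomly draw one round
    a, b = rng.sample(round_lotteries, 2)    # randomly draw two of its lotteries
    chosen = a if a[1] < b[1] else b         # lower rank number = ranked higher
    outcome, prob = chosen[0]                # play out the higher-ranked lottery
    return outcome if rng.random() < prob / 100 else 0.0
```

The only difference from the Price treatment's payment rule is that the two drawn lotteries are compared by ordinal rank rather than by a stated monetary price, so no tie-breaking is needed.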

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please raise your hand and remain seated.

Instructions for part 3

Your decisions: In this part of the experiment you will be presented with a series of lottery pairs. Similarly to part 1, your task is to choose one of the two lotteries from each pair. Please note that the lottery pairs are different from part 1.

On the screen you will see a lottery pair (consisting of two lotteries) represented by two tables. One of the lotteries will be shown on the left and the other will be shown on the right. You can choose one of the lotteries by pressing the left or right arrow key on your keyboard. These keys are marked with a yellow sticker. To choose the lottery on the left, press the left arrow key “←.” To choose the lottery on the right, press the right arrow key “→.” Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

There are no wrong or correct decisions. When you choose one of the lotteries, this simply shows that you prefer to play this lottery over the other lottery.

After you have made your decision, you will see the next lottery pair. In part 3 you will be presented with a total of 60 lottery pairs. After you have made a decision for each of the pairs, this part ends and you can start the questionnaire.

Your earnings for part 3

After you have made a decision for each of the lottery pairs, the computer will randomly select one of the 60 lottery pairs. The computer will then check which of the two lotteries you have chosen for this randomly selected pair. The lottery you have chosen will be played out. The outcome of the lottery determines your earnings for part 3 of the experiment.

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please raise your hand and remain seated.

Appendix D Screenshots

The following pictures depict screenshots from the different phases. The pictures also include dashed frames, which subjects did not see; these are added only to represent the Areas of Interest (AOIs) used for classifying the number of fixations.

Figure D.1: Example screenshot of the lottery choice phase (part 1 and 3).

Note: The dashed frames around the outcomes and probabilities are visualizations of the areas of interest and were not visible to subjects.

Figure D.2: Example screenshot of the lottery evaluation phase in the Price treatment (part 2).

Note: The dashed frames around the outcome and probability are visualizations of the areas of interest and were not visible to subjects.

Figure D.3: Example screenshot of the lottery evaluation phase in the Rank treatment (part 2).

Note: The dashed frames around the outcomes and probabilities are visualizations of the areas of interest and were not visible to subjects.

Footnotes

We thank Andreas Gloeckner and two anonymous referees for helpful comments. The authors gratefully acknowledge financial support from the German Research Foundation (DFG) under project Al-1169/4, part of the Research Unit “Psychoeconomics” (FOR 1882).


1 One additional subject had to be excluded from the analysis due to poor eye-tracking data quality. An additional measurement could not be completed because the subject took an extremely long time for her decisions and exceeded the allocated time slot.

2 See Appendix B for a complete list of all lottery pairs used in each phase of the experiment.

3 No subject chose a strictly dominated lottery out of the pairs.

4 For the sake of clarity, we refer to the six lotteries presented simultaneously in the Rank treatment as a block also for the Price treatment, even though in the latter they were presented individually and sequentially.

5 The preference reversal phenomenon occurs independently of whether the choice phase precedes or follows the evaluation phase (e.g., Alós-Ferrer et al., 2016).

6 Following the literature, repeated fixations within the same AOI were still counted as different fixations, i.e. not merged into one fixation.

7 In the Price treatment, in 129 out of 1,800 cases subjects gave both lotteries the same WTA, indicating indifference. Excluding these observations does not change the result quantitatively.

8 The rate of predicted reversals is the proportion of pairs where the $-bet was evaluated higher than the P-bet conditional on the P-bet being chosen. The rate of unpredicted reversals is the proportion of pairs where the P-bet was evaluated higher than the $-bet, conditional on the $-bet being chosen. In the tests below, the number of observations sometimes differs as reversal rates cannot be computed if a subject never chose the corresponding type of lotteries.
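The two reversal rates defined in this footnote can be computed as in the following sketch (the function and field names are hypothetical; ties in evaluations are ignored for simplicity):

```python
def reversal_rates(pairs):
    """Compute predicted and unpredicted reversal rates for one subject.
    `pairs` is a list of dicts with boolean fields:
      'chose_p'       - True if the P-bet was chosen over the $-bet
      'dollar_higher' - True if the $-bet received the higher evaluation
    Returns (predicted_rate, unpredicted_rate); a rate is None when the
    subject never chose the corresponding type of lottery."""
    p_chosen = [p for p in pairs if p['chose_p']]        # condition: P-bet chosen
    d_chosen = [p for p in pairs if not p['chose_p']]    # condition: $-bet chosen
    predicted = (sum(p['dollar_higher'] for p in p_chosen) / len(p_chosen)
                 if p_chosen else None)
    unpredicted = (sum(not p['dollar_higher'] for p in d_chosen) / len(d_chosen)
                   if d_chosen else None)
    return predicted, unpredicted
```

The `None` branches mirror the footnote's remark that reversal rates cannot be computed when a subject never chose the corresponding type of lottery.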

9 An alternative measure of attention is the overall duration of fixations. Both fixations and overall duration are often reported in eye-tracking studies and yield similar conclusions in our case.

10 Unsurprisingly, there are no significant differences between outcome/probability ratios in the choice phases across treatments (Price treatment .85, Rank treatment .85; MWW test, N = 59, z = −0.227, p = .8201).

11 The size of the AOIs used in the analyses was also always identical for all phases and treatments. However, the boxes around the actual numbers were slightly smaller in the Ranking phase. The distance between the two AOIs within a lottery was always at least 65 pixels, large enough to prevent fixation misallocation.

12 We thank an anonymous reviewer for this observation.

13 Of course, there are no significant differences between $-bet/P-bet ratios in the choice phases across treatments (Price treatment 1.00, Rank treatment 1.04; MWW test, N = 59, z = −1.228, p = .2194).

14 Random effects panel probit regressions on the likelihood of (predicted) reversals revealed no significant effects of outcome/probability or $-bet/P-bet fixation ratios.

15 We chose to estimate subjects’ risk attitudes from a sequence of lottery choices instead of relying on alternatives such as the multiple price list (MPL) method (Holt & Laury, 2002) because the literature has pointed out a number of difficulties with the latter, e.g., imposing a correlation structure on the choice sequence (see, e.g., Andersen et al., 2006), or the compromise effect (Beauchamp et al., 2019).

16 An agent with a risk propensity equal to the average in our sample would have a certainty equivalent of about $2.56 when facing a lottery paying $10 with 50% probability and zero otherwise.

References

Alós-Ferrer, C., Buckenmaier, J., & Garagnani, M. (2020). Stochastic Choice and Preference Reversals. Working Paper, University of Zurich.
Alós-Ferrer, C., Granić, D.-G., Kern, J., & Wagner, A. K. (2016). Preference Reversals: Time and Again. Journal of Risk and Uncertainty, 52(1), 65–97.
Alós-Ferrer, C., Jaudas, A., & Ritschel, A. (2019b). Effortful Bayesian Updating: A Pupil-dilation Study. Working Paper, University of Zurich.
Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2006). Elicitation Using Multiple Price List Formats. Experimental Economics, 9(4), 383–405.
Atkinson, A. C. (1996). The Usefulness of Optimum Experimental Designs. Journal of the Royal Statistical Society, 51(1), 59–76.
Bateman, I., Day, B., Loomes, G., & Sugden, R. (2007). Can Ranking Techniques Elicit Robust Values? Journal of Risk and Uncertainty, 34(1), 49–66.
Bateman, I. J., et al. (2002). Economic Valuation with Stated Preference Techniques: A Manual. Cheltenham, United Kingdom: Edward Elgar.
Beauchamp, J. P., Benjamin, D. J., Laibson, D. I., & Chabris, C. F. (2019). Measuring and Controlling for the Compromise Effect when Estimating Risk Preference Parameters. Experimental Economics, (pp. 1–31).
Bellemare, C., Kröger, S., & van Soest, A. (2008). Measuring Inequity Aversion in a Heterogeneous Population Using Experimental Decisions and Subjective Probabilities. Econometrica, 76(4), 815–839.
Butler, D. J. & Loomes, G. (2007). Imprecision as an Account of the Preference Reversal Phenomenon. American Economic Review, 97(1), 277–297.
Casey, J. T. (1991). Reversal of the Preference Reversal Phenomenon. Organizational Behavior and Human Decision Processes, 48(2), 224–251.
Casey, J. T. (1994). Buyers’ Pricing Behavior for Risky Alternatives: Encoding Processes and Preference Reversals. Management Science, 40(6), 730–749.
Conte, A., Hey, J. D., & Moffatt, P. G. (2011). Mixture Models of Choice Under Risk. Journal of Econometrics, 162(1), 79–88.
Cubitt, R. P., Munro, A., & Starmer, C. (2004). Testing Explanations of Preference Reversal. Economic Journal, 114(497), 709–726.
De los Santos, B., Hortaçsu, A., & Wildenbeest, M. R. (2012). Testing Models of Consumer Search Using Data on Web Browsing and Purchasing Behavior. American Economic Review, 102(6), 2955–2980.
Devetag, G., Di Guida, S., & Polonio, L. (2016). An Eye-Tracking Study of Feature-Based Choice in One-Shot Games. Experimental Economics, 19(1), 177–201.
Fischer, G. W., Carmon, Z., Ariely, D., & Zauberman, G. (1999). Goal-Based Construction of Preferences: Task Goals and the Prominence Effect. Management Science, 45(8), 1057–1075.
Ford, I., Torsney, B., & Wu, C. J. (1992). The Use of a Canonical Form in the Construction of Locally Optimal Designs for Non-Linear Problems. Journal of the Royal Statistical Society, 54(2), 569–583.
Glöckner, A., Fiedler, S., Hochman, G., Ayal, S., & Hilbig, B. E. (2012). Processing Differences Between Descriptions and Experience: A Comparative Analysis Using Eye-tracking and Physiological Measures. Frontiers in Psychology, 3(173), 1–15.
Glöckner, A. & Herbold, A.-K. (2011). An Eye-tracking Study on Information Processing in Risky Decisions: Evidence for Compensatory Strategies Based on Automatic Processes. Journal of Behavioral Decision Making, 24(1), 71–98.
Goldstein, W. M. & Einhorn, H. J. (1987). Expression Theory and the Preference Reversal Phenomena. Psychological Review, 94(2), 236–254.
Greiner, B. (2015). Subject Pool Recruitment Procedures: Organizing Experiments with ORSEE. Journal of the Economic Science Association, 1, 114–125.
Grether, D. M. & Plott, C. R. (1979). Theory of Choice and the Preference Reversal Phenomenon. American Economic Review, 69(4), 623–638.
Halton, J. H. (1960). On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numerische Mathematik, 2(1), 84–90.
Harless, D. W. & Camerer, C. F. (1994). The Predictive Utility of Generalized Expected Utility Theories. Econometrica, 62(6), 1251–1289.
Harrison, G. & Rutström, E. (2008). Experimental Evidence on the Existence of Hypothetical Bias in Value Elicitation Methods. In Plott, C. R. & Smith, V. L. (Eds.), Handbook of Experimental Economics Results, volume 1, Part 5, chapter 81 (pp. 752 ff.).
Holt, C. A. & Laury, S. K. (2002). Risk Aversion and Incentive Effects. American Economic Review, 92(5), 1644–1655.
Huber, J., Payne, J. W., & Puto, C. (1982). Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis. Journal of Consumer Research, 9(1), 90–98.
Kim, B. E., Seligman, D., & Kable, J. W. (2012). Preference Reversals in Decision Making under Risk are Accompanied by Changes in Attention to Different Attributes. Frontiers in Neuroscience, 6(109), 1–10.
Knoepfle, D. T., Wang, J. T.-Y., & Camerer, C. F. (2009). Studying Learning in Games Using Eye-Tracking. Journal of the European Economic Association, 7(2–3), 388–398.
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual Fixations and the Computation and Comparison of Value in Simple Choice. Nature Neuroscience, 13(10), 1292–1298.
Krajbich, I., Lu, D., Camerer, C., & Rangel, A. (2012). The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions. Frontiers in Psychology, 3(193), 1–18.
Krajbich, I. & Rangel, A. (2011). Multialternative Drift-Diffusion Model Predicts the Relationship Between Visual Fixations and Choice in Value-Based Decisions. Proceedings of the National Academy of Sciences, 108(33), 13852–13857.
Le Meur, O. & Baccino, T. (2013). Methods for Comparing Scanpaths and Saliency Maps: Strengths and Weaknesses. Behavior Research Methods, 45, 251–266.
Lichtenstein, S. & Slovic, P. (1971). Reversals of Preference Between Bids and Choices in Gambling Decisions. Journal of Experimental Psychology, 89(1), 46–55.
Lipkus, I. M., Samsa, G., & Rimer, B. K. (2001). General Performance on a Numeracy Scale Among Highly Educated Samples. Medical Decision Making, 21(1), 37–44.
Luce, R. D. (1959). Individual Choice Behavior: A Theoretical Analysis. New York: Wiley.
Ludwig, J., Jaudas, A., & Achtziger, A. (2020). The Role of Motivation and Volition in Economic Decisions: Evidence from Eye Movements and Pupillometry. Journal of Behavioral Decision Making, 33(2), 180–195.
McFadden, D. L. (2001). Economic Choices. American Economic Review, 91(3), 351–378.
Moffatt, P. G. (2005). Stochastic Choice and the Allocation of Cognitive Effort. Experimental Economics, 8(4), 369–388.
Moffatt, P. G. (2015). Experimetrics: Econometrics for Experimental Economics. London: Palgrave Macmillan.
Noguchi, T. & Stewart, N. (2014). In the Attraction, Compromise, and Similarity Effects, Alternatives are Repeatedly Compared in Pairs on Single Dimensions. Cognition, 132(1), 44–56.
Orquin, J. L. & Holmqvist, K. (2018). Threats to the Validity of Eye-movement Research in Psychology. Behavior Research Methods, 50, 1645–1656.
Peirce, J. W. (2007). PsychoPy – Psychophysics Software in Python. Journal of Neuroscience Methods, 162(1), 8–13.
Pettibone, J. C. (2012). Testing the Effect of Time Pressure on Asymmetric Dominance and Compromise Decoys in Choice. Judgment and Decision Making, 7(4), 513–523.
Polonio, L. & Coricelli, G. (2019). Testing the Level of Consistency Between Choices and Beliefs in Games Using Eye-Tracking. Games and Economic Behavior, 113, 566–586.
Polonio, L., Di Guida, S., & Coricelli, G. (2015). Strategic Sophistication and Attention in Games: An Eye-Tracking Study. Games and Economic Behavior, 94, 80–96.
Ratcliff, R. (1978). A Theory of Memory Retrieval. Psychological Review, 85, 59–108.
Ratcliff, R. & Rouder, J. N. (1998). Modeling Response Times for Two-Choice Decisions. Psychological Science, 9(5), 347–356.
Reutskaja, E., Nagel, R., Camerer, C. F., & Rangel, A. (2011). Search Dynamics in Consumer Choice under Time Pressure: An Eye-Tracking Study. American Economic Review, 101(2), 900–926.
Roberts, J. H. & Lattin, J. M. (1991). Development and Testing of a Model of Consideration Set Composition. Journal of Marketing Research, 28(4), 429–440.
Rubaltelli, E., Dickert, S., & Slovic, P. (2012). Response Mode, Compatibility, and Dual-processes in the Evaluation of Simple Gambles: An Eye-tracking Investigation. Judgment and Decision Making, 7(4), 427–440.
Salvucci, D. D. & Goldberg, J. H. (2000). Identifying Fixations and Saccades in Eye-tracking Protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (pp. 71–78). New York, NY, USA: Association for Computing Machinery.
Schmidt, U. & Hey, J. D. (2004). Are Preference Reversals Errors? An Experimental Investigation. Journal of Risk and Uncertainty, 29(3), 207–218.
Seidl, C. (2002). Preference Reversal. Journal of Economic Surveys, 16(5), 621–655.
Shadlen, M. N. & Kiani, R. (2013). Decision Making as a Window on Cognition. Neuron, 80, 791–806.
Shadlen, M. N. & Shohamy, D. (2016). Decision Making and Sequential Sampling from Memory. Neuron, 90, 927–939.
Silvey, S. D. (1980). Optimal Design: An Introduction to the Theory for Parameter Estimation, volume 1. New York: Chapman and Hall.
Simonson, I. (1989). Choice Based on Reasons: The Case of Attraction and Compromise Effects. Journal of Consumer Research, 16(2), 158–174.
Thurstone, L. L. (1927). A Law of Comparative Judgment. Psychological Review, 34, 273–286.
Train, K. E. (2003). Discrete Choice Methods with Simulation. New York: Cambridge University Press.
Tversky, A. (1972). Elimination by Aspects: A Theory of Choice. Psychological Review, 79(4), 281–299.
Tversky, A., Sattath, S., & Slovic, P. (1988). Contingent Weighting in Judgment and Choice. Psychological Review, 95(3), 371–384.
Tversky, A., Slovic, P., & Kahneman, D. (1990). The Causes of Preference Reversal. American Economic Review, 80(1), 204–217.
Tversky, A. & Thaler, R. H. (1990). Anomalies: Preference Reversals. Journal of Economic Perspectives, 4(2), 201–211.
Usher, M. & McClelland, J. L. (2001). The Time Course of Perceptual Choice: The Leaky, Competing Accumulator Model. Psychological Review, 108(3), 550–592.
Vadillo, M. A., Street, C. N. H., Beesley, T., & Shanks, D. R. (2015). A Simple Algorithm for the Offline Recalibration of Eye-tracking Data Through Best-fitting Linear Transformation. Behavior Research Methods, 47(4), 1365–1376.
Von Gaudecker, H.-M., Van Soest, A., & Wengström, E. (2011). Heterogeneity in Risky Choice Behavior in a Broad Population. American Economic Review, 101(2), 664–694.
Wang, J. T.-Y., Spezio, M., & Camerer, C. F. (2010). Pinocchio’s Pupil: Using Eyetracking and Pupil Dilation to Understand Truth Telling and Deception in Sender-Receiver Games. American Economic Review, 100(3), 984–1007.

Figure 1: Left: Proportion of $-bets preferred over the paired P-bets for both treatments and both phases. Right: Proportion of predicted and unpredicted reversals for both treatments.


Figure 2: Average number of fixations on outcomes and probabilities in the choice and evaluation phases, for the Price treatment (left-hand panel) and the Rank treatment (center panel). The right-hand panel presents violin plots for the outcome/probability ratios for the number of fixations in the evaluation phases of both treatments (one outlier outside the picture).


Figure 3: Heatmap for the choice phase (Price treatment). Red spots represent the most visually salient areas of the screen. The least salient areas (dark blue spots) were eliminated from the heatmap for better visualization. The heatmap is obtained by convolving the fixations (of all individuals and lotteries) with an isotropic bidimensional Gaussian function. The standard deviation of the Gaussian function was set according to Le Meur & Baccino (2013). In the actual choice screen, the lotteries were further apart and not labeled, and both the left-right position of lotteries and the top-bottom alignment of outcomes and probabilities were counterbalanced. Actual screenshots are depicted in the Appendix. The figure illustrates that, in general, more attention is devoted to probabilities than to outcomes. The analogous picture for the Rank treatment displays similar features for the choice phase.


Table 1: Random Effects Panel Regression of the (log-transformed) Outcome/Probability Fixation Ratios.


Figure 4: Number of Fixations on the $-bet and P-bet in the choice and evaluation phase for the Price treatment (left-hand panel) and the Rank treatment (center panel). The right-hand panel presents violin plots for the $-bet/P-bet ratios of fixations in the evaluation phases of both treatments.


Figure 5: Heatmap for the evaluation phase (Price treatment). Red spots represent the most visually salient areas of the screen. The least salient areas (dark blue spots) were eliminated from the heatmap for better visualization. Lotteries were evaluated individually and are presented here side-by-side for ease of comparison only. Below the lottery was the input field for the monetary evaluation (not part of the AOIs for the analysis). Actual screenshots are depicted in the Appendix. The figure illustrates that, in this treatment, more attention was devoted to $-bets than to P-bets during monetary evaluation.


Table 2: Random Effects Panel Regression of Fixations on Overpricing.


Supplementary material: Alós-Ferrer et al. supplementary material (File, 293.9 KB).