
Opportunity cost neglect: a meta-analysis

Published online by Cambridge University Press:  01 January 2025

Allegra Maguire
Affiliation:
JEDILAB, Division of Economics, Department of Management and Engineering, Linköping University, 581 83 Linköping, Sweden
Emil Persson
Affiliation:
JEDILAB, Division of Economics, Department of Management and Engineering, Linköping University, 581 83 Linköping, Sweden
Gustav Tinghög*
Affiliation:
JEDILAB, Division of Economics, Department of Management and Engineering, Linköping University, 581 83 Linköping, Sweden Department of Health, Medical and Caring Sciences, The National Center for Priority Setting in Health Care, Linköping University, 581 83 Linköping, Sweden

Abstract

In a seminal paper, Frederick et al. (J Consum Res 36:553–561, 2009) showed that people’s willingness to purchase a consumer good declined dramatically when opportunity costs were made more salient (Cohen’s d = 0.45–0.85). This finding suggests that people normally do not pay sufficient attention to opportunity costs and as a result make poorer and less efficient decisions, both in private and public domains. To critically assess the strength of opportunity cost neglect, we carried out a systematic review and a meta-analysis including published and non-published experimental work. In total, 39 experimental studies were included in the meta-analysis (N = 14,005). The analysis shows a robust significant effect (Cohen’s d = 0.22; p < 0.001) of opportunity cost neglect across different domains, albeit the effect is considerably smaller than what was originally estimated by Frederick et al. (2009). Our findings highlight the importance of meta-analyses and replications of initial findings.

Type
Original Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s) 2023

1 Introduction

Opportunity costs are the foregone benefits associated with the option not chosen when making a decision between different alternatives (Thaler, Reference Thaler1980). For example, the opportunity cost of a college education is the income and experiences foregone by not working, and the opportunity cost of a public health program is the health foregone by spending less money elsewhere in the health sector. If opportunity costs are neglected or not fully attended to at the point of decision making, not all relevant consequences of our choices are considered, and as a result, good or even superior alternatives may be overlooked. Ultimately this leads to a loss in welfare, and potentially even in human lives (Fetherstonhaugh et al., Reference Fetherstonhaugh, Slovic, Johnson and Friedrich1997). Beyond the consumer domain in particular, there are many cases where misallocation of resources could lead to bad but preventable outcomes for society: from healthcare policy to decisions that affect climate change, or even the decision to pursue a war. A central mechanism behind opportunity cost neglect is that people tend to be selective in their use of information, focusing on the most prominent feature of a decision (Legrenzi et al., Reference Legrenzi, Girotto and Johnson-Laird1993), which implies a tendency to restrict attention to what is explicitly represented in one’s mental model, the focal alternative (Erlandsson et al., Reference Erlandsson, Lindkvist, Lundqvist, Andersson, Dickert, Slovic and Västfjäll2020; Persson et al., Reference Persson, Erlandsson, Slovic, Västfjäll and Tinghög2022; Slovic, Reference Slovic1975; Tversky et al., Reference Tversky, Sattath and Slovic1988).

In experimental settings, opportunity cost neglect is usually inferred from the observation that participants’ propensity to choose a given option (e.g., buy a cell phone, invest time and effort in education, donate to a charity) changes when the alternative usage of a scarce resource is made salient. In the prototypical task introduced by Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009, Study 1), the proportion of participants who stated a willingness to purchase a special-priced DVD for $14.99 declined from 75% to 55% when people were reminded about the opportunity cost, i.e., that not buying the DVD implied that the $14.99 could be used for other purchases. The study by Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) was the first to demonstrate opportunity cost neglect in a behavioural experiment, and it showed that the magnitude of the bias was substantial: simply reminding people that money could have alternative uses decreased their willingness to purchase consumer products by almost 50% (from 73% to 37% in Study 2). Following Frederick et al. 
(Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009), several studies have used similar experimental paradigms to explore the existence of opportunity cost neglect in different decision-making domains, such as public policy (see Aharoni et al., Reference Aharoni, Kleider-Offutt, Brosnan and Watzek2018, Reference Aharoni, Kleider-Offutt and Brosnan2020 for an application in criminal punishment recommendations and Persson & Tinghög, Reference Persson and Tinghög2020a, Reference Persson and Tinghög2020b for the health domain), charitable giving (Moche et al., Reference Moche, Erlandsson, Andersson and Västfjäll2020), intertemporal choice (Read et al., Reference Read, Olivola and Hardisty2017; Spiller, Reference Spiller2019) and consumer choice (Greenberg & Spiller, Reference Greenberg and Spiller2016; Moche et al., Reference Moche, Erlandsson, Andersson and Västfjäll2020; Plantinga et al., Reference Plantinga, Krijnen, Zeelenberg and Breugelmans2018; Read et al., Reference Read, Olivola and Hardisty2017; Spiller, Reference Spiller2019; Weiss & Kivetz, Reference Weiss and Kivetz2019; Zhang et al., Reference Zhang, Ji and Li2017). Nonetheless, arguments have been put forward that opportunity costs might be overestimated, rather than neglected, when making choices under external constraints (Weiss & Kivetz, Reference Weiss and Kivetz2019). This is because people may exaggerate the benefits associated with alternative non-salient options as a consequence of loss aversion. Moreover, some studies have found no opportunity cost neglect, or even the opposite effect. For example, Persson and Tinghög (Reference Persson and Tinghög2020b) found that older people were more likely to spend when reminded about the opportunity cost in consumer choice, suggesting that age might be a moderating factor of the effect.

Despite the importance of opportunity cost neglect from both a theoretical and a practical point of view, no study to date has systematically and critically addressed the strength of the effect, its occurrence across domains, and what boundary conditions moderate it. Therefore, we carried out both a systematic review and a meta-analysis to provide a critical and extensive overview of the phenomenon.

2 Methods

The study was preregistered. The preregistration together with data and analysis codes can be accessed via this repository https://osf.io/p4yjm/. Data collection started on 2020-08-04.

A systematic review was carried out, searching the literature for experimental studies investigating the effect on choice of making alternatives and their forgone benefits explicit. We also made open calls and contacted researchers for unpublished studies. In the meta-analysis, we included studies that had two or more conditions equivalent in content, except that one condition made the alternative usage of a scarce resource more salient. As specified in the preregistration, studies were only included in the meta-analysis if they used a between-subjects design and reported statistics for the strength of the effect. We had initially planned to carry out a sensitivity analysis to evaluate the appropriateness of within-subjects designs (for future studies, based on a comparison with the data we collected from between-subjects studies), but the number of studies using this procedure was too small. Only studies in English were included.

2.1 The original experimental paradigm

Figure 1 shows the original paradigm to study opportunity cost neglect introduced by Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009). In this paradigm, participants see a brief scenario that describes a possible purchase situation, and they are randomly assigned to either a control condition or an experimental condition. In the control condition, participants decide whether to buy a good or not. In the experimental condition, participants face the same decision, but opportunity costs are made more salient by adding “keep the money for other purchases” to the “do not buy” alternative.

Fig. 1 Original experimental paradigm. Scenario from Study 1A in Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009)

Opportunity cost neglect is then commonly inferred if a lower proportion of participants say they would buy the good (by selecting alternative A) in the experimental condition than in the control condition. This paradigm thus operates under the assumption that if people fully consider opportunity costs at the point of decision-making, adding the reminder (in the experimental condition) should have no effect on decisions. In contrast, if opportunity costs are routinely neglected, adding the reminder should make some people less willing to buy the good, presumably because they consider alternative uses of the available money more carefully.

Most subsequent studies on opportunity cost neglect have used some close variants of this original paradigm. Typically, the good and context in question are adapted but the main experimental manipulation (control vs. experimental condition via a money reminder) is retained. For example, Plantinga et al. (Reference Plantinga, Krijnen, Zeelenberg and Breugelmans2018) used a tablet, a movie ticket, and a concert ticket (each in a separate yes/no decision).

2.2 Studies included in the meta-analysis

Figure 2 shows the procedure for literature search and study inclusion. Based on our initial search we identified 1614 articles to be screened for potential inclusion in the meta-analysis. Most of these were excluded, typically because they were topically unrelated to the concept of opportunity cost neglect. After the screening phase, 162 articles remained for eligibility assessment. Out of these, 150 articles were excluded, most of them because they were non-experimental, or the experimental conditions differed on more than the one dimension we were interested in (salience of opportunity costs). In the end, 12 eligible articles satisfied all pre-registered inclusion/exclusion criteria and were included in the meta-analysis.

Fig. 2 PRISMA flow diagram. Procedure for literature search and study inclusion with the number of studies at each stage

In total, the meta-analysis covers 12 studies (articles) involving 39 experiments with a total of 12,093 subjects across 5 countries (see Appendix Table 1 for study characteristics). Seven of these were from unpublished studies: two from a conference summary (Chatterjee et al., Reference Chatterjee, Rai and Heath2011), two from a master thesis (Fritzell & Strand, Reference Fritzell and Strand2018) and three from data provided by the authors (Persson & Tinghög, Reference Persson and Tinghög2020a; Zhang, Reference Zhang2020). Some studies tested the same participant more than once (e.g., in two different domains), resulting in a total of N = 14,005 observations. Two-thirds of the experiments were conducted online with survey workers (e.g., M-Turk, Prolific). A few studies were conducted with a general population sample or experts in certain areas and the rest were non-specific student samples.

Most of the included studies used the same or slight modifications of the original design by Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009), either for replication purposes or to explore potential differences in opportunity cost neglect across domains or types of populations.

Zhang et al. (Reference Zhang, Ji and Li2017) compared two different populations (Canada and China) to test for the effect of sociocultural differences in considering opportunity costs, and Plantinga et al. (Reference Plantinga, Krijnen, Zeelenberg and Breugelmans2018) tested whether participants with low socioeconomic status would be more prone to consider opportunity costs compared to their more wealthy counterparts. Other studies used the paradigm to test the effect in different domains. Moche et al. (Reference Moche, Erlandsson, Andersson and Västfjäll2020) focused on charitable giving, reminding participants about the opportunity costs of spending the donation money in other situations. Persson and Tinghög (Reference Persson and Tinghög2020b) tested opportunity cost neglect in general healthcare priority-setting decisions, using both participants from the general population and experts in healthcare decision-making. In this case, participants had to choose whether to invest in a new healthcare program, but in the experimental condition they were reminded that the money could be allocated to other programs. Persson and Tinghög (Reference Persson and Tinghög2020a) used a similar design and also focused on health care, but for more specific rationing decisions. Aharoni et al. (Reference Aharoni, Kleider-Offutt, Brosnan and Watzek2018) investigated criminal punishment judgments, making explicit the costs of convictions for the state, but also in terms of per-taxpayer costs.

Some studies focused on opportunity cost considerations in standard intertemporal choices or in time allocation decisions. Fritzell and Strand (Reference Fritzell and Strand2018), Zhang (Reference Zhang2020) and Chatterjee et al. (Reference Chatterjee, Rai and Heath2011) reminded participants that time could be spent on doing other activities. The latter study required participants to engage in trade-off thinking between time and money. Read et al. (Reference Read, Olivola and Hardisty2017) asked participants to choose between receiving a smaller sooner or larger later amount of money, and they varied the extent to which participants were reminded about foregone time-dated payments implied by the different options; e.g., choosing the larger later amount means not getting the smaller sooner amount. Finally, Greenberg and Spiller (Reference Greenberg and Spiller2016) investigated how the salience of opportunity costs influences repeated evaluations (ratings) of different options, e.g. two nights in a luxury hotel. Their items included both experience goods and standard consumption goods.

2.3 Statistical analysis

For each included study, we quantified the effect as the logarithm of the odds ratio of choice rates between the experimental group (in which opportunity costs are made more salient) and the control group. A positive effect indicates that subjects became less willing to consume/choose the presented option when alternatives were made more salient, i.e., opportunity cost neglect. A negative effect, in contrast, indicates that people became more willing to consume/choose the presented option when alternatives were made salient. The unit of analysis was always individual participants at the experiment level. Thus, some participants contributed more than one observation to our sample; e.g., when the same participant was tested in two different domains (two experiments) in the same study.
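To make this effect-size definition concrete, the computation can be sketched as follows. This is a minimal illustration rather than the authors' analysis code, and the cell counts are hypothetical, chosen to mirror the 75% vs 55% purchase rates from Frederick et al.'s Study 1 with 100 participants per arm:

```python
import math

def log_odds_effect(decline_exp, n_exp, decline_ctrl, n_ctrl):
    """Log odds ratio of declining the presented option,
    experimental (opportunity costs salient) vs control.

    Positive values indicate opportunity cost neglect: the reminder
    made people less willing to choose the presented option.
    """
    a, b = decline_exp, n_exp - decline_exp     # experimental: decline / accept
    c, d = decline_ctrl, n_ctrl - decline_ctrl  # control: decline / accept
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard (Woolf) SE
    return log_or, se

# Hypothetical counts mirroring 55% vs 75% purchase rates (n = 100 per arm):
log_or, se = log_odds_effect(45, 100, 25, 100)
```

With these illustrative counts the log odds ratio comes out around 0.9, close to the size of the original Frederick et al. result and well above the meta-analytic summary of 0.39 reported below.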

Most meta-analyses are carried out using two levels that identify the sources of variability in the data at the participant level (Level 1) and at the study level (Level 2). Considering that, in our case, many papers (studies) included several experiments, we tested whether a multivariate three-level model, which accounts for the similarity of experiments coming from the same paper, would fit the data better than a canonical two-level random-effects model. The analysis showed that the three-level model did not improve model fit (p = 1.00). Therefore, a random-effects meta-analysis using the DerSimonian–Laird pooling method was carried out, with the natural logarithm of the odds ratio of choice rates as the effect size measure, together with its standard error.
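As an illustration of this pooling step, here is a compact sketch of the DerSimonian–Laird estimator (not the authors' code): Cochran's Q is computed under fixed-effect weights, the between-study variance τ² is derived from it, and the effects are then re-pooled with random-effects weights:

```python
import math

def dersimonian_laird(effects, ses):
    """DerSimonian-Laird random-effects pooling of per-experiment
    log odds ratios given their standard errors."""
    w = [1 / s ** 2 for s in ses]                 # inverse-variance weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance, truncated at 0
    w_re = [1 / (s ** 2 + tau2) for s in ses]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2
```

When the per-experiment effects are homogeneous, Q falls below its degrees of freedom, τ² truncates to zero, and the estimator reduces to the ordinary fixed-effect pool.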

3 Results

Figure 3 shows a forest plot of the 39 experiments included in the meta-analysis. The odds ratio is reported on the x axis, showing the change in the rate of choices. When opportunity costs are made more salient, the odds of choosing the positively highlighted option are, on average, 1.48 times higher than in the control condition. Thus, the analysis showed a small effect, equivalent to a Cohen’s d of 0.22 (CI [0.15; 0.27]), that is statistically significant (log odds = 0.39, z = 6.02, p < 0.0001, CI [0.26; 0.52]). Between-study heterogeneity was moderate (I² = 63.11%; Q(38) = 103.01, p < 0.0001). In addition, the figure shows that (i) only one study shows a statistically significant effect opposite to the hypothesized direction, (ii) experiments with larger sample sizes, identifiable by the thickness of the squares, have effects closer to zero, and (iii) experiments from the original study by Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) stand out.
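The reported log odds ratios map onto the quoted Cohen's d values via the standard logistic-distribution conversion, d = ln(OR)·√3/π. The paper does not spell out which conversion it used, so this is an assumption, but the sketch below does reproduce the reported numbers:

```python
import math

def log_or_to_d(log_or):
    """Approximate Cohen's d from a log odds ratio, using the
    logistic-distribution method: d = ln(OR) * sqrt(3) / pi."""
    return log_or * math.sqrt(3) / math.pi

d_main = log_or_to_d(0.39)  # summary log odds from the meta-analysis
```

Applying the same function to the bias-adjusted log odds ratio of 0.29 reported later recovers the quoted d of roughly 0.16.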

Fig. 3 Forest plot and summary of the effects in rate of change of the alternative. Squares and whiskers represent the mean effect and 95% CI found in each study, respectively. The rhombus depicts the summary of the effect, with its centre representing the average effect and its length the 95% CI. Unpublished studies are followed by an asterisk

To assess publication bias, that is, the tendency for significant results to be overrepresented in the scientific literature because they are easier to publish, we conducted a regression test and a rank correlation test for funnel plot asymmetry. Both the regression test (z = 2.56, p = 0.01) and the rank correlation test (Kendall's tau = 0.28, p = 0.01) yielded significant results, indicating asymmetry consistent with missing studies that had null effects or effects in the direction opposite to the hypothesis. This result is also visible in the sunset (power-enhanced) funnel plot (Fig. 4). The figure shows the distribution of effects for both published (in white) and unpublished (in black) studies depending on their power. Studies with higher statistical power reported effects closer to zero, that is, a rate change of 1. Without publication bias, we would expect effects to be distributed symmetrically around the vertical dotted line depicting the summary effect of the meta-analysis. Instead, we find a skewed distribution, and only one study with the opposite direction of the effect.
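The mechanics of such a regression test can be sketched as an Egger-style regression: standardized effects are regressed on precision, and an intercept far from zero signals small-study asymmetry. Whether the paper used exactly this formulation is an assumption; the numbers below are made up purely for illustration:

```python
import math

def egger_test(effects, ses):
    """Egger-style regression test for funnel-plot asymmetry:
    OLS of standardized effect (y_i / se_i) on precision (1 / se_i).
    Returns the intercept and a z-like statistic; an intercept far
    from zero suggests small-study effects."""
    x = [1 / s for s in ses]                   # precision
    y = [e / s for e, s in zip(effects, ses)]  # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)  # residual variance
    se_int = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, intercept / se_int

# Made-up pattern where small studies (large SE) report large effects:
b0, z = egger_test([1.0, 0.8, 0.6, 0.3, 0.25], [0.5, 0.4, 0.3, 0.1, 0.08])
```

With this deliberately asymmetric input the intercept is clearly positive and its z statistic large, the signature the funnel-plot tests look for.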

Fig. 4 Sunset (power-enhanced) funnel plot with both published (white dots) and unpublished (black dots) studies. The x axis shows the strength of the effect, the y axis shows the standard error of the effect, and colours depict the statistical power of the studies. Power is computed at the study level assuming the meta-analytic summary effect as the true effect. Contours represent 95 and 99% confidence intervals for an effect equal to zero, meaning that dots on top and outside both contours, or inside the purple area, are an indication of an effect different from zero. The vertical dotted line depicts the main effect found in the meta-analysis. Medpower is the median power of all studies included. d33 and d66 indicate the true effect size necessary such that the median power of the studies would have been 33% or 66% respectively. E, O and pTES show the results of a test of excess significance (Ioannidis & Trikalinos, Reference Ioannidis and Trikalinos2007), and R-Index denotes expected replicability

We also carried out a leave-one-out sensitivity analysis to assess the change in the overall effect when eliminating one study at a time (Fig. 5). This analysis shows that the studies from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) were different from all other studies (p < 0.001). The heterogeneity of the pooled model decreases from I² = 79.68% to I² = 54.96% when the experiments from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) are excluded.
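Conceptually, a leave-one-out analysis just re-pools the summary effect with each experiment dropped in turn and inspects how much the estimate shifts. A minimal sketch follows (using a plain inverse-variance pool for brevity rather than the paper's full random-effects model):

```python
def pooled_effect(effects, ses):
    """Simple inverse-variance (fixed-effect) pooled estimate."""
    w = [1 / s ** 2 for s in ses]
    return sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

def leave_one_out(effects, ses):
    """Pooled effect re-estimated with each experiment removed in turn.
    A large shift when a particular study is dropped flags that study
    as influential (an outlier candidate)."""
    out = []
    for i in range(len(effects)):
        e = [y for j, y in enumerate(effects) if j != i]
        s = [x for j, x in enumerate(ses) if j != i]
        out.append(pooled_effect(e, s))
    return out
```

Dropping an outlying study pulls the re-pooled estimate markedly toward the remaining studies, which is exactly the pattern Fig. 5 shows for the Frederick et al. experiments.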

Fig. 5 Leave-one-out analysis. Sensitivity analysis showing the change in the effect eliminating single groups of pooled studies. Unpublished studies are followed by an asterisk

Thus, the experiments from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) appear to be outliers that significantly bias the overall effect. Also supporting this interpretation, when the study is excluded from the main analysis, the regression test and rank correlation test for funnel plot asymmetry are no longer significant (z = 0.85, p = 0.40; Kendall's tau = 0.13, p = 0.29). The resulting main effect is reduced to a log odds ratio of 0.29 (z = 5.05; p < 0.0001; CI [0.177; 0.40]), which reflects a rate change of 1.34 (CI [1.21; 1.70]). The result is still significant, but small, comparable to a Cohen’s d of 0.16 (CI [0.1; 0.22]).

3.1 Moderation models

We also examined the extent to which opportunity cost neglect could be moderated by factors discussed in the literature (domain, amount of money, age). Full results from these analyses can be found in the Supplementary materials. Starting with the domain in which choices were made, five main domains were identified: consumer choices, prosocial behaviour/donations, intertemporal choice, policy interventions, and opportunity cost of time. Figure 6 shows the forest plot for this subgroup analysis. Note that the intertemporal choice and donation domains each contain experiments from a single study and should therefore be interpreted with caution. The effect for the consumer domain is the largest, with an estimated rate change of 1.75 when comparing choices across the experimental and control conditions. However, excluding experiments from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009), the effect decreases to a rate change of 1.46 (CI [1.21; 1.77]), which is close to the general effect.

Fig. 6 Forest plot with domain moderation analysis. Squares and whiskers represent the mean effect and 95% CI found in each study, respectively. The rhombus depicts the summary of the effect, with its centre representing the average effect and its length the 95% CI. Unpublished studies are followed by an asterisk

Comparing the different domains, the only domain that differed significantly from consumer choice was intertemporal choice (β = −0.38, z = −2.23, p = 0.026; see Supplementary materials Table S1 for all contrasts). However, it should be noted that the manipulation in these studies was slightly different from the others. Read and colleagues made the lack of gains salient by stating explicitly that nothing would be gained at a certain moment, without expanding the set of alternatives, e.g., by adding “use the money difference for other purchases” or “use the money right away for something you need as soon as possible”. It could be argued that this manipulation made opportunity costs less explicit compared to other experiments, leaving participants to think about implied alternatives without their being directly stated. When experiments from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) are excluded, the difference between consumer choice and intertemporal choice is no longer significant (z = −0.2, p = 0.17).

Next, we investigated whether opportunity cost neglect was moderated by the mean amount of money involved in decisions made in the consumer domain (see Supplementary materials Table S2 and Fig. S1 for complete results from this analysis). Here, studies with alternatives between $500 and $1000 had a significantly enhanced effect compared to choices that involved less than $20 (z = 3.26, p = 0.001, β = 1.07). However, it should be noted that in the $500 and $1000 categories only studies from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) were present. In fact, the latter experiments differ from all others (z = 4.96, p < 0.001, β = 0.94).

In our final moderation analysis, we investigated moderation by the mean age of participants (full results are in Supplementary materials Table S3 and Fig. S2). Age groups were created for mean ages below 30, between 30 and 40, between 40 and 50, and over 50 years. Since only studies that reported mean age or made data available could be included, data from Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009) were not present in this analysis. Results showed that no group differed from the reference group with mean age below 30 (p > 0.05); however, mean age may be too coarse an index. Persson and Tinghög (Reference Persson and Tinghög2020b) found a correlation between participants’ age and the strength of the effect, so using mean age alone could mask that relation.

4 Discussion

For the first time, the phenomenon of opportunity cost neglect was quantitatively synthesized through a meta-analysis. We conducted an extensive search and identified 12 eligible studies, three of which were unpublished, consisting of 39 experiments with a total of 12,093 subjects.

The meta-analysis showed a small but robust mean effect (Cohen’s d = 0.22, p < 0.001). Taken together, our results indicate that the magnitude of opportunity cost neglect is likely much smaller than has been suggested in previous studies, including the seminal paper by Frederick et al. (Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009); perhaps because the initial studies had relatively small samples and thus lower statistical power, which typically inflates effect sizes of published findings (Button et al., Reference Button, Ioannidis, Mokrysz, Nosek, Flint, Robinson and Munafo2013). Still, it appears to be a robust phenomenon, carrying substantive potential to influence decision-making across the board, even at the smaller effect sizes obtained in this meta-analysis. When we control for publication bias, the estimated mean effect is equivalent to a Cohen’s d of 0.16. This is not a negligible effect, keeping in mind the fundamental role of opportunity cost in decision-making, both in consumer choice and at the policy level.

Going forward, there are important aspects of opportunity cost neglect that still have not been fully explored. We conducted moderation analysis by domain, but due to the relatively low number of studies outside the consumer domain, no strong conclusions could be drawn based on that analysis. We generally expected opportunity cost neglect to be stronger in public policy decisions than private spending, because opportunity costs are less salient in public spending (it is not out-of-pocket), and for some policy domains, like health care, people may be reluctant to engage in trade-off thinking at all because health is of special moral importance (e.g., sometimes seen as a ‘sacred value’; Persson et al., Reference Persson, Erlandsson, Slovic, Västfjäll and Tinghög2022; Tetlock, Reference Tetlock2003). However, the moderation analysis indicated the opposite pattern, with a smaller mean effect for the policy domain compared to the consumption domain, although the difference was not statistically significant (see Fig. 6 and Supplementary material Table S1). Moreover, there was some heterogeneity among the studies within the policy group; one study found no effect (Persson & Tinghög, Reference Persson and Tinghög2020a), one study found a moderate effect consistent with opportunity cost neglect (Persson & Tinghög, Reference Persson and Tinghög2020b), and another study found quite a large effect (Aharoni et al., Reference Aharoni, Kleider-Offutt, Brosnan and Watzek2018). There is clearly scope for more research on opportunity cost neglect in the public policy area.

Some additional limitations should be noted. First, the experimental literature on opportunity cost neglect is based on experimental studies that are primarily hypothetical (with the exception of one of the experiments in Frederick et al., Reference Frederick, Novemsky, Wang, Dhar and Nowlis2009), and the results from our meta-analysis should be interpreted with this in mind. Second, even though most of the included studies used similar designs, the items and contexts varied substantially between some of the studies. For example, the studies on health policy naturally considered interventions that could save lives and cost millions of dollars, whereas studies in the consumer domain concerned standard consumption items like a smartphone, a tablet, a laptop, a movie ticket, and so on. Moreover, more than half of the experiments in the meta-analysis were in fact in the consumer domain (21 out of 39 experiments), with a lower number of studies in each of the other domains covered by the included studies. Thus, a cautious interpretation would be that the result for the consumer domain is robust and informative from a meta-analytic point of view, but for the other domains, we should be more cautious.

In conclusion, this work highlights the importance of replication, preregistration, and meta-analysis for assessing the cumulative evidence for behavioural phenomena discovered in empirical work. In the wake of the replication crisis in psychology, there is now an increasing awareness of methodological challenges across the social sciences. Findings from laboratory experiments are often particularly suited for meta-analysis because standardized protocols are commonly used and there is an increasing availability of open data. Meta-analysis facilitates the aggregation of the fragmented information provided by single studies into a more structured scientific consensus. The concept of opportunity cost neglect paints a picture of human decision-making as limited, but also with room for improvement. Taken at face value, if small manipulations of information can influence decision-making (as shown in the studies included in this meta-analysis), it should likewise be possible to boost or nudge people into making better decisions. This would primarily be relevant in the context of policymaking, using, for example, decision aids to facilitate a more structured decision process with the goal of improving the allocation of resources in society.

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s40881-023-00134-6.

Author contributions

GT & EP conceived the research idea. AM developed the research design, performed the calculations, and wrote the manuscript with revisions from EP and GT.

Funding

Open access funding provided by Linköping University. This work was supported by the Swedish Research Council [grant number: 2018.01755] and the Swedish Research Council for Health, Working Life and Welfare (Forte) [grant number 2020–00864]. Funders had no role in study design, data collection, analysis, decision to publish, or preparation of the manuscript.

Availability of data and materials (data transparency)

Analysis codes and the data used in this paper are available at the project’s OSF repository https://osf.io/p4yjm/.

Code availability

The preregistration, together with data and analysis codes can be accessed at https://osf.io/p4yjm/.

Declarations

Conflict of interest

All authors declare no competing interests that could have appeared to influence the submitted work.

Ethics approval

Not applicable. This is a meta-analysis of previously published and unpublished studies.

Consent to participate

Not applicable. This is a meta-analysis of previously published and unpublished studies.

Consent for publication

All authors consent to this publication.

Appendix

See Table 1.

Table 1 Studies included in the meta-analysis

| Authors | Paper | Published | Domain | Sample size | Student sample | Setting | Participants' country | Outcome (absolute percentage change) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Chatterjee et al. (2011) | Opportunity Costs Neglect in the Tradeoff Between Time and Money | No | Time | 182 | Yes | Lab experiment | USA | 19.5 |
| Frederick et al. (2009) | Opportunity Cost Neglect | Yes | Consumer | 845 | Yes | Lab experiment | USA | 25.5 |
| Fritzell and Strand (2018) | The Opportunity Cost Neglect of Money and Time: The Role of Mental Budgeting | No | Consumer and Time | 241 | Yes | Lab experiment | Sweden | 2.88 |
| Greenberg and Spiller (2016) | Opportunity Cost Neglect Attenuates the Effect of Choices on Preferences | Yes | Consumer | 1507 | No | Online experiment | USA | 6.08 |
| Read et al. (2017) | The value of nothing: Asymmetric attention to opportunity costs drives intertemporal decision making | Yes | Intertemporal choice | 2088 | No | Online experiment | UK | 5.21 |
| Persson and Tinghög (2020a) | Cost neglect in health care rationing decisions | No | Consumer and Policy | 686 | No | Online experiment | Sweden | 4.76 |
| Moche et al. (2020) | Opportunity cost in monetary donation decisions | Yes | Donation | 2397 | No | Online experiment | Sweden | 3.58 |
| Persson and Tinghög (2020b) | Opportunity cost neglect in public policy | Yes | Consumer and Policy | 951 | No | Online experiment | Sweden | 5.85 |
| Plantinga et al. (2018) | Evidence for Opportunity Cost Neglect in the Poor | Yes | Consumer | 2276 | No | Online experiment | USA | 14.06 |
| Zhang (2020) | Opportunity Cost of Time | No | Time | 351 | No | Online experiment | – | 1.64 |
| Zhang et al. (2017) | Cultural Differences in Opportunity Cost Consideration | Yes | Time | 240 | Yes | Lab experiment | Canada and China | 8.8 |
| Aharoni et al. (2018) | Justice at any cost? The impact of cost–benefit salience on criminal punishment judgments | Yes | Policy | 181 | Yes | Online experiment | USA | 13.1 |

Footnotes

The replication material for the study is available at https://osf.io/p4yjm/.

1 When the study was preregistered, we specified multiple moderation analyses. However, the moderation analysis for money ownership (own vs. other money) was not carried out due to its complete overlap with the consumer and policy domains, respectively. Furthermore, the analyses of expertise in decision making and income levels, and the sensitivity analysis checking the appropriateness of within-subjects designs, were not run because too few studies had available data.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

References

Aharoni, E., Kleider-Offutt, H. M., Brosnan, S. F., & Watzek, J. (2018). Justice at any cost? The impact of cost-benefit salience on criminal punishment judgments. Behavioral Sciences & the Law, 37(1), 38–60. https://doi.org/10.1002/bsl.2388
Aharoni, E., Kleider-Offutt, H. M., & Brosnan, S. F. (2020). The price of justice: Cost neglect increases criminal punishment recommendations. Legal and Criminological Psychology, 25(1), 47–61. https://doi.org/10.1111/lcrp.12161
Button, K., Ioannidis, J., Mokrysz, C., Nosek, B., Flint, J., Robinson, E., & Munafo, M. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376. https://doi.org/10.1038/nrn3475
Chatterjee, S., Rai, D., & Heath, T. (2011). Opportunity costs neglect in the tradeoff between time and money [Unpublished manuscript]. ACR North American Advances. http://www.acrwebsite.org/volumes/16014/volumes/v38/NA-38. Accessed Oct 2020.
Edney, L., Afzali, H. H. A., & Karnon, J. (2019). Are the benefits of new health services greater than their opportunity costs? Australian Health Review, 43(5), 508–510. https://doi.org/10.1071/AH18087
Erlandsson, A., Lindkvist, A., Lundqvist, K., Andersson, P. A., Dickert, S., Slovic, P., & Västfjäll, D. (2020). Moral preferences in helping dilemmas expressed by matching and forced choice. Judgment and Decision Making, 15(4), 452–475. https://doi.org/10.1017/S1930297500007427
Fetherstonhaugh, D., Slovic, P., Johnson, S., & Friedrich, J. (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14(3), 283–300. https://doi.org/10.1023/A:1007744326393
Frederick, S., Novemsky, N., Wang, J., Dhar, R., & Nowlis, S. (2009). Opportunity cost neglect. Journal of Consumer Research, 36(4), 553–561. https://doi.org/10.1086/599764
Fritzell, G., & Strand, L. (2018). The opportunity cost neglect of money and time: The role of mental budgeting [Unpublished manuscript]. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1219163&dswid=9820. Accessed Oct 2020.
Greenberg, A. E., & Spiller, S. A. (2016). Opportunity cost neglect attenuates the effect of choices on preferences. Psychological Science, 27(1), 103–113. https://doi.org/10.1177/0956797615608267
Ioannidis, J. P., & Trikalinos, T. A. (2007). An exploratory test for an excess of significant findings. Clinical Trials, 4(3), 245–253. https://doi.org/10.1177/1740774507079441
Legrenzi, P., Girotto, V., & Johnson-Laird, P. N. (1993). Focusing in reasoning and decision making. Cognition, 49(1), 37–66. https://doi.org/10.1016/0010-0277(93)90035-T
Moche, H., Erlandsson, A., Andersson, D., & Västfjäll, D. (2020). Opportunity cost in monetary donation decisions to non-identified and identified victims. Frontiers in Psychology, 10, 3035. https://doi.org/10.3389/fpsyg.2019.03035
Persson, E., Erlandsson, A., Slovic, P., Västfjäll, D., & Tinghög, G. (2022). The prominence effect in health-care priority setting. Judgment and Decision Making, 17(6), 1379–1391. https://doi.org/10.1017/S1930297500009463
Persson, E., & Tinghög, G. (2020a). Cost neglect in health care rationing decisions [Unpublished manuscript].
Persson, E., & Tinghög, G. (2020b). Opportunity cost neglect in public policy. Journal of Economic Behavior & Organization, 170, 301–312. https://doi.org/10.1016/j.jebo.2019.12.012
Plantinga, A., Krijnen, J. M. T., Zeelenberg, M., & Breugelmans, S. M. (2018). Evidence for opportunity cost neglect in the poor. Journal of Behavioral Decision Making, 31(1), 65–73. https://doi.org/10.1002/bdm.2041
Read, D., Olivola, C. Y., & Hardisty, D. J. (2017). The value of nothing: Asymmetric attention to opportunity costs drives intertemporal decision making. Management Science, 63(12), 4277–4297. https://doi.org/10.1287/mnsc.2016.2547
Slovic, P. (1975). Choice between equally valued alternatives. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 280–287. https://doi.org/10.1037/0096-1523.1.3.280
Spiller, S. A. (2019). Opportunity cost neglect and consideration in the domain of time. Current Opinion in Psychology, 26, 98–102. https://doi.org/10.1016/j.copsyc.2018.10.001
Tetlock, P. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7(7), 320–324. https://doi.org/10.1016/S1364-6613(03)00135-9
Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior & Organization, 1(1), 39–60. https://doi.org/10.1016/0167-2681(80)90051-7
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. https://doi.org/10.1126/science.7455683
Tversky, A., Sattath, S., & Slovic, P. (1988). Contingent weighting in judgment and choice. Psychological Review, 95(3), 371. https://doi.org/10.1037/0033-295X.95.3.371
Weiss, L., & Kivetz, R. (2019). Opportunity cost overestimation. Journal of Marketing Research, 56(3), 518–533. https://doi.org/10.1177/0022243718819474
Zhang, N. (2020). Opportunity cost of time [Unpublished manuscript].
Zhang, N., Ji, L.-J., & Li, Y. (2017). Cultural differences in opportunity cost consideration. Frontiers in Psychology, 8, 45.
Zhao, C.-X., Jiang, C.-M., Zhou, L., Li, S., Rao, L.-L., & Zheng, R. (2015). The hidden opportunity cost of time effect on intertemporal choice. Frontiers in Psychology, 6, 311. https://doi.org/10.3389/fpsyg.2015.00311
Fig. 1 Original experimental paradigm. Scenario from Study 1A in Frederick et al. (2009)

Fig. 2 PRISMA flow diagram. Procedure for literature search and study inclusion, with the number of studies at each stage

Fig. 3 Forest plot and summary of the effects in rate of change of the alternative. Squares and whiskers represent the mean effect and 95% CI found in each study, respectively. The rhombus depicts the summary of the effect, with its centre representing the average effect and its length the 95% CI. Unpublished studies are followed by an asterisk

Fig. 4 Sunset (power-enhanced) funnel plot with both published (white dots) and unpublished (black dots) studies. The x axis shows the strength of the effect, the y axis shows the standard error of the effect, and colours depict the statistical power of the studies. Power is computed at the study level assuming the meta-analytic summary effect as the true effect. Contours represent 95 and 99% confidence intervals for an effect equal to zero, meaning that dots on top and outside both contours, or inside the purple area, are an indication of an effect different from zero. The vertical dotted line depicts the main effect found in the meta-analysis. Medpower is the median power of all studies included. d33 and d66 indicate the true effect size necessary such that the median power of the studies would have been 33% or 66% respectively. E, O and pTES show the results of a test of excess significance (Ioannidis & Trikalinos, 2007), and R-Index denotes expected replicability

Fig. 5 Leave-one-out analysis. Sensitivity analysis showing the change in the effect eliminating single groups of pooled studies. Unpublished studies are followed by an asterisk

Fig. 6 Forest plot with domain moderation analysis. Squares and whiskers represent the mean effect and 95% CI found in each study, respectively. The rhombus depicts the summary of the effect, with its centre representing the average effect and its length the 95% CI. Unpublished studies are followed by an asterisk

