Previous research demonstrates a large difference between decisions from description and decisions from experience, and also between decisions and probability judgments from experience. Comparing decisions from description with decisions from experience reveals a description–experience gap (Hertwig & Erev, 2009): higher sensitivity to rare events in decisions from description. Comparing judgments and decisions from experience reveals the coexistence of overestimation and underweighting of rare events (Barron & Yechiam, 2009). The current review suggests that both sets of differences are examples of the J/DM separation paradox: while separate studies of judgment and of decision making reveal oversensitivity to rare events, without the separation these processes often lead to the opposite bias. Our analysis shows that the J/DM paradox can be a product of the fact that separating judgment from decision making requires an explicit presentation of the rare events, and this mere presentation increases the apparent weighting of these events. In addition, our analysis suggests that feedback diminishes the mere-presentation effect but does not guarantee an increase in rational behavior. When people can rely on accurate feedback, the main deviations from rational judgment and decision making can be captured by the reliance-on-small-samples hypothesis.
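The reliance-on-small-samples idea can be illustrated with a minimal simulation (a hypothetical sketch; the payoffs, sample size, and choice rule below are illustrative assumptions, not the review's model). A decision maker judges a risky option from only five past outcomes; because the rare event usually fails to appear in so small a sample, the risky option is chosen in a minority of cases despite its higher expected value:

```python
import random

random.seed(0)

def risky_choice_rate(k=5, p=0.1, rare_payoff=20.0, safe=1.0, trials=10_000):
    """Share of simulated decision makers who pick the risky option after
    judging it from a sample of k past outcomes (pick risky iff sample
    mean exceeds the sure payoff of the safe option)."""
    risky = 0
    for _ in range(trials):
        sample = [rare_payoff if random.random() < p else 0.0 for _ in range(k)]
        if sum(sample) / k > safe:
            risky += 1
    return risky / trials

# The risky option's true expected value is p * rare_payoff = 2.0 > 1.0,
# yet a 5-outcome sample omits the rare event with probability 0.9**5 ~= 0.59,
# so the rare event is effectively underweighted in choice.
rate = risky_choice_rate()
```

In this sketch the risky option is chosen only about 40% of the time, mirroring underweighting of rare events in decisions from experience.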
People revisit the restaurants they like and avoid the restaurants with which they had a poor experience. This tendency to approach alternatives believed to be good is usually adaptive but can lead to a systematic bias. Errors of underestimation (an alternative is believed to be worse than it is) will be less likely to be corrected than errors of overestimation (an alternative is believed to be better than it is). Denrell & March (2001) called this asymmetry in error correction the “Hot Stove Effect.” This chapter explains the basic logic behind the Hot Stove Effect and how this bias can explain a range of judgment biases. We review empirical studies that illustrate how risk aversion and mistrust can be explained by the Hot Stove Effect. We also explain why even a rational algorithm can be subject to the same bias.
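The asymmetry in error correction can be sketched in a few lines of simulation (a hypothetical illustration; the Gaussian payoffs and greedy choice rule are assumptions, not the chapter's model). A learner samples a risky option only while its running estimate looks at least as good as a safe option, so errors of underestimation freeze in place while errors of overestimation keep being corrected:

```python
import random

random.seed(1)

def mean_final_estimate(trials=200, runs=5_000):
    """Average final estimate of a risky option whose true mean payoff is 0.
    The learner samples the option only while its estimate is >= 0 (the safe
    option pays exactly 0), updating a running mean after each sample."""
    total = 0.0
    for _ in range(runs):
        est, n = 0.0, 1
        for _ in range(trials):
            if est >= 0.0:                    # approach only when it looks good
                payoff = random.gauss(0.0, 1.0)
                n += 1
                est += (payoff - est) / n     # running-mean update
            # once est < 0, the option is avoided and the error persists
        total += est
    return total / runs

# Underestimation errors are never corrected, overestimation errors are,
# so the average final estimate is negative although the true mean is 0.
bias = mean_final_estimate()
```

The simulated learner ends up systematically pessimistic about an objectively neutral option, which is the risk-averse pattern the Hot Stove Effect predicts.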
The “Hot Stove Effect” pertains to an asymmetry in error correction that affects a learner who estimates the quality of an option based on his or her experience with it: errors of overestimation of the quality of an option are more likely to be corrected than errors of underestimation. In this chapter, we describe a “Collective Hot Stove Effect,” which characterizes the dynamics of collective valuations rather than individual quality estimates. We analyze settings in which the collective valuation of an option is updated sequentially based on additional samples of information. We focus on cases where the collective valuation of an option is more likely to be updated when it is higher than when it is lower. Just as the law of effect implies a Hot Stove Effect for individual learners, a Collective Hot Stove Effect emerges: errors of overestimation of an object's quality by the collective valuation are more likely to be corrected than errors of underestimation. We test the unique predictions of our model in an online experiment and test assumptions and predictions of our model in analyses of large datasets of online ratings from popular websites (Amazon.com, Yelp.com, Goodreads.com, Weedmaps.com) comprising more than 160 million ratings.
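A minimal sketch of this collective dynamic (the valuation-dependent attention rule and all parameters below are illustrative assumptions, not the chapter's fitted model): each item's collective valuation is the mean of its ratings, and new raters are much more likely to try, and then rate, an item whose current valuation is high. Items with unluckily low early ratings then attract few corrective ratings:

```python
import random

random.seed(2)

def mean_valuation_error(items=5_000, potential_raters=10, true_quality=3.0):
    """Average (collective valuation - true quality) across items, where each
    rating is drawn from N(true_quality, 1). A potential rater samples an item
    with probability 0.8 when its current mean rating is >= 3.0 and with
    probability 0.05 otherwise."""
    total_error = 0.0
    for _ in range(items):
        ratings = [random.gauss(true_quality, 1.0)]    # a single initial rating
        for _ in range(potential_raters):
            valuation = sum(ratings) / len(ratings)
            p_try = 0.8 if valuation >= 3.0 else 0.05  # valuation-dependent attention
            if random.random() < p_try:
                ratings.append(random.gauss(true_quality, 1.0))
        total_error += sum(ratings) / len(ratings) - true_quality
    return total_error / items

# Items that start with a low rating attract few further ratings, so errors of
# underestimation persist and the average valuation error is negative.
bias = mean_valuation_error()
```

In this sketch the average collective valuation sits below the true quality, the collective analogue of the individual learner's pessimistic bias.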