The “Hot Stove Effect” refers to an asymmetry in error correction that affects a learner who estimates the quality of an option from their own experience with it: errors of overestimation of an option's quality are more likely to be corrected than errors of underestimation. In this chapter, we describe a “Collective Hot Stove Effect” that characterizes the dynamics of collective valuations rather than individual quality estimates. We analyze settings in which the collective valuation of an option is updated sequentially as additional samples of information arrive, focusing on cases where the valuation is more likely to be updated when it is high than when it is low. Just as the law of effect implies a Hot Stove Effect for individual learners, a Collective Hot Stove Effect emerges: errors of overestimation of an object's quality by the collective valuation are more likely to be corrected than errors of underestimation. We test the unique predictions of our model in an online experiment, and examine its assumptions and predictions in analyses of large datasets of online ratings from popular websites (Amazon.com, Yelp.com, Goodreads.com, Weedmaps.com) comprising more than 160 million ratings.
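The asymmetry described in the abstract can be illustrated with a small simulation. This is a minimal sketch under assumed dynamics, not the chapter's actual model: the update rule (a new rating arrives with probability equal to the current valuation, and the valuation is a running average of ratings) and all parameter values are invented for illustration.

```python
import random

def simulate_item(q, v0, steps=30, noise=0.2, rng=random):
    """Sequentially update the collective valuation v of an item with true
    quality q. A new rating arrives with probability equal to the current
    valuation, so highly valued items are sampled (and corrected) more often."""
    v, n = v0, 1  # n = number of ratings aggregated so far
    for _ in range(steps):
        if rng.random() < v:  # updates are more likely when v is high
            rating = min(1.0, max(0.0, q + rng.gauss(0.0, noise)))
            n += 1
            v += (rating - v) / n  # running-average update
    return v

rng = random.Random(42)
over_err, under_err = [], []
for _ in range(2000):
    q = rng.uniform(0.2, 0.8)
    bias = rng.choice([-0.15, 0.15])  # start under- or overestimated
    final_v = simulate_item(q, q + bias, rng=rng)
    (over_err if bias > 0 else under_err).append(abs(final_v - q))

# Overestimated items attract more ratings, so their errors shrink faster.
print(sum(over_err) / len(over_err) < sum(under_err) / len(under_err))  # prints True
```

Because initially overvalued items are rated more often, their valuations converge toward true quality, while undervalued items attract fewer ratings and their errors persist — the collective analogue of touching (or avoiding) the hot stove.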
Objective
Patients increasingly use physician rating websites to evaluate and choose potential healthcare providers. A sentiment analysis and machine learning approach can uniquely analyse written prose to quantitatively describe patients' perspectives from their interactions with physicians.
Methods
Online written reviews and star scores were analysed from Healthgrades.com using a natural language processing sentiment analysis package. Demographics of otolaryngologists were compared and a multivariable regression for individual words was performed.
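The scoring step can be sketched with a toy lexicon-based scorer. This is illustrative only: the study used an off-the-shelf natural language processing sentiment package, and the word lists below are invented for the example (seeded with words the Results section associates with positive reviews).

```python
# Hypothetical sentiment lexicons -- not the study's actual word lists.
POSITIVE = {"confident", "kind", "recommend", "comfortable", "caring", "great"}
NEGATIVE = {"rude", "rushed", "dismissive", "unprofessional", "wait"}

def sentiment_score(review: str) -> float:
    """Return a score in [-1, 1]: (positive - negative) over all
    sentiment-bearing words found in the review; 0.0 if none are found."""
    words = (w.strip(".,!?;:") for w in review.lower().split())
    pos = neg = 0
    for w in words:
        if w in POSITIVE:
            pos += 1
        elif w in NEGATIVE:
            neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Dr. Smith was kind and confident; I recommend her!"))  # prints 1.0
```

Real packages weight words by intensity and handle negation and punctuation, but the underlying idea — mapping free-text reviews onto a numeric sentiment scale that can be regressed against star scores and demographics — is the same.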
Results
This study analysed 18 546 online reviews of 1240 otolaryngologists across the USA. Younger otolaryngologists (aged less than 40 years) had higher sentiment and star scores compared with older otolaryngologists (p < 0.001). Male otolaryngologists had higher sentiment and star scores compared with female otolaryngologists (p < 0.001). ‘Confident’, ‘kind’, ‘recommend’ and ‘comfortable’ were words associated with positive reviews (p < 0.001).
Conclusion
Positive bedside manner was strongly reflected in better reviews, and younger age and male gender of the otolaryngologist were associated with better sentiment and star scores.