The British tended to deny that Darwinism had anything to say to philosophy, epistemology, or ethics. The Americans were far more appreciative of Darwinism, which strongly supported their approach to epistemology, Pragmatism. Today, on both sides of the Atlantic, there are enthusiasts for a Darwin-influenced philosophy, for instance one promoting a naturalistic Kantianism in epistemology and ethical nonrealism in moral discourse.
This chapter reviews the evidence behind the anti-misinformation interventions that have been designed and tested since misinformation research exploded in popularity around 2016. It focuses on four types of intervention: boosting skills or competences (media/digital literacy, critical thinking, and prebunking); nudging people by making changes to social media platforms’ choice architecture; debunking misinformation through fact-checking; and (automated) content labelling. These interventions have one of three goals: to improve relevant skills such as spotting manipulation techniques, source criticism, or lateral reading (in the case of boosting interventions and some content labels); to change people’s behavior, most commonly improving the quality of their sharing decisions (for nudges and most content labels); or to reduce misperceptions and misbeliefs (in the case of debunking). While many such interventions have been shown to work well in lab studies, there continues to be an evidence gap with respect to their effectiveness over time, and how well they work in real-life settings (such as on social media).
This chapter opens with a review of the dangers of misinformation and conspiracy theory beliefs. We then review the literature on debunking techniques, highlighting why debunking is largely ineffective at combatting QAnon and other conspiracy theories. Although corrections are largely ineffective, repeated corrections, warnings, and alternative accounts for misinformation can improve their effectiveness. In contrast to debunking, another approach is "prebunking": trying to prevent conspiracy beliefs rather than counter them after the fact. Based on inoculation theory, Banas and Miller (2013) found that both fact-based and logic-based messages delivered before a conspiracy film helped build up participants' resistance to that message. Next, this chapter discusses the role of media literacy in QAnon and other, more general conspiracies. Research has indicated that greater news media literacy relates negatively to beliefs in conspiracies (Craft et al., 2017). A brief discussion of how QAnon is similar to or different from other groups is offered, along with some research questions for future study about QAnon specifically.
Edited by
Jonathan Fuqua, Conception Seminary College, Missouri; John Greco, Georgetown University, Washington DC; Tyler McNabb, Saint Francis University, Pennsylvania
Debunking arguments aim to undermine a belief based on epistemically problematic features of how the belief was originally formed or is currently held. They typically offer at least a partial genealogy for the belief and then point out epistemically problematic features of the genealogy. Many important scholars of religion – from Hume, Feuerbach, and Freud to contemporary scholars in the cognitive science of religion such as Boyer, Bering, and Norenzayan – have attempted to explain human religious belief naturalistically. Do their accounts debunk religious belief? This chapter presents a schema for debunking arguments, briefly summarizes several proposed explanations of religious belief, and outlines several epistemic principles that have been used in debunking arguments. Then, it presents three different debunking arguments for belief in gods and discusses several replies to those arguments, including the religious reasons reply, the classic Plantingean approach to defeat, and epistemic self-promotion.
Religion is relevant to all of us, whether we are believers or not. This book concerns two interrelated topics. First, how probable is God's existence? Should we not conclude that all divinities are human inventions? Second, what are the mental and social functions of endorsing religious beliefs? The answers to these questions are interdependent. If a religious belief were true, the fact that humans hold it might be explained by describing how its truth was discovered. If all religious beliefs are false, a different explanation is required. In this provocative book Herman Philipse combines philosophical investigations concerning the truth of religious convictions with empirical research on the origins and functions of religious beliefs. Numerous topics are discussed, such as the historical genesis of monotheisms out of polytheisms, how to explain Saul's conversion to Jesus, and whether any apologetic strategy of Christian philosophers is convincing. Universal atheism is the final conclusion.
In this culminating chapter, we draw conclusions and present criteria for the selection of conspiracy theories worthy of debunking. Specifically, we argue for the need to balance accessibility of the beliefs in memory and the likelihood that acceptance of them will elicit problematic behavior. Lastly, we propose possible ways of debunking the various conspiracy beliefs on which the book has focused.
Dividing moral questions into those of substantive ethics (what should I do?) and those of metaethics (why should I do what I should do?), the mechanistic/Darwinian approach has little novel to say at the level of substantive ethics. One possible exception is that it casts doubt on the claim that we have equal moral obligations to all humans indifferently. We have special obligations to our children and other family members, and more to our friends and our countrymen than to others. We have obligations to the starving poor in Africa, but charity begins at home. Metaethically, the Darwinian can offer no justification; to attempt one would be to commit the naturalistic fallacy, moving from claims about matters of fact to claims about values. For the Darwinian, the world has no intrinsic value. This means that the Darwinian is a moral non-realist. It does not mean they have no substantive ethics, but that these are psychological, not grounded in external supports, natural or non-natural (like Platonic forms or the will of God). We objectify morality, thinking that substantive claims do have support and are objective; otherwise we would all begin to cheat and the whole system would break down. Ultimately, however, faced squarely, Darwinism demands a dramatic rethinking of common sense and the assumptions of the ages, at least in Western civilization.
In recent years, interest in the psychology of fake news has rapidly increased. We outline the various interventions within psychological science aimed at countering the spread of fake news and misinformation online, focusing primarily on corrective (debunking) and pre-emptive (prebunking) approaches. We also offer a research agenda of open questions within the field of psychological science that relate to how and why fake news spreads and how best to counter it: the longevity of intervention effectiveness; the role of sources and source credibility; whether the sharing of fake news is best explained by the motivated cognition or the inattention accounts; and the complexities of developing psychometrically validated instruments to measure how interventions affect susceptibility to fake news at the individual level.
Moral skeptics maintain that we do not have moral knowledge. Traditionally they have not argued via skeptical hypotheses like those provided by perceptual skeptics about the external world, such as Descartes' deceiving demon. But some believe this can be done by appealing to hypotheses like moral nihilism, and some claim that skeptical hypotheses have special force in the moral case. I argue that skeptics have failed to specify an adequate skeptical scenario, which reveals a general lesson: such arguments are not a promising avenue for moral skeptics to take, and they are ultimately weaker when applied to morality than to perception.