1. Introduction
In recent years, concerns about misinformation in the media have skyrocketed. President Donald Trump has repeatedly claimed that various major news outlets including CNN, The New York Times, ABC News, and MSNBC are disseminating ‘fake news’ for political purposes (Coll, 2017; Nelson, 2017; Wang, 2017). But how do people judge whether the news from mainstream media networks contains true or false information? We examine this question based on a pre-registered survey experiment conducted during the first year of the Trump presidency. Our focus is on ‘ambiguous’ false information in the news – the kind of information major news networks could disseminate regardless of their intention. We define the content of news as ‘ambiguous’ if it provides no clear clues about its truth value or about its partisan slant.
Examining this question is important in the context of growing concerns about the polarization of American politics (e.g., Poole and Rosenthal, 1984; Layman et al., 2006; Cohn, 2014; Doherty, 2014) and the politicization of the mainstream media in the USA (McCright and Dunlap, 2011; Levendusky, 2013a; Mitchell et al., 2014). While Americans increasingly exhibit low trust in the media in general (Swift, 2016; Silverman, 2017; Knight Foundation, 2018), a more concerning trend is that they trust the individual media sources that they use (Daniller et al., 2017; Pennycook and Rand, 2019a). If individuals automatically judge content from ideologically uncongenial sources as biased or untrustworthy, and content from congenial sources as undeniably factual, systematic errors in collective public opinion may arise (Flynn et al., 2017). Indeed, these errors could have detrimental effects on important aspects of democracy, such as voting and political beliefs (DellaVigna and Kaplan, 2007).
Given these concerns, a growing number of scholars have studied misinformation about politics and policy in recent years (e.g., Nyhan and Reifler, 2010; Berinsky, 2015; Bullock et al., 2015; Prior et al., 2015; Swire et al., 2017). The trustworthiness of mainstream media outlets has also become a popular topic of debate on social media and among journalists (e.g., Nazaryan, 2017; Rymel, 2017), and several studies have investigated whether and how people evaluate the credibility of mainstream sources (e.g., Carr et al., 2014; Kim, 2015; Pennycook and Rand, 2019a). Nevertheless, to the best of our knowledge, no previous research has directly examined the impact of misinformation that major news networks, such as CNN and Fox News, could intentionally or unintentionally be spreading and, more importantly, how partisanship and ideology moderate people's susceptibility to believing this sort of information.
To address this, we presented study participants – partisans (Democrats and Republicans) and ideologues (liberals and conservatives) – with a news article excerpt that varied by source shown (CNN, Fox News, or no source) and content (true or false information), and measured their perceived accuracy of the information contained in the article. Our goal was to investigate how the effects of these treatment variables vary by study participants' partisanship and ideology.
The results of our experiment suggest that people do not blindly judge articles based on news source, regardless of their own partisanship or ideology. Rather, we find that contrary to prevailing ideas about the increasing polarization of American people's trust in the media, as well as about American voters' growing propensity to engage in ‘partisan motivated reasoning,’ source cues are not as important as the information itself. In other words, there is little stopping American consumers of the mass media from trusting and believing the information they read, true and false alike, from any major news source. While this conclusion may lead us to be optimistic about individuals' ability to overlook partisan or ideological cues when processing information from the media, we argue that it underlines the importance of preventing any type of false content from reaching people in the first place.
2. Motivated reasoning, misinformation, and the media
Our experiment draws on several bodies of existing literature – studies on the theory of partisan motivated reasoning, research on the impacts of various media sources and news content on opinion formation, and literature on misinformation about politics and policy. In this section, we first introduce these studies and discuss our theoretical contributions. We then specify our hypothesis.
2.1 Literature review
Scholarship on partisan motivated reasoning suggests that people interpret evidence or information from ideologically congenial sources as stronger than information from opposing sources (e.g., Druckman et al., 2013; Bolsen et al., 2014), or that exposure to congenial information reduces perceptions that the information is biased (Kelly, 2019). Several studies explore this phenomenon in the media by investigating how individuals respond to messages from different news networks. By presenting participants with identical written or televised news reports but varying the source attributed to the reports, these studies show that people use news network as a heuristic to shape their opinions about the content (Gussin and Baum, 2004; Baum and Gussin, 2005, 2007; Turner, 2007; Levendusky, 2013a, 2013b; Druckman et al., 2015). Baum and Gussin (2007), for example, demonstrate that participants who read a news transcript about the 2004 presidential campaign attributed to Fox News saw it as more favorable toward Bush, relative to Kerry, than participants who read an identical transcript attributed to CNN.
A separate but related body of research explores whether partisans and ideologues engage in motivated reasoning when processing corrections to political misperceptions. Berinsky (2015), for example, manipulates the source of rumor corrections concerning the Affordable Care Act (ACA), and finds that corrections from an ‘unexpected’ (Republican) individual who might otherwise be expected to oppose the ACA are most effective at correcting the rumors.Footnote 1 Nyhan and Reifler (2013) examine the extent to which corrections succeed in reducing misperceptions about the 2012 presidential election by varying the news source of mock newspaper articles with misperceptions and corrections.Footnote 2 They find mixed effects with regard to ideology; for liberals, a correction to misinformation is just as persuasive when it comes from MSNBC as when it comes from Fox News, but for conservatives, a combination of MSNBC as the media outlet and a liberal think tank as the correction source is significantly less persuasive than any other combination of outlet and speaker source at reducing misperceptions.
While these studies often use widely-shared rumors or conspiracy theories as treatment materials, the literature directly testing whether media source impacts belief in false information is sparse. In a recent study on prior exposure and perceived accuracy of fake news on social media, Pennycook et al. (2018) find that including a mainstream liberal or conservative source on a fake news article headline has no effect on participants' perceived accuracy of the claim made in the headline.Footnote 3 Here, the authors focus on demonstrably false news headlines published primarily by fake news websites.
2.2 Theoretical contributions
Despite the recent growth of research on these topics, the area in which all of these studies interact has not been investigated systematically. Specifically, we address the following important gaps in the literature:
First, while several studies show that people rely on news source (specifically, whether the source is politically congenial or not) to form their opinions and attitudes about the news content, this research generally does not focus on false information. Although some studies test how the source of corrections to political misinformation impacts belief in false content, there is little research evaluating whether the source impacts people's initial perceptions (i.e., before any corrected information is presented, in the case of false information) of whether the content of a news report is accurate. It therefore remains unclear whether the source (congenial or uncongenial) or content (true or false) of news articles drives individuals' initial perceptions of whether the news they encounter is credible.
Second, few existing studies examine individuals' perceptions of the source vs content credibility of mainstream media sources. As we noted above, the new studies that have begun to explore this question use fake news headlines as treatment materials (Pennycook and Rand, 2019b; Pennycook et al., 2018) rather than the type of misleading content that major news outlets could actually spread.
To fill in these gaps in the literature, our experiment randomized not only the source and content of a news article, but also focused on ‘ambiguous’ false information in the mainstream media, which we mentioned briefly in the introduction. Pairing ambiguous content with major news outlets as the information source is an important test of partisan motivated reasoning in the mainstream media because, simply put, mainstream outlets do not systematically disseminate ‘fake news.’ Tabloid newspapers and malicious online media are most responsible for the deliberate spread of undeniably false information, or provocative news that people most likely regard as slanted. These outlets are motivated to generate intense short-term profit by providing content that maximizes partisan utility regardless of its truth value. In contrast, we have every reason to believe that major news networks avoid disseminating groundless news as much as possible, because doing so would damage their reputation as trustworthy news sources.
Nevertheless, mainstream sources have been known to unintentionally publish information that is false, or to intentionally or unintentionally exclude or alter certain details. For example, ABC News suspended one of its investigative journalists for publishing an inaccurate news report about Trump's involvement with Russia during the 2016 presidential campaign (Wang, 2017). Likewise, MSNBC aired a false story about the Kate Steinle murder verdict in December 2017 (Darcy, 2017), and Fox News published a bogus report about Russian military power that it picked up from a tabloid newspaper. Indeed, lists of severe cases of misreporting by the mainstream media have been compiled every year since 2013 (Mantzarlis, 2018).
What makes this type of information so troubling is that when it is initially published, it often fails to provide any clues that it should be read with scrutiny. Furthermore, there is no guarantee that readers will be exposed to any subsequent corrections that may or may not be made by the original source.
Another problem with this ambiguous content is that because it is published by major news networks which, on the whole, try to pursue journalistic objectivity, it often lacks clear partisan cues that could alert readers about the trustworthiness of the information. While articles disseminated by fake news websites, citizen journalism, and other non-mainstream sources often have a clear partisan or ideological leaning, mainstream sources are more likely to publish – or try to publish – objective reports. When average news consumers encounter such content, they often cannot rely on their pre-existing partisan or ideological notions to judge what they read.
In sum, when the mainstream media publish ambiguous information about an issue which provides neither a clear indication that the information is false nor any specific partisan or ideological cues, it is difficult for people to know for sure whether they should view the information they encounter with suspicion.
2.3 Hypothesis
Consistent with the theory of motivated reasoning, we predict that people rely on heuristics, such as whether a report is published by an ideologically congenial source, to judge news articles. Specifically, we test the following hypothesis:Footnote 4
Hypothesis 1:
When the false statement is presented with the Fox News [CNN] header, Republicans and conservatives [Democrats and liberals], as compared to Democrats and liberals [Republicans and conservatives], are more likely to think that the statement is accurate.
We selected CNN and Fox News to use as our news sources for several reasons. First, Pew Research Center data show that CNN is the most favored news outlet among consistent liberals, while Fox News is the most favored outlet among consistent conservatives (Mitchell et al., 2014). Data from the same Pew survey also show that within ideological groups, ‘consistently liberal’ and ‘mostly liberal’ audiences trust CNN more than they distrust it, and distrust Fox News more than they trust it.Footnote 5 Likewise, ‘consistently conservative’ and ‘mostly conservative’ groups trust Fox News more than they distrust it, and distrust CNN more than they trust it. In addition, previous research finds that trust in media outlets including CNN and Fox News predicts individuals' vote intentions in the 2016 election with 88% accuracy (The Economist, 2016). Finally, we draw on existing experiments that use a label-switching approach with CNN and Fox News as media outlets on opposite sides of the ideological/partisan spectrum (Baum and Gussin, 2007; Turner, 2007; Baum and Groeling, 2009).
We note that whether these sources actually disseminate false information is not important for the purpose of our research. What matters is whether American people think that the news content provided by these major media networks is believable. Given that President Trump tweeted, ‘Any negative polls are fake news, just like the CNN, ABC, NBC polls in the election’ (6 February 2017), it is sensible to assume that some American citizens believe that even major news networks provide false information, and that they rely on an article's source cue alone without carefully reading its content to make a judgment about the information's truth value.
3. Research design
We administered a randomized survey experiment on 21–22 July 2017.Footnote 6 The study participants were workers who had registered at the online marketplace, Amazon Mechanical Turk, or MTurk (http://www.mturk.com). Although survey samples obtained from MTurk are not probability samples, several studies have shown that the estimates of treatment effects in experiments obtained from MTurk participants mirror those of nationally representative samples (e.g., Horton et al., 2011; Berinsky et al., 2012; Mullinix et al., 2015; Coppock et al., 2018). Most importantly, Coppock et al. (2018) recently replicated 27 existing studies using MTurk and show a high correspondence between the MTurk results and results based on nationally representative samples.Footnote 7
The total number of valid responses is 3,932. Since our treatment materials are news stories with only subtle differences, we expected that the treatment effects could be relatively small. For this reason, our sample is larger than those of typical MTurk experiments on political misinformation.Footnote 8 Each participant was paid $0.70 to complete our survey, and the average response time was just over 4 min.
We note that MTurk samples tend to include a higher percentage of Democrats or liberals than the general population. While acknowledging this limitation in terms of external validity, our sample is large enough to overcome a potential lack of statistical power and is thus suitable for making causal inferences within subgroups defined by participants' partisanship or ideology. Specifically, the number of participants within each group is 1,078 for conservatives, 2,081 for liberals, 1,079 for Republicans, and 2,107 for Democrats.Footnote 9
3.1 Treatment materials
All study participants were first asked to answer a series of basic demographic and attitudinal questions. Then, they were randomly exposed to one of six versions of an article excerpt on health care reform. The content of the article excerpt was either true information (control condition) or false information, and the source was either CNN or Fox News (as indicated by a large header across the top of the page), or was not included (control condition). The article title, author, and date were constant across all treatment groups. Consistent with our concept of ‘ambiguous’ false information, the article title and author name were intended to avoid cuing partisanship or ideology.
Within the content of the news report, the first two sentences of the article were also constant across all treatment groups; the third and final sentence differed across treatment groups in that it contained either true or false information. Figure 1 shows the article excerpt containing false information with the Fox News header, and Figure 2 shows the article excerpt containing true information with the CNN header. The remaining four excerpts are shown in the Supplementary Materials.
In selecting an article topic to use for our experiment, we focused on the issue of health care reform. Debate over repealing and replacing the ACA permeated the news media following President Donald Trump's election, and it was particularly salient in the summer of 2017, when Republicans in Congress were fighting to pass bills that would redesign health care in the USA. Health care was – and continues to be – a complex topic for average Americans to understand; members of Congress offered dozens of amendments and revisions to these acts for months on end (Henry J. Kaiser Family Foundation, 2017; Park et al., 2017), which led to nearly constant news coverage of the health care debate by mainstream media sources in 2017.
To present participants with ambiguous but false information, we focused on one provision of the new health care bills proposed by the House and Senate that never changed: the under-26 coverage provision. Under the ACA, this provision stipulates that young adults can remain on their parents' health insurance plan until they turn 26 years old. The under-26 coverage provision has remained in place in all new versions of the health care bills proposed by the House and Senate, and we presented this claim in our true information materials. In our false information materials, we altered this information to state that in the new health care bills proposed by the House and Senate, young adults would lose coverage through their parents' health insurance plan when they turn 18. The exact text of the true and false information presented in our experiment is shown in the third sentence of the article excerpts in Figures 1 and 2.
There are two other notable reasons why we focused on the under-26 coverage provision. First, it is one of the few issues in the health care debate that enjoys wide bipartisan support (Kirzinger et al., 2016). In order to estimate the effects of news source by partisanship or ideology, we sought to avoid eliciting strong partisan or ideological responses based on news content (e.g., Kelly, 2019). In other words, we tried to manipulate the content of the excerpts only with respect to whether the information was true or false. By presenting participants with false information about an issue for which individuals' pre-existing notions are not strongly divided by their partisanship or ideology, we isolate the impact of source on belief in the misinformation irrespective of whether the statement itself is congenial to one ideological group over another.Footnote 10
We also focused on this provision because public knowledge about it is generally high. A nationally representative survey finds that 52% of Americans identify the under-26 coverage provision as part of the ACA with ‘high certainty,’ and 81% believe it is part of the ACA regardless of their certainty (Gross et al., 2012). All other provisions included in this survey were correctly identified as part of the ACA with high certainty by less than 40% of participants. These findings suggest that, on the whole, participants have a reasonably good understanding of this specific provision. This implies that the magnitude of measurement error in our dependent variable (the perceived accuracy of the news content) is relatively small.
3.2 Outcome measure
After reading the article containing true or false information, all participants were first asked about their interest in reading the rest of the article.Footnote 11 They were then asked the following question to measure our outcome variable:
‘How accurate is the following statement? In the new health care bills proposed by the House and Senate, young adults would lose coverage through their parents’ health insurance plan when they turn 18.’
The response options were ‘very accurate’ (4), ‘somewhat accurate’ (3), ‘not very accurate’ (2), and ‘not at all accurate’ (1). Note that the statement is false. In the literature on misinformation, this question format is a common and standard way to measure how accurate study participants perceive a false statement to be (e.g., Nyhan and Reifler, 2010; Kuru et al., 2017; Pennycook and Rand, 2017, 2018, 2019b; Pennycook et al., 2018).Footnote 12
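For concreteness, this 4-point coding can be represented as a simple mapping from response labels to numeric scores; the dictionary and variable names below are purely illustrative, not taken from the study's materials:

```python
# Hypothetical coding of the 4-point perceived-accuracy scale.
ACCURACY_SCALE = {
    "very accurate": 4,
    "somewhat accurate": 3,
    "not very accurate": 2,
    "not at all accurate": 1,
}

# Example: convert a few raw responses into the numeric outcome variable.
responses = ["somewhat accurate", "not at all accurate", "very accurate"]
scores = [ACCURACY_SCALE[r] for r in responses]  # [3, 1, 4]
```

Higher scores thus indicate greater belief in the (false) statement, which is the direction in which the treatment effects below are interpreted.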
Finally, the survey concluded with a question intended to assess the quality of our responses, as well as an opportunity to provide written feedback to the survey. Participants also saw a debriefing message about the nature of the study that differed by treatment condition (true vs false) and clarified that the information was fabricated for the purpose of research.
3.3 Statistical models
To conduct our analysis, we begin by dividing our samples by either ideology (liberal or conservative) or partisanship (Democrat or Republican) to explore whether the treatment effects differ by study participants' ideology or partisanship. Although the two measures are related, they comprise slightly different subsets of our sample.Footnote 13
We then run the following OLS regression model for each subset of participants:

$$Y_i = b_0 + b_1 \text{False}_i + b_2 \text{CNN}_i + b_3 \text{FoxNews}_i + b_4 (\text{False}_i \times \text{CNN}_i) + b_5 (\text{False}_i \times \text{FoxNews}_i) + \varepsilon_i \quad (1)$$

where $Y_i$ is our measure of the perceived accuracy of the false statement.Footnote 14 The parameter for the base category (no source, true information) is $b_0$. The model includes three dichotomous variables, False (= 1 if the false information is presented, = 0 otherwise), CNN (= 1 if the CNN header is shown, = 0 otherwise), and Fox News (= 1 if the Fox News header is shown, = 0 otherwise), their interactions, and a random error term.
Finally, we compare the estimates of $b_4$ and $b_5$ between Democrats (or liberals) and Republicans (or conservatives) to test the hypothesis that the perceived accuracy of the false information from a given source differs between participants with different political preferences. The coefficient $b_4$ measures the interaction effect of exposure to false information with the CNN header on participants' perceived accuracy of the false statement, whereas the coefficient $b_5$ measures the corresponding interaction effect with the Fox News header.Footnote 15
For ease of estimation, we run a model with all the variables in Equation 1, a variable measuring each respondent's partisanship (Democrat or Republican) or ideology (liberal or conservative), and its interaction with all the included variables. This triple-interaction model is mathematically equivalent to running two separate regression models and comparing the differences between the estimates for each variable included in Equation 1.
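The estimation in Equation 1 can be sketched as an ordinary least squares fit on a design matrix with the two interaction terms. This is a minimal illustration on simulated data, not the authors' actual estimation code; all values and effect sizes here are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000

# Simulated 2 x 3 randomized design: content (true/false) x source (none/CNN/Fox).
false = rng.integers(0, 2, n).astype(float)   # 1 if false information shown
source = rng.integers(0, 3, n)                # 0 = no source, 1 = CNN, 2 = Fox News
cnn = (source == 1).astype(float)
fox = (source == 2).astype(float)

# Synthetic outcome: perceived accuracy of the false statement (roughly 1-4),
# where exposure to false information raises perceived accuracy by 0.6 on average
# and, by construction here, the source cues have no effect.
y = 2.0 + 0.6 * false + rng.normal(0, 0.8, n)

# Design matrix for Equation 1: intercept, main effects, and interactions.
X = np.column_stack([np.ones(n), false, cnn, fox, false * cnn, false * fox])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
# b[0] is b0 (baseline: true info, no source); b[4] and b[5] estimate b4 and b5.
```

The paper's triple-interaction specification would add a partisanship (or ideology) indicator and its interactions with every term above, which is equivalent to fitting this model separately within each subgroup and comparing coefficients.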
3.4 Exploratory analysis
In addition to the confirmatory analyses based on our pre-registered hypothesis, we undertake further exploratory analyses to examine the robustness of our findings and to explore the heterogeneity of the treatment effects among different types of participants. The results of these additional analyses are presented in the Supplementary Materials.
First, we added a set of additional pre-treatment demographic variables to the triple-interaction model noted above. Adding these covariates improves the efficiency of our estimation.Footnote 16 More importantly, we can control for the association between our main moderator, partisanship or ideology, and other demographic variables. Specifically, we added a set of dummy variables for participants' age group (18–24 [baseline], 25–34, 35–44, 45–54, 55 or older), gender (male [baseline], female, or non-binary), level of education (without a college/university degree [baseline], with a college/university degree), race (white [baseline], non-white), and region of residence (East North Central [baseline], East South Central, Middle Atlantic, Mountain, New England, Pacific, South Atlantic, West North Central, West South Central).
Second, we excluded participants who could have used search engines to check the accuracy of the statements. At the end of our survey, we included the following question: ‘It is essential for the validity of our research that we know whether participants looked up any information online during the study. Did you look up any information during the study? Please be honest; you will still be paid and you will not be penalized in any way if you did.’ About 6% of participants reported that they had looked up the information. We ran our analyses after excluding these participants.
Third, we focused on high-quality participants by excluding ‘speeders’ who completed the survey faster than the first quartile of the distribution of response time, which was 2 min. In other words, we excluded about 25% of participants who might have spent insufficient time reading the treatment materials and/or survey questions. Note that the median response time for the survey was 3 min, which, in a survey with 20 questions and an article excerpt, leaves little time to look up information on an online search engine.
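The two exclusion rules above (self-reported look-ups and 'speeders' below the first quartile of response time) can be sketched as simple data filters. The data frame and column names here are hypothetical, for illustration only:

```python
import pandas as pd

# Hypothetical survey responses; columns are illustrative, not from the study.
df = pd.DataFrame({
    "looked_up": [0, 1, 0, 0, 0, 0, 0, 1],          # 1 = reported looking info up
    "response_seconds": [250, 300, 90, 150, 400, 110, 200, 260],
})

# Exclusion 1: drop participants who reported looking up information online.
no_lookup = df[df["looked_up"] == 0]

# Exclusion 2: drop 'speeders' who finished faster than the first quartile
# of the response-time distribution (2 min in the actual study).
q1 = df["response_seconds"].quantile(0.25)
attentive = df[df["response_seconds"] >= q1]
```

Each filtered data set would then be re-run through the same triple-interaction model to check that the main results are not driven by inattentive or search-assisted responses.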
Finally, we conducted some exploratory analyses based on other subgroups in our sample. Although some of the sample sizes are relatively small within these subgroups, they highlight some of the other characteristics that may help explain which individuals are more or less susceptible to believing false information from different news sources. Specifically, we subset our data by participants' level of interest in politics, their level of trust in the media, and their level of political knowledge (measured by five political knowledge test questions).
4. Results
Figure 3 presents the results of our regression analysis graphically, showing the average perceived accuracy of the false statement compared to the baseline control condition (true information, no source presented), broken down by partisanship (top) and ideology (bottom). The point estimates are indicated by the dots, while the 95% confidence intervals are indicated by the horizontal lines. The coefficients that are significant at the 0.05 level are shown in black, while insignificant coefficients are in grey. As the figure shows, exposure to false information (without the source cue) is the single variable showing the largest and most highly significant effect on individuals' rating of the false statement as accurate. Figure 3 also shows that this pattern is independent of participants' ideology or partisanship. Regardless of their political preferences, individuals exposed to false information are significantly more likely to rate the false statement as accurate than individuals exposed to true information.
The coefficients that are relevant to our hypothesis are, however, those for False × CNN ($b_4$ in the model) and False × Fox News ($b_5$ in the model) and, more importantly, the differences in these coefficients between subgroups of participants of a different ideology or partisanship. The results are contrary to our expectation in Hypothesis 1. Figure 3 shows that among participants who were exposed to false information originating from CNN or Fox News (compared to the baseline of no source), there are few significant differences by partisanship (top) or by ideology (bottom). Specifically, the coefficient estimates for the interaction of the false information condition and the CNN condition $(\widehat{b_4})$, and the interaction of the false information condition and the Fox News condition $(\widehat{b_5})$, are all small and statistically insignificant with one exception. As the top half of Figure 3 shows, Democrats are less likely to think that the false statement is accurate, compared to the baseline condition, when it is attributed to Fox News $(\widehat{b_5} = -0.20, p < 0.05)$. However, we are not able to reject the null hypothesis of no difference between Democrats and Republicans (the leftmost panel in Figure 3). In terms of the effect of exposure to false information originating from CNN, Republicans are not significantly less likely to think the false statement is accurate, as compared to the baseline condition.Footnote 17 Again, this result is contrary to our expectation.
As we noted above, we also undertake exploratory analyses to verify the robustness of our results and to further examine treatment effect heterogeneity among various subsets of participants. Overall, the results of these additional analyses (presented in the Supplementary Materials) suggest that our main results are robust to different model specifications and that there is no substantial treatment effect heterogeneity.
5. Conclusion
Our most consistent – but unexpected – finding is that mere exposure to ambiguous but false information explains study participants' rating of a false statement as accurate, regardless of the news source from which the information originates and of participants' political beliefs. These findings run contrary to existing studies that emphasize the role of partisan motivated reasoning in how individuals process information (e.g., Druckman et al., 2013), and to research suggesting the important roles that heuristics (including news sources) play in opinion formation (e.g., Baum and Gussin, 2007). Rather, our experiment demonstrates that whether the news is from a congenial or uncongenial source is less important than the content itself (false or true information) in explaining study participants' belief in a false statement.
Our results add to a growing literature that questions the validity and applicability of the theory of partisan motivated reasoning. Leeper and Slothuus (2014) argue that the extent to which motivated reasoning operates in the context of partisanship is contingent on citizens' individual predispositions and goals. In many cases, individuals lack the motivation or the effort required to engage in motivated reasoning when they form opinions. Furthermore, Guess and Coppock (2015) find that instead of engaging in motivated reasoning, individuals exposed to information about various contentious issues update their beliefs in parallel – regardless of their ideological predispositions. Like these studies, our findings suggest that partisan motivated reasoning is conditional on contexts and types of information.
Our results may also be consistent with Bullock's (2011) finding that when individuals are exposed to both factual information and elite partisan cues, they are capable of adopting policy views that are independent of the views of partisan elites. Although Bullock's findings led him to be optimistic about partisans' ability to evaluate information regardless of its source, our findings tell a more cautionary tale. We argue that people tend simply to believe information uncritically when they are exposed to it, even false information (e.g., Hasher et al., 1977; Gilbert et al., 1993). Pennycook et al. (2018) recently found evidence of this phenomenon as it pertains to fake news: including a news source has no impact on participants' perceived accuracy of fake news headlines. Our research corroborates this account and suggests that the power of exposure to false content may extend beyond fake news headlines to plausible, mainstream news articles.
In an era of increasing political polarization and supposedly low trust in the mass media, people still tend to believe what they read. Considering that holding false beliefs about current events, politicians, and policies can affect people's subsequent political behaviors and create macro-level errors in public opinion (Flynn et al., 2017), this is a cause for concern. The implication of our research, therefore, is that major media outlets should attempt to prevent any false information from reaching readers or viewers in the first place, rather than emphasizing the sources of the information or focusing on who takes in information from which news outlet(s). Specifically, journalists have an obligation to ensure that all published material is fully supported by facts. Indeed, little prevents Americans from swallowing information fed to them by mass media sources. For better or for worse, content appears to ‘trump’ source, partisanship, and ideology among American consumers of the mass media.
Supplementary materials
The supplementary materials for this article and a complete replication package can be found at https://doi.org/10.1017/S1468109919000082 and https://doi.org/10.7910/DVN/K1R14D.
This article was reviewed in the “Result Blind Review” category.
Katherine Clayton is a pre-doctoral research fellow in the Program in Quantitative Social Science at Dartmouth College, a graduate of the Dartmouth College Class of 2018, and an incoming doctoral student in political science at Stanford University. Her research focuses on political behavior in the U.S. and Europe.
Jase Davis is a Dartmouth College Class of 2018 graduate.
Kristen Hinckley is a Dartmouth College Class of 2017 graduate.
Yusaku Horiuchi is a Professor of Government and the Mitsui Professor of Japanese Studies at Dartmouth College. His research and teaching interests include comparative politics, political behavior, and political methodology.