
Assessing the Credibility of Constitutional Experts

Published online by Cambridge University Press:  01 December 2022

Eileen Braman*
Affiliation:
Department of Political Science, Indiana University, Bloomington, Indiana, USA

Abstract

This study investigates how citizens assess the credibility of constitutional experts on matters of government authority. Analyses of data from two similarly designed experiments, conducted with national samples, reveal that partisanship, race, and level of education are significant predictors of survey respondents’ willingness to extend credibility to constitutional experts. The compatibility of the views expressed by experts with respondents’ own policy views on issues that are the subject of proposed government action is also important. Evidence shows that this consistency is more important in decisions that experts are credible than in decisions that they are not credible, suggesting that esteem motives, distinct from the social-identity motives scholars have theorized to be important in partisan resistance to expertise, are relevant in the decision to credit experts who express views congenial to our own. The implications of these findings for holding government officials accountable to constitutional limits on their authority are considered.

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Law and Courts Organized Section of the American Political Science Association

Introduction

We live in a highly specialized society where citizens cannot be expected to have the knowledge or inclination to comprehend the implications of every socially relevant behavior. To deal with this, journalists commonly employ the opinions of experts on matters of public concern (Merkley 2020a). Expert commentary is often used by news media to aid their consumers in understanding complicated causal relationships, historical events, and factors that can be relevant in policy decisions of government officials. Where those who are knowledgeable in some recognized field tend to agree about the effects of some phenomenon of interest, journalists might even refer to expert consensus to alert readers to the weight of evidence regarding a particular claim (Johnston and Ballard 2016; Merkley 2020a; 2020b).

One domain where reporters commonly employ expert commentary is on matters related to the novel assertion of authority by government actors, such as Congress and the President. Conceptions of procedural justice (Tyler 2021) in our constitutional democracy suggest citizens want to be confident that appropriate government actors are working to address matters of public concern and that they are doing so within the bounds of their rightful authority. Questions about which branch or level of government is empowered to address specific problems concerning immigration, mask mandates, or the debt ceiling can be challenging because constitutional provisions regarding institutional powers are unclear. Many rules constraining the behavior of government actors involve esoteric constitutional provisions and opaque bodies of doctrine from past historical eras. Thus, journalists often cite law professors (or even political scientists) as constitutional experts to help citizens understand how legal constraints may relate to the actions of government officials. As such, these authorities can play an important role in our democratic system; they are often the first to signal a problem if office holders are taking actions that exceed the bounds of their legitimate authority.

According to longstanding research on persuasion, cited experts are high-credibility sources that can wield substantial influence in moving people’s opinions toward the evidence and arguments they express (Petty and Cacioppo 1986). Hovland et al. (1953) originally theorized that the power of persuasion for such authorities depends on the extent to which citizens believe their assertions are valid (footnote 1), coupled with their perceived trustworthiness (Whitehead 1968; Pornpitakpan 2004).

In our current polarized environment, constitutional experts may not be uniformly considered the high-credibility sources they once were. Research on the acceptance of scientific findings and expertise indicates growing trends in anti-intellectualism prompted by populism (Nichols 2017) and skepticism about the motives and methods of researchers who produce results contrary to our own worldview (Hornsey and Fielding 2017). Recent studies reveal that journalistic allusions to experts or even expert consensus do not necessarily sway citizens’ thinking in a direction consistent with the views of the expert sources who are cited. This is the case with respect to social-science authorities, such as economists (Johnston and Ballard 2016), as well as hard-science experts with regard to issues such as global warming (e.g., Kahan et al. 2011; Bolsen and Druckman 2018), vaccine effectiveness (Jolley and Douglas 2017), and the safety of nuclear power and genetically modified organisms (Merkley 2020b) (footnote 2).

In this age of academic skepticism, where political figures on both sides of the political aisle constantly tout alternative versions of what the Constitution forbids and requires, it is not at all clear that citizens will see cited legal experts as credible on issues of national authority. Research on affective partisanship shows that citizens think of copartisans as team members and often demonstrate antipathy toward members of partisan outgroups (Iyengar et al. 2019). Do we see experts on constitutional authority in a similar light depending on whether they express opinions compatible with our predispositions?

Legal interpretation, even at the highest levels, is not hard science. There are no objective truths in the same sense. Although constitutional rules are designed to guide and constrain the behavior of government officials and institutions, they are entirely socially constructed and are interpreted by judicial officials with differing worldviews. As Bybee (2010) effectively observes, citizens have an increasingly sophisticated understanding of the role personal factors play in judges’ interpretations. Under such circumstances, what proprietary knowledge do constitutional scholars possess in the minds of citizens? What determines whether individuals find constitutional scholars credible experts regarding questions of government authority?

I delve into these questions using evidence from two similarly constructed experiments administered to nationally representative samples: Time-sharing Experiments for the Social Sciences (TESS) and the Cooperative Congressional Election Study (CCES). I conduct the inquiry to discover the factors that are important when people credit and discredit cited authorities. I also ask what groups are most likely to express uncertainty about constitutional experts. Exploring assessments in this way helps shed light on the reasons individuals engage in motivated reasoning about constitutional expertise based on their agreement with the content of the views experts express, as distinct from other factors that have been found to be important in assessments of expertise, such as partisanship and degree of educational attainment.

Thinking about credibility of constitutional experts

In the last decade, there have been many studies across several disciplines, including psychology, political science, and sociology, looking at individual cognition with respect to expertise. A good proportion of these studies are specifically interested in citizens’ thinking with respect to climate change (Hornsey et al. 2016), but there are also studies that consider how citizens respond to social scientists’ input about policy matters (Johnston and Ballard 2016) and scientific consensus with regard to other policy-relevant issues (e.g., Merkley 2020b; see also Milosh et al. 2020 for an example of emerging work on coronavirus disease 2019).

Druckman and McGrath (2019) offer a process-based argument as to why people may resist authority. Specifically, the authors argue that, as people’s ability to directly assess information decreases, the complexity of processing (and, thus, the points for sincere questioning) increases. As such, parsing directionally motivated skepticism from the differing requirements of evidence that could be necessary to sway people’s opinion is difficult. Druckman and McGrath (2019) point out that when evidence is presented by an intermediary, people must first determine whether they find the expert credible before they assess the weight that the expert’s argument and evidence should have in their judgments. Their point is well taken, but perhaps contemplates cognitive processes that are more independent in theory than in practice. There is experimental evidence, for instance, from one of the earliest studies on motivated reasoning (Lord et al. 1979) that people tend to rate studies producing evidence contrary to their opinions about the death penalty as less well executed than do individuals given identical studies who are led to believe the results are consistent with their opinions. This suggests that decisions about expert credibility and the soundness of scientific methods may be inextricably tied to whether the proffered evidence is compatible with one’s prior beliefs.

Although there are studies in law and psychology exploring the persuasiveness of expert witnesses in the context of civil and criminal litigation, there are scant studies on the role legal experts may have in shaping public opinion. One exception is a study by Simon and Scurich (2013), who looked at the influence legal experts can have in shaping individual judgments about the factors that are relevant in judges’ decisions in particular cases. The authors also investigated what shapes people’s confidence in legal experts, and found that legal commentators do not shape individuals’ evaluations of jurists’ decisional processes as much as people’s agreement with the outcome of those decisions themselves. Moreover, people’s attitudes about the competence and reliability of legal commentators are determined by whether experts express a view that is consistent with their preferred outcome (footnote 3). These findings suggest that compatibility with the views legal analysts express can have a substantial influence on citizens’ assessments of expert credibility.

Another factor that has been found to be important in the evaluation of experts is political orientation. Studies have demonstrated that Democrats and liberals tend to defer to expertise more than Republicans and conservatives (Blank and Shaw 2015 [role of conservatism in dampening the influence of expertise]; Shen and Gromet 2015; Johnston and Ballard 2016 [using a combined measure of partisanship and ideology]; Suhay and Druckman 2015; Bolsen and Druckman 2018). One might suspect that the rhetoric of former President Donald Trump plays an important role in this observed disparity. Although he certainly may have exacerbated partisan differences in citizens’ reliance on experts, evidence of partisan (and/or ideological) bias in the interpretation of expertise predates the former president becoming a major focal point in American politics (Gauchat 2012; Suhay 2017).

There are also studies demonstrating that, where findings threaten the values of those who identify with the Democratic party, Democrats seem to be less likely to defer to expertise (Kraft, Lodge, and Taber 2015; Nisbet, Cooper, and Garrett 2015). These studies highlight a problem with some of the extant research on the acceptance of scientific evidence: agreement with the findings of a particular study (or scientific consensus on a policy matter) is often conflated with partisanship. For instance, Republicans who have been found to be more resistant to scientific evidence also tend to be against the specific policy measures involved in reducing global warming or requiring citizens to wear masks. In the current study, by randomly assigning participants to conditions where experts express views consistent (or inconsistent) with their own, as well as controlling for partisan identity in my analyses of credibility assessments, I can distinguish the relative influence of agreement and partisanship on evaluations of constitutional experts. This is important not only to separate the causal influence of these distinct factors, but also because different goals may be relevant for individuals engaged in motivated reasoning based on their agreement with the views authorities express versus motivated reasoning that is attributable to partisan identity.

The studies by Lord, Ross, and Lepper (1979) and Simon and Scurich (2011; 2013) mentioned previously suggest that people tend to find research with findings they agree with more credible than research with which they disagree. This sort of motivated reasoning serves self-esteem (Kunda 1990) by helping individuals believe that their personal attitudes are correct or based on sound reasoning. These cognitions can bolster self-perception by helping people feel more intelligent. Concluding that scientists who conduct research that supports our attitudes are competent, skilled, and effective also serves individuals’ egos by attributing positive traits to others who have expressed similar viewpoints (Brewer 2007).

Other researchers, including Kahan et al. (2011), have suggested that people engage in motivated reasoning about expertise for entirely different reasons: to feel close to relevant social groups in an effort to solidify valued social identities. Although bolstering social identity by attempting to forge close bonds with similar others can also enhance esteem, the mechanism for doing so is more indirect. Hornsey and Fielding (2017) elaborate on this idea with respect to partisanship:

[T]o the extent that people identify with a certain group – and to the extent this group prescribes antiscientific beliefs – internalization of antiscientific views is likely. …In the last three decades… scientific debates have been caught up in political debates. …Republicans sought to discredit the moral integrity of the scientific community. …For a self-identified Republican, the motivation would be to look favorably on their party’s views and to dismiss or dispute contradictory evidence. Given this it is perhaps not surprising that trust in science has declined since the 1970s and that this decline is attributed exclusively to conservatives [citations omitted].

As such, parsing the influences that are a result of agreement with the views authorities express from those that are a consequence of partisanship is theoretically important (footnote 4).

Another factor that could influence individuals’ assessments of experts is their education, particularly where, as in this study, the constitutional experts cited are identified as academics. This may be true for different reasons. First, people who have attained high levels of education might recognize the value of knowledge gained through degree programs (or there may be an element of sunk costs, where they are more likely to value such knowledge because they have gone through the effort of attaining higher levels of education themselves). A second reason education could be important is that scholars have identified anti-intellectualism as a factor that is relevant in the evaluation of expertise (e.g., Merkley 2020b). These twin mechanisms suggest two distinct ways education might matter. If highly educated people value expert knowledge more, they should be more likely to say that experts are credible; alternatively, if those with less education react against elitism, they should be more likely to dismiss experts as not credible. To be sure, this is a fine distinction, but we may be able to explore it in the analyses by looking at the factors that influence each type of assessment.

There are several other demographic factors that could be relevant in the evaluation of expertise. People from traditionally privileged groups (i.e., whites and/or men) could be more likely to recognize expert authority than those from nontraditional backgrounds (e.g., nonwhites, women). This could be the case with constitutional authorities because certain demographics remain underrepresented in the higher echelons of the legal hierarchy where constitutional experts reside (Kay 1991; Hurwitz and Lanier 2008). Hornsey and Fielding (2017) suggest that this type of acceptance (or rejection) of expertise could be rooted in a kind of system-justification motivation. Alternatively, some might expect women to be particularly deferential to expert authority based on traditional gender stereotypes (Anderson et al. 2012). A final characteristic that could be important in the assessment of expertise is age. Younger individuals may be more likely to defer to experts than older ones because of their relative lack of knowledge and experience.

Design

Data for this inquiry come from two similarly designed experiments looking at citizen assessments of national legislative and unilateral executive authority. Both experiments involve a hypothetical article about proposed government action. Participants were not given a source for the article, but each scenario began with a date and “AP” to simulate the style used by the Associated Press news service. The TESS study of congressional authority took place in September 2016, and the experiment involving executive authority was included on the 2017 CCES. The Appendix includes treatments and sample characteristics for participants in each study (footnote 5).

Both experiments were originally designed to answer a somewhat different question about how rules and political context interact in the minds of citizens to shape views of the appropriate use of government authority (Braman 2021). Each experiment was conducted to discover how participants’ policy views and their feelings about relevant political actors combined with majority support for the proposed action and expert commentary about compliance with rules to influence assessments of the appropriateness of state action.

Experimental scenarios employed a consensus of experts about the exercise of national authority to address a matter of public concern. Right after participants were exposed to treatments where experts expressed a view that was divided, consistent, or inconsistent with their expressed policy preferences, survey respondents were asked whether they thought such experts were credible regarding the question of government authority. As detailed below, there is some very interesting variance to explain in how study participants answered this question. Moreover, participants were randomly assigned to conditions that can be slightly retooled to fit this inquiry. These answers about expert credibility are appropriate for exploration as a dependent variable to shed light on the question of how citizens think about constitutional authorities.

The experiment on assessments of legislative authority involved a 2 × 3 × 2 design in which the issue that was the subject of congressional action, expert consensus, and the level of public support for proposed measures were manipulated. Participants in the TESS administration read one of two articles that stated Congress was considering an action that would either impose limits on the shipment of guns in interstate commerce or cut off federal funds to states that allowed undocumented students to receive in-state tuition benefits (footnote 6). Prior to reading the article about congressional authority, to gauge participants’ policy views on the issues that were the subject of the scenarios, survey respondents were asked questions related to their support for limiting gun access and restricting benefits to U.S. citizens and those in the country legally.

Each article stated that national legislative action could interfere with state prerogatives in our federal system of government, specifically pitting congressional authority against states’ rights. What participants were told about expert consensus concerning congressional authority to take each action was manipulated in the experiment. One third of participants were told that the consensus among constitutional experts was that Congress was acting within its authority under Article I, one third were told that expert opinion on the issue was divided, and one third were told that legal experts agreed the action would infringe on states’ reserved powers in our federal system. This consensus was exemplified with a quote from an authority identified as a constitutional law professor at George Washington University Law School (footnote 7). The level of public support for such measures was also manipulated in the article. Half of participants read an article stating that 85% of the population supported such measures, and the other half were told only 15% of the population supported the proposed action (footnote 8).

After reading the hypothetical article referencing expert consensus, participants were asked whether they thought constitutional law scholars were credible on the issue of government authority related to the proposed measure, and whether they thought the action was an appropriate exercise of legislative authority. The question about expert credibility is the dependent variable analyzed in this inquiry. Respondents were given three options to indicate whether they thought experts were credible: yes, no, and do not know.

The design of the experiment on executive authority deployed on the CCES in the autumn of 2017 was purposefully quite similar. That administration involved a 3 × 2 design in which participants read an article stating that President Trump was considering issuing an executive order that would take away federal funding from sanctuary cities that declined to report undocumented immigrants to federal authorities. Before participants read the article, they were asked about their support for sanctuary cities. Again, one third of participants read an article stating the consensus among experts was that the President had the authority to do so, one third were told experts agreed he did not have such authority, and one third were told expert opinion was divided on the question of authority. The quote exemplifying consensus was attributed to a professor who was “a noted authority on government powers.” Once more, half of participants were told that 85% of the population supported such measures and the other half that only 15% of the population supported the action. As in the study of legislative authority, after reading the article, participants were asked whether they thought constitutional experts were credible on this matter of government authority using the same three response options.

For the purposes of this study, I am interested in how individuals respond to experts who express views that are either consistent or inconsistent with their policy views on proposed government action, controlling for other relevant demographic variables. Participants in both administrations were randomly assigned to one of three expert consensus conditions (clear authority consensus, divided consensus, and clearly no authority consensus). I combine these categories with the variable reflecting whether survey respondents agreed or disagreed with the policy implications of the proposed measures described in the articles about gun control, in-state tuition benefits, and sanctuary cities. This results in three new categories across administrations: participants who agreed with the consensus opinion cited in the article for the condition to which they were assigned (n = 267 TESS; n = 156 CCES), participants for whom agreement was unclear because they were in the divided consensus treatment (n = 254 TESS; n = 164 CCES), and participants who disagreed with the consensus opinion expressed in the article (n = 273 TESS; n = 156 CCES).

I use dichotomous versions of assignment to each of these agreement conditions to see how participants in each group respond in relation to one another in the analyses presented. This should tell us if there are similar factors at play in deciding if an expert is credible or not. Moreover, a high proportion of respondents expressed uncertainty about constitutional experts; analyzing these responses will help us understand whether there are specific factors that contribute to that assessment.
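
To make this coding concrete, the sketch below shows one way the agreement categories and their dichotomous indicators could be derived from the randomly assigned consensus condition and respondents’ pre-treatment policy views. It is a minimal illustration assuming hypothetical variable names (consensus_condition, supports_policy); it is not drawn from the study’s replication code.

import pandas as pd

def code_agreement(consensus_condition: str, supports_policy: bool) -> str:
    """Collapse the assigned consensus condition and the respondent's
    pre-treatment policy view into agree / unclear / disagree categories."""
    if consensus_condition == "divided":
        return "unclear"  # divided consensus: agreement cannot be determined
    expert_backs_action = consensus_condition == "has_authority"
    return "agree" if expert_backs_action == supports_policy else "disagree"

# Hypothetical respondent-level data: condition assignment and prior policy support
df = pd.DataFrame({
    "consensus_condition": ["has_authority", "divided", "no_authority"],
    "supports_policy": [True, False, False],
})
df["agreement"] = [
    code_agreement(c, s)
    for c, s in zip(df["consensus_condition"], df["supports_policy"])
]
# Dichotomous indicators used in the models ("agree" is the excluded category)
df["disagree"] = (df["agreement"] == "disagree").astype(int)
df["unclear"] = (df["agreement"] == "unclear").astype(int)
print(df)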

Results

Table 1 sets forth the distribution of responses to the credibility question across administrations. In each sample, responses were quite similar. A plurality of respondents indicated that they agreed the cited expert was credible. Approximately one fifth of respondents said the expert was not credible, and approximately a third of respondents said they were not sure. Thus, here, as in previous studies (Blank and Shaw 2015), more people deemed the experts cited in the article credible than not credible. Appendix Tables A1 and A2 demonstrate this is also true across partisanship and agreement conditions in both samples.

Table 1. Expert Credibility Ratings Across Experimental Administrations

It is also worth noting that the proportion of respondents who express uncertainty about the credibility of experts is not negligible, comprising more than a third of all participants in both samples. This suggests that individuals who find experts credible are not just a mirror image of those who say legal experts are not credible. Different factors may be relevant in each assessment. Moreover, there might be systematic factors that explain who is likely to express uncertainty about expertise. To explore this, I run probit models on each of these distinct responses (credible, not credible, and do not know) in addition to an ordered probit model on the three-point response variable reflecting growing skepticism about expertise.

Because I conduct the multivariate analyses of judgments about expertise in several different ways, each model answers a different question about the cognitive processes involved in judgments of the credibility of constitutional authorities. I aim to answer three distinct questions in the analyses: What variables explain skepticism about constitutional experts? Are the same factors relevant in crediting experts and in dismissing their views? And who is most likely to express uncertainty about constitutional expertise?

Question 1: What variables explain skepticism about constitutional experts?

First, I employ an ordered probit model to see what leads to skepticism about expertise. The dependent variable in the analyses was the three-point response to whether the constitutional authorities are credible experts regarding the question of government authority described in the article. Responses were coded to reflect increasing skepticism: 1 for yes, 2 for do not know, and 3 for no. I employ each agreement category as a dichotomous variable to see how individuals in different categories respond to questions about expertise in relation to one another. Participants who agreed with the expert consensus in the condition to which they were assigned are the excluded category in these models (footnote 9).

I also include measures of participant partisanship (coded 1–7, with higher numbers representing strong Republican identity; footnote 10), race (1 = white, 0 = nonwhite), sex (1 = female, 0 = male), birth year (higher numbers reflect participants who are younger), and level of education (higher numbers represent higher levels of education). The public support manipulation in each experiment was coded dichotomously (0 for 85% support; 1 for 15%). In the models for legislative authority, I also included a control for whether the respondent was answering questions about gun control measures or limits on in-state tuition benefits for undocumented students (coded as 0 and 1, respectively). The results for the ordered model regarding skepticism are shown in Table 2.
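
As a rough illustration of this specification, the sketch below fits an ordered probit of the three-point skepticism measure on the agreement indicators and controls using the statsmodels library. It continues the hypothetical respondent-level data frame from the earlier sketch, with placeholder column names standing in for the variables just described; it is not the study’s replication code.

from statsmodels.miscmodels.ordinal_model import OrderedModel

# df is assumed to also hold the survey responses and demographics described
# in the text; skepticism: 1 = credible, 2 = do not know, 3 = not credible
predictors = ["disagree", "unclear", "party_id", "white", "female",
              "birth_year", "education", "low_support"]

ordered_fit = OrderedModel(
    df["skepticism"],   # three-point ordered response
    df[predictors],     # agreement dummies plus controls (no constant term)
    distr="probit",     # ordered probit link
).fit(method="bfgs", disp=False)
print(ordered_fit.summary())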

Table 2. Ordered Probit Skepticism Across Administrations

Note: +p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001.

The results demonstrate that participants who are in conditions where they disagree with the expert consensus are significantly more skeptical of experts than the excluded category of those who agree with the consensus across administrations. Marginal effects indicate that the differences are substantively important. In the executive authority administration, for instance, participants in conditions where they disagree with the expert opinion regarding sanctuary cities are 13% less likely to say experts are credible and 9% more likely to say they are not credible than participants in conditions where they agree with experts. Those in the divided consensus category also tend to be more skeptical of experts than those who agree. This difference is significant in the administration regarding executive but not legislative authority.

The models also reveal that partisanship is significant in the assessment of experts, with Republicans judging experts with more skepticism than Democrats. In the executive authority scenario, strong Republicans are 3% less likely to deem experts credible and 2% more likely to say they are not credible than Republicans who are not strong identifiers. Moreover, white participants are 11% more likely to say experts are credible and 7% less likely to say they are not credible than nonwhite participants. Education also plays a significant role in judgments, with participants who are more educated expressing less skepticism about expertise. There is also evidence that younger individuals defer more to experts (in the CCES sample, the birth year variable is marginally significant), and participants who evaluated the gun control treatment were more skeptical about experts than those who read about the restriction of in-state benefits in the study about legislative authority.

Question 2: Are the same factors relevant in crediting expert opinion and dismissing their views?

A second question one may ask in the context of this inquiry is whether the same factors are relevant in deeming experts credible as in dismissing experts as not credible. To probe this question, I run identical probit models on each of these response categories in Tables 3 and 4 (footnote 11). In Table 3, the dependent variable was coded 1 if the respondent deemed the expert credible and 0 otherwise. In Table 4, the dependent variable was coded 1 if the respondent said the expert was not credible and 0 otherwise.
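
For readers interested in the mechanics, the sketch below estimates the two binary probits just described, again continuing the hypothetical data frame and predictor list from the earlier sketches rather than reproducing the study’s actual code.

import statsmodels.api as sm

X = sm.add_constant(df[predictors])  # same right-hand side as the ordered model

# Table 3 analogue: 1 = respondent said experts are credible, 0 otherwise
credible_fit = sm.Probit(df["credible"], X).fit(disp=False)

# Table 4 analogue: 1 = respondent said experts are not credible, 0 otherwise
not_credible_fit = sm.Probit(df["not_credible"], X).fit(disp=False)

print(credible_fit.summary())
print(not_credible_fit.summary())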

Table 3. Probit Credible Response Across Administrations

Note: +p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001.

Table 4. Probit Not Credible Responses Across Administrations

Note: +p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001.

Looking across the results for the two responses, similar demographic factors appear to come into play in both judgments. Republicans are significantly less likely than Democrats to say experts are credible, and also significantly more likely to say they are not credible. Those who are more educated were more likely to say experts are credible and less likely to dismiss them as not credible. Race is also important; it is significant for both responses in the TESS administration and marginally significant in the CCES models. White participants tended to be more trusting of experts than nonwhite participants, and less likely to dismiss their views.

One particularly interesting finding looking across these models is that participants’ agreement with the expert consensus in the condition to which they were assigned is more important in judging experts to be credible than in saying they are not credible (footnote 12). This result is consistent with findings about self-esteem, suggesting that attributing positive traits to people who are like you is more satisfying than attributing negative traits to those who are not (Brewer 2007). Viewed in this light, attributing credibility to those who share our views makes us feel better about ourselves, but dismissing the opinions of those who disagree does not serve the same purpose as effectively (see Clark and Evans 2014 on the role of consistent high-credibility messages in self-validation).

To get a better idea of the substantive importance of variables in the analyses, Table 5 sets forth the predicted probabilities for credible and not-credible responses in the executive authority scenario. The results demonstrate that participants who agreed with experts responded distinctly from those who disagreed with the expert consensus and from participants in unclear conditions, and the predicted probabilities for the latter two categories were quite similar. We observe that participants in conditions where they agreed with experts had a 0.56 probability of saying they are credible and a 0.13 probability of saying they are not credible, holding demographic variables at their means, compared with 0.43 and 0.21, respectively, for those in conditions where they disagreed with the opinions expressed by constitutional authorities.

Table 5. Predicted Probabilities for Executive Authority Scenario

Note: Predicted probabilities are with demographic variables at their means and condition variables set to agree (disagree = 0, unclear = 0), disagree (disagree = 1, unclear = 0), or unclear (disagree = 0, unclear = 1), as indicated.
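
As an illustration of how quantities like those in Table 5 can be generated, the sketch below computes predicted probabilities from the probit fits in the earlier sketch, holding the demographic controls at their sample means and switching only the agreement indicators. Variable names remain hypothetical placeholders rather than the study’s actual code.

import pandas as pd

means = df[predictors].mean()  # hold demographic controls at their sample means

def predicted_prob(fit, disagree: int, unclear: int) -> float:
    """Predicted probability from a fitted probit, demographics at means."""
    row = means.copy()
    row["disagree"], row["unclear"] = disagree, unclear
    X_row = pd.DataFrame([row])
    X_row.insert(0, "const", 1.0)  # match the constant added at estimation
    return float(fit.predict(X_row)[0])

for label, (d, u) in {"agree": (0, 0), "disagree": (1, 0), "unclear": (0, 1)}.items():
    print(label,
          round(predicted_prob(credible_fit, d, u), 2),
          round(predicted_prob(not_credible_fit, d, u), 2))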

When considering demographic factors, partisanship had a substantial effect on assessments of expertise. Strong Democrats who agreed with experts had a 0.64 probability of deeming experts credible versus 0.45 for strong Republicans, holding other demographic variables at their means. Thus, predictions indicate strong Republicans are approximately 20% less likely to say experts are credible and more than twice as likely to say they are not credible (0.20 vs. 0.08) than strong Democrats, even when they agreed with the views the experts expressed. White participants who agreed with expert views had a 0.60 probability of agreeing that experts are credible compared with 0.50 for nonwhite participants.

Education also has a powerful influence on assessments of constitutional expertise. Participants with postgraduate education had the highest probability (0.77) of finding experts credible when they agreed with the views that authorities were expressing. Moreover, even when experts were expressing views that highly educated individuals did not agree with, they were still substantially more likely than not (0.66) to deem them credible. Those without a high-school diploma had the lowest likelihood of deciding experts were credible, particularly when they disagreed with the views the experts were expressing (0.21). This group was also most likely to respond that experts were not credible when they disagreed with the views the authorities expressed (0.38).

Question 3: Who is most likely to express uncertainty about expertise?

Table 6 displays the probit models for do-not-know responses across administrations to explore if there were systematic factors that explained the expression of uncertainty about expertise. Responses were coded 1 for participants who said they did not know if experts were credible, and 0 otherwise.

Table 6. Probit “Do Not Know” Response Across Administrations

Note: +p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001.

The results demonstrate that, although those who disagreed with the experts were somewhat more likely to express uncertainty about expertise, there were no significant differences in terms of respondents’ agreement with the consensus conditions they were in. Moreover, those who identified as Democrats and Republicans seemed similarly likely to respond that they did not know. However, we do observe that two demographic features are relevant. Women in both samples are more likely to respond with uncertainty about the credibility of experts, as are those who are less educated. Women who agreed with experts have a 0.35 probability of responding that they did not know if the experts were credible in the executive authority scenario compared with 0.23 for men. Correspondingly, women are less likely than men to give definitive answers, responding that experts are either credible or not credible, although those differences were not significant. Taken together, this perhaps suggests that men tend to be more decisive in their thinking about the credibility of experts.

Participants who did not have a high school diploma and agreed with the views experts expressed are approximately twice as likely to respond that they did not know if experts were credible (0.41) compared with participants who had graduate education (0.21). Thus, the analyses demonstrate that participants who were less educated were both more skeptical and more likely to express uncertainty about the expertise of constitutional authorities.

Conclusions

This study examined how citizens assess the credibility of constitutional experts on matters of government authority. Findings were strikingly consistent across experiments conducted on the 2016 TESS and 2017 CCES samples, using scenarios involving two governmental institutions and three different issues. This pattern of results adds to the overall confidence in the inquiry’s findings about the factors and processes relevant in the evaluation of constitutional authorities. The analyses showed that demographic factors, including partisanship, race, and level of education, were significant predictors of survey respondents’ willingness to extend credibility to constitutional experts. Republicans, nonwhite participants, and those who were less educated all expressed more skepticism about expertise. The compatibility of the views expressed by experts with respondents’ own policy views on the issues that were the subject of proposed government action was also important.

Looking at the relevant factors in each response revealed that agreement with the views expressed by experts was more important in the decision that experts were credible than in decisions that they were not credible. This comports with research demonstrating that attributing positive characteristics to those who are like us serves our self-esteem more effectively than attributing negative characteristics to those who are not. This may be especially true when evaluating experts. By deeming experts who share our views credible, those views themselves take on greater validity. Deciding that experts who do not share our views are not credible does not have the same self-validating effect. This finding helps distinguish the purpose of motivated reasoning that is due to agreement with the views expressed by experts from the social-identity benefits of motivated reasoning that have been theorized to be relevant in motivated judgments about experts driven by partisanship.

Although partisanship and agreement with the views expressed by experts both had considerable effects on assessments of expertise, differences between participants with the highest and lowest levels of education in our samples were the most pronounced. The reason education has such a substantial influence on these judgments could be that there is pressure from above and below, so to speak. Highly educated individuals are more likely to credit experts because of their own experience in academic environments, and those who are less educated are more likely to say experts are not credible because of the growing anti-intellectualism identified by scholars such as Merkley (2020b). Evidence here supports the notion that both mechanisms are at play. Moreover, the analysis of do-not-know responses suggests a third dynamic, where those who are less educated are not only more skeptical, but also more likely to express uncertainty about expertise than those who have achieved higher levels of education.

Expressing uncertainty about expertise did not seem tied to any passive-aggressive disagreement with the views authorities were expressing: there was no significant pattern between agreement conditions and the likelihood of giving an uncertain response about expert credibility. We observed that women were significantly more likely than men to express uncertainty about the credibility of constitutional experts, rather than deferring to experts, as gender stereotypes might suggest, or questioning their credibility, as did other groups (nonwhite participants) that have traditionally been underrepresented in the legal academia where the constitutional experts cited by journalists commonly reside.

Considering the implications of these findings for the constraining power of constitutional rules in our current political environment is important. One may question how findings about the credibility of constitutional experts bear on the persuasive weight of the arguments and evidence those experts provide. The findings here suggest the relationship is somewhat complex. Although the distribution of responses regarding expert credibility across the TESS and CCES samples was quite similar, with more than 50% of participants saying they did not think (or did not know whether) the cited experts were credible, the analyses of participants’ ultimate judgments about the legitimacy of the government action indicated that expert consensus significantly influenced participants’ assessments in the legislative context, although its influence was not significant when evaluating unilateral executive authority (Braman 2021) (footnote 13). However, there was evidence in the assessments of the legitimacy of both congressional and executive authority that participants were heeding experts in some respect, because legitimacy ratings were highest when constitutional experts agreed that Congress and the President had the authority to take the proposed action, and lowest when the experts agreed they did not.

These results suggest a bit of a disconnect between participants’ expressed assessments of expert credibility and the persuasive influence of the views those authorities profess. In their review of research on resistance to scientific authorities, Hornsey and Fielding (2017) suggest that such resistance could serve an independent value-expressive function for individuals. The various reasons we have to assess constitutional authorities in a biased manner may slant our assessments, such that they do not accurately reflect the influence of expert information on our own judgments. The suggestion that this may cause some people to misjudge the influence of expert information in their own decision-making is an intriguing idea that seems worthy of further investigation.

Finally, the unique context in which the opinions of such authorities are relevant should be acknowledged. Citizens are not judges, but widespread acquiescence to official assertions of authority that may be of questionable constitutional validity can certainly add to the appearance of appropriateness. Under such circumstances, public opinion can have real implications for government authority. We rely on the media to inform us about official acts we cannot observe directly. The media, in turn, relies on constitutional experts to alert citizens about dubious assertions of power on the part of government actors.

Findings here suggest that our willingness to credit experts so that we may heed such warnings depends in no small part on our opinions, our partisanship, and the extent to which we share the academic training of the authorities who are cited. Ultimately, if we want constitutional rules to constrain government institutions in the manner they were designed to, there must be a way for individuals to recognize limits on state officials, even socially constructed ones, that are not so closely tied to our backgrounds and predispositions. In the absence of such limits, the only thing keeping us in check in the current political environment may be the fact that we are so evenly divided: if Congress or the President takes some questionable action, roughly as many people are likely to see the action as legitimate as illegitimate. Taken to its extreme, official authority that depends so critically on political context may not have any real limits at all.

Acknowledgments

Sincere thanks to Ted Carmines, Jamie Druckman, Christopher Krewson, Logan Strother, participants at the 2021 meeting of the Southern Political Science Association, and the American Politics Workshop at Indiana University. The author also acknowledges the editor and anonymous reviewers at the Journal of Law and Courts for their extremely useful comments. Data from the study were obtained through a 2016 experiment fielded by Time-sharing Experiments for the Social Sciences and the Indiana Team Module of the 2017 Cooperative Congressional Election Study with the help of the Center for the Study of American Politics.

Data Availability Statement

All replication materials are available on the Journal of Law and Courts Dataverse archive.

Supplementary Materials

To view supplementary material for this article, please visit https://doi.org/10.1017/jlc.2022.4.

Footnotes

1 Often referred to as expertise or competence, this aspect of source credibility is commonly conferred via special knowledge or training.

2 Balanced reporting of expert opinion has been found to create a false perception that there is division among experts when there is widespread consensus (Koehler 2016).

3 The three cases that Simon and Scurich (2011) investigated did not involve government powers. Moreover, the authors were primarily interested in experimental treatment effects; thus, they did not control for demographic factors, such as partisanship or education, in their analyses of the credibility afforded to experts.

4 To do this, the investigation involved both experimental and observational hypotheses. As set forth below, participants were randomly assigned to conditions where expert opinion was divided or where participants agreed (or disagreed) with the consensus expressed by constitutional experts. Thus, controlling for partisanship and other relevant demographic factors mentioned herein is not strictly necessary to test experimental treatment effects. Still, as these demographic variables were measured in both survey instruments and are theoretically relevant to this specific inquiry, including them in my models allows me to observe their influence on credibility assessments as well. This is quite common in experimental research in political science (e.g., Nelson, Clawson, and Oxley 1997; Gerber and Green 2000).

5 The 2016 TESS experiment had 801 participants. The 2017 CCES team module (n = 1000) was divided into participants who took part in an experiment on executive authority and one on judicial authority. Experts were not cited in the experiment on judicial authority; thus, participants who took part in that study were not included herein. The demographics of the 490 participants in the executive authority experiment are included in the Appendix.

6 Each proposed action was hypothetical, so there is no legal consensus per se on the specific measures, although the constitutional arguments for each were apt. The ideological implications of each proposed action were purposefully varied, so as a policy matter, Democrats should be more likely to support federal action with respect to gun control and less likely to support congressional action with regard to the restriction of benefits to undocumented students. After the participants read the scenario, they were debriefed and informed that the article and scenario were hypothetical. Both the TESS and CCES instruments went through relevant institutional review board approvals as a condition of being sent into the field.

7 Arguably, participants might find constitutional experts more credible if a high-prestige law school, such as Harvard or Yale, were invoked. Journalists might not always have access to experts from the most prestigious echelons of academia. Although not generally considered one of the nation’s top ten, George Washington University has a law school that is nationally recognized. Moreover, due to its location in our nation’s capital, its professors are commonly cited in the media. At any rate, this is a constant for all participants across conditions in the experiment involving legislative action.

8 Majority support for the action was theorized to influence participants’ assessments of the legitimacy of the government authority that was the subject of each scenario. Although there is not much reason to think majority support would influence assessments of expert credibility, I controlled for its influence in the analyses.

9 The Appendix includes supplementary regression models using a three-level consistency measure to reflect agreement with expert consensus for each of the regression models presented herein. The results are entirely consistent across both operationalizations.

10 I would love to be able to parse the distinct effects of partisanship, ideology, and feelings about President Trump in this inquiry. As one might expect, measures of presidential approval are highly correlated with both partisanship (r = 0.71 and 0.68) and ideology (r = 0.61 and 0.68) in both experimental administrations. Moreover, partisanship and ideology are correlated with one another (r = 0.61 and 0.66) such that the correlation precludes their inclusion in the same model. Indeed, if I run separate models using each as an independent variable, they are all significant in credibility assessments, but if I include them together, the effects of one (or two) of these closely related variables tend to wash out in the presence of the others. I chose to focus on partisanship, which to some extent reflects both affinity toward political actors and views of the appropriate role of government in society, but acknowledge that these three concepts are theoretically distinct.
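
For readers who want to run the same kind of collinearity check on their own data, a minimal sketch follows; the column names are hypothetical stand-ins for the partisanship, ideology, and presidential approval measures, and df is the respondent-level data frame assumed in the earlier sketches.

# Pairwise correlations among closely related predictors; values this high
# signal that including them in the same model will make estimates unstable.
corr = df[["party_id", "ideology", "trump_approval"]].corr(method="pearson")
print(corr.round(2))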

11 I chose to do separate probit models for each response rather than a multinomial probit of the three-level variable, because the analyses are more closely related to my theoretical question. Because I am fundamentally interested in how people think about credibility, I am interested in which factors are important in each individual response (yes, no, do not know) rather than the relative risk ratio of a certain type of respondent answering one way or another.

12 This is also true for the consistency variable in the supplemental regressions (Appendix Tables A3 and A4), which achieves conventional levels of statistical significance in the analyses of credible responses across both experiments, but not in the analyses of not-credible responses (marginally significant across samples).

13 The expert consensus manipulation did not achieve conventional levels of significance in the analysis of assessments of executive authority where factors, such as satisfaction with President Trump and participants’ opinions about sanctuary cities, played a significant role in judgments.

References

Anderson, Ashley A., Scheufele, Dietram A., Brossard, Dominique, and Corley, Elizabeth A. 2012. “The role of media and deference to scientific authority in cultivating trust in sources of information about emerging technologies.” International Journal of Public Opinion Research 24 (2): 225–237.
Blank, Joshua M., and Shaw, Daron. 2015. “Does partisanship shape attitudes toward science and public policy? The case for ideology and religion.” The Annals of the American Academy of Political and Social Science 658 (1): 18–35.
Bolsen, Toby, and Druckman, James N. 2018. “Do partisanship and politicization undermine the impact of a scientific consensus message about climate change?” Group Processes & Intergroup Relations 21 (3): 389–402.
Braman, Eileen. 2021. “Thinking About Government Authority: Constitutional Considerations and Political Context in Citizens’ Assessments of Judicial, Legislative, and Executive Action.” American Journal of Political Science 65 (2): 489–404.
Brewer, Marilynn B. 2007. “The importance of being we: Human nature and intergroup relations.” American Psychologist 62 (8): 728.
Bybee, Keith J. 2010. All Judges Are Political Except When They Are Not: Acceptable Hypocrisies and the Rule of Law. Stanford, CA: Stanford University Press.
Clark, Jason K., and Evans, Abigail T. 2014. “Source credibility and persuasion: The role of message position in self-validation.” Personality and Social Psychology Bulletin 40 (8): 1024–1036.
Druckman, James N., and McGrath, Mary C. 2019. “The evidence for motivated reasoning in climate change preference formation.” Nature Climate Change 9 (2): 111–119.
Gauchat, Gordon. 2012. “Politicization of science in the public sphere: A study of public trust in the United States, 1974 to 2010.” American Sociological Review 77 (2): 167–187.
Gerber, Alan S., and Green, Donald P. 2000. “The effects of canvassing, telephone calls, and direct mail on voter turnout: A field experiment.” American Political Science Review 94 (3): 653–663.
Hornsey, Matthew J., Harris, Emily A., Bain, Paul G., and Fielding, Kelly S. 2016. “Meta-analyses of the determinants and outcomes of belief in climate change.” Nature Climate Change 6 (6): 622–626.
Hornsey, Matthew J., and Fielding, Kelly S. 2017. “Attitude roots and Jiu Jitsu persuasion: Understanding and overcoming the motivated rejection of science.” American Psychologist 72 (5): 459.
Hovland, Carl I., Janis, Irving Lester, and Kelley, Harold H. 1953. Communication and Persuasion. New Haven, CT: Yale University Press.
Hurwitz, Mark S., and Lanier, Drew Noble. 2008. “Diversity in state and federal appellate courts: Change and continuity across 20 years.” Justice System Journal 29 (1): 47–70.
Iyengar, Shanto, Lelkes, Yphtach, Levendusky, Matthew, Malhotra, Neil, and Westwood, Sean J. 2019. “The origins and consequences of affective polarization in the United States.” Annual Review of Political Science 22: 129–146.
Johnston, Christopher D., and Ballard, Andrew O. 2016. “Economists and public opinion: Expert consensus and economic policy judgments.” The Journal of Politics 78 (2): 443–456.
Jolley, Daniel, and Douglas, Karen M. 2017. “Prevention is better than cure: Addressing anti-vaccine conspiracy theories.” Journal of Applied Social Psychology 47 (8): 459–469.
Kahan, Dan M., Jenkins-Smith, Hank, and Braman, Donald. 2011. “Cultural cognition of scientific consensus.” Journal of Risk Research 14 (2): 147–174.
Kay, Herma H. 1991. “The future of women law professors.” Iowa Law Review 77: 5.
Koehler, Derek J. 2016. “Can journalistic ‘false balance’ distort public perception of consensus in expert opinion?” Journal of Experimental Psychology: Applied 22 (1): 24.
Kraft, Patrick W., Lodge, Milton, and Taber, Charles S. 2015. “Why people ‘don’t trust the evidence’: Motivated reasoning and scientific beliefs.” The Annals of the American Academy of Political and Social Science 658 (1): 121–133.
Kunda, Ziva. 1990. “The case for motivated reasoning.” Psychological Bulletin 108 (3): 480.
Lord, Charles G., Ross, Lee, and Lepper, Mark R. 1979. “Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence.” Journal of Personality and Social Psychology 37 (11): 2098.
Merkley, Eric. 2020a. “Are experts (news)worthy? Balance, conflict, and mass media coverage of expert consensus.” Political Communication 37 (4): 120.
Merkley, Eric. 2020b. “Anti-intellectualism, populism, and motivated resistance to expert consensus.” Public Opinion Quarterly 84 (1): 24–48.
Milosh, Maria, Painter, Marcus, Van Dijcke, David, and Wright, Austin L. 2020. Unmasking Partisanship: How Polarization Influences Public Responses to Collective Risk. University of Chicago, Becker Friedman Institute for Economics Working Paper.
Nelson, Thomas E., Clawson, Rosalee A., and Oxley, Zoe M. 1997. “Media framing of a civil liberties conflict and its effect on tolerance.” American Political Science Review 91 (3): 567–583.
Nichols, Tom. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. New York: Oxford University Press.
Nisbet, Erik C., Cooper, Kathryn E., and Garrett, R. Kelly. 2015. “The partisan brain: How dissonant science messages lead conservatives and liberals to (dis)trust science.” The Annals of the American Academy of Political and Social Science 658 (1): 36–66.
Petty, Richard E., and Cacioppo, John T. 1986. “The elaboration likelihood model of persuasion.” In Communication and Persuasion. New York: Springer.
Pornpitakpan, Chanthika. 2004. “The persuasiveness of source credibility: A critical review of five decades’ evidence.” Journal of Applied Social Psychology 34 (2): 243–281.
Shen, Francis X., and Gromet, Dena M. 2015. “Red states, blue states, and brain states: Issue framing, partisanship, and the future of neurolaw in the United States.” The Annals of the American Academy of Political and Social Science 658 (1): 86–101.
Simon, Dan, and Scurich, Nicholas. 2011. “Lay judgments of judicial decision making.” Journal of Empirical Legal Studies 8 (4): 709–727.
Simon, Dan, and Scurich, Nicholas. 2013. “The effect of legal expert commentary on lay judgments of judicial decision making.” Journal of Empirical Legal Studies 10 (4): 797–814.
Suhay, Elizabeth. 2017. “The politics of scientific knowledge.” In Oxford Research Encyclopedia of Communication, ed. Nussbaum, Jon. New York: Oxford University Press.
Suhay, Elizabeth, and Druckman, James N. 2015. “The politics of science: Political values and the production, communication, and reception of scientific knowledge.” The Annals of the American Academy of Political and Social Science 658 (1): 6–15.
Tyler, Tom R. 2021. Why People Obey the Law. Princeton, NJ: Princeton University Press.
Whitehead, Jack L., Jr. 1968. “Factors of source credibility.” Quarterly Journal of Speech 54 (1): 59–63.
