
The Influence of Public Sentiment on Supreme Court Opinion Clarity

Published online by Cambridge University Press: 01 January 2024


Abstract

We examine whether public opinion leads Supreme Court justices to alter the content of their opinions. We argue that when justices anticipate public opposition to their decisions, they write clearer opinions. We develop a novel measure of opinion clarity based on multifaceted textual readability scores, which we validate using human raters. We examine an aggregate time series analysis of the influence of public mood on opinion clarity and an individual-level sample of Supreme Court cases paired with issue-specific public opinion polls. The empirical results from both models show that justices write clearer opinions when their rulings contradict popular sentiment. These results suggest that public opinion influences the Court, and that future scholarship should analyze how public opinion influences the written content of decision makers’ policies.

Type
Articles
Copyright
© 2016 Law and Society Association.

When the Supreme Court makes a decision contrary to public opinion, justices are likely to worry the Court will lose public support. So, what are justices to do? One option, of course, is to move the policy content of the opinion closer to public sentiment. Yet, we know that justices seek, among other things, ideological goals (Epstein and Knight 1998) and would prefer to effectuate them when feasible. Another option, then, is to seek their policy goals while mitigating the possible loss of public support. It is on this perspective we focus. We argue that justices, when they rule contrary to public opinion, will vary the clarity of majority opinions in an effort to maintain public support as best they can. While the Court has a deep reservoir of diffuse support, frequent counter-majoritarian decisions could leave it at risk (Gibson et al. 2003: 365). By writing a clear opinion when ruling against public sentiment, justices can better inform the public why they so decided, and thereby manage any immediate loss of support they might suffer—or think they might suffer (see, e.g., Nelson, N.d.).

We develop a measure of opinion clarity based on automated textual readability scores that we validate using human raters. Our results show public opinion strongly influences the content of Court opinions. Importantly, we analyze both macro- and case-level public opinion, providing broad-based support for our findings. In one approach, we compile an aggregate data set that includes Court decisions from 1952 to 2011, and execute a time series analysis that scrutinizes opinion clarity as a function of yearly changes in public mood. In a second approach, we rely on issue-specific public opinion polls that directly relate to individual Supreme Court cases (Marshall 1989, 2008). Using these micro-level data, we analyze the content of specific majority opinions to determine how public opinion influences Supreme Court opinion clarity. Both empirical analyses offer considerable support for our argument that justices write clearer opinions when they deviate from public sentiment. What is more, our measure of opinion clarity is one scholars who study other institutions could employ.

These findings are important for a number of reasons. First, it is the content of the Supreme Court's opinions that influences society's behavior. Actors within society look to those opinions to determine whether they can engage in particular behaviors (Spriggs and Hansford 2001). “[S]cholars, practitioners, lower court judges, bureaucrats, and the public closely analyze judicial opinions, dissecting their content in an endeavor to understand the doctrinal development of the law” (Corley et al. 2011: 31). People must understand the content of opinions and, as such, scholars should understand the factors that influence those opinions. Our results speak to how the Court crafts the content of those opinions.

Second, the results address the Court as one institution in a broader political system where justices know they do not necessarily have the last word. That is, our approach shows how the Court is tied into a larger network of actors and audiences in the American political and legal system (Baum 2006). Rather than focus on how justices influence others, we show how others (i.e., the public) can influence justices. At the same time, knowing that justices intentionally alter the language of their opinions to overcome audience-based obstacles speaks to broader normative debates about democratic control. Justices appear to do what they can to overcome obstacles from public opinion. So, while public opinion seems to influence their behavior, justices appear able to circumvent the constraints of public opinion by tailoring their messages. For those interested in ensuring more accountability of judges, these results suggest such control is perhaps more difficult than previously believed.

Third, understanding how the Court alters its opinions can inform us about how the Court acquires and maintains judicial legitimacy. To be sure, we do not directly address legitimacy in this paper, but our results generate potential research avenues by which to study it. Legitimacy allows justices to accomplish their broader goals and protect the Court's institutional authority (e.g., Casillas et al. 2011; Gibson and Caldeira 2011; Ura and Wohlfarth 2010). The Court lacks the capacity to execute its own opinions. Its reason and logic are the foundations of its support. Given that the Court's power ultimately comes from its legitimacy, and that sustained negative attention and unpopular decisions can erode public support for the Court (Durr et al. 2000), justices should avoid repeatedly calling that legitimacy into question. By writing different kinds of opinions, justices can avoid negative attention and may even be able to enhance the Court's legitimacy.

Finally, our results provide an answer to the question of whether public opinion influences justices. The strategic model, perhaps the most influential model of judicial decision making, suggests justices are likely to anticipate public reactions to their decisions (among other considerations) and moderate their behavior accordingly (Epstein and Knight 1998). Yet, empirical support for that theoretical claim has been mixed. Our findings suggest public opinion does in fact influence how justices behave.

A Theory of Strategic Opinion Clarity

The strategic model of judicial decision making suggests justices should be mindful of public opinion when making decisions (e.g., Bryan and Kromphardt forthcoming; Casillas et al. 2011; Enns and Wohlfarth 2013; McGuire and Stimson 2004). This is the case because frequent rulings against the public could cause the Court to lose legitimacy. The Court's legitimacy is the foundation of its support. As Justice Frankfurter once claimed: “The Court's authority…rests on sustained public confidence in its moral sanction” (Caldeira 1986: 1209). A consistent pattern of defying public opinion could damage the Court's legitimacy. Caldeira (1986) finds, in part, that the Court's legitimacy decreases as it strikes more federal laws and sides with criminal defendants. Related work shows courts that systematically ignore stare decisis can jeopardize their institutional legitimacy (see, e.g., Zink et al. 2009). Bartels and Johnston (2013) suggest ideologues who oppose specific Court decisions are more likely to challenge the Court's legitimacy than those who approve of its decisions (cf. Gibson and Nelson 2015). Collectively, these results suggest the public may respond negatively to Court decisions it dislikes.

What can the Court do to protect its (immediate or long-term) support when it rules against the public? We theorize that when ruling against public opinion, justices will enhance the clarity of majority opinions. By writing clearer opinions, justices can attempt to minimize the loss of support they might suffer—or think they might suffer—from jilting the public. And, while we believe justices know they have strong institutional support, they surely must be concerned about managing that goodwill and support. As Gibson et al. (2003) state, such goodwill is not limitless. Justices must be concerned about replenishing it after drawing it down. Opinion clarity can help the Court mitigate attacks on its legitimacy.Footnote 1

Scholars have argued that opinion clarity influences the public. As Vickrey et al. (2012) put it: “The challenge for the nation's judges…is to make sure that the public understands what is expressed in a supreme court opinion…[O]pinions serve as the court's voice because rulings communicate not only to lawyers, but also to the public…” (74). The role of opinion clarity here is critical. Clarity “is crucial in order to demonstrate fairness, ensure public and media understanding of the role of the court, and encourage acceptance of high court judgments. Effective communication starts with a well-reasoned and well-written opinion” (78). Similarly, Benson and Kessler (1987) find plain legal writing is more credible and persuasive than “legalese.” The authors conducted an experiment in which they showed respondents legal briefs and petitions for rehearing that employed common language and others that contained legalese. Respondents who read legalese were significantly more likely to think the brief was unpersuasive, the writer was unconvincing, and the writer was unbelievable. The authors further demonstrate that arguments presented in legalese are about 20 percent less persuasive than a plainly worded brief, and about 32 percent less persuasive than a plainly worded rehearing petition. We believe justices have a sense of this dynamic. And surely, they must know that when ruling against public opinion, they already have given the public a target. Why enhance risk by writing an unclear opinion the public will find to be less persuasive, less convincing, and less believable? We suspect they do not. We suspect they write clearer opinions in such instances.

Indeed, empirical evidence confirms our general belief that justices alter the content of their opinions in anticipation of negative reactions from various audiences. For example, Black et al. (2015) find the Court is more likely to cite foreign sources of law—to expand the debate and provide additional reasons for its decisions—when it renders controversial decisions. Corley et al. (2005) show the Court is more likely to cite the Federalist Papers in controversial opinions. Nelson (N.d.) shows that after the influx of television advertising in judicial campaigns, elected judges began to write opinions that were easier to read. The logic is simple. When judges had more to fear from the public, they performed “better.” This finding is consistent with our argument: when justices decide cases with outcomes against the public's broad policy preferences—and therefore have more to fear from public reaction—they write clearer opinions. Finally, in a recent book-length treatment, Black et al. (2016) find that justices alter the clarity of their opinions out of concern for how lower federal courts, the states, the public, and administrative agencies will respond.

In addition to scholarly support for our argument, recent comments by judges themselves corroborate our belief that judges use opinion language, in part, to manage public support. Justice Thomas once remarked: “We're there to write opinions that some busy person or somebody at their kitchen table can read and say, ‘I don't agree with a word he said, but I understand what he said’” (Friedersdorf 2013) (emphasis supplied). Similarly, Judge Steve Leben of the Kansas Court of Appeals recommends judges:

…explain things so that a layperson can understand them, whether it's an oral ruling or a written opinion. A person involved in a court proceeding is more likely to accept a court decision that he or she can understand, and the failure to explain legal concepts to the layperson leads to an unnecessary lack of understanding of what judges do (Leben 2011: 54) (emphasis supplied).

As the previous quote suggests, our argument about the use of opinion clarity is also related to literature on legitimacy and procedural fairness. Scholarship shows procedural fairness can facilitate legitimacy. Even losers in proceedings believe institutions to be legitimate when they believe they received fair procedural treatment. For example, Casper et al. (1988) find procedural and distributive fairness influenced how defendants evaluated their treatment by the judicial system, independent of their sentences. Sunshine and Tyler (2003) find the fairness of police procedures has a strong influence on police legitimacy. Like the positive effect of procedural fairness, opinion clarity can stanch the Court's bleeding when it rules against public opinion. When the Court explains more clearly why it ruled the way it did, the public might feel treated more fairly than if justices wrote an obfuscated opinion. Clarity can help to communicate the basis for the decision, explain better why the Court ruled the way it did, and, as a result, minimize the loss of support for having ruled against the public. Indeed, Vickrey et al. (2012) make precisely this point, stating: “Litigants, especially losing litigants, care less about the length of opinions and more about clarity and the scope or soundness of the reasoning” (76) (emphasis supplied). Opinion clarity can help shore up support for the Court, even among those dissatisfied.

To be sure, a wealth of scholarship suggests the Court's legitimacy is unlikely to be seriously diminished by a single “bad” decision (e.g., Gibson et al. 2003). Yet, even such scholarship recognizes judicial carelessness with public opinion might diminish the Court's legitimacy. Indeed, Gibson et al. (2003) state: “A few rainless months do not seriously deplete a reservoir. A sustained drought, however, can exhaust the supply of water” (365). So, justices are likely to want to manage negative reactions. To prevent erosion of public confidence, they should want to take steps to justify and mitigate decisions against public opinion.

Perhaps more importantly, even if a single decision does not actually reduce the Court's legitimacy, justices are likely to be concerned it might. Despite scholarship showing public support for the Court is resilient, justices still are likely to fear backlash. The mere threat of widespread negative scrutiny by the mass public regularly shapes policymakers’ decisions (e.g., Arnold 1990). Just as members of Congress are often “running scared” (Jacobson 1987), justices might worry about possible negative consequences of their opinions and try to manage them with opinion clarity.

Of course, one might respond that few citizens actually read Supreme Court opinions, or that they have an outdated perception of the Court. To this response, we make the following arguments. First, our theory does not hinge on whether the public actually reads opinions; all that matters is that justices believe it might. A dormant public can be alerted by politicians and their actions, thereby inducing widespread public attention. Indeed, politicians regularly make decisions based on the threat that their actions could receive significant attention. As Key (1961: 266) explains of policymakers:

Even though few questions attract wide attention, those who decide may consciously adhere to the doctrine that they should proceed as if their every act were certain to be emblazoned on the front pages … and to command universal attention.

Arnold (1990: 68) makes a similar argument in his study of congressional policymaking:

Latent or unfocused opinions can quickly be transformed into intense and very real opinions with enormous political repercussions. Inattentiveness and lack of information today should not be confused with indifference tomorrow.

And existing literature shows that in nonsalient cases, justices are concerned their decisions could trigger rebuke from an otherwise dormant public (Casillas et al. 2011). In other words, even if the public does not read every decision, justices certainly might worry that it could.

Second, the media often lift passages directly from Court opinions, so it is likely many members of the public are in fact exposed to, and read, portions of Court opinions. An existing study shows the media borrow nontrivial amounts of the Supreme Court's opinions when reporting on them (Zilis N.d.). Specifically, the New York Times quoted from 69 percent of salient opinions between the 1980 and 2008 terms, suggesting the public is directly exposed to some opinion language.

Third, even if the public does not read the Court's opinions, legal and political elites do—and the logic of our argument remains the same in this context. After all, elite explanation to the public likely turns on the content of the Court's opinion. By writing a clearer opinion, justices make the “translation” from elite to public smoother. In fact, existing scholarship suggests elites must respond to the way the Court frames arguments in its opinions (Wedeking 2010). A clearer opinion might make it easier for the media to report on the Court's decision—and a clearer opinion might allow the media to portray the Court's decision closer to how the Court would like it portrayed.Footnote 2

We recognize members of the public are more concerned about a case's outcome than its clarity. Our argument is not that opinion clarity can overcome this. Rather, we believe justices should perceive—for all of the reasons described above—that enhanced opinion clarity is an especially important attribute of their decisions given the potentially negative effects of ruling against the public.Footnote 3 Indeed, would anyone claim a poorly written counter-majoritarian opinion would trigger the same public response as a well-crafted counter-majoritarian opinion? We suspect not. In fact, the remarks from the judges quoted earlier suggest judges and justices care about clarity.Footnote 4 In short, opinion clarity is not a get-out-of-jail-free card for justices. It is, however, a tool they likely believe is useful for mitigating the loss of public support in the face of an unpopular opinion.

Measuring Opinion Clarity

To determine whether justices craft clearer opinions when they rule against public opinion, we must construct a dependent variable that reflects the clarity of opinions. Legal clarity can, of course, take a number of different forms. Owens and Wedeking (2011) identify three types of opinion clarity: doctrinal, cognitive, and rhetorical. While all three no doubt share similarities, they are distinct constructs that represent different phenomena. Doctrinal clarity is perhaps the oldest and most well-known of the three, as it focuses on “how the Court's specific treatment of doctrine [in an issue area] has remained stable or inconsistent…over time” (Owens and Wedeking 2011: 1038). Cognitive clarity, on which Owens and Wedeking (2011) focus, emphasizes the clarity of the ideas that are expressed. Rhetorical clarity focuses on the clarity of the external communication as it is understood by others. Depending on the goals of the research, any one of them might be appropriate for measuring clarity. Our theory focuses on how the Court decides to communicate with external, nonjudicial audiences that include both elected officials and the public. This communicative element turns on whether external audiences can understand and comprehend the content of the Court's opinion. For our purposes here, we believe rhetorical clarity is the most appropriate measurement approach.Footnote 5

We examine rhetorical clarity rather than cognitive clarity for a host of reasons. For starters, our theory does not argue the Court writes opinions with simpler ideas when it rules against public opinion; rather, we argue justices will simplify the presentation of their decisions when they vote against public opinion. Cognitive clarity represents the structure and clarity of the ideas in the mind of the justice expressing them. Rhetorical clarity, on the other hand, focuses on the clarity of the external communication as it is understood by others. Indeed, it is important to understand that a rhetorically clear opinion is not guaranteed to be cognitively clear (and vice versa). In fact, it can be the opposite. Rhetorical clarity draws from an ability to communicate facts to others, but it does so without necessarily having a direct correspondence to the complexity of the underlying ideas. For example, some people excel at explaining complex ideas in an easy-to-understand manner, while others can make the simplest idea unclear. Given that our theory focuses on the Court and how a general audience will understand opinions, we believe our choice of rhetorical clarity as the dependent variable is the theoretically correct one. Justice Thomas's quote above supports this choice.Footnote 6

Creating the Opinion Clarity Measure

To examine opinion clarity, we exploit a range of computer-generated readability scores to analyze the text of Supreme Court majority opinions. Computer-generated scores are desirable for a number of reasons. They are easily replicated, they are objective, and they are efficient, allowing researchers to examine—and make comparisons among—a large number of long documents (e.g., court opinions). And, just as important, as we demonstrate below, they correlate strongly with how humans interact with court opinions. Scholars and policymakers use readability scores in various contexts to measure the degree of difficulty in reading a text (DuBay 2004). For example, the Flesch-Kincaid Grade Level examines a text's average sentence length and the average number of syllables per word. Other measures, of which there are dozens, look at the number of letters in a word, the number of words with only one syllable, or the number of words with at least three (or six) syllables in them.

Rather than rely upon a single indicator, we take an approach that captures key commonalities among existing measures while also avoiding sensitivity to a unique aspect of any single measure. We use the R package koRpus to calculate 19 separate readability measures for every orally argued Supreme Court majority opinion from 1946 to 2012.Footnote 7 These 19 distinct formulas yield a total of 28 measures (i.e., some formulas produce more than one readability score).Footnote 8 Figure 1 identifies the general types of inputs that go into the calculation of the scores.

Figure 1. Readability Formula Inputs.

The words, sentences, characters, and syllables columns indicate that a formula calculates the total number of these items (e.g., total number of characters). The final three columns are for indicator variables that count the frequency of, for example, words with at least three syllables. Taking these variables as input, the readability formulas perform a variety of arithmetic functions to produce a single score for a given text. As one example, the Flesch-Kincaid Grade Level is computed as follows:

\[ \text{Grade Level} = 0.39 \times \frac{\text{Total Words}}{\text{Total Sentences}} + 11.8 \times \frac{\text{Total Syllables}}{\text{Total Words}} - 15.59. \]
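
To make the computation concrete, here is a minimal sketch in R (we use R because our pipeline relies on the R package koRpus; the standalone function below, with its naive vowel-run syllable counter, is our own illustration rather than the koRpus implementation):

    # Approximate syllables as runs of vowels (a crude but common heuristic)
    count_syllables <- function(word) {
      max(1, length(gregexpr("[aeiouy]+", tolower(word))[[1]]))
    }

    # Flesch-Kincaid Grade Level computed from raw text
    flesch_kincaid <- function(text) {
      sentences <- unlist(strsplit(text, "[.!?]+"))
      sentences <- sentences[nchar(trimws(sentences)) > 0]
      words <- unlist(strsplit(text, "[^A-Za-z']+"))
      words <- words[nchar(words) > 0]
      syllables <- sum(sapply(words, count_syllables))
      0.39 * (length(words) / length(sentences)) +
        11.8 * (syllables / length(words)) - 15.59
    }

    flesch_kincaid("The Court holds that the statute is invalid. We reverse.")

Higher grade levels indicate harder text; the other formulas we use are built from the same kinds of word, sentence, character, and syllable counts shown in Figure 1.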

We then subjected these distinct measures to a Principal Component Analysis, which returned a single principal component that explained 77 percent of the variance in the data. This measure—Opinion Clarity—is our dependent variable. We code it such that texts with low readability (i.e., those that are harder to read) receive smaller scores while texts that are easier to read receive larger scores. In other words, the larger the value, the clearer the text. The measure has a mean of 0 and a standard deviation of 4.6. With a range that stretches from −44.0 (very difficult to read) to +24.5 (very easy to read), it has considerable variation. In terms of its distribution, our measure takes on the general shape of a normal distribution.
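
The extraction step itself is standard. A sketch in base R, assuming scores is an opinions-by-measures matrix holding the 28 readability scores (a hypothetical object name; we do not reproduce our exact code here):

    # Standardize the 28 measures and extract principal components
    pca <- prcomp(scores, center = TRUE, scale. = TRUE)
    summary(pca)$importance["Proportion of Variance", "PC1"]  # variance explained

    clarity <- pca$x[, 1]  # scores on the first component
    # Orient the scale so larger values mean easier to read: Flesch-Kincaid
    # grade levels rise with difficulty, so clarity should correlate
    # negatively with that column
    if (cor(clarity, scores[, "flesch_kincaid"]) > 0) clarity <- -clarity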

Validating the Opinion Clarity Measure

Because our dependent variable is unique, we sought to verify that it validly measures the readability of legal texts. So, we had 72 undergraduate students rate eight excerpts from the legal reasoning portion of Supreme Court majority opinions. The excerpts varied between 170 and 300 words in length, with an average length of 222 words. Four of these excerpts were of low readability and the other four were of high readability.Footnote 9 To ensure the raters had enough background and context when reading them, each excerpt was preceded by a short paragraph offering information about the facts and dispute in the case.

After reading each text, the raters answered objective multiple choice comprehension questions and subjective rating questions about the text. The objective questions involved factual queries. For example, in Santa Fe Independent School District v. Doe (2000), a case that examined school prayer, respondents read a segment from the opinion. We then asked them why the Court said the student prayer was not “private speech.” They had four options from which to choose (this full example is in the online appendix). For the subjective rating questions, we asked raters to evaluate how clear they believed the text was, how well written the excerpt was, the ease or difficulty of understanding the excerpt, and whether they knew all of the words in the excerpt. Thus, we had multiple indicators for each rater of both an objective (e.g., whether they correctly answered content questions) and subjective (e.g., how clear they believed the text was) nature.

We then combined these objective and subjective ratings to create a Rater Readability factor score. We followed Jones et al. (2005), who argue that readability comprises: (a) comprehension; (b) time to complete and answer questions about the reading; and (c) how an individual subjectively perceives the text. We estimated an exploratory factor analysis model with six variables: the four subjective ratings (i.e., clarity, quality of the writing, ease of understanding, and difficulty with words), an objective measure of the number of correct responses to our comprehension questions, and the number of minutes it took a rater to read the text and answer the questions. The Cronbach's alpha for these six items is 0.77. The factor analysis model returned a single factor with an eigenvalue greater than one.
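
In R, this measurement step might look like the following sketch, where ratings is a raters-by-items data frame with the six indicators under hypothetical column names (factanal() is base R's maximum-likelihood factor analysis; we do not specify our exact estimator here):

    items <- ratings[, c("clarity", "writing", "ease", "words",
                         "n_correct", "minutes")]

    # Cronbach's alpha from item variances and the total-score variance
    k <- ncol(items)
    alpha <- (k / (k - 1)) *
      (1 - sum(apply(items, 2, var)) / var(rowSums(items)))

    # One-factor solution; the factor scores serve as Rater Readability
    efa <- factanal(items, factors = 1, scores = "regression")
    rater_readability <- efa$scores[, 1]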

We next estimated a linear regression model with Rater Readability as our dependent variable. Our main independent variable was Opinion Difficulty, which was the automated readability score for each of the opinion excerpts.Footnote 10 As noted above, small values indicate low readability and large values indicate high readability. If our approach provides a valid indicator of readability for humans, we should recover a positive relationship between Opinion Difficulty and Rater Readability. We do.
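
The validation regression is a simple linear model; in R, assuming a data frame raters holding each rater-excerpt observation under the hypothetical names above:

    # Rater factor scores regressed on the automated excerpt score
    val <- lm(rater_readability ~ opinion_difficulty, data = raters)
    summary(val)  # we expect a positive, significant slope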

The results suggest our raters did, in fact, perceive differences among our excerpts. We find a positive and statistically significant relationship between Opinion Difficulty and Rater Readability (p < 0.01). In other words, excerpts identified as more challenging and less readable by our computer-generated measure yielded systematically lower comprehension levels among our human raters than clearer excerpts did. The substantive magnitude of the relationship is reasonably strong, too. When comparing a highly readable excerpt with one that is highly unreadable, we estimate a change in comprehension equivalent to about 1.25 standard deviations in our rater readability measure. This is substantively equivalent to jumping from the 30th to the 75th percentile in Rater Readability.

Having operationalized and validated our key theoretical concept of opinion clarity, we turn our attention to analyzing how justices alter opinion clarity when they expect greater opposition to their decisions. We take a two-pronged empirical approach. First, we employ an aggregate time series analysis to examine how general changes in popular sentiment lead to changes in clarity. While we recognize public mood is broad, the approach we take is the best possible given existing data—and it is consistent with existing literature (e.g., Casillas et al. 2011; Flemming and Wood 1997; Giles et al. 2008; McGuire and Stimson 2004; Mishler and Sheehan 1993). Second, we then conduct an individual case-level analysis that uses issue-specific public opinion polls taken before corresponding Supreme Court decisions (Marshall 1989, 2008) to demonstrate how justices write clearer opinions when they rule against public opinion in specific cases.

An Aggregate Analysis of Public Opinion and Clarity

We first focus on a macroanalysis of how general public mood influences Court opinions. An aggregate focus offers several benefits. It enables us to connect our analysis to the predominant analytical strategy (and measures) used in prior research. That is, most literature on the relationship between the Supreme Court and public opinion utilizes an aggregate indicator of the public's policy mood (Stimson 1991). This measure of public opinion (described below) only varies with respect to time, and thus is ideally suited for macroanalyses predicting the term-level net content of Court decision making.Footnote 11 What is more, a macroanalysis offers the best means to model the autocorrelation inherent in aggregate policy mood and the potential for a dynamic effect of public opinion on the Court.

We test the argument that justices write clearer majority opinions when they anticipate public opposition to their decisions, using data from the 1952–2011 Court terms.Footnote 12 By analyzing a time series, we examine majority opinions across the range of issues on the docket. We expect that as public opinion becomes more liberal, justices write clearer opinions among their conservative decisions. Similarly, as public opinion becomes more conservative, justices write clearer opinions among their liberal decisions. We construct two aggregate time series of the average clarity of the Court's majority opinions each term: one series examines the Court's conservative decisions over time; the other examines its liberal decisions.Footnote 13 We separate the Court's decisions into two time series models because that approach offers the most effective modeling strategy at the aggregate level to estimate how shifts in public opinion over time predict changes in the average level of opinion clarity (a variable without an inherent ideological dimension).Footnote 14

Opinion Clarity

Our dependent variable is the mean readability score of the Court's majority opinions decided each term, using the composite index we described above.

Public Mood

Our primary covariate is yearly public mood, as measured (and updated) by Stimson (1991, 1999).Footnote 15 Public mood is a longitudinal indicator of how the public's preference for more or less government shifts over time. It is an aggregate reflection of the general tenor of public opinion (and preferences over desired public policy) on the standard liberal-conservative dimension. Public Mood is the predominant indicator of public opinion in the literature that examines public opinion and the Supreme Court (e.g., Casillas et al. 2011; Enns and Wohlfarth 2013; Epstein and Martin 2011; Giles et al. 2008; McGuire and Stimson 2004; Mishler and Sheehan 1993), and is currently the most reliable aggregate measure of the public's general political orientation. Larger values of Public Mood reflect a more liberal public while smaller values reflect a more conservative public. We expect justices anticipate a greater prospect of public opposition to their conservative (liberal) decisions as public opinion becomes more liberal (conservative). Thus, among their conservative decisions, justices will write clearer opinions as public opinion becomes more liberal. Conversely, among their liberal decisions, justices will write clearer opinions as public opinion becomes more conservative. That is, we expect a positive relationship between Public Mood and our dependent variable when analyzing the conservative decision time series, and a negative relationship when analyzing the liberal decision time series.

Average Case Complexity

We include a control variable to account for the possibility that as cases become more (less) complex over time, opinions may have become less (more) clear. Average Case Complexity reflects, for each Supreme Court term, the average number of legal issues per case, as identified by the Supreme Court Database. That is, we first identify the number of legal issues addressed by the Court in each case,Footnote 16 compute the sum of those legal issues for the duration of each term, and then divide that sum by the total number of decisions issued by the justices during that term. For example, in the 2000 term, the Court issued 23 conservative decisions (involving constitutional or statutory provisions) and addressed a total of 30 issues among those cases. Thus, Average Case Complexity in the 2000 term would equal 1.304 for the conservative decision time series.Footnote 17
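
In R, the term-level aggregation might look like the following sketch, assuming a case-level data frame scdb with illustrative columns term, n_issues, and direction (these are not the Supreme Court Database's own variable names):

    cons <- scdb[scdb$direction == "conservative", ]
    avg_complexity <- tapply(cons$n_issues, cons$term, mean)  # sum of issues / n decisions
    avg_complexity["2000"]  # e.g., 30 issues over 23 decisions = 1.304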

Civil Liberties Docket

We account for the potential that shifts in the issue composition of the Court's docket over time affect the average degree of opinion clarity. In particular, we expect that a greater proportion of (noncriminal procedure) civil liberties and rights cases on the docket will produce an average opinion clarity score that is less clear. We measure Civil Liberties Docket as the percentage of cases decided each term that primarily involve a civil liberties issue, excluding criminal procedure cases.Footnote 18 Consistent with the unit of analysis described above, we compute separate civil liberties time series among conservative and liberal decisions.

Separation of Powers Constraint

We also account for the potential that greater ideological divergence between the Court and Congress might lead justices to obfuscate opinions (Owens et al. 2013). Using the Judicial Common Space (Epstein et al. 2007), we include a predictor that accounts for the ideological divergence between the Court and Congress. More specifically, when the median justice on the Court is either more liberal or more conservative than both chamber medians in Congress, we measure SOP Constraint as the absolute value of the ideological distance between the Court and the closer of the two chamber medians. If the median justice falls ideologically between the House and Senate chamber medians, SOP Constraint equals 0.Footnote 19

Methods and Results

Prior to estimating our models, we “prewhitened” our time series predictors by filtering them with ARIMA(p,d,q) noise models so all series (seemingly) reflect white noise (Box and Jenkins 1976). This step filters out the error aggregation process within each time series to ensure our inferences are not affected by serial correlation and each series’ dependence on its own past values. That is, for each predictor, we first modeled the serial correlation inherent in the time series, extracted the residuals from that model, and then used those white-noise residuals as our (filtered) time series predictor in a standard regression model. Employing prewhitened time series predictors ensures the model is balanced and that the data are i.i.d. What is more, prewhitened filtering represents a conservative analytical approach in the time series literature (e.g., Clarke and Stewart 1994; Granger and Newbold 1974), and offers the most stringent statistical test in the present analysis. Indeed, as Box-Steffensmeier et al. (2004) state, modeling prewhitened series will actually “err on the side of null findings” (525). From a substantive perspective, our statistical models will enable us to examine specifically whether “innovations” in public opinion (that are not driven by its own prior values) have an impact on Supreme Court opinion clarity (see, e.g., MacKuen et al. 1989).

Specifically, we filtered the Opinion Clarity time series using an ARIMA(0,1,1) filter for the liberal series and an ARIMA(0,1,2) filter for the conservative series, as their error aggregation exhibits long-term temporal dependence best represented by an integrated process that requires first-differencing along with a moving average error component.Footnote 20 Next, the error aggregation process of the Public Mood time series exhibits short-term temporal dependence that is best represented by a first-order autoregressive noise model to yield a white noise series (i.e., an AR(1) filter). The Average Case Complexity time series requires an ARIMA(1,0,1) filter to generate a white noise series, among both liberal and conservative decisions. The error aggregation process in the Civil Liberties Docket predictor is best filtered using an ARIMA(1,0,1) model, among both the liberal and conservative time series. Lastly, we filtered the SOP Constraint time series using an ARIMA(1,0,0) noise model.Footnote 21
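
In base R, this filtering step reduces to fitting each stated noise model and retaining the residuals; a sketch with illustrative series names:

    # Fit the noise model, keep the residuals as the white-noise series
    prewhiten <- function(x, order) residuals(arima(x, order = order))

    clarity_lib_w <- prewhiten(clarity_lib, c(0, 1, 1))   # ARIMA(0,1,1)
    clarity_con_w <- prewhiten(clarity_con, c(0, 1, 2))   # ARIMA(0,1,2)
    mood_w        <- prewhiten(mood,        c(1, 0, 0))   # AR(1)
    complexity_w  <- prewhiten(complexity,  c(1, 0, 1))   # ARIMA(1,0,1)
    civlib_w      <- prewhiten(civlib,      c(1, 0, 1))   # ARIMA(1,0,1)
    sop_w         <- prewhiten(sop,         c(1, 0, 0))   # ARIMA(1,0,0)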

With prewhitened time series in hand, we turn to our statistical models. We employ OLS and estimate three time series regression models. To examine the relationship between Opinion Clarity and Public Mood over time, we first present a baseline model that estimates the simple bivariate relationship. Next, we consider a second model specification that accounts for changes in the average case context by including the Average Case Complexity and Civil Liberties Docket control predictors. Last, the third model specification includes all control predictors by adding the SOP Constraint indicator.Footnote 22

Table 1 presents our results. Across every model specification, Public Mood exhibits the expected impact on Opinion Clarity. In the time series of conservative decisions, the statistically significant, positive coefficients indicate the average opinion becomes clearer as public opinion becomes more liberal. This result is consistent across multiple specifications, including a simple baseline model and models that control for case complexity, docket composition, and SOP constraints over time. Turning to the time series of liberal decisions, Public Mood again displays the expected coefficients across all model specifications. As public opinion shifts in a conservative direction, the average liberal opinion becomes increasingly clear.Footnote 23

Table 1. The Aggregate Impact of Public Opinion on Supreme Court Majority Opinion Clarity, 1952–2011

Notes: Table entries are OLS coefficients with standard errors in parentheses. **p < 0.05; *p < 0.10 (one-tailed). The dependent variable represents the average Supreme Court majority opinion readability score each term (among decisions involving a constitutional provision or federal statute), 1952–2011, with larger values reflecting more clarity. All variables have been “prewhitened” with ARIMA(p,d,q) filters to yield white noise time series.

What is more, the magnitude of Public Mood's effect on Opinion Clarity suggests it is a substantively meaningful predictor of clarity. As Figure 2 shows, when viewing the conservative decision time series and statistical results from model 1(c), a shift from the minimum to maximum level of liberalism in Public Mood exhibits an expected change of nearly 2.00 units on the prewhitened opinion clarity scale.Footnote 24 That is, a shift in Public Mood can generate a change in opinion clarity that exceeds 1.50 standard deviations. Likewise, when viewing the liberal decision time series (in model 2(c)), a shift from the minimum to maximum level of conservatism in Public Mood also yields a similar expected change of approximately 1.70 units on the clarity scale. When viewing the control predictors, the results suggest that, among the Court's liberal decisions, greater (average) issue complexity and a greater proportion of (noncriminal procedure) civil liberties and rights cases both lead to an average opinion clarity score that is less clear.

Figure 2. The Impact of Public Opinion on Supreme Court Opinion Clarity. Estimates reflect the predicted level of opinion clarity across the range of public mood, with larger clarity scores representing more readable opinions. The vertical whiskers denote 95-percent confidence intervals. Predicted effects among conservative and liberal decisions are computed using regression results from models 1(c) and 2(c), respectively. Differences across values are statistically significant.

An Individual-Level Analysis of Public Opinion and Clarity

The strength of the last section's aggregate analysis is that it utilizes a general indicator of public opinion that predicts aggregate opinion language across the range of issues on the Court's docket. Yet, the general public mood measure is precisely that—a general indicator. As such, it cannot fully capture differences in public opinion across specific issues the Court faces. So, this section utilizes issue-specific polling data to examine the clarity of individual cases. By using such targeted data, we offer a more precise match between Court behavior and public opinion, and can offer further support for our argument.

Of course, as scholars who study public opinion and the Supreme Court know, issue-specific (and temporally appropriate) public opinion data are scarce. Marshall (2008) put it best: “Unfortunately, no published index of scientific, nationwide polls that match Supreme Court decisions exists” (29). Fortunately for us, Marshall performed an exhaustive search for polls that match public opinion with issues in Supreme Court cases (Marshall 1989, 2008). Marshall identified polls by searching sources for key words such as “Supreme Court” or key words from the issues discussed in particular Court opinions. He scoured many sources to find these matches, including the Roper Archive of polls, published Gallup polls, “The Polls” section in Public Opinion Quarterly, and various newspaper and magazine polls. If a case had multiple polls, Marshall selected the poll closest in time to the Court's decision. All polls are national samples, with each poll having at least 600 respondents, though many have far larger sample sizes. For a complete discussion and list of his criteria, see Marshall (2008: 29–33) and Marshall (1989: 75–77), respectively. For our individual case-level analysis, we use Marshall's poll question-case matches among polls that preceded the relevant Court decisions.

We have 106 poll questions matched to specific issues decided in Supreme Court cases that span the 1946–2004 terms.Footnote 25 Importantly, these 106 observations involve a wide range of legal issue areas. There are 26 observations in criminal procedure, 23 observations in civil rights, 24 observations in First Amendment, 12 observations in privacy, and the remaining observations spread across issues such as due process, unions, economic activity, judicial power, and federalism. While the bulk of our observations come from cases that primarily involve issues of civil rights and liberties, we note the majority of the modern Court's docket also has focused on those cases.

Opinion Clarity

Our dependent variable, Opinion Clarity, represents the composite readability score of each majority opinion in our sample.

Inconsistent With Public Opinion

Our main covariate of interest in this analysis measures whether the Court rules contrary to prevailing public opinion, as determined by Marshall's polls. We employ Marshall's measure of an “inconsistent decision,” reflecting when a Court decision “disagreed in substance with a poll majority (or plurality)” (Marshall 2008: 31). Marshall's measure is appropriate because it captures the essence of our theoretical argument—the Court is concerned about the clarity of its opinions when it decides against an oppositional body larger than the supporting body. Therefore, we operationalize Inconsistent With Public Opinion as a dichotomous measure, with observations coded as 1 if there was more opposition than support for the Court's position; 0 otherwise (i.e., it is 0 when the Court rules consistent with public opinion or the poll margin was within the margin of error). We have no theoretical reason to expect the Court to consider the precise size of the opposition once it exceeds the majority of the public. That is, we have no reason to expect that justices will write a clearer opinion with 70 percent opposition compared to, say, 60 percent opposition.Footnote 26
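
A sketch of this coding rule in R, with hypothetical column names for the shares opposing and supporting the Court's position and the poll's margin of error (Marshall's data are not distributed in this form):

    # 1 only when opposition exceeds support beyond the margin of error;
    # consistent rulings and within-margin splits are 0
    polls$inconsistent <- as.integer(
      (polls$pct_oppose - polls$pct_support) > polls$margin_of_error)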

For an example coding of a case, consider Clinton v. City of New York (1998), which struck down the line-item veto. As Marshall (2008) reports: “Gallup Poll asked respondents: ‘As you may know, Congress recently approved legislation called the line item veto, which for the first time allows the President to veto some items in a spending bill without vetoing the entire bill. Do you generally favor or oppose the line item veto?’ A 65-to-24 percent majority favored the line item veto, [hence] Clinton v. City of New York was coded as ‘inconsistent’” (Marshall 2008: 31).Footnote 27

Controls

To ensure the robustness of our empirical tests, we also include a number of control variables likely to influence the clarity of Court opinions. As described above, we utilize the total number of legal issues addressed in each case (according to the Supreme Court Database) to measure Case Complexity, though the results are substantively consistent when substituting the number of amicus briefs filed in each case as the indicator of case complexity. We also examine whether the decision was supported by a minimum winning coalition or the full complement of justices, as the degree of consensus and need to compromise with majority members might lead opinions to become less or more clear. We code Minimum Winning Coalition as 1 if the majority coalition was minimum winning; 0 otherwise. We code Unanimous Decision as 1 if no justices in the case registered a dissent; 0 otherwise. Next, we control for Judicial Review—cases where the Court's opinion struck down a federal or state statute, or local ordinance as unconstitutional. We code this variable as 1 if, according to the Supreme Court Database, the Court struck down a law as unconstitutional; 0 otherwise. We also control for another change in the legal status quo by accounting for when the Court Alters Precedent, coded as 1 if the Supreme Court Database so declares; 0 otherwise.

Next, we account for the separation of powers dynamic. We measure SOP Constraint as the absolute value of the distance between the median justice on the Court and the closest chamber median. When the median justice falls between the House and Senate chamber medians, SOP Constraint equals 0. When the median is more liberal or conservative than both the House and Senate medians, SOP Constraint equals the absolute value of the distance between that justice and the closest pivot.Footnote 28 Next, following Owens and Wedeking (2011), we account for variance in opinion clarity across different legal issue areas on the Court's docket. Thus, we include fixed effects for the primary issue area, specifying the criminal procedure category as the baseline.Footnote 29 Last, we include fixed effects for the majority opinion author to account for differences in the writing styles of individual justices.

Methods and Results

We fit OLS models with robust standard errors (the significant impact of public opinion on opinion clarity does not change if we instead use classical standard errors). Our results appear in Table 2, and they support our hypothesis. Model 1 shows the bivariate relationship between ruling against public sentiment and opinion clarity. The coefficient is statistically significant and positively signed, indicating that when the Court issues a ruling inconsistent with public opinion, it delivers a significantly clearer opinion.

Table 2. An Individual-Level View of the Impact of Public Sentiment on Supreme Court Majority Opinion Content

Notes: Table entries are OLS regression estimates with robust standard errors in parentheses. **p < 0.05; *p < 0.10 (one-tailed). The dependent variable represents the Supreme Court majority opinion readability score for each case, with larger values reflecting more clarity. The sample of Court cases and public opinion polls comes from Marshall (1989, 2008), among those where polls temporally precede the Court's decision. Model 3 includes, but does not display, fixed effects for the majority opinion author (among those justices who wrote at least two opinions in the sample) and the primary issue area of each case.

Models 2 and 3 check the robustness of this result by first including the control predictors (but without fixed effects controls), and then adding the issue area and majority opinion writer fixed effects, respectively. This last model allows us to control for the possibility that opinion clarity is driven by idiosyncratic factors—related to different legal issue contexts and justices’ writing styles—that are unrelated to our theoretical argument.Footnote 30 As the table reveals, the public opinion measure continues to be statistically significant across all models and has a magnitude that is statistically indistinguishable—via a Wald test—from the simple bivariate model. These results are robust to a host of alternative model specifications that include a litany of other potential controls not shown here. Controlling for case salience, ideological direction of decision, case disposition (i.e., reverse/affirm), or court term—to name only a few—does not change our results (additional details are available in the online appendix).
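
In R, the estimation behind models 1 and 2 might look like the following sketch (the sandwich and lmtest packages supply the robust variance estimator; all variable names are illustrative, not our replication code):

    library(sandwich)
    library(lmtest)

    m1 <- lm(clarity ~ inconsistent, data = cases)               # Model 1
    m2 <- lm(clarity ~ inconsistent + complexity + min_winning +
               unanimous + judicial_review + alters_precedent +
               sop_constraint, data = cases)                     # Model 2

    # Heteroskedasticity-robust (sandwich) standard errors
    coeftest(m2, vcov = vcovHC(m2, type = "HC1"))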

To help understand the magnitude of the estimated relationship between Inconsistent With Public Opinion and the clarity of each majority opinion, we estimated predicted values using the empirical results from Model 2. Specifically, when the Court decides a case consistent with public opinion (while holding all other predictors at their median values), its opinion readability is approximately −0.32. When the Court makes a decision that is inconsistent with public sentiment, however, the readability of the opinion is approximately +1.31, which is above the mean and indicates a substantially clearer opinion—more than three-eighths of a standard deviation increase across the sample of opinion readability. Thus, a decision inconsistent with prevailing public opinion polling yields an expected level of clarity approximately 1.63 units clearer compared to a decision that conforms to public sentiment.
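
Predicted values of this kind follow directly from the fitted model; a sketch in R, continuing the hypothetical names above:

    # Hold the controls at their medians, vary the public opinion indicator
    med <- lapply(cases[, c("complexity", "min_winning", "unanimous",
                            "judicial_review", "alters_precedent",
                            "sop_constraint")], median)
    predict(m2, newdata = data.frame(inconsistent = c(0, 1), med))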

Among the control predictors, the results in models 2 and 3 both suggest opinions in more complex cases—those addressing a greater number of legal issues—are significantly less clear. When the Court addresses a single legal issue in a case (while holding all other predictors at their median values), the predicted opinion clarity score is −0.32. Yet, the predicted level of opinion clarity decreases to −2.10 when increasing Case Complexity to five legal issues (i.e., one standard deviation above the sample mean).

One further point bears emphasis. The reader might be concerned about the role of political salience. The cases we examined in the individual-level analysis are among the most salient on the Court's docket. For instance, 89 of the 106 cases appeared on the front page of the New York Times (Epstein and Segal 2000); 76 of 106 appear on Congressional Quarterly's list of landmark cases. Because pollsters typically only write questions on the most politically important issues of the day, we were limited by necessity to look only at predominantly salient cases in the individual-level analysis. A potential consequence of this case selection is that the limited sample may reflect the upper bound of public opinion's impact on opinion clarity, at least to the extent that one should expect justices to have a greater incentive to be strategic in these cases. Nevertheless, we split our aggregate time-series data into salient versus nonsalient cases. Though we do not have sufficient data to make inferences about salient cases—there are not enough liberally and conservatively decided salient cases to compute an aggregate each term—we do have enough data on nonsalient cases. When we examine only nonsalient cases, the results are substantively the same as what we present above, suggesting our results are not solely confined to salient cases. (See the online appendix for these aggregate-level results for the nonsalient cases.)

In short, whether examining the impact of public opinion on Supreme Court opinion clarity at either the aggregate or case level, the empirical results suggest justices write opinions with an eye toward anticipated public opinion.

Conclusion

Scholars have paid considerable attention to the relationship between the Court and public opinion, but the results have been mixed. Surprisingly, little attention has been devoted to how public opinion influences the content of the Court's opinions. We test a novel theory of how public opinion should affect opinion content. Our findings offer something new. They show public opinion does in fact influence the Supreme Court in systematic ways. In this capacity, the results have the potential to reframe a recurring debate. Indeed, the strategic model of judicial decision making argues justices are likely to respond to public opinion. Our results support that theoretical claim—in part. While scholars have long examined various Court behaviors (e.g., voting) for evidence of the public's influence, perhaps we need to pay more attention to the content of the majority coalition's opinion language (see, e.g., Black et al. 2016).

The consequences of these findings are important. They suggest justices are aware of their interdependence and employ strategies to evade obstruction. By writing clearer opinions in the face of public opposition, justices aggressively seek out their goals. Writing clearer opinions becomes all the more important if the public perceives the Court in political terms (Bartels and Johnston 2012). Thus, while public opinion influences judicial behavior, justices appear to respond in an effort to accomplish their broader goals. And while opinion clarity will not give the justices freedom to do whatever they wish, it is something they seem to use to mitigate possible negative responses to their counter-majoritarian opinions.

For those who support enhanced judicial accountability (and, we suppose, for those who oppose it), these findings are bittersweet. Yes, public opinion can influence the Court's behavior. Our results suggest justices do indeed alter how they write opinions as a consequence of changing public mood. But therein lies the rub: public mood seems to affect justices’ opinion content, but scholars disagree about whether it affects their votes. To be sure, the jury is still out on whether and to what extent public opinion influences justices’ votes, but the influence of public opinion might just be an example where the packaging changes while the product does not.

While these results do not speak directly to judicial legitimacy, we suspect they relate to it indirectly. If justices can alter the content of their opinions to avoid or mitigate public rebuke, it stands to reason they could alter it to enhance the Court's reputation. Do justices, for example, garner more support for the Court when they speak in positive tones? When they write more legalistically? When they are collegial toward one another in separate opinions? These factors might enhance legitimacy, just as negative language might harm the Court's reputation. So, though we do not examine legitimacy here, we hope future scholars will analyze the link between opinion content and legitimacy.

Finally, we believe the approach we used may extend beyond the Court. The public may influence the clarity of bureaucratic outputs, such as agency regulations, with consequences for how those policies are interpreted and complied with. Even though bureaucrats (like Supreme Court justices) are not elected, those who oversee and fund their decisions are directly subject to popular will, so the indirect electoral connection exists there as well. Whether bureaucrats adjust by altering the clarity of their policies is an empirical question to be tested. It is also worth emphasizing, in this vein, that our approach to measuring clarity could be adopted elsewhere. We uncovered strong evidence that our automated readability scores tap into what people actually perceive as the clarity and readability of sophisticated texts such as legal opinions. We have every reason to believe other scholars who analyze policymakers and complex texts could adopt our strategy as well.

Cases Cited

Santa Fe Independent School District v. Doe, 530 U.S. 290 (2000).

Planned Parenthood of Southeastern Pennsylvania v. Casey, 505 U.S. 833 (1992).

Footnotes

We thank Thomas Marshall for sharing his public opinion poll data and workshop participants at the University of Maryland, College Park for helpful feedback. We are responsible for all interpretations and errors.

1 In an important article, Staton and Vanberg (2008) theorize that high levels of clarity can help the detection of noncompliance and may therefore aid courts seeking to push executive actors toward compliance by exposing that noncompliance. But, they argue, this strategy works only for courts with high levels of legitimacy. Courts with low levels of legitimacy will tend toward ambiguity so as to mask noncompliance: opportunistic officials facing an illegitimate court will simply ignore it, and the court, in turn, will not want such noncompliance exposed. We find this argument compelling. Nevertheless, we do not examine here how the Court uses clarity to compel compliance; we examine only how it uses clarity to manage public support.

2 This could be important given the variety of media sources (e.g., sensationalist versus sober) that citizens use (Johnston and Bartels 2010).

3 Justices may generally regard opinion clarity as a desirable quality that affects how relevant audiences perceive their opinions (e.g., Baum 2006). The enhanced scrutiny and attention that accompany a counter-majoritarian decision should magnify this incentive.

4 At any rate, if one believes the bottom line is the only thing that matters to the public or elites, then studying opinion content would be superfluous. It would also ignore the hundreds of books and articles, many of them written by judges and justices, that bespeak the importance of clear writing, to say nothing of legal education in this country, which focuses on reading and dissecting legal opinions.

5 We note that policymakers use a method very close to our definition of clarity to evaluate judicial performance and make recommendations to voters. For example, some states that use retention elections to retain state court judges have created judicial performance evaluations to help voters determine whether to retain those judges. See http://www.americanbar.org/content/dam/aba/publications/judicial_division/aba_blackletterguidelines_jpe_wcom.authcheckdam.pdf.

6 We estimated the effects of cognitive clarity and found them to be statistically nonsignificant. For the reasons we stated earlier, however, we believe rhetorical clarity is the more relevant analysis for this exercise.

7 We downloaded these opinions from LexisNexis as text files. Prior to processing them in koRpus, we edited them with an R script to ensure we were analyzing only the opinion content (as opposed to the opinion syllabus, headnotes, or other additional information contained within LexisNexis-formatted opinions).
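
A minimal R sketch of this preprocessing step follows. This is not the exact script we used; the directory name and the "OPINION" section marker are illustrative assumptions, and real LexisNexis files may require different markers.

```r
# Sketch only: assumes (hypothetically) the opinion body begins at a line
# matching "OPINION" and that everything before it (syllabus, headnotes)
# should be discarded.
clean_opinion <- function(path) {
  lines <- readLines(path, warn = FALSE)
  start <- grep("^\\s*OPINION", lines)[1]    # assumed start of opinion body
  if (is.na(start)) {
    return(paste(lines, collapse = "\n"))    # fall back to the full text
  }
  paste(lines[start:length(lines)], collapse = "\n")
}

files <- list.files("opinions_txt", pattern = "\\.txt$", full.names = TRUE)
texts <- vapply(files, clean_opinion, character(1))
```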

8 Formulas for all of the measures used can be found on pages 78–84 of the documentation for koRpus (version 0.05-5, dated 1/27/15).
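
The sketch below illustrates how a single opinion can be scored with koRpus. It is not our exact pipeline: it follows the current package API (which may differ slightly from version 0.05-5), assumes the koRpus and koRpus.lang.en packages are installed, and shows only a few illustrative indices rather than the complete set underlying our measure.

```r
library(koRpus)
library(koRpus.lang.en)

# Placeholder text standing in for a cleaned opinion file.
txt <- "We granted certiorari to resolve a conflict among the circuits.
        The judgment of the Court of Appeals is affirmed."

tokens <- tokenize(txt, format = "obj", lang = "en")  # built-in tokenizer
scores <- readability(tokens,
                      index = c("Flesch.Kincaid", "Coleman.Liau",
                                "SMOG", "FOG"))
summary(scores)  # one row of results per readability formula
```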

9 We employed 16 different levels of readability (from 16 different excerpts) in the examples. These 16 different values ranged from −23 to −15 on the difficult-to-read end and from +9 to +15 on the easy-to-read end of the Opinion Clarity measure. See the online appendix for additional details and examples.

10 We also included a series of controls. See the online appendix for additional details.

11 This analytical strategy necessarily holds individual-level factors constant. However, we further address such effects in the next section, as we leverage available issue-specific opinion polls and match them with the content of individual Court decisions. And, while we have developed a micro-level theory to inform a macroanalysis, we keep inferences in this section at the macro level. See Kramer (1983) for the classic account of the virtues of macro-level analysis (as it relates to the individual level); see also Erikson et al. (2002).

12 We begin the analysis in 1952 because the Public Mood time series begins in that year.

13 We use the “decisionDirection” variable in the Supreme Court Database to identify liberal and conservative decisions. See http://scdb.wustl.edu/.

14 We examine the Court's opinions that involve constitutional or federal statutory provisions, those most relevant for the theory. We identify these cases using the "lawType" variable in the Supreme Court Database, aggregating those decisions where "lawType" is coded as 1, 2, or 3 among the Court's signed opinions and judgments (i.e., where the Database's "decisionType" variable is coded as 1 or 7). Thus, we include federal and state cases that involved federal legal issues, and omit cases about which the public is unlikely to care: those involving "Court rules," "other" cases, cases involving "infrequently litigated statutes," cases that involved "state or local law or regulation," and cases with "no legal provision." We note, however, that our results are generally robust to including most of these cases, such as those decided by per curiam opinion (following oral argument) and those involving infrequently litigated statutes. See the online appendix for more details.
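
A brief sketch of this case-selection rule appears below. The SCDB variable names ("lawType," "decisionType") are the real database fields; the file name is a placeholder for the case-centered database release.

```r
# Sketch of the selection rule, not our exact code.
scdb <- read.csv("SCDB_caseCentered.csv", stringsAsFactors = FALSE)

sample_cases <- subset(
  scdb,
  lawType %in% c(1, 2, 3) &      # constitutional or federal statutory provisions
    decisionType %in% c(1, 7)    # signed opinions and judgments of the Court
)
```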

15 Given that the public mood indicator is measured by calendar year, we match it with the corresponding Supreme Court term so there is a nine-month lag prior to the start of the term. This ensures changes in public opinion temporally precede the justices' decisions and opinion writing (see, e.g., Casillas et al. 2011). We use updated estimates of public mood (2/13/12 data release) retrieved from: http://www.unc.edu/~cogginse/Policy_Mood.html.
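
A minimal sketch of this matching rule follows: the mood score for calendar year t is paired with the Court term that begins in October of year t, so mood measurement starts roughly nine months before the term opens. The data frames, column names, and mood values below are placeholders; the real series comes from the URL above.

```r
# Placeholder mood series standing in for the Stimson estimates.
mood  <- data.frame(year = 1952:2011, mood = rnorm(60, mean = 60, sd = 3))
terms <- data.frame(term = 1952:2011)

# Pair each term with the mood measured over the calendar year it begins in.
terms$public_mood <- mood$mood[match(terms$term, mood$year)]
```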

16 We count the maximum number of legal issues in a docket using the "caseIssuesId" variable in the Case Centered Data Organized by Legal Provision (with split votes).

17 Alternatively, following recent scholarship suggesting that greater amici participation leads to greater case complexity (Collins 2008), one could tap into case complexity using the average number of amicus briefs submitted each term. Specifically, we removed the temporal trend inherent in amicus participation and generated a differenced measure of the change in amicus activity from one term to the next. All subsequent empirical results are robust (and substantively consistent) when substituting this alternative indicator of complexity.
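
In sketch form, this alternative indicator amounts to first-differencing the term-level average; the values below are placeholders for the per-term averages.

```r
# Change in average amicus participation from one term to the next.
avg_amicus   <- c(3.1, 3.4, 3.2, 3.9, 4.1)   # placeholder term averages
delta_amicus <- diff(avg_amicus)             # differenced measure
```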

18 We utilize the "issueArea" variable in the Supreme Court Database and include all cases primarily involving an issue of civil rights, the First Amendment, due process, or privacy. We exclude criminal procedure issues because Owens and Wedeking (2011) show those opinions are generally clearer than other civil liberties opinions. The impact of public mood does not change, however, when criminal procedure issues are included in this control predictor.

19 We note that our sample size for the aggregate model is reduced when including the SOP Constraint variable due to missingness in the JCS scores during the last few terms of the sample.

20 The results of the Augmented Dickey-Fuller (with various specified lag lengths), Phillips-Perron, and DF-GLS (across 10 lags) unit root tests all suggest that the series is nonstationary, and the result of the KPSS stationarity test (across 10 lags) is consistent with this conclusion. What is more, the autocorrelation function (ACF) and partial autocorrelation function (PACF) both exhibit evidence consistent with this error diagnosis. A Ljung-Box white noise test confirms that filtering the Opinion Clarity time series with an ARIMA(0,1,1) noise model among liberal decisions, and an ARIMA(0,1,2) model among conservative decisions, yields white noise residuals.
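
These diagnostics can be reproduced in sketch form with standard R tools (the tseries package for the ADF, Phillips-Perron, and KPSS tests; base arima for the noise model). The clarity series below is a simulated placeholder, and DF-GLS is available separately via, e.g., urca::ur.ers().

```r
library(tseries)

clarity <- cumsum(rnorm(60))   # placeholder for a term-level Opinion Clarity series

adf.test(clarity)    # Augmented Dickey-Fuller unit root test
pp.test(clarity)     # Phillips-Perron unit root test
kpss.test(clarity)   # KPSS test of stationarity

fit <- arima(clarity, order = c(0, 1, 1))                # ARIMA(0,1,1) noise model
Box.test(residuals(fit), lag = 10, type = "Ljung-Box")   # white-noise check
```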

21 The diagnosis of Public Mood as an AR(1), Civil Liberties Docket as an ARMA(1,1), SOP Constraint as an AR(1), and Average Case Complexity as an ARMA(1,1) error process is consistent with the visual evidence apparent in each series' ACF and PACF. And a Ljung-Box white noise test confirms that each filtered time series is seemingly white noise.
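
Prewhitening a predictor amounts to fitting the diagnosed noise model and retaining the residuals as the filtered series. The sketch below simulates an AR(1)-like mood series for illustration; the other predictors are filtered analogously with their own ARMA specifications.

```r
# Simulated stand-in for the Public Mood series.
mood_series <- arima.sim(model = list(ar = 0.7), n = 60) + 60

mood_fit   <- arima(mood_series, order = c(1, 0, 0))   # AR(1) noise model
mood_white <- residuals(mood_fit)                      # prewhitened Public Mood

Box.test(mood_white, lag = 10, type = "Ljung-Box")     # confirm white noise
```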

22 One might also estimate these model specifications while including a lagged dependent variable (LDV), thus enabling Public Mood to have a dynamic impact on opinion clarity (i.e., a change in public opinion at time t might affect opinion clarity across future time periods). The subsequent results and inferences are consistent when including the LDV.

23 We also considered alternative modeling strategies to evaluate the robustness of our empirical results across both the conservative and liberal time series. These alternative models yield substantively consistent results. Specifically, the empirical results are consistent when fractionally differencing the Opinion Clarity dependent variable instead of filtering it with an ARIMA(0,1,1) noise model (see, e.g., Hosking 1981; Tsay and Chung 2000). See the online appendix for these results.
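
The fracdiff package provides a sketch of this alternative: estimate the fractional integration parameter d, then difference the series by that amount. The simulated long-memory series below is a placeholder, not our data.

```r
library(fracdiff)

clarity <- fracdiff.sim(100, d = 0.3)$series   # placeholder long-memory series

d_hat      <- fracdiff(clarity)$d              # estimated fractional d
clarity_fd <- diffseries(clarity, d_hat)       # fractionally differenced series
```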

24 The prewhitened Opinion Clarity scale exhibits a range of −2.45 to +2.62, with a standard deviation of 1.23 units.

25 We exclude poll questions that were matched to denials of certiorari. The 106 observations in our dataset include four cases that appear more than once because Marshall collected multiple poll questions for cases that addressed multiple issues. For example, Planned Parenthood v. Casey (1992) represents four observations in Marshall's original data because it addressed four issues: (1) informed consent; (2) husband notification; (3) one-parent consent for minors' abortions; and (4) the 24-hour waiting rule. Our substantive findings do not differ if we exclude the multiple observations for a single case.

26 The subsequent results are substantively consistent, though, when substituting the logged magnitude of public opposition (given the measure's skewed distribution) for those observations when there was more opposition than support for the Court's position. See the online appendix for these results.

27 For the distribution of our 106 observations, we adapt Marshall's data: 37 observations were labeled "inconsistent" with public opinion, and 69 were labeled either unclear or consistent with public opinion. We combine "unclear" with "consistent" because unclear poll results, which usually meant multiple conflicting polls, would allow the Court to write an opinion as if support outweighed opposition.

28 Our results are substantively consistent if we use alternate legislative pivots and/or the median of the case's majority coalition.

29 We use the “issueArea” variable in the Supreme Court Database to identify the primary issue area within each case.

30 The online appendix contains a full table of results for the fixed effects (and figures of the predicted values). We also considered a number of alternative random-effects approaches (in the online appendix) to account for the effect of issue area and opinion author. We obtain identical substantive results regarding the impact of public opinion in these models.

References

Arnold, R. Douglas (1990) The Logic of Congressional Action. New Haven: Yale Univ. Press.
Bartels, Brandon L. & Johnston, Christopher D. (2012) “Political Justice? Perceptions of Politicization and Public Preferences Toward the Supreme Court Appointment Process,” 76 Public Opinion Q. 105–16.
Bartels, Brandon L. & Johnston, Christopher D. (2013) “On the Ideological Foundations of Supreme Court Legitimacy in the American Public,” 57 American J. of Political Science 184–99.
Baum, Lawrence (2006) Judges and Their Audiences: A Perspective on Judicial Behavior. Princeton: Princeton Univ. Press.
Benson, Robert W. & Kessler, Joan B. (1987) “Legalese v. Plain Language: An Empirical Study of Persuasion and Credibility in Appellate Brief Writing,” 20 Loyola of Los Angeles Law Rev. 301–22.
Black, Ryan C., Owens, Ryan J., & Brookhart, Jennifer L. (2015) “We Are the World: The U.S. Supreme Court's Use of Foreign Sources of Law,” British J. of Political Science. FirstView Article. Available at: http://dx.doi.org/10.1017/S0007123414000490 (accessed 2 June 2016).
Black, Ryan C., et al. (2016) US Supreme Court Opinions and their Audiences. Cambridge: Cambridge Univ. Press.
Box, George E.P. & Jenkins, Gwilym M. (1976) Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day.
Box-Steffensmeier, Janet M., De Boef, Suzanna, & Lin, Tse-Min (2004) “The Dynamics of the Partisan Gender Gap,” 98 American Political Science Rev. 515–28.
Bryan, Amanda C. & Kromphardt, Christopher D. (forthcoming) “Public Opinion, Public Support, and Counter-Attitudinal Voting on the U.S. Supreme Court,” Justice System J.
Caldeira, Gregory A. (1986) “Neither the Purse Nor the Sword: Dynamics of Public Confidence in the Supreme Court,” 80 American Political Science Rev. 1209–226.
Casillas, Christopher J., Enns, Peter K., & Wohlfarth, Patrick C. (2011) “How Public Opinion Constrains the U.S. Supreme Court,” 55 American J. of Political Science 74–88.
Casper, Jonathan D., Tyler, Tom, & Fisher, Bonnie (1988) “Procedural Justice in Felony Cases,” 22 Law & Society Rev. 483–508.
Clarke, Harold D. & Stewart, Marianne C. (1994) “Prospections, Retrospections and Rationality: The ‘Bankers’ Model of Presidential Approval Reconsidered,” 38 American J. of Political Science 1104–123.
Collins, Paul M. Jr (2008) “Amici Curiae and Dissensus on the U.S. Supreme Court,” 5 J. of Empirical Legal Studies 143–70.
Corley, Pamela C., Collins, Paul M. Jr., & Calvin, Bryan (2011) “Lower Court Influence on U.S. Supreme Court Opinion Content,” 73 J. of Politics 31–44.
Corley, Pamela C., Howard, Robert M., & Nixon, David C. (2005) “The Supreme Court and Opinion Content: The Use of the Federalist Papers,” 58 Political Research Q. 329–40.
DuBay, William H. (2004) The Principles of Readability. Costa Mesa, CA: Impact Information. Available at: http://www.impact-information.com/impactinfo/readability02.pdf (accessed 2 June 2016).
Durr, Robert H., Martin, Andrew D., & Wolbrecht, Christina (2000) “Ideological Divergence and Public Support for the Supreme Court,” 44 American J. of Political Science 768–76.
Enns, Peter K. & Wohlfarth, Patrick C. (2013) “The Swing Justice,” 75 J. of Politics 1089–107.
Epstein, Lee & Knight, Jack (1998) The Choices Justices Make. Washington: CQ Press.
Epstein, Lee & Martin, Andrew D. (2011) “Does Public Opinion Influence the Supreme Court? Possibly Yes (But We're Not Sure Why),” 13 University of Pennsylvania J. of Constitutional Law 263–81.
Epstein, Lee & Segal, Jeffrey A. (2000) “Measuring Issue Salience,” 44 American J. of Political Science 66–83.
Epstein, Lee, et al. (2007) “The Judicial Common Space,” 23 J. of Law, Economics, & Organization 303–25.
Erikson, Robert S., MacKuen, Michael B., & Stimson, James A. (2002) The Macro Polity. New York: Cambridge Univ. Press.
Flemming, Roy B. & Wood, B. Dan (1997) “The Public and the Supreme Court: Individual Justice Responsiveness to American Policy Moods,” 41 American J. of Political Science 468–98.
Friedersdorf, Conor (2013) “Why Clarence Thomas Uses Simple Words in His Opinions,” The Atlantic, February 20, 2013. Available at: http://www.theatlantic.com/national/archive/2013/02/why-clarence-thomas-uses-simple-words-in-his-opinions/273326/ (accessed 2 June 2016).
Gibson, James L. & Caldeira, Gregory A. (2011) “Has Legal Realism Damaged the Legitimacy of the U.S. Supreme Court?” 45 Law & Society Rev. 195–219.
Gibson, James L., Caldeira, Gregory A., & Spence, Lester Kenyatta (2003) “Measuring Attitudes toward the United States Supreme Court,” 47 American J. of Political Science 354–67.
Gibson, James L. & Nelson, Michael J. (2015) “Is the U.S. Supreme Court's Legitimacy Grounded in Performance Satisfaction and Ideology?” 59 American J. of Political Science 162–74.
Giles, Michael W., Blackstone, Bethany, & Vining, Richard L. (2008) “The Supreme Court in American Democracy: Unraveling the Linkages Between Public Opinion and Judicial Decision Making,” 70 J. of Politics 293–306.
Granger, Clive W.J. & Newbold, Paul (1974) “Spurious Regressions in Econometrics,” 2 J. of Econometrics 111–20.
Hosking, Jonathan R.M. (1981) “Fractional Differencing,” 68 Biometrika 165–76.
Jacobson, Gary C. (1987) “The Marginals Never Vanished: Incumbency and Competition in Elections to the U.S. House of Representatives, 1952–1982,” 31 American J. of Political Science 126–41.
Johnston, Christopher D. & Bartels, Brandon L. (2010) “Sensationalism and Sobriety: Differential Media Exposure and Attitudes Toward American Courts,” 74 Public Opinion Q. 260–85.
Jones, Douglas, et al. (2005) “Measuring Human Readability of Machine Generated Text: Three Case Studies in Speech Recognition and Machine Translation,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), Vol. 5, pp. v/1009–v/1012. doi: 10.1109/ICASSP.2005.1416477.
Key, V.O. Jr (1961) Public Opinion and American Democracy. New York: Alfred A. Knopf.
Kramer, Gerald H. (1983) “The Ecological Fallacy Revisited: Aggregate- versus Individual-Level Findings on Economics and Elections, and Sociotropic Voting,” 77 American Political Science Rev. 92–111.
Leben, Steve (2011) “An Expectation of Empathy,” 51 Washburn Law J. 49–59.
MacKuen, Michael B., Erikson, Robert S., & Stimson, James A. (1989) “Macropartisanship,” 83 American Political Science Rev. 1125–142.
Marshall, Thomas R. (1989) Public Opinion and the Supreme Court. Boston: Unwin Hyman.
Marshall, Thomas R. (2008) Public Opinion and the Rehnquist Court. Albany: State Univ. of New York Press.
McGuire, Kevin T. & Stimson, James A. (2004) “The Least Dangerous Branch Revisited: New Evidence on Supreme Court Responsiveness to Public Preferences,” 66 J. of Politics 1018–035.
Mishler, William & Sheehan, Reginald S. (1993) “The Supreme Court as a Countermajoritarian Institution? The Impact of Public Opinion on Supreme Court Decisions,” 87 American Political Science Rev. 87–101.
Nelson, Michael J. (N.d.) “Elections and Explanations: Judicial Elections and the Readability of Judicial Opinions,” Unpublished paper. Available at: http://mjnelson.org/papers/NelsonReadabilityAugust2013.pdf (accessed 2 June 2016).
Owens, Ryan J. & Wedeking, Justin P. (2011) “Justices and Legal Clarity: Analyzing the Complexity of Supreme Court Opinions,” 45 Law & Society Rev. 1027–061.
Owens, Ryan J., Wedeking, Justin P., & Wohlfarth, Patrick C. (2013) “How the Supreme Court Alters Opinion Language to Evade Congressional Review,” 1 J. of Law and Courts 35–59.
Spriggs, James F. II & Hansford, Thomas G. (2001) “Explaining the Overruling of U.S. Supreme Court Precedent,” 63 J. of Politics 1091–111.
Staton, Jeffrey K. & Vanberg, Georg (2008) “The Value of Vagueness: Delegation, Defiance, and Judicial Opinions,” 52 American J. of Political Science 504–19.
Stimson, James A. (1991) Public Opinion in America: Moods, Cycles, and Swings. Boulder: Westview Press.
Stimson, James A. (1999) Public Opinion in America: Moods, Cycles, and Swings, 2nd ed. Boulder: Westview Press.
Sunshine, Jason & Tyler, Tom R. (2003) “The Role of Procedural Justice and Legitimacy in Shaping Public Support for Policing,” 37 Law & Society Rev. 513–48.
Tsay, Wen-Jay & Chung, Ching-Fan (2000) “The Spurious Regression of Fractional Integrated Processes,” 96 J. of Econometrics 155–82.
Ura, Joseph Daniel & Wohlfarth, Patrick C. (2010) “‘An Appeal to the People’: Public Opinion and Congressional Support for the Supreme Court,” 72 J. of Politics 939–56.
Vickrey, William C., Denton, Douglas G., & Jefferson, Wallace B. (2012) “Opinions as the Voice of the Court: How State Supreme Courts Can Communicate Effectively and Promote Procedural Fairness,” 48 Court Rev.: The J. of the American Judges Association 74–85.
Wedeking, Justin P. (2010) “Supreme Court Litigants and Strategic Framing,” 54 American J. of Political Science 617–31.
Zilis, Michael (N.d.) “I Respectfully Dissent: Coverage of High Salience Supreme Court Decisions in the New York Times, 1981–2008,” Unpublished manuscript.
Zink, James R., Spriggs, James F. II, & Scott, John T. (2009) “Courting the Public: The Influence of Decision Attributes on Individuals’ Views of Court Opinions,” 71 J. of Politics 909–25.
Figure 1. Readability Formula Inputs.

Table 1. The Aggregate Impact of Public Opinion on Supreme Court Majority Opinion Clarity, 1952–2011.

Figure 2. The Impact of Public Opinion on Supreme Court Opinion Clarity. Estimates reflect the predicted level of opinion clarity across the range of Public Mood, with larger clarity scores representing more readable opinions. Vertical whiskers denote 95-percent confidence intervals. Predicted effects among conservative and liberal decisions are computed using regression results from Models 1(c) and 2(c), respectively. Differences across values are statistically significant.

Table 2. An Individual-Level View of the Impact of Public Sentiment on Supreme Court Majority Opinion Content.