
Media Reflect! Policy, the Public, and the News

Published online by Cambridge University Press:  28 September 2023

CHRISTOPHER WLEZIEN*
Affiliation:
University of Texas at Austin, United States
STUART SOROKA*
Affiliation:
University of California, Los Angeles, United States
*
Christopher Wlezien, Hogg Professor of Government, Department of Government, University of Texas at Austin, United States, [email protected].
Stuart Soroka, Professor, Departments of Communication and Political Science, University of California, Los Angeles, United States, [email protected].

Abstract

Mass media are often portrayed as having large effects on democratic politics. Media content is not simply an exogenous influence on publics and policymakers, however. There is reason to think that this content reflects publics and politics as much as—if not more than—it affects them. This letter examines those possibilities, focusing on interactions between news coverage, budgetary policy, and public preferences in the defense, welfare, and health-care domains in the United States. Results indicate that media play a largely reflective role. Taking this role into account, we suggest, leads to a fundamentally different perspective on how media content matters in politics.

Type
Letter
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of American Political Science Association

INTRODUCTION

It is common to view mass media as having a profound impact on democratic politics. This makes sense—there is, after all, a vast literature, both observational and experimental, chronicling media effects on attitudes (see Dunaway and Graber 2022). There also is a growing literature highlighting the importance of media effects on public policymaking (e.g., Langer and Gruber 2021; Walgrave et al. 2017). This work is of fundamental importance not just because of what it has taught us about political communication and behavior, but because it makes clear that the study of modern representative democracy requires a consideration of media.

Media content is not an exogenous influence on publics and policymakers, however. As we discuss below, a small body of research has emphasized a “media reflect” account, but that work has offered little empirical examination of this role of media, especially in their ongoing interactions with public preferences and policy. Taking the reflective role of media into account, we show, leads to a different view of media influence.

We focus here on the interactions between news coverage, budgetary policy, and public preferences on defense, welfare, and health spending in the United States (US). Our analysis draws on the research on “thermostatic responsiveness” (e.g., Jennings 2009; Pacheco 2013; Soroka and Wlezien 2010; Wlezien 1995). This work finds that policy feeds back negatively on the public’s relative preferences: if the public wants more spending on defense and the government provides more, for example, then the public adjusts its preference for more spending downward, other things being equal. The analysis also builds on recent work using automated content analysis to identify a “media policy signal” (e.g., Dun, Soroka, and Wlezien 2021; Neuner, Soroka, and Wlezien 2019; Soroka and Wlezien 2022). Our results demonstrate the role that media play, possibly affecting public opinion and policy change but also reflecting them.
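To fix ideas, the thermostatic logic referenced above can be stated compactly. In the notation common to the cited work (offered here as a summary rather than as the article’s own exposition), the public’s relative preference R is the gap between its preferred level of policy P* and actual policy P, so that increases in policy, preferred levels held constant, push relative preferences downward:

$$ R_t = P_t^{\ast} - P_t, \qquad \Delta R_t = \Delta P_t^{\ast} - \Delta P_t . $$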

THE ROLE OF MEDIA IN PREVIOUS RESEARCH

Even a cursory consideration of what journalists do implies that media reflect both policymaking and public opinion. Journalists attend press conferences and report on what governments are doing. Media organizations also conduct and report on public opinion polls. Indeed, many major media outlets have had their own polling operations, the objective of which is to gauge what the public thinks. “Vox pops” are prevalent (e.g., Beckers, Walgrave, and Van den Bulck 2018), as is the use of social media content as a (flawed) representation of public attitudes (e.g., Molyneux and McGregor 2021). There is good reason to expect media coverage to both convey information to citizens and follow public opinion (see, e.g., Soroka, Stecula, and Wlezien 2015).

The literatures on agenda-setting have considered this intervening role of media, finding that media help to signal to politicians the importance citizens attach to issues, and vice versa (e.g., Soroka 2002; Van Aelst and Walgrave 2011); this is especially true of the literature on “policy agendas” (e.g., Baumgartner and Jones 2005; Boydstun 2013). Even where issue salience is concerned, however, the potentially reciprocal nature of public-policy/media effects has made teasing out causality difficult, as Barberá et al. (2019) have noted. Their recent article exploits the fine-grained timing of Twitter messaging to identify directions of causality between citizens’ and legislators’ issue attentiveness. Results show that legislators tend to follow the public in the issues they discuss. This is of real significance given concerns about elites leading rather than following opinion (e.g., Jacobs and Shapiro 2000).

To our knowledge, however, there is no work that considers the ways in which media coverage reflects the substance of public preferences, that is, the support for more policy, and also the direction and magnitude of policy actions. Substantively speaking, this means that we do not fully understand the role that media play in representative democracy. Methodologically speaking, it means that we may overestimate media effects on public opinion and policy.

Our approach is similar to that of Barberá et al. (2019) insofar as we take seriously the possibility of reciprocal effects. But rather than focusing on policymakers’ attention to issues in media posts, we examine their actual policy decisions, specifically budgetary ones. And we are able to exploit the sequence of measurement of public opinion, media coverage, and policy to assess causal dynamics.

The sections that follow pay special attention to the budgetary cycle, and what it offers for the identification of causal effects. We then propose measures of the substance of policy, public preferences, and a “media policy signal,” and estimate the relationships between these three variables. Results suggest that media coverage may be best viewed as a reflection of both opinion and policy. This has significant implications for the ways in which scholars conceive of and model media effects.

THE PUBLIC, BUDGETARY POLICY, AND THE NEWS

The budgetary process in the US occurs annually. The president submits spending requests that Congress acts on, usually, but not always, before the fiscal year begins on October 1. There is reason to think that the public figures into these policy decisions, as borne out in previous research (e.g., Caughey and Warshaw 2022; Erikson, MacKuen, and Stimson 2002; Erikson, Wright, and McIver 1993; Soroka and Wlezien 2010). There also is reason to suppose that the media cover those decisions and that this coverage informs the public and facilitates public responsiveness (Neuner, Soroka, and Wlezien 2019; Soroka and Wlezien 2022). Set out temporally: policy change made during the year is reflected in media coverage during that year, which informs the public’s relative preferences registered in the next year, and those preferences then influence policymaking during that next year for the following fiscal year.

Media may not just reflect policy but affect it as well; and coverage may also both reflect and affect public opinion. These possibilities clearly complicate analysis. We nevertheless can assess multiple directions of influence among the variables following the logic of Granger causality. This involves estimation of cross-lag models, where each variable is a function of its lagged value and the lagged value of the other variable(s), which we more fully describe below. Doing so provides a starting point—a straightforward test of the degree to which media coverage, public preferences, and policy are determined by the other variables across years. We also can observe how things unfold within years because of the sequence of measurement of preferences and policy, and because we can separate media content at different points of each year. This provides what we believe is the first direct test of the possibility that media both affect and reflect the substance of policy and public attitudes.
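To make the cross-lag logic concrete, a minimal sketch of this kind of specification is shown below, using annual, domain-level data. The file and column names are hypothetical stand-ins, and the sketch is illustrative rather than a reproduction of the article’s estimation code.

```python
# A minimal sketch of the cross-lagged ("Granger-style") equations described above,
# using the statsmodels formula API. The file and column names (domain, year,
# public, policy_change, media) are hypothetical stand-ins for the annual,
# domain-level series described in the Data section.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("domain_year_series.csv")            # hypothetical input
df = df.sort_values(["domain", "year"])

# Lag each series within domain so that a year-t row carries its t-1 values.
for col in ["public", "policy_change", "media"]:
    df[col + "_lag"] = df.groupby("domain")[col].shift(1)

# One equation per endogenous variable: its own lag plus the lags of the others,
# pooling the three spending domains with domain fixed effects, C(domain).
eq_public = smf.ols("public ~ public_lag + policy_change_lag + media_lag + C(domain)",
                    data=df).fit()
eq_policy = smf.ols("policy_change ~ policy_change_lag + media_lag + public_lag + C(domain)",
                    data=df).fit()
eq_media = smf.ols("media ~ media_lag + public_lag + policy_change_lag + C(domain)",
                   data=df).fit()

print(eq_public.summary())
```

Each equation regresses a variable on its own lag and the lags of the other two, which is the sense in which one series “Granger-causes” another: its lag remains predictive after conditioning on the dependent variable’s own history.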

DATA

Measures of the variables for each of the three policy areas that we examine are drawn from past work on thermostatic responsiveness (cited above). We capture budgetary policy using annual appropriations data available from the Office of Management and Budget (OMB). We follow previous practice in our calculation of appropriations in each of the three domains (see Supplementary material). Our measures of preferences are drawn from the General Social Survey (GSS), where respondents have regularly been asked about spending in different programs, specifically, whether it is “too little,” “too much,” or “about the right amount.” These measures of policy and preferences are common in the literature.
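As an illustration of how such measures might be assembled, the sketch below computes a net-support measure of relative preferences (the share saying “too little” minus the share saying “too much,” the convention in the thermostatic literature cited above) and first differences of appropriations. File and column names are hypothetical; the exact coding used in the article is documented in the Supplementary material.

```python
# Illustrative construction of the opinion and policy measures. The net-support
# coding (share "too little" minus share "too much") follows the convention in
# the thermostatic-responsiveness literature; file and column names are hypothetical.
import pandas as pd

gss = pd.read_csv("gss_spending_items.csv")      # hypothetical: respondent-level rows
agg = (gss.assign(more=(gss["response"] == "too little").astype(int),
                  less=(gss["response"] == "too much").astype(int))
          .groupby(["domain", "year"])[["more", "less"]].mean())
agg["net_support"] = 100 * (agg["more"] - agg["less"])   # relative preference measure

omb = pd.read_csv("omb_appropriations.csv")      # hypothetical: domain, fiscal_year, appropriations
omb = omb.sort_values(["domain", "fiscal_year"])
omb["policy_change"] = omb.groupby("domain")["appropriations"].diff()   # first difference
```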

Our measure of the “media policy signal” is less well established and relies on estimates produced by Soroka and Wlezien (2022); see also Dun, Soroka, and Wlezien (2021) and Neuner, Soroka, and Wlezien (2019). (Data are distributed publicly through the Harvard Dataverse.) The “signal” is intended to capture the direction and magnitude of policy changes in each domain, as reported in 17 major US newspapers. It is based on a corpus including all news articles on each policy domain, and the implementation of three “layered” dictionaries capturing sentences that refer to (1) the policy domain, (2) spending, and (3) direction (upward or downward). The measure reflects the number of sentences mentioning upward change minus the number of sentences mentioning downward change in each domain, aggregated by fiscal year (or narrower periods, discussed below). Importantly, it summarizes coverage of what government is doing and should do, owing to events, political debate, and possibly even public opinion itself (Soroka and Wlezien 2022). For further details on the variables used here—descriptives, correlations, and tests of stationarity—see the Supplementary material.
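A stylized sketch of the layered-dictionary logic is given below. The keyword lists are illustrative stand-ins rather than the dictionaries used in the cited work; the point is only that a sentence contributes to the signal when it mentions the domain, spending, and a direction, and that the signal is the count of upward sentences minus downward sentences in a period.

```python
# A stylized sketch of the "layered dictionary" logic: a sentence contributes to
# the signal only if it mentions (1) the policy domain, (2) spending, and (3) a
# direction. The keyword lists below are illustrative stand-ins, not the
# dictionaries used by Soroka and Wlezien (2022).
import re

DOMAIN = {"defense", "military", "pentagon"}
SPENDING = {"spending", "budget", "appropriations", "funding"}
UP = {"increase", "boost", "raise", "expand"}
DOWN = {"cut", "decrease", "reduce", "slash"}

def sentence_signal(sentence: str) -> int:
    """Return +1 for an upward spending sentence, -1 for downward, 0 otherwise."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    if not (words & DOMAIN and words & SPENDING):
        return 0
    up, down = bool(words & UP), bool(words & DOWN)
    return (1 if up else 0) - (1 if down else 0)

def media_policy_signal(sentences: list[str]) -> int:
    """Upward-change sentences minus downward-change sentences for one period."""
    return sum(sentence_signal(s) for s in sentences)

# e.g., aggregate over all sentences published in a given fiscal year (or half-year)
```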

WHO LEADS, WHO FOLLOWS?

Across-Year Analyses

To assess the direction of causation between the public, policy, and media, we begin by estimating the following equations:

(1) $$ \mathrm{Public}_t = {}^{P}\mathrm{a}_0 + \mathrm{b}_1\,\mathrm{Public}_{t-1} + \mathrm{b}_2\,\mathrm{Policy}_{t-1} + \mathrm{b}_3\,\mathrm{Media}_{t-1} + {}^{Pub}\mathrm{e}_t $$
(2) $$ \mathrm{Policy}_t = {}^{S}\mathrm{a}_0 + \mathrm{b}_4\,\mathrm{Policy}_{t-1} + \mathrm{b}_5\,\mathrm{Media}_{t-1} + \mathrm{b}_6\,\mathrm{Public}_{t-1} + {}^{Pol}\mathrm{e}_t $$
(3) $$ \mathrm{Media}_t = {}^{M}\mathrm{a}_0 + \mathrm{b}_7\,\mathrm{Media}_{t-1} + \mathrm{b}_8\,\mathrm{Public}_{t-1} + \mathrm{b}_9\,\mathrm{Policy}_{t-1} + {}^{M}\mathrm{e}_t $$

In these equations, Policy is a first difference variable, that is, appropriations in the current year minus appropriations in the previous year, which we expect to relate to levels of the Public and Media variables, both of which explicitly capture change—the former because the question used to construct it asks about relative preferences and the latter by construction (footnote 1). Subscripts indicate the year during which we observe the variables, perhaps the most noteworthy of which is for Policy, as we date it based on the year appropriations decisions (usually) are taken, for example, in 2019 for FY2020.

Equations 1–3 provide conservative estimates of “causal” effects, as each variable is modeled as a function of both its lagged value and the lagged values of the other variables. To the extent that effects actually are more current, they will be reflected in the lagged dependent variable(s), which will dampen the estimated effects of the other independent variables, concealing at least some of their true effect(s). Positive evidence thus is to be taken seriously, and it is important to document before exploiting the timing of measurement within years.

Table 1 presents results of the estimated equations pooling the three domains for the 39-year period between 1980 and 2018. The equations are estimated using ordinary least squares (OLS) and include domain fixed effects (footnote 2).

Table 1. Basic Granger Causality Tests—Defense, Welfare, and Health Domains Pooled

a First difference of budgetary policy decisions taken in year t for fiscal year t+1 (or year t−1 for fiscal year t). Cells contain regression coefficients with standard errors in parentheses.

*** p < 0.01; ** p < 0.05; * p < 0.1.

The first column shows results for Equation 1, in which public preferences are the dependent variable. Here we see significant positive effects of lagged preferences and negative effects of lagged policy, as expected. The latter reflects thermostatic public responsiveness: when spending increases (decreases), public preferences tend to adjust downward (upward). Previous research (Soroka and Wlezien 2022) demonstrates that media provide information about policy, that is, the effect is mediated. The coefficient for policy thus at least partly reflects information conveyed by media. Including policy in the model allows us to assess the independent effect of coverage on preferences—the impact of media content that deviates from actual policy change and has a potentially very different positive effect. There is only the hint of such an effect, however; while the coefficient is positive, implying that coverage unrelated to policy may influence the public’s underlying preferred level of policy, it is not statistically significant (p = 0.12).

Policy is the focus of the second column of Table 1, and here we see evidence of representation in the form of a significant positive coefficient for lagged public preferences. Public preferences last year influence budgetary policy decisions taken this year (for the following fiscal year). These results are in line with the work referenced above, though there is reason to suppose that policy responsiveness is more immediate, which we will explicitly consider. The estimate for media coverage also is positive and on the cusp of statistical significance (p = 0.101), and so it may matter for policymaking independently of public opinion.

Results for media coverage are shown in the third column. Note that there is a strong relationship between the media signal in years t and t−1, where coverage this year is related to coverage last year, possibly due in part to stability in media resource allocation (see Boydstun 2013). Above and beyond that history, we see positive policy effects on coverage that just miss conventional levels of statistical significance (p = 0.13). There is reason from previous research to think that the estimate understates the true effect, and that timing matters, as our analysis below considers (and supports). Although policy does not have reliable effects on media coverage in this analysis, public opinion does: media reflect public sentiment (footnote 3).

Across these highly salient domains, then, policy responds positively to public preferences, and public preferences respond negatively (thermostatically) to policy. To the extent that there are media effects on either, those effects appear to be muted (or subsumed) here by other variables, though there are some hints of positive influences on both opinion and policy. That said, to the extent that media play a role in this system, it is mostly as a reflection of public preferences. We consider the magnitude of these effects in the concluding section, but first explore sequencing.

Within-Year Analyses

The preceding section captures only part of the story, since we also are able to observe how things unfold within years. Consider that public preferences are captured by the US GSS in March of each year, after presidents (almost always) have submitted their budgets but before Congress has acted (or even begun to act). Congressional action, in turn, is completed before public preferences are registered in the following year, with some exceptions. And news coverage can be measured for different periods of time within years; indeed, news coverage happens daily, and there is no constraint on how we aggregate it. To summarize: public preferences are captured in the spring, budgetary decisions usually happen in the fall, and media coverage is measurable in the fall or spring (and at other points).
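The sketch below shows one way such an aggregation might be implemented, splitting sentence-level coverage into Early (first half) and Late (second half) portions of each fiscal year. The text notes that the fiscal year begins October 1; the precise period boundaries and the file and column names here are assumptions for illustration, not the article’s own coding.

```python
# A minimal sketch of splitting coverage into "Early" and "Late" halves of the
# fiscal year. The fiscal year begins October 1 (per the text); the exact period
# boundaries used in the article's aggregation are an assumption here.
import pandas as pd

articles = pd.read_csv("coverage_sentences.csv", parse_dates=["date"])  # hypothetical: date, domain, signal

def fiscal_year(d: pd.Timestamp) -> int:
    return d.year + 1 if d.month >= 10 else d.year

articles["fy"] = articles["date"].map(fiscal_year)
articles["half"] = articles["date"].dt.month.map(
    lambda m: "early" if m in (10, 11, 12, 1, 2, 3) else "late"   # Oct-Mar vs Apr-Sep
)

media_by_half = (articles.groupby(["domain", "fy", "half"])["signal"]
                         .sum()
                         .unstack("half"))   # columns: early, late
```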

We thus are able to estimate a set of equations that takes the timing of each of these measures into account, as follows:

(4) $$ \mathrm{Public}_t = {}^{P}\mathrm{a}_0 + \mathrm{b}_1\,\mathrm{Public}_{t-1} + \mathrm{b}_2\,\mathrm{Policy}_{t-1} + \mathrm{b}_3\,\mathrm{Media}_{Late,t-1} + {}^{Pub}\mathrm{g}_t $$
(5) $$ \mathrm{Policy}_t = {}^{S}\mathrm{a}_0 + \mathrm{b}_4\,\mathrm{Policy}_{t-1} + \mathrm{b}_5\,\mathrm{Media}_{Early,t} + \mathrm{b}_6\,\mathrm{Public}_t + {}^{Pol}\mathrm{g}_t $$
(6) $$ \mathrm{Media}_{Early,t} = {}^{M,E}\mathrm{a}_0 + \mathrm{b}_7\,\mathrm{Media}_{Late,t-1} + \mathrm{b}_8\,\mathrm{Public}_{t-1} + \mathrm{b}_9\,\mathrm{Policy}_{t-1} + {}^{M,E}\mathrm{g}_t $$
(7) $$ \mathrm{Media}_{Late,t} = {}^{M,L}\mathrm{a}_0 + \mathrm{b}_{10}\,\mathrm{Media}_{Early,t} + \mathrm{b}_{11}\,\mathrm{Public}_t + \mathrm{b}_{12}\,\mathrm{Policy}_t + {}^{M,L}\mathrm{g}_t $$

Here, we separate media coverage into Early and Late coverage—coverage for (a) the first six months of each fiscal year and (b) the last six months of each fiscal year. We also vary the timing (at t−1 or t) of measurement based on temporal sequence—for example, public opinion in year t is measured in the first half of that same year, alongside Early media coverage but before Late media coverage and policy decisions in that year. To be clear, our approach to capturing causal effects is no different than in the preceding section; Equations 4–7 just take careful account of the timing of events within (and across) years. Perhaps most notably, public opinion in year t is measured prior to budget policy decisions made in that fiscal year and media coverage in the second half of that year; we accordingly include opinion for year t in Equations 5 and 7. Table 2 presents the results of estimating each of the four equations.

Table 2. Revised Analyses Based on the Timing of Measurement

a First difference of budgetary policy decisions taken in year t for fiscal year t+1 (or year t−1 for fiscal year t). Cells contain regression coefficients with standard errors in parentheses.

*** p < 0.01; ** p < 0.05; * p < 0.1.

Results for the public preferences Equation 4 are shown in column 1 of Table 2. Recall that this model differs in one way from the model shown in Table 1: it includes media coverage not from the entire prior year but rather the second half of that year. This change matters. Although there was no clear evidence of media effects on preferences in Table 1, results in the first column of Table 2 indicate a statistically significant, positive effect of late coverage. This is an important result, as it suggests that the news causes public preferences—that is, it apparently is not only a consequence. As we highlighted in our discussion of Table 1, such an effect of media coverage is different from—and independent of—the effect of information about policy (captured here by the coefficient for Policy at t−1) that produces thermostatic responsiveness.

The second column of Table 2 shows estimates for the policy Equation 5, in which we include Early coverage at t rather than media coverage from all of year t−1. The change makes sense given the timing of budgetary policymaking, since Early coverage occurs before the time that policymakers undertake budgetary policy in year t (and Late coverage happens only in the final stages). We also include the measure of public preferences in year t, which are taken at the same point in time as early news. Results continue to show a significant, positive impact of preferences but also reveal a similar effect for Early media coverage. The latter is of special importance, as it suggests a causal effect of media coverage on policy. In contrast with Table 1, then, results in the first two columns of Table 2 reveal clearer media effects: coverage positively affects both public opinion and policy change (footnote 4).

The third column shows estimates for Early media coverage. This equation differs in one basic way from Table 1: past media coverage is measured Late, during the second half of the previous year. Otherwise, the specification is the same, as we estimate the effects of policy decisions taken (late) in the previous year as well as public preferences from (early) that year (footnote 5). Results suggest that media coverage in the first half of the year reflects public preferences from the previous year, as the coefficient is positive and highly reliable (p = 0.01), which comports with estimates in Table 1. There we also observed the hint of positive effects of policy decisions on media coverage; this is apparent in Table 2 once again, though still not reliable (p = 0.24; see footnote 6).

Although policy decisions may still find expression in the ensuing year’s media coverage, there is reason to expect more current effects. Indeed, given the timing of budgeting, we expect last year’s policy decisions to be reflected in media coverage late in that year. We explicitly consider this in Equation 7—a model of Late media coverage that includes coverage from Early that year and preferences from the same point in time, alongside spending decisions taken that year. Results, shown in the final column of Table 2, suggest that policy has a significant, positive effect on this late media coverage independent of media coverage and preferences registered earlier that year, only the former of which has reliable effects (footnote 7). Table 2 thus provides even stronger evidence than Table 1 that media reflect both public opinion and policy itself, in the first and then second halves of the year, respectively.

Considering within-year timing makes for a more complicated storyline. That said, taking this timing into account allows us to consider each dependent variable as a product of the factors that are most temporally proximate. And breaking media coverage into parts reveals some important differences between coverage early in the year (reflecting last year’s preferences) and late in the year (reflecting this year’s policy). This change in measurement has real consequences for our understanding of media effects.

DISCUSSION: MEDIA (MOSTLY) REFLECT

Do media affect or reflect opinion and policy? Our results suggest they do both, although the reflecting role is more pronounced. Consider some comparisons of coefficients estimated in Table 2. In the model of public preferences, the standardized coefficient for policy is roughly 50% larger than the (early) media signal (in absolute terms, 0.18 versus 0.13). There thus appear to be limited media effects on preferences above and beyond the supply of information about policy change. In the model of policy, media appear to play a somewhat larger role: the standardized coefficients for opinion and the media signal are more similar (0.24 and 0.23, respectively). This impact of media on policy is also evident in the simulated effects of standardized shocks to each of the different variables (based on Table 2 estimates) shown in Figure 1 (footnote 8). Those results illustrate the limited (positive) influence of media on public preferences: compare the slight impact of a shock to coverage on the public (in the third panel) with the rather large impact of a shock to opinion on coverage (in the second panel).

Figure 1. Impulse Response Functions
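As a guide to how simulations of this kind work, the sketch below shocks one (mean-centered) variable and iterates a system with the structure of Equations 4–7 forward, tracing the response of each series. The coefficient values are placeholders rather than the Table 2 estimates, so the output illustrates the mechanics only, not the article’s results.

```python
# Schematic impulse-response exercise: shock one variable by one standard deviation
# (variables mean-centered, intercepts ignored, per footnote 8) and iterate the
# within-year system forward. Coefficients below are placeholders, not estimates.
b = dict(
    pub_lag=0.5, pub_policy=-0.2, pub_media_late=0.1,    # Eq. 4: Public_t
    pol_lag=0.3, pol_media_early=0.2, pol_public=0.2,    # Eq. 5: Policy_t
    me_media_late=0.4, me_public=0.3, me_policy=0.1,     # Eq. 6: Media_Early,t
    ml_media_early=0.4, ml_public=0.2, ml_policy=0.3,    # Eq. 7: Media_Late,t
)

def simulate(shock_to: str, horizon: int = 10) -> dict:
    state = {"public": 0.0, "policy": 0.0, "media_early": 0.0, "media_late": 0.0}
    state[shock_to] = 1.0                                 # one-sd shock in period 0
    path = {k: [v] for k, v in state.items()}
    for _ in range(horizon):
        pub = (b["pub_lag"] * state["public"] + b["pub_policy"] * state["policy"]
               + b["pub_media_late"] * state["media_late"])              # Eq. 4
        me = (b["me_media_late"] * state["media_late"]
              + b["me_public"] * state["public"]
              + b["me_policy"] * state["policy"])                        # Eq. 6
        pol = (b["pol_lag"] * state["policy"] + b["pol_media_early"] * me
               + b["pol_public"] * pub)                                  # Eq. 5
        ml = (b["ml_media_early"] * me + b["ml_public"] * pub
              + b["ml_policy"] * pol)                                    # Eq. 7
        state = {"public": pub, "policy": pol, "media_early": me, "media_late": ml}
        for k, v in state.items():
            path[k].append(v)
    return path

irf = simulate("media_late")   # e.g., trace a shock to Late-year coverage
```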

Media thus appear to reflect policy and preferences more than they affect them. Indeed, media effects are not highly robust to changes in specification, even as the “media reflect” estimates are. (See footnote 4 and especially the “On Media Effects” section of the Supplementary material.) This is not to say that there are no media effects on policy and preferences; and there presumably are policy areas not explored here in which coverage matters even more. But the “two-way” relationships between media, policy, and preferences highlight the importance of considering the causes of media coverage, not just its consequences. What we often consider to be a result of that coverage, especially public opinion, may be a determinant.

These findings have implications for research, both observational and experimental. In observational work, scholars should, whenever possible, consider the possibility of bidirectional flows in their analysis. In experimental work, researchers should consider how the possible endogeneity of coverage complicates the identification of treatments and their effects (see also Arceneaux and Johnson 2013). Media may affect, but they reflect as well, and this matters for our understanding of the role that media play in modern representative democracies.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/S0003055423000874.

Data availability statement

Research documentation and data that support the findings of this study are openly available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/BOGTNU.

Acknowledgements

For helpful comments, we thank Mandi Bates Bailey, Wouter van der Brug, Fabrice d’Almeida, Nicholas Dias, Johanna Dunaway, Gianna Eijk, Armen Hakhverdian, Tobias Heinrich, Bruno Jerome, Veronique Jerome, Theresa Kuhn, Tom van der Meer, Gijs Schumacher, Wouter Schakel, and Frederik Zumer. Earlier versions of this article were presented at the 2022 Meeting of the Midwest Political Science Association, Chicago, and at the University of Amsterdam and Panthéon-Assas Université.

Funding statement

Some of the analysis relates to research supported by National Science Foundation Grants SES-1728792 and SES-1728558.

CONFLICT OF INTEREST

The authors declare no ethical issues or conflicts of interest in this research.

ETHICAL STANDARDS

The authors affirm this research did not involve human subjects.

Footnotes

1 All of the variables used in the analysis—differenced spending policy and the measures of public preferences and media coverage—are expected to be and also appear to be “stationary,” though there are hints of slight trends in the media variables that have minor consequences for the results. See the Supplementary material for details.

2 Results by domain are included in the appendices, where we can see that patterns are similar in the different areas, most importantly for the effects of and on media coverage, though the size and significance of estimates do vary across domains.

3 That Media follows Policy is important for thermostatic public responsiveness, even if it is not apparent from the equations in Table 1. Most importantly, the functional form by which policy is reflected in coverage differs slightly from what we capture there given the logic of Granger causality: news in actuality tends to reflect decisions made this year (for next year), as we will see. Coverage also includes other information that may influence public preferences independently—and differently, with a positive sign—per results in the first column of Table 1. Also see Soroka and Wlezien (2022).

4 That said, estimated media effects disappear when we include more current measures of media coverage in the models, that is, early year t coverage when analyzing opinion and late year t coverage when assessing spending change—see Supplementary material for details.

5 Although it is tempting to assume that coverage in the first half of the year reflects opinion from that point in time given that polls are taken in March of each year, we adopt the more conservative specification here; using preferences from year t increases the size of their estimated effect, by approximately 50%.

6 The effects of Policy at t−1 are more pronounced and reliable (p = 0.07) when using preferences in year t.

7 Keep in mind that preferences from (early in) year t already are evident in that Early coverage, and they also influence the spending decisions that ultimately are reflected in Late coverage; they just do not matter independently.

8 The simulations are based on variables that are mean-centered by spending domains and also ignore the intercepts.

References

Arceneaux, Kevin, and Johnson, Martin. 2013. Changing Minds or Changing Channels? Chicago, IL: University of Chicago Press.
Barberá, Pablo, Casas, Andreu, Nagler, Jonathan, Egan, Patrick J., Bonneau, Richard, Jost, John T., and Tucker, Joshua A. 2019. “Who Leads? Who Follows? Measuring Issue Attention and Agenda Setting by Legislators and the Mass Public Using Social Media Data.” American Political Science Review 113 (4): 883–901.
Baumgartner, Frank R., and Jones, Bryan D. 2005. The Politics of Attention. Chicago, IL: University of Chicago Press.
Beckers, Kathleen, Walgrave, Stefaan, and Van den Bulck, Hilde. 2018. “Opinion Balance in Vox Pop Television News.” Journalism Studies 19 (2): 284–96.
Boydstun, Amber E. 2013. Making the News. Chicago, IL: University of Chicago Press.
Caughey, Devin, and Warshaw, Christopher. 2022. Dynamic Democracy. Chicago, IL: University of Chicago Press.
Dunaway, Johanna, and Graber, Doris. 2022. Mass Media and American Politics. Thousand Oaks, CA: CQ Press.
Dun, Lindsay, Soroka, Stuart, and Wlezien, Christopher. 2021. “Dictionaries, Supervised Learning, and Media Coverage of Public Policy.” Political Communication 38 (1–2): 140–58.
Erikson, Robert S., Wright, Gerald C., and McIver, John P. 1993. Statehouse Democracy. New York: Cambridge University Press.
Erikson, Robert S., MacKuen, Michael B., and Stimson, James A. 2002. The Macro Polity. New York: Cambridge University Press.
Jacobs, Lawrence, and Shapiro, Robert. 2000. Politicians Don’t Pander. Chicago, IL: University of Chicago Press.
Jennings, Will. 2009. “The Public Thermostat, Political Responsiveness and Error-Correction: Border Control and Asylum in Britain, 1994–2007.” British Journal of Political Science 39 (4): 847–70.
Langer, Ana Ines, and Gruber, Johannes B. 2021. “Political Agenda Setting in the Hybrid Media System: Why Legacy Media Still Matter a Great Deal.” The International Journal of Press/Politics 26 (2): 313–40.
Molyneux, Logan, and McGregor, Shannon C. 2022. “Legitimating a Platform: Evidence of Journalists’ Role in Transferring Authority to Twitter.” Information, Communication & Society 25 (11): 1577–95.
Neuner, Fabian G., Soroka, Stuart N., and Wlezien, Christopher. 2019. “Mass Media as a Source of Public Responsiveness.” The International Journal of Press/Politics 24 (3): 269–92.
Pacheco, Julianna. 2013. “The Thermostatic Model of Responsiveness in the American States.” State Politics & Policy Quarterly 13 (3): 306–32.
Soroka, Stuart. 2002. Agenda-Setting Dynamics in Canada. Vancouver, BC: UBC Press.
Soroka, Stuart, Stecula, Dominik A., and Wlezien, Christopher. 2015. “It’s (Change in) the (Future) Economy, Stupid: Economic Indicators, the Media, and Public Opinion.” American Journal of Political Science 59 (2): 457–74.
Soroka, Stuart, and Wlezien, Christopher. 2022. Information and Democracy. New York: Cambridge University Press.
Soroka, Stuart, and Wlezien, Christopher. 2010. Degrees of Democracy. New York: Cambridge University Press.
Van Aelst, Peter, and Walgrave, Stefaan. 2011. “Minimal or Massive? The Political Agenda-Setting Power of the Mass Media According to Different Methods.” The International Journal of Press/Politics 16 (3): 295–313.
Walgrave, Stefaan, Boydstun, Amber E., Vliegenthart, Rens, and Hardy, Anne. 2017. “The Nonlinear Effect of Information on Political Attention: Media Storms and U.S. Congressional Hearings.” Political Communication 34 (4): 548–70.
Wlezien, Christopher. 1995. “The Public as Thermostat: Dynamics of Preferences for Spending.” American Journal of Political Science 39 (4): 981–1000.
Wlezien, Christopher, and Soroka, Stuart. 2023. “Replication Data for: Media Reflect! Policy, the Public, and the News.” Harvard Dataverse. Dataset. https://doi.org/10.7910/DVN/BOGTNU.
