
Traceability and Mass Policy Feedback Effects

Published online by Cambridge University Press:  01 August 2024

BRIAN T. HAMEL, University of North Texas, United States

Corresponding author: Brian T. Hamel, Assistant Professor, Department of Political Science, University of North Texas, United States, [email protected].

Abstract

Theory suggests that policy benefits delivered directly by government are most likely to affect the voting behavior of beneficiaries. Nearly every empirical study, however, analyzes a policy or program that meets this criterion. To address this limitation, I compare the electoral impacts of two New Deal-era employment programs—the Works Progress Administration (WPA) and the Public Works Administration (PWA)—which differed primarily in their traceability to government. Though both programs provided employment, the WPA directly hired and paid employees. In contrast, the PWA subsidized private sector employment. Across two datasets, I find that the WPA increased support for the enacting Democratic Party. As expected, however, the PWA had no discernible causal effect on voting patterns. These results offer the strongest evidence to date that whether policy beneficiaries can easily see government as responsible for their benefits shapes the development of mass policy feedback effects.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Political Science Association

Many decades ago, E.E. Schattschneider (1935) observed that “a new policy creates a new politics” (288). One way that policy can affect politics is by creating new constituencies in the mass public that mobilize to protect their policy benefits. In this way, policies are not just outputs of the electoral process but inputs as well. And indeed, across a variety of issue domains, scholars have demonstrated that new policies routinely shape the participatory, attitudinal, and voting patterns of those affected by them (Campbell 2012).

Yet scholars have also recognized that not all policies and programs are likely to generate such mass “feedback” effects. Arnold (1990), for example, argued that policies must be both visible and traceable to influence the voting behavior of those benefitting. That is, the policy effects or benefits must be easily discernible and viewed by recipients as the consequence of action by specific elected officials.Footnote 1 Otherwise, feedback effects will be muted or nonexistent. Research focused on the traceability dimension has identified several ways that policies may obscure their connection to government. Most commonly, policies mask the role of government by conferring benefits quietly through the tax code or by delegating the delivery of benefits to non-state actors (Kettl 1988; Mettler 2011; Morgan and Campbell 2011).

Empirical research, however, has largely neglected to test whether traceability actually conditions the impact of policy on voting behavior. Instead, existing work has simply analyzed the effects of highly traceable policies and programs (Healy and Malhotra 2009; Kogan 2021; Shepherd 2022). That is, scholars have focused their attention on policies for which theory anticipates the emergence of mass feedback effects—policies where government provides direct assistance to citizens. In part, this shortcoming likely reflects the fact that “it is often difficult enough to measure the most concrete consequences of policies, let alone things as intangible as traceability” (Pierson 1993, 622). But of course, demonstrating effects for traceable policies does not mean that they produce larger feedback effects than do less traceable policies. What is more, a recent analysis suggests that even policies where the role of government is hidden from view can still impact the voting decisions of those affected (Rendleman and Yoder 2023).

I assess the role of traceability in mass feedback effects by comparing two New Deal-era employment programs: the Works Progress Administration (WPA) and the Public Works Administration (PWA). Combined, these two programs spent $12 billion between 1933 and 1939 and put more than 10 million people to work. In many ways, the programs were similar. Both programs provided a very visible benefit (employment), and were championed by Roosevelt and the Democrats in Congress while overwhelmingly opposed by Republican elites. Likewise, both created jobs by investing in infrastructure projects from new roads, schools, hospitals, and courthouses to the Hoover Dam and LaGuardia Airport. They differed, however, in how they put people to work and, thus, in how easy it would be for beneficiaries to see government as responsible for their new job. Specifically, while the WPA created direct, public employment where employees were hired and paid directly by government, the PWA subsidized private sector employment. In other words, unlike WPA workers, PWA workers were not hired and paid by government but rather by private businesses with government dollars. Given these designs, theory predicts that the WPA should have increased support for the Democratic Party. Conversely, it suggests that there should be no effects of the PWA on voting behavior.

I test these predictions using county-level information on WPA and PWA spending and voting behavior in presidential elections. I draw on two datasets of program spending: (1) a nationwide dataset reporting the total amount of money spent by program in each county (Fishback, Kantor, and Wallis 2003) and (2) an original, archival dataset of school construction projects completed by each program in California. Each has advantages. While the nationwide data allow for the most comprehensive and generalizable analysis of feedback effects, the California project data, as I will argue below, permit a comparison of the two programs when at least partially “controlling for” the type of project and worker employed. Put another way, the California data help to minimize non-traceability differences between the two programs.

Using a generalized difference-in-differences design, I find that counties receiving more WPA money became more Democratic than counties receiving less WPA money. Using the nationwide data, I find that moving from the 10th to 90th percentile in WPA spending increased Democratic vote share by about 1.46 percentage points. However, as anticipated, counties that received more PWA money became no more or less Democratic than counties that received less PWA money. In fact, I show that any positive effects of the PWA on support for the Democratic Party would likely have occurred even in the absence of the PWA. Both findings replicate when using the data on school construction spending in California. I also report several robustness checks and additional analyses, including a replication of the main findings in gubernatorial elections. Of most significance, though, is a set of analyses of the economic effects of the WPA and PWA. I find no positive economic effects of the WPA, suggesting that the observed positive effect of the WPA on Democratic support is unlikely to reflect voters simply observing positive economic growth in the aftermath of WPA spending and rewarding the Democratic Party for it.

This article offers some of the first clear empirical support for the long-standing theoretical claim that traceable policy designs generate stronger mass feedback effects than do less traceable policy designs. The findings have broad implications for democratic accountability and governance. I discuss these implications, as well as directions for future research, in the conclusion.

WHEN DO POLICIES AFFECT MASS POLITICAL BEHAVIOR?

Pierson’s (1993) seminal work identifies two mechanisms through which public policies can affect mass political behavior: resource effects and interpretive effects. First, policies can provide tangible material benefits that enhance one’s available resources (e.g., more time and money; see Brady, Verba, and Schlozman 1995) and facilitate and incentivize political action (resource effects). Policies can also shape recipients’ orientations toward government and, in turn, their political choices by providing them with information about government, the beliefs and priorities of elected officials, and their place and standing in society (interpretive effects).

Researchers have found evidence in favor of both channels across a variety of different policies and behavioral outcomes, including turnout, vote choice, and policy preferences. Campbell (2002) finds that Social Security benefits increase participation, particularly among low-income seniors for whom the money is most consequential. Likewise, the food stamp program increased support for the enacting Democratic Party, primarily through the mobilization of new voters in high poverty areas (Kogan 2021). Mettler (2002) finds similar resource effects for the G.I. Bill among the poorest veterans but also evidence of an interpretive effect whereby veterans engage in politics as a way of “giving back” for the life-changing opportunity given to them through the policy. Support for Social Security also appears to be a function of interpretive learning, as confidence in the program among beneficiaries increases when it provides them with timely, personally relevant information about the benefits they can expect (Cook, Jacobs, and Kim 2010).

Still, positive mass feedback effects of this sort do not always emerge. For one, the size of the benefits may not be large enough to increase one’s capacity to participate and may also be too small to be worth fighting for (Howard 2007; Patashnik and Zelizer 2013). Policies may, therefore, have no political impacts at all. Whether a policy generates mass feedback effects, and in what direction, can also depend on the policy’s design and administration and on how these features color recipients’ interactions with and views of government—a type of interpretive effect. Along these lines, while universal programs like Social Security and the G.I. Bill show positive participatory effects, means-tested programs (e.g., TANF) actually decrease the political engagement of program participants (Mettler and Stonecash 2008). Scholars often attribute these differences to the messages embedded within these two program designs: universal programs are generally associated with deservingness and bestow positive civic status upon beneficiaries, while means-tested programs are often paternalistically and arbitrarily governed by government caseworkers, convey stigma, and reinforce recipients’ feelings of economic disadvantage (Bruch, Ferree, and Soss 2010; Soss 1999; 2000).

At an even more basic level, policy beneficiaries must also recognize that they are in fact policy beneficiaries and that government is the one providing those benefits. Scholars argue that this too may depend on how the policy is designed and the way benefits are administered (Arnold 1990; Mettler 2011). Arnold (1990) offers a detailed framework identifying two conditions that must be met for a policy to shape whether beneficiaries reward incumbent politicians with their vote: the policy must be visible and traceable. Visible policies are those whose impact beneficiaries can easily observe in their day-to-day life, like “10 percent inflation, losing one’s job, paying a new tax, or having one’s student loans terminated” (Arnold 1990, 48). In contrast, paying a few cents less for bread as a consequence of some policy change would likely go unnoticed by even the most frequent consumers.

Traceability includes two components, both necessary for beneficiaries to reward politicians at the ballot box for the benefits they are receiving. Specifically, they must (1) see government as the reason they are receiving those benefits and, perhaps most obviously, (2) be able to attribute that government action to specific government actors, be it a particular elected official or political party. Quite clearly, in the same way that citizens may struggle to vote retrospectively on the basis of the state of the economy in times of divided or coalition government (Powell and Whitten 1993; Samuels 2004), policy beneficiaries will be unable to vote with their policy benefits in mind if both Democratic and Republican leaders back the policy. Thus, beneficiaries must be able to credit one elected official or one political party for their benefits.

But even before connecting a policy back to a particular actor or party, beneficiaries must first recognize that government is the reason they have received benefits or experienced some policy effect. What matters here is not whether government is actually responsible for conferring those benefits but whether recipients perceive government to be responsible. Variation in these perceptions depends on the length of the “causal chain” linking policy benefits to government. The greater the length of the causal chain, the more difficult it becomes for recipients to see government as the reason for their benefits. One way that policies may differ in the length of their causal chain is through the way that benefits are delivered. SNAP benefits (i.e., food stamps), for example, are funded and delivered by government: every month, government adds funds to a government-issued benefits card, which recipients can then use at grocery stores. This direct delivery mechanism ensures that government is seen as the sole source of these benefits.

Many other programs, however, delegate the provision of benefits to non-state actors, including private businesses (Morgan and Campbell 2011) and nonprofit organizations (Hamel and Harman 2023; Salamon 1995). That is, though government may fund most or all of the benefits, it does not deliver those benefits itself. For example, benefits for two of Medicare’s most widely used components—Parts C and D—are administered entirely by private-sector insurers. Similarly, student loans provided by private banks under the Federal Family Education Loan program are subsidized and guaranteed by government. Social services provided by charitable organizations like The Salvation Army are also partially government-funded. In each case, the role of government is hidden from view. Mettler (2011) classifies these types of policies, as well as those for which benefits are delivered through the tax code (e.g., the Earned Income Tax Credit, or EITC), as part of the “submerged state.” Consistent with arguments about traceability, she finds that beneficiaries of submerged programs are much less likely to believe that they receive government benefits relative to those who take part in easily traceable programs. Others have confirmed that the distinctive delivery mechanisms employed by these two types of programs are a primary reason why (SoRelle and Shanks 2024).

Taken together, then, Arnold (1990) outlines clear expectations as to which kinds of policies should impact the vote choices of those they affect. The problem, however, is that there is very little empirical research testing these expectations. Most analyses of feedback effects in voting behavior examine only highly traceable policies or programs that provide direct government assistance, such as SNAP (Kogan 2021), Medicaid (Shepherd 2022), FEMA disaster relief (Healy and Malhotra 2009), and TAA job loss compensation (Margalit 2011).Footnote 2

When researchers have considered the effects of traceability, they have reported evidence incompatible with theory. Indeed, Rendleman and Yoder (2023) find that governors enacting a state EITC see boosts in approval and vote share in the next election, with the effects concentrated among those eligible for the benefit and in states offering a financially generous benefit. These results are striking because, for most EITC recipients, the fact that its benefits come from government remains “obscured and unappreciated” (Shanks-Booth and Mettler 2019, 302). That feedback effects on voting behavior can emerge even for a submerged policy like the EITC could suggest that theories emphasizing the importance of traceability for mass policy feedback effects are overstated. I investigate this possibility further by systematically comparing and contrasting the voting effects of two large New Deal-era employment programs that differed primarily in their traceability to government.

TRACEABILITY OF THE WPA AND PWA

The Great Depression and the subsequent New Deal agenda saw the birth of the modern American welfare state. New Deal policies dramatically and permanently expanded the size and scope of the federal government; some of the programs and agencies born during this period remain cornerstones of US public policy and institutions (e.g., the Social Security Administration and the Federal Deposit Insurance Corporation). The two largest and most significant economic relief and recovery programs were the WPA and PWA. Their purpose was to reduce unemployment and consequently stimulate economic activity by investing in public works. The PWA began in June 1933 and the WPA in May 1935.

In the end, the two programs spent $12 billion, about $8 billion of which was spent by the WPA.Footnote 3 Over a 9-year period, the WPA paved 650,000 miles of roads and built 125,000 public buildings, 75,000 bridges, and 8,000 parks (Federal Works Agency 1947). The PWA, meanwhile, was responsible for 70% of all school buildings and 35% of hospitals and healthcare facilities built between 1933 and 1939 (Public Works Administration 1939). Together, the two programs generated almost 25 billion man-hours of work. The WPA had a particularly large impact on employment. In total, it employed more than 8 million workers; nearly 25% of US families relied on WPA wages at some point in time (Federal Works Agency 1947). The PWA created a significant number of jobs, too: in the first 2 years of the program alone, it is estimated that the PWA put two million people to work (Ickes 1935, 199).

Both programs provided a highly visible benefit: employment. Without these programs, many would have gone without work. The two differed, though, in their traceability to government—how easy it would have been for those employed through the programs to see the program (and by extension, government generally) as the reason they were employed. To be sure, both programs were equally traceable in one manner: both were proposed and championed by the Roosevelt administration as part of its New Deal program, and consequently, both were also primarily backed by the Democrats in Congress and opposed by the Republicans.Footnote 4 Clearly, policy beneficiaries of either program should view the Democrats as responsible for their benefits. I argue, however, that beneficiaries of the PWA would not attribute their benefits to the PWA or to government to begin with, while those receiving WPA benefits would. The primary distinction in this regard involves how the two programs created jobs: directly via government or through the private sector.

The WPA provided direct, public employment on (primarily) public works projects.Footnote 5 Local governments proposed projects to the WPA, which were then approved on the basis of whether there was sufficient labor supply to support the project and whether the project would provide useful benefits to the community upon completion. Project workers were selected through a government screening process. Persons interested in WPA work would apply at a local relief agency and register with the US Employment Service, a revamped federal labor exchange designed to connect job seekers with employers. Applicants were interviewed by agency staff to assess their need and employability (e.g., physical ability to perform labor-intensive work). Those certified would be notified by the WPA once selected for work.Footnote 6 Once hired, they became federal employees paid directly by the government each week. Supplementary Figure A2 shows an anonymized WPA paycheck as marked by the US Treasury, as well as a WPA worker receiving this same type of check. From application to hiring to receiving their wages, WPA workers interacted exclusively and directly with government, making it easy for them to attribute their job to government.

The PWA, on the other hand, subsidized private-sector employment on public works projects. Supplementary Figure A3 illustrates the PWA’s design. As with the WPA, local governments proposed projects that were then accepted or rejected by the federal government based upon the potential benefits to the proposing community. Once a project was accepted, though, the PWA did not hire workers to execute the project. Instead, it contracted with a private-sector construction firm, which then hired workers on the private market to carry out the project. As a result, PWA workers were not government employees but employees of the contractor. Consequently, they received their paycheck not from the government but from the contractor. The government served only as a bank, subsidizing employment and wages but otherwise having no direct role in hiring, overseeing, or paying PWA workers.

The PWA’s reach notably extended far beyond the project work site. In fact, just 35% of PWA funds were spent on wages for those working at the project site (Monthly Labor Review 1938). The remaining funds were spent on material purchases, which paid the wages of workers in factories, plants, and mills that manufactured the materials needed on-site. Indeed, every 2 hours of work on a PWA construction site corresponded to about 5 hours of work elsewhere (Public Works Administration 1939, 28). These workers were not PWA employees, either, but instead employees of a private-sector factory or plant. The PWA itself recognized that most of these workers would have no idea that they owed their continued employment and wages to the PWA:

If PWA dollars could have been marked with a distinctive symbol, their progress and speed from the mint to the Treasury, from the Treasury to the local owners of public works, from the sponsors to the contractors, from the contractors to the workers and to the material manufacturers, and so on down the line, might have been easy to observe. Such, however, was not the case. Workers in factories making materials had no way of knowing that their wages were paid in PWA dollars. Brakemen and handlers on railroad lines shipping materials had no way of knowing that their wages were paid in PWA dollars. (Public Works Administration 1939, 18)

In Arnold’s (1990) parlance, the causal chain linking benefits to government was short for every WPA beneficiary and very long for most PWA beneficiaries. For the most common PWA beneficiary (an off-site worker), seeing government as the cause of their job would require a complex cognitive exercise tracing the origins of their wages through two private-sector firms and back to government. In subsidizing private-sector job creation, the PWA disguised the role of government—much like student loans backed by government but supplied by private banks. By comparison, because the WPA hired and paid workers directly, beneficiaries could easily attribute their job to government. While Kantor, Fishback, and Wallis (2013) assess the combined electoral effects of these two programs (plus more than 10 other New Deal programs), theory suggests that the WPA should have had much larger mass feedback effects than the PWA. My analysis tests this expectation.

Comparing the WPA and PWA is imperfect because other features of the two programs also differ. Having said that, I argue that these differences are minimal or can otherwise be effectively “controlled” for. The main non-traceability difference is in the kinds of workers hired by the two programs (i.e., they had different target populations; see Schneider and Ingram 1993). As alluded to, the WPA hired nearly exclusively from the relief rolls—that is, those without work.Footnote 7 The PWA, though, did not require private firms to hire off the relief rolls, or even to hire a certain proportion of their workers off relief. Nevertheless, PWA contractors were directed to prioritize relief workers (see Public Works Administration 1939, 86).Footnote 8 Still, while the degree of reliance on relief remains a difference between the programs, it is less clear how this would shape potential feedback effects: given the magnitude of the Depression, those working on PWA projects may not have all come off relief, but they may well have ended up on relief if not for the PWA.

The kinds of workers employed by each program are partially a result of the kinds of projects taken on by the two programs. PWA projects tended to be larger on average (e.g., the Hoover Dam) and called for the operation of heavy machinery. In contrast, though the WPA completed many major construction projects (e.g., LaGuardia Airport and Griffith Observatory), most WPA projects were “make-work,” with those employed paving roads or cleaning local parks. PWA workers, therefore, tended to be skilled: about 62% of wages on PWA projects were paid to skilled laborers (Byer 1935). On the other hand, averaging across project types, 77% of WPA workers were unskilled (Federal Works Agency 1947, 38). I address this difference in my empirical analysis by analyzing a subset of projects for which the WPA employed more skilled labor: school construction projects. Doing so helps hold relatively constant the type of worker employed and the kind of project completed across the two programs.

DATA

The ideal way to test my hypotheses would be with an individual-level administrative dataset containing information on the voting behavior of WPA/PWA beneficiaries, and non-beneficiaries, in the 1930s (Anzia, Jares, and Malhotra 2022). Though the National Archives keep records on program workers, these records do not include any political information; even if they did, they would lack information about those who were never employed by these programs. As an alternative, feedback studies often make use of self-reports of program participation or state interaction regularly found in political survey data (e.g., Mettler and Stonecash 2008; Weaver and Lerman 2010). While Gallup asked many questions about support for New Deal programs in the 1930s (Caughey, Dougal, and Schickler 2020), it did not ask about WPA and PWA program participation.

Given these limitations, I instead examine changes in county-level presidential election results before and after the WPA and PWA were put in place as a function of the degree of WPA and PWA program activity in the county. This approach is possible because the WPA and PWA were administered unequally across the country. That is, some states and counties saw more program benefits flow to their residents than did others. In this way, my strategy draws on two approaches found in previous mass feedback studies: (1) comparing programs that differ along the dimension of interest but are otherwise similar (Soss 1999) and (2) leveraging differences in program design or presence across geographies (Bruch, Ferree, and Soss 2010; Ternullo 2022).

To measure WPA and PWA activities at the county level, I use two datasets of program expenditures. The first captures the total amount of money the WPA and PWA spent in every US county from 1933 to 1939 (Fishback, Kantor, and Wallis 2003). Supplementary Figure A4 provides an example of the data for Cook County, Illinois. Importantly, for each county, the data provide only the total amount of money spent by the program in the county without any indication of how that money was spent. For each county, I calculate per capita WPA and PWA spending in 1944 dollars.Footnote 9 Figure 1 places each county into a spending quintile. As is clear, there is substantial geographic variation in how WPA and PWA spending was distributed. Moreover, it is not the case that receiving more WPA money also meant receiving more PWA money, or vice versa: per capita spending for the two programs is correlated at just 0.04. The median county received about $34 per person in WPA spending and about $10 per person in PWA spending. For both programs, the interquartile range ($38 per person for the WPA and $17 for the PWA) exceeds the median, indicating again the variation in how program benefits were allocated nationally. My analysis exploits this variation.Footnote 10

Figure 1. Geographic Distribution of WPA and PWA Spending

Note: Panels (a) and (b) plot per capita WPA and PWA spending by county, respectively. Counties are placed into quintiles, with counties in the lightest shade of yellow receiving the least amount of money (Q1) and counties in the darkest blue receiving the most money (Q5).
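To make the construction of these measures concrete, the sketch below shows how the per capita spending variables and Figure 1-style quintiles might be built. It is a minimal illustration under assumed inputs: the data frame, column names (wpa_total, pwa_total, pop_1930), and example values are hypothetical, not the author's replication code.

```python
import numpy as np
import pandas as pd

# Hypothetical county-level frame: total 1933-39 program spending
# (in 1944 dollars) and 1930 population for three example counties.
df = pd.DataFrame({
    "county_fips": ["17031", "06037", "48201"],
    "state": ["IL", "CA", "TX"],
    "wpa_total": [250e6, 180e6, 12e6],
    "pwa_total": [90e6, 15e6, 40e6],
    "pop_1930": [3_982_123, 2_208_492, 359_328],
})

# Per capita spending by program, the quantity mapped in Figure 1.
df["wpa_pc"] = df["wpa_total"] / df["pop_1930"]
df["pwa_pc"] = df["pwa_total"] / df["pop_1930"]

# Log transform used in the regressions: ln(per capita spending + 1).
df["ln_wpa"] = np.log(df["wpa_pc"] + 1)
df["ln_pwa"] = np.log(df["pwa_pc"] + 1)

# With the full ~3,000-county data, Figure 1's quintiles would follow as:
# df["wpa_q"] = pd.qcut(df["wpa_pc"], 5, labels=False)
```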

A key assumption made in using these data is that counties that received the most WPA and PWA funds are also the counties with the most policy beneficiaries. Another is that the measure is comparable across the two programs. One concern is that at least some of the plants and factories supplying materials to PWA sites may not have been located in the same county or even the same state.Footnote 11 Consequently, the votes of these off-site PWA beneficiaries would not be recorded in the same county as their benefits are in the spending data. Null effects for the PWA, then, may arise not because beneficiaries fail to see their wages as courtesy of the PWA, but because the aggregate-level data do not pick up their behavior. I address this potential problem by analyzing the effects of wages paid to on-site workers—that is, those who were more likely to also live in the county of the project site. As cited above, nationally, about 35% of PWA dollars were spent on on-site labor. Applying this percentage to each county’s total PWA spending figure, I estimate how much of the total money spent in the county would have gone toward on-site labor and assess the effects of those monies separately.
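The on-site adjustment itself is simple arithmetic; continuing the hypothetical frame from the sketch above, it might look like:

```python
# Apply the national 35% on-site labor share to each county's total PWA
# spending to approximate wages paid to workers likely to live (and
# vote) in the county of the project site.
ONSITE_SHARE = 0.35
df["pwa_onsite_pc"] = ONSITE_SHARE * df["pwa_pc"]
df["ln_pwa_onsite"] = np.log(df["pwa_onsite_pc"] + 1)  # np imported above
```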

I supplement the nationwide data with an original dataset of WPA and PWA school construction spending in California between 1933 and 1938. Analyzing the effects of one type of project pursued by both programs allows for a cleaner comparison of the two because it holds more constant the scope and output of the work. I chose school construction spending specifically because the WPA and PWA also employed more comparable labor on such projects. While the WPA employed far fewer skilled workers than the PWA on average, public building construction projects, of which school construction is one subset, represent an important exception. On these projects, just 53% of workers were unskilled (Federal Works Agency 1947, 38), a decrease of 24 percentage points relative to the WPA average. Comparing the effects of PWA and WPA school construction spending can, therefore, minimize (though not eliminate) differences in who the two programs employed, one of the key remaining non-traceability differences between the two programs identified in the previous section.

Records for every WPA and PWA project are available at The National Archives; for the WPA, these records were recently digitized and made public.Footnote 12 PWA projects are reported in state-by-state spreadsheets, while WPA projects are reported in an index organized by state and county where each card corresponds to one project. For both programs, the data report the total amount of money spent on each project as well as the location of the project. Supplementary Figure A5 shows part of one California PWA report, while Supplementary Figure A6 displays a WPA project index card. The WPA data for California include 9,791 total index cards, each corresponding to a single project.

I enlisted a team of research assistants to hand-code each WPA and PWA project as involving school construction, or not, and then calculated the total per capita amount of money spent by each program on school construction in each county. While the WPA spent more nationally (and in most counties) than the PWA, the PWA spent considerably more money on school construction, at least in California ($50 million vs. $24 million). Moreover, school construction was a major focus of the PWA: about 49% of all California PWA projects involved school construction, compared to just 15% of all WPA projects.

I merged each of these spending measures with county-level presidential election returns from 1916 to 1944, meaning that my dataset includes five pre-spending elections and three post-spending elections. For each county-election year, I calculate the Democratic candidate’s share of the two-party vote.
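In code, assembling the county-election panel might look like the sketch below; the file name and vote-count columns are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical long-format returns, 1916-1944: one row per
# county-election with columns county_fips, year, dem_votes, rep_votes.
returns = pd.read_csv("county_presidential_returns_1916_1944.csv")

# Outcome: Democratic share of the two-party vote, in percentage points.
returns["dem_share"] = 100 * returns["dem_votes"] / (
    returns["dem_votes"] + returns["rep_votes"]
)

# Attach the time-invariant spending measures to every election year.
panel = returns.merge(df, on="county_fips", how="inner")

# Spending "switches on" with the first post-program election (1936)
# and is set to zero for the five pre-spending elections.
panel["ln_wpa_post"] = panel["ln_wpa"] * (panel["year"] >= 1936)
```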

EMPIRICAL STRATEGY

Before describing my research design, I plot the trajectory of Democratic voting from 1916 to 1944 by WPA and PWA spending using the nationwide data. Figure 2 places each county into a within-state spending quintile by program and gives the average Democratic county vote share by spending quintile and election. For both programs, we see few differences in voting by spending levels prior to 1936. “High” and “low” spending counties are clearly on a relatively similar path in the pre-spending period. After 1932, though, counties appear to diverge by spending levels. By 1944, support for the Democratic Party is highest in counties that received the most WPA and PWA spending. This provides some initial descriptive evidence of feedback effects, albeit with apparent effects for the PWA as well.

Figure 2. Trajectory of Democratic Voting by WPA and PWA Spending

Note: Panels (a) and (b) plot average Democratic vote share by within-state WPA and PWA spending quintiles, respectively, in each election. The dashed gray line indicates the first post-spending election cycle.

To estimate the effects of the WPA and PWA more systematically, I use a generalized difference-in-differences design. My preferred specification is the following linear model:

(1) $$ \%Democrat_{cst} = \beta_1 \ln(Spending+1)_{cs} + \theta_{cs} + \alpha_{st} + X_{cst} + \epsilon_{cst}, $$

where $\%Democrat_{cst}$ is the Democratic share of the two-party vote in county c and state s at time t. $\ln(Spending+1)_{cs}$ is either logged WPA or PWA spending per capita in county c and state s. For all t prior to 1936, $\ln(Spending+1)_{cs}$ is equal to 0.Footnote 13 $\theta_{cs}$ are county fixed effects, and $\alpha_{st}$ are state-year fixed effects.Footnote 14 County fixed effects control for time-invariant features of each county that may affect voting, while state-year fixed effects account for election-specific, state-level factors. Combined, these fixed effects specify a model comparing the within-county change in Democratic support in counties with large WPA or PWA allocations to the within-county change in Democratic support in same-state counties with smaller allocations.
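As a rough illustration, Equation 1 can be estimated by ordinary least squares with dummy variables, as in the sketch below (column names continue the hypothetical panel built earlier; with roughly 3,000 counties, a dedicated high-dimensional fixed-effects estimator would be much faster, but the logic is identical).

```python
import statsmodels.formula.api as smf

# Generalized difference-in-differences: county fixed effects plus
# state-by-year fixed effects, with standard errors clustered by county.
fit = smf.ols(
    "dem_share ~ ln_wpa_post + C(county_fips) + C(state):C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["county_fips"]})

print(fit.params["ln_wpa_post"])  # beta_1, the feedback-effect estimate
```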

For $\beta_1$ to be causal, we must assume parallel trends: that the outcomes of the treated and comparison groups would have evolved similarly in the absence of treatment. This assumption is typically assessed by examining pretreatment trends in the outcome by treatment status. In my case, the assumption would be clearly violated if counties that received larger WPA or PWA allocations were already trending toward the Democratic Party faster than counties that received smaller allocations. Figure 2 suggests that this is not the case: counties were “moving together” in terms of their voting patterns prior to spending, regardless of how much WPA or PWA money they later received.

But evidence of parallel trends before treatment does not guarantee parallel trends over the entire study period. It must be the case that “big” and “small” WPA and PWA spending counties would also have continued along the same trend in their voting behavior if the two programs never came to be. Violations of this assumption could result from the presence of covariates with time-varying effects on voting: county characteristics that positively predict WPA or PWA spending and increasingly predict support for the Democratic Party over the study period. It is undisputed that voting coalitions changed dramatically between the 1920s and 1930s: the poor, immigrants, urban dwellers, and African Americans all shifted away from the Republican Party and toward the Democratic Party (Gamm 1989). Supplementary Figure A7 depicts some of these dynamics—for example, urban counties and counties with high unemployment as of 1930 “flipped” partisan loyalties just as WPA and PWA spending kicked in. Further, these same demographic characteristics are also positively correlated with WPA and PWA spending (Supplementary Table A1). As a result, any positive effect I observe of these programs on Democratic voting may not be directly attributable to the programs themselves—that is, urban counties, for instance, may have become more Democratic by virtue of being urban rather than because they were the primary beneficiaries of either program.

To directly account for this confounding, I estimate models with a series of demographic fixed effects, denoted by $X_{cst}$ in Equation 1. I include fixed effects for each of the county Black, urban, unemployed, and foreign-born populations as measured in the 1930 Census (i.e., prior to spending). For each of the four variables, I place each county into a within-state quintile and then interact each quintile with state and year. I am, therefore, including a separate fixed effect for each demographic quintile, state, and election year combination, which allows the effects of these demographics on voting patterns to vary not only by election cycle but also by state. The inclusion of these demographic fixed effects means that the counterfactual comparison group for high spending counties is not just low spending counties within the same state, but low spending, same-state counties with similar pre-spending demographic features.
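Concretely, the demographic fixed effects might be constructed as below for one of the four variables (a hypothetical pct_urban_1930 column; the Black, unemployed, and foreign-born shares would be handled identically).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Within-state quintiles of county urbanicity as of the 1930 Census.
panel["urban_q"] = panel.groupby("state")["pct_urban_1930"].transform(
    lambda s: pd.qcut(s, 5, labels=False, duplicates="drop")
)

# Each quintile x state x year cell receives its own fixed effect,
# letting the electoral pull of urbanicity vary by state and cycle.
fit_fe = smf.ols(
    "dem_share ~ ln_wpa_post + C(county_fips) + C(state):C(year)"
    " + C(urban_q):C(state):C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["county_fips"]})
```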

I estimate two sets of models, each using a different sample of elections. The first includes all election years in my data, 1916 to 1944. It estimates the effect of WPA and PWA spending as the within-county change in average support for the Democratic Party between 1916–32 (pre-spending) and 1936–44 (post-spending). The second approach uses just 1932–44 and estimates the effect of spending on Democratic voting relative only to support for Roosevelt in 1932. This sample holds the Democratic candidate constant and allows me to see whether New Deal programs increased Roosevelt’s county-level support above and beyond his local support in his landslide 1932 victory.

RESULTS

Table 1 presents the effects of the WPA and PWA on Democratic voting.Footnote 15 Columns 1–6 show the results using elections from 1916 to 1944, while columns 7–12 show the effects of the two programs relative to Roosevelt’s vote share in 1932. The even-numbered columns include the set of demographic fixed effects. In models without demographic adjustment, I find positive and significant effects of both the WPA and PWA on Democratic voting in both election year samples. The effect of the WPA, however, far outpaces that of the PWA in substantive magnitude, and in the 1916–44 models, the effects of the two are also statistically distinguishable from one another (95% CI = [0.68, 1.59]). Moreover, as anticipated, when restricting PWA expenditures to those monies put toward on-site wages, for which we can be relatively more certain that beneficiaries live and vote in the county of the project, the effect of the PWA on Democratic support increases in magnitude. Still, it remains substantively smaller than the effect of the WPA.

Table 1. Effect of WPA and PWA Spending on Democratic Voting

Note: Standard errors are clustered by county. ***p<0.001.

After incorporating the demographic fixed effects, the effect of the WPA attenuates in magnitude but remains positive and statistically significant. The WPA, therefore, appears to have increased support for the Democratic Party above and beyond what changes in demographic voting patterns predict on their own. The coefficient in column 2 suggests that each 1% increase in per capita WPA spending boosted Democratic vote share by 0.007 percentage points. More substantively, the estimates suggest that moving from the 10th percentile to the 90th percentile of county WPA spending—$10.34 to $89.47—would increase Democratic vote share by about 1.46 percentage points. Because the average Democratic vote share between 1916 and 1944 is 56.93%, this effect represents about a 2.6% increase in Democratic vote share. The effects are marginally smaller when considering the change in FDR’s vote share from before and after WPA spending began (column 8), with the same change in WPA spending increasing Democratic voting by 1.37 percentage points. On the other hand, when accounting for demographic confounders, the positive feedback effects of the PWA disappear completely. From column 3 to column 4 of Table 1, the coefficient shrinks by about 90%, and the effect is no longer statistically significant (p = 0.65).
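The percentile-based effect size can be checked by hand. With a coefficient of roughly 0.7 on logged spending (implied by the reported 0.007 percentage points per 1% increase in spending), the calculation is:

```python
import numpy as np

beta = 0.7               # implied coefficient on ln(WPA spending per capita + 1)
p10, p90 = 10.34, 89.47  # 10th and 90th percentiles of per capita spending

effect = beta * (np.log(p90 + 1) - np.log(p10 + 1))
print(round(effect, 2))  # ~1.45, matching the reported 1.46 up to rounding
```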

Why do the effects of the PWA, but not the WPA, fade when accounting for demographics? Supplementary Tables A6–A9 re-estimate the models in Table 1 without demographic fixed effects and instead include an interaction between spending and the continuous measure of each of the demographic variables included in the fixed effects vector. Consider county urbanicity. Supplementary Table A6 shows that WPA spending increased Democratic voting regardless of how urban the county is. Differences in effects are a matter of degree, with more urban counties seeing larger shifts toward the Democratic Party than less urban counties. Consequently, though the WPA effects attenuate when accounting for urbanicity, the effects do not disappear completely.

In contrast, the PWA had effects on Democratic voting only in urban counties; in nonurban areas, Democratic voting slightly decreases as a function of greater PWA spending. The average PWA effect once accounting for county urbanicity is thus a combination of positive and negative effects in different types of counties. As a result, the average effect of the PWA trends toward zero. Any positive feedback effects of the PWA are not causal; they can be well explained by the demographic features of the county and the relationship between those features and voting behavior over time. I find similar dynamics for the other variables, too, including and especially unemployment (Supplementary Table A7).

The results presented thus far show the average differences in Democratic voting between the pre- and post-spending periods. Figure 3 re-estimates columns 2 and 4 of Table 1 but displays the effects of both programs by election year, with effects for each cycle interpreted as the change in support relative to 1916. This “event study” specification has two purposes. First, it offers another way of confirming that the parallel trends assumption is satisfied. And indeed, as in Figure 2, there are no significant effects of either program in the pre-spending period. Second, it tests when effects emerge in the post-spending period and whether any observed effects persist through multiple election cycles. In the post-spending period, I observe significant, similarly sized effects for the WPA in both 1940 and 1944, but no effects in 1936. The lack of effects in 1936 may be because the WPA only began in mid-1935, and so much of the spending included in my measure had yet to occur.

Figure 3. Event Study Estimates of WPA and PWA Spending on Democratic Voting

Note: Panels (a) and (b) plot event study estimates with 95% confidence intervals of the effects of the WPA and PWA on Democratic voting. Effects can be interpreted relative to 1916. Models include demographic fixed effects. Coefficients and model diagnostics are also presented in Supplementary Table A3.
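A sketch of the event-study specification behind Figure 3, with 1916 as the omitted reference cycle (hypothetical column names as before):

```python
import statsmodels.formula.api as smf

# Interact spending with each election year except the 1916 baseline.
years = [1920, 1924, 1928, 1932, 1936, 1940, 1944]
for yr in years:
    panel[f"wpa_x_{yr}"] = panel["ln_wpa"] * (panel["year"] == yr)

rhs = " + ".join(f"wpa_x_{yr}" for yr in years)
event = smf.ols(
    f"dem_share ~ {rhs} + C(county_fips) + C(state):C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["county_fips"]})

# Coefficients on wpa_x_1920 through wpa_x_1932 probe pre-trends; those
# on wpa_x_1936 onward trace when post-spending effects emerge and persist.
```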

Table 2 reports the effects of the WPA and PWA on voting using the supplementary dataset covering school construction spending in California. Broadly speaking, the results are consistent with the nationwide analyses. First, across election year samples, the WPA effects dwarf those of the PWA in substantive magnitude. In the 1932–44 sample, the effect of the WPA on Democratic support is statistically significant, while the WPA effect using election years 1916–44 falls just short of significance (p = 0.14). To be sure, because this analysis rests on just 58 counties in one state, statistical power is limited; the difference between the WPA and PWA effects crosses zero no matter which election sample is used. Still, that the coefficients point in a similar direction is encouraging. Even when holding more constant the type of project and type of worker employed by the two programs, mass feedback effects appear to have emerged out of the WPA, while PWA feedback effects appear vastly more limited.

Table 2. Effect of WPA and PWA School Construction Spending on Democratic Voting (CA)

Note: Standard errors are clustered by county. *p<0.05.

I also conducted several robustness checks and additional analyses. First, I estimate a placebo test in which I regress Democratic voting on WPA monies allocated for projects that were never completed. These are projects that local sponsors proposed but that the WPA—for whatever reason—rejected. Information on each rejected project, and how much money was proposed to be spent, is included in the WPA project index at the Archives; the analysis is therefore restricted to California. Supplementary Figure A8 shows an example of a rejected project. Because no one benefitted from these projects, they should have no effect on voting patterns. As expected, I find no effects of such spending (Supplementary Table A10), lending additional credibility to the results using data on projects actually completed.

Second, I explore one possible mechanism behind these WPA effects: mobilization. As discussed, policy benefits may increase the ability of beneficiaries to participate in politics. The WPA may, therefore, have had positive effects on Democratic voting because the program mobilized Democratic-leaning citizens who otherwise would not vote. An alternative mechanism is persuasion, whereby WPA benefits changed the political preferences of existing voters receiving jobs through the program. Supplementary Table A11 regresses the log of overall votes cast on WPA spending. Though only suggestive, this analysis shows no increase (or decrease) in votes cast as a function of spending. Persuasion appears to be the likely driver of the reported WPA feedback effects.

Finally, I assess the effects of the WPA and PWA in gubernatorial elections, which provides some purchase on whether the effects I report are FDR-specific or whether the program affected partisan loyalties more broadly.Footnote 16 Supplementary Table A12 gives the results. As with presidential elections, the WPA (but not the PWA) increased Democratic voting; the WPA, therefore, boosted Democrats generally, not just Roosevelt. Moreover, I show that the effects of the WPA on Democratic voting are largest in states where Republicans held the governorship. That is, the WPA increased support for Democratic challengers more so than it did for Democratic incumbents. Taken together, these results confirm the broad effects of the WPA up and down the ballot, and the expectation that these jobs were traced back to Democrats and only Democrats. Supplementary Table A13, however, does show an important qualification: the effects of the WPA in gubernatorial elections appear much smaller in Southern states, implying that the down-ballot effects did not extend to those state-level Democrats typically most hostile to the federal New Deal agenda.

ALTERNATIVE MECHANISMS

It Is the Economy, Stupid

I have argued that the disparate effects of the WPA and PWA on voting behavior stem from variation in how easy it was for policy beneficiaries to see government as the source of their employment. But non-beneficiaries may also be politically responsive to policies (e.g., Soss and Schram 2007). My use of aggregate-level data, therefore, makes it plausible that the effects I report capture the behavior of the broader public. Below I propose and assess two possible alternative mechanisms along these lines. One is that the WPA improved the local economy generally, while perhaps the PWA did not. In this case, the observed positive effect of the WPA on Democratic support may just be a reflection of voters observing positive economic growth in the aftermath of WPA spending and rewarding the Democratic Party for it. In other words, citizens may have behaved as they always do by rewarding the incumbent party for good economic times (de Benedictis-Kessner and Warshaw 2020; Healy and Lenz 2017).

Research in economics casts some initial doubt on this explanation. Most notably, counties with more WPA workers actually had higher unemployment in both 1937 and 1940 (Fleck 1999), in part due to declines in private sector employment opportunities (Neumann, Fishback, and Kantor 2010). Moreover, Bernanke (1986) shows that the WPA had no impact whatsoever on average earnings, at least in manufacturing. In light of these findings, it seems unlikely that overall improvements in the labor market in “high” WPA spending counties can explain the WPA’s effect on Democratic support.

Below I offer further analysis of how the WPA and PWA affected local economies using additional economic outcomes and a difference-in-differences style analysis. Doing so requires measures of the county economy taken with regularity before and after the two programs began. Common measures used in contemporary studies of economic voting (e.g., income and GDP) are not available at the county level during this time period. I instead measure the county economy with information on the value of bank deposits as reported annually by the Federal Deposit Insurance Corporation from 1925 to 1936. Bank deposits are an appropriate measure because deposit contraction and bank failures during the Depression co-varied with local, state, and national economic fundamentals (Calomiris and Mason 1997; 2003). Supplementary Figure A9 confirms that bank deposits appear to be a reasonable economic indicator, as median per capita bank deposits dip around the start of the Depression and steadily improve afterward, as one would expect.

I estimate the effects of WPA and PWA spending on bank deposits with the following specification:

(2) $$ \ln(Bank\ Deposits)_{cst} = \beta_1 \ln(Spending)_{cs} + \theta_{cs} t + \alpha_{st} + \epsilon_{cst}, $$

where $\ln(Bank\ Deposits)_{cst}$ is log-transformed per capita bank deposits in county c and state s at time t, $\ln(Spending)_{cs}$ is either logged WPA or PWA spending per capita in county c and state s, $\theta_{cs}$ are county fixed effects, and $\alpha_{st}$ are state-year fixed effects. I again assume parallel trends: that, in the absence of spending, counties receiving large and small WPA and PWA allocations would have otherwise been on a similar trajectory with respect to bank deposits. This is a strong assumption in this context because counties might get more funding (and perhaps especially, more WPA funding) precisely because they experience or anticipate greater economic decline. If so, then any economic boost after WPA spending cannot be interpreted causally. While it is no silver bullet for such concerns, scholars recommend accounting for differential trends by including county-specific linear time trends in the specification (Hassell and Holbein 2024). These are symbolized by t in Equation 2.
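A sketch of Equation 2, with county-specific linear trends implemented as county dummies interacted with a continuous year counter; the bank-deposit file and its columns are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-year panel, 1925-1936, with columns county_fips,
# state, year, deposits_pc, and the county's (time-invariant) ln_wpa.
banks = pd.read_csv("fdic_county_deposits_1925_1936.csv")
banks["ln_deposits_pc"] = np.log(banks["deposits_pc"])

# Spending switches on in post-program years (the WPA began in 1935).
banks["ln_wpa_post"] = banks["ln_wpa"] * (banks["year"] >= 1935)

# County-specific linear trends: county dummies times a year counter
# (included in the even-numbered columns of Table 3).
banks["trend"] = banks["year"] - banks["year"].min()
fit2 = smf.ols(
    "ln_deposits_pc ~ ln_wpa_post + C(county_fips) + C(state):C(year)"
    " + C(county_fips):trend",
    data=banks,
).fit(cov_type="cluster", cov_kwds={"groups": banks["county_fips"]})
```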

Table 3 shows the results. Even-numbered columns include the county time trends, while the odd-numbered columns do not. The results suggest that, no matter the specification, the WPA did not increase or decrease county bank deposits. In addition, whatever positive effects the PWA appears to have had in fact represent a continuation of pre-existing trends in bank deposits between counties that received large versus small PWA allocations. Put more simply, bank deposits were already growing in counties that received more from the PWA before the PWA spent any money at all. Figure 4 shows event study estimates, which again confirm both the lack of pre-trends (especially for the WPA) and the lack of causal economic effects for both programs.Footnote 17

Table 3. Effect of WPA and PWA Spending on Bank Deposits

Note: Standard errors are clustered by county. ***p<0.001.

Figure 4. Event Study Estimates of WPA and PWA Spending on Bank Deposits

Note: Panels (a) and (b) plot event study estimates with 95% confidence intervals of the effects of WPA and PWA spending, respectively, on bank deposits. Estimates are generated from a model including county and state-year fixed effects and linear county trends. Coefficients and model diagnostics are also presented in Supplementary Table A4.

Drawing conclusions about economic effects on the basis of bank deposits alone may seem premature. I, therefore, also report difference-in-differences estimates of the effects of WPA and PWA spending on retail sales and farm values (see Supplementary Figure A10 for descriptive statistics on these variables), both of which have been used in other historical research as indicators of the county economy (Fishback, Horace, and Kantor 2005; Rogowski et al. 2022). Supplementary Table A14 reports these results. Again, I find no positive effects of the WPA on either economic measure. Instead, the WPA appears to have had a negative effect on both of these outcomes, even when accounting for county trends in the specification. These results, though, must be interpreted with immense caution because, for both outcomes, data availability precludes me from testing for parallel trends violations. For instance, county retail sales data are available from the Census of Business only for the years 1929, 1933, 1935, and 1939. In estimating the effects of the WPA on retail sales, then, there are just two pre-treatment periods, one of which must be used as the baseline year in an event study-type analysis. Still, setting aside these data challenges, there does not appear to be any clear or overwhelming evidence that the WPA boosted local economies. My evidence, combined with existing studies of unemployment and earnings, suggests that, more than likely, the effects of the WPA on voting behavior are not a response to spending-induced economic growth.

Ideology

Another possible mechanism is ideology. Caughey, Dougal, and Schickler (2020) show that public support for the New Deal agenda (mass liberalism, as they call it) positively predicted Democratic voting in presidential elections during this time period. If mass liberalism is highest in counties that received more WPA money, then increased support for Democrats as a function of the WPA may just reflect ideological liberalness in the public at large. Testing this claim directly requires county-level public opinion data, which, to the best of my knowledge, do not exist. However, it seems unlikely that WPA-driven ideological liberalism drives my results. For one, the public routinely cited the WPA as its least favorite New Deal program, derisively claiming that WPA stood for “We Poke Along” or “We Piddle Around.”Footnote 18 Additionally, public support for cuts in New Deal relief programs was highest in places that had the highest levels of unemployment (Newman and Jacobs 2010)—exactly those places that, on average, received the most WPA spending. If anything, then, the places that received the most WPA resources were likely those that would score lowest on mass liberalism.

CONCLUSION

Though prominent theoretical work sees traceability as a necessary condition for the formation of mass feedback effects on voting behavior, there is little empirical evidence that this is the case, largely because nearly every empirical analysis examines a highly traceable policy. But as Campbell (2012, 347) argues, “examining instances in which feedback effects did not emerge … is needed to be able to say something conclusive about the conditions under which they do occur.” Put another way, showing that traceable programs can generate feedback effects is not enough to conclude that the emergence of feedback effects depends on the traceability of program design. This article addresses this weakness in the existing literature with an analysis of two New Deal-era employment programs, one of which created jobs directly through government (the WPA), while the other funded private-sector job creation (the PWA). Consistent with theory, I find that the WPA increased support for the Democratic Party, while the PWA did not.

At their core, mass policy feedback studies are about the prospects for democratic accountability. A central tenet of democratic theory is that citizens reward and punish elected officials for their behavior and performance in office (e.g., Ferejohn 1986). My results suggest that citizens’ ability to hold politicians accountable is limited, but not entirely so. On the one hand, policy recipients are not blind—they often do see government providing them with policy benefits, and respond politically in kind. On the other hand, recipients are not all-seeing, either, and can struggle to accurately attribute policies and policy effects to government (Achen and Bartels 2016; Sances 2017). While the role of government was surely more obscured with the PWA, PWA workers and those employed in factories supplying PWA projects still owed their wages to government. The problem was that the program’s design exaggerated the market’s role in providing those wages. Citizens’ capacity to see and evaluate government action is, therefore, limited, and highly dependent on policy design.

At the elite level, these results suggest that politicians have electoral incentives to design policies that deliver benefits effectively, efficiently, and directly to citizens. Elected officials seem to agree: they regularly claim clear credit for the effects of policies they enacted.Footnote 19 Beyond electoral effects, direct policy designs may also improve waning citizen trust in political elites and their capacity to help citizens (Dawkins 2021). Yet they may also open the door to using policy for electioneering. Perhaps unsurprisingly, Republicans often charged that the WPA was nothing more than a campaign organization dressed up as a recovery program (e.g., Clement 1971, 248). More generally, while direct policies may be the most politically useful for office-seekers, they may not always be the most effective way of improving the societal conditions they are intended to address. Electoral incentives, therefore, could be a threat to good public policy.

My analysis is not without limitations. For starters, I use aggregate-level data to test an individual-level theory about the political behavior of policy beneficiaries, a choice that stems from the lack of available individual-level data but one that risks ecological fallacies. My approach cannot directly identify the effect of receiving a WPA job on an individual’s vote choice. Rather, I rely on the assumption that counties receiving more WPA spending had more WPA workers than counties receiving less, and that any effect of spending captures the behavior of program workers. While I have explored and largely ruled out other mechanisms that could plausibly explain the aggregate results, those results must still be interpreted with this caveat in mind.

There may also be generalizability concerns. The historical nature of the evidence means that the findings may not hold for programs created during more polarized times, when persuasion is more difficult. On the other hand, Anzia, Jares, and Malhotra (2022) find feedback effects on political attitudes for direct farm assistance programs initiated during the Trump administration, especially among self-described liberals predisposed to oppose Republican-initiated programs. Mass feedback effects may, therefore, still be possible in an era of partisan sorting and hostility, but more research is certainly needed.

To be sure, this article is not the be-all and end-all on traceability and mass policy feedback effects. Two paths forward seem especially fruitful. First, from a research design standpoint, scholars should look for opportunities to leverage within-program changes in traceability, an approach likely stronger than the cross-program comparison I employ here. The Child Tax Credit (CTC) may be particularly well suited for this type of analysis. Though traditionally part of the “submerged state,” the CTC temporarily changed its delivery mechanism in 2021, when families began receiving direct monthly checks.

Second, scholars might pay more attention to Arnold’s (1990) second facet of traceability—whether recipients can credit particular government actors for a policy. Theory suggests that there should be no feedback effects on vote choice when policy responsibility is shared across parties, yet Medicaid expansion appears to have increased support for Democrats even in states where Republicans were wholly responsible for it (Shepherd 2022). This finding raises questions about how exactly citizens decide who is responsible for their policy benefits. An experimental study that randomizes elements of responsibility (e.g., initial enactment and implementation) across levels of government and partisan actors could offer insight into this question and help clarify existing theoretical accounts.
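As a minimal sketch of what the assignment stage of such an experiment might look like (the factor labels below are entirely hypothetical, not drawn from any existing study):

```python
import random

# Hypothetical factorial design: which partisan actor enacted the
# benefit, and which level of government implements it.
ENACTING_PARTY = ["Democratic", "Republican", "bipartisan"]
IMPLEMENTING_LEVEL = ["federal", "state"]

def assign_condition(rng: random.Random) -> dict:
    """Independently randomize each facet of policy responsibility."""
    return {
        "enacting_party": rng.choice(ENACTING_PARTY),
        "implementing_level": rng.choice(IMPLEMENTING_LEVEL),
    }

# Assign 1,000 hypothetical respondents; seeded for reproducibility.
rng = random.Random(42)
assignments = [assign_condition(rng) for _ in range(1000)]
```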

The empirical mass policy feedback literature has made tremendous strides over the past several decades in establishing a robust connection between the creation of new policies and the politics that ensue. Scholars should continue to stress-test existing theories—whether those emphasizing the role of traceability, the size of policy benefits, or whether benefits are conferred upon individuals who share a common identity—as new policies and programs surface. If theory is correct, many of these empirical studies will return null results. Pursuing and publishing these null findings, however, is crucial to deepening and refining our understanding of when and why policy makes mass politics, and of democratic governance more generally.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/S0003055424000704.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/VGLBPX.

ACKNOWLEDGMENTS

I am grateful to Graeme Blair, Andrea Campbell, Chris Faricy, David Fortunato, Marty Gilens, Jeff Lewis, Shom Mazumder, Tyler Reny, Chris Tausanovitch, Dan Thompson, Lynn Vavreck, four anonymous reviewers, and audiences at the European University Institute, the Junior Americanist Workshop Series, Louisiana State University, Stanford University, the University of California, Los Angeles, and the University of Utah for helpful comments and suggestions. I also thank Evan Kalish for sharing data, and Emma Long, Carly Watts, Sean Whyard, and Lailah Williams for exceptional research assistance. All errors are my own.

FUNDING STATEMENT

This work was supported by Louisiana State University and the Marvin Hoffenberg Chair in American Politics and Public Policy at the University of California, Los Angeles.

CONFLICT OF INTEREST

The author declares no ethical issues or conflicts of interest in this research.

ETHICAL STANDARDS

The author affirms this research did not involve human participants.

Footnotes

1 Throughout the article, when referring to “benefits,” I mean the effects or consequences of a policy, good or bad—e.g., a new job or being drafted to war.

2 The focus on traceable policies extends to studies of political participation and attitudes, too. These include papers on universal basic income (Loeffler 2023), TANF (Soss 1999), Medicare (Lerman and McCabe 2017), Medicaid (Clinton and Sances 2018; Michener 2018), FEMA and USDA assistance (Anzia, Jares, and Malhotra 2022; Chen 2011), as well as Social Security (Campbell 2002) and the G.I. Bill (Mettler 2002). One exception is Mettler (2011), who studies a set of traceable and non-traceable policies and finds that more traceable policies are more likely to motivate political participation than less traceable ones.

3 $12 billion is about 13% of nominal GDP in 1939. By comparison, President Obama’s 2009 economic stimulus package represented about 5% of 2009 nominal GDP.
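As a rough check on these figures, assuming nominal GDP of about $93 billion in 1939, and a 2009 stimulus of about $787 billion against nominal GDP of about $14.4 trillion:

$$
\frac{12}{93} \approx 0.13, \qquad \frac{787}{14{,}400} \approx 0.05.
$$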

4 The WPA was created by an executive order in 1935 but drew initial funding from an appropriation bill supported by 96% of congressional Democrats and just 36% of Republicans. Partisan support for the initial PWA appropriation in 1933 was similar (https://www.govtrack.us/congress/votes/74-1/h34; https://www.govtrack.us/congress/votes/74-1/s40; https://www.govtrack.us/congress/votes/73-1/h44; https://www.govtrack.us/congress/votes/73-1/s91).

5 The WPA also pursued white-collar, service-related projects. These accounted for about 25% of total expenditures.

6 See Supplementary Figure A1 for an example of WPA Form 402, which served as the WPA’s official notification of work.

7 On average, about 95% of WPA workers were hired from the relief rolls (Federal Works Agency 1947, 7).

8 To the best of my knowledge, data on how many PWA workers were on relief are not available. Anecdotal evidence supports the claim, though. For instance, in one small Florida town, 350 of the 600 people on relief were hired to work on a PWA project (Ickes 1935, 201).

9 I use county population from the 1930 decennial Census to create per capita measures.

10 Spending is also right-skewed, with mean per capita spending exceeding the median. For this reason, I log-transform all per capita spending measures in my analyses.

11 Harold Ickes, the Secretary of Interior and head of the PWA, mentions this in his memoir about the PWA: “The great bridges being thrown across San Francisco Bay provide work not alone for private individuals in California; they have caused orders to be placed in steel mills in Colorado and Pennsylvania; they have required lumber from the forests of Oregon and Washington” (Ickes 1935, 197).

13 I also estimate three alternative specifications. First, given recent methodological concerns about difference-in-differences estimation with a continuous treatment like mine (Callaway, Goodman-Bacon, and Sant’Anna 2024), I estimate models with a binary variable indicating whether the county received more than the national per capita average in spending (Supplementary Table A3). The results are generally consistent, though weaker. Second, I estimate models using the spending quintiles from Figure 2 interacted with post-spending indicators. These results are in Supplementary Table A4, and while there are no clear nonlinearities, they show that most of the effects reported in the main text are observed only at the upper end of the spending distribution. Third, using the California data expanded to include projects beyond school construction, Supplementary Table A5 estimates the effect of the number of WPA and PWA projects in each county.
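A minimal sketch of how the first two alternative treatment variables might be constructed, assuming a hypothetical county-level data frame with columns state and wpa_spend_pc (the actual replication code may differ):

```python
import pandas as pd

df = pd.read_csv("county_spending.csv")  # placeholder file name

# Binary treatment: county received more than the national per capita
# average in WPA spending.
df["wpa_above_avg"] = (df["wpa_spend_pc"] > df["wpa_spend_pc"].mean()).astype(int)

# Within-state spending quintiles (1 = least, 5 = most), mirroring the
# quintiles used in Figure 2.
df["wpa_quintile"] = (
    df.groupby("state")["wpa_spend_pc"]
      .transform(lambda s: pd.qcut(s, q=5, labels=False, duplicates="drop"))
    + 1
)
```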

14 In all specifications, I cluster the standard errors by county.

15 Supplementary Table A2 tests whether effects were larger or smaller in counties in Southern states relative to counties in non-Southern states. Overall, I find no discernible differences between the South and non-South in the effects of either program.

16 Local governments were more involved in both WPA and PWA projects than state governments. Beneficiaries may have also rewarded these actors. Unfortunately, local election data (e.g., Mayor or City Council) from this time period are not consistently available.

17 In contrast, Supplementary Figure A11 shows the event study plot for the effects of the PWA using estimates from a model without county time trends (column 3 of Table 3). Here, there are clear pre-trends.

19 For example, Americans received their $1,200 COVID-19 stimulus check along with a letter signed by President Trump.


REFERENCES

Achen, Christopher H., and Bartels, Larry M. 2016. Democracy for Realists: Why Elections Do Not Produce Responsive Government. Princeton, NJ: Princeton University Press.
Anzia, Sarah F., Jares, Jake Alton, and Malhotra, Neil. 2022. “Does Receiving Government Assistance Shape Political Attitudes? Evidence from Agricultural Producers.” American Political Science Review 116 (4): 1389–406.
Arnold, R. Douglas. 1990. The Logic of Congressional Action. New Haven, CT: Yale University Press.
Bernanke, Ben S. 1986. “Employment, Hours, and Earnings in the Depression: An Analysis of Eight Manufacturing Industries.” American Economic Review 76 (1): 82–109.
Brady, Henry E., Verba, Sidney, and Schlozman, Kay Lehman. 1995. “Beyond SES: A Resource Model of Political Participation.” American Political Science Review 89 (2): 271–94.
Bruch, Sarah K., Ferree, Myra Marx, and Soss, Joe. 2010. “From Policy to Polity: Democracy, Paternalism, and the Incorporation of Disadvantaged Citizens.” American Sociological Review 75 (2): 205–26.
Byer, Herman B. 1935. “Wage Rates on Public Works Administration Construction.” Monthly Labor Review 40 (6): 1427–31.
Callaway, Brantly, Goodman-Bacon, Andrew, and Sant’Anna, Pedro H. C. 2024. “Difference-in-Differences with a Continuous Treatment.” Working Paper.
Calomiris, Charles W., and Mason, Joseph R. 1997. “Contagion and Bank Failures during the Great Depression: The June 1932 Chicago Banking Panic.” American Economic Review 87 (5): 863–83.
Calomiris, Charles W., and Mason, Joseph R. 2003. “Fundamentals, Panics, and Bank Distress during the Depression.” American Economic Review 93 (5): 1615–47.
Campbell, Andrea Louise. 2002. “Self-Interest, Social Security and the Distinctive Participation Patterns of Senior Citizens.” American Political Science Review 95 (3): 565–74.
Campbell, Andrea Louise. 2012. “Policy Makes Mass Politics.” Annual Review of Political Science 15: 333–51.
Caughey, Devin, Dougal, Michael C., and Schickler, Eric. 2020. “Policy and Performance in the New Deal Realignment: Evidence from Old Data and New Methods.” Journal of Politics 82 (2): 494–508.
Chen, Jowei. 2011. “Voter Partisanship and the Effect of Distributive Spending on Political Participation.” American Journal of Political Science 57 (1): 200–17.
Clement, Priscilla Ferguson. 1971. “The Works Progress Administration in Pennsylvania, 1935 to 1940.” Pennsylvania Magazine of History and Biography 95 (2): 244–60.
Clinton, Joshua D., and Sances, Michael W. 2018. “The Politics of Policy: The Initial Mass Political Effects of Medicaid Expansion in the States.” American Political Science Review 112 (1): 167–85.
Cook, Fay Lomax, Jacobs, Lawrence R., and Kim, Dukhong. 2010. “Trusting What You Know: Information, Knowledge, and Confidence in Social Security.” Journal of Politics 72 (2): 397–412.
Dawkins, Ryan. 2021. “Private Contracting and Citizen Attitudes toward Local Government.” Urban Affairs Review 57 (5): 1286–311.
de Benedictis-Kessner, Justin, and Warshaw, Christopher. 2020. “Accountability for the Local Economy at All Levels of Government in United States Elections.” American Political Science Review 114 (3): 660–76.
Federal Works Agency. 1947. Final Report on the WPA Program, 1935–1943. Washington, DC: U.S. Government Printing Office.
Ferejohn, John. 1986. “Incumbent Performance and Electoral Control.” Public Choice 50 (1/3): 5–25.
Fishback, Price V., Horace, William C., and Kantor, Shawn. 2005. “Did New Deal Grant Programs Stimulate Local Economies? A Study of Federal Grants and Retail Sales During the Great Depression.” Journal of Economic History 65 (1): 36–71.
Fishback, Price V., Kantor, Shawn, and Wallis, John Joseph. 2003. “Can the New Deal’s Three R’s Be Rehabilitated? A Program-by-Program, County-by-County Analysis.” Explorations in Economic History 40 (3): 278–307.
Fleck, Robert K. 1999. “The Marginal Effect of New Deal Relief Work on County-Level Unemployment Statistics.” Journal of Economic History 59 (3): 659–87.
Gamm, Gerald H. 1989. The Making of New Deal Democrats: Voting Behavior and Realignment in Boston, 1920–1940. Chicago, IL: University of Chicago Press.
Hamel, Brian T., and Harman, Moriah. 2023. “Can Government Investment in Food Pantries Decrease Food Insecurity?” Food Policy 121: 102541. https://doi.org/10.1016/j.foodpol.2023.102541
Hamel, Brian T. 2024. “Replication Data for: Traceability and Mass Policy Feedback Effects.” Harvard Dataverse. Dataset. https://doi.org/10.7910/DVN/VGLBPX
Hassell, Hans J. G., and Holbein, John B. 2024. “Navigating Potential Pitfalls in Difference-in-Differences Designs: Reconciling Conflicting Findings on Mass Shootings’ Effect on Electoral Outcomes.” American Political Science Review: 1–21. https://doi.org/10.1017/S0003055424000108
Healy, Andrew, and Lenz, Gabriel S. 2017. “Presidential Voting and the Local Economy: Evidence from Two Population-Based Data Sets.” Journal of Politics 79 (4): 1419–32.
Healy, Andrew, and Malhotra, Neil. 2009. “Myopic Voters and Natural Disaster Policy.” American Political Science Review 103 (3): 387–406.
Howard, Christopher. 2007. The Welfare State Nobody Knows: Debunking Myths about U.S. Social Policy. Princeton, NJ: Princeton University Press.
Ickes, Harold L. 1935. Back to Work: The Story of the PWA. New York: Macmillan.
Kantor, Shawn, Fishback, Price V., and Wallis, John Joseph. 2013. “Did the New Deal Solidify the 1932 Democratic Realignment?” Explorations in Economic History 50 (4): 620–33.
Kettl, Donald F. 1988. Government by Proxy: (Mis?)Managing Federal Programs. Washington, DC: Congressional Quarterly Press.
Kogan, Vladimir. 2021. “Do Welfare Benefits Pay Electoral Dividends? Evidence from the National Food Stamp Program Rollout.” Journal of Politics 82 (1): 58–70.
Lerman, Amy E., and McCabe, Katherine T. 2017. “Personal Experience and Public Opinion: A Theory and Test of Conditional Policy Feedback.” Journal of Politics 72 (2): 624–41.
Loeffler, Hannah. 2023. “Does a Universal Basic Income Affect Voter Turnout? Evidence from Alaska.” Political Science Research and Methods 11 (3): 861–82.
Margalit, Yotam. 2011. “Costly Jobs: Trade-Related Layoffs, Government Compensation, and Voting in U.S. Elections.” American Political Science Review 105 (1): 166–88.
Mettler, Suzanne. 2002. “Bringing the State Back into Civic Engagement: Policy Feedback Effects of the G.I. Bill for World War II Veterans.” American Political Science Review 96 (2): 351–65.
Mettler, Suzanne. 2011. The Submerged State: How Invisible Government Policies Undermine American Democracy. Chicago, IL: University of Chicago Press.
Mettler, Suzanne, and Stonecash, Jeffrey M. 2008. “Government Program Usage and Political Voice.” Social Science Quarterly 89 (2): 273–93.
Michener, Jamila. 2018. Fragmented Democracy: Medicaid, Federalism, and Unequal Politics. Cambridge: Cambridge University Press.
Monthly Labor Review. 1938. “Employment Resulting from P.W.A. Construction, 1933 to 1937.” Monthly Labor Review 46 (1): 16–26.
Morgan, Kimberly J., and Campbell, Andrea Louise. 2011. The Delegated Welfare State: Medicare, Markets, and the Governance of Social Policy. New York: Oxford University Press.
Neumann, Todd C., Fishback, Price V., and Kantor, Shawn. 2010. “The Dynamics of Relief Spending and the Private Urban Labor Market during the New Deal.” Journal of Economic History 70 (1): 195–220.
Newman, Katherine S., and Jacobs, Elisabeth S. 2010. Who Cares?: Public Ambivalence and Government Activism from the New Deal to the Second Gilded Age. Princeton, NJ: Princeton University Press.
Patashnik, Eric M., and Zelizer, Julian E. 2013. “The Struggle to Remake Politics: Liberal Reform and the Limits of Policy Feedback in the Contemporary American State.” Perspectives on Politics 11 (4): 1071–87.
Pierson, Paul. 1993. “When Effect Becomes Cause: Policy Feedback and Political Change.” World Politics 45 (4): 595–628.
Powell, G. Bingham Jr., and Whitten, Guy D. 1993. “A Cross-National Analysis of Economic Voting: Taking Account of the Political Context.” American Journal of Political Science 37 (2): 391–414.
Public Works Administration. 1939. America Builds: The Record of the PWA. Washington, DC: U.S. Government Printing Office.
Rendleman, Hunter E., and Yoder, Jesse. 2023. “Do Government Benefits Affect Officeholders’ Electoral Fortunes? Evidence from State Earned Income Tax Credits.” Working Paper.
Rogowski, Jon C., Gerring, John, Maguire, Matthew, and Cojocaru, Lee. 2022. “Public Infrastructure and Economic Development: Evidence from Postal Systems.” American Journal of Political Science 66 (4): 885–901.
Salamon, Lester M. 1995. Partners in Public Service: Government-Nonprofit Relations in the Modern Welfare State. Baltimore, MD: Johns Hopkins University Press.
Samuels, David. 2004. “Presidentialism and Accountability for the Economy in Comparative Perspective.” American Political Science Review 98 (3): 425–36.
Sances, Michael W. 2017. “Attribution Errors in Federalist Systems: When Voters Punish the President for Local Tax Increases.” Journal of Politics 79 (4): 1286–301.
Schattschneider, Elmer Eric. 1935. Politics, Pressure and the Tariff. New York: Prentice Hall.
Schneider, Anne, and Ingram, Helen. 1993. “Social Construction of Target Populations: Implications for Politics and Policy.” American Political Science Review 87 (2): 334–47.
Shanks-Booth, Delphia, and Mettler, Suzanne. 2019. “The Paradox of the Earned Income Tax Credit: Appreciating Benefits but not Their Source.” Policy Studies Journal 47 (2): 300–23.
Shepherd, Michael E. 2022. “The Politics of Pain: Medicaid Expansion, the ACA and the Opioid Epidemic.” Journal of Public Policy 42 (3): 409–35.
SoRelle, Mallory E., and Shanks, Delphia. 2024. “The Policy Acknowledgement Gap: Explaining (Mis)perceptions of Government Social Program Use.” Policy Studies Journal 52 (1): 47–71.
Soss, Joe. 1999. “Lessons of Welfare: Policy Design, Political Learning and Political Action.” American Political Science Review 93 (2): 363–80.
Soss, Joe. 2000. Unwanted Claims: The Politics of Participation in the U.S. Welfare System. Ann Arbor: University of Michigan Press.
Soss, Joe, and Schram, Sanford F. 2007. “A Public Transformed? Welfare Reform as Policy Feedback.” American Political Science Review 101 (1): 111–27.
Ternullo, Stephanie. 2022. “The Electoral Effects of Social Policy: Expanding Old-Age Assistance, 1932–1940.” Journal of Politics 84 (1): 226–41.
Weaver, Vesla M., and Lerman, Amy E. 2010. “Political Consequences of the Carceral State.” American Political Science Review 104 (4): 817–33.
