The extent of lobbying’s influence is a long-standing, and contested, question in political science. Corporations, unions, and interest groups spend over $3 billion per year on professional lobbyists to pressure members of Congress (OpenSecrets.org 2020). These lobbyists are thought to influence legislators’ policy positions by providing campaign resources (Grossman and Helpman 1994) or policy expertise (Austen-Smith and Riker 1987; Battaglini 2002; Bertrand et al. 2014; Blanes i Vidal et al. 2012; Gilligan and Krehbiel 1987; Hall and Deardorff 2006; Schnakenberg 2017).
Despite fears of lobbying’s pernicious sway, credible estimates of lobbying’s effects have been difficult to identify (De Figueiredo and Richter 2014). The limited experimental evidence of lobbying’s influence is mixed. Yan (2020) finds that a phone call from an interest group staffer makes state legislators no more likely to support the interest group’s legislation, though there is some evidence that calls from constituents, rather than from the interest group, persuade legislators. Grose et al. (2022) find that lobbying legislative staff can change legislators’ policy positions, but only when the lobbyist takes the staffer to dinner rather than lobbying the staffer in the capitol building. Neither study finds that policy expertise, on its own, reliably changes legislators’ behavior.
This paper presents evidence on lobbying’s effects from four field experiments conducted in two state legislatures over three years. We partnered with three different lobbyists (a citizen advocate, an in-house lobbyist, and a trade association president) and examined two outcomes (cosponsorship and social media posts), providing several measures of lobbying’s effects on public indicators of legislators’ support for pending legislation. In each experiment, the lobbyist provided policy expertise to randomly assigned legislators about bills that had already been drafted and were making their way through the legislative process. Treatments were provided as part of the lobbyists’ ordinary course of business on real legislation important to the lobbyists’ clients. Across all four experiments, we find no significant effects of lobbyist outreach on legislators’ public policy positions.
This paper contributes to several literatures on legislative behavior. First, it shows the limits of lobbyists’ influence. Despite rising fears that legislators are overwhelmed by paid lobbyists, we document four cases in which lobbyists were unable to change legislators’ behavior. Second, the results extend recent research on the effects of policy information on legislative position-taking by showing the conditions under which information is more or less persuasive (Zelizer 2018). Third, the experiments speak to the relative effectiveness of lobbying by citizen advocates, professional lobbyists, and legislative staff. In these experiments, legislators were more influenced by a nonpartisan legislative staffer than by a passionate public advocate or professional lobbyists.
Experimental design
The four experiments were conducted between March 2016 and March 2018, during four distinct state legislative sessions in two states. Experiment 1 was fielded in 2016 in a state legislature (footnote 1) in the Southeastern USA. The state features unified Republican control of state government, with Republican supermajorities in the legislature. It ranks in the bottom quintile of Squire’s index of legislative professionalism (Squire 2017): the legislature has relatively little staff, low legislative salaries, and sessions lasting approximately four months. Experiments 2, 3, and 4 occurred from 2016 to 2018 in a state legislature in the Northeastern USA. That state features a highly professionalized legislature with high salaries and year-round sessions; control of the legislature was split between Democrats and Republicans. It ranks in the top quintile of Squire’s index of legislative professionalism (Squire 2017). By fielding similar interventions in legislatures near the extremes of Squire’s professionalism index, we gain confidence in the external validity of our results for other legislatures.
Each experiment included an information treatment provided by a lobbyist. We partnered with three different lobbyists: a citizen advocate who was not a paid professional, a professional in-house lobbyist, and the president of a statewide trade association. All three had strong relationships in the legislature. The citizen advocate was a former legislative intern who continued to attend committee and caucus meetings after his stint in the legislature ended. The in-house lobbyist and trade association president were frequently in touch with legislators in their state on behalf of their clients, well-known colleges and universities in the state. We did not partner with contract lobbyists, professionals hired on a case-by-case basis to lobby specific issues, or with government lobbyists, those employed by city, state, or federal agencies to advocate for their interests (Payson 2020).
The lobbyists’ information treatments prioritized policy expertise: technical, hard information about pending bills that legislators may not have the time to obtain absent treatment (Caillaud and Tirole 2007; Krehbiel 1992; Mooney 1992). While lobbyists may also provide political information, such as the results of polls or the preferences of campaign donors, we focus on policy information because it is the basis of informational models of persuasion and lobbying (Krehbiel 1992). As an example, Experiments 2 and 3 provided a one-page research report on a state program that provided matching capital grants to institutions of higher education (see Appendix B in the Supplementary material). The report told legislators when the program was established, in which section of the state code it appears, and how funds could be used by colleges and universities. Importantly, it informed legislators that previous levels of funding were inadequate, as only half of all applications in the prior year were funded. It also claimed that the matching funds had helped create 10,000 jobs across the state. Taken together, the report educated legislators about the program’s statutory basis and provisions, its cost, and its benefits. This is one type of information legislators need in order to decide whether to support or oppose a government program.
Table 1 provides an overview of the contexts, subjects, and outcomes for the four experiments. Each experiment included bills in a single issue area on which the lobbyist was an expert. Experiment 1 focused on veterans affairs, while Experiments 2, 3, and 4 examined higher education. Experiments 1, 2, and 3 each included multiple issues pending before the legislature; by including multiple bills and multiple legislators, we collect over 2,000 observations of legislators’ public positions on specific policy proposals across the four experiments. Because there were multiple bills in three of the studies, we assigned each legislator-policy observation to treatment using block randomization within legislator. Block random assignment ensures balance across legislator characteristics in these three studies. Appendix A in the Supplementary material includes descriptive statistics and balance tests for the legislators included in each experiment.
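To make the assignment procedure concrete, the sketch below block-randomizes simulated legislator-bill observations within legislator. The counts mirror Experiment 1 (81 legislators, 16 bills, expertise on four bills per legislator), but the column names, the single treatment arm, and the assumption that every legislator receives some treated bills are illustrative simplifications, not our partners’ actual procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2016)

# Simulated roster: one row per legislator-bill observation
# (illustrative counts: 81 legislators x 16 veterans-related bills).
df = pd.DataFrame(
    [(leg, bill) for leg in range(81) for bill in range(16)],
    columns=["legislator", "bill"],
)

# Block randomization within legislator: each legislator is assigned to
# receive policy expertise on exactly 4 of the 16 bills, so the treated
# share of observations is identical across legislators.
df["treated"] = 0
for _, rows in df.groupby("legislator").groups.items():
    briefed = rng.choice(rows, size=4, replace=False)
    df.loc[briefed, "treated"] = 1

# Sanity check: every legislator block has exactly four treated bills.
assert (df.groupby("legislator")["treated"].sum() == 4).all()
```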
Experiment 1
Experiment 1 was fielded in the Southern state legislature and included 81 of the 99 legislators in the lower house. The experiment included two treatment arms. The main treatment arm consisted of policy information provided by a citizen advocate. Himself a disabled veteran, the advocate had interned for a senator in the state legislature and remained active on veterans issues in the state. He attended meetings of the Veterans Caucus and met with legislators to advocate for veterans issues. At the time, he led a group that raised money for veterans and even hosted a public access television show featuring his interviews with state legislators. This lobbyist was not a professional in the sense of being highly paid for his work, but he was an active, and passionate, supporter of veterans. He was more professional and connected than the typical citizen who seeks to influence their legislator.
The experiment featured a second treatment arm to test the relative influence of the citizen advocate. In addition to the advocate, treatments were also administered by a legislative staffer who worked on behalf of the legislature’s Veterans’ Caucus. The lobbyist and staffer provided the same policy information but were randomly assigned to different legislators through a multi-level procedure (footnote 2). Zelizer (2018) reports the results of an experiment conducted with this staffer in the prior year, in the same legislature, in which similar policy expertise treatments were randomly assigned to legislators; legislators provided policy information in that experiment were 60% more likely to publicly support legislation.
Legislators were randomly assigned to receive policy expertise on four of 16 veterans-related bills. All 16 bills were intended to benefit veterans, but each carried costs, so legislators’ decisions to support them were not straightforward. One bill would have reduced the state income tax for disabled veterans, costing the state revenue. Another would have encouraged employers to provide paid time off to veterans on Veterans Day, a policy opposed by business groups. While legislators may have wanted to support these bills to appeal to veterans, each had downsides. By the end of the session, nine of the 16 bills would be enacted into law.
Policy expertise was provided through face-to-face policy briefings between the advocate (or staffer) and legislators. The advocate traveled to the legislature several times during the session to prepare for and conduct meetings. One of the authors met with the advocate twice to plan the study and discuss the bills and research. The advocate then, on his own, scheduled and conducted meetings with the legislators assigned to him. Briefings were conducted in legislators’ offices in the capitol building. The advocate reported successfully meeting with 22 of the 24 (92%) legislators assigned to him and covering all bills selected for each meeting. The staffer reported meeting with 25 of 29 (86%) legislators assigned to him.
The outcome measure for this experiment is bill cosponsorship. In this legislature, many legislators choose to cosponsor legislation to signal their support for it. Lobbyists thus seek to add cosponsors to build broad coalitions that will help their legislation pass. Zelizer (2018) shows not only that veterans-related bills in this legislature are cosponsored by nearly 10% of legislators, on average, but also that the randomly assigned policy information in that experiment increased cosponsorship by about 5 percentage points.
Experiment 2
Experiment 2 was fielded in the Northern state legislature. Nearly all members of the state House and Senate, 210 in total, were included. The handful of omitted legislators were party leaders whom the lobbyist needed to treat with probability one.
We partnered with the in-house government relations officer for a mid-sized, private college in the state. This lobbyist is a professional whose job responsibilities include monitoring and advocating for legislation that would impact the college. The treatment messages prominently featured the logo of the lobbyist’s institution at the top of the page.
Legislators were assigned to treatment for one of two education-related programs being considered by the legislature. Both issues were items in the state budget that would impact the college. The treatment consisted of an emailed informational sheet on the budget line item. Legislators were emailed twice for each issue, approximately one week apart, to ensure that they, or their staffer, saw the message. Information sheets included background research on the policy issue, such as its legislative history, how it relates to federal law, and how many people would be helped by the policy. Because the issues being lobbied were budget items rather than stand-alone bills, we cannot evaluate canonical position-taking outcomes like cosponsorship or roll-call voting – neither occurs on specific budget line items. As a result, our primary outcome is another form of public position-taking: tweeting.
Legislators were asked to tweet their support for the two programs, graduate student aid and capital grants for private colleges. All legislators were sent an email asking them to tweet their support for the programs, but only legislators selected for treatment were sent the one-page policy research report.
It is not clear ex ante that lobbyists, or legislators, would see tweets as advancing their goals. In some cases, lobbyists may want to keep their interactions with legislators private and behind the scenes. Nevertheless, we believe tweets are a meaningful outcome for three reasons. First, state legislators frequently use their social media platforms to take policy positions. Casas et al. (2020) find that, across fifteen states, over 75% of state legislators have Twitter accounts; that they tweet on average once per day; and that 70% of tweets discuss policy-relevant issues.
Second, one of the few existing experimental studies of lobbying finds effects via social media position-taking. Grose et al. (2022) find that lobbying legislative staffers on an education-related budget item increased their legislators’ public support of the issue by 12 percentage points. With over 200 legislators and two budget items, our study is well-powered to recover similarly sized treatment effects (an illustrative power calculation follows these three considerations).
Third, our lobbyist partner believed that tweets were a useful means of building a coalition for the budget items. He stated that such tweets would be a big “win” for his work and that he would happily share them with his superiors. Grose et al. (2022) note that tweeting about budget items helps build momentum, which is important for interest groups because it often takes several sessions to enact their priorities.
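As a rough check on the power claim above, the calculation below gauges the probability of detecting an effect of about the size reported by Grose et al. (2022). The baseline tweet rate, the even split of roughly 210 legislator-issue observations per arm, and the 0.05 significance level are illustrative assumptions, not a preregistered power analysis.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative assumptions: a low baseline tweet rate, a 12 percentage point
# treatment effect (as in Grose et al. 2022), and roughly 210 legislator-issue
# observations per arm after pooling the two budget items.
baseline_rate = 0.03
treated_rate = baseline_rate + 0.12

effect = proportion_effectsize(treated_rate, baseline_rate)  # Cohen's h
power = NormalIndPower().power(effect_size=effect, nobs1=210, alpha=0.05)
print(f"Approximate power to detect a 12-point effect: {power:.2f}")
```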
Experiment 3
Experiment 3 was a replication of Experiment 2. The intervention was fielded in the same legislature, by the same lobbyist, on the same two budget issues, in the following fiscal year. The 2017 replication included 206 legislators, 180 of whom had been members in the prior year; changes to the subject population resulted primarily from turnover in the legislature. To increase the chances that legislators would see the lobbyist’s policy brief, treated legislators in Experiment 3 were both emailed and mailed a copy of the research report and the request to tweet.
Replicating the experiment allows us to extend the analysis from Experiment 2 in two ways. First, it allows us to repeat the exercise from Experiment 2 in which we estimate the effects of contemporaneous lobbying on legislators’ positions, effectively doubling our sample size for that analysis. To do so, however, we must assume that there are no lingering effects of the prior experiment. If that assumption is unrealistic, relaxing it enables a different analysis: estimating the effects of varying the timing and dosage of the lobbying treatment. With two treatment assignments at two different times, we can estimate the effects on position-taking in the current session of being lobbied in the prior session, the current session, or both.
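A minimal sketch of this timing-and-dosage comparison, run on simulated data, is below. The column names, the random assignment step, and the rare-event outcome rate are illustrative assumptions; the specification shown (condition indicators relative to never-treated observations, issue fixed effects, legislator-clustered standard errors) is one natural way to implement the comparison rather than the exact estimator behind the tables that follow.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2017)

# Simulated legislator-issue data for Experiment 3 (illustrative counts:
# 206 legislators x 2 budget items). z2 and z3 indicate assignment to the
# lobbying treatment in Experiments 2 and 3, respectively.
df = pd.DataFrame(
    [(leg, issue) for leg in range(206) for issue in range(2)],
    columns=["legislator", "issue"],
)
df["z2"] = rng.integers(0, 2, size=len(df))
df["z3"] = rng.integers(0, 2, size=len(df))
df["support"] = rng.binomial(1, 0.03, size=len(df))  # rare tweeting outcome

# Timing/dosage condition relative to never-treated observations.
df["condition"] = np.select(
    [df.z2.eq(1) & df.z3.eq(0),
     df.z2.eq(0) & df.z3.eq(1),
     df.z2.eq(1) & df.z3.eq(1)],
    ["prior_only", "current_only", "both"],
    default="never",
)

fit = smf.ols(
    "support ~ C(condition, Treatment(reference='never')) + C(issue)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["legislator"]})
print(fit.summary().tables[1])
```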
Experiment 4
Experiment 4 was again fielded in the Northern legislature. It included 196 members of the upper and lower houses. Legislators who were part of chamber leadership or who held key committee leadership posts were excluded from the study.
We administered treatments in partnership with a trade association representing the interests of colleges and universities in the state. The president of the association emailed legislators about the state’s student aid program. The president’s email included forecasts of the economic benefits generated by not-for-profit campuses in the state and again asked legislators to tweet their support for the program using a specific hashtag.
Unlike in Experiments 2 and 3, legislators in the control group in Experiment 4 were not contacted by the lobbying effort and thus were not asked to tweet their support for the student aid program. One explanation for the lack of treatment effects in Experiments 2 and 3 might have been that simply receiving an email from a lobbyist caused legislators to seek out information on the legislation or tweet their support, resulting in equal expressions of support among treated and untreated legislators. For this reason, control legislators in Experiment 4 were neither contacted by the lobbyist nor given the hashtag with which to express support for student aid.
Results
Table 2 displays the percentage of legislators who publicly supported the lobbyists’ policy position by treatment condition and experiment (footnote 3). Across the experiments, there is no evidence that lobbyists made legislators more supportive of legislation. In fact, in the first three experiments, legislators assigned to the lobbyist treatment were less supportive of bills, by 1–3 percentage points, than legislators assigned to control. In the fourth experiment, no legislators in the control group tweeted about the program, and neither did any legislators in the treatment group.
Experiment 1 shows, in contrast, that information can be influential, but only when provided by a source other than the lobbyist. Legislators assigned to the legislative staffer condition were substantially more likely to support legislation than the control group: over 10 percentage points more likely, an increase of over 60% from baseline levels of support. That this treatment did influence the outcome behavior suggests that the null effects for the lobbyists reflect their lack of influence rather than our providing useless treatments or selecting irrelevant outcomes.
To improve precision and account for potential imbalance in pre-treatment covariates, we estimate the intent-to-treat (ITT) effects of lobbying using regression. ITTs represent the average change in bill support from assigning a unit to lobbying and do not account for whether units actually received the treatment. Regressions include bill fixed effects to account for variation in legislators’ support across bills. Standard errors for Experiment 1 are clustered at the legislator level (footnote 4).
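The sketch below shows such a specification on simulated data in the shape of Experiment 1; the variable names, the three-way assignment, and the simulated outcome are illustrative assumptions, and the actual estimates come from the replication code archived on the Dataverse (see the Data availability statement).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Illustrative legislator-bill data in the shape of Experiment 1.
df = pd.DataFrame(
    [(leg, bill) for leg in range(81) for bill in range(16)],
    columns=["legislator", "bill"],
)
arm = rng.choice(["control", "lobbyist", "staffer"], size=len(df))
df["lobbyist"] = (arm == "lobbyist").astype(int)
df["staffer"] = (arm == "staffer").astype(int)
df["cosponsor"] = rng.binomial(1, 0.10, size=len(df))  # outcome: cosponsorship

# ITT regression: treatment *assignment* (not receipt), bill fixed effects,
# and standard errors clustered at the legislator level.
fit = smf.ols("cosponsor ~ lobbyist + staffer + C(bill)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["legislator"]}
)
print(fit.params[["lobbyist", "staffer"]])
print(fit.bse[["lobbyist", "staffer"]])
```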
Table 3 shows that the lobbying treatment effects are small, negative or zero, and not statistically distinguishable from zero in any single experiment. The staffer’s briefing, in contrast, induces a 4 percentage point increase in legislators’ support for legislation. This estimated ITT is substantially lower than the naive difference-in-means estimate because of chance imbalance in the profile of bills assigned to treatment conditions (footnote 5).
One interpretation of the null results is that legislators, or their staff, simply did not see the email treatments in Experiments 2, 3, and 4. However, we did observe feedback from legislators in response to the lobbyists’ emails. Responses included several form emails thanking the lobbyist for “sharing your views about support for [student aid]” before turning to campaign-style rhetoric about the importance of education without stating a clear position on the specific issue lobbied. Some responses came from staffers noting their legislator was temporarily unavailable (“Senator [Washington] was not feeling well for a few days, better now. I will show this to him tomorrow”). Several staffers or legislators did respond directly to the issue lobbied. Two staffers followed up to say that their legislator did publicize support for the issue. Another legislator responded personally that “I am on board and talking to our Higher Ed Chair.” At least some legislators and staff, then, saw these messages.
The results from Experiment 1 indicate that the information provided in the experiment can influence legislators’ policy positions, and to a meaningful degree. The estimated effects for the staffer are similar in magnitude to those of prior studies with similar treatments and outcomes (Zelizer 2018). The information was not influential, however, when provided by the advocate. Since the advocate had strong relationships in the legislature and administered treatments face-to-face with selected legislators, yet exerted no influence, we interpret these results as suggesting that nonpartisan legislative staffers may have a credibility advantage over outside advocates.
Persistence and dosage
The main analyses above assume no persistence or dosage effects of treatments between Experiments 2 and 3. Because the experiments included largely the same subject pool and the same two pieces of legislation, we examine whether treatment assignments across the two experiments interact.
Table 4 shows legislators’ policy support for bills in Experiment 3 as a function of their treatment assignments in both Experiments 2 and 3. It shows whether legislators treated only in Experiment 2, only in Experiment 3, or in both experiments supported legislation at higher rates by the end of the two studies.
There is, again, no evidence that being lobbied, either once or twice, meaningfully increased legislators’ support for legislation. Legislators untreated in both sessions supported the lobbyist’s bills 2.4% of the time. Legislators treated once, in either the prior session or the current session, supported the lobbyist’s bills 3.2% of the time, an increase of less than one percentage point. Legislators treated in both sessions supported the lobbyist’s bills only 1.2% of the time, half as often as untreated legislators. Even repeated lobbying contacts do not appear to change legislators’ positions.
Conclusion
Across four experiments, we find no evidence that lobbyists influence legislators’ policy positions through the provision of policy expertise. The treatments in our experiments may have failed to be influential for several reasons. Three of the experiments delivered policy research via an email message and measured legislators’ public positions via Twitter; lobbying’s influence may be felt behind the scenes rather than on the very public platforms of social media. Each individual lobbyist may have been ineffective in contacting legislators. Legislators receive so many appeals in a session that yet another request to support legislation may simply have been lost in the noise. One of the interventions, Experiment 4, occurred amidst substantial lobbying by members of the trade association on the issue of student aid: dozens of colleges met with legislators during an “Advocacy Day” organized by the trade association, though conducted independently of its president’s letters to legislators.
While each of these explanations may be plausible for a specific study, none is likely to explain the null effects in all four experiments. While legislators were sometimes briefed via email, in one experiment they were treated in person. Tweets may not be the canonical outcome measure in legislative research, but there is evidence that lobbying can influence legislators’ social media behavior (Grose et al. 2022), and one experiment examined bill cosponsorship as its outcome. Social media may be a more useful outcome for capturing lobbying’s effects earlier, rather than later, in the legislative process, which is where many studies of lobbying focus (Leech 2010). The citizen advocate was not a highly paid professional lobbyist, but three of the experiments did feature professional lobbyists, and all of the lobbyists had personal relationships with, and experience lobbying, the legislature in question. And while one experiment occurred on a hotly lobbied issue, another (Experiment 1) occurred on an issue on which there was no lobbying outside the experiment. In sum, we think the evidence is the more convincing for the array of experiments and the variety of partners, treatment methods, contexts, and outcomes we analyze.
Information may also have failed to persuade for reasons other than improper administration or irrelevance to the outcomes analyzed. Informational models of lobbying give mixed predictions about lobbying’s effectiveness. The canonical cheap talk model of Gilligan and Krehbiel (1987) would predict minimal effects if lobbyists’ policy goals are deemed extreme by legislators. However, if the lobbyists’ message were verifiable, if lobbyists engaged in repeated interactions with legislators, or if lobbying were a distributive exercise, their information might be more impactful (Caillaud and Tirole 2007; Ottaviani and Sørensen 2006; Schnakenberg 2017). Nor need policy information make legislators more supportive of legislation on average if a given message appeals to some legislators but puts off others. Relatedly, the treatments may not be commensurable with the kinds of messages that lobbyists typically give legislators (Bueno De Mesquita and Tyson 2020). Receiving a message might have signaled, for example, that the lobbyist was pessimistic about the policy’s chances of passage, and thus have communicated information beyond the research the lobbyist intended to share.
Lobbyists may instead play a role by providing expertise and input earlier in the legislative process, at a stage where it could influence outcomes more subtly (Hall and Deardorff 2006). Lobbying may be influential in setting the legislative agenda, in winning support from key legislative gatekeepers, in drafting legislation with specific provisions, or in amending legislation under consideration in committees. Our interventions came later in the process, once bill content was largely finalized and, in Experiments 2, 3, and 4, when bills were headed toward the floor. Lobbying may be important early in the legislative process but be a poor tool for persuading a large number of legislators to join a coalition.
Another explanation for our null results could be the “piranha problem” (Tosh et al. 2021). Myriad factors are thought to influence legislators’ positions, from lobbying to party whipping to media coverage to public pressure. These influences cannot all exert large, consistent, and independent effects on position-taking. Legislators must ignore lobbyists, appeals from party leaders, or phone calls from constituents most of the time, and it may be that those are the cases studied in this paper.
Ultimately, additional work is necessary to distinguish the reasons why lobbying is or is not effective in specific cases and to identify the mechanisms underlying lobbying’s influence. As we collect more evidence on lobbying’s influence, future work might prioritize experimental designs that distinguish mechanisms rather than designs that simply identify whether lobbying is, or is not, influential. Future studies can build on the design choices made in, and the results obtained from, these four experiments. Even if lobbyists are found to be influential in other contexts, their aggregate influence must also account for the null results from these four interventions.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2022.25
Data availability statement
The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at https://doi.org/10.7910/DVN/PKZNPC.
Financial support
There are no interested parties, defined as an “individual, group, or organization that has a financial, ideological, or political stake related to the article,” and so we have no conflicts of interest to disclose. No financial support was provided for this research.
Conflicts of interest
The authors report no conflicts of interest.
Ethics statement
This research was approved by the Human Research Protection Office at Columbia University under protocol number IRB-AAAQ7721. The interventions were deemed to pose minimal risk to subjects. The experiments were fielded under the Common Rule, under which an exemption applied to research with public officials.