Scholars increasingly consider how the rapid decline in survey response affects measures of public opinion. The emerging consensus is that low response rates—now below 10% at most polling organizations in the United States—do not necessarily generate survey bias (Jennings and Wlezien 2018; Keeter 2018). Rather, the quality of survey measurements depends on the correlation between the characteristics of the resulting sample and the measures of interest (Clinton et al. 2021; Prosser and Mellon 2018). Such bias may be at issue when measuring polarization, where overrepresentation of more engaged, knowledgeable, and polarized groups may distort survey measurements (Abramowitz and Saunders 2008).
To illustrate this suggested bias, Figure 1 plots the Kendall tau correlation coefficients between party identification (7-point Democrat-Republican scale) and responses to six policy questions (7-point liberal-conservative scales) that have been asked in every American National Election Study (ANES) from 1984 to 2020, along with the unit response rates in those surveys.
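The figure's polarization measure can be sketched as follows. This is a minimal illustration, not the authors' replication code: the variable names are hypothetical, and the toy data stand in for actual ANES responses.

```python
# Sketch of the Figure 1 computation: Kendall tau between a 7-point party-ID
# scale and a 7-point policy scale. Variable names are illustrative only.
from scipy.stats import kendalltau

def polarization_tau(party_id, policy_response):
    """Kendall tau correlation between party ID and a policy item.

    Both inputs are sequences of ordinal codes (e.g., 1-7); a higher tau
    indicates that partisanship more strongly orders policy views.
    """
    tau, p_value = kendalltau(party_id, policy_response)
    return tau, p_value

# Toy illustration: a perfectly sorted electorate yields tau = 1.
party = [1, 2, 3, 4, 5, 6, 7]
policy = [1, 2, 3, 4, 5, 6, 7]
tau, p = polarization_tau(party, policy)
```

In practice this would be computed per survey year and per policy item, giving the six time series plotted in the figure.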
Note: Models estimated using the “lmridge” package in R (Imdad and Aslam 2018). The K tuning parameter is the KM4 estimator proposed by Muniz and Kibria (2009), who find that KM4 yields the lowest MSE values in samples of N = 100 when the correlation between two predictors is 0.9—conditions that most resemble our data. *p < 0.05, **p < 0.01, ***p < 0.001.
The figure demonstrates a gradual increase in measured polarization—higher correlation coefficients—across all series until 2012, a surprising drop in 2016, and a return to trend in 2020. We suggest that the changing response rate (white line) explains these puzzling shifts. Response rates—the proportion of sampled individuals who complete the survey—in ANES surveys have been among the highest in U.S. polling. But even this highly acclaimed series has not been immune to the general decline in survey response: about 70% in the 1980s, dropping to 60% in the following two decades, and below 50% in the last three election cycles. With every drop, we see an increase in measured polarization. In 2016, both trends reversed—response rates rose, and all six policy measures declined. In 2020, both reversed again. The pattern is stark: existing measures of polarization are strongly associated with survey response rates (r = -0.82, p < 0.001). While we concur that the polarization of Americans is real (Abramowitz 2018; Campbell 2016), the low response rates in current probability samples may exaggerate its perceived level.
In a previous study (Cavari and Freedman 2018; hereafter CF), we offered empirical support for this claim. Analyzing rich survey data from Pew, we demonstrated that as survey response declines, survey samples overrepresent politically engaged respondents, who report more polarized views on several domestic issues. Using the same data and additional simulations, Mellon and Prosser (2021; hereafter MP) suggest that the relationship between survey response and nonresponse bias depends on the cause of low survey response: declining contact rates, associated with random polling mechanisms and caller-ID screening, or declining cooperation rates, associated mostly with personal preferences, knowledge, and interest in politics.
To establish that declining survey response produces a bias toward engaged and involved respondents, we must therefore consider the cause of nonresponse. If the primary causes are the random effects of increased cold-calling or the socially driven ability to screen calls using caller ID, we should expect no consistent, directional effect of contact rates on polarization (Contact Hypothesis). If, however, the primary cause is purposeful refusal to participate in surveys, then we should expect the personal preferences, knowledge, and interest in politics that drive cooperation to generate an engagement bias that correlates with measures of polarization (Cooperation Hypothesis).
We expect that the effect of declining cooperation on measures of polarization is conditional on the policy domain. Specifically, we expect that the decline in cooperation rates—resulting in an overrepresentation of an engaged public—is associated with increased measures of polarization on domestic performance issues (economy, immigration, and energy). Because neither party owns these issues, public attitudes and diverging partisan views are affected by political knowledge and awareness of elite positions (Egan 2013). We do not expect a similar association on civil rights and social welfare—issues that have traditionally displayed strong disagreements between the parties, have further polarized over time (Campbell 2016; Webster and Abramowitz 2017), and on which positions are shared across most levels of political awareness (Claassen and Highton 2009). In contrast, we expect that on foreign policy, where Americans possess little information and rely heavily on leadership cues to form opinions (Guisinger and Saunders 2017), a decline in cooperation rates is associated with a decrease in measured polarization. Politically engaged participants, who make up a larger share of low-cooperation-rate surveys, are expected to demonstrate a weaker divide: they are more likely to be informed about, and have their positions shaped by, real events and nonpartisan professional cues (Gelpi 2010; Sulfaro 1996); they are more likely to hold structured, purposeful attitudes (Cavari and Freedman 2021; Page and Bouton 2006); and they are less likely to see distinct foreign policy types among elites (Kertzer, Brooks, and Brooks 2021).
To test the relative effects of contact and cooperation rates on polarization across policy domains, we updated the data used by CF and MP to include survey responses to 1,223 policy questions from 158 Pew surveys collected between 2004 and 2018 that report both measures of survey response. Following the two previous studies, we operationalized the dependent variable (polarization) as the Cohen’s d coefficient of the mean difference between the parties and divided the data into six issue domains: economy, civil rights, energy, immigration, social welfare, and foreign affairs.Footnote 1 The results suggest that declining cooperation rates are the primary factor in declining response rates, and that low response rates—whether caused by random contact failures or by purposive noncooperation—bias survey measurements of polarization. The cause, direction, and strength of the bias are conditional on the policy issue.
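The dependent variable can be sketched in a few lines. This is a hedged illustration of a standard Cohen's d with a pooled standard deviation; the paper does not spell out which pooling variant it uses, so that detail is an assumption here, and the toy responses are invented.

```python
import numpy as np

def cohens_d(dem_responses, rep_responses):
    """Cohen's d: mean difference between the parties scaled by the pooled
    standard deviation. Larger |d| indicates a more polarized item.
    Pooling variant (Bessel-corrected group variances) is an assumption."""
    dem = np.asarray(dem_responses, dtype=float)
    rep = np.asarray(rep_responses, dtype=float)
    n1, n2 = len(dem), len(rep)
    pooled_var = ((n1 - 1) * dem.var(ddof=1) +
                  (n2 - 1) * rep.var(ddof=1)) / (n1 + n2 - 2)
    return (rep.mean() - dem.mean()) / np.sqrt(pooled_var)

# Toy illustration with hypothetical 7-point policy responses by party.
d = cohens_d([2, 3, 2, 3], [5, 6, 5, 6])
```

Computed per policy question, this yields the 1,223 observations modeled below.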
Declining Response Rate and Survey Bias
The apparent decline in unit response in probability-sample survey data has generated scholarly interest in the extent to which this decline causes survey bias. The evidence suggests that a drop in survey response does not necessarily generate bias (Prosser and Mellon 2018). For example, Jennings and Wlezien (2018) analyze the accuracy of election polls in national surveys from 45 countries between 1942 and 2017. They find that although declining response rates pose real challenges to the representativeness of surveys, we may still obtain a reasonable portrait of electoral preferences because there are more polls today, often with larger samples, and most pollsters have incorporated weighting and other techniques to increase representativeness. However, this holds only when nonresponse is uncorrelated with the variable of interest. Such survey bias may explain some of the 2020 preelection polling misses (Clinton et al. 2021; Keeter, Kennedy, and Deane 2020; Panagopoulos 2021).
Studying the effect of nonresponse on measures of polarization is difficult because we lack information on the demographics and preferences of those not included in the polls (Berinsky 2004; Clinton et al. 2021). And yet, examining the characteristics of those in the sample can reveal possible biases that may correlate with measures of polarization. Specifically, a rich body of scholarly work shows that surveys overrepresent politically interested respondents (Groves, Presser, and Dipko 2004; Keeter et al. 2006; Mellon and Prosser 2017; Tourangeau, Groves, and Redline 2010) and that politically engaged respondents are more polarized than unengaged respondents (Abramowitz and Saunders 2008).
In panel 1 of Figure 2, we offer an empirical illustration of the engagement bias in the Pew data. For each survey, we calculated the proportion of the sample with higher (academic) education—a primary correlate (and cause) of political engagement (Burns, Schlozman, and Verba 2001; Hillygus 2005; Perrin and Gillis 2019)—and plotted it as a function of unit response in the survey. We reversed the horizontal axis to demonstrate the effect of a decline in unit response. Each dot is one survey. The curved line is a fitted cubic line.Footnote 2 As unit response declines, the share of respondents with a college degree increases—an incremental rise as unit response declines below 30% and a rapid climb as it drops below 10%.
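The fitted cubic line in panel 1 amounts to a polynomial least-squares fit of the college share on the unit response rate. The sketch below uses synthetic numbers, not the actual Pew values, purely to show the mechanics.

```python
import numpy as np

# Hypothetical survey-level data: unit response rate (%) and share of
# college-educated respondents (%). These are invented illustrative values.
response_rate = np.array([35, 30, 25, 20, 15, 12, 10, 8, 6])
pct_college   = np.array([30, 31, 32, 33, 35, 37, 40, 45, 52])

# Fit a cubic polynomial, as in the figure's fitted line.
coefs = np.polyfit(response_rate, pct_college, deg=3)
fitted = np.polyval(coefs, response_rate)
```

Plotting `fitted` against a reversed x-axis reproduces the figure's visual: a slow rise below 30% response and a sharp climb below 10%.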
In panel 2, we compare annual census data on college attainment with the percentage of self-reported college graduates in our surveys (the weighted average of all surveys each year). The figure illustrates a consistent, and growing, gap over time. Although education bias in survey data has been characteristic of the last two decades, it has increased substantially in recent years. We are especially concerned with survey data when unit response rates drop below 10% and the education gap exceeds 15 percentage points.
Comparing the Decline of Contact and Cooperation Rates
Following MP, we examine the cause of nonresponse—failure to contact respondents or refusal to cooperate with the interviewer. Figure 3 plots each measure from all available Pew surveys (N = 158).Footnote 3 Contact rate is the proportion of all cases in which a responsible person in the sampled housing unit was reached. Cooperation rate is the proportion of interviews completed among all eligible units contacted. Unit response rate is the number of complete interviews divided by the number of eligible reporting units in the sample. Because landlines and cellphones differ on our variables of interest, we plot each measure separately for the two modes but use unified trend lines.
All three measures have declined considerably. In 2004, contact rates were about 70%, and cooperation rates among those contacted were about 50%. Together, these two components produced overall unit response rates of roughly 30%. Over time, pollsters have found it increasingly difficult to contact respondents (gray line), and of those contacted, ever fewer agree to cooperate (white line). This combination produces the low unit response rates characteristic of recent telephone surveys—under 10% (black line). The decline in response rates, therefore, reflects both the selection that results from new contact technologies and social habits of telephone use and the declining cooperation of people who are contacted. Yet, although both components have dropped, the decline in contact rates (to 0.7 of their 2004 level by 2018) is overshadowed by the collapse of cooperation rates (to 0.3 of theirs).
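The arithmetic linking the three measures can be made explicit. As a simplification, the unit response rate is approximately the product of the contact rate and the cooperation rate among those contacted; the AAPOR definitions add eligibility adjustments omitted here.

```python
def unit_response_rate(contact_rate, cooperation_rate):
    """Approximate decomposition: unit response is roughly the product of
    the contact rate and the cooperation rate among those contacted.
    (AAPOR formulas add eligibility adjustments not modeled here.)"""
    return contact_rate * cooperation_rate

# 2004-era figures from the text: ~70% contact x ~50% cooperation,
# close to the reported unit response of roughly 30%.
rr_2004 = unit_response_rate(0.70, 0.50)

# Declines reported in the text: contact fell to 0.7 of its 2004 level,
# cooperation to 0.3, compounding into sub-10% unit response by 2018.
rr_2018 = unit_response_rate(0.70 * 0.7, 0.50 * 0.3)
```

The compounding is the point: a modest contact decline multiplied by a steep cooperation collapse yields the sub-10% rates described above.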
Response Rates and Party Polarization
To assess the effect of declining response rates on perceived polarization, we estimated our measure of polarization in each policy question in our data (N = 1,223, divided into six policy domains) as a function of response rates and elite polarization (Table 1). For response rates, we accounted for the three measures discussed above: the overall unit response and its two components—contact and cooperation. Because unit response is a function of contact and cooperation, we estimated two models for each policy domain—one with overall unit response rates (model 1 for each policy domain) and one with contact and cooperation rates (model 2 for each policy domain).Footnote 4
To account for the possible effect of increasing elite polarization on American public opinion, we included in all models the level of congressional polarization on the issue measured. Our data are House roll-call votes on policy-related issues between 2004 and 2018. Similar to the public opinion data, we coded the vote—aye (1) or nay (0)—of each Representative on each roll call and calculated the average Cohen’s d for the mean difference between Republicans and Democrats on all votes.
To control for the evident trend of increased polarization over time, we included a linear time trend.Footnote 5 Because of the high correlation between the time trend and elite polarization,Footnote 6 we estimated ridge regression models, which reduce the variance of coefficient estimates for correlated predictors by shrinking them toward zero, at the cost of some bias (Hoerl and Kennard 1970; Rawlings, Pantula, and Dickey 1998; Seber and Lee 2003; Tripp 1983).Footnote 7
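The paper estimates these models in R with the "lmridge" package and a KM4-tuned penalty; the Python sketch below is only an analogue of the idea, on synthetic data, with an arbitrary penalty rather than KM4. It shows the property that motivates ridge here: with highly correlated predictors, the penalized coefficients have a smaller norm than the OLS solution.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey-level data: a time trend and an
# elite-polarization series made deliberately collinear with it,
# mimicking the correlation that motivates ridge shrinkage.
n = 200
time_trend = np.linspace(0, 1, n)
elite_polarization = time_trend + rng.normal(0, 0.05, n)  # near-collinear
cooperation_rate = rng.uniform(0.05, 0.5, n)
X = np.column_stack([time_trend, elite_polarization, cooperation_rate])
# Hypothetical outcome: Cohen's d rising with time, falling with cooperation.
y = 0.3 + 0.5 * time_trend - 0.4 * cooperation_rate + rng.normal(0, 0.05, n)

# Ridge with an arbitrary penalty (the paper tunes it via KM4 instead).
ridge = Ridge(alpha=1.0).fit(X, y)

# OLS on the centered data, for comparison of coefficient norms.
Xc = X - X.mean(axis=0)
ols_coef, *_ = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)
```

Ridge shrinks the collinear time and elite-polarization coefficients while still recovering the negative cooperation effect, which is the pattern the substantive models test.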
Overall Response Models (1). Consistent with conventional wisdom and with MP, the effect of time is positive and significant on all domestic issues—economy, energy, immigration, civil rights, and welfare—and negative on foreign policy. Americans are increasingly polarized on most issues on the public agenda. Consistent with CF, the results suggest that falling overall response rates contribute to the increase in measured polarization on the three major domestic issues—economy, energy, and immigration; the effect of response rates is insignificant on civil rights and welfare and positive (and significant) on foreign policy.
Contact and Cooperation Models (2). When we separate the two components of survey response—contact and cooperation—we find support for the cooperation hypothesis on the performance topics—economy, energy, and immigration—and on foreign affairs. Cooperation rates are negatively associated with polarization on the three domestic issues and positively associated with polarization on foreign affairs. As expected, we find no association between cooperation rates and polarization on civil rights and welfare.
The effect of contact rates is insignificant on the economy; negative on energy, immigration, and civil rights; and positive on welfare and foreign affairs. These mixed results are consistent with our null expectation regarding contact rates (the Contact Hypothesis). Further research is needed to determine whether contact rates reflect random or purposeful processes and how they affect measures of polarization.
Conclusion
The political polarization of Americans has attracted significant scholarly, media, and foundation attention in recent years, with growing concern about what this trend means for American democracy. Therefore, getting mass polarization right is a primary task for political scientists and should be a concern to the news industry. We show that declining response rates in probability surveys—a primary tool for assessing polarization—elevate perceived polarization on some topics (economy, energy, immigration) and downplay it on others (foreign affairs). Simply put, we are mismeasuring one of the most heated topics in political science today.
More broadly, survey data are frequently used in political science research and routinely discussed in the news; surveys are used by decision makers to formulate policy and by candidates to devise electoral strategy; and evidence from surveys has been shown to affect the political behavior of Americans. Given the importance of this tool and the challenges it faces, the polling industry is experimenting with various techniques, including probability and nonprobability sampling designs. Despite declining response rates, probability surveys remain a valuable tool that can produce accurate and reliable estimates of public behavior—far better than the alternative of nonprobability internet samples (Dutwin and Buskirk 2017; Prosser and Mellon 2018). And yet, the minuscule response rates that have become the norm in probability samples demand caution, especially if we suspect that nonresponse is associated with our outcome variable.
Correcting these biases in such low-response, truncated surveys using postsample statistical tools may lead to colossal errors (Brehm 1993). Any application of postsample weights assumes we can infer the attitudes of nonrespondents from those who respond. However, this assumption is rarely met. Nonrespondents may hold attitudes similar to those of their peer demographic or political groups (between-group nonresponse) or views that differ from their peer groups for unidentified reasons (within-group nonresponse). Without knowing which explanation (or what mix of the two) is true, we cannot be confident in our estimation (Clinton et al. 2021). People not responding to a survey may be more, less, or equally polarized as those who respond. Even if nonresponse has little effect on measures of overall policy or voting preferences, which estimate an average value, it may have dramatic effects when assessing a divide on a policy, which concerns the variance between groups. And, unlike preelection polls, we have no postpolling reference (the actual vote) to use as a benchmark. We simply do not know the extent to which Americans are polarized over policy.
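The between-group versus within-group distinction can be made concrete with a toy simulation. Everything below is invented for illustration: the group shares, engagement rates, and "extremity" scale are assumptions, not estimates from the paper. The point it demonstrates is that weighting on education cannot repair nonresponse driven by engagement *within* education groups.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: two education groups; within each, engaged and unengaged
# members. The engaged hold more extreme (polarized) views. All parameters
# are hypothetical.
n = 100_000
college = rng.random(n) < 0.35
engaged = rng.random(n) < np.where(college, 0.6, 0.3)
extremity = np.where(engaged, 3.0, 1.0) + rng.normal(0, 0.5, n)

true_mean = extremity.mean()

# Within-group nonresponse: the engaged respond more, within BOTH groups.
responds = rng.random(n) < np.where(engaged, 0.12, 0.04)

# Post-stratify on education: reweight respondents to population group shares.
weights = np.empty(n)
for grp in (True, False):
    in_grp = college == grp
    pop_share = in_grp.mean()
    resp_share = (in_grp & responds).sum() / responds.sum()
    weights[in_grp] = pop_share / resp_share

weighted_est = np.average(extremity[responds], weights=weights[responds])
```

Even after weighting to the correct education shares, the estimate stays well above the population mean, because the bias operates inside each weighting cell, exactly the within-group case the text describes.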
Low response rates, now a staple of polling, bias our understanding of the political and social life that we, as researchers, are tasked with describing. The evidence presented here suggests that the onus is on the researcher to justify generalizations from survey data that rely on the selective group captured in low-response samples. Nonresponse undermines scientific representation, reducing the extent to which surveys provide an accurate portrait of the public (Brehm 1993). Empirical research follows strict routines to improve confidence in causal claims based on statistical models. Researchers should apply similar caution when evaluating survey data with a response rate of 6% or 8% to understand what the general public wants, thinks, and does politically.
Supplementary Materials
To view supplementary material for this article, please visit http://doi.org/10.1017/S0003055422000399.
DATA AVAILABILITY STATEMENT
Research documentation and data that support the findings of this study are openly available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/UECUBY.
Acknowledgments
We thank Ken Goldstein, Alex Mintz, Chris Wlezien, the anonymous reviewers, and editors of APSR for their helpful comments on various versions of this project.
CONFLICT OF INTEREST
The authors declare no ethical issues or conflicts of interest in this research.
ETHICAL STANDARDS
The authors affirm this research did not involve human subjects.