As Internet access expands, scholars are increasingly employing web surveys to reach populations that were previously reachable only through field contact or mail. However, web-based surveys remain prone to low response rates (Göritz, 2006). Researchers have found that incentives typically boost survey participation (ibid.), and incentives are typically designed in accordance with three theories of why individuals respond to surveys: (1) egoistic reasons (e.g., respondents are motivated by monetary incentives that advance their self-interest); (2) altruistic reasons (e.g., respondents are motivated by the promise of enabling a social good); and (3) survey-related reasons (e.g., respondents are motivated by their interest in the survey topic itself). Meta-analyses of incentive studies by Church (1993), Edwards et al. (2002), Singer and Bossarte (2006), Dillman (2011), and Singer and Ye (2013) all show that, on average, egoistic incentives are most effective at boosting response rates.
These findings, however, come with a major limitation. The preponderance of studies on survey incentives has been conducted in the USA, Canada, or Western Europe (Meuleman et al., 2018). Findings from these studies may not be applicable in countries with different cultural, social, political, or economic contexts. We address this lacuna in the survey methodology literature by implementing an original online experiment on a comparable sample of individuals in three countries: Australia, India, and the USA. We examined which incentive strategies work best to elicit online survey participation for a like-minded set of individuals, namely recent college graduates interested in joining a national service organization. While the results from these three countries are not nationally representative, holding the sub-population type constant allows us to better assess how the relative effectiveness of these incentives might differ by country context.
The study focuses on the effectiveness of five incentive strategies to increase online survey participation: a control condition with just a narrative appeal (footnote 1); a narrative appeal coupled with a 5–10 USD donation to a charity of the respondent's choice (an altruistic incentive); and three egoistic incentives. In a replication of the USA study, we also considered a 20 USD donation condition, as it is plausible that 5 USD in India may have been perceived as more attractive than even 10 USD in the USA and Australia (footnote 2). Our findings indicate that the effectiveness of an incentive is highly dependent on country context. While egoistic incentives consistently outperform the altruistic incentive and the narrative appeal in the USA (and to some extent in Australia), they are no more effective than the narrative appeal and altruistic incentive in India.
To provide corroborating insights into the relative effectiveness of egoistic versus altruistic rewards, we conducted an adapted dictator game among all survey respondents. At the start of the survey, respondents were told that if they completed the survey, they would be entered into a 100 USD lottery. They were additionally told that were they to win, they could keep all the prize money for themselves or contribute any or all of the award to one or more charities, and asked to share their preferred award allocations. In line with the findings of our incentive experiment, we find that respondents in India are more inclined to donate their potential monetary prize to charity than a similar population of individuals in the USA. Australian respondents fell somewhere in between American and Indian respondents in their propensity to donate lottery winnings to charity.
The results from these two research activities suggest that the strength of different incentive strategies varies across countries, even among similar groups of respondents. Our findings caution scholars conducting research outside of western countries against simply adopting the recommended incentives from existing survey methods research. In particular, egoistic incentives should be adequately tested for their effectiveness in other country and population settings.
1. Hypotheses
Evidence to date, largely from western settings, shows that altruistic incentives are less effective than egoistic appeals (footnote 3). However, even egoistic incentives can take multiple forms and therefore vary in effectiveness. We thus tested the relative effectiveness of three different lotteries: many prizes of a small monetary amount, a few prizes of a large monetary amount, and a combination of these small and large lotteries.
Existing literature offers mixed evidence on the effectiveness of a larger number of lottery awards of smaller monetary amounts versus a smaller number of lottery awards of higher amounts. While Deutskens et al. (2004) find the former to be more effective, Gajic et al. (2012) find the latter to be more successful. In yet another configuration, Khan and Kupor (2016) examine the effect of bundling smaller lottery prizes with a single large lottery prize, all with an equal likelihood of winning. They find that such bundling leads individuals to perceive the larger prize as less valuable than if the larger prize were offered on its own, a concept termed "value atrophy." We thus included this "mixed" lottery as a treatment. Given that the existing literature is inconclusive, we do not advance hypotheses on the relative effectiveness of the different lotteries in our study, across or within countries.
Similar to egoistic incentives, altruistic incentives vary in nature. We examined the effectiveness of a donation to charity compared to a narrative appeal and egoistic appeals. While comparable altruistic incentives have not been tested across countries, evidence from dictator games suggests that individuals from high-income countries are more likely to give nothing to another party than individuals from the "developing" world (Engel, 2011). Such differences, though, may change depending on the value of the currency. For example, larger monetary amounts have been shown to lessen giving in India, as players have more to gain (Raihani et al., 2013). Thus, in the context of our study, extant research does not provide clear predictions: it is unclear whether those in India, where the same monetary amount may be worth more than in the USA or Australia, are more likely to give away some portion of their prize (footnote 4).
2. Methods
2.1 Research design
To examine the effectiveness of various incentive strategies, we randomly assigned each survey respondent to receive one of five incentives in India, and one of six incentives in Australia and the USA (footnote 5). The survey was conducted among college-educated individuals who applied to join a similar not-for-profit national service organization in all three country settings: Teach For America (2007–2015 application cycles), Teach For Australia (2011–2016 application cycles), and Teach For India (2009–2014 application cycles). All three organizations are part of the Teach For All network, and they each employ a similar service model and mission: to address education inequality in the country in which they work. The population consists of those who applied and made it to the final round of the selection process, which translates to 120,417 individuals in the USA, 14,336 individuals in India, and 1,470 individuals in Australia (footnote 6). By focusing on a similar population across countries, we overcome a key issue: that differences in respondent samples, rather than country contexts, may lead to varying effectiveness of survey incentives.
The incentives offered in each country included: (1) a control condition, consisting of a short narrative appeal; (2) an altruistic appeal with a 5 USD donation to a charity of the respondent's choice; (3) an egoistic appeal with entry into a big lottery with two 1000 USD prizes (henceforth the big lottery condition); (4) an egoistic appeal with entry into a small lottery of twenty 100 USD prizes (small lottery condition); and (5) an egoistic appeal with entry into a mixed lottery with both two 1000 USD prizes and twenty 100 USD prizes (mixed lottery condition) (footnote 7). In Australia and the USA, we added another condition, a 10 USD charity donation, out of concern that a 5 USD charity incentive is more valuable in India than in the USA or Australia (footnote 8). Appendix A provides the language used for each incentive. In a February 2024 replication in the USA, we further increased the charity condition incentive to 20 USD (footnote 9). In the replication study, we also verified that the incentive manipulation worked as intended, with nearly all study participants correctly identifying the incentive that was offered to them when asked (see Tables F12–F14).
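The balanced random assignment described above can be sketched as follows. This is a minimal illustration under our own assumptions: the condition labels and the equal-arm scheme are ours, not the survey platform's actual code.

```python
import random

# Incentive conditions in Australia and the USA; the India arm used five
# conditions (no 10 USD charity donation). Labels are ours, for illustration.
CONDITIONS = [
    "control_narrative",
    "charity_5usd",
    "charity_10usd",
    "big_lottery",    # two 1000 USD prizes
    "small_lottery",  # twenty 100 USD prizes
    "mixed_lottery",  # two 1000 USD and twenty 100 USD prizes
]

def assign_balanced(respondent_ids, conditions, seed=0):
    """Assign each respondent to one condition, keeping arms equally sized
    (to within one respondent), as with an equal-distribution randomizer."""
    rng = random.Random(seed)
    n = len(respondent_ids)
    # Repeat the condition list until it covers everyone, then shuffle.
    pool = (conditions * (n // len(conditions) + 1))[:n]
    rng.shuffle(pool)
    return dict(zip(respondent_ids, pool))

assignments = assign_balanced(list(range(600)), CONDITIONS)
```

For the India arm, the same function would be called with the five-condition list.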
The way the incentive was administered differed slightly in India versus the other countries. Each individual in the India survey panel received an invitation e-mail with a narrative appeal, as seen in Appendix A1. If individuals accepted the e-mail invitation, they were taken to a landing page with a consent form that randomized incentives across the five treatments described above (full text available in Appendices A2–A6). The online survey in India was kept open for two weeks, between December 24, 2014 and January 6, 2015. Within that period, a total of 1,780 individuals opened the invitation e-mail and saw the incentive (12.20 percent of the panel), and 643 completed the survey (4.41 percent of the panel; 36.1 percent of those who saw the e-mail invitation). We limit our analysis sample to the 1,780 individuals who were exposed to one of the five treatments.
For the USA and Australia studies, the incentives were noted in the e-mail invitation itself, as shown in Appendix A7. The USA incentives experiment ran between October 1, 2015 and October 15, 2015, yielding a 9.78 percent response rate and a 7.46 percent completion rate. The Australia study ran from January 9, 2018 to February 17, 2018 and yielded a 16.19 percent response rate and a 13.61 percent completion rate (footnote 10). The surveys were implemented with random assignment across all respondents (footnote 11).
To further validate the results of our main study, we conducted an adapted dictator game (Gilens, 2011) at the start of the survey. After consenting to participate in the study, respondents were immediately told that if they completed the survey, they would be automatically entered into an additional lottery in which ten winners would be awarded a 100 USD cash prize, in all three countries (footnote 12). We then asked individuals to play a variant of the dictator game, wherein each respondent was asked to allocate the 100 USD prize among themselves and ten charities in the event that they won the lottery (see Appendix A figures and Table A1, respectively, for the full text). The lottery did not include information on the probability of winning, as it depended on the number of individuals who completed the survey, which neither the respondents nor the researchers knew in advance.
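To make the allocation task concrete, a respondent's choice can be represented as a split of the 100 USD prize between themselves and the ten charities. The sketch below is illustrative only; the function names and validation rules are our assumptions, not the survey instrument's actual logic.

```python
def validate_allocation(self_amount, charity_amounts, prize=100):
    """Check that a dictator-game allocation fully divides the prize
    between the respondent and the ten listed charities."""
    if len(charity_amounts) != 10:
        return False
    if self_amount < 0 or any(a < 0 for a in charity_amounts):
        return False
    return self_amount + sum(charity_amounts) == prize

def donated_share(self_amount, prize=100):
    """Fraction of the potential prize the respondent gives to charity."""
    return (prize - self_amount) / prize
```

For example, a respondent keeping 40 USD and splitting the remainder evenly across the ten charities donates 60 percent of the prize.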
2.2 Measures
To assess the effect of incentives on response rates, we defined the dependent variable in two ways. First, we created a variable for whether the respondent started the survey. Second, we created a variable for whether the respondent completed the survey. The definitions of each of these outcome measures are as follows:
1. Response rate: the number of respondents who proceeded past the consent form (the first page) of the survey divided by the number of respondents who were exposed to the incentive condition (AAPOR RR2 response rate).
2. Completion rate: the number of respondents who completed the survey divided by the number of respondents who were exposed to the incentive condition (AAPOR RR1 response rate).
We calculated the completion and response rates slightly differently in India, given that individuals there were informed about the incentive in the consent form of the survey, and not in an e-mail that asked subjects to click on a link to the survey (as was done in the Australia and USA surveys). Given this difference, completion and response rates in Australia and the USA reflect the intent-to-treat (ITT) rates as defined above, akin to the AAPOR RR1 and AAPOR RR2 rates. Rates in India, using the same method, would yield treatment-on-the-treated (ToT) estimates. However, we take advantage of our survey platform's equal distribution of the treatment to estimate the ITT rates in India, which enables comparability across country contexts (footnote 13).
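Both outcome measures reduce to simple ratios over the pool exposed to an incentive condition. A minimal sketch (variable names are ours), using the India figures reported above:

```python
def response_rate(n_started, n_exposed):
    """RR2-style rate: respondents who proceeded past the consent form
    (the survey's first page), over all respondents exposed to an incentive."""
    return n_started / n_exposed

def completion_rate(n_completed, n_exposed):
    """RR1-style rate: respondents who completed the survey,
    over all respondents exposed to an incentive."""
    return n_completed / n_exposed

# India: 1,780 individuals saw an incentive condition; 643 completed.
pct_completed_of_exposed = round(completion_rate(643, 1780) * 100, 1)  # 36.1
```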
3. Results
3.1 Incentive study
Overall, survey response rates (RR2) were highest in Australia (14.00 percent), followed by India (12.20 percent) and the USA (9.78 percent) (see Figure 1) (footnote 14). Assessing the relative effectiveness of the incentives, we see distinct patterns by country. In India, the charitable incentive and the control condition (narrative appeal only) are just as effective as the lottery treatments, whereas in the USA the lottery incentives are the most effective at improving response and completion rates. In a replication of the USA study in 2024, we also find that the lottery conditions yield the highest response and completion rates (see Table F5 and Figures F1 and F2 for response and completion rates by incentive), which helps assuage concerns that our findings are sensitive to the timing of the surveys. In Australia, the lottery incentives generated higher response rates on average than the charitable incentives and the control condition, though these differences are not statistically significant at standard levels. We report the approximate ITT means and t-test results for India in Appendix C, Tables C1 and C2. We also report the linear probability models and marginal effects models for the ITT in Appendix D, Tables D1–D8 for Australia and Tables D13–D20 for the USA. The results below are from the ITT analysis across the three countries.
3.1.1 India
Overall, we find that the charity and control conditions perform just as well as the lottery conditions in India. Specifically, while the charity treatment yielded the highest response rate in India (14.23 percent), it did not statistically outperform any of the three lotteries (see Figure 1 and rows 5–7 in Appendix Table C1). The control condition also yielded a relatively high response rate (13.99 percent), though this rate is not statistically significantly different at the 90 percent level from the lottery-based or charity-based incentives' response rates (see Appendix Table C1, rows 1–4). In sum, the charity and control conditions perform just as well as the lottery conditions in eliciting survey response, making the zero-cost narrative appeal the most cost-effective strategy. Moreover, we do not see evidence of value atrophy: the mixed lottery condition did not underperform the big lottery or small lottery conditions. When we examine completion rates, we find a similar pattern (see Figure 2 and Appendix Table C2).
3.1.2 United States
In the USA, the lottery incentives led to the highest response rates (see Figure 1). Specifically, the mixed lottery yielded a 12.89 percent response rate, 2.4 percentage points higher than both the big and small lotteries (p < 0.001 for both; see Appendix Table D13, rows 4 and 5 in column 6). This finding is at odds with the value atrophy hypothesis.
Our findings for survey completion rates are similar: the mixed lottery condition has the highest completion rate at 10.11 percent, followed by the small lottery condition at 8.00 percent and the big lottery condition at 7.74 percent (see Figure 2). The 5 USD charity condition has a slightly lower completion rate at 7.04 percent, followed by the 10 USD charity condition at 6.71 percent. In line with existing literature, the control condition yielded the lowest completion rate at 5.02 percent. All differences between the best-performing incentive, the mixed lottery condition, and each of the other incentive conditions in the USA are statistically significant at p < 0.001 (see Appendix Table D15, column 6).
We replicated the 2015 USA study in 2024 and, reassuringly, find similar results (see Appendix F): the mixed lottery condition resulted in the highest response (9.19 percent) and completion (7.94 percent) rates (see Tables F3–F11 and Figures F1 and F2). These results held even though we increased the value of one of the charity conditions from 10 USD to 20 USD: each lottery condition still achieved a higher response and completion rate than the 20 USD charity condition at statistically significant levels. Moreover, the 20 USD charity condition and the 5 USD charity condition led to comparable completion rates.
3.1.3 Australia
In Australia, the small and big lottery conditions are the most effective, each yielding a 17.96 percent response rate, though differences in response rates between the lottery conditions and other incentives are not statistically meaningful (see Figure 1). In terms of completion rates in Australia, the small lottery, big lottery, and 10 USD charity conditions yielded the highest rates, at 15.10, 14.29, and 14.29 percent, respectively, followed by the mixed lottery condition (12.65 percent) and then the control condition (11.84 percent; see Figure 2 and Appendix Table D3). The charity conditions, with completion rates of 13.47 percent for the 5 USD charity and 14.29 percent for the 10 USD charity, fell in between the trends in India and the USA: while they performed nominally better than the mixed lottery incentive and the control, they did not outperform the big and small lotteries on their own. However, no treatment in Australia yielded a statistically significant difference in completion rate over any other treatment or the control (see Appendix Table D3).
3.2 Dictator game
The findings from our adapted dictator game are summarized in Table 1 and visualized in Figure 3. In India, 81.35 percent of respondents donated some amount to charity, with an average donation of 62.04 USD. Comparatively, 45.13 percent of individuals in the USA donated some amount to charity, with an average donation of 30.18 USD. This difference is statistically significant (p < 0.01). Altruism levels of Australian respondents fell in between those of India and the USA: 61.88 percent of Australian respondents donated some amount of money, with an average donation of 50.12 USD, which is significantly different from both other countries (p < 0.01) (footnote 15).
4. Conclusion
We conducted one of the first comparative studies of survey incentives and found that, even among a similar pro-social population, responses to different incentives varied by country context (footnote 16). While a wide variety of research suggests that egoistic incentives outperform altruistic incentives, these findings seem to hold only in the USA and, to a lesser extent, in Australia. Our study finds that while monetary lotteries are more likely to elicit higher participation rates than altruistic appeals in the USA and, to some extent, in Australia, they are no more effective than the charity appeal in India, at least among pro-social groups. In line with these findings, individuals in India are much more likely than those in the USA to donate money, either partly or wholly, in an adapted dictator game. Similarly, Australian respondents are much more likely to act charitably than American respondents, but not as charitably as Indian respondents.
Additionally, we see variation in the effectiveness of monetary incentives by country context. While the mixed lottery was found to be the most effective among respondents in the USA, which is at odds with the value atrophy hypothesis (Khan and Kupor, 2016), that was not the case in Australia. As Khan and Kupor's original value atrophy hypothesis was formed using a very different survey population (largely male) and a different set of small prizes, additional tests of the hypothesis are needed to better understand the settings in which it holds.
Finally, while we are unable to unearth the mechanisms through which the Indian group acts more charitably, or why Australians responded differently than Americans to different monetary incentives, it is evident that survey incentives from one context cannot be wholly exported to another, even among similar populations. Given the low participation rates of most web-based surveys and the high cost of certain incentives, it would be prudent for future researchers to test various incentives prior to running web-based surveys in country settings in which studies of survey incentives have not taken place.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/psrm.2024.53.
To obtain replication material for this article, visit https://doi.org/10.7910/DVN/CT9EJU.