
Do participation rates vary with participation payments in laboratory experiments?

Published online by Cambridge University Press:  02 April 2025

Huizhen Zhong*
Affiliation:
College of Economics and Management, South China Agricultural University, Guangzhou 510642, China; College of Business, Shaoguan University, Shaoguan 512005, China
Cary Deck*
Affiliation:
Department of Economics, Finance and Legal Studies, University of Alabama, Tuscaloosa, AL 35487-0224, USA; Economic Science Institute, Chapman University, Orange, CA, USA
Daniel J. Henderson*
Affiliation:
Department of Economics, Finance and Legal Studies, University of Alabama, Tuscaloosa, AL 35487-0224, USA; Institute for the Study of Labor (IZA), Bonn, Germany

Abstract

This paper reports a series of experiments designed to evaluate how the advertised participation payment impacts participation rates in laboratory experiments. Our initial goal was to generate variation in the participation rate as a means to control for selection bias when evaluating treatment effects in common laboratory experiments. Initially, we varied the advertised participation payment offered to 1734 people between $5 and $15 using standard email recruitment procedures, but found no statistical evidence that this impacted the participation rate. A second study increased the advertised payment up to $100. Here, we find marginally significant statistical evidence that the advertised participation payment affects the participation rate when payments are large. To address skepticism of our results, we also conducted a third study in which verbal offers were made. Here, we found no statistically significant increase in participation rates when the participation payment increased from $5 to $10. Finally, we conducted an experiment similar to the first one at a separate university. We found no statistically significant increase in participation rates when the participation payment increased from $7 to $15. The combined results from our four experiments suggest moderate variation in the advertised participation payment from standard levels has little impact on participation rates in typical laboratory experiments. Rather, generating useful variation in participation rates likely requires much larger participation payments and/or larger potential subject pools than are common in laboratory experiments.

Type
Original Paper
Copyright
Copyright © The Author(s), under exclusive licence to Economic Science Association 2024

1 Introduction

The generalizability of laboratory experiments is potentially limited because participation is voluntary, which could introduce selection bias. In principle, one can use variation in the participation payment to help control for selection bias if more people are willing to participate in an experiment when the advertised participation payment is increased.Footnote 1 Harrison et al. (2020, p. 567) do just this in a field experiment and call "for future experimental designs [to] exogenously vary show-up fees and evaluate the effects [of selection bias] on a case-by-case basis." As a practical matter, the impact of varying the participation payment depends on the responsiveness of the potential subjects.

While our original intention was to answer the call of Harrison et al. (2020), we observed little variation in participation rates when advertised participation payments varied between $5 and $15 using otherwise standard procedures for our laboratory. This paper documents our initial experiment and three related follow-up studies: one conducted with participation payments ranging from $5 to $100 with the offered amount in the subject line of the recruitment email; one with verbal offers being made to potential participants; and one conducted at a different laboratory using its standard recruitment procedures. Collectively, our findings suggest small increases in the advertised participation payment are unlikely to have a substantial impact on the participation rate. One implication of this finding is that a researcher may need access to a very large number of potential participants and/or be able to offer substantially different participation payments in order to use variation in advertised payments as a means to control for selection bias. As a practical matter, such efforts may be better suited to settings where these features are more typical, such as large-scale field experiments, rather than standard laboratory settings.

2 Initial study

2.1 Recruitment procedure

On the Sunday evening prior to a planned experiment, we sent recruitment emails to 1734 potential participants in the subject pool of the University of Alabama's TIDE Lab. The pool primarily comprises undergraduate students in the College of Business and includes nearly every upper-level student in the college. In the email, each potential participant was told the following: (1) they were being invited to participate in a study where they would receive a specific amount of money for showing up on time (i.e., the advertised participation payment), and they could earn extra money based on their decision-making during the study; (2) the study would last about 30 min; (3) they could choose any one of the sessions scheduled every half hour from 9:30 am to 4:00 pm on Thursday or Friday of that week; (4) they could sign up for the study until midnight Wednesday via a link in the email. The recruitment email sent to each potential participant was identical except for the advertised participation payment and used the lab's standard recruitment email template for paid studies.

The range of advertised participation payments varied from $5 (a participation payment commonly used in lab experiments in the U.S.) to $15. Based on our own intuition, we expected participation rates to vary from approximately 10% to approximately 25% as the advertised participation amount increased from $5 to $15. We randomly drew 100 different levels of participation payments (in dollars and cents) from a uniform distribution over this range.Footnote 2 The 1734 potential participants were then randomly assigned to those payment levels. In total, 125 individuals signed up for the study and 105 showed up and participated in the experiment. Thus, the registration and participation rates are 7.21% and 6.06%, respectively, while the participation rate conditional on registration is 84%. It is not clear how these registration and participation rates compare to the lab's typical rates, since overall registration and participation statistics are not available for TIDE Lab. The participation rate conditional on registration was in line with the typical rate at the lab.
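For concreteness, the randomization just described can be sketched as follows. This is a minimal illustration rather than the authors' code; in particular, how invitees were balanced across the 100 payment levels is an assumption.

```python
import numpy as np

# A sketch of the randomization described above (illustrative, not the authors' code).
rng = np.random.default_rng(2024)  # arbitrary seed

# 100 payment levels (in dollars and cents) drawn uniformly over [$5, $15],
# e.g., $5.08, $5.17, ..., $14.93 (cf. Footnote 2).
levels = np.round(rng.uniform(5.0, 15.0, size=100), 2)

# Each of the 1734 invitees is randomly assigned one of those levels.
# (Assumption: independent uniform assignment rather than exact balancing.)
assigned = rng.choice(levels, size=1734)
```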

2.2 Registration and participation decisions

Here we examine the impact of the advertised participation payment on the registration decision, the participation decision, and the participation decision conditional on registration. Figure 1 presents a scatter plot of the data. Given that our outcome variables are binary in nature (e.g., 1 if the person registers and 0 if not), our regression results are based on Probit and Complementary Log-Log models. For each model, we regress the outcome of interest on the advertised participation payment and rely on bootstrapped standard errors.Footnote 3 The results can be found in Table 1 for all three outcomes of interest. As shown in columns 1 and 2 for each outcome variable, the estimated coefficient for the advertised participation payment is small and not statistically different from zero. Figure 1 also provides the predicted probability of participation given an advertised participation payment from the Probit regression as well as 95% bootstrapped confidence bounds for the estimated probabilities. To help interpret the Probit regression, we present Fig. 2, which shows the predicted difference in registration rates for different advertised participation payments.Footnote 4 Each curve in this contour plot denotes pairs of advertised amounts for which the difference in the predicted registration rate equals a given amount. For example, the curve just below the 45° line (which itself is not plotted) identifies pairs of advertised amounts where the predicted change in the registration rate is 0.1 percentage points. Intuitively, points close to the 45° line should be associated with small changes in the predicted registration rate since the two advertised amounts are almost identical. At the other extreme, increasing the advertised amount from $5 to $15 is associated with about a 1.1 percentage point increase in the registration rate, as can be seen in Fig. 2 as (15, 5) lies to the lower right of the (unlabeled) 1.0 percentage point change contour line. Using a bootstrapping procedure, we do not find that the predicted change in the registration rate is significantly different from 0 at the 95% confidence level for any pair of advertised amounts between $5 and $15.
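The quantities plotted in Fig. 2 follow directly from the Probit estimates: the predicted change between a low and a high advertised amount is Φ(β₀ + β₁·high) − Φ(β₀ + β₁·low). A quick check using the rounded coefficients from Table 1 (the published figure is based on unrounded estimates, so the result differs slightly):

```python
from scipy.stats import norm

# Rounded Probit estimates for the registration decision (Table 1, panel a).
b0, b1 = -1.559, 0.009

# Predicted change in the registration rate when moving from $5 to $15.
delta = norm.cdf(b0 + b1 * 15) - norm.cdf(b0 + b1 * 5)
print(f"{100 * delta:.2f} percentage points")  # ~1.2 here; Fig. 2 reports ~1.1
```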

Fig. 1 Scatter plot of advertised participation payment versus registration decision, participation decision and participation decision conditional on registration with fitted values from Probit regression and bootstrapped confidence bounds for the initial study

Fig. 2 Contour plot of the predicted change in registration rate between two advertised amounts for the initial study (lower offer vs higher offer)

Table 1 Effect of advertised participation payment on an individual's registration decision, participation decision and participation decision conditional on registration (999 bootstrapped standard errors are in parentheses below each point estimate) for the initial study

(a) Dependent variable: registration decision

                                    Probit model (1)   Complementary log–log model (2)
Advertised participation payment    0.009              0.018
                                    (0.015)            (0.030)
Constant                            −1.559             −2.785
                                    (0.162)            (0.322)
Observations                        1734               1734
Log likelihood                      −448.932           −448.934
AIC                                 901.863            901.868
BIC                                 912.780            912.785

(b) Dependent variable: participation decision

                                    Probit model (1)   Complementary log–log model (2)
Advertised participation payment    0.009              0.018
                                    (0.016)            (0.032)
Constant                            −1.644             −2.962
                                    (0.173)            (0.356)
Observations                        1734               1734
Log likelihood                      −396.048           −396.052
AIC                                 796.097            796.103
BIC                                 807.013            807.020

(c) Dependent variable: participation decision (conditional on registration)

                                    Probit model (1)   Complementary log–log model (2)
Advertised participation payment    −0.001             −0.001
                                    (0.050)            (0.041)
Constant                            1.003              0.613
                                    (0.547)            (0.452)
Observations                        125                125
Log likelihood                      −54.959            −54.959
AIC                                 113.917            113.917
BIC                                 119.574            119.574

2.3 Email read rate

One limitation of this study is that we do not observe which potential participants actually read the recruitment email. Thus, the 7.21% registration rate we observe is a lower bound. To evaluate how the possibility of unread emails affects the impact of the advertised participation payment on the registration decision, we reconduct the above analysis after simulating all possible email read rates. Specifically, for each value of K from 1 (the minimum positive number of emails that may not have been read) to 1608 (the maximum number of emails that may not have been read such that the registration rate is less than 100%), we went through the following process 1000 times: (1) randomly select K observations from the set of 1609 non-registrants to drop from the data set; (2) estimate the Probit model for the registration rate as a function of the advertised amount using the 1734 − K retained observations. We only randomly selected non-registrants to drop because the 125 people who registered had to have read the email, and the decision to read the email could not be conditioned on the advertised participation payment as that was contained in the body of the message. The upper panel of Fig. 3 shows the average of the coefficients relating registration rate to advertised payment for the 1000 Probit regressions for each value of K. The bottom panel of the figure shows, for each value of K, the average of the 1000 p-values associated with the one-sided alternative hypothesis that the estimated coefficient in a Probit regression is greater than zero. What is clear from the figure is that the results do not depend on whether only the 125 people who registered read the recruitment email or all 1734 people we contacted read it—the advertised participation payment had little effect on participation.Footnote 5
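A sketch of this simulation on placeholder data (the variable names, the stand-in data, and the use of the asymptotic z statistic for the one-sided p-value are assumptions; the text does not spell out these implementation details):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Placeholder data standing in for the 1734 observations of the initial study:
# `pay` is the advertised amount, `register` the 0/1 registration indicator.
rng = np.random.default_rng(1)
pay = rng.uniform(5, 15, 1734)
register = (rng.uniform(size=1734) < 0.072).astype(int)  # ~7.2% base rate
non_reg = np.flatnonzero(register == 0)

def simulate(K, reps=1000):
    """Drop K random non-registrants, refit the Probit; average slope and p-value."""
    coefs, pvals = [], []
    for _ in range(reps):
        drop = rng.choice(non_reg, size=K, replace=False)
        keep = np.setdiff1d(np.arange(pay.size), drop)
        fit = sm.Probit(register[keep], sm.add_constant(pay[keep])).fit(disp=0)
        coefs.append(fit.params[1])
        pvals.append(1 - norm.cdf(fit.params[1] / fit.bse[1]))  # H1: slope > 0
    return np.mean(coefs), np.mean(pvals)

print(simulate(200, reps=50))  # small reps for a quick illustration
```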

Fig. 3 Average estimated coefficient in Probit regression and average p-value of the coefficient from 1000 simulations for the given number of emails not read in the initial study

2.4 Impact of participation payment on treatment effects

We expected participation rates to vary from, say, 10% to 25% as the advertised amount increased from $5 to $15. As previously indicated, our intention was to use this variation as an exclusion restriction in a two-step Heckman (1979) selection correction approach, where the first stage estimates the impact of the payment amount on registration and the second stage uses that analysis in estimating the treatment effect. Specifically, we planned to investigate how stakes impacted risk taking, how a match impacted charitable giving, and how cognitive load impacted reasoning in a beauty contest. Because the advertised show-up payment did not significantly impact the participation decision, there is no variation in participation rates to use as a control for selection bias when measuring treatment effects from the in-lab experiment conducted on the 105 people who came to the lab. As such, we relegate the details of the in-lab experiment and analysis of the treatment effects to the Appendix.
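To make the planned (but ultimately unused) procedure concrete, here is a minimal two-step Heckman sketch on simulated data. Everything here is illustrative: the variable names, the data-generating values, and the OLS second stage are assumptions, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1734
pay = rng.uniform(5, 15, n)                       # advertised participation payment
treated = rng.integers(0, 2, n)                   # hypothetical lab treatment
u = rng.normal(size=n)                            # selection-equation error
register = (-1.5 + 0.02 * pay + u > 0).astype(int)
# The outcome error is correlated with u, which is what creates selection bias.
outcome = 1.0 + 0.5 * treated + 0.4 * u + rng.normal(size=n)

# Step 1: Probit of registration on the advertised payment (the exclusion restriction).
probit = sm.Probit(register, sm.add_constant(pay)).fit(disp=0)
xb = probit.fittedvalues                          # linear index X'beta
mills = norm.pdf(xb) / norm.cdf(xb)               # inverse Mills ratio

# Step 2: outcome regression on registrants only, with the Mills ratio added
# to absorb the selection term; its coefficient estimates rho * sigma.
sel = register == 1
X2 = sm.add_constant(np.column_stack([treated[sel], mills[sel]]))
print(sm.OLS(outcome[sel], X2).fit().params)      # [const, treatment effect, lambda]
```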

3 Large show-up payments

We were surprised that offering $15 did not lead to substantially higher participation rates than offering $5. To determine what advertised amount would lead to a sizable increase in participation, we conducted an exploratory second study using a larger range of participation payments: $5 to $100.Footnote 6 We invited 96 randomly selected individuals from TIDE Lab's subject pool, excluding those who participated in the initial study. Each person was offered a unique integer dollar show-up payment, which was listed in the subject line of the individualized recruitment email (as opposed to the initial study, which, per TIDE Lab's standard practice, only revealed the payment amount in the body of the email).

The overall registration rate of 7.29% in this additional study was nearly identical to the 7.21% observed in the initial study. The participation rates were similar as well (7.29% vs 6.06% in the initial study).Footnote 7 With our 96 observations, we conduct both Probit and Complementary Log–Log regressions to test whether increasing the participation payment increases the likelihood that potential participants register for the lab experiment. The results, presented in Table 2, indicate the coefficient (in the Probit model) for the participation payment is relatively small and insignificant based upon the bootstrapped standard errors. If we use the asymptotic standard errors (0.012), the coefficient is marginally significant (p-value = 0.088). Figure 4 presents a scatter plot of the data along with the predicted probability of participation given an advertised participation payment based on the Probit regression as well as 95% bootstrapped confidence bounds for the estimated probabilities.

Table 2 Effect of advertised participation payment on an individual's registration/participation decision (999 bootstrapped standard errors are in parentheses below each point estimate) for the large show-up payment study (all those who registered participated)

Dependent variable: registration/participation decision

                                    Probit model (1)   Complementary log–log model (2)
Advertised participation payment    0.013              0.024
                                    (0.019)            (0.021)
Constant                            −2.250             −4.023
                                    (1.791)            (2.017)
Observations                        96                 96
Log likelihood                      −23.563            −23.670
AIC                                 51.126             51.340
BIC                                 56.255             56.524

Fig. 4 Scatter plot of advertised participation payment versus registration decision with fitted values from Probit regression and bootstrapped confidence bounds for the study with large show-up payments

To help interpret the Probit regression results, we present Fig. 5 which shows the predicted difference in registration rates for different advertised participation payments, similar to Fig. 2 for our initial experiment. Using our bootstrap procedure, we do not find that any of these changes are significantly different from 0 at the 95% confidence level using a one-tailed test, although there is marginally significant evidence at the 90% confidence level for large participation payment values. However, one should be cautious in drawing conclusions given the relatively small number of observations.

Fig. 5 Contour plot of the predicted change in registration rate between two advertised amounts for the study with large show-up payments (lower offer vs higher offer)

Two aspects of Fig. 5 are worth highlighting. First, the estimated increase in the participation rate when going from $5 to $15 is 0.5 percentage points, which is very similar to the predicted increase in the first study for that same change. Second, the predicted increase in the participation rate when going from $60 to $100 is about 10.5 percentage points. For comparison, in their Danish field experiment Harrison et al. (2020) report an increase in participation rates of about 6 percentage points for approximately equivalent dollar amounts. Thus, our predicted increase in participation over this range of advertised payments is nominally (although not statistically) larger than what they observed. We also note that our registration rates for these amounts are lower than those reported by Harrison et al. (2020). At $60 our predicted participation rate is 7.3% while their registration rate was 18.1% for a comparable amount. At $100 our predicted registration rate is 17.9% while theirs was 24.1% for a comparable amount.
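These magnitudes can be recovered, up to rounding, from the Table 2 Probit estimates in the same way as for the initial study (the small discrepancies from the figures quoted above arise because the published coefficients are rounded):

```python
from scipy.stats import norm

# Rounded Probit estimates from Table 2.
b0, b1 = -2.250, 0.013

for lo, hi in [(5, 15), (60, 100)]:
    delta = norm.cdf(b0 + b1 * hi) - norm.cdf(b0 + b1 * lo)
    print(f"${lo} -> ${hi}: {100 * delta:.1f} pp")  # ~0.5 pp and ~10 pp
```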

During the review process, we were met with some skepticism regarding certain aspects of our first two experiments. We therefore ran two additional experiments to determine whether our results were a fluke. The first uses verbal participation offers and the second was conducted at another university. The next two sections briefly describe each experiment.

4 Verbal offer of show-up payments

We conducted another exploratory study in which people were verbally offered either $5 or $10 to participate in an experiment as they left a separate, unrelated experiment conducted by another researcher. The goal of this study was to ensure everyone who was solicited to participate knew how much money was being offered. The verbal offer was made as follows: when collecting their payment for the unrelated study, in private, each person was informed that "There is another study starting now that will last approximately 15 min. You will be paid $[5 or 10] plus what you earn in the study. Would you like to participate?" The amount offered to each person was randomly predetermined. In total, we approached 62 people. Thirty of 32 people agreed to stay for $5 (93.75%) and 29 of 30 agreed to stay for $10 (96.67%). These rates do not differ statistically (p-value = 0.593); however, there is a ceiling effect given the high participation rate for the lower amount.Footnote 8, Footnote 9 Those who stayed completed a real effort task as in Azar (2019) with a piece rate compensation scheme. Subjects could complete up to 20 tasks, with the per-task compensation decreasing from $1.50 for the first task to −$1.00 for the 20th task. The 16th task paid $0.05 and the 17th task paid $0.00. The modal response was to complete 16 tasks, and 36 people stopped exerting effort at piece rates between $0.25 and $0.00, inclusive, despite the average earnings in the prior experiment being $30.39.Footnote 10
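For reference, a standard two-sample proportions z-test reproduces the reported p-value (the specific test used is not named in the text, so this is an assumption):

```python
from statsmodels.stats.proportion import proportions_ztest

# 30 of 32 stayed for $5; 29 of 30 stayed for $10.
stat, p = proportions_ztest(count=[30, 29], nobs=[32, 30])
print(round(p, 3))  # 0.593, matching the reported value
```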

5 Recruitment at another university

One may be concerned that the results of our initial experiment are due to something peculiar about TIDE Lab's participant pool or recruitment process. After all, we ourselves originally anticipated the registration rate would increase from something like 10% to 25%. Therefore, we conducted an additional study at the Economic Science Institute at Chapman University using their standard recruitment procedures.Footnote 11 To be clear, this new study is not designed to be a formal replication of the initial study. To have 80% power when testing an alternative hypothesis that the registration rate would increase from 6% to 7% at the 95% confidence level would require over 15,000 subjects. The study at Chapman University is meant to determine whether the initial study was a fluke, that is, whether increasing the participation payment would increase the participation rate along the lines of our original priors.Footnote 12
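Both the 15,000 figure here and the figure of just under 160 in Footnote 12 are consistent with the standard pooled-variance sample size formula for a one-tailed two-proportion test; the sketch below assumes that formula, since the text does not state which one was used.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a one-tailed two-proportion z-test."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = za * (2 * pbar * (1 - pbar)) ** 0.5 + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5
    return (num / abs(p2 - p1)) ** 2

print(2 * n_per_group(0.06, 0.07))  # ~15,000 subjects in total
print(2 * n_per_group(0.10, 0.25))  # just under 160 (cf. Footnote 12)
```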

An additional benefit of conducting the study at the Economic Science Institute is that approximately 600 people in their subject pool had previously completed a survey including demographic information such as sex assigned at birth, class standing, GPA, and major, as well as assessments of personal characteristics including CRT score, competitiveness, risk tolerance, time preferences, and belief in others' trustworthiness. From the subset of people who had completed the survey, 200 people were randomly selected and evenly split into two groups. Both groups received a standard recruitment email from the lab inviting them to participate in a thirty-minute session. The only difference in the recruitment messages was that one group was offered a show-up payment of $7 (the standard amount for the lab) and the other group was offered a payment of $15. As in the first two studies, all recruitment emails were sent out at the same time and registration closed prior to the start of the first session. Table 3 compares the characteristics of the two groups receiving recruitment emails and shows that the groups are balanced.

Table 3 Comparison of characteristics between the two groups receiving different advertised participation payments

                                                       Samples                                     Test for equality
                                                       Recruited at $7 (1)   Recruited at $15 (2)  p-values (3)
Sex assigned at birth (Female = 1, Male = 0)           64.0% (0.48)          69.0% (0.46)          0.454
Major (Econ, ACCT, BA = 1, Else = 0)                   38.0% (0.49)          38.0% (0.49)          1.000
GPA (Scale = 0 to 4)                                   3.64 (0.32)           3.68 (0.29)           0.389
Class Standing (fr. = 1, soph. = 2, jr. = 3, sr. = 4)  1.91 (1.18)           2.11 (1.15)           0.227
CRT Score (Scale = 0 to 7)                             3.38 (2.11)           3.30 (2.31)           0.799
Competitiveness (Scale = 0 to 10)                      7.08 (2.00)           7.14 (1.85)           0.826
Willingness to take risks (Scale = 0 to 10)            6.08 (1.99)           5.99 (1.78)           0.736
Prefer enjoying oneself today (Scale = 1 to 5)         2.90 (0.97)           3.09 (0.98)           0.169
Belief in others' trustworthiness (Scale = 1 to 5)     2.56 (1.03)           2.76 (1.01)           0.166
Observations                                           100                   100

Standard deviations are reported in parentheses. To test for equality in characteristics between the two groups recruited with different advertised participation payments, we use proportion tests for the first two variables (sex assigned at birth and major) and t-tests for the other variables. Three freshman students did not provide GPA information and are dropped from the GPA calculations

Twelve of the 100 subjects who were offered $7 registered while 16 of the 100 subjects offered $15 registered. The difference in registration rates between these two groups is not statistically significant (12% vs. 16%, p-value = 0.415).Footnote 13 This insignificant effect aligns with our initial experiment, although there is nominally more separation and the overall registration rate is about twice as high.
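The reported p-value can be reproduced with a two-sample proportions z-test (a sketch; the specific test used is not named in the text):

```python
from statsmodels.stats.proportion import proportions_ztest

# 12 of 100 registered at $7 vs. 16 of 100 at $15.
stat, p = proportions_ztest(count=[12, 16], nobs=[100, 100])
print(round(p, 3))  # 0.415
```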

Table 4 compares the characteristics of the people who registered with those who did not for both advertised amounts. Characteristics of those who registered and characteristics of those who did not register are similar for both advertised show-up payments.Footnote 14 More importantly, the characteristics of those who registered for $7 and those who registered for $15 do not differ statistically. This suggests that small variation in show-up payments may not affect the composition of participants.

Table 4 Comparison of characteristics between those who registered and those who did not for both advertised participation amounts

                                                     Recruited at $7                           Recruited at $15                          p-values of tests for equality
                                                     Registered    Not registered  p-value     Registered    Not registered  p-value     Registered       Not registered
                                                     (1)           (2)             (3)         (4)           (5)             (6)         at $7 vs. $15 (7) at $7 vs. $15 (8)
Sex assigned at birth (Female = 1, Male = 0)         50.0% (0.50)  65.9% (0.48)    0.282       56.3% (0.50)  71.4% (0.45)    0.229       0.743            0.436
Major (Econ, ACCT, BA = 1, Else = 0)                 41.7% (0.49)  37.5% (0.48)    0.780       37.5% (0.48)  38.1% (0.49)    0.964       0.823            0.936
GPA (Scale = 0 to 4)                                 3.52 (0.33)   3.66 (0.31)     0.155       3.57 (0.36)   3.70 (0.27)     0.111       0.710            0.374
Class Standing (fr. = 1, soph. = 2, jr. = 3, sr. = 4) 2.50 (1.45)  1.83 (1.13)     0.065       2.13 (1.31)   2.11 (1.13)     0.955       0.480            0.109
CRT Score (Scale = 0 to 7)                           3.58 (2.11)   3.35 (2.12)     0.724       3.06 (2.26)   3.35 (2.33)     0.656       0.541            0.984
Competitiveness (Scale = 0 to 10)                    7.50 (2.71)   7.02 (1.89)     0.441       6.81 (1.91)   7.20 (1.84)     0.442       0.437            0.529
Willingness to take risks (Scale = 0 to 10)          6.33 (2.61)   6.05 (1.91)     0.640       6.31 (1.70)   5.93 (1.80)     0.431       0.980            0.680
Prefer enjoying oneself today (Scale = 1 to 5)       2.75 (1.06)   2.92 (0.96)     0.570       3.44 (0.89)   3.02 (0.98)     0.121       0.073            0.486
Belief in others' trustworthiness (Scale = 1 to 5)   2.50 (1.31)   2.57 (0.99)     0.831       3.25 (1.00)   2.67 (0.99)     0.033       0.098            0.515
Observations                                         12            88                          16            84

Standard deviations are reported in parentheses. To test for equality in characteristics of the people who registered and who did not for both advertised amounts, we use proportion tests for the first two variables (sex assigned at birth and major). For other variables, we use t-tests to test for equality. Three freshman students did not provide GPA information and are dropped from the GPA calculations (one did not register at $7, one registered at $15 and one did not register at $15)

6 Discussion

In a series of experiments, we find little evidence to suggest increases in the advertised participation payment will lead to substantial increases in participation rates in laboratory experiments, at least when using typical monetary amounts and standard recruitment procedures. In our initial experiment, we sent emails to 1734 potential participants with advertised amounts varying from $5 to $15 (embedded in the text of the email per standard procedure at our lab) and the average registration rate was about 7%. The estimated increase in the registration rate for the $10 increase was about 1 percentage point, and we show the insignificance of the coefficient does not depend upon the email read rate. In another experiment, we varied the advertised amount from $5 to $100 and included this amount in the subject line of the email. This is the only experiment where we find marginally significant evidence that a higher advertised amount leads to greater participation, but the overall participation rate was only 7% and marginal significance was only observed when large dollar amounts were involved. Analysis of this experiment indicates raising the advertised amount from $5 to $15 leads to approximately a 1 percentage point increase in participation, as in the first experiment. Increasing the advertised amount from $60 to $100 is estimated to lead to an 11 percentage point increase in registration rates, which is marginally significant in a one-sided test and is a larger effect size than was observed by Harrison et al. (2020) in a field experiment using similar advertised amounts. In a separate experiment conducted at a different laboratory with 200 subjects, we found that increasing the participation payment from $7 to $15 nominally, but not significantly, increased registration rates from 12% to 16%.

Our initial experiment was designed to answer the call by Harrison et al. (2020) to vary advertised participation payments in order to induce variation in participation rates so that the impact of selection bias could be identified in experiments. Ultimately, we did not observe variation in the participation rate in our initial experiment and thus could not use the advertised payment amount as an exclusion restriction to control for selection bias in a two-stage Heckman procedure. Further, the experiment we conducted at another university enabled us to identify characteristics of the people being recruited, and we did not find evidence that the advertised amount impacted who was willing to participate and who was not. Overall, our findings suggest using relatively small variation in the advertised participation payment to control for selection bias may require access to very large pools of potential participants. However, we do find marginally significant evidence that advertising different large monetary amounts may lead to variation in participation rates, akin to what has been observed in the field. Unfortunately, offering $60 or $100 as a participation payment may not be practical for many laboratory studies given the substantial increase in research cost.

While we did not observe sizable variation in participation rates, our results provide useful insights for experimental economists. First, researchers with limited budgets conducting lab experiments are likely better served keeping participation payments low and either raising the salient payoffs associated with the study, collecting data from more subjects even if that requires soliciting more potential participants, or using the funds to run additional treatments or other experiments altogether. Second, our results suggest that people who are willing to participate in lab experiments are motivated by relatively small amounts of money. This is further supported by our experiment where people who had just completed one study were offered money to stay and participate in another study. Regardless of whether they were verbally offered $5 or $10, nearly everyone stayed and most were willing to complete real effort tasks for as little as $0.25 despite having already earned $30.39 on average in the prior experiment. This suggests the stakes in a typical laboratory experiment are sufficiently high to motivate the participants, consistent with induced value theory and counter to often-heard criticisms of laboratory experiments. We also note that the fact that varying participation payments did not have the intended effect does not preclude the existence of successful methods for inducing variation in lab experiment participation. We see exploring such alternative avenues as an important direction for future research.

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s10683-024-09840-2.

Acknowledgements

We thank the editor, three anonymous referees, Bob Hammond, Matt Webb, and participants at various conferences for useful comments and criticisms which led to an improved version of the paper. We would also like to thank Buddy Anderson, Zachary Dorobiala, Jeffery Kirchner, and Megan Luetje for research assistance. Support for this project was provided by the University of Alabama and Chapman University. The authors have no conflicts of interest with this work. The replication and supplementary material for the study is available at https://doi.org/10.48707/3v2v-pk82.

Footnotes

1 While there are some studies documenting the degree of selection bias in lab experiments (Harrison et al., 2009; Slonim et al., 2013; Cleave et al., 2013; Falk et al., 2013; Snowberg and Yariv, 2021), there has been relatively little work attempting to control for selection bias when identifying treatment effects. One exception is Andersen et al. (2010), which compares the effect of skewness frames on elicited risk attitudes and the effect of time horizon on elicited time preferences for both a field sample and laboratory participants. Their results indicate that the size of the treatment effects differed, but the comparative static results were similar.

2 Advertised amounts included $5.08, $5.17, $5.23, $14.91 and $14.93, along with 95 other amounts. An advantage of this approach over only offering two amounts is that it affords greater information about the shape of the response curve. For example, if one only used $5 and $15 and observed a large difference in the participation rate, it would not be possible to tell whether the change was linear or driven by a discrete jump at, say, $10.

3 Given that we have a binary response variable, our bootstrap procedure involves randomly drawing from a Uniform distribution (from 0 to 1) and assigning a value of 1 for the bootstrapped outcome variable if the fitted value from the initial model exceeds the draw from the Uniform random variable (else zero). For an example of this bootstrap in a nonparametric setting, see Henderson and Sperlich (2023, p. 272). Throughout the paper, bootstrapped results are based on 999 replications.
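A sketch of this parametric bootstrap on placeholder data (variable names and the stand-in data are illustrative, not the authors' code):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
pay = rng.uniform(5, 15, 1734)                   # stand-in for the study data
y = (rng.uniform(size=1734) < 0.072).astype(int)
X = sm.add_constant(pay)

fit0 = sm.Probit(y, X).fit(disp=0)
phat = fit0.predict(X)                           # fitted probabilities

boot = []
for _ in range(999):                             # 999 replications, as in the paper
    u = rng.uniform(size=y.size)
    ystar = (phat > u).astype(int)               # 1 if the fitted value exceeds the U(0,1) draw
    boot.append(sm.Probit(ystar, X).fit(disp=0).params)

se = np.asarray(boot).std(axis=0, ddof=1)        # bootstrapped standard errors
print(se)
```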

4 The contour plots are only defined for (X, Y) values where 5 ≤ Y < X ≤ 15 because advertised amounts varied between $5 and $15.

5 If the 125 people who registered are the only ones who read the email, meaning 1609 people did not read the email, then the registration rate is 100%. While the advertised amount would not have an effect on the registration rate in this scenario, the lack of variation in the dependent variable precludes conducting Probit analysis, which is why we do not include the case of K = 1609 in Fig. 3. The case of K = 0 corresponds to all 1734 people having read the email, which is the scenario analyzed in the previous subsection.

6 The original intent of this study was to determine the cost of rerunning the first study with a large variation in registration rates.

7 The people who came to the lab completed an experiment similar to the one administered to those who participated in the initial study. Given how few people came to the lab, we do not attempt to infer whether the participation payment has any impact on participants' behavior or treatment effects.

8 To have 80% power to detect the observed effect size in a one-sided test would require inviting 663 people per dollar amount.

9 One can view the initial experiment as having a floor effect since the participation rate was low for the highest offered amount.

10 Two people completed more than 17 tasks, while the others stopped when the piece rate was higher than $0.25. The $30.39 includes two people who were bumped from the prior experiment and had only received the $7.50 participation payment associated with that study. In addition to the money earned in the prior experiment, these subjects could have taken the $5 or $10 payment for the additional study and not completed any real effort tasks.

11 While both labs maintain standing subject pools for economics experiments, there are differences in their procedures (e.g., text of fliers for recruitment, etc.). Both labs use online recruitment systems that email potential participants about upcoming studies—TIDE Lab uses Sona Systems software for this purpose while the Economic Science Institute uses software developed in-house.

12 For example, if one expects the participation rate to increase from 10% to 25% when increasing the participation payment from $7 to $15, 80% power requires just under 160 total subjects for a one-tailed test at the 95% confidence level.

13 One subject who registered for $7 did not show up. All of the results are qualitatively similar if one considers subjects who came to the lab rather than subjects who registered. Because 28 registrants would be insufficient to compare the behavior of the two payment groups, no in-lab experiment was actually conducted. Had the participation rate been higher, a similar experiment to that used for the initial study would have been implemented.

14 The only instance in which there is a statistically significant difference at the 5% level is for belief in others' trustworthiness when the show-up payment is $15. Given that this is one of 18 statistical tests, it may simply be Type I error.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

References

Allred, S., Duffy, S., & Smith, J. (2016). Cognitive load and strategic sophistication. Journal of Economic Behavior & Organization, 125, 162–178. https://doi.org/10.1016/j.jebo.2016.02.006
Andersen, S., Harrison, G. W., Lau, M. I., & Elisabet Rutström, E. (2010). Preference heterogeneity in experiments: Comparing the field and laboratory. Journal of Economic Behavior & Organization, 73(2), 209–224. https://doi.org/10.1016/j.jebo.2009.09.006
Azar, O. (2019). Do fixed payments affect effort? Examining relative thinking in mixed compensation schemes. Journal of Economic Psychology, 70, 52–66. https://doi.org/10.1016/j.joep.2018.10.004
Carpenter, J., Graham, M., & Wolf, J. (2013). Cognitive ability and strategic sophistication. Games and Economic Behavior, 80, 115–130. https://doi.org/10.1016/j.geb.2013.02.012
Cleave, B. L., Nikiforakis, N., & Slonim, R. (2013). Is there selection bias in laboratory experiments? The case of social and risk preferences. Experimental Economics, 16(3), 372–382. https://doi.org/10.1007/s10683-012-9342-8
Crosetto, P., & Filippin, A. (2013). The "bomb" risk elicitation task. Journal of Risk and Uncertainty, 47(1), 31–65. https://doi.org/10.1007/s11166-013-9170-z
Crosetto, P., & Filippin, A. (2016). A theoretical and experimental appraisal of four risk elicitation methods. Experimental Economics, 19(3), 613–641. https://doi.org/10.1007/s10683-015-9457-9
Duffy, S., & Smith, J. (2014). Cognitive load in the multi-player prisoner's dilemma game: Are there brains in games? Journal of Behavioral and Experimental Economics, 51, 47–56. https://doi.org/10.1016/j.socec.2014.01.006
Eckel, C. C., & Grossman, P. J. (1996). Altruism in anonymous dictator games. Games and Economic Behavior, 16(2), 181–191. https://doi.org/10.1006/game.1996.0081
Eckel, C. C., & Grossman, P. J. (2003). Rebate versus matching: Does how we subsidize charitable contributions matter? Journal of Public Economics, 87(3–4), 681–701. https://doi.org/10.1016/S0047-2727(01)00094-9
Eckel, C. C., & Grossman, P. J. (2006). Subsidizing charitable giving with rebates or matching: Further laboratory evidence. Southern Economic Journal, 72(4), 794–807.
Eckel, C. C., & Grossman, P. J. (2008). Subsidizing charitable contributions: A natural field experiment comparing matching and rebate subsidies. Experimental Economics, 11(3), 234–252. https://doi.org/10.1007/s10683-008-9198-0
Falk, A., Meier, S., & Zehnder, C. (2013). Do lab experiments misrepresent social preferences? The case of self-selected student samples. Journal of the European Economic Association, 11(4), 839–852. https://doi.org/10.1111/jeea.12019
Harrison, G. W., Lau, M. I., & Elisabet Rutström, E. (2009). Risk attitudes, randomization to treatment, and self-selection into experiments. Journal of Economic Behavior & Organization, 70(3), 498–507. https://doi.org/10.1016/j.jebo.2008.02.011
Harrison, G. W., Lau, M. I., & Yoo, H. I. (2020). Risk attitudes, sample selection, and attrition in a longitudinal field experiment. Review of Economics and Statistics, 102(3), 552–568. https://doi.org/10.1162/rest_a_00845
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153–161. https://doi.org/10.2307/1912352
Henderson, D. J., & Sperlich, S. (2023). A complete framework for model-free difference-in-differences estimation. Foundations and Trends® in Econometrics, 12(3), 232–323. https://doi.org/10.1561/0800000046
Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655. https://doi.org/10.1257/000282802762024700
Santos Silva, J. M. C., & Tenreyro, S. (2006). The log of gravity. Review of Economics and Statistics, 88(4), 641–658. https://doi.org/10.1162/rest.88.4.641
Slonim, R., Wang, C., Garbarino, E., & Merrett, D. (2013). Opting-in: Participation bias in economic experiments. Journal of Economic Behavior & Organization, 90, 43–70. https://doi.org/10.1016/j.jebo.2013.03.013
Snowberg, E., & Yariv, L. (2021). Testing the waters: Behavior across participant pools. American Economic Review, 111(2), 687–719. https://doi.org/10.1257/aer.20181065