1 Introduction
Why does the level of cooperation vary across societies and organizations? A natural answer is that rules, and the strength of their enforcement, might differ. One expects high levels of cooperation where formal enforcement punishes defectors, for instance by means of high fines. When strong formal enforcement is absent, cooperation can be sustained if cooperative values are prevalent enough in the group. For this second driver of cooperation, learning about the cooperativeness of the group becomes essential.
In this paper we study the interaction between these two drivers of cooperation. We explore a simple intuition: formal enforcement not only affects individual decisions to cooperate, it also shapes the capacity to learn about the group's cooperativeness. In contexts with high fines for those who do not cooperate, it is difficult to tell people who cooperate because of the threat of fines apart from intrinsically cooperative types. The shadow of the law thus affects learning about the group, and hence future cooperation. Consider for instance the situation of a taxpayer who needs to decide whether to truthfully report income or to evade taxes. Taxpayers are disciplined by fines (Bérgolo et al., 2021), but at the same time high fines elicit tax compliance behavior that does not reveal the general intrinsic honesty of the population, which matters for future decisions.Footnote 1 In this paper, we provide evidence, both theoretical and experimental, on this interaction between fines and learning.
We rely on a lab experiment where participants play a series of indefinitely repeated prisoner's dilemmas. At the beginning of each game, it is randomly determined whether formal enforcement, in the form of a fine imposed in all rounds of the game on a participant who deviates rather than cooperates, will apply.Footnote 2 At the end of the game, each participant is re-matched with a new partner and it is randomly determined whether the new game is played with fines. The design ensures that each participant (i) has a different history of exposure to fines and of past behavior of partners, and that this history both (ii) does not depend on self-selection into particular environments, and (iii) is independent of the current environment faced by each individual. Each experimental subject thus faces a different history of past cooperation observed in different enforcement environments.
Our first main result shows that, in early games, past enforcement negatively affects current cooperation. We argue that the interaction between cooperation-enforcing institutions and learning can potentially explain such a pattern. Consider the case where the population is fairly non-cooperative (i.e., less cooperative than expected). In this case, experiencing a fine can speed up learning the bad news, since observing deviation in an environment with fines is a strong indicator that the partner is non-cooperative. Conversely, learning will be slow within a cooperative group in an environment with fines: in such a context, it is not possible to tell whether cooperation is driven by fines or by partners' willingness to cooperate. This interaction between fines and learning is the key driving force of our model, whose predictions are confirmed in the data.
The finding that enforcement in the past decreases current cooperation in early games contrasts with what would be expected based on the literature documenting the behavioral spillovers of enforcement institutions—i.e., the fact that enforcement institutions faced either in the past (e.g., Peysakhovich & Rand, 2016; Duffy & Fehr, 2018; Galizzi & Whitmarsh, 2019, for a survey) or in other games (Engl et al., 2021) affect the current willingness to cooperate through behavioral channels.Footnote 3 Galbiati et al. (2018) show that enforcement institutions foster future cooperation through indirect spillovers—fines increase the likelihood that the current partner cooperates, which in turn induces more cooperation in the future through indirect reciprocity (Nowak & Roch, 2007).Footnote 4 This previous study uses the same experiment as the current paper, but focuses only on games occurring late in the experiment, under the assumption that learning has converged. In this paper, we focus on the interaction between fines and rational learning. In early games, there is uncertainty about intrinsic values in the group (i.e., about whether the average partner is of a cooperative or a non-cooperative type). In such circumstances, our theoretical model shows that partners' behavior in previous games brings information about how cooperative the group is and thus affects current behavior. From an empirical point of view, observing learning is challenging because of the simultaneous effect of behavioral spillovers. We propose two strategies to identify learning separately from spillovers.
The main identification strategy exploits the idea that behavioral spillovers do not last (as shown in Galbiati et al., 2018) while learning is cumulative: whether cooperation was observed one or two periods ago does not matter for learning, as the information it delivers remains the same, but it does matter for spillovers if they decay over time. Thanks to the assumption that spillovers are short-lived, we can disentangle the two by regressing current cooperation levels on variables that are order dependent (spillovers) and order independent (learning). Our results show that replacing in the history one signal of deviation without a fine by a signal of cooperation without a fine increases current cooperation by 10%, while replacing it by a signal of cooperation with a fine increases current cooperation by only 5%. This is consistent with rational learning dynamics.
As a robustness check, we also provide a structural analysis (in the “Alternative identification strategy” in Appendix 2) which identifies learning in early games conditional on behavioral spillover parameters, under the assumption that learning has converged in late games. We first generalize our theoretical model to the case in which individual values evolve over time as a result of past experience. This extended model allows us to express the probability of cooperation as a function of both learning and spillover parameters that we can estimate with our experimental data. The results confirm that lab participants behave in accordance with the learning dynamics described by the model: cooperation by the partner in the previous game, if it was played with a fine, has a smaller positive effect than if this cooperation took place in a game without fines. This learning effect implies that current fines may negatively affect future cooperation: if the group is non-cooperative, fines may speed up learning, since more individuals will be observed deviating in a coercive environment.
By documenting the dynamic interaction between enforcement and learning about group values, our study makes several contributions to the existing literature.Footnote 5 Acemoglu and Jackson (2015) study how norms of cooperation can emerge in an environment where current generations learn from observed cooperation in past generations. They do not, however, consider the effect of institutional variations in the past and their interactions with learning. A recent literature shows that formal rules (Sliwka, 2007; Van Der Weele, 2009; Deffains & Fluet, 2020) or principals' interventions (Friebel & Schnedler, 2011; Galbiati et al., 2013) can convey information on their own about either the distribution of preferences or values in a group, or the type of the principal (Falk & Kosfeld, 2006; Bowles, 2008), thus leading to ambiguous contemporaneous effects of sanctions. The main focus of our study is rather on the information delivered by agents' actions depending on the enforcing institutions, as in Benabou and Tirole (2011). In their setup, individuals care about their social image, which is based on inferences made by other group members about their types. Institutions shape equilibrium behaviors and thus the inference induced by different actions. In our study, we rather focus on the informativeness of actions about cooperative types under different enforcement environments. We show that differential learning due to enforcement institutions leads to countervailing spillover effects on future cooperation as long as learning is in progress: strong enforcement weakens the signal of cooperativeness sent by cooperative types, and thereby slows down future cooperation.
Our results are also informative about the impact of enforcement on learning, and are thus particularly relevant for the performance of young organizations whose members have not yet learned about the cooperativeness of the others. Such an effect of enforcement on the extent to which cooperative decisions reveal intrinsic cooperativeness also echoes Ali and Bénabou's (2020) social image model. In their framework, a benevolent principal needs to learn about the values prevalent in a group of agents who care about their social image. Values evolve over time and the principal needs to decide on the optimal level of transparency to learn the prevalent values. A key result of the model is that some privacy is needed to achieve this goal: a high level of transparency leads to pro-social decisions that are mainly driven by social-image motives and are not representative of individual values. Our results provide empirical support for this key insight on the interaction between learning and enforcement. Last, we also contribute to the experimental literature on repeated games. Our results show how learning generates interdependence across games even when subjects are randomly re-matched. This suggests that independence across games is not guaranteed even in settings with random matching and incentive-compatible choices within each game. This point is consistent with previous findings showing that learning about the properties of the group matters for subjects' choices (Dal Bó & Fréchette, 2011; Gill & Rosokha, 2020)Footnote 6 and with the theoretical results of Azrieli et al. (2018), showing how uncertainty about the population can generate failures of incentive compatibility of the random incentive system.
2 Descriptive experimental evidence
2.1 Experimental design
The design of the baseline experiment closely follows the experimental literature on infinitely repeated games, and in particular Dal Bó and Fréchette (2011). Subjects in the experiment play infinitely repeated games implemented through a random continuation rule. At the end of each round, the computer randomly determines whether or not another round is to be played in the current repeated game (“match”). This probability of continuation is fixed at 0.75 and is independent of any choices players make during the match. Participants therefore play a series of matches of random length, with an expected length of 4 rounds. At the end of each match, players are randomly and anonymously reassigned to a new partner to play the next match. This corresponds to a quasi-stranger design, since there is a non-zero probability of being matched more than once with the same partner during the experiment. The experiment terminates once the match being played at the 15th minute ends.
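To make the protocol concrete, the following minimal sketch (ours, not the original experimental software) simulates the random continuation rule; the continuation probability of 0.75 is the value implied by an expected match length of 4 rounds.

```python
import random

DELTA = 0.75  # continuation probability; expected match length is 1 / (1 - DELTA) = 4 rounds

def match_length(rng: random.Random) -> int:
    """Draw the length of one indefinitely repeated match: after each
    round, the computer continues with probability DELTA and stops
    otherwise, so lengths are geometrically distributed."""
    rounds = 1
    while rng.random() < DELTA:
        rounds += 1
    return rounds

rng = random.Random(42)
lengths = [match_length(rng) for _ in range(10_000)]
print(sum(lengths) / len(lengths))  # close to the expected length of 4
```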
The stage game in all interactions is a prisoner's dilemma. Enforcing institutions are randomly assigned: at the beginning of each match, the computer randomly determines whether the match is played with a fine imposed in case of defection (payoffs in Table 1b) or without (Table 1a); the two events occur with equal probability. The result of this draw applies to both players of the current match, and to all its rounds. The fine, when imposed, is set at F = 10, so that the resulting stage-game payoff matrix is isomorphic to the Dal Bó and Fréchette (2011) treatment in which cooperation is a subgame-perfect and risk-dominant action. When matched with a new partner, subjects are not provided with any information about the partner's history. Players however receive full feedback at the end of each round about the actions taken within the current match.
Table 1 Stage-game payoffs

(a) Baseline game

|   | C       | D       |
|---|---------|---------|
| C | 40 ; 40 | 12 ; 60 |
| D | 60 ; 12 | 35 ; 35 |

(b) With fine

|   | C         | D           |
|---|-----------|-------------|
| C | 40 ; 40   | 12 ; 60-F   |
| D | 60-F ; 12 | 35-F ; 35-F |
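As a quick consistency check (our own illustration; the fine level F = 10 is inferred from the isomorphism with the Dal Bó and Fréchette (2011) matrix rather than printed in this extract), one can verify that both stage games satisfy the prisoner's dilemma ordering T > R > P > S:

```python
# Row-player payoffs from Table 1: R (both cooperate), T (temptation),
# S (sucker), P (both defect). The fine F is subtracted whenever a
# player defects, so only T and P change in the fined game.
R, T, S, P = 40, 60, 12, 35
F = 10  # inferred value, chosen so that T - F = 50 and P - F = 25

def is_prisoners_dilemma(r: int, t: int, s: int, p: int) -> bool:
    """PD ordering: temptation > reward > punishment > sucker."""
    return t > r > p > s

print(is_prisoners_dilemma(R, T, S, P))          # True: baseline game
print(is_prisoners_dilemma(R, T - F, S, P - F))  # True: game with fine
```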
2.2 Experimental data
Our data come from three sessions of the experiment conducted at the Ecole Polytechnique experimental laboratory. The 46 participants are both students (85% of the experiment pool) and employees of the university (15%). Individual earnings are computed as the sum of all tokens earned during the experiment, with an exchange rate of 100 tokens for 1 Euro. At the end of the experiment, participants are asked to answer a socio-demographic questionnaire about their gender, age, level of education, labor market status (student/worker/unemployed), as well as the Dohmen et al. (2011) self-reported measure of risk aversion. Participants earned on average 12.1 Euros from an average of 20 matches, each featuring 3.8 rounds. These data deliver 934 game observations, 48% of which are played with no fine.
All our analysis in this paper is based on the action chosen in the first round of each match. While this first-round decision captures the effect of the past history of play on individual behavior, the decisions made within the course of a match mix this component with the strategic interaction with the current partner, and would thus be noisy measures of learning. To make this restriction meaningful, and to be consistent with the model we introduce in Sect. 3, we restrict the sample to player-game observations for which the first-round decision summarizes the future history (“Replication of the statistical analysis on the full sample” in Appendix 3 provides a replication of our statistical analysis on the full sample). As explained in more detail in the “Data description” in Appendix 1, if subjects choose among the repeated-game strategies Always Defect (AD), Tit-for-Tat (TFT) and Grim Trigger (GT), the first-round decision is a sufficient statistic for the future sequence of play. While AD dictates defection at the first round, TFT and GT both induce cooperation at the first round; the two are observationally equivalent, and give rise to the same expected payoff, whenever the partner also chooses within this set of three strategies. The resulting working sample consists of 785 games, 50.3% of which are played with a fine. Our outcome variable of interest is the first-round decision made by each player in each of these matches. Importantly, all lagged variables are computed according to actual past experience: one's own cooperation at the previous match, the partner's decision and whether the previous match was played with a fine are all defined according to the match played just before the current one, whether or not this previous match belongs to the working sample.
2.3 Learning to cooperate: descriptive evidence
Figure 1 provides an overview of the cooperation rate observed in each of the two institutional environments. The overall average cooperation rate is 32%, with a strong gap depending on whether a fine enforces cooperation: the average cooperation rate jumps from 19% in the baseline to 46% with a fine. This is clear evidence of a strong disciplining effect of current enforcement. Figure 1a documents the time trend of cooperation over matches. The vertical line identifies the point in time beyond which we no longer observe a balanced panel—the number of matches played within the duration of the experiment is individual specific, since it depends on game lengths. Time trends beyond this point are to a large extent driven by the size of the sample. Focusing on the balanced panel, our experiment replicates in both environments the standard decrease in cooperation rates: from 15% (baseline) and 69% (fine) at the initial match, to 11% and 41% at the 13th game. The time trends are parallel between the two conditions. Note that since the history of past enforcement is both individual specific and random, it is statistically the same for the two curves at any match number.
Figure 1b reorganizes the same data at the individual level, and displays the cumulative distribution of cooperation in a given environment. We observe variations in both the intensive and the extensive margins of cooperation in the adjustment to the fine—resulting in the distribution of cooperation with a fine first-order stochastically dominating the distribution without. First, regarding the extensive margin, we observe a shift in the probability mass of subjects who always choose the same first-round response: 45% never cooperate when no fine applies, while only 26% never cooperate with a fine, and the share of subjects who always cooperate rises from 4 to 17% when a fine is implemented. More than half of the difference in mass at 0 thus moves to 1. Turning to the intensive margin, the distribution of cooperative decisions with no fine is more concentrated towards the left: 70% of the individuals who switch between cooperation and defection cooperate less than 30% of the time with no fine, while this is the case for only 40% of the switchers in the fine environment.
We now turn to the main focus of the paper. To present the evidence graphically, we restrict attention to early games, where the uncertainty about group cooperativeness is large.Footnote 7 Figure 2a documents the surprising effect of fines experienced in previous matches on current cooperation. Comparing the two left-hand side bars to the right-hand side ones unambiguously shows that current enforcement has a strong disciplining effect. For instance, restricting to matches where no fine was experienced in the past, the average rate of cooperation increases from 0.25 to 0.54 in environments with enforcement (bars 1 and 3). On the contrary, enforcement in the past induces a fall in current cooperation. For instance, comparing the two bars on the right-hand side, corresponding to matches where a fine is currently in place, having played the previous match with fines decreases cooperation from an average of 0.54 to 0.38 (bars 3 and 4). Such an effect of past enforcement is puzzling, since one would expect past fines to be either neutral or to exert a positive effect on current cooperation through behavioral channels (e.g., Peysakhovich & Rand, 2016).
The interaction between cooperation-enforcing institutions and learning can potentially explain such a pattern. Consider the case where the news is bad (i.e., the population is less cooperative than expected, as seems to be the case according to the evolution of cooperation over time shown in Fig. 1). In this case, experiencing a fine can speed up learning the bad news, since observing deviation in an environment with fines is a strong indicator that the partner is non-cooperative. This interaction between enforcement and learning is presented in Fig. 2b, which reports the level of cooperation depending on whether cooperation (right panel) or a deviation (left panel) was observed in the previous match, in an environment with or without a fine. Comparing the two left-hand side bars to the right-hand side ones shows that cooperation by the partner in the previous match increases cooperation in the current match, consistent with the idea that experimental subjects learn about the willingness to cooperate of their partners from their decisions. However, this learning is clearly affected by the institutional environment. When the cooperative action was taken in an environment without fines, it leads to higher levels of current cooperation. For instance, comparing the two bars on the right-hand side, corresponding to matches where the partner cooperated in the previous match, if that cooperation was observed in an environment with no enforcement, the average level of cooperation is 0.69, while it falls to 0.49 if the previous match was played with fines (bars 3 and 4).
Changes in cooperation according to the history of institutional exposure however combine the effect of learning with the direct effect of past enforcement on cooperation behavior. To clarify the link between learning and enforcement institutions, we now turn to a theoretical model that formalizes the interaction between the institutional environment and learning about group cooperativeness.
3 A theoretical model of cooperation dynamics
In each match (we use index $t$ for the match number), the players simultaneously choose between actions $C$ and $D$ to maximize their payoff in the current match. At the end of the match, players observe the partner's decision. In the case where a match is a repeated prisoner's dilemma, as in the experiment, this requires the first-period action in a match to fully summarize strategies. To ease exposition, we denote by $i$ the player under consideration and by $j$ the partner of $i$ in match $t$. Whether player $i$ experiences a fine in match $t$ is tracked by the variable $f_t \in \{0,1\}$, and the action of player $j$ in match $t$ is denoted $a_{j,t} \in \{C,D\}$.
The payoff of player $i$ from choosing action $a$ in match $t$ is denoted $U_{i,t}(a)$ and is given byFootnote 8:

$$U_{i,t}(a) = \pi_{i,t}(a) + v_i \,\mathbb{1}\{a = C\},$$

where $\pi_{i,t}(a)$ is the material payoff player $i$ expects from choosing action $a$ in match $t$. This expected payoff depends in particular on the belief player $i$ holds about the probability that the partner cooperates, $p_{i,t}$, and on whether the current match is played with a fine, $f_t$. Note that $p_{i,t}$ is in fact a function of $f_t$, since the presence of a fine affects the probability that the partner cooperates.Footnote 9
The parameter $v_i$ measures player $i$'s intrinsic values, i.e., the individual propensity to cooperate.Footnote 10 We suppose there is uncertainty about the group's values, i.e., about the set of individual values $\{v_i\}$. We consider two possible states of the world. With probability $q_0$ the state is high ($H$) and $v_i$ is drawn from the normal distribution $\mathcal{N}(\mu_H, \sigma^2)$, while with probability $1-q_0$ the state is low ($L$) and $v_i$ is drawn from $\mathcal{N}(\mu_L, \sigma^2)$, with $\mu_L < \mu_H$. The value attached to cooperation by society is higher in the high state.
3.1 Benchmark model
First consider a benchmark model with no uncertainty on values (values are drawn from a single known distribution $\mathcal{N}(\mu, \sigma^2)$). We now use the specific payoffs corresponding to the prisoner's dilemma to explicitly describe the impact of fines on payoffs. Denote $\pi(a_i, a_j)$ the monetary payoff of $i$ in a match where $a_i$ is played against $a_j$. Individual $i$, with belief $p$ that her partner will cooperate, chooses action $C$ if and only if the following condition is satisfiedFootnote 11:

$$v_i + p\,\pi(C,C) + (1-p)\,\pi(C,D) \;\ge\; p\,[\pi(D,C) - Ff_t] + (1-p)\,[\pi(D,D) - Ff_t].$$

This condition can be re-expressed as

$$v_i \;\ge\; a - b\,p - Ff_t \qquad (1)$$

with the parameters defined as $a \equiv \pi(D,D) - \pi(C,D)$ and $b \equiv [\pi(D,D) - \pi(C,D)] - [\pi(D,C) - \pi(C,C)]$.Footnote 12
Condition (1) implies that the decision to cooperate follows a cutoff rule, such that an individual $i$ cooperates if and only if she attaches a sufficiently strong value to cooperation, $v_i \ge \bar v(f)$, where the cutoff depends on whether the current match is played with a fine. Since there is no uncertainty, and thus no learning, all players share the same belief over the probability that the partner cooperates, given by $p(f) = \Pr\bigl(v_j \ge \bar v(f)\bigr)$. The cutoff value is thus defined by the indifference condition:

$$\bar v(f) = a - b\,p(f) - Ff.$$
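As a worked one-shot illustration with the Table 1 payoffs (the paper's condition applies to expected match payoffs, so the numbers below only indicate how $a$ and $b$ are built):

```latex
% Cooperation condition with the Table 1 stage-game payoffs, for a
% player with value v_i and belief p that the partner cooperates:
\[
v_i + 40p + 12(1-p) \;\ge\; (60 - Ff)\,p + (35 - Ff)(1-p)
\quad\Longleftrightarrow\quad
v_i \;\ge\; \underbrace{23}_{a} - \underbrace{3}_{b}\,p - Ff,
\]
% since a = \pi(D,D) - \pi(C,D) = 35 - 12 = 23 and
%       b = [\pi(D,D) - \pi(C,D)] - [\pi(D,C) - \pi(C,C)] = 23 - 20 = 3.
```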
We show in Proposition 1 below that there always exists at least one equilibrium, and this equilibrium is of the cutoff form. There could exist multiple equilibria, but all stable equilibria share the intuitive property that individuals are more likely to cooperate in an environment with fines.
Proposition 1
In an environment with no uncertainty on values, there exists at least one equilibrium. Furthermore, all equilibria are of the cutoff form, i.e., individuals cooperate if and only if $v_i \ge \bar v(f)$, and, in all stable equilibria, $\bar v(f)$ decreases with $F$ and with the mean value $\mu$.
Proof
See “Appendix 5: Proofs”.
The benefit of cooperation is increasing in the probability that the partner cooperates. There exist equilibria where cooperation is prevalent, which indeed makes cooperation individually attractive. On the contrary, there are equilibria with low levels of cooperation, which make cooperation unattractive. These equilibria can be thought of as different norms of cooperativeness in the group, driven by complementarities in cooperation.
3.2 Learning in the shadow of the law
We now consider the more general formulation with uncertainty about the group's values. We denote $q_{i,t}$ the belief held by player $i$ at match $t$ that the state is $H$. All group members initially share the same belief $q_0$. They gradually learn about the group's values by observing the decisions of partners in previous matches, and we show how fines impact learning.
First consider the initial match, $t = 1$. All members of the group share the same belief $q_0$ that the state is $H$. The equilibrium is defined by a single cutoff value $\bar v_1(f)$ as in the benchmark model,

$$\bar v_1(f) = a - b\,p_1(f) - Ff.$$

The only difference with the benchmark model is that the probability that the partner cooperates takes into account the uncertainty about the group's values:

$$p_1(f) = q_0 \Pr\bigl(v \ge \bar v_1(f) \mid H\bigr) + (1-q_0) \Pr\bigl(v \ge \bar v_1(f) \mid L\bigr).$$
We now consider how beliefs about the state of the world are updated following the initial match. The updating following this initial match provides all the intuitions for the more general updating process. The update depends on the action of the partner and on whether the match was played with or without a fine. The general notation we use is $q^+(q, a, f)$ for the belief updated from $q$ after observing action $a$ under enforcement regime $f$. For the update following the first match, we can drop the dependence on $q$, since all individuals initially share the same belief, and simply write $q^+(a, f)$.
Clearly, the belief that the state is $H$ decreases if the partner chose $D$, while it increases if the choice was $C$. The update, however, also depends on whether the previous match was played with a fine. If the partner cooperated in the presence of a fine, it is a less convincing signal that society is cooperative than if he cooperated in the absence of a fine—$q^+(C,1) < q^+(C,0)$. Similarly, deviation in the presence of a fine decreases particularly strongly the belief that the state is high—$q^+(D,1) < q^+(D,0)$. This is summarized in the following lemmaFootnote 13:
Lemma 1
In any stable equilibrium, beliefs following the first-period actions are updated in the following way:

$$q^+(C,0) \;>\; q^+(C,1) \;>\; q_0 \;>\; q^+(D,0) \;>\; q^+(D,1).$$
Proof
See “Appendix 5: Proofs”.
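To see Lemma 1 at work numerically, the following sketch (ours, with illustrative parameter values rather than the paper's) computes the four posteriors by Bayes' rule and reproduces the predicted ranking:

```python
from math import erf, sqrt

def norm_sf(x: float, mu: float, sigma: float) -> float:
    """P(v >= x) for v ~ Normal(mu, sigma)."""
    return 0.5 * (1 - erf((x - mu) / (sigma * sqrt(2))))

# Illustrative parameters (ours, not the paper's): values are higher in
# state H, and the cutoff is lower when a fine applies (more cooperation).
MU_H, MU_L, SIGMA = 10.0, 0.0, 5.0
CUTOFF = {0: 8.0, 1: 2.0}   # v_bar(f): the fine (f = 1) lowers the cutoff
Q0 = 0.5                    # common prior that the state is H

def posterior(action: str, f: int, q: float = Q0) -> float:
    """Bayesian update of the belief that the state is H after observing
    the partner play `action` in enforcement regime f."""
    p_h = norm_sf(CUTOFF[f], MU_H, SIGMA)   # P(C | H, f)
    p_l = norm_sf(CUTOFF[f], MU_L, SIGMA)   # P(C | L, f)
    if action == "D":
        p_h, p_l = 1 - p_h, 1 - p_l
    return q * p_h / (q * p_h + (1 - q) * p_l)

for signal in [("C", 0), ("C", 1), ("D", 0), ("D", 1)]:
    print(signal, round(posterior(*signal), 3))
# Output ranking matches Lemma 1: q+(C,0) > q+(C,1) > q0 > q+(D,0) > q+(D,1)
```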
We show in Proposition 2 that this updating property holds in general for later matches. The belief about how likely the partner is to cooperate in match $t$, $p_{i,t}$, depends both on player $i$'s history and on the beliefs about the partner's history. For instance, if the partner faced a lot of cooperation in previous games, she becomes more likely to cooperate. The general problem requires keeping track of higher order beliefs. However, if a stationary equilibrium exists with the property that $\bar v(q,1) \le \bar v(q,0)$ for all beliefs $q$, then the updating property of Lemma 1 is preserved. Furthermore, in “Appendix 5: Proofs”, we show existence of such a stationary equilibrium under a natural restriction on higher order beliefs, i.e., if we assume that a player who holds belief $q$ in match $t$ believes that players in the preceding match held the same belief $q$.
Proposition 2
(Learning) In an environment with spillovers and learning, if an equilibrium exists, all equilibria are of the cutoff form, i.e., individuals cooperate if and only if $v_{i,t} \ge \bar v(q_{i,t}, f_t)$. Furthermore, if in equilibrium $\bar v(q,1) \le \bar v(q,0)$ for all beliefs $q$, then beliefs are updated in the following way following the history in the previous interaction:

$$q^+(q, C, 0) \;>\; q^+(q, C, 1) \;>\; q \;>\; q^+(q, D, 0) \;>\; q^+(q, D, 1).$$
Proof
See “Appendix 5: Proofs”, which proves the result in the more general case with spillovers.
Lemma 1 and Proposition 2 show how enforcing institutions affect learning. These results imply that having fines in the previous match can potentially decrease average cooperation in the current match. If the state is low, a fine can accelerate learning if, on average, sufficiently many people deviate in the presence of a fine. This in turn decreases cooperation in the current match.
4 Results
We now study empirically the interaction between enforcement and learning highlighted in the model. The descriptive evidence provided in Fig. 2b (Sect. 2.3) suggests that the pattern of cooperation observed in the current match is consistent with the ranking of posterior beliefs predicted in Lemma 1 and Proposition 2: $q^+(C,0) > q^+(C,1) > q^+(D,0) > q^+(D,1)$.Footnote 14
The identification of learning effects is however complicated by the fact that both enforcing institutions and cooperation by the partner in previous matches can also create spillovers on current cooperation. Two types of spillovers of past enforcing institutions can be at play: direct spillovers, whereby the fine experienced in the immediate past directly affects preferences and increases current cooperation; and indirect spillovers, whereby fines in the past increase cooperation of the previous partner, which in turn increases current cooperation. If such spillovers exist, they both interfere with the identification of learning effects. On the one hand, cooperation by the previous partner affects current cooperation both because it provides information on the cooperativeness of the group and because of indirect spillovers. On the other hand, a fine in the previous period similarly impacts learning, as explained in the model, but also gives rise to direct spillovers. Galbiati et al. (2018) show that these spillovers are short-lived: cooperation by the partner two matches ago has a much weaker effect on current cooperation than cooperation by the partner in the previous match.
We use these findings to identify learning separately from spillovers. To illustrate the idea, compare two situations with identical institutional histories: in the first, the partner in the previous match cooperated while the one two matches ago did not; in the second, with the opposite behavior, the partner in the previous match deviated while the one two matches ago cooperated. From the point of view of learning, both situations are equivalent, since the information obtained is identical: exactly one of the two previous partners cooperated. However, in terms of spillovers, the first situation should lead to higher levels of current cooperation: if spillovers decay over time, facing cooperation two periods ago has a smaller spillover effect than facing cooperation one period ago.
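The following sketch (ours, with hypothetical variable names) makes the resulting regressor construction concrete: the learning regressors are order-free counts of past signals, while the spillover regressors use only the most recent match.

```python
from collections import Counter

def history_regressors(history):
    """history: list of (partner_action, fine) pairs for the previous
    matches, most recent last, e.g. [("C", 0), ("D", 1), ...].

    Learning regressors are order-free counts of each signal; spillover
    regressors only use the most recent match (spillovers decay)."""
    counts = Counter(history)
    learning = {
        "n_C0": counts[("C", 0)], "n_C1": counts[("C", 1)],
        "n_D0": counts[("D", 0)], "n_D1": counts[("D", 1)],
    }
    last_action, last_fine = history[-1]
    spillovers = {"coop_prev": int(last_action == "C"), "fine_prev": last_fine}
    return learning, spillovers

# Two histories with identical counts but a different order: the learning
# regressors coincide, while the spillover regressors differ.
h1 = [("D", 0), ("D", 0), ("D", 0), ("D", 0), ("C", 0)]
h2 = [("C", 0), ("D", 0), ("D", 0), ("D", 0), ("D", 0)]
print(history_regressors(h1))
print(history_regressors(h2))
```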
We exploit this idea in Fig. 3, where we examine the effect of the history, in terms of fines and behavior of the partner, in the five previous matches, independently of the order in which this history occurred. Figure 3a for instance displays how average cooperation is affected by the number of matches without fines in which the partner cooperated (an outcome we denote $C^0$; similarly, $C^1$, $D^0$ and $D^1$ denote cooperation with a fine, defection without a fine and defection with a fine). An increase in the number of $C^0$ has a very strong effect on current cooperation, with the average rate of cooperation going from 0.29 when it never occurred in the 5 previous matches to full cooperation when it occurred 4 times. Another striking feature visible in Fig. 3 is that the rate of increase in cooperation is much faster as a function of the number of $C^0$ than as a function of the number of $C^1$ (cooperation of the partner in an environment with fines). More specifically, Fig. 3b shows that the average rate of cooperation increases from 0.15 when $C^1$ never occurred in the 5 previous matches to 0.53 when it occurred 4 times. This reflects the idea, highlighted in the model, that cooperation in the absence of a fine is a much stronger signal of intrinsic cooperativeness than cooperation in the presence of fines. The behavior for $D^0$ and $D^1$ is similar: the rate of cooperation tends to decrease more sharply with the number of $D^1$ than with the number of $D^0$, even though the pattern is less striking than in the case of cooperation.
Table 2

| Variable | Coef. (1) | Marg. eff. (1) | Coef. (2) | Marg. eff. (2) | Coef. (3) | Marg. eff. (3) |
|---|---|---|---|---|---|---|
| Constant | 0.040 | | 0.147 | | 0.179 | |
| | (0.274) | | (0.306) | | (0.152) | |
| Fine (current match) | 1.356*** | 0.305*** | 1.361*** | 0.303*** | 1.355*** | 0.302*** |
| | (0.270) | (0.052) | (0.271) | (0.051) | (0.268) | (0.047) |
| # $C^0$ | 0.410*** | 0.092*** | 0.406*** | 0.091*** | 0.433*** | 0.097*** |
| | (0.061) | (0.013) | (0.056) | (0.011) | (0.083) | (0.017) |
| # $C^1$ | 0.210*** | 0.047*** | 0.164*** | 0.037*** | 0.167* | 0.037* |
| | (0.016) | (0.006) | (0.009) | (0.003) | (0.092) | (0.020) |
| # $D^1$ | -0.123*** | -0.028*** | -0.180*** | -0.040*** | -0.195*** | -0.043*** |
| | (0.046) | (0.008) | (0.038) | (0.006) | (0.063) | (0.014) |
| Fine at $t-1$ | | | 0.230*** | 0.051*** | 0.199 | 0.044 |
| | | | (0.070) | (0.019) | (0.275) | (0.061) |
| Partner cooperated at $t-1$ | | | 0.019 | 0.004 | 0.073 | 0.016 |
| | | | (0.037) | (0.008) | (0.111) | (0.025) |
| # partner cooperations in a row | | | | | 0.059 | 0.013* |
| | | | | | (0.037) | (0.008) |
| # fines in a row | | | | | 0.020 | 0.004 |
| | | | | | (0.152) | (0.034) |
| N | 599 | – | 599 | – | 599 | – |
| $\sigma_u$ | 1.196 | – | 1.201 | – | 1.208 | – |
| $\rho$ | 0.588 | – | 0.591 | – | 0.593 | – |
| LL | -220.466 | – | -219.610 | – | -219.454 | – |

Probit models with individual random effects on the decision to cooperate at first stage, estimated on the working sample. The counts # $C^0$, # $C^1$ and # $D^1$ refer to the five previous matches; # $D^0$ is the omitted reference. Standard errors (in parentheses) are clustered at the session level. All specifications include control variables for gender, age, whether the participant is a student, whether a fine applies to the first match, the decision to cooperate at the first match, the length of the previous game and the match number. Marginal effects are computed at sample mean, assuming random effects are 0. Significance levels: * 10%, ** 5%, *** 1%.
We confirm these graphical results in Table 2, where we estimate a Probit model on the observed decision to cooperate of participant $i$ in the first round of match $t$ in the experiment. All estimated models control for current enforcement. Current fines have a very strong disciplining effect on current cooperation, increasing the probability of cooperation by more than 30%. In model (1), we do not account for spillovers and examine the effect of the history in the five previous matches.Footnote 15 The ranking of the effects is perfectly coherent with the results of Proposition 2: the signal $C^0$ (variable # $C^0$ in the table) has a positive and significant effect compared to $D^0$ (the reference), and the effect is larger than that of $C^1$ (variable # $C^1$). Similarly, $D^1$ (variable # $D^1$) decreases cooperation relative to $D^0$. In terms of magnitudes, replacing in the history one signal $D^0$ by a signal $C^0$ increases the probability of cooperation by 10%, while replacing it by a signal $C^1$ increases the probability of cooperation by only 5%. Replacing in the history one signal $D^0$ by a signal $D^1$ decreases the probability of cooperation by 3%.
In models (2) and (3), we control for potential spillovers. Model (2) introduces short-lived spillovers by controlling for whether the previous match was played with a fine and whether the partner cooperated in the previous match. As explained previously, identification here relies on the assumption that spillovers are short-lived, whereas learning is cumulative. Controlling for spillovers does not change the ordering of histories and only marginally affects magnitudes. Finally, in model (3), we relax the identifying assumption and allow spillovers to be longer lasting. We add a control for the number of matches in a row where partners cooperated, as well as the number of fines in a row in all previous matches. Identification here relies on the assumption that learning does not depend on the order in which signals were received, while this order affects the strength of spillovers. None of these controls affects the results on learning, which still strongly shapes how current cooperation reacts to past enforcement and behavior.
We provide an alternative identification strategy in the “Alternative identification strategy” in Appendix 2, where we model explicitly the interaction between learning and spillovers. To that end, we extend the model in Sect. 3 by assuming that the taste for cooperation, $v_{i,t}$, is directly affected by the history of the partner's behavior and institutional settings. Proposition D shows that updated beliefs obey the same ranking as in Proposition 2. This model explicitly shows that the learning and spillover parameters cannot be separately identified when both affect cooperation simultaneously. The empirical analysis provided in “The dynamics of cooperation with learning and spillovers” in Appendix 2 relies on the assumption that learning has converged in games occurring late in the experiment so as to achieve separate identification of both kinds of parameters—i.e., it estimates learning parameters conditional on the estimated spillovers. We test the predictions of the model and confirm in Table 3 the ranking predicted in Proposition D. The similarity of the results between the two identification strategies confirms (i) that learning about values is a transitory phenomenon that no longer affects cooperation once enough interactions have taken place, and (ii) that the number of games implemented in our experiment leaves enough room for learning, so that only spillovers affect cooperative behavior in late games.
5 Conclusion
This paper studies cooperative behavior in a setting in which individuals interact without knowing each other's propensity to cooperate. In these situations, exogenous enforcement of cooperation may affect individuals' capacity to make inferences about the prevalent types in the society and, as a consequence, their propensity to cooperate.
We analyze this setting through the lens of a theoretical model tailored to interpret the results from an experiment where individuals play a series of infinitely repeated games with random re-matching. We rely on two different identification strategies to disentangle institution-specific learning from the effect of past enforcement on one's own willingness to cooperate (i.e., behavioral spillovers). The first relies on the fact that institution-specific learning, in contrast with spillovers, does not depend on the order in which a given history of cooperation occurred. The second, presented in Appendix 2, relies on the structure of the model and the (untestable) assumption that learning has converged in games occurring late in the experiment. The results provide strong support for the main behavioral insights of the model. The presence or absence of cooperation-enforcing institutions affects the dynamics of learning about others' likely behavior: cooperation from partners faced in the past fosters cooperation today (with different partners) differently according to the institutional environment of past interactions. Past cooperation is more informative about others' cooperativeness when it is observed under weak enforcement institutions. Similarly, defection is more detrimental to cooperation when it is observed in an environment with strong enforcement.
These results show that the choice of an institutional setting must be fine-tuned to the values prevalent in the target group. Strong enforcement aims at providing incentives to cooperate, which are not necessary if cooperative standards are high in the group to which enforcement applies. Our results show that such a mismatch between the institutional arrangement and the prevalent values comes with a cost whenever there is imperfect information about these values. Providing incentives to cooperate slows down the virtuous dynamics in cooperation that would result from learning about the group's values through cooperative behavior observed without enforcement. Similarly, weak enforcement within an intrinsically non-cooperative group hinders the learning that would occur from observing deviations under a stronger (and better suited) enforcement policy. These countervailing effects of the mismatch between values and enforcement typically apply to situations in which learning has not yet converged. In young organizations, in which the members of the group still need to learn about each other, offering incentives that best fit the underlying values of the group is key to the early success of the organization.
From a methodological point of view, our results show that games played in a sequence are related to one another, even under random rematching, for two reasons: first, under imperfect information about other players' preferences, the actions observed in past matches provide information about the prevalent preferences in the population; second, behavioral spillovers induce path-dependence in the willingness to cooperate across games. This suggests in particular that identifying spillovers, the focus of a large recent literature (see Galizzi & Whitmarsh, 2019, for a survey), can be challenging when group members are also learning about prevalent values. This might lead to an underestimation of the size of spillovers when the group has a low level of cooperation, since having fines might speed up learning and thus initially have a negative effect on cooperation. The similarity in the results between our two identification strategies however confirms that learning is transitory: with enough repetitions learning converges, so that path-dependence only results from behavioral spillovers in interactions that happen late enough in the sequence.
Acknowledgements
This paper supersedes “Learning, Spillovers and Persistence: Institutions and the Dynamics of Cooperation”, CEPR DP no. 12128. We thank Bethany Kirkpatrick for her help in running the experiment, and Gani Aldashev, Maria Bigoni, Frédéric Koessler, Bentley McLeod, Nathan Nunn, Jan Sonntag, Sri Srikandan and Francisco Ruiz Aliseda as well as participants to seminars at ENS-Cachan, ECARES, Middlesex, Montpellier, PSE and Zurich, and participants to the 2018 Behavioral Public Economic Theory workshop in Lille, the 2019 Behavioral economics workshop in Birmingham, the 2018 Psychological Game Theory workshop in Soleto, the 2016 ASFEE conference in Cergy-Pontoise, the 2016 SIOE conference in Paris, the 2017 JMA (Le Mans) and the ESA European Meeting (Dijon) for their useful remarks on earlier versions of this paper. Jacquemet gratefully acknowledges funding from ANR-17-EURE-001.
Appendix 1: Data description
Our data deliver 934 game observations, 48% of which are played with no fine. Figure 4a displays the empirical distribution of game lengths in the sample, split according to the draw of a fine. With the exception of two-round matches, the distributions are very similar between the two environments. This difference in the share of two-round matches mainly induces a slightly higher share of matches longer than 10 rounds played with a fine. In both environments, one third of the matches we observe last one round, and one half of the repeated matches last between 2 and 5 rounds. A very small fraction of matches (less than 5% with a fine, less than 2% with no fine) features lengths of 10 rounds or more.
As explained in Sect. 2.2, for matches that last more than one round (2/3 of the sample), we reduce the observed outcomes to the first-round decision in each match, consistent with the theory. The first-round decision is a sufficient statistic for the future sequence of play if subjects choose among the following repeated-game strategies: Always Defect (AD), Tit-for-Tat (TFT) or Grim Trigger (GT). While AD dictates defection at the first round, both TFT and GT induce cooperation and are observationally equivalent if the partner chooses within the set restricted to these three strategies, giving rise to the same expected payoff. Figure 4b displays the distribution of strategies we observe in the experiment (excluding games that last one round only). Decisions are classified in each repeated game and for each player based on the observed sequence of play. For instance, a player who starts with C and switches forever to D when the partner starts playing D will be classified as playing GT. In many instances, TFT and GT cannot be distinguished (so that the classes displayed in Fig. 4b overlap): this happens for instance for subjects who always cooperate against a partner who does the same (in which case, TFT and GT also include Always Cooperate, AC), or if defection is played forever by both players once it occurs. Last, the Figure also reports the share of Always Cooperate decisions that can be distinguished from other match strategies—when AC is played against partners who defect at least once.
All sequences of decisions that do not fall in any of these strategies cannot be classified—this accounts for 14% of the games played without a fine, and 24% of those played with a fine. The three strategies on which we focus are thus enough to summarize the vast majority of match decisions: AD accounts for 70% of the repeated-game observations with no fine and 41% with a fine, while TFT and GT together account for 14% and 34% of them, respectively.
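As an illustration of this classification logic, here is a minimal sketch (ours; the paper's actual procedure may differ in its handling of ties and edge cases):

```python
def classify(own, partner):
    """Classify one player's play in a repeated match as AD, GT/TFT, or
    unclassified, from the two observed action sequences (lists of
    'C'/'D'). GT/TFT means the sequence is consistent with Grim Trigger
    or Tit-for-Tat (which are often observationally equivalent)."""
    if all(a == "D" for a in own):
        return "AD"
    if own[0] != "C":
        return "unclassified"
    grim, tft = True, True
    partner_defected = False
    for t in range(1, len(own)):
        partner_defected = partner_defected or partner[t - 1] == "D"
        if own[t] != ("D" if partner_defected else "C"):
            grim = False          # GT: defect forever once partner defects
        if own[t] != partner[t - 1]:
            tft = False           # TFT: mimic the partner's last action
    return "GT/TFT" if (grim or tft) else "unclassified"

print(classify(["C", "C", "D", "D"], ["C", "D", "D", "D"]))  # GT/TFT
print(classify(["D", "D", "D"], ["C", "C", "D"]))            # AD
```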
Appendix 2: Alternative identification strategy
The empirical evidence presented in the paper relies on the insights from the model to provide a reduced-form statistical analysis of the interaction between learning and enforcement institutions. As a complement to this evidence, this section provides a structural analysis which takes into account both learning and behavioral spillovers. To that end, we first generalize the model presented in Sect. 3.2 to the case in which individual values evolve over time as a result of past experience. We then estimate the parameters of the model. This provides an alternative strategy to separately estimate learning and spillovers.
The dynamics of learning with behavioral spillovers
Consistent with Galbiati et al. (2018), we allow both past fines and past behaviors of the partners to affect values:

$$v_{i,t} = v_i + \gamma_f\, f_{t-1} + \gamma_c\, \mathbb{1}\{a_{j,t-1} = C\}. \qquad (3)$$
According to this simple specification, personal values evolve through two channels. First, direct spillovers increase the value attached to cooperation in the current match if the previous one was played with a fine, as measured by the parameter $\gamma_f$. Second, indirect spillovers, measured by $\gamma_c$, increase the value attached to cooperation if the partner cooperated in the previous match.Footnote 16
Introducing behavioral spillovers in the benchmark
We start by introducing spillovers in the benchmark model. Under the assumption that values follow the process in (3) and that there is no uncertainty on values, the indifference condition (1) remains unchanged,Footnote 17 but $v_{i,t}$ is no longer constant and equal to $v_i$, since past shocks affect values. In this context, individual $i$ cooperates at $t$ if and only if:

$$v_i + \gamma_f\, f_{t-1} + \gamma_c\, \mathbb{1}\{a_{j,t-1} = C\} \;\ge\; \bar v_t(f_t).$$
The cutoff value $\bar v_t(f)$ is defined in the same way as before:

$$\bar v_t(f) = a - b\,p_t(f) - Ff.$$
The main difference with the benchmark model is in the value of $p_t(f)$. There is now a linkage between the value of the cutoff at match $t$, $\bar v_t$, and the values of the cutoffs in all the preceding matches through $p_t(f)$. Indeed, when an individual evaluates the probability that her current partner in $t$, player $j$, will cooperate, she needs to determine how likely it is that $j$ received a direct and/or an indirect spillover from the previous period. The probability of having a direct spillover is given by the probability that the previous match was played with a fine (1/2 by design) and is independent of any equilibrium decision. By contrast, the probability of having an indirect spillover is linked to whether the partner of $j$ in her previous match cooperated or not. This probability in turn depends on the cutoff in $t-1$, $\bar v_{t-1}$, which itself depends on whether that individual received indirect spillovers, i.e., on the cutoff in $t-2$. Overall, the cutoffs in $t$ depend on the entire sequence of cutoffs.
In what follows, we focus on stationary equilibria, such that $\bar v_t$ is independent of $t$. We show in Proposition C that such equilibria do exist.
Proposition C
(Spillovers) In an environment with spillovers ($\gamma_f > 0$ and $\gamma_c > 0$) and no uncertainty on values, there exists a stationary equilibrium. Furthermore, all equilibria are of the cutoff form, i.e., individuals cooperate if and only if $v_{i,t} \ge \bar v(f_t)$.
Proof
See “Appendix 5: Proofs”.
Proposition C proves the existence of an equilibrium and presents the shape of the cutoffs. The Proposition also allows us to express the probability that a random individual cooperates, Eq. (5), as a function of current enforcement $f_t$ and of the spillover events of the previous match, $f_{t-1}$ and $a_{j,t-1}$.
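Under the normality assumption on values, one natural way to write this probability (our reconstruction; the exact parametrization of Eq. (5) is given in the proofs appendix) is:

```latex
% Probability that a random individual cooperates in match t, given
% current enforcement f_t and the spillover events of match t-1
% (reconstruction under the normality assumption on values):
\[
\Pr\bigl(C \mid f_t, f_{t-1}, a_{j,t-1}\bigr)
 \;=\; 1 - \Phi\!\left(\frac{\bar v(f_t) - \mu
      - \gamma_f\, f_{t-1} - \gamma_c\,\mathbb{1}\{a_{j,t-1} = C\}}{\sigma}\right),
\]
% where \Phi is the standard normal cdf and (\mu, \sigma) parametrize
% the distribution of intrinsic values v_i.
```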
The dynamics of cooperation with learning and spillovers
We now solve the full model with uncertainty about the group's values and with spillovers. As in the main text, we denote $q_{i,t}$ the belief held by player $i$ at match $t$ that the state is $H$.
In this expanded model, the belief about how likely the partner is to cooperate in match $t$, $p_{i,t}$, depends on the probability that the partner experienced spillovers. In addition, the probability that the partner $j$ had an indirect spillover itself depends on whether his own partner $k$ in the previous match cooperated, and thus on the beliefs of that partner in the previous match. The general problem requires keeping track of higher order beliefs. The proof of the following Proposition shows the existence of a stationary equilibrium under a natural restriction on higher order beliefs, i.e., if we assume that a player who holds belief $q$ in match $t$ believes that players in the preceding match held the same belief $q$.
Proposition D
In an environment with spillovers and learning, if an equilibrium exists, all equilibria are of the cutoff form, i.e., individuals cooperate if and only if $v_{i,t} \ge \bar v(q_{i,t}, f_t)$. Furthermore, if in equilibrium $\bar v(q,1) \le \bar v(q,0)$ for all beliefs $q$, then beliefs are updated in the following way given the history in the previous interaction:

$$q^+(q, C, 0) \;>\; q^+(q, C, 1) \;>\; q \;>\; q^+(q, D, 0) \;>\; q^+(q, D, 1).$$
Proof
See “Appendix 5: Proofs”.
Proposition D derives a general property of equilibria. The Proposition also allows us to express the probability of cooperation, for a given belief $q$, as a function of current enforcement, of the spillovers inherited from the previous match, and of the updated beliefs, Eq. (6).

Note that the parameters in Eq. (6) depend on $q$. Compared to the case without learning, there are 6 additional parameters reflecting the updating of beliefs: one for each signal $C^0$, $C^1$ and $D^1$ (with $D^0$ as the reference), both when the current match is played with a fine and when it is not. According to the result in Proposition D, these parameters are such that the effect of $C^0$ exceeds that of $C^1$, which in turn exceeds the (negative) effect of $D^1$ (Eq. (7)).
Overall, having fines in the previous match can potentially decrease average cooperation in the current match. There are two countervailing effects. On the one hand, a fine in the previous match increases the direct and indirect spillovers and thus increases cooperation. On the other hand, if the state is low, a fine can accelerate learning if, on average, sufficiently many people deviate in the presence of a fine. This then decreases cooperation in the current match.
Statistical implementation of the model
The main behavioral insights from the model are summarized by Eq. (6), which involves both learning and spillover parameters. As the equation clearly shows, exogenous variations in legal enforcement are not enough to achieve separate identification of learning and spillover parameters—an exogenous change in any of the enforcement variables, or in the past behavior of the partner, involves both learning and a change in the values $v_{i,t}$. In the main text, identification relies on the assumption that spillovers are short-lived, in the sense that their effect on behavior is smaller the earlier they happen in one's own history—while learning should not depend on the order in which a given sequence of actions happens. In this section, we report the results from an alternative identification strategy that relies on the assumption that learning has converged once a large enough number of matches has been played. Under this assumption, in late games, behavior is described by Eq. (5), which involves only spillover parameters. Exogenous variations in enforcement thus identify the spillover parameters in late games, which in turn allows us to identify the learning parameters in early ones.
To that end, as explained in the text, we split the matches into three groups, in such a way that one third of the observed decisions are classified as “early” and one third as “late”. We use matches, rather than periods, as a measure of time, since we focus on games for which the first-stage decision summarizes all future actions within the current repeated game—hence ruling out learning within a match. Observed matches are accordingly defined as “early” up to the 7th and as “late” after the 13th—we disregard data coming from intermediary stages.Footnote 18 Denote $Early_t$ the dummy variable equal to 1 in early games and to 0 in late games. Under the identifying assumption that learning has converged in late games, the model predicts that behavior in the experiment is described by Eq. (6) with the learning parameters interacted with $Early_t$, which is the structural form of a Probit model on the individual decision to cooperate. This probability results from equilibria of the cutoff form involving the primitives of the model. Denoting $\varepsilon_{i,t}$ observation-specific unobserved heterogeneity, $\theta$ the vector of unknown parameters embedded in the above equation, $X_{i,t}$ the associated set of observables describing participant $i$'s experience up to $t$, and $y^*$ the latent function generating player $i$'s willingness to cooperate at match $t$, observed decisions inform about the model parameters according to:

$$\Pr(C_{i,t} = 1 \mid X_{i,t}) = \Pr\bigl(y^*(X_{i,t};\theta) + \varepsilon_{i,t} \ge 0\bigr).$$
The structural parameters $\theta$ govern the latent equation of the model. Our empirical test of the model is thus based on the estimated coefficients, $\hat\theta$, rather than on marginal effects.
In the set of covariates, both current ($f_t$) and past enforcement ($f_{t-1}$) are exogenous by design. The partner's past decision to cooperate, $a_{j,t-1}$, is exogenous as long as players $i$ and $j$ have no other player in common in their histories. Moreover, due to the rematching of players from one match to the other, between-subjects correlation arises over time within an experimental session. We address these concerns in three ways. We include the decision to cooperate at the first stage of the first match in the set of control variables, as a measure of individual unobserved ex ante willingness to cooperate. To further account for the correlation structure in the error of the model, we specify a panel data model with random effects at the individual level, control for the effect of time through the inclusion of the match number, and cluster the errors at the session level to account in a flexible way for within-session correlation.
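A simplified stand-in for this estimation (ours: a pooled probit with session-clustered standard errors, omitting the individual random effects; the file and column names are hypothetical) could look as follows:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical match-level data with one row per (participant, match):
# cooperate, fine, early, fine_prev, coop_prev, match_number, session.
df = pd.read_csv("experiment.csv")
df["early_x_fine"] = df["early"] * df["fine"]

# Pooled probit as a simplified stand-in for the paper's random-effects
# probit; standard errors are clustered at the session level.
X = sm.add_constant(df[["fine", "early", "early_x_fine",
                        "fine_prev", "coop_prev", "match_number"]])
model = sm.Probit(df["cooperate"], X)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["session"]})
print(result.summary())
```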
Table 3 reports the estimation results from several specifications, in which each piece of the model is introduced sequentially. The parameters of interest are the learning parameters.Footnote 19 Columns (1) and (2) focus on the effect of past and current enforcement. While we do not find any significant change due to moving from early to late games per se (the Early variable is not significant), the effect of current enforcement on the current willingness to cooperate is much weaker in early games. This is consistent with participants becoming less confident that the group is cooperative, and thus less likely to cooperate, as time passes—i.e., prior beliefs over-estimate the average cooperativeness of the group. The disciplining effect of current fines is thus stronger in late games.
Table 3

| Variable | (1) | (2) | (3) | (4) | (5) |
|---|---|---|---|---|---|
| Constant | 1.986*** | 2.020*** | 2.290*** | 2.302*** | 2.291*** |
| | (0.392) | (0.362) | (0.298) | (0.317) | (0.220) |
| Fine | 1.448*** | 1.454*** | 1.480*** | 1.473*** | 1.472*** |
| | (0.164) | (0.149) | (0.150) | (0.142) | (0.137) |
| Early | 0.285 | 0.292 | 0.348 | 0.460 | 0.453 |
| | (0.411) | (0.415) | (0.440) | (0.383) | (0.377) |
| Early × Fine | -0.698*** | -0.698*** | -0.646** | -0.644** | -0.643** |
| | (0.243) | (0.245) | (0.258) | (0.261) | (0.271) |
| Fine at $t-1$ | | 0.049 | 0.306** | 0.094 | 0.085 |
| | | (0.120) | (0.140) | (0.193) | (0.306) |
| Partner cooperated at $t-1$ | | | | 0.693*** | 0.674*** |
| | | | | (0.169) | (0.121) |
| Early × $C^0$ at $t-1$ | | | 1.066*** | 0.430 | 0.448* |
| | | | (0.186) | (0.363) | (0.246) |
| Early × $C^1$ at $t-1$ | | | 0.233 | 0.228** | 0.230** |
| | | | (0.168) | (0.096) | (0.094) |
| Early × $D^1$ at $t-1$ | | | -0.876** | -0.631* | -0.621* |
| | | | (0.423) | (0.329) | (0.334) |
| Partner coop. at $t-1$ × Fine at $t-1$ | | | | | 0.029 |
| | | | | | (0.380) |
| N | 553 | 553 | 553 | 553 | 553 |
| $\sigma_u$ | 1.063 | 1.064 | 1.063 | 1.060 | 1.060 |
| $\rho$ | 0.531 | 0.531 | 0.530 | 0.529 | 0.529 |
| LL | -234.677 | -234.624 | -224.416 | -220.033 | -220.031 |

Probit models with individual random effects on the decision to cooperate at first stage, estimated on the working sample restricted to early (before the 7th) and late (beyond the 13th) games. $C^0$, $C^1$ and $D^1$ denote partner cooperation without a fine, cooperation with a fine and defection with a fine in the previous match; defection without a fine ($D^0$) is the reference. Standard errors (in parentheses) are clustered at the session level. All specifications include control variables for gender, age, whether the participant is a student, whether a fine applies to the first match, the decision to cooperate at the first match, the length of the previous game and the match number. Significance levels: * 10%, ** 5%, *** 1%.
Column (3) introduces the learning parameters. As stressed above, the learning parameters play a role before beliefs have converged. They are thus estimated in interaction with the Early dummy variable. Once learning is taken into account, enforcement spillovers turn out to be significant. More importantly, the model predicts that learning is stronger when observed decisions are more informative about societal values, which in turn depends on the enforcement regime under which behavior has been observed: cooperation is more informative about cooperativeness under weak enforcement, while defection is a stronger signal of non-cooperative values under strong enforcement. This results in a clear ranking between learning parameters—see Eq. (7). We use defection under weak enforcement ($D^0$) as the reference for the estimated learning parameters. The results show that cooperation under weak enforcement ($C^0$) leads to the strongest increase in the current willingness to cooperate. Observing this same decision but under strong enforcement institutions rather than weak ones ($C^1$) has almost the same impact as observing defection under weak institutions (the reference): in both cases, behavior is aligned with the incentives implemented by the rules and barely provides any additional insight about the distribution of values in the group. Last, defection under strong institutions ($D^1$) is informative about a low willingness to cooperate in the group, and results in a strongly significant drop in current cooperation.
Column (4) adds indirect spillovers, induced by the cooperation of the partner in the previous game. The identification of the learning parameters in this specification is quite demanding, since both past enforcement and past cooperation are included as dummy variables. We nevertheless observe a statistically significant effect of learning in early games, with the expected ordering according to how informative the signal delivered by a cooperative decision is, with the exception of the parameter associated with cooperation observed under fines. Finally, column (5) provides a robustness check on the assumption that learning has converged in late games. To that end, we further add the interaction between the partner's observed behavior in the previous game and the enforcement regime under which it occurred. Once learning has converged, past behavior is assumed to affect the current willingness to cooperate through indirect spillovers only; absent learning, this effect should not interact with the enforcement rule that elicited the behavior. As expected, this interaction term is not significant: in late games, it is cooperation per se, rather than the enforcement regime giving rise to this decision, that matters for current cooperation.
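To make the informativeness ranking of these signals concrete, the following minimal sketch simulates one step of the Bayesian updating behind the learning parameters. All numbers are illustrative, not estimates from the experiment: intrinsic values are normal with mean 1 in the cooperative state H and −1 in state L, and the hypothetical equilibrium cutoffs are −2 with a fine and 2 without.

```python
from scipy.stats import norm

# Illustrative parameters (hypothetical, not estimates from the paper):
# in state H the group is cooperative (high mean intrinsic value),
# in state L it is not. A partner cooperates iff his value exceeds a
# cutoff, and the equilibrium cutoff is lower when a fine applies.
mu = {"H": 1.0, "L": -1.0}
sigma = 1.0
cutoff = {"no fine": 2.0, "fine": -2.0}

def posterior(q, action, regime):
    """Belief that the state is H after observing one partner's action."""
    p_coop = {s: 1 - norm.cdf(cutoff[regime], mu[s], sigma) for s in ("H", "L")}
    like = p_coop if action == "C" else {s: 1 - p for s, p in p_coop.items()}
    return q * like["H"] / (q * like["H"] + (1 - q) * like["L"])

for regime in ("no fine", "fine"):
    for action in ("C", "D"):
        print(f"{action} under {regime:>7}: Pr(H) goes from 0.50 "
              f"to {posterior(0.5, action, regime):.3f}")

# Cooperation without fines (0.992) and defection despite fines (0.008)
# are strong signals; incentive-aligned behavior (C with a fine: 0.543,
# D without a fine: 0.457) barely moves the belief -- the ranking in Eq. (7).
```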
Appendix 3: Replication of the statistical analysis on the full sample
In this section, we replicate the results on the full sample of observations: instead of restricting the analysis to the working sample made of decisions consistent with the subset of repeated-game strategies described in Sect. 2.2, we include all available observations. As already mentioned in the text, the matches we exclude from the working sample all appear in late games, so that Fig. 2a is not affected by the choice of the working sample. Figure 5 below replicates Fig. 3 in the paper, and Table 4 replicates Table 2. In both instances, the data are noisier but the qualitative conclusions all remain the same.
| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Model | Enforcement | | Spillovers | | | |
| | Coef. | Marg. eff. | Coef. | Marg. eff. | Coef. | Marg. eff. |
| Constant | 0.029 | | 0.037 | | 0.126 | |
| | (0.736) | | (0.733) | | (0.847) | |
| | 1.172*** | 0.310*** | 1.168*** | 0.307*** | 1.165*** | 0.306*** |
| | (0.279) | (0.034) | (0.278) | (0.034) | (0.265) | (0.031) |
| | 0.347** | 0.092*** | 0.321** | 0.084** | 0.361* | 0.095** |
| | (0.142) | (0.032) | (0.149) | (0.034) | (0.185) | (0.040) |
| | 0.196** | 0.052** | 0.122 | 0.032 | 0.101 | 0.027 |
| | (0.090) | (0.022) | (0.086) | (0.022) | (0.148) | (0.037) |
| | 0.049 | 0.013* | 0.090* | 0.024** | 0.133** | 0.035** |
| | (0.036) | (0.008) | (0.052) | (0.011) | (0.060) | (0.015) |
| | | | 0.195** | 0.051*** | 0.089 | 0.023 |
| | | | (0.083) | (0.017) | (0.232) | (0.059) |
| | | | 0.146** | 0.039* | 0.258* | 0.068** |
| | | | (0.074) | (0.022) | (0.137) | (0.031) |
| … in a row | | | | | 0.079 | 0.021 |
| | | | | | (0.081) | (0.019) |
| … in a row | | | | | 0.071 | 0.019 |
| | | | | | (0.109) | (0.030) |
| N | 694 | – | 694 | – | 694 | – |
| σ_u | 1.099 | – | 1.100 | – | 1.113 | – |
| ρ | 0.547 | – | 0.547 | – | 0.553 | – |
| LL | −300.754 | – | −298.972 | – | −298.235 | – |
Marginal effects (even-numbered columns) are computed at the sample mean, assuming the random effects are 0. Significance levels: * 10%, ** 5%, *** 1%.
Appendix 4: Replication of the main results with bootstrapped standard errors
As explained in the main text, the statistical analysis presented in Table 2 clusters the standard errors at the session level, so as to account in a flexible way for the possible correlations between subjects induced by the random rematching of subjects within pairs. While this approach is conservative (since it does not impose any structure on the correlation between subjects over time), the number of clusters is small and there is a risk of small-sample downward bias in the estimated standard errors. Table 5 provides the results from a robustness exercise replicating Table 2 in the text with bootstrapped standard errors based on a delete-one jackknife procedure (see Bell & McCaffrey, Reference Bell and McCaffrey2002; Cameron et al., Reference Cameron, Gelbach and Miller2008).
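As an illustration of the delete-one idea, the sketch below computes cluster-jackknife standard errors for a probit coefficient on simulated data. This is a simplified variant of the procedure used in the table, and the variable names (coop, fine, session) and all parameter values are made up for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated session-clustered data (all names and parameters illustrative).
rng = np.random.default_rng(0)
G, n_per = 12, 50
df = pd.DataFrame({
    "session": np.repeat(np.arange(G), n_per),
    "fine": rng.integers(0, 2, G * n_per),
})
session_effect = rng.normal(0.0, 0.5, G)[df["session"]]
df["coop"] = ((0.3 + 1.0 * df["fine"] + session_effect
               + rng.normal(size=len(df))) > 0).astype(int)

beta = smf.probit("coop ~ fine", data=df).fit(disp=0).params["fine"]

# Delete-one-cluster jackknife: drop each session in turn and re-estimate.
loo = np.array([
    smf.probit("coop ~ fine", data=df[df["session"] != g]).fit(disp=0).params["fine"]
    for g in range(G)
])
se_jack = np.sqrt((G - 1) / G * np.sum((loo - loo.mean()) ** 2))
print(f"beta(fine) = {beta:.3f}, delete-one-cluster jackknife SE = {se_jack:.3f}")
```

With G clusters, the variance estimate scales the spread of the leave-one-out estimates by (G − 1)/G, which is what guards against the small-cluster downward bias discussed above.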
| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Model | Enforcement | | Spillovers | | | |
| | Coef. | Marg. eff. | Coef. | Marg. eff. | Coef. | Marg. eff. |
| Main constant | 0.040 | | 0.147 | | 0.179 | |
| | (0.167) | | (0.217) | | (0.258) | |
| | 1.356** | 0.305** | 1.361** | 0.303** | 1.355** | 0.302** |
| | (0.281) | (0.056) | (0.280) | (0.055) | (0.285) | (0.052) |
| | 0.410** | 0.092** | 0.406** | 0.091** | 0.433** | 0.097** |
| | (0.078) | (0.015) | (0.074) | (0.014) | (0.095) | (0.018) |
| | 0.210*** | 0.047** | 0.164*** | 0.037*** | 0.167 | 0.037 |
| | (0.018) | (0.007) | (0.006) | (0.003) | (0.111) | (0.024) |
| | 0.123 | 0.028* | 0.180** | 0.040** | 0.195 | 0.043 |
| | (0.048) | (0.009) | (0.041) | (0.007) | (0.073) | (0.016) |
| | | | 0.230* | 0.051* | 0.199 | 0.044 |
| | | | (0.060) | (0.017) | (0.323) | (0.072) |
| | | | 0.019 | 0.004 | 0.073 | 0.016 |
| | | | (0.041) | (0.009) | (0.140) | (0.031) |
| … in a row | | | | | 0.059 | 0.013 |
| | | | | | (0.053) | (0.011) |
| … in a row | | | | | 0.020 | 0.004 |
| | | | | | (0.182) | (0.041) |
| N | 599 | 599 | 599 | 599 | 599 | 599 |
| σ_u | 1.196 | 1.196 | 1.201 | 1.201 | 1.208 | 1.208 |
| ρ | 0.588 | 0.588 | 0.591 | 0.591 | 0.593 | 0.593 |
| LL | −220.466 | −220.466 | −219.610 | −219.610 | −219.454 | −219.454 |
Marginal effects (even-numbered columns) are computed at the sample mean, assuming the random effects are 0. Significance levels: * 10%, ** 5%, *** 1%.
Appendix 5: Proofs
Proof of Proposition 1
As derived in the main text, if an equilibrium exists, it is necessarily such that players use cutoff strategies. Re-expressing the characteristic Eq. (2), we can show that the cutoffs are determined by an equation in a function g, where g is given by
The function g has limits as x converges to −∞ and as x converges to +∞ that bracket a solution. Thus, since g is continuous, there is at least one solution to the equation, and hence at least one equilibrium exists.
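Numerically, this existence argument is just a bracketing root-finding problem. In the sketch below the functional form of g is made up for the illustration and is not the model's:

```python
import math
from scipy.optimize import brentq

# Purely illustrative g (made-up functional form, not the model's):
# continuous and monotone, running from +1 at -infinity to -1 at +infinity.
def g(x):
    return 2.0 / (1.0 + math.exp(x)) - 1.0

# A continuous function whose limits bracket zero must cross it (intermediate
# value theorem); brentq finds the crossing on any sign-changing bracket.
x_star = brentq(g, -10.0, 10.0)
print(f"equilibrium cutoff in the illustration: {x_star:.6f}")  # ~0 by symmetry
```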
If g is non-monotonic, there could exist multiple equilibria. However, every stable equilibrium is such that g is decreasing at the equilibrium cutoff, i.e.,
Using the implicit function theorem, we have:
where the density is that of the distribution of intrinsic values. For stable equilibria, the denominator is negative, as shown in (8), so that overall
Similarly,
Again, in stable equilibria the denominator is negative by (8). Furthermore, the relevant inequality follows since an increase in the mean of the normal distribution decreases its cdf at any x. Overall we get
Proof of Lemma 1
We first show the first part of the result. The belief that the state is H following a deviation by the partner in the first match (which has been played with a fine) is obtained by Bayes' rule.
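In generic form, writing $q_0$ for the prior belief, $\Phi_s$ for the cdf of intrinsic values in state $s \in \{H, L\}$, and $\bar x_F$ for the equilibrium cutoff under a fine (notation introduced here only for illustration), a deviation occurs exactly when the partner's value falls below the cutoff, so the update reads:

$$\Pr(H\mid D,F)=\frac{q_0\,\Phi_H(\bar x_F)}{q_0\,\Phi_H(\bar x_F)+(1-q_0)\,\Phi_L(\bar x_F)}.$$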
Furthermore, since , we have . Similarly, we have:
Thus,
The fact that is decreasing in x, as shown in Property 1 below, combined with the fact that in stable equilibria we have , as shown in Proposition 1, directly implies that . The proof that follows similar lines.
Property 1
is increasing in x.
Proof
Denote by (resp. ) the density of (resp. ). Given that each is the density of a normal distribution with common standard deviation and respective mean (resp. ), it is the case that:
Thus the ratio is increasing in x. In particular, for , we have . By definition, . Integrating with respect to y between −∞ and x thus yields:
Consider now the function . The derivative of this function is given by , which is positive by Eq. (11). This establishes Property 1: is increasing in x.
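One consistent reading of this computation, writing $\varphi_s$ and $\Phi_s$ for the state-$s$ density and cdf (symbols introduced here for illustration), with common standard deviation $\sigma$ and means $\mu_H > \mu_L$, is:

$$\frac{\varphi_H(x)}{\varphi_L(x)}=\exp\!\left(\frac{(\mu_H-\mu_L)\,x}{\sigma^{2}}+\frac{\mu_L^{2}-\mu_H^{2}}{2\sigma^{2}}\right),$$

which is strictly increasing in $x$. Integrating the implied inequality $\varphi_H(y)\varphi_L(x)\le\varphi_H(x)\varphi_L(y)$ over $y \le x$ gives $\Phi_H(x)\varphi_L(x)\le\varphi_H(x)\Phi_L(x)$, so the ratio $\Phi_H(x)/\Phi_L(x)$ has a non-negative derivative.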
Proof of Proposition D (which generalizes Proposition 2)
In the first part of the proof, we assume that a stationary equilibrium exists and is such that, for any given belief q, the equilibrium cutoffs are always higher without a fine in the current match. We then derive the property on the updating of beliefs. In the second part of the proof, we show existence under a natural restriction on beliefs.
Part 1: We first derive the properties on updating. We have
We can express the probability that the partner in the previous match cooperated by considering all the possible environments this individual might have faced in the past, and in particular the choice made by his own partner in the match before that:
Denote
and
Using expression (12), we have:
We then use all possible values of the vector in turn. Take one such value v of this vector and denote
We clearly have . Furthermore, we can write
These relations imply that . Thus, using Property 2 below, it follows that , and thus .
Property 2
where is increasing in x.
Proof
The derivative of the ratio is given by
We showed in the proof of Property 1 that . Furthermore, we also showed that is increasing and, since , this implies . Combining these two results in condition (13) establishes Property 2.
Part 2: We show that an equilibrium exists if we assume that a player who holds belief in match t believes that the other players in matches t and t−1 shared that same belief .
If a stationary equilibrium exists, it is necessarily such that players use cutoff strategies where the cutoff is defined by:
We have:
Furthermore, we have
Recall that we assumed that a player who had belief in match believes that all other players in that match share the same belief . Under this restriction, we have
We obtain an expression similar to the one in the proof of Proposition 2:
This implies that, for each belief q, there is a system of equations equivalent to system A in the proof of Proposition 2. We thus have a solution of this system for each value of q.