7. The possibility of co-operation: lessons from experimental social psychology


7.1 Co-operation in repeated games
7.2 Co-operation in games with communication
7.3 Co-operation in one-shot games

A host of experimental studies in social psychology have been conducted during the last decades with a view to testing the behavioural assumptions of economists. A large number of these studies have actually been concerned with examining the relevance of the free-rider problem in the provision of public goods as well as the validity of the conclusions derived from the prisoner's dilemma. It is not our intention to offer a detailed review of the results contained in this burgeoning literature. Our purpose is, more modestly, to call attention to some significant results that are particularly relevant to the issues raised in the preceding three chapters.

7.1 Co-operation in repeated games

Do experiments tend to bear out the hypothesis that, when actors interact closely on a recurrent basis within a PD payoff structure, co-operation is a distinct possibility? The theory predicts that co-operation might emerge throughout most of the game when it is repeated infinitely—a hypothesis that is obviously impossible to test strictly in laboratory conditions—or when the game horizon is finite but indefinite (players know that the game will stop one day, but they do not know exactly when this will happen). In the latter circumstances, the subjective probabilities that a given period might be the last, as assessed by the actors, play a critical role in determining whether co-operation is an equilibrium outcome.

As a matter of fact, results from one study based on PD experiments appear to show that significantly more co-operative choices tend to be made when the subjects of the experiment are led into thinking that there is a higher chance of the game continuing after each play (Roth and Murnighan, 1978), thereby confirming a major result of game theory. However, Roth considers that the results are equivocal because, even when this chance is sufficiently high to make co-operation an equilibrium outcome, a majority of first-period choices were found to be non-co-operative in the same study (Roth, 1988: 999). What must be borne in mind here is that, in computing the minimum probability of continuing (after each play) required to make co-operation an equilibrium outcome, Roth and Murnighan did not allow for the possibility that subjects might well entertain doubts about which strategy the others are following. Yet we know that when such a possibility exists, even with a high probability that the game continues, players may choose not to co-operate if they strongly believe that their opponent(s) will follow a 'nasty' strategy. Seen in this perspective, the results obtained by Roth and Murnighan need not appear 'equivocal' any more.
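The kind of threshold Roth and Murnighan computed can be illustrated with a standard textbook result: under a trigger strategy, mutual co-operation in a randomly continued PD is an equilibrium only if the continuation probability p satisfies p ≥ (T − R)/(T − P), where T, R, and P denote the temptation, reward, and punishment payoffs. The sketch below is a minimal illustration of this condition; the payoff values are assumptions for illustration, not the figures used in the 1978 study.

```python
# Minimum continuation probability p for mutual co-operation to be an
# equilibrium of a randomly terminated PD under a grim-trigger strategy.
# Deviating gains T - R today but forfeits R - P in every later round,
# so co-operation is sustainable iff p >= (T - R) / (T - P).
# Payoff values below are illustrative, not those of the 1978 study.

def min_continuation_probability(T, R, P):
    """Threshold probability above which co-operation is an equilibrium."""
    return (T - R) / (T - P)

T, R, P, S = 5, 3, 1, 0          # standard PD ordering: T > R > P > S
p_min = min_continuation_probability(T, R, P)
print(f"co-operation sustainable for p >= {p_min:.2f}")   # 0.50
```

With these illustrative payoffs, co-operation becomes sustainable once subjects assess the chance of another round at one-half or more; doubts about the opponent's strategy, as noted above, can raise the effective threshold further.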

Another interesting experiment has been designed by Selten and Stoecker (1986). In this experiment, the subjects were invited to play a series of twenty-five successive ten-period repeated PD games: in other words, they were asked to participate in a repeated play of a repeated game. The underlying idea of the researchers was to give subjects the opportunity to learn from previous experience. The findings indicate that the typical outcome consisted of initial periods of mutual co-operation, followed by a first defection, followed by non-co-operation in the remaining periods. Recall that, on the basis of the theory, we would have expected generalized free-riding to take place from the very beginning of each repeated game. This is all the more so as the number of periods in each game is not only finite but also rather small. The problem with the theory here is perhaps that it unrealistically assumes that people are able to calculate the strategic implications of their present choices by using the sophisticated chain of reasoning implied by the backwards induction argument.
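The backwards induction argument invoked here can be spelled out: defection strictly dominates in the final period, and once play in all later periods is fixed at defection, each earlier period reduces to a one-shot PD, so defection unravels the whole game back to period one. A minimal sketch of this reasoning (the payoff values are illustrative):

```python
# Backwards induction in a finitely repeated PD.  In the last period there
# is no future to protect, so the stage game's dominant action (defection)
# is played; with later play pinned down at defection, every earlier
# period is strategically a one-shot PD as well.  Illustrative payoffs:
# entry (my_move, your_move) -> my payoff.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def dominant_action():
    """Verify that 'D' strictly dominates 'C' against either opponent move."""
    for theirs in ("C", "D"):
        assert PAYOFF[("D", theirs)] > PAYOFF[("C", theirs)]
    return "D"

def subgame_perfect_plan(periods):
    # Work backwards from the final period: at every step the choice is
    # governed by stage-game dominance alone, so defection is prescribed
    # throughout.
    return [dominant_action() for _ in reversed(range(periods))]

print(subgame_perfect_plan(10))   # defection in every one of the ten periods
```

The contrast with the observed play (co-operation breaking down only near the end) is what suggests that subjects do not actually reason through this chain.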

Interestingly, the same study shows that the first defection occurs earlier and earlier in subsequent repeated games. The explanation advanced by Selten and Stoecker is that players actually learn from their experiences. This explanation has been summarized by Roth in the following way: 'in the initial rounds players learned to co-operate (and consequently exhibited more periods of mutual co-operation starting from the very beginning and breaking down only near the end). In the later rounds, players learned about the dangers of not defecting first, and co-operation began to unravel. There is a sense in which this observed behaviour mirrors the game-theoretical observation that the equilibrium recommendation is not a good one, but that all other patterns of play are unstable' (Roth, 1988: 1000). The findings of Selten and Stoecker are essentially consistent with many other observations of finitely repeated games in which co-operation obtains for some periods, but breaks down near the end (ibid.).

To sum up, evidence from experimental studies does not systematically invalidate the predictions of the theory of repeated or extended games. Yet, these studies also show that co-operation is possible even in repeated PD games that are played only a limited number of times (the game horizon is finite and rather short). This may be due to two different reasons. First, as pointed out above, people probably do not use such a sophisticated device as the backwards induction argument when they reason about their strategic choices. This interpretation is actually confirmed by a series of recent experiments conducted by Ostrom et al. (1994). In these experiments (about which more will be said soon), it was found that the subjects frequently debated which strategy to adopt despite the high levels of information that were given to them. In particular, subjects found the task of determining optimal strategies difficult. As a result, 'many individuals utilize heuristics learned from childhood experiences. On playgrounds around the world, children arguing about the allocation of toys, space, use of facilities, etc., are taught, depending on the situation, to use principles such as: share and share alike (equal division); first in time, first in right; take turns; share on the basis of need; and flip a coin (use a randomizing device)' (Ostrom et al., 1994: 217-18). Second, it must be borne in mind that, as demonstrated by Kreps and his associates, people may well co-operate until near the end of finitely repeated games if they entertain some slight doubts about the type of strategy followed by their opponent(s).
What the two explanations call into question is people's ability to behave in perfectly rational ways: in the first case, it is in fact argued that people have limited computing abilities while, in the second, they suspect that their opponent(s) may genuinely behave 'irrationally' (that is, in a way that does not maximize his (their) payoffs) by following a tit-for-tat strategy instead of the dominant strategy of defection.

Axelrod's computer tournament may be recalled here since it suggests that behaviour will eventually converge towards co-operation, a finding even more favourable to co-operation than those obtained in experiments with human subjects. What needs to be stressed is that the length of the repeated game in Axelrod's exercise was much longer (200 periods) than in all experimental studies: his tournament therefore mirrors more closely a repeated game with an infinite or indefinite time-horizon than a finitely repeated game. Furthermore, Roth is probably right when he suspects that the difference in results may have something to do with the difference between computer simulations and actual experiments: even though the computer simulations designed by Axelrod were conducted 'with an element of experimental flavour' (tournament entries were solicited from invited players), the point remains that 'experiments with human subjects introduce a certain amount of open-ended complexity in the form of human behaviour, that is absent from a tournament in which individuals are represented by short (or even moderately long) computer programs' (Roth, 1988: 1001).
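The flavour of Axelrod's exercise can be conveyed by a toy round-robin tournament: each pair of strategies plays a 200-period repeated PD and total scores are compared. This is a drastically simplified sketch (not a reproduction of his tournament), with a few illustrative strategies and standard PD payoffs:

```python
# A toy round-robin tournament in the spirit of Axelrod's: every pair of
# strategies (including each strategy against itself) plays a 200-period
# repeated PD, and total scores are accumulated.  The strategies and
# payoff values are illustrative choices, not Axelrod's actual entries.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    return other[-1] if other else "C"

def grim(own, other):
    return "D" if "D" in other else "C"

def always_cooperate(own, other):
    return "C"

def always_defect(own, other):
    return "D"

def play_match(s1, s2, periods=200):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(periods):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        total1 += p1; total2 += p2
    return total1, total2

strategies = {"tit-for-tat": tit_for_tat, "grim": grim,
              "always-cooperate": always_cooperate,
              "always-defect": always_defect}

totals = {name: 0 for name in strategies}
names = list(strategies)
for i, n1 in enumerate(names):
    for n2 in names[i:]:
        a, b = play_match(strategies[n1], strategies[n2])
        totals[n1] += a
        if n1 != n2:
            totals[n2] += b        # self-play is counted only once

print(totals)
```

With these payoffs the reciprocating strategies (tit-for-tat and grim) tie for first place, while always-defect finishes last even though it outscores its opponent in every single match it plays: over a long horizon, defection forfeits the gains from sustained mutual co-operation.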

7.2 Co-operation in games with communication

To assess the impact of communication on the predisposition of individuals to co-operate, let us consider a series of one-shot experiments conducted by Dawes et al. (1977). Subjects were asked to give one of two responses, 'co-operate' or 'defect', which resulted in payoffs in the form of a multiperson PD: by defecting one gets a benefit at the expense of the other parties. Four different versions of this basic experiment were run by the authors. In the first place, subjects were not allowed to talk to one another before deciding whether to co-operate or to defect. In the second place, talk was permitted provided it was not related to the experiment itself. In the third place, subjects were allowed to discuss the experiment but any explicit declaration about their choices was forbidden. In the fourth and last version, all restrictions were lifted, implying that it became possible for the subjects to make promises about their choices.

An essential feature of all four versions of the experiment is that choices were confidential (subjects were required to mark their choices in private and they were promised that their decisions would never be disclosed to the other players) so that a defector had no reason to fear retaliation, and there was therefore no practical way to enforce promises to co-operate. As emphasized by Frank, since confidentiality meant that promises were not binding, communication should have made no difference (Frank, 1988: 224). Contrary to this prediction, however, unanimous defection did not occur in any of the four versions. Yet, the more people were allowed to communicate, the less often they defected: in version 1 (no communication), 73 per cent of the subjects defected; in version 2 (irrelevant communication), 65 per cent did; in version 3 (open communication) and in version 4 (open communication + promises), only 26 and 16 per cent, respectively, of the subjects chose to defect.

There are thus two striking results in this experiment. First, even in the polar case of complete anonymity (version 1), more than one-fourth of the subjects chose to co-operate at a positive cost to themselves, a cost of which they were obviously fully aware. This is a really surprising outcome given that the game is played during only one period, so that co-operation cannot serve to build one's reputation. Second, co-operation appears to increase with communication. Commenting on this last result, Frank remarked that 'decisions about cooperation are based not on reason but on emotion':

To cheat a stranger and to cheat someone you have met personally amount to precisely the same thing in rational terms. Yet in emotional terms, they are clearly very different. Face-to-face discussion, even if not directly relevant to the game itself, transforms the other players from mere strangers into real people. (Frank, 1988: 224)

The authors of the study actually pointed out that the affect level was very high among the subjects (particularly when explicit promises to co-operate were made). Thus, we are told that comments such as, 'If you defect on the rest of us you're going to have to live with it the rest of your life', were not at all uncommon. Moreover, the mere knowledge that someone defected, even without knowing the identity of the defector, often poisoned the atmosphere of the entire group. And when, in a preliminary version of their experiment, Dawes et al. told one group their choices would later be revealed, the three subjects who defected were the target of a great deal of hostility: they were, literally speaking, considered as genuine betrayers of the group.

Another series of laboratory experiments has led to the conclusion that discussion raises the co-operation rate by a significant margin but only in so far as the subjects believe their effort is going to benefit members of their own group. In other words, 'group identity appears to be a crucial factor in eschewing the dominating [non-co-operative] strategy' (Dawes and Thaler, 1988: 194-5). Moreover, when discussion is permitted, it is very common for people to make promises to contribute. Are these promises important in generating co-operation? Evidence seems to indicate that promise-making is related to co-operation only when every member of the group promises to co-operate. Indeed, in systematic experiments designed to test the impact of promises on co-operation, it was found that in groups with universal promising, the rate of co-operation was substantially higher than in other groups. In groups where promising was not universal, there was no relationship between each subject's choice to co-operate or defect and (a) whether or not a subject made a promise to co-operate, or (b) the number of other people who promised to co-operate. According to Robyn Dawes and Richard Thaler, 'these data are consistent with the importance of group identity if (as seems reasonable) universal promising creates—or reflects— group identity' (ibid.: 195). To put it another way, discussion followed by universal promising has the effect of establishing trust among the members of the group, thereby transforming a collection of people into a collective being with a specific (group) identity. People become willing to co-operate because direct communication leading to universal promise-making has given them the assurance that they will not be 'suckers' if they co-operate.
One way of interpreting the change that has taken place is to say that discussion together with universal promising transforms the players' payoffs from a PD to an AG structure while instilling in them confidence that other people will co-operate from the beginning of the game.

In the aforementioned recent series of controlled experiments at Indiana University, Ostrom et al. (1994) have attempted to reproduce the decision problem of an agent in a CPR situation. The produce is allocated among the agents according to the amount of individual contribution. In other words, the return from the CPR for individual i is given by:

(x_i / X) F(X)

where x_i is the number of tokens invested by individual i, X = Σ_j x_j is the total group investment, and

F(X) = aX - bX^2

is the total output of the CPR. As b is positive, it is assumed that there are decreasing returns to scale (a negative externality) in the exploitation of the CPR (see Chapter 5).

In the experiments carried out, each subject received information about the values of the parameters a and b, and was given an initial endowment in the form of tokens that could be invested in the CPR. In such a problem, the level of group investment that is collectively rational is equal to a/2b, while the Nash symmetric equilibrium level is given by:

X_N = na / ((n + 1)b)

where n is the number of players.
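These two benchmark levels can be checked numerically. The sketch below assumes the quadratic output function F(X) = aX − bX² introduced above and the symmetric Nash level X_N = na/((n + 1)b); the parameter values are illustrative, not those used in the actual experiments.

```python
# Benchmark investment levels for a CPR whose total output is
# F(X) = a*X - b*X**2, X being the sum of individual token investments.
# Parameter values below are illustrative, not those of the experiments.

def collectively_rational(a, b):
    """Group investment maximizing total output: X* = a / (2b)."""
    return a / (2 * b)

def nash_symmetric(a, b, n):
    """Symmetric Nash equilibrium group investment: X_N = n*a / ((n+1)*b)."""
    return n * a / ((n + 1) * b)

a, b, n = 23, 0.25, 8
x_opt = collectively_rational(a, b)    # 46.0 tokens
x_nash = nash_symmetric(a, b, n)       # about 81.8 tokens
print(x_opt, x_nash)                   # the Nash level overexploits the CPR
```

With eight players the Nash level is nearly double the collectively rational one, which is precisely the overexploitation that the experimental results document.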

Typically, in the different experiments, eight subjects interacted during thirty rounds. The main results (see Chapters 5, 6, and 7) are as follows:

  1. The individual actions do not correspond to a Nash equilibrium.
  2. An increase in the initial endowment tends to promote overexploitation of the CPR.
  3. Communication fosters co-operation.

In the present context, it is this last result that we want to emphasize. In situations where the individual actions of each subject are kept secret, but communication is possible at certain points of the experiment, the actions tend to be more co-operative than when communication is forbidden. It is noteworthy that the impact of communication is rather short-lived. As a result, an increase in the frequency of communication has the effect of bringing the total investment level nearer to the collectively rational outcome. Therefore, in the words of the authors, 'these experiments provide strong evidence for the power of face-to-face communication in a repeated CPR dilemma where decisions are made privately' (Ostrom et al., 1994: 167).

Another important result that comes out of this experimental work is that, when the opportunity to communicate is costly (someone has to invest time and effort to create and maintain arenas for face-to-face communication), the problem of providing the institution for communication is not trivial. Indeed, it reduces the speed with which an agreement can be reached and the efficacy of dealing with players who break an agreement. Yet, it is striking that all groups eventually succeeded in providing the communication mechanisms (but only once) and in dealing (to some degree) with the CPR dilemma. According to the authors, the advantage of communication is that it provides 'an opportunity for individuals to offer and extract promises of co-operation for non-enforceable contracts'. They even go so far as to say that 'keeping promises appears to be a more fundamental, shared norm than "co-operation per se"', an interpretation that rests on the fact that, when actions are observable, people make strong reproaches to whoever breaks a promise, behaves unco-operatively, or takes advantage of others who are keeping a promise (Ostrom et al., 1994: 168).

As we have already pointed out in connection with the first series of experiments reported above, emotions seem to play an important role as soon as communication and promise-making become possible. This is again borne out by yet another result obtained by Ostrom et al.: when offered the opportunity to do so, subjects are willing to pay a fee to impose a fine on another subject far more often than predicted; in other words, they overuse sanctioning mechanisms (ibid.: 192). In net terms, this overreaction to defections may lead to a worse collective outcome than could be obtained in less informed situations (in which only the collective outcome is known to the participants).

It is important to note that communication need not always be explicit. As a matter of fact, in (small) groups whose members are in close and more or less continuous interaction with one another, explicit discussion and promise-making can often be dispensed with in routine decision problems. People know each other well enough and their mutual expectations are sufficiently structured to enable them to make their decisions straightaway. In this regard, mention ought to be made of experimental work which took place within a small group setting. This study was conducted in a Nepalese village by Bromley and Chapagain (1984) and it essentially consisted of asking the 140 sampled household heads about their willingness to contribute toward the enhancement of a village asset. A first conclusion reached by the authors is that free-riding is not a dominant feature in the village studied: indeed, they found a substantial interest on the part of their respondents in contributing to a collective village asset and in refraining from exploitative behaviour with respect to a village forest or a village grazing area. A second important conclusion is that a majority of the same respondents indicated that their behaviour was not much affected by the likely behaviour of others: 'A clear majority do not free-ride, nor would they if they thought others would' (ibid.: 872). In fact, across the various experiments conducted by the authors, only approximately one-third of the respondents considered the likely actions of others to be decisive in their own resource-use decisions. One should nevertheless be wary of jumping to the conclusion that many Nepalese villagers are unconditional (and therefore irrational) co-operators and that the above-reported evidence offers only weak support for the AG model.
As has been aptly pointed out by Bromley and Chapagain themselves: while the villagers seem to imply that they do not much care about what others intend to do, we believe it is reasonable to assume that the villagers know what is expected of them, and that others know likewise. Hence, while claiming that the actions of others are not generally of concern to them, they may be secure in the knowledge that the resource-use decisions of the others will not be greatly out of line with some accepted norm. We hypothesize the presence of a 'background ethic' or norm that influences collective resource use decisions. This norm has evolved over time as the members of a village struggle with the daily task of making a living. The majority care about the collective welfare, a minority will take more than is 'safe or fair', and both will do so irrespective of what they think others will do. (Bromley and Chapagain, 1984: 872)

In other words, we have here a mixed situation in which a majority of AG players interact with a minority of PD players and where not only the proportion of the former but also the degree of trust among themselves are high enough to induce them to cooperate. In the interpretation of Bromley and Chapagain, norms establish trust without the need for any explicit communication.

7.3 Co-operation in one-shot games

Do we have evidence that people can co-operate when their relationships are anonymous and when they play the equivalent of a one-period game? The answer is 'yes': individuals do not seem to exploit free-riding opportunities in the manner predicted by the PD paradigm. As a matter of fact, reciprocal altruism (altruistic acts performed in the expectation of a future personal gain—that is, in a more exact parlance, selfishness with foresight) and tit for tat cannot explain co-operation in many experiments because the games are played only once or defection simply cannot be detected.

Such findings tend to indicate that people do not necessarily have the preference structure characteristic of the PD, and that normative considerations play an important role in Western societies where the experiments have been conducted (see, in particular, Rapoport and Chammah, 1965; Darley and Latané, 1970; Eiser, 1978).

One of the most well-known studies here is that made by Marwell and Ames (1979). What these authors show is that the free-rider problem rarely prevented groups from making substantial investments in a public good consisting of a group exchange where cash earnings from invested tokens were returned to all the members of the group by a pre-set formula, regardless of who had done the investing. It would seem that normative factors such as fairness influence contribution decisions. Indeed, most of the subjects appear to believe that there is a 'fair' contribution to be made to the public good and this belief influences their decision regarding how much to invest. In the words of the authors, 'what does make a difference in investment seems to be whether subjects are "concerned with being fair" in making their investments' (Marwell and Ames, 1979: 1357).

In the same study, Marwell and Ames have also confirmed Olson's prediction that public goods are more likely to be provided by groups in which some individual member has an interest in the good that is greater than its cost. Their results indeed show that the groups containing such a member invest substantially more in public goods than do other groups. Yet, even though a single member has an interest in providing the public good alone, the other members with a lower interest in this good have also been found to contribute significantly. In other words, in contrast to Olson's notion that 'the weak will exploit the strong', 'these individuals do not particularly take advantage of the fact that they have a high-interest member in the group by reducing their own investments' (Marwell and Ames, 1979: 1355).

Finally, it is worth mentioning that the amounts of individual contributions are not much higher when the group is small (four persons) than when it is large (eighty persons). The 'incentive dilution' argument does not seem to significantly affect the investment decision. Note that the players never interacted with one another, so that it was possible to tell them, in the various experiments, that their group contained any given number of members and to have them make their investment decisions on that assumption.

Victim-in-distress experiments are interesting in that they test people's willingness to help in situations akin to the chicken game (doing nothing is worse than anything else). Thus, in one study, a team of psychologists staged a mock distress scene in a New York subway in order to discover whether people would come to the aid of a fellow passenger who had suddenly collapsed (Piliavin et al., 1969, quoted from Frank, 1988: 217). In one version of the experiment the victim was made to appear seriously ill while, in a second version, the intent was to make him appear drunk. The authors found that, in the first version, the victim received help from at least one passenger in 95 per cent of the cases while, in the second version, he was assisted in as many as 50 per cent of the cases (a good result given that the victim was made to appear clearly responsible for his state of distress).

An interesting fact about this experiment is the following: in the staged distress scenes, there were, on average, more than eight other passengers present at the end of the car where the victim fell. And yet, in almost all of the cases reported under the first version of the experiment, at least one person quickly helped. Moreover, it was found that the likelihood of assistance did not go down as the number of bystanders increased. Therefore, the diffusion-of-responsibility explanation—when there are several observers present, the pressures to intervene do not focus on anyone and the responsibility is shared among all the onlookers, as a result of which each may be less likely to help—does not seem to be valid here (Frank, 1988: 218-19).
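The arithmetic behind the diffusion-of-responsibility argument is worth making explicit: if each of n bystanders helped independently with probability p, the chance that at least one helps would be 1 − (1 − p)^n, which rises with n; diffusion is present when individual willingness falls with group size fast enough that this overall chance declines. A small sketch with assumed probabilities (not estimates from the subway study):

```python
# Bystander arithmetic: if each of n onlookers helps independently with
# probability p, the chance that at least one helps is 1 - (1 - p)**n.
# The values of p below are assumptions for illustration, not estimates
# from the subway study.

def prob_someone_helps(p, n):
    return 1 - (1 - p) ** n

# Constant individual willingness: more bystanders make help MORE likely.
for n in (1, 4, 8):
    print(n, round(prob_someone_helps(0.3, n), 3))

# Diffusion of responsibility: individual willingness shrinks with group
# size (here p = 0.3 / n), so the overall chance of help declines.
for n in (1, 4, 8):
    print(n, round(prob_someone_helps(0.3 / n, n), 3))
```

The subway findings match the first pattern rather than the second: the likelihood of assistance did not fall as bystanders were added.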

What makes the subway experiment resemble the chicken game is that, since bystanders were at a short distance from one another, each could easily know that no one else had come to the victim's aid. And, as Robert Frank observed, 'each person might want very much for someone to help the victim, and yet at the same time not want to be the one to do it. If none of the subway bystanders acted, each would know immediately that the victim was still in jeopardy' (ibid.: 219). This is precisely what differentiates the subway experiment from the tragic story of Kitty Genovese, who screamed for more than half an hour as she was brutally stabbed and raped (in New York City) without any of her thirty-eight neighbours coming to her rescue.

There is another series of experiments that deserves to be mentioned in the present context. They have been conducted by Hornstein et al. (1968). In what is typically a one-shot game, it has been found that an astonishingly high 45 per cent of 'lost' wallets were returned completely intact to their owners in New York City (during the spring of 1968). In large groups, co-operation may therefore arise in spite of the fact that explicit communication and promise-making are impossible. This presumably happens when moral norms serve as a substitute for such direct exchanges of words and promises, as is the case when people adhere to a Kantian ethic. It is precisely the function of moral norms to unite, without the mediation of words, people who do not directly interact with one another.

In addition, Hornstein and his associates were able to show that the return rate was significantly higher when the subjects of the experiments were exposed to a positive attitude of benevolence on the part of a third party. The interpretation offered by the authors is that the third party served as a role model for the subjects. A related lesson is that feelings or sentiments, not reason, motivate human decisions in situations where our own acts have a significant influence on others' well-being: exposure to different kinds of persons or acts (benevolent or malevolent) evokes particular emotions which drive people to behave in certain ways (Frank, 1988: 216). Another experiment which confirms the important function of role models (or the fact that altruism or morality is encouraged by the observation of it) is that reported by Singer (1973). A helpless-looking woman was standing near a car with a flat tyre. It was found that drivers passing this woman were more likely to come to her rescue when they previously had the opportunity to observe helping behaviour in a similar type of situation.

Such experiments would seem to suggest that role models serve as a signalling device reminding people that there are honest people around. The result would be to enhance people's trust in others' predisposition towards fair dealing. Yet, this is to neglect the emotional dimension rightly emphasized by Frank. It is actually more satisfactory to view role models as privileged agents who reactivate emotional capacities associated with primary socialization processes. Moreover, if one still wants to cling to the economists' model of the rational egoist, one may consider that role models have the effect of increasing—or restoring to previous levels—the values of the payoffs attached by people to the outcome of joint co-operation. Note carefully, however, that contrary to a well-established tradition in economic theorizing, this latter interpretation assumes that individual preferences are not stable. It also enables us to better understand the potential role of political leaders in diffusing or reinforcing co-operation-fostering norms. Political leaders now appear as norm reactivators. When they publicly behave in co-operative ways, they naturally arouse in people the emotions associated with that type of behaviour, provided, of course, that people have sufficiently strong feelings of identification with them. Of course, political leaders as role models fulfil other functions than simply reactivating inclinations to co-operate among the people. As we have discussed in Chapter 6 (Sect. 2), they may thus play powerful roles in shaping people's sense of duty and of what is right.