**5.2 Co-ordinated contributions**


An important class of problems arising in connection with the management of common-property resources can be overcome only through symmetric and co-ordinated actions. Examples abound in both appropriation and provision problems. For instance, in a fishery where the use of dynamite is an available technical option, it is obvious that self-restraint must be practised by everybody if the destruction of the fishing-ground is to be avoided. Protection of the breeding-grounds gives rise to the same problem. To quote examples from other sectors, restricted use of fire for the clearing of agricultural lands or management of water-control infrastructures (including control of soil salinity and water-logging problems through sub-surface drainage) also requires co-ordinated actions. Important issues of provision, such as steep-slope management and anti-erosion control in mountainous terrain, programmes of pest control, or certain surveillance actions requiring a critical amount of effort (e.g. guarding coastal fishing-grounds against the encroachments of mechanized boats), obviously belong to the above class of problems.

*The one-shot assurance game*

The game form suitable for representing this kind of situation is known as the assurance game (see Sen, 1967, 1973, 1985; Runge, 1981, 1984b, 1986; Dasgupta, 1988; Taylor, 1987; see also Ullmann-Margalit, 1977: 41; Collard, 1978: 12-13, 36-44, 80-9; Field, 1984: 699-700; Levi, 1988: ch. 3). In this game, a minimal effort must be contributed by all players if they are to receive any benefit from their own action.

To return to a familiar example, consider the case in which two fishermen must independently decide whether to put one or two boats at sea for the catching of fish. Let us assume that their payoffs for the various possible outcomes are as given in Figure 5.11.

The important point to note is that, contrary to what obtains in a PD game, the net payoff accruing to a player when he free-rides on the public good provided by the other player (6 units) is smaller than the net payoff he would receive by co-operating (8 units). Nevertheless, if actors think it best to co-operate with each other, they still find it very unpleasant to be exploited by free-riders: contrary to what is observed in the chicken game, each player prefers mutual defection (where he gets a payoff of 2 units) to being a 'sucker' (which causes him to receive a payoff of only 1 unit). In short, universal co-operation is the most preferred outcome. Then comes generalized free-riding. Least preferred are those outcomes in which a mismatch of actions occurs. This payoff structure actually determines three possible equilibria: two in pure strategies (each fisherman puts out one boat, or each fisherman puts out two boats) and one in mixed strategies. The Pareto-optimal outcome (each fisherman puts one boat out to sea) is only one of the two equilibria in pure strategies. Which equilibrium will be selected actually depends on prior expectations regarding the other's intended action.

**FIG. 5.11. A fishing assurance game**

Clearly, therefore, the best policy for each party depends on what he thinks the other will do. In actual fact, the optimal choice for each fisherman is to put out only one boat if he assesses the probability that the other fisherman will choose the same strategy to be in excess of 1/3, and to put out two boats if this probability is less than 1/3. Denoting by p the probability that the other fisherman puts out one boat, the value of 1/3 is obtained by solving the following equation:

8p + 1(1 - p) = 6p + 2(1 - p)

which establishes the condition for each fisherman to be indifferent between putting out one boat and putting out two boats to sea. (As is implicit from the above equation, the equilibrium in mixed strategy is such that each fisherman puts out one boat with probability 1/3 and two boats with probability 2/3.)
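The indifference calculation can be verified mechanically. The sketch below (in Python, with the payoffs assumed in Figure 5.11) computes the critical probability for a general symmetric two-by-two game of this kind:

```python
from fractions import Fraction

# Payoffs assumed from Figure 5.11 ('co-operate' = put out one boat):
# R = mutual co-operation, T = free-riding on the other,
# P = mutual defection, S = being the 'sucker'.
R, T, P, S = 8, 6, 2, 1

# Let p be the probability that the other fisherman puts out one boat.
# Indifference between the two strategies requires:
#   R*p + S*(1 - p) = T*p + P*(1 - p)
p = Fraction(P - S, (R - S) - (T - P))
print(p)  # 1/3
```

With these payoffs the assurance-game ranking R > T > P > S guarantees that p lies strictly between 0 and 1.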

Thus, there is no certainty that the game will equilibrate at the more favourable of the three (Nash) equilibrium points. It is noteworthy, however, that players need not have complete assurance that others will also co-operate to adopt the same strategy: probabilities significantly smaller than 1 may provide sufficient incentive for co-operation. Still, the possibility exists that the worst equilibrium outcome will emerge even though the assumption of common knowledge implies that *each player knows that the other also prefers the co-operative outcome*. This is because there is a genuine trust problem, that is, a problem of assurance regarding the other person's intended action. Thus, A may know that B would prefer joint co-operation, yet he entertains the fear that B, even though he has corresponding knowledge about his own preference, will choose the maximin strategy ('defect') due to mistrust in what he will himself eventually decide to do. And B can reason in the same way with respect to A's presumed behaviour. *The trust problem is clearly reciprocal since it is basically a problem of mutual expectations*: A may fear that B will abstain from co-operating not because B prefers to free-ride but because B's expectations about his own (A's) behaviour may be pessimistic, and vice versa for B vis-à-vis A.

Now, if some form of rudimentary co-ordination device such as pre-play communication (say, in the form of 'cheap talk') is allowed and if the signals sent by the players are interpretable in an unambiguous way, co-operation or joint contribution by both players is much more likely to arise because the players then have the opportunity to reassure one another and to form optimistic expectations about their mutual behaviours. What is worth emphasizing is that the nature of interactions in small groups is highly conducive to pre-play communication and, therefore, if both players' profile is that of an AG player, the Pareto-superior outcome is very likely to be established even in this one-shot game. (Remember that we have reached the same conclusion, but for repeated games, when we analysed situations structured as PD.)

*Leadership in co-ordination problems*

The uncertainty surrounding the players' decisions in a co-ordination problem is overcome as soon as either of the two players can take the initiative in the game with a view to signalling to the other his intention to co-operate. In game-theoretical terms, a particular way of representing the possibility of leadership is by specifying a two-stage assurance game. When the game is played in such a fashion, co-operation by both players is certain to occur: indeed, knowing that the other player will follow suit, the leader has an incentive to make a co-operative move. In other words, by co-operating in the first stage of the game, the leader does not incur the least risk of being 'exploited' by the follower. The outcome (co-operate, co-operate) is clearly a subgame-perfect equilibrium. This is illustrated in Figure 5.12 in which the same payoffs as those assumed in Figure 5.11 have been represented in an extensive form.
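The backward-induction argument can be illustrated with a short sketch; the payoff table and the move labels ('C' for one boat, 'D' for two) are taken from the example above:

```python
# Backward induction on the two-stage assurance game of Figure 5.12.
# Payoffs as in Figure 5.11; 'C' = put out one boat, 'D' = two boats.
PAYOFFS = {  # (leader move, follower move) -> (leader payoff, follower payoff)
    ('C', 'C'): (8, 8), ('C', 'D'): (1, 6),
    ('D', 'C'): (6, 1), ('D', 'D'): (2, 2),
}

def follower_reply(leader):
    # The follower observes the first move and best-responds to it.
    return max(['C', 'D'], key=lambda f: PAYOFFS[(leader, f)][1])

def leader_move():
    # The leader anticipates the follower's reply in each subgame.
    return max(['C', 'D'], key=lambda l: PAYOFFS[(l, follower_reply(l))][0])

first = leader_move()
print(first, follower_reply(first))  # C C
```

Because the follower's best reply to 'C' is 'C', the leader can co-operate without any risk of being exploited, which is exactly the subgame-perfection claim in the text.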

If a pure problem of distrust (such as is implicit in the assurance game) can be easily surmounted as soon as one of the players can send a signal or make a first move to the effect that he is determined to co-operate, then, a fortiori, the same problem is solved when the game can be repeated. Assume, for instance, that one of the players follows a cautious strategy (start by defecting and, thereafter, co-operate only if the other player has co-operated in the previous round). The other player's best reply to that strategy is obviously not to replicate it but, instead, to start by co-operating (say, because he follows a strategy of unconditional co-operation) so as to trigger an uninterrupted chain of universal co-operation. Clearly, the cautious strategy is not a Nash equilibrium strategy. However, a 'bad' strategy such as unconditional defection is a best reply to itself and therefore supports a Nash equilibrium. (This obviously follows from the fact that, if AG players like best to co-operate, they do not want to be 'exploited'.) What needs to be stressed is that such a strategy is not subgame-perfect since, if by mistake a player co-operates, the other player's best response to that mistake is to co-operate, thus deviating from his Nash equilibrium path. To put it another way, the commitment of one player to unconditional defection is not credible. Notice that the possibility of a co-operative outcome in such a repeated game is an application of the aforementioned folk theorem and its extension by Benoit and Krishna (1985). To give a simple example, a repeated assurance game underlies the observation that in lobster fisheries molesting another fisherman's traps is rarely done, because by refraining from doing so a fisherman improves the chances that his own traps will not be molested (Sutinen, Rieser, and Gauvin, 1990: 341).
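As a minimal illustration of the first claim, the following simulation (with strategy names of our own choosing) pits the cautious strategy against an unconditional co-operator and shows co-operation taking hold from the second round on:

```python
# Five rounds of the repeated assurance game: a 'cautious' player (defect
# first, then copy the other's previous move) against an unconditional
# co-operator. The strategy labels are illustrative, not the author's.
def cautious(opponent_history):
    return 'C' if opponent_history and opponent_history[-1] == 'C' else 'D'

def unconditional_cooperator(opponent_history):
    return 'C'

h1, h2 = [], []  # move histories of the cautious player and the co-operator
for _ in range(5):
    m1 = cautious(h2)
    m2 = unconditional_cooperator(h1)
    h1.append(m1)
    h2.append(m2)

print(h1)  # ['D', 'C', 'C', 'C', 'C']
print(h2)  # ['C', 'C', 'C', 'C', 'C']
```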

**FIG. 5.12. A sequential assurance game**

*Threshold effects and free-riding in N-player co-ordination games*

An interesting feature that arises in connection with co-ordination problems is the existence of threshold effects. As a matter of fact, in many cases, a collective action can bear fruit only if the number of contributors reaches a critical size. To analyse such situations, we need to consider an N-player assurance game. Let us assume that a given public good (say, the maintenance and management of an irrigation system) yields individual benefits to each member of a group equal to b(m), where m stands for the number of voluntary contributors. Each contributor incurs a fixed cost of c units and, therefore, the total cost for the group is equal to c × m. The choice facing player i can then be represented as in Figure 5.13.

First assume that both b′ and b″ are positive, implying increasing returns to provision of the public good. Assume also that b(1) - c < 0, so that if no other player contributes to the public good, player i also chooses not to contribute. Yet, there exists a critical size m* such that b(m*) - c > b(m* - 1), or c < b(m*) - b(m* - 1): once a certain number, m*, of other players agree to contribute, player i has an incentive to follow suit since the cost of individual contribution is less than the marginal individual benefit of that contribution. It is evident that, since b″ > 0, if b(m*) - c > b(m* - 1), then b(j) > b(j - 1) + c, ∀ j > m*. Therefore, as long as at least m* other players contribute, player i prefers to co-operate rather than free-ride.
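To make the threshold concrete, here is a sketch under an assumed convex benefit function b(m) = m²/10 and cost c = 1; the functional form and the numbers are illustrative only:

```python
# Critical mass m* in the N-player assurance game, under an assumed convex
# benefit function b(m) = m**2 / 10 and individual cost c = 1.
def b(m):
    return m * m / 10

c, n = 1.0, 10

assert b(1) - c < 0  # a lone contributor loses, as assumed in the text

# m* is the smallest m at which the marginal individual benefit of one
# more contributor, b(m) - b(m - 1), exceeds the individual cost c.
m_star = next(m for m in range(1, n + 1) if b(m) - b(m - 1) > c)
print(m_star)  # 6
```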

**FIG. 5.13. An N-player assurance game**

In the above game, there are two Nash equilibria in pure strategies. The first equilibrium is characterized by universal defection: given that no one else contributes, player i has no incentive to undertake the collective action alone (we are therefore not in a chicken game). The second equilibrium is characterized by the fact that the collectively optimal level of the public good is provided: in that equilibrium, everybody contributes. To avoid falling into the 'bad' equilibrium, a subgroup of players may decide to undertake the collective action in concert, regardless of what the others do. Here lies an important rationale for leadership, and the function of the leader consists in mobilizing a sufficient number of contributors rather than setting a good example as assumed in the previous subsection.
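The two equilibria can be checked directly. The sketch below uses an assumed convex benefit function b(m) = m²/10 with c = 1 and n = 10 (illustrative numbers only) and tests every contributor count for stability:

```python
# Checking the two pure-strategy Nash equilibria of the N-player game,
# under an assumed convex b(m) = m**2 / 10 with c = 1 and n = 10.
def b(m):
    return m * m / 10

c, n = 1.0, 10

def is_equilibrium(m):
    """With m contributors: no contributor gains by quitting and no
    defector gains by joining."""
    contributor_stays = (m == 0) or (b(m) - c >= b(m - 1))
    defector_stays_out = (m == n) or (b(m + 1) - c <= b(m))
    return contributor_stays and defector_stays_out

print([m for m in range(n + 1) if is_equilibrium(m)])  # [0, 10]
```

Only universal defection and universal contribution survive: at any intermediate m, either a contributor wants to quit or a defector wants to join.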

It deserves to be noted that an interesting problem which can be raised within the framework of an N-player assurance game is actually a limit case of that analysed above, namely the case in which b(j) = 0, ∀ j < n, and b(n) > c. In other words, the collective action can succeed, or the public good can be provided, only if everyone participates; if only a single agent defects, the public good disappears. The protection of an endangered species or of a breeding-ground illustrates a possibility that fits perfectly with the description of what an assurance game is about.

Let us now consider the case where there are decreasing returns to scale in the provision of the public good: b″ is negative. In this case, there again exists a critical number of contributors, m*, below which no individual player has any incentive to contribute. Yet, there now also exists an upper threshold number of contributors, say m**, beyond which the individual marginal benefit of contributing falls short of cost c. The two Nash equilibria in pure strategies are easy to identify: the 'bad' equilibrium in which nobody contributes and a 'nice' equilibrium in which just m** players contribute while the others defect. As long as the size of the group, n, is small (below m**), everyone participates in the collective action under the 'nice' equilibrium. However, in large groups whose size exceeds the threshold m**, the public good is only partially produced by a subgroup of players and the amount provided is not Pareto-optimal. It is actually less than the collectively rational amount, which would require m0 contributors, with m0 = argmax_m [nb(m) - mc]. The collectively rational (co-operative) outcome requires that the collective marginal benefit equal the marginal cost c, that is, n[b(m0) - b(m0 - 1)] = c. It is to be compared with the individually rational (Nash) outcome, m**, which is by definition such that b(m**) - b(m** - 1) = c. Bearing in mind the assumption of decreasing returns to public-good provision, it is evident that m0 > m**.
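The comparison between m** and the collectively rational number of contributors can be illustrated numerically; the saturating benefit function, cost, and group size below are assumptions chosen purely for illustration:

```python
# Nash threshold m** versus collectively rational m0, under an assumed
# saturating benefit function b(m) = 10*(1 - 0.8**m), with c = 1, n = 20.
def b(m):
    return 10 * (1 - 0.8 ** m)

c, n = 1.0, 20

def marginal(m):
    return b(m) - b(m - 1)  # marginal individual benefit

# m**: the largest m whose marginal individual benefit still covers c.
m_star2 = max(m for m in range(1, n + 1) if marginal(m) >= c)

# m0 = argmax over m of [n*b(m) - m*c], the group-surplus-maximizing size.
m_0 = max(range(1, n + 1), key=lambda m: n * b(m) - m * c)

print(m_star2, m_0)  # 4 17 -> m0 exceeds m**: equilibrium provision is suboptimal
```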

In the latter circumstances (the group is large and n > m**), a fraction of the players do not contribute in equilibrium and free-ride on the others' efforts. Of course, the wider the gap between the size of the group, n, and the equilibrium threshold number of contributors, m**, the larger the proportion of free-riders. In actual fact, the problem facing the players resembles that of an N-player chicken game, in which the Nash equilibrium would be suboptimal.

In community settings, a large proportion of such free-riders may cause serious tensions to arise. The community may nevertheless overcome these tensions. Thus, it may resort to a co-ordinated solution which has the effect of rotating the burden of contributions among the various agents over time. One option here is to use a correlated-equilibrium solution in which contributors are selected through a lottery mechanism. The community may also ensure that those who contribute to a given collective action are allowed to abstain from participating in other collective actions, so as to distribute the costs of public-good provision equally over a series of different activities. If solutions of the above kind are not applied, an exclusionary process is likely to ensue. This is apparently the case referred to by Ostrom and Gardner (1993) when analysing the Thambesi irrigation system in Nepal. Here, as pointed out earlier, maintenance of the headworks can be carried out by a limited number of the water users and, in particular, the work can be done by head-enders alone. The implication of this situation is that tail-enders may find themselves in a weak bargaining position whenever important matters are to be discussed (Ostrom and Gardner, 1993: 97-9).
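The rotation device mentioned above can be sketched as a simple schedule; the member labels and numbers are hypothetical:

```python
from itertools import cycle
from collections import Counter

# Rotating the burden of contributions: if only a few contributors are
# needed per period, cycle through the membership so that everyone
# serves equally often. Member labels and numbers are hypothetical.
def rotation_schedule(members, m_needed, periods):
    order = cycle(members)
    return [[next(order) for _ in range(m_needed)] for _ in range(periods)]

members = list(range(8))  # eight water users
schedule = rotation_schedule(members, m_needed=2, periods=8)

counts = Counter(name for period in schedule for name in period)
print(sorted(counts.values()))  # [2, 2, 2, 2, 2, 2, 2, 2]
```

Over eight periods of two contributors each, every member serves exactly twice, which is the equal-burden property the rotation is meant to deliver.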