Holding Pandora: Cooperation, Law, and God

By MatTehCat | MatTehCat's Blogs | 8 Feb 2023


 

Introduction

“Logic may indeed be unshakeable, but it cannot withstand a man who is determined to live.”

― Franz Kafka, The Trial

Paul sat on the edge of a cliff, looking over the precipice. The sun was setting behind him as the night sky and stars came rolling in. The blend of both day and night cast a purple hue about the heavens. He looked down at the churning seas beneath the cliff, clashing against the rocky shore, casting a violent cacophony of jade and eggshell white. There was pain in his heart, tears in his eyes, though his face showed no grief. “How can I go on?” he quietly whispered to himself. He looked up, towards the horizon; the sea merged with the heavens. He recalled a poem from his youth, one about a lost sailor who was hurled into the protean sea and stars, only to be saved by a mermaid who carried him back to shore. “Why would she do that?” he whispered. “Why would she save him? Why would they lie?”

The last blog post I wrote covered the topic of reason. I am not sure I answered the core question of that paper: “What is reason?” or “What does it mean to be reasonable?” However, I think that may have been the point. I concluded that societies were predicated on a desire for power, control, and the will of a people. Perhaps this is true; it probably is. Yet something about this still doesn’t paint the whole picture for me.

Over this past week, I took the time to read more about the emergence of cooperation through game theory models. Essentially, it was clear to me, and it’s clear from the literature, that social engagement is predicated on something like a game. In fact, I think it is safe to say that most life engages with itself and its environment in a way that can be classified as a game.

There are certain rules that define the nature of the game, which were covered in the last blog post I wrote, and these rules define the limits of the game. But within those limits is an open question: what’s the goal? This is partly the issue that the last paper left unresolved. While reason may be a vehicle to justify control, the will of a people, or power, that desire for power, the will of a people, or control must still be aimed at some end. It could be happiness, liberty, and life, but are these sufficient goals, or are they only the necessary product of a larger goal? This is the question I aim to answer, and I think I can.

I will begin by exploring some of the conditions of the meta-game that I think define social interactions and what they mean about the nature of the social interactions that take place every day. I will then explore some interesting games that I read about this past week that I think define the meta-game defining all other games. Penultimately, I will discuss how this relates to law. Lastly, I will conclude the paper by synthesizing what this means existentially and philosophically, and hopefully, I will be able to answer my central question.

 

What are the Rules?

In order to cooperate, there must be certain rules of engagement. If there were no rules of engagement constraining the field of play, cooperation would be impossible because there would be absolutely nothing constraining the kinds of behavior from one moment to the next; i.e., it would be utterly impossible to predict or be confident about your partner’s action.

Hence, we need some rules. Thankfully, Martin Nowak has provided us with an outline of the kinds of rules that give rise to cooperative behavior. Overall, he covers five of the rules found throughout many of the games discussed in the literature on the emergence of cooperative behavior. He highlights the key takeaway from many of the studies I will be reviewing later on, mainly: “a population of only cooperators has the highest average fitness, while a population of only defectors has the lowest” (Nowak, 2006). This is deceptively simple. In a world full of alternatives, different desires, and goals, how do you get people to cooperate? Regardless, as Nowak notes: “the fitness of individuals depends on the frequency… of cooperators in the population” (ibid).

 

First and foremost, there is kin selection, a necessary but by no means sufficient ingredient for cooperation. Here’s the idea, as Nowak notes: “natural selection can favor cooperation if the donor and the recipient of an altruistic act are genetic relatives” (ibid). Very simply, the relatedness of any two individuals must be greater than the cost-to-benefit ratio of the cooperative act:

r>c/b

However, this isn’t enough to fully explain cooperation. Cooperation may also occur between unrelated individuals or members of different species, as in the relationships humans have with cats, dogs, camels, horses, cows, sheep, and so on. In a game of direct reciprocity, both players may cooperate, both may defect, or one may cooperate while the other defects. Mutual cooperation yields the greatest payoff for both and mutual defection the least, while unilateral defection produces the highest payoff of all for the defector, higher than what he would earn by cooperating. Hence, there’s a dilemma: both should defect, yet if both cooperate, they earn more than if both defected.

There are many different strategies for this game. One is tit-for-tat; another is win-stay, lose-shift, which typically proves more effective than the former. Win-stay, lose-shift is simple: repeat your last move if it paid off, and switch if it did not. Nowak explains the relationship between the two as follows: “Tit-for-tat is an efficient catalyst of cooperation in a society where nearly everybody is a defector, but once cooperation is established, win-stay, lose-shift is better able to maintain [cooperation]” (ibid).
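To make the two strategies concrete, here is a minimal sketch of a noisy iterated Prisoner’s Dilemma. The payoff values (T=5, R=3, P=1, S=0) are the conventional textbook ones, assumed by me rather than taken from Nowak’s paper; the point is only to show the decision rules, not to reproduce his model. With occasional mistakes, a pair of win-stay, lose-shift players recovers mutual cooperation more readily than a pair of tit-for-tat players.

```python
# A minimal sketch of two direct-reciprocity strategies in an iterated
# Prisoner's Dilemma with occasional mistakes.  Payoffs (T=5, R=3, P=1, S=0)
# are conventional values, not taken from Nowak (2006).

import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_last, their_last):
    # Cooperate first, then copy the co-player's previous move.
    return "C" if their_last is None else their_last

def win_stay_lose_shift(my_last, their_last):
    # Repeat the previous move after a good payoff (R=3 or T=5), switch otherwise.
    if my_last is None:
        return "C"
    won = PAYOFF[(my_last, their_last)][0] >= 3
    return my_last if won else ("D" if my_last == "C" else "C")

def play(strategy_a, strategy_b, rounds=200, noise=0.05, seed=0):
    random.seed(seed)
    a_last = b_last = None
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(a_last, b_last)
        b = strategy_b(b_last, a_last)
        # With small probability a move is executed wrongly ("trembling hand").
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        a_last, b_last = a, b
    return score_a, score_b

print("TFT vs TFT:  ", play(tit_for_tat, tit_for_tat))
print("WSLS vs WSLS:", play(win_stay_lose_shift, win_stay_lose_shift))
```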

There are also limits to the emergence of direct reciprocity. I.e., only if the probability, w, “of another encounter between the same two individuals exceeds the [cost-to-benefit] ratio” (ibid) of the altruistic act can direct reciprocity emerge:

w>c/b

 

Thirdly, there’s indirect reciprocity. I.e., by directly cooperating with others, including other animals, we gain a reputation, and we desire to maintain that reputation. Indirect reciprocity suggests that players are more likely to cooperate with others based on the reputation of the person receiving the help: the better a person’s reputation, the more likely they are to receive help from a giver, and the worse their reputation, the less likely they are to receive it.

Interestingly, indirect reciprocity seems to be mainly a human phenomenon, possibly because it depends on significant cognitive capabilities. Language, or the ability to engage in abstract communication, also seems to be necessary for the emergence of indirect reciprocity. And, as has been noted, “the evolution of morality and social norms” also seems to be a product of indirect reciprocity (Alexander, 1987; Ohtsuki and Iwasa, 2004; Brandt and Sigmund, 2004).

Like our other rules, for indirect reciprocity to work, i.e., for it to promote cooperation, “the probability, q, [of knowing] someone’s reputation [has to] exceed the cost-to-benefit ratio of the altruistic act”:

q>c/b

Effective cooperation is also dependent on another rule: network reciprocity. Effectively, network reciprocity occurs when individuals ‘cluster’ together and then help each other. This is the direct result of the fact that spatial structures emerge from repeated interactions between two cooperators or members of a group. One rule defines whether network reciprocity can lead to cooperation: “the benefit-to-cost ratio must exceed the average number of neighbors, k, per individual”:

b/c>k

Lastly, there is group selection. Group selection is a controversial selection process, but I see no legitimate reason as to why it is not a potential mechanism by which selection for cooperation can occur. Group selection also goes by the name ‘multi-level’ selection.

In a group selection model, one group is divided into two. As has been previously mentioned, cooperators cooperate, defectors defect, and individuals are more likely to reproduce based on their net return. “If a group reaches a certain size, it can split into two” (ibid). Because an environment can only hold so many members, when this occurs, another group has to be sacrificed. This means that there is competition between groups, specifically because some groups reproduce more efficiently than others, being able to net greater returns. Groups of cooperators reproduce more effectively on their own, and groups of defectors less effectively; therefore, cooperators succeed in the competition of group selection. The difficulty: ensuring your group of cooperators is not invaded by defectors, who gain the upper hand in a group of only cooperators. Group selection is constrained by a simple rule: “if n is the maximum group size and m the number of groups, then group selection allows the evolution of cooperation,” i.e.:

b/c>1+n/m

For example, if b=2, c=1, n=2, m=4, then

2/1>1+2/4

i.e.,

2>1.5
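Taken together, the five conditions are easy to check mechanically. The sketch below simply evaluates each inequality; the group-selection numbers are the ones from the example above, while the remaining parameter values are illustrative choices of my own, not taken from Nowak.

```python
# A bookkeeping sketch of Nowak's five rules, evaluated for illustrative values.
# b = benefit, c = cost, r = relatedness, w = probability of repeat encounter,
# q = probability of knowing a reputation, k = average number of neighbours,
# n = maximum group size, m = number of groups.

def five_rules(b, c, r, w, q, k, n, m):
    return {
        "kin selection (r > c/b)":         r > c / b,
        "direct reciprocity (w > c/b)":    w > c / b,
        "indirect reciprocity (q > c/b)":  q > c / b,
        "network reciprocity (b/c > k)":   b / c > k,
        "group selection (b/c > 1 + n/m)": b / c > 1 + n / m,
    }

# Group-selection values from the worked example (b=2, c=1, n=2, m=4),
# plus made-up values for the remaining parameters:
for rule, holds in five_rules(b=2, c=1, r=0.6, w=0.4, q=0.8, k=3, n=2, m=4).items():
    print(f"{rule}: {holds}")
```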

These are the five rules that effectively constrain the development of cooperation in a species. However, what is so fascinating, as we shall see, is that none of these processes are rational. Andrew M. Colman, in what is effectively a review of the literature on cooperation, astutely identifies the main issue explored in the last blog I wrote: societies are not predicated on reason; they cannot be.

Implicit in many early game-theoretic models was the assumption that rationality, reasonability, was the norm, whereas irrationality was “abnormal or pathological” (Colman, 2003). This view, it seems, cannot hold if cooperation is to exist; if it did hold, cooperation would not work. Yet it does work.

 

A Leap of Faith

All of the rules we reviewed are different strategies that, under different circumstances, can lead to cooperative behavior. If “most members of a population adopt [an evolutionarily stable strategy], then no mutant strategy can invade the population by natural selection” (ibid, 140). An evolutionarily stable strategy is effectively a Nash equilibrium, but not every Nash equilibrium is an evolutionarily stable strategy. Effectively, the emergence of an evolutionarily stable strategy in a population doesn’t need to be, and likely isn’t, the result of rationalistic processes.

Three conditions define rational preferences: completeness, transitivity, and context-free ordering. Completeness: for any pair of alternatives in a set, an individual either prefers one of the two or is indifferent between them. Transitivity: given three alternatives, if the first is preferred to the second and the second to the third, then the first is preferred to the third. Context-free ordering: if an individual prefers one alternative to another, that preference holds regardless of what other alternatives are available. Colman notes that human decisions “frequently violated [the transitivity and context-free ordering]” (ibid, 141) conditions. I.e., people frequently do not act in conventionally rational ways. Together, these conditions are known as the ‘weak ordering principle.’

Colman defines reason, citing Russell (1954), as follows: ‘Reason has a perfectly clear and precise meaning. It signifies the choice of the right means to an end that you wish to achieve. It has nothing to do with the choice of ends.’ Again, the correct choice is defined by your goal, and your goals are defined by what you desire or will, such that your choice is what you desire or will, and your desires and will define your choice. Notice, once again, the seeming tautology of reason. What’s fascinating about this definition is that “a player can express a preference or indifference not only between any pair of outcomes, but also between an outcome and a gamble involving a pair of outcomes, or between a pair of gambles, and that the weak ordering principle applies to these preferences” (ibid). I.e., a person may not only choose between two alternatives within a set defined by a game, but may also have a preference for one game, and the outcomes that game offers, over another.

Colman defines a choice as reasonable or rational if “no alternative yields a preferable outcome” (ibid, 141). Again, we shall see that cooperation cannot rely on this definition of reasonable or rational.

The irrationality of cooperation can be best expressed through game theory. Game theory is “applicable to any social interaction involving two or more decision makers… each with two or more ways of acting… so that the outcome depends on the strategic choices of all the players, each player having well-defined preferences among the possible outcomes, enabling corresponding [payoffs] to be assigned” (ibid, p 142). These games are valid if their reasoning is sound. They are idealized versions of real scenarios. These games can fail to make accurate predictions if they ignore essential features of a scenario. “To be both relevant and useful,” Colman notes, “a game must incorporate the important properties of the interaction and must also generate inferences that are not obvious without its help” (ibid).

These kinds of games can be both normative and positive. A normative game is conditional: it prescribes what players should do given their preferences, and the objectives of game theory are primarily normative; the players in these games seek to maximize their payoffs. Game theory is positive in so far as the players of a game are limited by their nature. Game theory “becomes a positive theory by the addition of a bridging hypothesis of weak rationality, according to which people try to do the best for themselves in any circumstances.” People err from perfect rationality because of 1. “computational limitations or bounded rationality”; 2. “incomplete specification of problems”; or 3. “systemic irrationality” (ibid).

These games are predicated on a few assumptions as well. The main assumptions concern common knowledge and rationality (CKR). Common knowledge (CKR1) is defined as “the specification[s] of the game, including the players’ strategy sets and payoff functions in the game,” plus everything that can be deduced logically from it and from CKR2. CKR2 assumes that the “players are rational in the sense of expected utility (EU) theory… hence they [allegedly] always choose strategies that maximize their individual expected utilities, relative to their knowledge and beliefs at the time of acting” (ibid). As Lewis introduced the notion in 1969, and Aumann later formalized it, “A proposition is common knowledge among a set of players if every player knows it to be true, knows that every other player knows it to be true, knows that every other player knows that every other player knows it to be true, and so on.” There are significant reasons for doubting the validity of common knowledge and rationality.

For two players to cooperate, they must settle on an equilibrium point. An equilibrium point “is a pair of strategies that are best replies to each other,” a best reply being a strategy that maximizes a player’s payoff, given the strategy chosen by the other player (ibid, 143). If a game has a ‘uniquely rational’ solution, it must be an equilibrium point. Both players are generally recognized as being utility-maximizers. To deduce something rational about a game, that deduction must be a kind of common knowledge. This implies that “if it is uniquely rational for Player I to choose Strategy X and Player II Strategy Y, then X and Y must be best replies to each other because each player anticipates the other’s strategy and necessarily chooses a best reply to it.” Secondly, because “X and Y are best replies to each other, they constitute an equilibrium point by definition.” Thus, “if a game has a uniquely rational solution, then it must be an equilibrium point.” Sometimes, however, collective common knowledge and reasoning are insufficient “to allow players to reason to an equilibrium solution” (ibid).

Specifically, it’s not quite clear what a rational player should do. In other words, what exactly the best rational choice is can be indeterminate. For example, it is alleged that a fundamental solution is the core (Gillies, 1953). The core of a cooperative game is “the set of undominated outcomes” (ibid, 144). The core is satisfied if 1. “it includes only divisions of the payoff such that the players receive at least as much as they could guarantee by acting independently”; 2. “every proper subset of the players receives at least as much as it could guarantee for itself by acting together”; and 3. “the totality of players receives at least as much as it could guarantee for itself by acting collectively as a grand coalition so that nothing is wasted” (ibid). In many games, the core is empty, i.e., no outcome satisfies these conditions. For example, “if three people try to divide a sum of money among themselves by majority vote, then any proposed division can be outvoted by a two-person coalition with the will and the power to enforce a solution that is better for both of its members.” This is precisely what was identified with the issue of “reasonability.” Effectively, democracy seems to be, by this conception of rational and reasonable, actually pathological.

Another issue is the issue of focal points. If both players recognize a choice as a focal point, and if both recognize this as common knowledge by common reasoning, then both will choose this focal point, coordinating appropriately. But again, how do they settle on this focal point? Some argue that a focal point “emerges from its representation within the common language shared by the players” (Crawford and Haller, 1990). If this common language were removed, or if it simply were not common, then the goal or focal point would be filtered out, “reducing the prospects of coordination to chance levels” (italics added, ibid). Again, if the language defining the goal is imprecise or unclear, or if there is no common language, i.e., if two people speak completely different languages and cannot communicate, there can be no focal point between them. And if neither player can expect the other to choose the strategy that is optimal for both, then neither will choose that strategy; i.e., the choice, if it is made, is irrational.

Lastly, rational players have to deal with the payoff-dominance principle, which “is the assumption that if one equilibrium point payoff-dominates all others in a game, then rational players will play their parts in it” (ibid, 145). For example, “the fact that [a choice] is the optimum equilibrium (indeed, the optimum outcome) for both players is not ipso facto a reason for Player I to expect Player II to [make the optimum choice], because [the optimum choice] is not a utility-maximizing [or value-maximizing] choice for Player II in the absence of any reason to expect Player I to choose it, and vice versa” (ibid, 145). In other words, there’s a dilemma. Effectively, this means that collective common knowledge, reason, and rationality, coordinated by a goal, should be impossible. This is really something worth noting: effectively, as a system of coordinated cooperation, complex societies should not exist.

In prisoner dilemma games, and in many other games that we shall explore later on, rationality is regarded as self-defeating. I.e., if both players choose what is most rational for them, both will defect, because each stands to benefit most by defecting. However, if they try to cooperate, there’s no assurance the other player will cooperate, and if one player does cooperate, the other player will benefit from defecting and thus will do so. Thus, neither player has an incentive to cooperate if they cannot settle on cooperating. The many examples and explanations Colman provides for this phenomenon are fascinating (ibid, 145-7). However, they force us to ask some essential questions.

Effectively, game theory cannot be a positive theory, and from a normative point of view, it is rationally “self-defeating in social dilemmas” (ibid, 147). Because the entire goal of game theoretic models is utility maximization, this creates a paradox. I.e., if one were to try to maximize their utility, they would fail to do so; if everyone tries to maximize their utility, no one can. When examining games of backward induction, this problem gets even worse.

Backward induction “is a form of reasoning resembling mathematical induction, an established method of proving theorems.” If one were to engage in backward induction in a finitely repeated game, this would be the logic: 1. “defection is the rational strategy in a one-shot [prisoner dilemma game (PDG)]”; 2. “there could be a reason for cooperating on the first round if it might influence the outcome of the second round”; however, 3. “in the second round, there are no moves to follow and thus no future effects to consider.” As a result, “the second round is effectively a one-shot PDG, and its rational outcome is joint defection.” Because the outcome of the “second round is predetermined, and the first round cannot influence it, the first round is also effectively a one-shot PDG.” I.e., if you are a rational player, you should not cooperate on the first round; you should defect, because cooperating will not alter the second round (ibid, 148). This follows for any PDG with a finite number of rounds. However, this seems counterintuitive. Most people would cooperate, and intelligent players tend to cooperate (Andreoni & Miller, 1993; Cooper et al., 1996; Selten and Stoecker, 1986). Both the Chain-store Game and the Centipede Game serve as excellent examples demonstrating the implications and reality of this phenomenon.

By the logic of the first, all players should effectively cooperate with each other – i.e., it is impossible to aggressively deter an opponent because, in the end, you will be overcome by an opponent in the final round. In the latter, there’s no sense in cooperating to form the centipede chain because, if one of the two players is rational, he will defect in the final round and earn more for himself than the initial player. Following this logic backward, there’s no incentive for the first player to cooperate; he will always defect. Thus, rational players, in long-form games, will not cooperate and cannot be trusted. Fascinatingly, players still cooperate in these games, and those who do end up earning more than the rational players who recognize it is futile to do so if they hope to maximize their earnings. I.e., the rational position is effectively the stupid position to take in these circumstances (ibid, 149).
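As a concrete illustration of how backward induction unravels cooperation, here is a minimal sketch of a linear centipede game. The pot sizes and shares are made-up values of my own, not taken from Colman; working back from the last node, the purely ‘rational’ mover takes at the very first opportunity, even though both players would earn far more by passing for a while.

```python
# A minimal sketch of backward induction in a centipede game (pot sizes and
# shares are illustrative, not taken from Colman).  At each node the mover may
# "take" (ending the game with the larger share of the current pot) or "pass"
# (the pot grows and the other player becomes the mover).

def solve_centipede(rounds=6, pot=4.0, growth=2.0, big_share=0.8):
    """Return (take at the first node?, (first mover's payoff, other's payoff))."""
    take = [True] * rounds
    value = [None] * rounds                  # value[i] = (mover, other) payoffs at node i
    last_pot = pot * growth ** (rounds - 1)
    value[-1] = (big_share * last_pot, (1 - big_share) * last_pot)  # last mover always takes
    for i in range(rounds - 2, -1, -1):
        current = pot * growth ** i
        take_now = (big_share * current, (1 - big_share) * current)
        mover_next, other_next = value[i + 1]   # roles swap if the mover passes
        pass_now = (other_next, mover_next)
        take[i] = take_now[0] >= pass_now[0]    # the mover maximizes only her own payoff
        value[i] = take_now if take[i] else pass_now
    return take[0], value[0]

print(solve_centipede())   # taking immediately: cooperation unravels to the first move
```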

The persistence of cooperation can be explained psychologically by three mechanisms: Team Reasoning, Stackelberg Reasoning, and Epistemic and non-monotonic reasoning.

Team Reasoning helps to explain how players choose a focal point, i.e., how they settle on a goal. “A team-reasoning player maximizes the objective function of the set of players by identifying a profile of strategies that maximizes their joint or collective payoff, and then, if the maximizing profile is unique, playing the individual strategy that forms a component of it” (ibid, 150). I.e., the team-reasoning player forms a collective representation of the desires of all players and chooses the option that maximizes their joint earnings as if it were maximizing his own. Colman takes note of this: “a conclusion about what an individual should do can follow directly, without the interposition of any assumptions about what that individual wants or seeks… no single individual’s aims need be referred to… [implying] the existence of collective preference” (ibid). Team reasoning is plausible, but it obviously cannot guarantee cooperation; it is simply necessary for there to be cooperation.

Secondly, we have Stackelberg Reasoning. “The basic idea is that players choose strategies that maximize their individual payoffs on the assumption that any choice will invariably be met by the co-player’s best reply, as if the players could anticipate each other’s choices” (italics added, ibid, 151). Obviously, the players cannot actually foresee their co-player’s choices. However, when they act as if they do, it is more likely that they match the equilibrium focal point of their partner. There is good evidence for this phenomenon in games like PDGs and Stag Hunt games (Colman and Stirk, 1998), which tend to be simpler than games like Battle of the Sexes and Chicken, which showed “no significant biases [towards Stackelberg Reasoning] and very small effect sizes” (Colman, 2003, 151).

Epistemic reasoning is based on common beliefs. I.e., a player only has reason to believe that another player will defect if there is evidence or facts that support that belief. As long as a player believes a proposition to be true, other players also believe it to be true, and there are no apparent inconsistencies with that belief, then the player may hold that belief and use it to cooperate with his fellow players. This is a kind of non-monotonic reasoning, a kind of reasoning “in which premises may be treated as default assumptions, subject to belief revision in the light of subsequent information” (ibid, 151). Essentially, as long as a player’s beliefs are fact-based, and those facts are relevant to the choice, one player may estimate the likelihood that another player will cooperate with them, aiming for the same focal point and equilibrium. If Player I has cooperated once, for example, Player II can expect him to cooperate again, and as long as Player II seeks to maximize his earnings by cooperating, then he will cooperate as well. If he defects, then Player I has reason to defect and the game ends; both players lose more than they could have earned. Effectively, this explains why players in centipede games outcompete their more ‘rational’ opponents: they are relying on common belief.

Ultimately, we may draw several conclusions from these kinds of findings. I hope I’ve made this clear, but I will reiterate: societies cannot be predicated on rational considerations about another person’s or group’s willingness to cooperate. Secondly, people can use language to aim for a focal point, but this seems to be necessary and yet insufficient to establish cooperation. Cooperation between any two parties must, at least in part, be predicated on faith. They can form beliefs from the facts that constitute another group or person, and they can use those beliefs to estimate the likelihood that they’ll cooperate to achieve a common goal, but it’s not clear they can ever be certain of this common goal. They can also act as if they have foreknowledge of another group’s actions, and when they do so, they essentially take a leap of faith. And lastly, they can act as if their interests are their fellow members’ interests, and by doing so, assume that their fellow players will all be aiming for the same goal.

When individuals are related to each other, and as long as their relatedness is greater than the cost-to-benefit ratio, cooperation between players may be established. Once cooperation is established, societies can become more complex, and when members of groups are no longer related, they can engage in direct reciprocity as long as the probability that two individuals will meet again exceeds the cost-to-benefit ratio. This could be the result of the fact that, over time, you’re better able to predict the goals of your non-related cooperator, and as such, are more likely to cooperate with them and to have them reciprocate that cooperation.

If you can gain a reputation with a group or another individual, and the probability that others know about that reputation is high and greater than the cost-benefit ratio of knowing about you or being associated with you, then cooperation can continue. Fascinatingly, this last rule helps to explain the epistemic and non-monotonic causes of cooperation in centipede-like games; if the reputation of a player is greater than the cost-benefit ratio associated with that player, then you can be confident about cooperating with that player. If he defects, he will gain a reputation as a defector, and thus the cost-benefit ratio may become greater than his reputation, making him an undesirable partner.

Spatial networks and structures can also emerge as a result of these reputation-maximizing interactions, based on the likelihood of encountering a person and possibly their relatedness, resulting in even further cooperation. And, as long as the benefit-to-cost ratio is greater than the average number of neighboring cooperators, cooperation may persist. I.e., when the number of neighboring cooperators exceeds the benefit-to-cost ratio of interacting with them, cooperation may cease between them; reputation may then define whether cooperation occurs, and individuals may splinter based on who they are more likely to encounter rather than who happens to be their neighbor. I.e., they may prefer to cooperate only with those who are related to them, because they can be more confident about the likelihood of their cooperation. In other words, large populations of unrelated individuals may splinter along ethnic lines of relatedness.

When we are dealing with groups, as long as the benefit-to-cost ratio is higher than the maximum group size over the number of groups, plus one to account for the splinter group, cooperation can evolve through group selection. When the maximum group size over the number of groups, plus one, exceeds the benefit-to-cost ratio of cooperating, selection for the more cooperative group cannot occur. I.e., when the maximum group size becomes exceedingly high, cooperators can be invaded by defectors and taken advantage of. We shall see that this is a common phenomenon, and it is effectively a cyclical process. I.e., population size is beneficial only to a point; after that, it fails to generate cooperative behavior, and a group with a high population size becomes more likely to be taken over by a more cooperative group.

 

God and His Body of Cooperators

Let’s review: societies seem to be systems of coordinated parts, cooperating to sustain and extend their members’ will. I.e., the reasons of the members of a society are predicated on preserving their power. Yet their power is dependent on their ability to cooperate, and there is no purely rational basis on which they should cooperate. For all intents and purposes, the cooperating members of a society are doing so on the basis of faith; they may be confident about their partner’s willingness to cooperate, but they cannot be certain. So, are these cooperators unable to promote cooperative behavior; is their society left to the self-centered whims of the defector? No, they are not.

Much of the literature I covered this past week centered on the mechanisms by which cooperation could be maintained and promoted in a society. One of the first papers I read was by Hirshleifer and Rasmusen (1989), and it focused on the power of ostracism. Ostracism, like all of the mechanisms we will be reviewing in this section, is costly to those who ostracize. Ostracism, in part, helps to resolve the free rider problem, i.e., the defector problem.

Morality helps to resolve part of this problem. 1. “If there are direct gains or losses to individuals from ostracizing,” or 2. “if [when everyone defects] payoffs are not perfectly independent of the number of players,” or 3. “if the costs are small,” then morality can still enforce cooperation (ibid, 17). So, we have three independent conditions, at least according to Hirshleifer and Rasmusen, that enable morality to promote cooperation rather than leaving it up to the whims of cooperators and defectors. The enforcement mechanism in question here is effectively a democratic process. As long as a large number of members of a group apply their resources to ostracizing a defector, whose defection can replicate and parasitize off the cooperators’ spoils, then they can promote cooperation. Importantly, this process primarily occurs when all members of this game have symmetric power to enforce the moral norm and ostracize the defector. In situations where power is asymmetric, the process cannot be democratic, and cooperators become dependent on the individual who holds the majority of the power to ensure cooperation.

According to Hirshleifer and Rasmusen, “Morality achieves cooperation because some players want to reduce the welfare of others if those others behave wrongfully” (ibid, 20). This process, according to Hirshleifer and Rasmusen, isn’t really altruistic. If the players were altruistic, “some players would [be prevented] from enforcing cooperation, by making the altruistic unwilling to punish evildoers” (ibid). This selfish desire to punish to achieve a higher payoff for one’s self promotes cooperation among the remaining members, but there does seem to be a limit to this.

As long as the costs or benefits of enforcing morality are small, an equilibrium can be established by ostracizing defectors or free-riders from the community; if the costs or benefits are large, that equilibrium cannot be established. Interestingly, an individual can punish his community by ostracizing himself. Socrates, for example, was capable of leaving Athens; he did not have to drink the hemlock that killed him. In other words, he killed himself. Importantly, Socrates was also regarded as the wisest man in all of Greece, if not the world, according to the Oracle of Delphi. He thus punished those who punished him by forcing them to accrue a great cost they would not have been burdened with had they not decided to punish him. The same loss was felt by Amsterdam when it decided to ostracize Spinoza for his views. The list can go on, but the point is clear: in the real world, just because we can punish to promote cooperation doesn’t mean we always should, and sometimes those whom we punish will force us to eat costs greater than the cost of keeping the alleged pariah around.

Robert Boyd, Herbert Gintis, and Samuel Bowles (2010) help to clarify this point a little further. Effectively, their model relies on the ratio of punishers to non-punishers in a society. When punishers are few, the cost of punishing is great. I.e., the cost of punishing a defector for not cooperating is relative to the number of punishers available and the threshold for punishers in the overall population. However, once punishers reach a certain level in the population, defined by Boyd et al. as the threshold for punishers in the population, then punishment is unable to sustain cooperation. This directly results from the fact that punishing another member of a group is costly. Not only does it cost members of a group to punish an alleged defector, but when that alleged defector is lost, the net returns for cooperating decrease because the total number of potential cooperators decreases. Thus, when punishers exceed the number of cooperators, or when punishment is very common, the benefits from cooperating cannot account for the costs of punishing. I.e., a society that punishes too much, or has too many punishers, is unable to benefit from cooperating; its punishers are likely to decrease in frequency over time, and if punishers do not reinvade or the initial population of punishers is simply gone and does not come back, that society of cooperators is liable to invasion by defectors who can take advantage of them.

Effectively, a little mercy can go a long way by enabling wiggle room for greater cooperation to occur.  Boyd et al. also note that “consistent with ethnographic observation, the model predicts that only some individuals will engage in punishment.” I.e., societies with many, frequent punishments are not stable societies. This could be either a symptom of an unstable society or a cause. To be clear, if a society requires a lot of punishment to preserve its stability, there are too many defectors and cooperators cannot generate benefits for the group by cooperating. However, because punishment is costly, not only to the punisher but for the group because it decreases the overall number of cooperators, even if cooperators were able to net benefits from cooperating, those benefits could be less than the overall costs from punishing defectors.

However, it does seem that punishers do not have to be purely self-interested, moral agents. There is still the possibility that punishers will punish altruistically. In their 2002 paper, Fehr and Gächter argue this point. In situations where reputation formation isn’t a relevant variable, cooperation sustained by punishment is still achievable. I.e., because punishment can be costly, and there may be no indirect benefit gained from punishing, punishment under such circumstances can be assumed to be altruistic; i.e., it does not benefit the punisher and costs them. However, because “punishment may well benefit… future group members,” whom selfish and rational actors have no obligation or reason to regard as being relevant, even selfish actors can benefit from others’ acts of punishing defectors. This does pose another issue, however. For example, if a rational, self-interested actor relies on others to do the punishing while reaping the benefits produced by those who, at a cost to themselves, punished defectors, then he will accrue more benefits in the long run than those who do the punishing. Effectively, he will be parasitizing off the punishers’ actions. If punishers recognize this, they may feel as if they should give up punishing, and as a result, people will defect from punishing the initial defector, leaving cooperators at the mercy of all defector types.

According to Fehr and Gächter, this problem may be partially resolved by the mere fact that humans are emotional creatures. Those who contribute more to the group will react emotionally when they discover another actor is shirking their duty: the less a member invests in the group relative to the others, the greater the anger towards that member becomes. Once anger is triggered, the possibility of being punished becomes salient to the free-rider and indicates to them that punishment will follow if they persist in their parasitic behavior. Essentially, if reason is the slave of the passions, as Hume astutely pointed out, then reasons about what I can get from free-riding will be damned; either the fear of angering others and being punished by their wrath will supersede or direct that reasoning, or anger at another’s defection will motivate an actor to engage in a costly act, without a rational benefit, to promote cooperation or exact vengeance.

Really, this can be seen as a kind of desire for fairness, which was precisely the point of David Rand, Corina Tarnita, Hisashi Ohtsuki, and Martin Nowak’s 2013 paper. To study the concept of fairness, Rand et al. examined the ultimatum game. In this game, “two players have to divide a certain sum of money between them.” One makes a proposal and the other responds. The proposer can offer the minimum amount they think is acceptable, and if both players are rational, the responder pretty much has to accept any non-zero offer made to them. “Responders who reject low offers incur a cost to avoid getting a smaller payoff than the proposer (disadvantageous inequity), and proposers who offer more than needed to avoid rejection incur a cost to avoid receiving a larger payoff than the responder (advantageous inequity)” (ibid, 2581). Essentially, both want to look good in the eyes of the other and those around them by settling on a ‘fair’ split. Under conditions where variation occurs, mean proposer offers typically fall between 30 and 50% of the stake, and mean responder demands between 25 and 40%.
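A strategy in this setting can be summarized as a pair (p, q): offer p when proposing, and accept any offer of at least q when responding, which is the scheme Rand et al. use. The sketch below, with illustrative numbers of my own, simply computes the payoffs of one pairing under that scheme; it is not their evolutionary model, only the payoff bookkeeping behind it.

```python
# A minimal sketch of ultimatum-game payoffs for strategies written as (p, q):
# offer p as proposer, accept any offer of at least q as responder.

def ultimatum_payoffs(player_a, player_b, stake=1.0):
    """Average payoff to each player when both roles are played once."""
    (p_a, q_a), (p_b, q_b) = player_a, player_b
    # A proposes p_a to B; B accepts only if the offer meets B's threshold q_b.
    a_as_proposer = (stake - p_a, p_a) if p_a >= q_b else (0.0, 0.0)
    # B proposes p_b to A; A accepts only if the offer meets A's threshold q_a.
    b_as_proposer = (p_b, stake - p_b) if p_b >= q_a else (0.0, 0.0)
    payoff_a = (a_as_proposer[0] + b_as_proposer[0]) / 2
    payoff_b = (a_as_proposer[1] + b_as_proposer[1]) / 2
    return payoff_a, payoff_b

# A near-'rational' pair (offer ~0, accept anything) meets a 'fair' pair
# (offer 40%, demand 30%): the fair player rejects the near-zero offer.
print(ultimatum_payoffs((0.01, 0.0), (0.4, 0.3)))   # (0.2, 0.3)
```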

Fascinatingly, as selection pressure increases, specifically as selection intensity increases, proposals and responses approach the rational, self-interested values (ibid, 2582). The authors note that even when natural selection is not strong, selection favors fairness (ibid, 2583). In these kinds of games, under moderate natural selection, “what matters… is resisting invasion by a single (randomly chosen) other strategy.” In other words, “it is not expected absolute payoff that determines success, but rather expected relative payoff in pairwise competition with a random opponent.” Under harsher conditions, where mutation is more common, “the optimum strategy is the one that maximizes its expected absolute payoff against a randomly chosen opposing strategy.”

This led the authors to two predictions: 1. They “predict that people who developed their strategies in settings where it was more difficult to assess the successfulness of others would make both larger offers and larger demands”; and 2. They also “predict that people who developed their strategies in settings where the behavior of others is less consistent would make higher offers, but no higher demands.” In other words, what is considered fair is relative not only to the offer made by one player to another but also to the circumstances that constrain the offer. I.e., when selection is harsher, when the world is less fair, offers become greater and demands become smaller. Given that our world is harsh, this suggests that a desire for fairness between members of a group may have given rise to more cooperative strategies. It simply does not pay to be rationally self-interested in a cruel world.

Mikhail Burtsev and Peter Turchin also discuss the emergence of cooperative strategies in their 2006 paper. These authors investigate the evolution of cooperation under conditions where “agents are endowed with a limited set of receptors, a set of elementary actions and a neural net in between.” The first strategy they identified, which arose when agents could recognize each other based on shared phenotypic similarity, was a “cooperative version of a dove strategy”: “Cooperative doves ignored out-group (phenotypic distance large) but left cells with in-group (phenotypic distance is small) members to avoid competing with them.” The second strategy they identified was the raven strategy, in which “agents also left cells with in-group members, but when they detected out-group members they attacked them.” The third cooperative strategy the researchers identified was the starling strategy. In this strategy, members of the same group stayed in the same cell and “collectively [fought] with any out-group invader” (ibid, 1042). The authors note that, with respect to the starling strategy, because members shared limited resources with each other in the cell, “agents using this cooperative defense strategy were small (had small stores of internal resource[s]), but they still had a good chance of defeating a large invader because of their advantage in numbers.” The authors described this behavior as ‘mobbing behavior,’ hence the moniker ‘starling.’

The authors also note that “if carrying capacity is insufficient to support at least two agents in a cell, the starling strategy cannot invade the population.” Under these circumstances, the raven strategy dominates. However, once carrying capacity is sufficient to support at least two agents in a cell, the starling strategy can take root. Yet the starlings do not eradicate the ravens; instead, the two strategies seem to go back and forth, oscillating between each other.

Their study “suggests that cooperative defense of territory can radically change the course of evolution in resource-rich environments. When the amount of resources becomes large enough to support more than one agent and too large for a single agent to monopolize, solitary bourgeois are replaced by cooperative starlings, provided that agents can recognize in-group members” (italics added, ibid, 1042-43). The authors suggest that it is this ability to recognize in-group members that, when culturally facilitated, promotes cooperation between non-related individuals.

Yet cooperation can evolve without reciprocity as well. In their 2001 paper, Rick Riolo, Michael Cohen, and Robert Axelrod investigated how cooperation can emerge without reciprocity among individuals who are not necessarily related. What the authors discovered is that, with the inclusion of an arbitrary tag, “a population of agents is [rapidly able] to establish a substantial degree of cooperation.” Unfortunately, cooperation under these conditions isn’t sufficient to sustain itself. “Within a few generations… agents with low tolerances [for agents without a similar tag] begin to take over the population as they receive benefits from more tolerant agents, but they bear less of the cost because they donate to fewer others” (ibid, 441). These agents tend to dominate, for a time. “As these agents prosper and reproduce, their offspring begin to spread through the population” (ibid, 442). Because there are more of these agents with a similar tag, and they are more likely to help only those with that similar tag, this establishes cooperation without reciprocity. Yet over time, the tolerance level of these agents can increase. When this happens, they are more vulnerable to intolerant mutants who can take over their population, benefiting without reciprocating.

Effectively, this creates a cycle. “As the members of this new dominant [intolerant] cluster do not contribute to the old cluster, the average donation rate in the population falls markedly. The members of the new cluster donate to each other when they happen to interact because, except for any further mutation of tags, they all inherit the same tag. As the new cluster grows to about 75%-80% of the population, the old cluster dies out and the average donation rate rebounds.” And then, as this population’s tolerance level increases, as it will, it becomes susceptible to invasion by a less tolerant cluster. Interestingly, this also seems to reflect the starling-raven cycle identified by Burtsev and Turchin (2006).

The authors note that “cooperation on the basis of similarity could be widely applicable in situations where repeated interactions are rare [direct reciprocity is rare] and reputations are not established [indirect reciprocity is not applicable]” (Riolo et al., 2001). Arbitrary characteristics could be pheromones, handshakes, accents, special practices or inside jokes, so on and so forth. “As the agent does not have to remember previous interactions with another agent, let alone know anything about that agent’s behavior with others, an agent only needs very limited signal-detection capability” (ibid, 443).
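The decision rule itself is almost trivially simple, which is part of the point. Here is a minimal sketch, with made-up tags and tolerances, of the tag-matching donation rule along the lines Riolo et al. describe: no memory of past interactions, no reputation, only a signal-detection check.

```python
# A minimal sketch (illustrative values, following the scheme described in
# Riolo et al. 2001) of tag-based donation: an agent donates to a randomly met
# partner whenever the partner's arbitrary tag is within the agent's tolerance.

import random

def donates(tag_self, tolerance_self, tag_other):
    """Donate iff the partner's tag is close enough to our own."""
    return abs(tag_self - tag_other) <= tolerance_self

# Illustrative mini-population: (tag, tolerance) pairs drawn in [0, 1].
random.seed(1)
population = [(random.random(), random.uniform(0.0, 0.2)) for _ in range(6)]

donations = 0
pairings = 0
for i, (tag_i, tol_i) in enumerate(population):
    for j, (tag_j, _) in enumerate(population):
        if i != j:
            pairings += 1
            donations += donates(tag_i, tol_i, tag_j)

print(f"{donations} donations across {pairings} pairings")
```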

Of particular interest are moralistic religious institutions. In their 2016 paper, Purzycki et al. investigated the effects moralistic gods have on human sociality. They specifically investigated the hypothesis that “conceptions of moralistic and punitive gods that know people’s thoughts and behaviors promote impartiality towards distant co-religionists, and as a result contribute to the expansion of sociality” (ibid, 1). They cited evidence showing that larger and more politically complex societies tend to have more supernatural punishment and moralistic deities (Johnson, 2005; Botero et al., 2014), that “supernatural punishment beliefs precede social complexity” (Watts, 2015), and that there are positive relationships “between beliefs in hell, beliefs in gods’ power to punish, and various self-reported prosocial behaviors” (Atkinson, 2011; Shariff, 2012). However, they noticed that many of the studies supporting this claim came from more Western societies, which could bias the sample in a particular direction.

To resolve this issue, the authors “tested [their predictions] with a sample of 591 participants… from eight diverse communities, including hunter-gatherers, horticulturalists, herders and farmers, as well as fully market-integrated populations engaged in wage labor or operating small businesses” (Purzycki, et al., 2016). What they found was remarkable and helps to support the finding by Riolo et al., 2001.

Specifically, they found that, “Compared to those who don’t know, claiming that the moralizing god punishes increases allocations towards distant co-religionists in the self game by a factor of 4.7 and in the local co-religionist game by a factor of 5.3” (Purzycki, et al., 2016, 3). Those who are associated with a moralistic god who punishes and knows about their wrongdoings also show less bias against distant co-religionists.

The authors had this to say about their findings: “[As] people are more inclined to behave impartially towards others, they are more likely to share beliefs and behaviors that foster the development of larger-scale cooperative institutions, trade, markets and alliances with strangers.” These findings help to explain the emergence of large and complex human societies, and the religious nature of those societies. Ultimately, they feel that “the present results point to the role that commitment to knowledgeable, moralistic, and punitive gods play in solidifying the social bonds that create broader imagined communities” (ibid, 3).

Cooperation is not constant, however, and it seems that even the civilizations of God can collapse. Stewart and Plotkin, in their 2014 paper, investigated the causes of declines in cooperation. Effectively, they show that “cooperation will always collapse when there are diminishing returns for mutual cooperation” (ibid, 17558). The authors note that a strategy is robust “if, when resident in a population, no new mutant strategy is favored to spread by natural selection.” In the model these authors constructed, players are not allowed to “retain memory of prior interactions with different opponents”; i.e., indirect benefits are not factored into the equation. The authors find, however, that it is the “ratio of benefits to costs that matters for the prospects of cooperation in the iterated Prisoner’s Dilemma, as payoffs and strategies coevolve” (ibid, 17560). Specifically, “collapse of cooperation will always occur as [contributions to a public pool] increase”; i.e., when the benefit produced by contributions shows diminishing returns, so that the ratio f(c)/c shrinks as contributions c grow, collapse will occur. Essentially, why would a cooperator cooperate in a public goods game if there is a huge public pool of contributions they can skim from, especially if there is no reputation loss? In turn, defectors increase in frequency, cooperators cannot compete, and cooperation between members collapses.

Of course, not all is lost. Cooperation can persist if cooperators are resilient enough against defectors. In a paper by Mao, Dworkin, Suri, and Watts (2017), the authors investigated the time it takes for cooperation between cooperators to collapse. They observed that “fully cooperative games occurred at rates between 15 and 20% for the duration of the experiment. Since players were paired randomly, and a game where neither player defected requires both players to be conditional cooperators,” a frequency of 16% of games with no defection implies a 40% frequency of conditional cooperators; i.e., 0.16^(1/2) = 0.4. This was surprising to the researchers. They noted that ‘rational’ players made up about 60% of the player population, so the likelihood of any two ‘rational’ players encountering each other was about 36% [0.6^2]. In other words, the likelihood of a rational player encountering a conditional cooperator, in that order, was 24% [0.6 × 0.4].
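The arithmetic here is worth spelling out; a few lines suffice to check it, assuming, as the authors do, that players are paired at random.

```python
# Quick check of the frequency arithmetic reported from Mao et al. (2017),
# assuming random pairing: if a fraction x of players are conditional
# cooperators, the share of games with no defection is x**2.

no_defection_rate = 0.16
conditional_cooperators = no_defection_rate ** 0.5     # sqrt(0.16) = 0.4
rational_players = 1 - conditional_cooperators         # 0.6

print(round(conditional_cooperators, 2))                        # 0.4
print(round(rational_players ** 2, 2))                          # 0.36: two 'rational' players meet
print(round(rational_players * conditional_cooperators, 2))     # 0.24: rational meets cooperator (ordered)
```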

However, the authors note that “the model predicts that with sufficiently many resilient cooperators present in a population of rational cooperators, cooperation can be sustained indefinitely” (ibid, 7). Secondly, “the model also makes a prediction about how many resilient cooperators are necessary to sustain cooperation even among rational cooperators.”

The authors noted four possible explanations for the behavior of their participants. First, “a number of players cited the welfare of other players as a reason for cooperating (for example, ‘I tried to get the best outcome for me and the other person’).” Secondly, “several players invoked a desire to achieve ‘fairness,’ or expressed guilt at having defected first, both of which are consistent with norm-based accounts of cooperation.” Thirdly, “other players appear to have been motivated largely by self-interest, declaring that the long-run nature of the experiment rendered cooperation rational” (italics added). Lastly, “others… appear to have cooperated reflexively, giving no further reason (for example, ‘I chose to cooperate unless the other person chose to defect’)” (ibid, 8). The third explanation is intriguing given that, based on the numbers alone, it simply wasn’t rational for them to cooperate.

Most importantly, “the overall rate of cooperation stayed above 84% throughout the experiment, meaning that players collectively extracted roughly 84% of the maximum average payout possible.” I.e., even when only about 40% of the population consists of conditional cooperators, so long as those cooperators are resilient to defectors’ advances, the extraction of resources remains remarkably high.

From this information, I think it is clear that cooperation, and stable societies, are not the product of every member of a society cooperating. In their 2020 paper, Donahue, Hauser, Nowak, and Hilbe investigated cooperation across multiple, simultaneous channels of interaction. They identified that “individuals quickly learn to coordinate their own behaviors across different social dilemmas”; i.e., “they tend to use cooperation in more valuable interactions as a means to promote cooperation in those games with a larger temptation to defect” (ibid, 2). To be clear, once individuals “are allowed to link their different games,” the strategies the players demonstrate expand. Linkage increases flexibility, and enhanced flexibility establishes cooperation “in some games,” which can promote further cooperation in already existing cooperative games. I.e., “Once individuals are allowed to link their concurrently ongoing interactions, they often learn to coordinate their behavior across games in order to enhance cooperation in each of them” (ibid, 6).

In other words, it is not the mass of people who define the course of a society or really benefit it, but an organized group of people who, out of all the games they can play with each other, settle on games that – by cooperating with each other – net them the greatest benefits.

I think it is necessary to review some of the findings from this section of the paper. First of all, it remains clear that these mechanisms are non-rational. I.e., the most rational approach to any of these games is really to exploit your partner, i.e., to defect, but if you both do, you’ll both be worse off for it. Thus, cooperation within a society seems to be predicated on something like confidence or faith. When these kinds of relationships are established, however, they can be maintained through punishment mechanisms, like ostracism. But these mechanisms only work if the benefits or costs to the punisher are small; the punisher will not punish if they are anything but small.

This last finding may be predicated on a desire for fairness. Specifically, fairness seems to reflect the emotional nature of the human condition: when an individual recognizes or feels that they are putting more into a partnership than they are getting out of it, i.e., when one partner realizes they are being betrayed, they will punish the alleged defector. The world, primarily, is an unfair place, and it is just these kinds of conditions that give rise to fair exchanges between people playing ultimatum games. I.e., it simply does not pay to be rationally self-interested in a cruel world.

Interestingly, the resources a society has around it determine the kinds of games its members will play and whether they will engage in more cooperative behavior. One such kind of extremely cooperative behavior is the starling strategy discussed earlier. This strategy enables both in-group cooperation and collective defense against out-group invaders. As long as there are sufficient resources available, this kind of strategy may develop.

Fascinatingly, this process is predicated on identifiable tags. Identifiable tags allow individuals to engage in cooperative behavior without necessarily being related and without really reciprocating behavior. I.e., reputation, though it can be a relevant variable, is not necessary for cooperation to develop when identifiable tags, like a handshake or a hat, are available to exploit as a mechanism for cooperation. Gods can also serve as identifiable tags and, through religious identification, enable more complex societies to develop. Societies that lack this kind of tag or mechanism, i.e., a knowledgeable, punitive, moralistic god, are less likely to develop into complex societies, or perhaps simply don’t.

However, as societies become more complex, and as the public pool of available resources increases, the likelihood of cooperation decreases; why cooperate and sacrifice when there are so many public resources to draw on? Under these conditions, a society is more likely to destabilize and collapse. However, a core of cooperators, around 40 percent of the population, can resist defectors’ advances and, by doing so, preserve productivity. In multichannel games, where a player may choose among games according to which nets him the greatest return for cooperating, he will find games in which cooperating with a minority group, rather than with the majority, nets him and that group greater returns. Effectively, this means that in any society it will be a select group of highly cooperative individuals, sharing identifiable tags and possibly worshiping the same knowledgeable, moralistic, and punitive god, who direct the course of the other members of that society. I.e., though we started with a democracy at the beginning of this paper, really the democratic practice of ostracism, we are left with the conclusion that no such ‘rational’ and ‘democratic’ practice really exists; it is irrationally oligarchic, or elitist, all the way down.
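To illustrate the point about a resilient core, here is a toy threshold sketch of my own; it is not the design of the study behind the 40 percent figure, and the one-third follow threshold, group size, and round count are arbitrary assumptions.

```python
# Toy illustration only: a repeated public-goods setting in which a fixed share
# of "resilient" players always contribute, while everyone else contributes only
# if at least a third of the group contributed last round (assumed threshold).

def contribution_rate(resilient_share, group_size=100, rounds=20, threshold=1/3):
    resilient = int(group_size * resilient_share)
    contributing = resilient                 # after a shock, only the resilient core is left
    for _ in range(rounds):
        followers = (group_size - resilient) if contributing / group_size >= threshold else 0
        contributing = resilient + followers
    return contributing / group_size

print(contribution_rate(0.40))   # 1.0 -> a 40% core pulls the conditional players back in
print(contribution_rate(0.20))   # 0.2 -> too small a core; cooperation never recovers
```

Below the threshold, the conditional players never return; above it, a minority of unconditional cooperators is enough to hold the whole group’s productivity together.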

 

Competition Between the Gods

Game theory provides quite a bit of material for legal studies, especially Prisoner’s Dilemma games. But these are not all that important on their own and, in fact, fail to do justice to the variety of behavior the legal system encounters on a daily basis. One of the other kinds of games, a number of which were investigated by Richard H. McAdams in his 2008 paper, is the Assurance or Stag Hunt game.

In Stag Hunt, one player needs to assure the other that he will “play the riskier strategy… so [that] the other should as well.” The name ‘Stag Hunt’ comes from “Rousseau’s illustration of the choice between hunting stag and hunting hare, where one succeeds in hunting stag only if the other hunter also hunts stag, and where sharing a stag with the other hunter is the best outcome, but hunting hare is safer because one can succeed on one’s own” (McAdams, 2008, 221).

The game can be structured as follows:

             | Strategy A | Strategy B
Strategy A’  | 4, 4       | 0, 3
Strategy B’  | 3, 0       | 3, 3

 

Strategies A and B belong to Player 1, while Strategies A’ and B’ belong to Player 2. Strategy A is the optimal strategy for both players, but if Player 1 does not want to take the risky approach, Player 2 will receive nothing while Player 1 receives 3. The inverse is true if Player 1 chooses the riskier strategy and Player 2 chooses the safer one. If Player 1 is confident that Player 2 will choose the less risky strategy, then he should choose that strategy as well, and vice versa. Remember, these players are really just estimating which strategy the other will choose, either by treating the game as though it had a single mind made up of all the players, by assuming everyone will choose the optimal strategy, or by relying on their relationship with the other person or that person’s reputation. The former two are essentially acts of faith, while the latter requires some prior knowledge, a representational or reputational model of the other player(s).
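As a concrete check on this reading of the matrix, here is a minimal Python sketch of my own (not anything from McAdams) that finds the pure-strategy equilibria of the Stag Hunt table by testing best responses; the payoff entries, and the convention that each cell reads as (Player 2’s payoff, Player 1’s payoff), are taken from the table and discussion above.

```python
# A minimal sketch: find the pure-strategy Nash equilibria of the Stag Hunt
# by checking best responses. Rows are Player 2's strategies (A', B'),
# columns are Player 1's (A, B), and each cell is read as
# (Player 2's payoff, Player 1's payoff), following the discussion above.

payoffs = {
    ("A'", "A"): (4, 4), ("A'", "B"): (0, 3),
    ("B'", "A"): (3, 0), ("B'", "B"): (3, 3),
}
p2_strategies, p1_strategies = ["A'", "B'"], ["A", "B"]

def is_equilibrium(row, col):
    p2_pay, p1_pay = payoffs[(row, col)]
    # Neither player can gain by unilaterally switching strategies.
    p2_best = all(payoffs[(r, col)][0] <= p2_pay for r in p2_strategies)
    p1_best = all(payoffs[(row, c)][1] <= p1_pay for c in p1_strategies)
    return p2_best and p1_best

for row in p2_strategies:
    for col in p1_strategies:
        if is_equilibrium(row, col):
            print("Equilibrium:", col, "with", row, "->", payoffs[(row, col)])
# Prints two equilibria: A with A' (4, 4), the risky but payoff-dominant hunt,
# and B with B' (3, 3), the safe hare-hunting outcome.
```

The same best-response check, applied to the Battle of the Sexes and Hawk-Dove matrices below, picks out their two asymmetric equilibria, which is why each of those games turns on who concedes to whom.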

Secondly, McAdams gives us the structure of the Battle of the Sexes game.

 

             | Strategy A | Strategy B
Strategy A’  | 3, 1       | 0, 0
Strategy B’  | 0, 0       | 1, 3

 

Again, Strategies A and B belong to Player 1 while Strategies A’ and B’ belong to Player 2, and the same holds for the next game as well. As we can see, if Player 1 goes for Strategy A but Player 2 goes for Strategy B’, both get nothing; the inverse is also true if Player 2 goes for Strategy A’ but Player 1 goes for Strategy B. As the chart shows, one player essentially has to give ground to the other for either to get anything at all. This is very much like the ultimatum game covered earlier in this blog post. Remember, in that game the players are more likely than not to settle on a ‘fair’ deal: mean proposer offers typically fall between 30 and 50 percent of the proposer’s wealth, and mean responder demands between 25 and 40 percent of it. The reasoning, again, is that natural selection has favored proposers who make fair offers and responders who do not demand too much, so that both may continue cooperating in what is, for all intents and purposes, a pretty cruel world. The problem, again, is how they settle on an equilibrium; I believe I have already addressed this issue.

Third are Hawk-Dove or Chicken games, which are very similar to those covered by Burtsev and Turchin in their 2006 paper.

 

                    | Dove (Strategy A) | Hawk (Strategy B)
Dove’ (Strategy A’) | 2, 2              | 0, 4
Hawk’ (Strategy B’) | 4, 0              | -1, -1

 

Again, we can see that if Player 1 and Player 2 both go for their individually optimal strategy, the Hawk strategy, which they may, then they both get negative payoffs. If they are both inclined to cooperate, they are more likely to go for the Dove strategy; but if one goes for the Dove strategy, the other, knowing the first is likely to cooperate, may go for the Hawk strategy and net the optimum. Thus, an equilibrium can only be reached by either player if he knows how willing the other is to cooperate.
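Because neither pure outcome is stable for a whole population, the Hawk-Dove game is often analyzed with replicator dynamics. Here is a minimal sketch of that idea using the payoffs in the table above; the starting frequency, step size, and iteration count are arbitrary choices of mine, made only for illustration.

```python
# Replicator-style dynamics for the Hawk-Dove payoffs above:
# Dove vs Dove = 2, Dove vs Hawk = 0, Hawk vs Dove = 4, Hawk vs Hawk = -1.
# Hawks do well when rare (they exploit Doves) and badly when common
# (they fight each other), so the population settles at a mixed equilibrium.

def hawk_payoff(p):   # expected payoff of a Hawk when a fraction p plays Hawk
    return p * (-1) + (1 - p) * 4

def dove_payoff(p):   # expected payoff of a Dove in the same population
    return p * 0 + (1 - p) * 2

p = 0.10                                             # start with 10% Hawks
for _ in range(2000):
    average = p * hawk_payoff(p) + (1 - p) * dove_payoff(p)
    p += 0.01 * p * (hawk_payoff(p) - average)       # Hawks grow iff they beat the average

print(round(p, 3))   # converges near 2/3, where Hawks and Doves earn the same
```

At that mixed point neither type does better than the other, which is the population-level analogue of the knife-edge the two players face in a single encounter.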

As McAdams notes, “In the Prisoners’ Assurance game [the Stag Hunt game], each prisoner wants to reciprocate the other prisoner’s decision, and both do best by mutual silence,” for example if they are being interrogated. “In the Prisoners’ BOS game, one wants to cast all blame on whichever prisoner the other prisoner blames; one does best when this mutual blaming falls on the other prisoner,” e.g., if a prosecutor is seeking to demonstrate conspiracy and needs a confession. And lastly, “In the Prisoners’ HD game, one wants to be silent if the other prisoner snitches on the third party who committed the crime, but wants to snitch if the other prisoner is silent; thus, one does best by having the other prisoner snitch,” especially, for example, if a prosecutor’s case is airtight and all she needs is a confession from one of two criminals (ibid, 224).

McAdams also does a great job reviewing the coordination issue in his paper. The question is how, in particular situations and given the nature of the games we are dealing with, players, groups, and whole nations coordinate their actions, especially when there is no common language, when the language is unclear, or when even the reasoning about the language is unclear. For example, does Freedom mean the same thing to Poland as it does to Hungary, Sweden, Saudi Arabia, China, or the United States? I doubt it. Effectively, the coordination issue McAdams identifies is what he calls a Prisoner’s Dilemma embedded in a BOS game. The structure of the game is as follows:

 

 

             | Cooperate A | Cooperate B | Defect
Cooperate A’ | 3, 2*       | 1, 1        | 0, 4
Cooperate B’ | 1, 1        | 2, 3*       | 0, 4
Defect’      | 4, 0        | 4, 0        | 1, 1

* “Equilibrium possible only with iteration”

Player 1’s strategies are A, B, and Defect, while Player 2’s are A’, B’, and Defect’. McAdams also provides a great example of the kind of real-world dilemma exemplified by this model, drawn from Geoffrey Garrett and Barry Weingast (1993).

For example, “two nations may agree to limit their tariffs (against domestic interest groups that push for them) and sustain this agreement by threatening to breach if the other breaches. But the parties must define precisely what trading behavior constitutes cooperation for purposes of their conditionally cooperative strategies. If one nation eliminates its tariffs but enacts health or labor legislation that impedes imports from the other nation, is that defection? More precisely, under what circumstances is nontariff legislation that impedes trade consistent with cooperation? … Unless they first solve their BOS game by agreeing on Standard A or Standard B, they will eventually be in a position where one nation’s effort to cooperate under Standard A is perceived by the other nation as a defection under Standard B. The latter nation retaliates by defecting and cooperation unravels” (McAdams, 2008, 230).
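To make the quoted dynamic concrete, here is a toy rendering of my own, not McAdams’s or Garrett and Weingast’s model: each nation cooperates conditionally but judges the other’s behavior by its own standard, labeled 'A' or 'B', and reads any other move, including sincere cooperation under the rival standard, as defection ('D').

```python
# Toy illustration: two conditionally cooperative nations. Each judges the
# other's move by its OWN standard ('A' or 'B'); any other move, including a
# sincere attempt to cooperate under the rival standard, looks like defection.

def simulate(standard_1, standard_2, rounds=5):
    history = []
    move_1, move_2 = standard_1, standard_2   # each starts by cooperating its own way
    for _ in range(rounds):
        history.append((move_1, move_2))
        # Keep cooperating only while the other's last move counts as cooperation to me.
        move_1, move_2 = (
            standard_1 if move_2 == standard_1 else "D",
            standard_2 if move_1 == standard_2 else "D",
        )
    return history

print(simulate("A", "A"))   # [('A', 'A'), ('A', 'A'), ...] -- a shared standard holds
print(simulate("A", "B"))   # [('A', 'B'), ('D', 'D'), ...] -- rival standards unravel
```

With a shared standard, conditional cooperation is self-sustaining; with rival standards, each side’s good-faith cooperation is perceived as defection and the relationship unravels exactly as the passage describes.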

These kinds of coordination issues are central to the law. They “model situations of inequality, make history and culture relevant and explain one way that law works expressively, independent of sanctions” (ibid, 230). In the last paper (or blog post) I wrote, I identified that one of the main issues between feminists, critical race theorists, and American traditionalists or originalists was, at its core, a conflict of standards. I.e., the reasons each group offered were really just justifications for why its standard, which it deemed better, should take precedence over the others. This is, and I shall try to make it clear, borne out by the Prisoner’s Dilemma embedded in the Battle of the Sexes game.

McAdams notes that “coordination games frequently capture distributional conflict.” Specifically, the Battle of the Sexes and Hawk-Dove games capture this kind of distributional conflict. “The two equilibria outcomes in each game necessarily have unequal payoffs where one player prefers one equilibrium and the other player prefers the other” (ibid, 231). Part of this issue is resolved by understanding the history and culture behind each player’s preference, or salient focal point. “What is focal depends on what the individuals in the situation believe about how they or others they know have solved the same or analogous situations in the past” (ibid, 232). These focal points, if they amount to a kind of standard, are clearly anchors that establish the position from which each party perceives its situation.

In law, these focal points can be established by a legislature through the language of a law. They then influence the behavior of the citizens of a society, especially if legislatures use language that is culturally significant or salient to them. I.e., “legal actors can influence behavior merely by creating self-fulfilling expectations that the legally obligatory behavior will occur” (ibid, 234). A judge who has a deep understanding of the law, for instance, may resolve a hawk-dove conflict between two individuals. McAdams asks, “How did the law work without state enforcement?” Hauert et al. (2007) answer this question, but so too does McAdams: “Individual litigants could enforce or resist a judgment only by gathering the support of their kin. [Power] meant having others think one had the ability to muster bodies to assist in the various procedures that made up a legal action.” Effectively, both parties could turn into hawks and go after each other. Hauert et al. suggest that this was the case until the costs of such behavior became so high that it became necessary to invest in joint enterprises, legal systems with judges, that could resolve disputes between parties by giving them a focal point. “Once the court announced a winner, it appeared that the winner would fight, and this expectation made it more difficult for the loser to gather or retain kin to fight on his behalf” (McAdams, 2008).

Battle of the Sexes games are also highly relevant to the law and, for our interests, particularly to constitutional issues. McAdams notes that “Creating a constitution constructs a focal point. Writing down the allocation of power in a particular structure of government makes that allocation salient and creates self-fulfilling expectations that the various players will demand at least as much power as granted in the writing, forcing other players to cede that much power.” These terms are, for all intents and purposes, blend concepts. Yet blend concepts only work if a population understands what they mean; reasons are relevant only if they constitute a context (like the legs of a table) in a way that can affect the goals of particular people. For example, how many different definitions will you get if you survey people in the streets about the meaning of the word Freedom, Liberty, or Life? Even if a goal generates reasons, the validity of those reasons depends on whether they reflect the goal of the people; but what does that goal even mean? Also, “focal points can be based on precedent (rather than communication), [which explains] the power of unwritten constitutional law on customs that create strong expectations about how parties will coordinate” (ibid, 240-1).

The goal of these salient points, established through a constitution and laws, “is that the political branches” established by the constitution, “coordinate to avoid a constitutional crisis or breakdown” (ibid, 241). However, these salient points are obviously ambiguous and McAdams is clearly aware of this.

Interestingly, the Battle of the Sexes game is highly relevant to gender and inequality issues. McAdams claims that “Once most men (women) in a society do a certain kind of work, a woman (man) who has the same skill will be unattractive as a spouse” (ibid, 242). Effectively, gender norms can act as standards and anchors, and these anchors can be used to predict the behavior of a partner so that an individual can make the most effective choice. However, if you recall the nature of Battle of the Sexes games, one side, the proposer, typically must provide a fair offer, and the responder cannot request too much without looking extremely selfish, which could carry indirect, reputational costs. In other words, to resolve this issue, at least to some degree, you can alter public perceptions of a practice that imposes disproportionate costs on one sex compared to the other. “The demise in footbinding,” McAdams writes, “was largely the result of collective agreements between parents within villages that those who had girls would not bind their feet and that those who had boys would not allow them to marry girls with bound feet” (ibid, 242). However, as Rand et al.’s (2013) study noted, this sense of fairness is rooted in evolutionary mechanisms. As such, standards cannot simply be altered in ways that ignore the psychological and biological tendencies of both men and women, and I think McAdams overlooks this (McAdams, 2008, 243).

For example, McAdams writes:


“In BOS situations between a man and woman, if the man expects the woman to settle for her less favored outcome, then the man will play the strategy associated with his most preferred outcome. If the woman, counter to expectations, also attempts to claim the larger share, they will fail to coordinate and she will be worse off than if she did what was expected.” For example, “when subjects were matched against a woman the subjects were significantly more likely to play the strategy associated with his or her preferred equilibrium than when matched against a man… [P]redictably, men earned more than women.”

McAdams does not say whether these patterns are the result of social standards rather than biological and psychological tendencies, but if we recall his earlier example about footbinding, I do not think it would be unreasonable to infer that he does not regard the pattern he highlights as the product of mostly psychological and biological variables. Even if he does, that set of variables is probably significantly overlooked; i.e., the standards by which men and women treat each other, and the facts that constitute those standards, should be defined primarily by their biological and psychological tendencies. It is costly to deny nature and the laws that define it.

Hawk and Dove games are also related to gender and inequality issues, as identified by McAdams. He presents this scenario:

“Assume you expect all males to play this strategy:

If the other player is female, play Hawk; if the other player is male, play Hawk if the possessor and Dove if nonpossessor.

And all females to play this strategy:

If the other player is male, play Dove; if the other player is female, play Hawk if possessor and Dove if nonpossessor.

Under these circumstances, it does not pay to deviate from these established norms. “The result is a convention in which all property winds up in the hands of men. The same point,” McAdams argues, “can be made by using race roles instead of or in addition to sex roles, or any other immediately observable distinguishing traits” (ibid, 247).

Assurance games, McAdams suggests, are particularly relevant for understanding democracy. Citing Weingast (1997), McAdams says that democratic situations can be modeled as “an Assurance game, where citizen groups can maintain democratic rule only by jointly challenging the official and thereby removing her from power.” In other words, “If each group seeks to oust government officials only when (and whenever) that group views the official as having overstepped its authority, the citizen response will never be sufficiently united to threaten authoritarian officials (but yet may cause constant turmoil)” (ibid, 248). I.e., for citizens to oust a tyrant, they must coordinate. As previously discussed, this will likely not be the result of all citizens cooperating, but of the most cooperative cooperators playing a game of such value to them that they can defend themselves against defectors and are resilient enough not to be undermined by them. It does not take many coordinating parts, just a highly cooperative, resilient group that wishes to oust the person who is the biggest obstacle to its aims.

Social movements also depend on assurance games. For “the group seeking social or legal change, reform is a public good because the enjoyment of the new rights by some individuals does not diminish the consumption of those rights by others, and the group cannot exclude the benefits from those who did not contribute to creating them” (ibid, 249). The goal is to ensure that the group seeking social change will share in the benefits of that change, i.e., to assure its members that participating is worth the cost.

Citing Chong (2014), McAdams writes, “After a successful event – a boycott, march, registration drive, etc. – the group venerated those who helped it to succeed and sometimes shamed those who refused to participate” (McAdams, 2008). This kind of behavior has also been identified elsewhere (Santos et al., 2018). “The payoff,” McAdams continues, “from participating when enough others participated to make the movement successful was plausibly higher than the payoff from not participating in the same circumstances. Yet because the social rewards of participating in a failed effort were far less, the payoffs from participating in a failed effort remained lower than not participating.” This, as Chong notes, is an assurance game “where individuals prefer contributing if enough others contribute, but prefer not contributing when enough others do not contribute” (McAdams, 2008, 250).

“Assurance game[s] capture[] an important dynamic of social movements like the civil rights movement – the need for leaders to assure potential participants that there will be enough participation to succeed.” Leaders can achieve this by communicating optimistically with their followers, and they must be convincing. To that end, “They will select small easy steps to build up a track record of success, publicize even small successes, and perhaps exaggerate them as groups often exaggerate the number of protesters who participate in their events” (ibid, 251).

Social countermovements are also highly influenced by assurance games, if not by all of the games. Effectively, at the core of a social countermovement is a battle over standards, and where there is a battle over standards there is also a battle of the sexes and a hawk-dove battle. McAdams writes, “The group disadvantaged by the prevailing norm seeks to change it. If enough such individuals switch their strategies in the HD games against the other group members – playing Hawk instead of Dove – the resulting Hawk/Hawk conflict will be costly, but it may compel the other group’s members to back down and start playing Dove. For individuals seeking social change, there is uncertainty whether enough of one’s fellow group members will stand up and play Hawk long enough to make the other group’s members back down… Given sufficient social identity or solidarity, [a group may be] willing to sacrifice for social change when enough others will do the same, and therefore seek to coordinate their actions with others” (ibid, 253). I imagine this very much like a nested doll of games within games, where a Battle of the Sexes game defines a Hawk-Dove game, which in turn defines an Assurance game, each upper layer of cooperation depending on the lower for success.

I titled this section of the paper Competition Between the Gods, not only because it references another paper I wrote in the past, but because it carries through the line of reasoning that has, in my opinion, been building throughout this paper. Effectively, once people have enough faith in each other to cooperate, they will do so in a way that establishes a group, and that group maintains itself and ensures cooperation between its members through a number of mechanisms (arbitrary tags, salient slogans, focal points defined historically and culturally, and punishment mechanisms); most importantly, such groups can be defined by a kind of moralistic, punitive, and knowledgeable god. Thus, when movements and countermovements vie for political control over a territory, specifically because a group needs resources to ensure cooperation between its members (see Burtsev & Turchin, 2006), what we really have, implicitly or explicitly, is a battle over the norms and standards of a group. Those standards are reflected by differing focal points or goals, which are themselves the product of the desires of a group, which are in turn the product of its psychology and biology; we therefore have two competing psychological and biological conceptions of how the world should look and how it should be engaged with. This battle can effectively be represented as a battle not between groups but between gods. The more moralistic, judgmental, and knowledgeable one of these gods is, the greater the chance its group will win the battle of standards or norms, and thus generate a more complex and stable society. If it cannot generate a more complex or stable society, then it was an ineffective god, based on ineffective norms and standards.

 

Conclusions and Discussion

Many among men are they who set high the show of honor, yet break justice.

― Aeschylus

 

At the beginning of the paper, I asked the question: “are happiness, life, and liberty, as goals, the necessary product of a larger goal?” I think the answer to that question is yes. But what goal?

Over the course of this paper, I’ve reviewed the necessary rules of cooperation, the kinds of games that can be observed based on those rules, and how those games can be used to study law and politics. I have ultimately come to this conclusion: from the biological and psychological levels, both of which are constrained by the laws of nature and God, certain passions or desires emerge that, given the conditions of Man’s existence, force him to cooperate with his fellow man. He ultimately does not do this because he’s reasonable or rational. If he were truly rational or reasonable, he would never have taken the leap of faith to cooperate in the first place. Thus, his entire cooperative endeavor is irrational; it is based on probability, not certainty, and thus on faith.

Once cooperation between members of a group is established, they can preserve their cooperation by using identifying markers, cooperating with their kin, engaging in direct and indirect cooperation, and partnering with their neighbors, forming groups based on which group will net them the highest return for cooperating with them. These groups can preserve cooperation by setting goals, which reflect the desires of the members of that group. These goals, focal points, establish norms implicitly or explicitly that can be used to coordinate action and resolve conflict between members to ensure cooperation. These norms can ultimately be represented as gods that are punitive, knowledgeable, and moralistic, and that define and are defined by the goals and norms of the society that they are part of. These gods enable greater cooperation and the development and preservation of more complex societies.

These gods can also be seen as representations of the biology and psychology of the people to which they belong, reflected through the desires and goals of those people. Ultimately, two of these gods, normative sets and standards, can be at odds with each other. When this happens, when one side hopes to get a fairer deal or an upper hand, conflict can occur. This is, as I stated before, a battle between two gods, but also it is a sign that group selection and competition are occurring.

For group selection to occur, this rule has to hold:

b/c > 1 + n/m

That is, the benefit-to-cost ratio (b/c) must be greater than one plus the maximum group size (n) divided by the number of groups (m). If the maximum group size grows too large, or the number of groups stays constant while group size increases and costs come to outweigh benefits, then group selection and cooperation cannot occur. Effectively, this threshold is set by the benefits that can be gained and by the number of groups an environment can hold. A great example of this is Turchin’s Starlings, Ravens, and Doves. What this also means, however, is that if people can increase the maximum number of groups an environment can sustain, and increase the benefits of cooperating while keeping costs low, group selection could continue almost indefinitely. This still means group competition is occurring, particularly competition over which group is better at increasing the two relevant variables: the benefits (while keeping costs low) and the maximum number of groups any environment can sustain. A god who is able to achieve this, who promotes cooperation by setting standards and norms that motivate members of a group to engage in cooperative play that increases b and m while keeping c low, will win the battle of group selection, especially if its group can resist defectors, maintain in-group cohesion, and remain resilient.
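As a worked check on the rule, here is a minimal sketch; the numbers plugged in are illustrative assumptions of mine, not values from any cited study.

```python
# The group-selection condition quoted above: b/c > 1 + n/m,
# where b = benefit, c = cost, n = maximum group size, m = number of groups.

def group_selection_favours_cooperation(b, c, n, m):
    return b / c > 1 + n / m

print(group_selection_favours_cooperation(b=5, c=1, n=100, m=50))  # 5 > 3,  so True
print(group_selection_favours_cooperation(b=5, c=1, n=100, m=10))  # 5 > 11, so False
```

Holding b/c fixed, splitting the same population into more, smaller groups (raising m relative to n) is what tips the condition back in cooperation’s favor.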

What does this say about the kind of life we should live? I.e., what does this say about who we are, what we are, as individuals and as a species? What does this say about the good?

First of all, it seems that this says our aim really should be a kind of duty to our fellows. However, we have no real good reason to engage in this kind of behavior; it is effectively irrational. Yet it is also the basis of all societies: an act of faith to help our fellow man despite the horrors and tragedies of existence. But we cannot be so naïve, nor does it seem we are. We really cannot suffer those who are likely to betray us or whose aims are likely to undermine our ability to exist, not only as an individual but as a group capable of engaging in cooperation and extending that cooperative behavior to others. If we fail to do so, we are incapable of helping those who can help us or who do help us, and when this happens, we seem to be betraying what is really, beyond reason, good for us. Mercy seems to be appropriate for fellow cooperators, but wasted on defectors or subversives.

I would also like to take a moment to focus on what this says about us as creatures. We are clearly not rational, not reasonable, and the reasons we give seem to be a veneer to further our desires, especially to dominate the world around us to improve the lives of our fellow human beings or, worst of all, only one's self. We are, I dare say, creatures of passion: we are more romantic than rational, which is really a matter of pure scientific and empirical fact. Thus, it seems almost absurd to think that, for some length of time now (maybe over a hundred years), we have been obsessed with this thing called the rational, the reasonable, without ever understanding what kind of tool it was or to what ends it was to be properly applied. No one, as far as I can see right now, could use this as a defining feature of their worldview without being utterly mistaken about the essential nature of the human condition.

Lastly, I would like to make another point clear: we do not seem to be making gods, or God, in our own image. Rather, because we are defined by nature, because nature is defined by a set of principles, and because those principles, those laws, need a lawgiver, we, our best desires, and what is good and beneficial for us are defined by God. When we erect a Godhead or reference a punitive, moralistic, and knowledgeable God to promote prosociality and to define our norms and standards, we are not doing so because that is an image of our ideal self; we really seem to be representing an ideal that defines us, implicit in our nature, given by the deity who initially defined us. It is thus only through the appropriate actions that we can bring this potential nature out of ourselves so that we may actually live our fullest lives.

I began this paper with a short passage about a man dourly reminiscing about a childhood poem he was told. He asked himself why the mermaid would save the man in the poem, and why the people who wrote the poem about her would lie. I do not think we can say they did, and I think we can answer his question. She saved the man in the poem because that’s what a good person, a good mermaid, would do. And they did not lie: they told the kind of story that best reflected the nature of the women in their lives, in all of us. We simply fail to live up to what we really are.

 

Bibliography

 

Alexander, R.D., 1987. The Biology of Moral Systems. Aldine de Gruyter, New York.

Andreoni, J. and Miller, J.H., 1993. Rational cooperation in the finitely repeated prisoner's dilemma: Experimental evidence. The economic journal, 103(418), pp.570-585.

Atkinson, Q.D. and Bourrat, P., 2011. Beliefs about God, the afterlife and morality support the role of supernatural policing in human cooperation. Evolution and Human Behavior, 32(1), pp.41-49.

Botero, C.A., Gardner, B., Kirby, K.R., Bulbulia, J., Gavin, M.C. and Gray, R.D., 2014. The ecology of religious beliefs. Proceedings of the National Academy of Sciences, 111(47), pp.16784-16789.

Boyd, R., Gintis, H. and Bowles, S., 2010. Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science, 328(5978), pp.617-620.

Brandt, H. and Sigmund, K., 2004. The logic of reprobation: assessment and action rules for indirect reciprocation. Journal of theoretical biology, 231(4), pp.475-486.

Burtsev, M. and Turchin, P., 2006. Evolution of cooperative strategies from first principles. Nature, 440(7087), pp.1041-1044.

Chong, D., 2014. Collective action and the civil rights movement. University of Chicago Press.

Colman, A.M. and Stirk, J.A., 1998. Stackelberg reasoning in mixed-motive games: An experimental investigation. Journal of Economic Psychology, 19(2), pp.279-293.

Colman, A.M., 2003. Cooperation, psychological game theory, and limitations of rationality in social interaction. Behavioral and brain sciences, 26(2), pp.139-153.

Cooper, R., DeJong, D.V., Forsythe, R. and Ross, T.W., 1996. Cooperation without reputation: Experimental evidence from prisoner's dilemma games. Games and Economic Behavior, 12(2), pp.187-218.

Crawford, V.P. and Haller, H., 1990. Learning how to cooperate: Optimal play in repeated coordination games. Econometrica: Journal of the Econometric Society, pp.571-595.

Donahue, K., Hauser, O.P., Nowak, M.A. and Hilbe, C., 2020. Evolving cooperation in multichannel games. Nature communications, 11(1), p.3885.

Fehr, E. and Gächter, S., 2002. Altruistic punishment in humans. Nature, 415(6868), pp.137-140.

Garrett, G. and Weingast, B.R., 2019. Ideas, Interests, and Institutions: Constructing the European Community’s Internal Market. In Ideas and Foreign Policy (pp. 173-206). Cornell University Press.

Gillies, D.B., 1953. Some theorems on n-person games. Unpublished doctoral dissertation, Princeton University.

Hauert, C., Traulsen, A., Brandt, H., Nowak, M.A. and Sigmund, K., 2007. Via freedom to coercion: the emergence of costly punishment. science, 316(5833), pp.1905-1907.

Hirshleifer, D. and Rasmusen, E., 1989. Cooperation in a repeated prisoners' dilemma with ostracism. Journal of Economic Behavior & Organization, 12(1), pp.87-106.

Johnson, D.D., 2005. God’s punishment and public goods: A test of the supernatural punishment hypothesis in 186 world cultures. Human Nature, 16, pp.410-446.

Mao, A., Dworkin, L., Suri, S. and Watts, D.J., 2017. Resilient cooperators stabilize long-run cooperation in the finitely repeated prisoner’s dilemma. Nature communications, 8(1), p.13800.

McAdams, R.H., 2008. Beyond the prisoners' dilemma: Coordination, game theory, and law. S. Cal. L. Rev., 82, p.209.

Nowak, M.A., 2006. Five rules for the evolution of cooperation. science, 314(5805), pp.1560-1563.

Ohtsuki, H. and Iwasa, Y., 2004. How should we define goodness?—reputation dynamics in indirect reciprocity. Journal of theoretical biology, 231(1), pp.107-120.

Purzycki, B.G., Apicella, C., Atkinson, Q.D., Cohen, E., McNamara, R.A., Willard, A.K., Xygalatas, D., Norenzayan, A. and Henrich, J., 2016. Moralistic gods, supernatural punishment and the expansion of human sociality. Nature, 530(7590), pp.327-330.

Rand, D.G., Tarnita, C.E., Ohtsuki, H. and Nowak, M.A., 2013. Evolution of fairness in the one-shot anonymous Ultimatum Game. Proceedings of the National Academy of Sciences, 110(7), pp.2581-2586.

Riolo, R.L., Cohen, M.D. and Axelrod, R., 2001. Evolution of cooperation without reciprocity. Nature, 414(6862), pp.441-443.

Russell, B., 2013. Human society in ethics and politics. Routledge.

Santos, F.P., Santos, F.C. and Pacheco, J.M., 2018. Social norm complexity and past reputations in the evolution of cooperation. Nature, 555(7695), pp.242-245.

Selten, R. and Stoecker, R., 1986. End behavior in sequences of finite Prisoner's Dilemma supergames A learning theory approach. Journal of Economic Behavior & Organization, 7(1), pp.47-70.

Shariff, A.F. and Rhemtulla, M., 2012. Divergent effects of beliefs in heaven and hell on national crime rates. PloS one, 7(6), p.e39048.

Stewart, A.J. and Plotkin, J.B., 2014. Collapse of cooperation in evolving games. Proceedings of the National Academy of Sciences, 111(49), pp.17558-17563.

Watts, J., Greenhill, S.J., Atkinson, Q.D., Currie, T.E., Bulbulia, J. and Gray, R.D., 2015. Broad supernatural punishment but not moralizing high gods precede the evolution of political complexity in Austronesia. Proceedings of the Royal Society B: Biological Sciences, 282(1804), p.20142556.

Weingast, B.R., 1997. The political foundations of democracy and the rule of the law. American political science review, 91(2), pp.245-263.

 
