Evolutionary Game Theory

First applied to the interactions of organisms within ecosystems, evolutionary game theory has increasingly found relevance to all areas of socioeconomic interaction.

Evolutionary game theory is the application of game theory concepts to situations in which a population of agents with diverse strategies interact over time, converging on a stable solution through an evolutionary process of selection and duplication. The central insight of evolutionary game theory is that many behaviors involve the interaction of multiple agents in a population, and the success of any one agent depends on how its strategy interacts with those of others. Thus the fitness of an individual organism cannot be measured in isolation; rather, it has to be evaluated in the context of the full population in which it lives.2
Whereas classical game theory has focused on static strategies, that is, strategies that do not change over time, evolutionary game theory focuses on the dynamics of strategy change: how strategies evolve over time, and which kinds of strategies prove most successful in this evolutionary process. The key aspects of this process are: adaptive agents with strategies; multiple iterations in which these strategies interact; the duplication of successful strategies and removal of unsuccessful ones; and the growth of successful strategies within the population until a stable solution is reached.
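This selection-and-duplication loop can be made concrete with a small simulation. Below is a minimal sketch in Python of discrete-time replicator dynamics for a two-strategy population; the prisoner's dilemma payoff values and the starting frequencies are illustrative assumptions, not figures from the text:

```python
import numpy as np

# Illustrative Prisoner's Dilemma payoffs: rows = my strategy,
# columns = opponent's strategy. 0 = cooperate, 1 = defect.
PAYOFF = np.array([[3.0, 0.0],   # cooperator vs (cooperator, defector)
                   [5.0, 1.0]])  # defector   vs (cooperator, defector)

def replicator_step(x):
    """One discrete-time replicator update.

    x holds the strategy frequencies; strategies whose payoff exceeds
    the population average grow, the rest shrink (duplication of
    successful strategies, removal of unsuccessful ones).
    """
    fitness = PAYOFF @ x   # expected payoff of each strategy
    avg = x @ fitness      # population-average payoff
    return x * fitness / avg

x = np.array([0.9, 0.1])   # start with 90% cooperators
for generation in range(60):
    x = replicator_step(x)
print(x)  # defection takes over: approximately [0.0, 1.0]
```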

Classical game theory was developed during the mid-20th century, primarily for application in economics and political science, but in the 1970s a number of biologists began to recognize how similar the games being studied were to the interactions between animals within ecosystems. Game theory quickly became a hot topic in biology as researchers found it relevant to all sorts of animal and microbial interactions, from the feeding of bats to the territorial defense of stickleback fish. Originally, evolutionary game theory was simply the application of game theory to evolving populations in biology, asking how cooperative systems could have evolved from the various strategies that biological creatures might have adopted. However, its development has produced a framework that holds great promise as a general theory, and more recently evolutionary game theory has become of increasing interest to economists, sociologists, anthropologists, and other social scientists, as well as philosophers.3

Evolution

One of the important differences between evolutionary game theory and standard game theory is that the evolutionary version does not require players to act rationally.4 When we talk about biological cells or ants, we know that they do not sit in front of a payoff matrix and ask themselves which option gives the best payoff; in evolutionary game theory, natural selection does this for us. Payoffs in evolutionary biology correspond to reproductive success. So if we have a group of cooperators and defectors who meet each other at random, the average payoff for the defectors is higher than for the cooperators, and therefore they reproduce more. After some time, evolution will have favored defectors to the point where the cooperators are extinct. The basic logic is that for a strategy to survive over time it must be optimal, or else some more effective strategy will eventually come to dominate the population. Traditionally, the story of evolution is told as one of competition, and there is certainly plenty of that, but there is also mutualism, in which organisms and people manage to work together cooperatively and survive in the face of defectors. Many research papers have been written on how cooperation could evolve in the face of such an evolutionary dynamic.5
The general questions of interest in evolutionary game theory are how patterns of cooperation evolve, and what the optimal strategies are in a game that is played repeatedly over time. The basic mechanism underlying the evolution of cooperation is the interdependency between acts over time. In a single-shot game it makes sense to always defect, but with repeated interaction cooperation becomes far more viable. If the game is repeated, it is no longer the case that strict defection is the best option.6 Repeating the prisoner's dilemma allows non-cooperation to be punished more, and cooperation to be rewarded more, than the single-shot version of the problem would suggest. We can understand this better by looking at a number of experiments that were done to investigate this dynamic.
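Before turning to those experiments, a quick calculation shows why repetition changes the incentives. Assume the standard prisoner's dilemma payoffs (temptation T=5, reward R=3, punishment P=1, sucker's payoff S=0) and an opponent who simply mirrors our previous move; both assumptions are for illustration only:

```python
# Standard Prisoner's Dilemma payoffs (assumed values for illustration):
# T = temptation, R = mutual cooperation, P = mutual defection, S = sucker.
T, R, P, S = 5, 3, 1, 0

def total_payoff(defect_always, rounds):
    """Our total payoff against a reciprocating opponent who cooperates
    first and then mirrors our previous move."""
    if defect_always:
        # One temptation payoff, then locked into mutual defection.
        return T + (rounds - 1) * P
    # Sustained mutual cooperation.
    return rounds * R

for n in (1, 2, 5, 20):
    print(n, total_payoff(True, n), total_payoff(False, n))
# In a single round defection wins (5 vs 3); by five rounds cooperation
# wins (9 vs 15), and the gap grows with the length of the interaction.
```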

Experiments

Cover of Axelrod’s highly influential book The Evolution of Cooperation. Widely praised and much-discussed, this classic book explored how cooperation can emerge in a world where there is no central authority to police agents’ actions.

In the late 1970s, the political scientist Robert Axelrod ran a number of highly influential computer experiments asking what makes a good strategy for playing a repeated prisoner's dilemma.7 Axelrod invited various researchers to submit computer algorithms to a competition to see which would fare best against the others. These computer models of the evolution of cooperation showed that indiscriminate cooperators almost always end up losing against defectors, who accept helpful acts from others but do not reciprocate. People who are indiscriminately cooperative and helpful all of the time end up being taken advantage of by others. However, a population of pure defectors also loses out, forgoing the rewards of cooperation that would give everyone higher payoffs.
Many strategies have been tested; the best competitive strategies combine general cooperation with a measured retaliatory response when necessary. The most famous and one of the most successful of these is Tit for Tat, a very simple algorithm of just three rules: start with cooperation; if the other player cooperates, cooperate; if the other player defects, defect. Computer tournaments in which different strategies were pitted against each other showed Tit for Tat to be the most successful strategy in social dilemmas. Tit for Tat is a common strategy in real-world social dilemmas because it is nice but firm: it makes cooperation possible but is also quick to reprimand. It can be found naturally in everything from international trade policy to borrowing and lending money, and in repeated interactions cooperation can emerge when people adopt a Tit for Tat strategy.
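As a concrete illustration, here is a minimal round-robin tournament in Python in the spirit of Axelrod's experiments; the particular set of opponent strategies, the payoff values, and the match length are all illustrative assumptions:

```python
import itertools

# A hypothetical minimal strategy set; each strategy sees the opponent's
# full history of moves and returns 'C' (cooperate) or 'D' (defect).
def tit_for_tat(opp_history):
    return 'C' if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return 'D'

def always_cooperate(opp_history):
    return 'C'

def grudger(opp_history):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in opp_history else 'C'

# Standard Prisoner's Dilemma payoffs (T=5 > R=3 > P=1 > S=0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_match(s1, s2, rounds=200):
    """Play an iterated Prisoner's Dilemma; return both total scores."""
    h1, h2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1
        score2 += p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

strategies = {'TFT': tit_for_tat, 'ALLD': always_defect,
              'ALLC': always_cooperate, 'GRUDGER': grudger}
totals = dict.fromkeys(strategies, 0)
# Round-robin in which each strategy also meets its own twin,
# as in Axelrod's tournaments.
for (n1, s1), (n2, s2) in itertools.combinations_with_replacement(
        strategies.items(), 2):
    sc1, sc2 = play_match(s1, s2)
    totals[n1] += sc1
    totals[n2] += sc2
print(totals)  # nice-but-retaliatory strategies finish at the top
```

Notably, Tit for Tat never outscores any single opponent within a match; it wins on aggregate because it elicits mutual cooperation from every strategy willing to reciprocate.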
To go beyond Tit for Tat, researchers started to use computers to simulate the process of evolution itself. Instead of people submitting solutions, the computer generated mutations and selected among them, with the researchers recording and analyzing the results. These experiments showed that if players start out playing randomly, the winners are those who always defect. But once everyone has come to play defect strategies, a few Tit for Tat players can form a small cluster in which they obtain good payoffs among themselves. Evolutionary selection can then start to favor them, and they are not exploited by the surrounding defectors because they immediately switch to defection in retaliation.
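The cluster effect described here can be checked with simple expected-value arithmetic, along the lines of Axelrod and Hamilton's invasion argument; the payoff values, match length, and clustering probabilities below are assumptions for illustration:

```python
# Cluster-invasion arithmetic: can a small cluster of Tit for Tat
# players invade a population of defectors?
T, R, P, S = 5, 3, 1, 0
ROUNDS = 20

tft_vs_tft = ROUNDS * R              # mutual cooperation throughout
tft_vs_alld = S + (ROUNDS - 1) * P   # exploited once, then mutual defection
alld_vs_alld = ROUNDS * P

def tft_score(p_meet_tft):
    """Average match score of a clustered TFT player who meets a fellow
    cluster member with probability p_meet_tft (assumed parameter)."""
    return p_meet_tft * tft_vs_tft + (1 - p_meet_tft) * tft_vs_alld

# Defectors, meeting mostly one another, score about ROUNDS * P = 20.
for p in (0.0, 0.05, 0.20):
    print(p, tft_score(p), alld_vs_alld)
# An isolated TFT player (p = 0) scores 19 and loses out, but with even
# a 5% chance of meeting a fellow cooperator the cluster out-earns the
# surrounding defectors (21.05 vs 20), and selection begins to favor it.
```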

Generous Tit for Tat

One of the features of game theory that Axelrod’s experiments illustrated is just how contingent any given strategy is upon the context in which it finds itself.

However, the Tit for Tat strategy did not prevail for long in this setting, as a new solution emerged from this context: a more forgiving mutant of Tit for Tat called Generous Tit for Tat. Generous Tit for Tat starts with cooperation and reciprocates cooperation from others, but when the other player defects, it still cooperates with some probability. It thus uses randomness to enable the quality of forgiveness: it cooperates when others do, but when they defect there remains some probability that it will continue to cooperate. Because this decision is random, others cannot predict when it will forgive.
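Here is a sketch of the strategy, together with a noisy iterated match that previews why forgiveness pays, as discussed next; the forgiveness probability, noise level, and payoffs are illustrative assumptions:

```python
import random

# Standard Prisoner's Dilemma payoffs (T=5 > R=3 > P=1 > S=0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    return 'C' if not opp_history else opp_history[-1]

def generous_tit_for_tat(opp_history, q=1/3):
    """Reciprocate cooperation; after a defection, still cooperate with
    probability q (q = 1/3 is an illustrative choice)."""
    if not opp_history or opp_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < q else 'D'

def noisy_match(s1, s2, rounds=5000, noise=0.05):
    """Iterated PD in which each intended move is flipped with some
    probability, mimicking errors of action and perception."""
    h1, h2 = [], []
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)
        if random.random() < noise:
            m1 = 'D' if m1 == 'C' else 'C'
        if random.random() < noise:
            m2 = 'D' if m2 == 'C' else 'C'
        p1, p2 = PAYOFF[(m1, m2)]
        total1 += p1
        total2 += p2
        h1.append(m1)
        h2.append(m2)
    return total1 / rounds, total2 / rounds

random.seed(1)
print(noisy_match(tit_for_tat, tit_for_tat))  # echoes of retaliation drag scores down
print(noisy_match(generous_tit_for_tat, generous_tit_for_tat))  # forgiveness restores them
```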
It turns out that this forgiving strategy is optimal in environments where there is some degree of noise in communications, as is characteristic of real-world environments. In the real world, we often do not know for certain whether a partner cheated, or whether someone really meant what they said, and such errors have to be compensated for by some degree of forgiveness. In a world of errors in action and perception, such a strategy can be a Nash equilibrium and evolutionarily stable.8 The more beneficial cooperation is, the more forgiving Generous Tit for Tat can be while still resisting invasion by defectors. The extraordinary thing that happens next is that once everyone has moved towards playing Generous Tit for Tat, cooperation becomes a much stronger attractor, and at this stage players can adopt an unconditionally cooperative strategy without any disadvantage. In a world of Generous Tit for Tat there is no longer a need for retaliation, and thus unconditional cooperators survive.9 For a strategy to be evolutionarily stable, it must have the property that if almost every member of the population follows it, no mutant can successfully invade, where a mutant is an individual who adopts a novel strategy.
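This stability condition has a standard formalization due to Maynard Smith and Price. Writing $E(A, B)$ for the expected payoff of playing strategy $A$ against strategy $B$, a strategy $\sigma$ is evolutionarily stable if, for every mutant strategy $\mu \neq \sigma$, either

$$E(\sigma, \sigma) > E(\mu, \sigma), \quad \text{or} \quad E(\sigma, \sigma) = E(\mu, \sigma) \ \text{and} \ E(\sigma, \mu) > E(\mu, \mu).$$

The first condition says the incumbent does strictly better against itself than any mutant does against it; the second covers ties, requiring the incumbent to do better against the mutant than the mutant does against itself, so the mutant still cannot spread.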
In many situations cooperation is favored, and it can even benefit an individual to forgive an occasional defection, but cooperative societies are always unstable because mutants inclined to defect can upset any balance. This is the downfall of the cooperative strategy. What happens next is somewhat predictable: in a world where everyone is cooperating, unconditional defection is an optimal strategy, and once it takes hold cooperation collapses. Thus we see a dynamic, cyclical process in which higher forms of cooperation arise and then collapse. In many ways this reflects what we see in the real world, where economies and empires rise and fall as institutional structures for cooperation are formed, mature, and eventually decline.10

Indirect reciprocity

These experiments describe the evolution of systems of cooperation through direct interaction: much of our interaction is repeated with people we have dealt with before and whose capacity for reciprocity we have come to understand. In large societies, however, we have to interact with many people we have never met before, often in one-off encounters. Experiments have shown that people help those who have helped others and shown reciprocity in the past, and that this form of indirect reciprocity yields a higher payoff in the end.11 Reputation systems are what allow for the evolution of cooperation by indirect reciprocity: natural selection favors strategies that base the decision to help on the reputation of the recipient. The idea is that when you interact with others, the interaction is observed, and people note whether you acted cooperatively or uncooperatively; that information then circulates so that others learn about your behavior. Direct reciprocity means I help you and you help me; indirect reciprocity means I help you, and then somebody else helps me because I now have a reputation for cooperating.
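A minimal simulation makes the mechanism concrete. The sketch below uses a simple standing-style reputation rule in a donation game; the agent types, parameter values, and the rule that justified refusals do not damage reputation are all modeling assumptions for illustration:

```python
import random

# Donation game with public reputation: a donor may pay cost C to give
# a recipient benefit B, and the choice updates the donor's reputation.
B, C = 3.0, 1.0
ROUNDS = 200_000
TYPES = ['ALLC', 'ALLD', 'DISC']  # unconditional helpers, defectors,
                                  # and reputation-based discriminators

class Agent:
    def __init__(self, kind):
        self.kind = kind
        self.good = True      # public reputation, initially good
        self.payoff = 0.0

random.seed(0)
agents = [Agent(kind) for kind in TYPES for _ in range(50)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    helps = (donor.kind == 'ALLC' or
             (donor.kind == 'DISC' and recipient.good))
    if helps:
        donor.payoff -= C
        recipient.payoff += B
        donor.good = True
    elif recipient.good:
        donor.good = False  # refusing a deserving recipient hurts your image
    # Refusing a bad-reputation recipient is treated as justified and
    # leaves the donor's standing intact (a modeling assumption).

for kind in TYPES:
    scores = [a.payoff for a in agents if a.kind == kind]
    print(kind, round(sum(scores) / len(scores), 1))
# Discriminators come out ahead: they receive help from one another
# while never wasting help on agents with a record of defecting.
```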

Today’s emerging social network technologies represent the possibility of building indirect reputation systems on a scale previously unimaginable.

The result is the formation of reputation: when you cooperate, your reputation improves; when you defect, it is reduced. That reputation then follows you around and is used as the basis for others’ interactions with you. Reputation thus forms a system for the evolution of cooperation in larger societies, where people frequently interact with others they may not know personally but, thanks to various reputation systems, can identify those who are cooperative and enter into mutually beneficial reciprocal relations.12 The more sophisticated and secure these reputation systems are, the greater the capacity for cooperative organization. We can create large systems in which we know whom to cooperate with and thus can be cooperative ourselves, potentially creating a successful community. Of course, as a society gets bigger, it has to form more complex institutions to support functional reputation systems. In this way we have gone from small communities where local gossip sufficed to establish everyone’s capacity for cooperation, to large modern industrial societies where centralized organizations vouch for people’s reputations, to today’s burgeoning global reputation systems based on information technology and mediated through the internet.

Research shows that cooperators create better opportunities for themselves than non-cooperators: they are selectively preferred as collaborative partners, romantic partners, and group leaders.13 This only occurs, however, when people’s social-dilemma choices are seen and recorded by others in some way. This kind of indirect reciprocity is cognitively complex; no other creature has mastered it to even a fraction of the degree that humans have. Games of indirect reciprocity lead to the evolution of social intelligence, ever more sophisticated means of communication, and the social and cultural institutions that are characteristic of human civilization.
The basic problem of the evolution of cooperation is thus that nice guys get taken advantage of, so there must be some supporting structure to enable cooperation.14 More than any other primate species, humans have overcome this problem through a variety of mechanisms: reciprocating cooperative acts; forming reputations of others and of ourselves as cooperators, and caring about those reputations; and creating prosocial norms of good behavior that everyone in the group enforces on others through disapproval, if not punishment, and on themselves through feelings of guilt and shame. All of these form the fabric of the sociocultural institutions that enable advanced forms of cooperation.

1. (2017). People.goshen.edu. Retrieved 14 May 2017, from https://goo.gl/hL0iis

2. (2017). Cs.cornell.edu. Retrieved 14 May 2017, from https://goo.gl/zDAc2A

3. Alexander, J. (2002). Evolutionary Game Theory. Plato.stanford.edu. Retrieved 14 May 2017, from https://plato.stanford.edu/entries/game-evolutionary/

4. Alexander, J. (2002). Evolutionary Game Theory. Plato.stanford.edu. Retrieved 14 May 2017, from https://plato.stanford.edu/entries/game-evolutionary/

5. The direction of evolution: The rise of cooperative organization. (2017). Sciencedirect.com. Retrieved 14 May 2017, from https://goo.gl/QYHWHm

6. Prisoner’s Dilemma (Stanford Encyclopedia of Philosophy). (2017). Plato.stanford.edu. Retrieved 14 May 2017, from https://plato.stanford.edu/entries/prisoner-dilemma/

7. Axelrod, R. & Hamilton, W. D. (1981). The Evolution of Cooperation. Retrieved 14 May 2017, from http://www-personal.umich.edu/~axe/research/Axelrod%20and%20Hamilton%20EC%201981.pdf

8. Wu, J. & Axelrod, R. How to Cope with Noise in the Iterated Prisoner’s Dilemma. Retrieved 14 May 2017, from http://www-personal.umich.edu/~axe/research/How_to_Cope.pdf

9. Martin Nowak: ‘The Evolution of Cooperation’ | 2015 ISNIE Annual Meeting. (2017). YouTube. Retrieved 14 May 2017, from https://www.youtube.com/watch?v=A8Y0kCdYoug&t=1934s

10. Martin Nowak: ‘The Evolution of Cooperation’ | 2015 ISNIE Annual Meeting. (2017). YouTube. Retrieved 14 May 2017, from https://www.youtube.com/watch?v=A8Y0kCdYoug&t=1934s

11. Nowak, M. & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature 437, 1291–1298 (27 October 2005). Retrieved from https://goo.gl/6xGHX8

12. Martin Nowak: ‘The Evolution of Cooperation’ | 2015 ISNIE Annual Meeting. (2017). YouTube. Retrieved 14 May 2017, from https://www.youtube.com/watch?v=A8Y0kCdYoug&t=1934s

13. Encyclopedia of Group Processes and Intergroup Relations. (2017). Google Books. Retrieved 14 May 2017, from https://goo.gl/uVIXog

14. Martin Nowak: ‘The Evolution of Cooperation’ | 2015 ISNIE Annual Meeting. (2017). YouTube. Retrieved 14 May 2017, from https://www.youtube.com/watch?v=A8Y0kCdYoug&t=1934s
