# Tournaments

## Axelrod’s first tournament

Axelrod’s first tournament is described in his 1980 paper, ‘Effective Choice in the Prisoner’s Dilemma’ [Axelrod1980]. The tournament included 14 strategies (plus a random “strategy”); they are listed below in the order in which they appeared.

For each strategy an indication is given as to whether or not it is implemented in the axelrod library. If a strategy is not implemented, please do send us a pull request.

Strategies in Axelrod’s first tournament:

| Name | Author | Axelrod Library Name |
|------|--------|----------------------|
| Tit For Tat | Anatol Rapoport | TitForTat |
| Tideman and Chieruzzi | T Nicolaus Tideman and Paula Chieruzzi | TidemanAndChieruzzi |
| Nydegger | Rudy Nydegger | Nydegger |
| Grofman | Bernard Grofman | Grofman |
| Shubik | Martin Shubik | Shubik |
| Stein and Rapoport | Stein and Anatol Rapoport | SteinAndRapoport |
| Grudger | James W Friedman | Grudger |
| Davis | Morton Davis | Davis |
| Graaskamp | Jim Graaskamp | Graaskamp |
| FirstByDowning | Leslie Downing | RevisedDowning |
| Feld | Scott Feld | Feld |
| Joss | Johann Joss | Joss |
| Tullock | Gordon Tullock | Tullock |
| (Name withheld) | Unknown | UnnamedStrategy |
| Random | Unknown | Random |

## Axelrod’s second tournament

The code for Axelrod’s second tournament was originally published by the University of Michigan Center for the Study of Complex Systems and is now available from Robert Axelrod’s personal website, subject to a disclaimer which states:

“All materials in this archive are copyright (c) 1996, Robert Axelrod, unless otherwise noted. You are free to download these materials and use them without restriction.”

The Axelrod-Python organisation has published a modified version of the original code. In the following table, links to original code point to the Axelrod-Python repository.

Strategies in Axelrod’s second tournament:

| Original Code | Author | Axelrod Library Name |
|---------------|--------|----------------------|
| GRASR | Unknown | Not Implemented |
| K31R | Gail Grisell | GoByMajority |
| K32R | Charles Kluepfel | SecondByKluepfel |
| K33R | Harold Rabbie | Not Implemented |
| K34R | James W Friedman | Grudger |
| K35R | Abraham Getzler | Not Implemented |
| K36R | Roger Hotz | Not Implemented |
| K37R | George Lefevre | Not Implemented |
| K38R | Nelson Weiderman | Not Implemented |
| K39R | Tom Almy | Not Implemented |
| K41R | Herb Weiner | SecondByWeiner |
| K42R | Otto Borufsen | SecondByBorufsen |
| K43R | R D Anderson | Not Implemented |
| K44R | William Adams | SecondByWmAdams |
| K45R | Michael F McGurrin | Not Implemented |
| K46R | Graham J Eatherley | SecondByEatherley |
| K47R | Richard Hufford | SecondByRichardHufford |
| K48R | George Hufford | Not Implemented |
| K49R | Rob Cave | SecondByCave |
| K50R | Rik Smoody | Not Implemented |
| K51R | John William Colbert | Not Implemented |
| K52R | David A Smith | Not Implemented |
| K53R | Henry Nussbacher | Not Implemented |
| K54R | William H Robertson | Not Implemented |
| K55R | Steve Newman | Not Implemented |
| K56R | Stanley F Quayle | Not Implemented |
| K57R | Rudy Nydegger | Not Implemented |
| K58R | Glen Rowsam | SecondByRowsam |
| K59R | Leslie Downing | RevisedDowning |
| K60R | Jim Graaskamp and Ken Katzen | SecondByGraaskampKatzen |
| K61R | Danny C Champion | SecondByChampion |
| K62R | Howard R Hollander | Not Implemented |
| K63R | George Duisman | Not Implemented |
| K64R | Brian Yamachi | SecondByYamachi |
| K65R | Mark F Batell | Not Implemented |
| K66R | Ray Mikkelson | Not Implemented |
| K67R | Craig Feathers | SecondByTranquilizer |
| K68R | Fransois Leyvraz | SecondByLeyvraz |
| K69R | Johann Joss | Not Implemented |
| K70R | Robert Pebly | Not Implemented |
| K71R | James E Hall | Not Implemented |
| K72R | Edward C White Jr | SecondByWhite |
| K73R | George Zimmerman | Not Implemented |
| K74R | Edward Friedland | Not Implemented |
| K74RXX | Edward Friedland | Not Implemented |
| K75R | Paul D Harrington | SecondByHarrington |
| K76R | David Gladstein | SecondByGladstein |
| K77R | Scott Feld | Not Implemented |
| K78R | Fred Mauk | Not Implemented |
| K79R | Dennis Ambuehl and Kevin Hickey | Not Implemented |
| K80R | Robyn M Dawes and Mark Batell | Not Implemented |
| K81R | Martyn Jones | Not Implemented |
| K82R | Robert A Leyland | Not Implemented |
| K83R | Paul E Black | SecondByBlack |
| K84R | T Nicolaus Tideman and Paula Chieruzzi | SecondByTidemanChieruzzi |
| K85R | Robert B Falk and James M Langsted | Not Implemented |
| K86R | Bernard Grofman | Not Implemented |
| K87R | E E H Schurmann | Not Implemented |
| K88R | Scott Appold | SecondByAppold |
| K89R | Gene Snodgrass | Not Implemented |
| K90R | John Maynard Smith | TitFor2Tats |
| K91R | Jonathan Pinkley | Not Implemented |
| K92R | Anatol Rapoport | TitForTat |
| K93R | Unknown | Not Implemented |
| KPAVLOVC | Unknown | WinStayLoseShift |
| KRANDOMC | Unknown | Random |
| KTF2TC | Unknown | TitFor2Tats |
| KTITFORTATC | Unknown | TitForTat |

## Stewart and Plotkin’s Tournament (2012)

In 2012, Alexander Stewart and Joshua Plotkin ran a variant of Axelrod’s tournament with 19 strategies to test the effectiveness of the then newly discovered Zero-Determinant strategies.

The paper is identified as doi: 10.1073/pnas.1208087109 and referred to as [Stewart2012] below. Unfortunately the details of the tournament and the implementation of the strategies are not made clear in the manuscript. We can, however, make reasonable guesses as to the implementation of many strategies based on their names and their classical definitions.

The following classical strategies are included in the library:

Strategies in Stewart and Plotkin’s tournament:

| S&P Name | Long Name | Axelrod Library Name |
|----------|-----------|----------------------|
| ALLC | Always Cooperate | Cooperator |
| ALLD | Always Defect | Defector |
| EXTORT-2 | Extort-2 | ZDExtort2 |
| HARD_MAJO | Hard majority | HardGoByMajority |
| HARD_JOSS | Hard Joss | Joss |
| HARD_TFT | Hard tit for tat | HardTitForTat |
| HARD_TF2T | Hard tit for 2 tats | HardTitFor2Tats |
| TFT | Tit-For-Tat | TitForTat |
| GRIM | Grim | Grudger |
| GTFT | Generous Tit-For-Tat | GTFT |
| TF2T | Tit-For-Two-Tats | TitFor2Tats |
| WSLS | Win-Stay-Lose-Shift | WinStayLoseShift |
| RANDOM | Random | Random |
| ZDGTFT-2 | ZDGTFT-2 | ZDGTFT2 |

ALLC, ALLD, TFT and RANDOM are defined above. The remaining classical strategies are defined below. The tournament also included two Zero Determinant strategies, both implemented in the library. The full table of strategies and results is available online.

### Memory one strategies

In 2012 Press and Dyson [Press2012] showed interesting results with regard to so-called memory-one strategies. Stewart and Plotkin implemented a number of these. A memory-one strategy is a probabilistic strategy defined by four parameters, each giving the probability of cooperating after one of the four possible outcomes of the previous round:

• $$P(C\,|\,CC) = p_1$$
• $$P(C\,|\,CD) = p_2$$
• $$P(C\,|\,DC) = p_3$$
• $$P(C\,|\,DD) = p_4$$

The memory one strategy class is used to define a number of strategies below.
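As a plain-Python illustration (not the library’s API; the function and variable names here are assumptions), a memory-one move is just a lookup of the previous round’s outcome followed by a biased coin flip:

```python
import random

def memory_one_move(last_round, probs, rng=random.random):
    """Return 'C' or 'D' given the previous round's outcome.

    last_round: (my last move, opponent's last move), e.g. ('C', 'D').
    probs: cooperation probabilities keyed by outcome, i.e.
           {('C', 'C'): p1, ('C', 'D'): p2, ('D', 'C'): p3, ('D', 'D'): p4}.
    """
    return 'C' if rng() < probs[last_round] else 'D'

# Tit For Tat expressed as the memory-one vector (1, 0, 1, 0).
tit_for_tat = {('C', 'C'): 1, ('C', 'D'): 0, ('D', 'C'): 1, ('D', 'D'): 0}
```

With probabilities of 0 and 1, as in `tit_for_tat`, the deterministic strategy is recovered as a special case.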

### GTFT

Generous-Tit-For-Tat plays Tit-For-Tat with occasional forgiveness, which prevents it from becoming locked into cycles of defection against itself.

GTFT is defined as a memory-one strategy as follows:

• $$P(C\,|\,CC) = 1$$
• $$P(C\,|\,CD) = p$$
• $$P(C\,|\,DC) = 1$$
• $$P(C\,|\,DD) = p$$

where $$p = \min\left(1 - \frac{T-R}{R-S}, \frac{R-P}{T-P}\right)$$.
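For instance, with the conventional payoff values $$R=3$$, $$P=1$$, $$S=0$$ and $$T=5$$ (an assumption; the tournament’s exact values are not restated here), the formula gives $$p = \min(1/3, 1/2) = 1/3$$:

```python
# Conventional Prisoner's Dilemma payoffs (assumed values).
R, S, T, P = 3, 0, 5, 1

# GTFT's forgiveness probability from the formula above.
p = min(1 - (T - R) / (R - S), (R - P) / (T - P))
# p == 1/3: forgive a defection one third of the time.
```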

GTFT came 2nd in average score and 18th in wins in S&P’s tournament.

### TF2T

Tit-For-Two-Tats is like Tit-For-Tat but only retaliates after two defections rather than one. This is not a memory-one strategy.
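A minimal sketch of the rule (the helper name is an assumption, not the library’s API):

```python
def tf2t_move(opponent_history):
    """Tit For Two Tats: defect only when the opponent's last two
    moves were both defections; otherwise cooperate."""
    return 'D' if opponent_history[-2:] == ['D', 'D'] else 'C'
```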

TF2T came 3rd in average score and last (?) in wins in S&P’s tournament.

### WSLS

Win-Stay-Lose-Shift is a strategy that shifts if the highest payoff was not earned in the previous round. WSLS is also known as “Win-Stay-Lose-Switch” and “Pavlov”. It can be seen as a memory-one strategy as follows:

• $$P(C\,|\,CC) = 1$$
• $$P(C\,|\,CD) = 0$$
• $$P(C\,|\,DC) = 0$$
• $$P(C\,|\,DD) = 1$$
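Equivalently, WSLS cooperates exactly when both players made the same move in the previous round. A minimal sketch (the helper name is an assumption):

```python
def wsls_move(my_last, opp_last):
    """Win-Stay-Lose-Shift: cooperate after the mutual outcomes CC and
    DD, defect after the mixed outcomes CD and DC -- the
    (1, 0, 0, 1) memory-one vector above."""
    return 'C' if my_last == opp_last else 'D'
```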

WSLS came 7th in average score and 13th in wins in S&P’s tournament.

### RANDOM

Random is a strategy that was defined in Axelrod’s first tournament. Note that it is also a memory-one strategy:

• $$P(C\,|\,CC) = 0.5$$
• $$P(C\,|\,CD) = 0.5$$
• $$P(C\,|\,DC) = 0.5$$
• $$P(C\,|\,DD) = 0.5$$

RANDOM came 8th in average score and 8th in wins in S&P’s tournament.

### ZDGTFT-2

This memory-one strategy is defined by the following four conditional probabilities based on the last round of play:

• $$P(C\,|\,CC) = 1$$
• $$P(C\,|\,CD) = 1/8$$
• $$P(C\,|\,DC) = 1$$
• $$P(C\,|\,DD) = 1/4$$

This strategy came 1st in average score and 16th in wins in S&P’s tournament.

### EXTORT-2

This memory-one strategy is defined by the following four conditional probabilities based on the last round of play:

• $$P(C\,|\,CC) = 8/9$$
• $$P(C\,|\,CD) = 1/2$$
• $$P(C\,|\,DC) = 1/3$$
• $$P(C\,|\,DD) = 0$$

This strategy came 18th in average score and 2nd in wins in S&P’s tournament.

### GRIM

Grim is not defined in [Stewart2012] but it is defined elsewhere as follows. GRIM (also called “Grim trigger”), cooperates until the opponent defects and then always defects thereafter. In the library this strategy is called Grudger.

GRIM came 10th in average score and 11th in wins in S&P’s tournament.

### HARD_JOSS

HARD_JOSS is not defined in [Stewart2012] but is otherwise defined as a strategy that plays like TitForTat but cooperates only with probability $$0.9$$. This is a memory-one strategy with the following probabilities:

• $$P(C\,|\,CC) = 0.9$$
• $$P(C\,|\,CD) = 0$$
• $$P(C\,|\,DC) = 1$$
• $$P(C\,|\,DD) = 0$$

HARD_JOSS came 16th in average score and 4th in wins in S&P’s tournament.

HARD_JOSS as described above is implemented in the library as Joss and is the same as the Joss strategy from Axelrod’s first tournament.

### HARD_MAJO

HARD_MAJO is not defined in [Stewart2012] and is presumably the same as “Go by Majority”, defined as follows: the strategy defects on the first move, defects if the number of defections of the opponent is greater than or equal to the number of times the opponent has cooperated, and otherwise cooperates.
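That rule can be sketched as follows (an illustrative helper, not the library implementation):

```python
def hard_majority_move(opponent_history):
    """Defect on the first move; thereafter defect whenever the
    opponent's defections so far equal or outnumber their cooperations."""
    if not opponent_history:
        return 'D'
    defections = opponent_history.count('D')
    cooperations = len(opponent_history) - defections
    return 'D' if defections >= cooperations else 'C'
```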

HARD_MAJO came 13th in average score and 5th in wins in S&P’s tournament.

### HARD_TFT

Hard TFT is not defined in [Stewart2012] but is elsewhere defined as follows. The strategy cooperates on the first move, defects if the opponent has defected on any of the previous three rounds, and otherwise cooperates.
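A minimal sketch of the rule (the helper name is an assumption):

```python
def hard_tft_move(opponent_history):
    """Cooperate initially; defect if the opponent defected in any of
    the last three rounds."""
    return 'D' if 'D' in opponent_history[-3:] else 'C'
```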

HARD_TFT came 12th in average score and 10th in wins in S&P’s tournament.

### HARD_TF2T

Hard TF2T is not defined in [Stewart2012] but is elsewhere defined as follows. The strategy cooperates on the first move, defects if the opponent has defected twice in succession within the previous three rounds, and otherwise cooperates.
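A minimal sketch of the rule (the helper name is an assumption):

```python
def hard_tf2t_move(opponent_history):
    """Cooperate initially; defect only if the opponent defected twice
    in succession within the last three rounds."""
    last_three = opponent_history[-3:]
    pairs = zip(last_three, last_three[1:])
    return 'D' if ('D', 'D') in pairs else 'C'
```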

HARD_TF2T came 6th in average score and 17th in wins in S&P’s tournament.

### Calculator

This strategy is not unambiguously defined in [Stewart2012] but is defined elsewhere. Calculator plays like Joss for 20 rounds. On the 21st round, Calculator attempts to detect a cycle in the opponent’s history and defects unconditionally thereafter if a cycle is found. Otherwise Calculator plays like TFT for the remaining rounds.
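The cycle-detection step can be sketched like this; `find_cycle` is an illustrative helper and the library’s actual detection details may differ:

```python
def find_cycle(history):
    """Return the length of the shortest cycle that generates the whole
    history, or None if no cycle of length <= len(history) // 2 fits."""
    n = len(history)
    for length in range(1, n // 2 + 1):
        if all(history[i] == history[i % length] for i in range(n)):
            return length
    return None
```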

### Prober

PROBE is not unambiguously defined in [Stewart2012] but is defined elsewhere as Prober. The strategy starts by playing D, C, C on the first three rounds and then defects forever if the opponent cooperates on rounds two and three. Otherwise Prober plays as TitForTat would.
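A sketch of the rule (the helper name and 0-indexed turn counter are assumptions, not the library’s API):

```python
def prober_move(turn, opponent_history):
    """Play D, C, C on the first three rounds; if the opponent
    cooperated on rounds two and three, defect forever; otherwise
    fall back to Tit For Tat."""
    opening = ['D', 'C', 'C']
    if turn < 3:                       # turns are 0-indexed here
        return opening[turn]
    if opponent_history[1] == 'C' and opponent_history[2] == 'C':
        return 'D'
    return opponent_history[-1]        # Tit For Tat
```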

Prober came 15th in average score and 9th in wins in S&P’s tournament.

### Prober2

PROBE2 is not unambiguously defined in [Stewart2012] but is defined elsewhere as Prober2. The strategy starts by playing D, C, C on the first three rounds and then cooperates forever if the opponent played D then C on rounds two and three. Otherwise Prober2 plays as TitForTat would.

Prober2 came 9th in average score and 12th in wins in S&P’s tournament.

### Prober3

PROBE3 is not unambiguously defined in [Stewart2012] but is defined elsewhere as Prober3. The strategy starts by playing D, C on the first two rounds and then defects forever if the opponent cooperated on round two. Otherwise Prober3 plays as TitForTat would.

Prober3 came 17th in average score and 7th in wins in S&P’s tournament.

### HardProber

HARD_PROBE is not unambiguously defined in [Stewart2012] but is defined elsewhere as HardProber. The strategy starts by playing D, D, C, C on the first four rounds and then defects forever if the opponent cooperates on rounds two and three. Otherwise HardProber plays as TitForTat would.

HardProber came 5th in average score and 6th in wins in S&P’s tournament.

### NaiveProber

NAIVE_PROBER is a modification of Tit For Tat that defects at random with a small probability. The default probability of defection is 0.1.

## Beaufils et al.’s tournament (1997)

In 1997 the authors of [Beaufils1997] used a tournament to describe a new strategy of theirs called “Gradual”. The description given in the paper of “Gradual” is:

This strategy acts as tit-for-tat, except when it is time to forgive and remember the past. It uses cooperation on the first move and then continues to do so as long as the other player cooperates. Then after the first defection of the other player, it defects one time and cooperates two times; after the second defection of the opponent, it defects two times and cooperates two times, … after the nth defection it reacts with n consecutive defections and then calms down its opponent with two cooperations.
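This description can be replayed in plain Python. The sketch below is illustrative only: the helper name and bookkeeping are assumptions, and it counts every opponent defection towards the punishment length, which matches the updated Gradual convention rather than necessarily the original.

```python
def gradual_moves(opponent_moves):
    """Replay Gradual against a fixed sequence of opponent moves.

    After the opponent's nth defection, punish with n consecutive
    defections, then cooperate twice to calm things down."""
    moves = []
    opp_defections = 0   # opponent defections seen so far
    punish_left = 0      # defections remaining in the current punishment
    calm_left = 0        # cooperations remaining in the calming phase
    last_opp = None
    for opp in opponent_moves:
        if punish_left > 0:
            moves.append('D')
            punish_left -= 1
            if punish_left == 0:
                calm_left = 2
        elif calm_left > 0:
            moves.append('C')
            calm_left -= 1
        elif last_opp == 'D':
            # Start the nth punishment; this turn is its first defection.
            punish_left = opp_defections - 1
            moves.append('D')
            if punish_left == 0:
                calm_left = 2
        else:
            moves.append('C')
        last_opp = opp
        if opp == 'D':
            opp_defections += 1
    return moves
```

Against a single defection the sketch defects once and then cooperates twice, as in the paper’s description.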

This is the only description of the strategy; however, the paper does include a table of results of the tournament. The scores of “Gradual” against the opponents (including itself) are:

Score of Gradual reported in [Beaufils1997]:

| Name | Name used in [Beaufils1997] | Score (1000 turns) |
|------|-----------------------------|--------------------|
| Cooperator | coop | 3000 |
| Defector | def | 915 |
| Random | rand | 2815 |
| Tit For Tat | tft | 3000 |
| Grudger | spite | 3000 |
| Cycler DDC | p_nst | 2219 |
| Cycler CCD | p_kn | 3472 |
| Go By Majority | sft_mj | 3000 |
| Suspicious Tit For Tat | mist | 2996 |
| Prober | prob | 2999 |
| Win Stay Lose Shift | pav | 3000 |

The following code reproduces the above:

>>> import axelrod as axl
>>> players = [axl.Cooperator(),
...            axl.Defector(),
...            axl.Random(),
...            axl.TitForTat(),
...            axl.Grudger(),
...            axl.CyclerDDC(),
...            axl.CyclerCCD(),
...            axl.GoByMajority(),
...            axl.SuspiciousTitForTat(),
...            axl.Prober(),
...            axl.OriginalGradual(),
...            axl.WinStayLoseShift(),
...            ]
>>> axl.seed(1)
>>> turns = 1000
>>> tournament = axl.Tournament(players, turns=turns, repetitions=1)
>>> results = tournament.play(progress_bar=False)
>>> for average_score_per_turn in results.payoff_matrix[-2]:
...     print(round(average_score_per_turn * turns, 1))
3000.0
915.0
2763.0
3000.0
3000.0
2219.0
3472.0
3000.0
2996.0
2999.0
3000.0
3000.0


The OriginalGradual strategy implemented has the following description:

A player that punishes defections with a growing number of defections but after punishing for punishment_limit number of times enters a calming state and cooperates no matter what the opponent does for two rounds.

The punishment_limit is incremented whenever the opponent defects and the strategy is not in either calming or punishing state.

Note that a different version of Gradual appears in [CRISTAL-SMAC2018]. This was brought to the attention of the maintainers of the library by one of the authors of [Beaufils1997] and is documented here https://github.com/Axelrod-Python/Axelrod/issues/1294.

The strategy implemented in [CRISTAL-SMAC2018] and defined here as Gradual has the following description:

Similar to OriginalGradual, this is a player that punishes defections with a growing number of defections but after punishing for punishment_limit number of times enters a calming state and cooperates no matter what the opponent does for two rounds.

This version of Gradual is an update of OriginalGradual and the difference is that the punishment_limit is incremented whenever the opponent defects (regardless of the state of the player).

This highlights the importance of best practice and reproducible computational research. Both strategies implemented in this library are fully tested and documented clearly and precisely.