Strategies index¶
Here are the docstrings of all the strategies in the library.

class
axelrod.strategies.adaptive.
Adaptive
(initial_plays: List[axelrod.action.Action] = None) → None[source]¶ Start with a specific sequence of C and D, then play the strategy that has worked best, recalculated each turn.
Names:
 Adaptive: [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Adaptive'¶

class
axelrod.strategies.alternator.
Alternator
[source]¶ A player who alternates between cooperating and defecting.
Names
 Alternator: [Axelrod1984]
 Periodic player CD: [Mittal2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Alternator'¶

class
axelrod.strategies.ann.
ANN
(weights: List[float], num_features: int, num_hidden: int) → None[source]¶ Artificial Neural Network based strategy.
A single layer neural network based strategy, with the following features:
 Opponent’s first move is C
 Opponent’s first move is D
 Opponent’s second move is C
 Opponent’s second move is D
 Player’s previous move is C
 Player’s previous move is D
 Player’s second previous move is C
 Player’s second previous move is D
 Opponent’s previous move is C
 Opponent’s previous move is D
 Opponent’s second previous move is C
 Opponent’s second previous move is D
 Total opponent cooperations
 Total opponent defections
 Total player cooperations
 Total player defections
 Round number
Original Source: https://gist.github.com/mojones/550b32c46a8169bb3cd89d917b73111a#fileannstrategytestL60
Names
 Artificial Neural Network based strategy: Original name by Martin Jones

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'ANN'¶

class
axelrod.strategies.ann.
EvolvedANN
→ None[source]¶ A strategy based on a pretrained neural network with 17 features and a hidden layer of size 10.
Names:
 Evolved ANN: Original name by Martin Jones.

name
= 'Evolved ANN'¶

class
axelrod.strategies.ann.
EvolvedANN5
→ None[source]¶ A strategy based on a pretrained neural network with 17 features and a hidden layer of size 5.
Names:
 Evolved ANN 5: Original name by Marc Harper.

name
= 'Evolved ANN 5'¶

class
axelrod.strategies.ann.
EvolvedANNNoise05
→ None[source]¶ A strategy based on a pretrained neural network with a hidden layer of size 10, trained with noise=0.05.
Names:
 Evolved ANN Noise 05: Original name by Marc Harper.

name
= 'Evolved ANN 5 Noise 05'¶

axelrod.strategies.ann.
activate
(bias: List[float], hidden: List[float], output: List[float], inputs: List[int]) → float[source]¶  Compute the output of the neural network:
 output = relu(inputs * hidden_weights + bias) * output_weights
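A minimal sketch of that computation. It assumes the hidden weights are given per-neuron as rows (the library's `activate` takes flat weight lists, so `activate_sketch` and `relu` here are illustrative names, not the library's API):

```python
from typing import List

def relu(x: float) -> float:
    """Rectified linear unit: clamp negative pre-activations to zero."""
    return max(0.0, x)

def activate_sketch(bias: List[float], hidden: List[List[float]],
                    output: List[float], inputs: List[int]) -> float:
    """Compute relu(inputs . hidden_weights + bias) . output_weights."""
    hidden_values = [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(hidden, bias)
    ]
    return sum(w * h for w, h in zip(output, hidden_values))
```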

axelrod.strategies.ann.
compute_features
(player: axelrod.player.Player, opponent: axelrod.player.Player) → List[int][source]¶ Compute history features for Neural Network:
 Opponent’s first move is C
 Opponent’s first move is D
 Opponent’s second move is C
 Opponent’s second move is D
 Player’s previous move is C
 Player’s previous move is D
 Player’s second previous move is C
 Player’s second previous move is D
 Opponent’s previous move is C
 Opponent’s previous move is D
 Opponent’s second previous move is C
 Opponent’s second previous move is D
 Total opponent cooperations
 Total opponent defections
 Total player cooperations
 Total player defections
 Round number

axelrod.strategies.ann.
split_weights
(weights: List[float], num_features: int, num_hidden: int) → Tuple[List[List[float]], List[float], List[float]][source]¶ Splits the input vector into the NN bias weights and layer parameters.

class
axelrod.strategies.apavlov.
APavlov2006
→ None[source]¶ APavlov attempts to classify its opponent as one of five strategies: Cooperative, ALLD, STFT, PavlovD, or Random. APavlov then responds in a manner intended to achieve mutual cooperation or to defect against uncooperative opponents.
Names:
 Adaptive Pavlov 2006: [Li2007]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Adaptive Pavlov 2006'¶

class
axelrod.strategies.apavlov.
APavlov2011
→ None[source]¶ APavlov attempts to classify its opponent as one of four strategies: Cooperative, ALLD, STFT, or Random. APavlov then responds in a manner intended to achieve mutual cooperation or to defect against uncooperative opponents.
Names:
 Adaptive Pavlov 2011: [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Adaptive Pavlov 2011'¶

class
axelrod.strategies.appeaser.
Appeaser
[source]¶ A player who tries to guess what the opponent wants.
Starts with C, then switches between C and D each time the opponent plays D.
Names:
 Appeaser: Original Name by Jochen Müller

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Appeaser'¶

class
axelrod.strategies.averagecopier.
AverageCopier
[source]¶ The player will cooperate with probability p if the opponent’s cooperation ratio is p. Starts with a random decision.
Names:
 Average Copier: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Average Copier'¶

class
axelrod.strategies.averagecopier.
NiceAverageCopier
[source]¶ Same as Average Copier, but always starts by cooperating.
Names:
 Nice Average Copier: Original name by Owen Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Nice Average Copier'¶
Additional strategies from Axelrod’s first tournament.

class
axelrod.strategies.axelrod_first.
Davis
(rounds_to_cooperate: int = 10) → None[source]¶ Submitted to Axelrod’s first tournament by Morton Davis.
A player starts by cooperating for 10 rounds then plays Grudger, defecting if at any point the opponent has defected.
This strategy came 8th in Axelrod’s original tournament.
Names:
 Davis: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Davis'¶

class
axelrod.strategies.axelrod_first.
Feld
(start_coop_prob: float = 1.0, end_coop_prob: float = 0.5, rounds_of_decay: int = 200) → None[source]¶ Submitted to Axelrod’s first tournament by Scott Feld.
This strategy plays Tit For Tat, always defecting if the opponent defects but, when the opponent cooperates, cooperating with a gradually decreasing probability until it is only 0.5.
This strategy came 11th in Axelrod’s original tournament.
Names:
 Feld: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 200, 'stochastic': True}¶

name
= 'Feld'¶

class
axelrod.strategies.axelrod_first.
Grofman
[source]¶ Submitted to Axelrod’s first tournament by Bernard Grofman.
Cooperates on the first two rounds and returns the opponent’s last action for the next 5 rounds. For the rest of the game Grofman cooperates if both players selected the same action in the previous round, and otherwise cooperates randomly with probability 2/7.
This strategy came 4th in Axelrod’s original tournament.
Names:
 Grofman: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Grofman'¶

class
axelrod.strategies.axelrod_first.
Joss
(p: float = 0.9) → None[source]¶ Submitted to Axelrod’s first tournament by Johann Joss.
Cooperates with probability 0.9 when the opponent cooperates, otherwise emulates TitForTat.
This strategy came 12th in Axelrod’s original tournament.
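The rule above can be sketched as a single-move function (`joss_move` is a hypothetical helper, not the library's interface):

```python
import random

def joss_move(opponent_last: str, p: float = 0.9,
              rng: random.Random = None) -> str:
    """One Joss move: retaliate after an opponent defection, otherwise
    cooperate with probability p."""
    rng = rng or random.Random()
    if opponent_last == "D":
        return "D"  # as Tit For Tat would
    return "C" if rng.random() < p else "D"
```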
Names:
 Joss: [Axelrod1980]
 Hard Joss: [Stewart2012]

name
= 'Joss'¶

class
axelrod.strategies.axelrod_first.
Nydegger
→ None[source]¶ Submitted to Axelrod’s first tournament by Rudy Nydegger.
The program begins with tit for tat for the first three moves, except that if it was the only one to cooperate on the first move and the only one to defect on the second move, it defects on the third move. After the third move, its choice is determined from the 3 preceding outcomes in the following manner.
\[A = 16 a_1 + 4 a_2 + a_3\]where \(a_i\) depends on the outcome of the \(i\)th previous round: if both strategies defect, \(a_i=3\); if only the opponent defects, \(a_i=2\); if only this strategy defects, \(a_i=1\); and if both cooperate, \(a_i=0\).
Finally this strategy defects if and only if:
\[A \in \{1, 6, 7, 17, 22, 23, 26, 29, 30, 31, 33, 38, 39, 45, 49, 54, 55, 58, 61\}\]Thus if all three preceding moves are mutual defection, A = 63 and the rule cooperates. This rule was designed for use in laboratory experiments as a stooge which had a memory and appeared to be trustworthy, potentially cooperative, but not gullible.
This strategy came 3rd in Axelrod’s original tournament.
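A sketch of the look-up after move three, assuming \(a_i=0\) when both players cooperated (implied by \(A=0\) giving cooperation); the function names are illustrative, not the library's:

```python
# A values on which the rule defects, from the description above.
DEFECTING_A = {1, 6, 7, 17, 22, 23, 26, 29, 30, 31, 33, 38, 39, 45,
               49, 54, 55, 58, 61}

def score_outcome(my_move: str, opp_move: str) -> int:
    """Encode one round's outcome: 3 mutual defection, 2 opponent-only,
    1 self-only, 0 mutual cooperation (assumed)."""
    if my_move == "D" and opp_move == "D":
        return 3
    if opp_move == "D":
        return 2
    if my_move == "D":
        return 1
    return 0

def nydegger_move(last_three):
    """last_three: most recent round first, as (my_move, opp_move) pairs."""
    a = [score_outcome(m, o) for m, o in last_three]
    A = 16 * a[0] + 4 * a[1] + a[2]
    return "D" if A in DEFECTING_A else "C"
```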
Names:
 Nydegger: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'Nydegger'¶

class
axelrod.strategies.axelrod_first.
RevisedDowning
(revised: bool = True) → None[source]¶ This strategy attempts to estimate the next move of the opponent by estimating the probability of cooperating given that they defected (\(p(CD)\)) or cooperated on the previous round (\(p(CC)\)). These probabilities are continuously updated during play and the strategy attempts to maximise the long term play. Note that the initial values are \(p(CC)=p(CD)=.5\).
Downing is implemented as RevisedDowning. Apparently in the first tournament the strategy was implemented incorrectly and defected on the first two rounds. This can be controlled by setting revised=True to prevent the initial defections.
This strategy came 10th in Axelrod’s original tournament but would have won if it had been implemented correctly.
Names:
 Revised Downing: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Revised Downing'¶

class
axelrod.strategies.axelrod_first.
Shubik
→ None[source]¶ Submitted to Axelrod’s first tournament by Martin Shubik.
Plays like TitForTat with the following modification. After each retaliation, the number of rounds that Shubik retaliates increases by 1.
This strategy came 5th in Axelrod’s original tournament.
Names:
 Shubik: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Shubik'¶

class
axelrod.strategies.axelrod_first.
SteinAndRapoport
(alpha: float = 0.05) → None[source]¶ This strategy plays a modification of Tit For Tat.
 It cooperates for the first 4 moves.
 It defects on the last 2 moves.
 Every 15 moves it makes use of a chi-squared test to check if the opponent is playing randomly.
This strategy came 6th in Axelrod’s original tournament.
Names:
 SteinAndRapoport: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Stein and Rapoport'¶

original_class
¶ alias of
SteinAndRapoport

strategy
(opponent)¶

class
axelrod.strategies.axelrod_first.
TidemanAndChieruzzi
→ None[source]¶ This strategy begins by playing Tit For Tat and then follows the following rules:
1. Every run of defections played by the opponent increases the number of defections that this strategy retaliates with by 1.
2. The opponent is given a ‘fresh start’ if:
 it is 10 points behind this strategy
 and it has not just started a run of defections
 and it has been at least 20 rounds since the last ‘fresh start’
 and there are more than 10 rounds remaining in the match
 and the total number of defections differs from a 50-50 random sample by at least 3.0 standard deviations.
A ‘fresh start’ is a sequence of two cooperations followed by an assumption that the game has just started (everything is forgotten).
This strategy came 2nd in Axelrod’s original tournament.
Names:
 TidemanAndChieruzzi: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length', 'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Tideman and Chieruzzi'¶

class
axelrod.strategies.axelrod_first.
Tullock
(rounds_to_cooperate: int = 11) → None[source]¶ Submitted to Axelrod’s first tournament by Gordon Tullock.
Cooperates for the first 11 rounds then randomly cooperates 10% less often than the opponent has in previous rounds.
This strategy came 13th in Axelrod’s original tournament.
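After the initial 11 cooperations, the cooperation probability described above can be sketched as follows (`tullock_prob` is a hypothetical helper; clamping at zero is an assumption):

```python
def tullock_prob(opp_cooperations: int, rounds_played: int) -> float:
    """Cooperate 10 percentage points less often than the opponent has,
    never going below zero (assumed)."""
    return max(0.0, opp_cooperations / rounds_played - 0.10)
```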
Names:
 Tullock: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 11, 'stochastic': True}¶

name
= 'Tullock'¶

class
axelrod.strategies.axelrod_first.
UnnamedStrategy
[source]¶ Apparently written by a grad student in political science whose name was withheld, this strategy cooperates with a given probability P. This probability (which has initial value 0.3) is updated every 10 rounds based on whether the opponent seems to be random, very cooperative or very uncooperative. Furthermore, if after round 130 the strategy is losing then P is also adjusted.
Fourteenth Place with 282.2 points is a 77-line program by a graduate student of political science whose dissertation is in game theory. This rule has a probability of cooperating, P, which is initially 30% and is updated every 10 moves. P is adjusted if the other player seems random, very cooperative, or very uncooperative. P is also adjusted after move 130 if the rule has a lower score than the other player. Unfortunately, the complex process of adjustment frequently left the probability of cooperation in the 30% to 70% range, and therefore the rule appeared random to many other players.
Names:
 Unnamed Strategy: [Axelrod1980]
Warning: This strategy is not identical to the original strategy (source unavailable) and was written based on published descriptions.

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 0, 'stochastic': True}¶

name
= 'Unnamed Strategy'¶
Additional strategies from Axelrod’s second tournament.

class
axelrod.strategies.axelrod_second.
Black
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by Paul E Black (K83R) and came in fifteenth in that tournament.
The strategy Cooperates for the first five turns. Then it calculates the number of opponent defects in the last five moves and Cooperates with probability prob_coop[number_defects], where:
prob_coop[number_defects] = 1 - (number_defects^2 - 1) / 25
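Evaluating that formula for 0-5 defections (a sketch; capping at 1 for zero defections is an assumption, since the raw formula gives 1.04 there):

```python
def prob_coop(number_defects: int) -> float:
    """Cooperation probability given the opponent's defections in the
    last five moves, capped at 1 (an assumption for zero defections)."""
    return min(1.0, 1 - (number_defects ** 2 - 1) / 25)
```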
Names:
 Black: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': True}¶

name
= 'Black'¶

class
axelrod.strategies.axelrod_second.
Borufsen
[source]¶ Strategy submitted to Axelrod’s second tournament by Otto Borufsen (K32R), and came in third in that tournament.
This player keeps track of the opponent’s responses to own behavior:
 cd_count counts: Opponent cooperates as response to player defecting.
 cc_count counts: Opponent cooperates as response to player cooperating.
The player has a defect mode and a normal mode. In defect mode, the player will always defect. In normal mode, the player obeys the following ranked rules:
 If in the last three turns, both the player/opponent defected, then cooperate for a single turn.
 If in the last three turns, the player/opponent acted differently from each other and they’re alternating, then change next defect to cooperate. (Doesn’t block third rule.)
 Otherwise, play Tit For Tat.
Start in normal mode, but every 25 turns starting with the 27th turn, re-evaluate the mode. Enter defect mode if any of the following conditions hold:
 Detected random: Opponent cooperated between 7 and 18 times since last mode evaluation (or start) AND less than 70% of opponent cooperation was in response to player’s cooperation, i.e. cc_count / (cc_count+cd_count) < 0.7
 Detected defective: Opponent cooperated fewer than 3 times since last mode evaluation.
When switching to defect mode, defect immediately. The first two rules for normal mode require that last three turns were in normal mode. When starting normal mode from defect mode, defect on first move.
Names:
 Borufsen: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Borufsen'¶

class
axelrod.strategies.axelrod_second.
Cave
[source]¶ Strategy submitted to Axelrod’s second tournament by Rob Cave (K49R), and came in fourth in that tournament.
First look for overly defective or apparently random opponents, and defect if found. That is, any opponent meeting one of:
 turn > 39 and percent defects > 0.39
 turn > 29 and percent defects > 0.65
 turn > 19 and percent defects > 0.79
Otherwise, respond to cooperation with cooperation. Respond to defections with either a defection (if the opponent has defected at least 18 times) or with a random (50/50) choice. [Cooperate on first.]
Names:
 Cave: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Cave'¶

class
axelrod.strategies.axelrod_second.
Champion
[source]¶ Strategy submitted to Axelrod’s second tournament by Danny Champion.
This player cooperates on the first 10 moves and plays Tit for Tat for the next 15 moves. After 25 moves, the program cooperates unless all the following are true: the other player defected on the previous move, the other player cooperated less than 60% of the time, and a random number between 0 and 1 is greater than the other player’s cooperation rate.
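The post-move-25 defection condition can be sketched as a boolean check (`champion_defects` is a hypothetical helper, not the library's interface):

```python
import random

def champion_defects(opp_last: str, opp_coop_rate: float,
                     rng: random.Random = None) -> bool:
    """Defect only if the opponent just defected, has cooperated less
    than 60% of the time, and a uniform draw exceeds their rate."""
    rng = rng or random.Random()
    return (opp_last == "D"
            and opp_coop_rate < 0.60
            and rng.random() > opp_coop_rate)
```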
Names:
 Champion: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Champion'¶

class
axelrod.strategies.axelrod_second.
Colbert
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by William Colbert (K51R) and came in eighteenth in that tournament.
In the first eight turns, this strategy Cooperates on all but the sixth turn, in which it Defects. After that, the strategy responds to an opponent Cooperation with a single Cooperation, and responds to a Defection with a chain of responses: Defect, Defect, Cooperate, Cooperate. During this chain, the strategy ignores the opponent’s moves.
Names:
 Colbert: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 4, 'stochastic': False}¶

name
= 'Colbert'¶

class
axelrod.strategies.axelrod_second.
Eatherley
[source]¶ Strategy submitted to Axelrod’s second tournament by Graham Eatherley.
A player that keeps track of how many times in the game the other player defected. After the other player defects, it defects with a probability equal to the ratio of the other’s total defections to the total moves to that point.
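A sketch of that response rule (`eatherley_move` is a hypothetical helper, not the library's interface):

```python
import random

def eatherley_move(opp_defections: int, total_moves: int,
                   opp_last: str, rng: random.Random = None) -> str:
    """Cooperate after a cooperation; after a defection, defect with
    probability (opponent's total defections) / (total moves so far)."""
    rng = rng or random.Random()
    if opp_last == "C":
        return "C"
    p_defect = opp_defections / total_moves
    return "D" if rng.random() < p_defect else "C"
```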
Names:
 Eatherley: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Eatherley'¶

class
axelrod.strategies.axelrod_second.
Getzler
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by Abraham Getzler (K35R) and came in eleventh in that tournament.
Strategy Defects with probability flack, where flack is calculated as the sum over opponent Defections of 0.5 ^ (turns ago Defection happened).
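That decaying sum can be sketched directly from the opponent's history (an illustrative direct computation; names are assumptions):

```python
from typing import List

def flack(opponent_history: List[str]) -> float:
    """Sum 0.5 ** (turns ago) over past opponent defections; the most
    recent move (last in the list) counts as one turn ago."""
    n = len(opponent_history)
    return sum(0.5 ** (n - i) for i, move in enumerate(opponent_history)
               if move == "D")
```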
Names:
 Getzler: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Getzler'¶

class
axelrod.strategies.axelrod_second.
Gladstein
→ None[source]¶ Submitted to Axelrod’s second tournament by David Gladstein.
This strategy is also known as Tester and is based on the reverse engineering of the Fortran strategies from Axelrod’s second tournament.
This strategy is a TFT variant that defects on the first round in order to test the opponent’s response. If the opponent ever defects, the strategy ‘apologizes’ by cooperating and then plays TFT for the rest of the game. Otherwise, it defects as much as possible subject to the constraint that the ratio of its defections to moves remains under 0.5, not counting the first defection.
Names:
 Gladstein: [Axelrod1980b]
 Tester: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Gladstein'¶

class
axelrod.strategies.axelrod_second.
GraaskampKatzen
[source]¶ Strategy submitted to Axelrod’s second tournament by Jim Graaskamp and Ken Katzen (K60R), and came in sixth in that tournament.
Play Tit-for-Tat at first, and track own score. At select checkpoints, check for a high score. Switch to Defect Mode if:
 On move 11, score < 23
 On move 21, score < 53
 On move 31, score < 83
 On move 41, score < 113
 On move 51, score < 143
 On move 101, score < 293
Once in Defect Mode, defect forever.
Names:
 GraaskampKatzen: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'GraaskampKatzen'¶

class
axelrod.strategies.axelrod_second.
Harrington
[source]¶ Strategy submitted to Axelrod’s second tournament by Paul Harrington (K75R) and came in eighth in that tournament.
This strategy has three modes: Normal, Fairweather, and Defect. These mode names were not present in Harrington’s submission.
In Normal and Fairweather modes, the strategy begins by:
 Update history
 Try to detect random opponent if turn is multiple of 15 and >=30.
 Check if burned flag should be raised.
 Check for Fairweather opponent if turn is 38.
Updating history means to increment the correct cell of the move_history. move_history is a matrix where the columns are the opponent’s previous move and the rows are indexed by the combo of this player’s and the opponent’s moves two turns ago. [The upper-left cell must be all Cooperations, but otherwise order doesn’t matter.] After we enter Defect mode, move_history won’t be used again.
If the turn is a multiple of 15 and >=30, then attempt to detect random. If random is detected, enter Defect mode and defect immediately. If the player was previously in Defect mode, then do not re-enter. The random detection logic is a modified Pearson’s Chi Squared test, with some additional checks. [More details in detect_random docstrings.]
Some of this player’s moves are marked as “generous.” If this player made a generous move two turns ago and the opponent replied with a Defect, then raise the burned flag. This will stop certain generous moves later.
The player mostly plays TitforTat for the first 36 moves, then defects on the 37th move. If the opponent cooperates on the first 36 moves, and defects on the 37th move also, then enter Fairweather mode and cooperate this turn. Entering Fairweather mode is extremely rare, since this can only happen if the opponent cooperates for the first 36 then defects unprovoked on the 37th. (That is, this player’s first 36 moves are also Cooperations, so there’s nothing really to trigger an opponent Defection.)
Next in Normal Mode:
 Check for defect and parity streaks.
 Check if cooperations are scheduled.
 Otherwise,
 If turn < 37, play Tit-for-Tat.
 If turn = 37, defect, mark this move as generous, and schedule two more cooperations**.
 If turn > 37, then if the burned flag is raised, play Tit-for-Tat. Otherwise, play Tit-for-Tat with probability 1 - prob. And with probability prob, defect, schedule two cooperations, mark this move as generous, and increase prob by 5%.
** Scheduling two cooperations means to set the more_coop flag to two. If in Normal mode and no streaks are detected, then the player will cooperate and lower this flag, until it hits zero. It’s possible that the flag can be overwritten. Notably, on the 37th-turn defect this is set to two, but the 38th-turn Fairweather check will set it again.
If the opponent’s last twenty moves were defections, then defect this turn. Then check for a parity streak, by flipping the parity bit (there are two streaks that get tracked which are something like odd and even turns, but this flip bit logic doesn’t get run every turn), then incrementing the parity streak that we’re pointing to. If the parity streak that we’re pointing to is then greater than parity_limit then reset the streak and cooperate immediately. parity_limit is initially set to five, but after it has been hit eight times, it decreases to three. The parity streak that we’re pointing to also gets incremented if in normal mode and we defect but not on turn 38, unless we are defecting as the result of a defect streak. Note that the parity streaks resets but the defect streak doesn’t.
If more_coop >= 1, then we cooperate and lower that flag here, in Normal mode after checking streaks. Still lower this flag if cooperating as the result of a parity streak or in Fairweather mode.
Then use the logic based on turn from above.
In Fairweather mode, after running the code from above, check if the opponent defected last turn. If so, exit Fairweather mode and proceed THIS TURN with Normal mode. Otherwise cooperate.
In Defect mode, update the exit_defect_meter (originally zero) by incrementing if opponent defected last turn and decreasing by three otherwise. If exit_defect_meter is then 11, then set mode to Normal (for future turns), cooperate and schedule two more cooperations. [Note that this move is not marked generous.]
Names:
 Harrington: [Axelrod1980b]

calculate_chi_squared
(turn)[source]¶ Pearson’s Chi Squared statistic = sum[ (E_i - O_i)^2 / E_i ], where O_i are the observed matrix values, and E_i is calculated as the number (of defects) in the row times the number in the column over (total number in the matrix minus 1). Equivalently, for an independent distribution we expect the total number of recorded turns times the portion in that row times the portion in that column.
In this function, the statistic is non-standard in that it excludes summands where E_i <= 1.
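A sketch of that statistic over a generic count matrix, including the non-standard E_i <= 1 exclusion (the function name and the generic matrix shape are assumptions; the library computes this over its 4x2 move_history):

```python
from typing import List

def chi_squared(matrix: List[List[int]]) -> float:
    """Modified Pearson statistic: sum (E - O)^2 / E over cells, with
    E = row_total * column_total / (grand_total - 1), skipping E <= 1."""
    total = sum(sum(row) for row in matrix)
    row_sums = [sum(row) for row in matrix]
    col_sums = [sum(col) for col in zip(*matrix)]
    statistic = 0.0
    for i, row in enumerate(matrix):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / (total - 1)
            if expected > 1:  # non-standard exclusion of small expectations
                statistic += (expected - observed) ** 2 / expected
    return statistic
```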

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

detect_parity_streak
(last_move)[source]¶ Switch which parity_streak we’re pointing to and increment it if the opponent’s last move was a Defection. Otherwise reset the flag. Then return true if and only if the parity_streak is at least parity_limit.
This is similar to detect_streak with alternating streaks, except that these streaks get incremented elsewhere as well.

detect_random
(turn)[source]¶ We check if the top-left cell of the matrix (corresponding to all Cooperations) has over 80% of the turns. In which case, we label the opponent non-random.
Then we check if over 75% or under 25% of the opponent’s turns are Defections. If so, then we label the opponent non-random.
Otherwise we calculate a modified Pearson’s Chi Squared statistic on self.history, and return True (is random) if and only if the statistic is less than or equal to 3.

detect_streak
(last_move)[source]¶ Return true if and only if the opponent’s last twenty moves are defects.

name
= 'Harrington'¶

class
axelrod.strategies.axelrod_second.
Kluepfel
[source]¶ Strategy submitted to Axelrod’s second tournament by Charles Kluepfel (K32R).
This player keeps track of the opponent’s responses to own behavior:
 cd_count counts: Opponent cooperates as response to player defecting.
 dd_count counts: Opponent defects as response to player defecting.
 cc_count counts: Opponent cooperates as response to player cooperating.
 dc_count counts: Opponent defects as response to player cooperating.
After 26 turns, the player then tries to detect a random player. The player decides that the opponent is random if cd_counts >= (cd_counts+dd_counts)/2 - 0.75*sqrt(cd_counts+dd_counts) AND cc_counts >= (dc_counts+cc_counts)/2 - 0.75*sqrt(dc_counts+cc_counts). If the player decides that they are playing against a random player, then they will always defect.
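The random-detection inequality can be sketched as a boolean check (`looks_random` is a hypothetical helper; the argument names follow the counters above):

```python
import math

def looks_random(cc: int, dc: int, cd: int, dd: int) -> bool:
    """True if the opponent's responses look independent of our moves:
    cooperation counts after both our C and our D sit near half of the
    respective totals, within 0.75 standard-deviation-style slack."""
    after_d = cd + dd  # opponent responses to our defections
    after_c = cc + dc  # opponent responses to our cooperations
    return (cd >= after_d / 2 - 0.75 * math.sqrt(after_d)
            and cc >= after_c / 2 - 0.75 * math.sqrt(after_c))
```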
Otherwise respond to recent history using the following set of rules:
 If opponent’s last three choices are the same, then respond in kind.
 If opponent’s last two choices are the same, then respond in kind with probability 90%.
 Otherwise if opponent’s last action was to cooperate, then cooperate with probability 70%.
 Otherwise if opponent’s last action was to defect, then defect with probability 60%.
Names:
 Kluepfel: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Kluepfel'¶

class
axelrod.strategies.axelrod_second.
Leyvraz
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by Fransois Leyvraz (K68R) and came in twelfth in that tournament.
The strategy uses the opponent’s last three moves to decide on an action based on the following ordered rules.
 If opponent Defected last two turns, then Defect with prob 75%.
 If opponent Defected three turns ago, then Cooperate.
 If opponent Defected two turns ago, then Defect.
 If opponent Defected last turn, then Defect with prob 50%.
 Otherwise (all Cooperations), then Cooperate.
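The ordered rules above, sketched as a single function (a hypothetical helper; `opponent_last_three` is ordered oldest first):

```python
import random

def leyvraz_move(opponent_last_three, rng: random.Random = None) -> str:
    """Apply the ordered Leyvraz rules to the opponent's last three moves."""
    rng = rng or random.Random()
    third, second, last = opponent_last_three  # oldest first
    if second == "D" and last == "D":          # defected last two turns
        return "D" if rng.random() < 0.75 else "C"
    if third == "D":                           # defected three turns ago
        return "C"
    if second == "D":                          # defected two turns ago
        return "D"
    if last == "D":                            # defected last turn
        return "D" if rng.random() < 0.5 else "C"
    return "C"                                 # all cooperations
```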
Names:
 Leyvraz: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': True}¶

name
= 'Leyvraz'¶

class
axelrod.strategies.axelrod_second.
Mikkelson
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by Ray Mikkelson (K66R) and came in twentieth in that tournament.
The strategy keeps track of a variable called credit, which determines if the strategy will Cooperate, in the sense that if credit is positive, then the strategy Cooperates. credit is initialized to 7. After the first turn, credit increments if the opponent Cooperated last turn, and decreases by two otherwise. credit is capped above by 8 and below by -7. [credit is assessed as positive or negative, after increasing based on opponent’s last turn.]
If credit is nonpositive within the first ten turns, then the strategy Defects and credit is set to 4. If credit is nonpositive later, then the strategy Defects if and only if (total # opponent Defections) / (turn#) is at least 15%. [Turn # starts at 1.]
Names:
 Mikkelson: [Axelrod1980b]
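A sketch of the credit bookkeeping described above (illustrative; the state dict and function name are our own, and the lower cap is read as -7):

```python
C, D = "C", "D"

def mikkelson_move(state, opp_last, turn, opp_defections):
    """Sketch of Mikkelson's credit logic. state holds 'credit';
    turn is 1-indexed."""
    credit = state["credit"] + (1 if opp_last == C else -2)
    credit = max(-7, min(8, credit))  # cap credit in [-7, 8]
    if credit > 0:
        state["credit"] = credit
        return C
    if turn <= 10:
        state["credit"] = 4           # early non-positive credit: reset to 4
        return D
    state["credit"] = credit
    # later non-positive credit: defect iff defection rate >= 15%
    return D if opp_defections / turn >= 0.15 else C
```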

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Mikkelson'¶

class
axelrod.strategies.axelrod_second.
MoreGrofman
[source]¶ Submitted to Axelrod’s second tournament by Bernard Grofman.
This strategy has 3 phases:
 First it cooperates on the first two rounds
 For rounds 3-7 inclusive, it plays the same as the opponent’s last move
 Thereafter, it applies the following logic, looking at its memory of the last 8* rounds (ignoring the most recent round).
 If its own previous move was C and the opponent has defected less than 3 times in the last 8* rounds, cooperate
 If its own previous move was C and the opponent has defected 3 or more times in the last 8* rounds, defect
 If its own previous move was D and the opponent has defected only once or not at all in the last 8* rounds, cooperate
 If its own previous move was D and the opponent has defected more than once in the last 8* rounds, defect
* The code looks at the first 7 of the last 8 rounds, ignoring the most recent round.
Names:
 Grofman’s strategy: [Axelrod1980b]
 K86R: [Axelrod1980b]
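The third-phase logic can be sketched as follows (illustrative; the function signature is our own, and the window is the first 7 of the last 8 rounds, per the footnote above):

```python
C, D = "C", "D"

def grofman_phase3(my_last, opp_history):
    """Sketch of MoreGrofman's third phase: count the opponent's
    defections in the last 8 rounds, ignoring the most recent one."""
    recent_defections = opp_history[-8:-1].count(D)
    if my_last == C:
        return C if recent_defections < 3 else D
    # previous move was D: forgive only one defection or none
    return C if recent_defections <= 1 else D
```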

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 8, 'stochastic': False}¶

name
= 'MoreGrofman'¶

class
axelrod.strategies.axelrod_second.
MoreTidemanAndChieruzzi
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by T. Nicolaus Tideman and Paula Chieruzzi (K84R) and came in ninth in that tournament.
This strategy Cooperates if this player’s score exceeds the opponent’s score by at least score_to_beat. score_to_beat starts at zero and increases by score_to_beat_inc every time the opponent’s last two moves are a Cooperation and Defection in that order. score_to_beat_inc itself increases by 5 every time the opponent’s last two moves are a Cooperation and Defection in that order.
Additionally, the strategy executes a “fresh start” if the following hold:
 The strategy would Defect by score (difference less than score_to_beat)
 The opponent did not Cooperate and Defect (in order) in the last two turns.
 It’s been at least 10 turns since the last fresh start. Or since the match started if there hasn’t been a fresh start yet.
A “fresh start” entails two Cooperations and resetting scores, score_to_beat and score_to_beat_inc.
Names:
 MoreTidemanAndChieruzzi: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'More Tideman and Chieruzzi'¶

class
axelrod.strategies.axelrod_second.
RichardHufford
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by Richard Hufford (K47R) and came in sixteenth in that tournament.
The strategy tracks opponent “agreements”, that is, whenever the opponent’s previous move is the same as this player’s move two turns ago. If the opponent’s first move is a Defection, this is counted as a disagreement, and otherwise an agreement. From the agreement counts, two measures are calculated:
 proportion_agree: This is the number of agreements (through opponent’s last turn) + 2 divided by the current turn number.
 last_four_num: The number of agreements in the last four turns. If there have been fewer than four previous turns, then this is the number of agreements + (4 - number of past turns).
We then use these measures to decide how to play, using these rules:
 If proportion_agree > 0.9 and last_four_num >= 4, then Cooperate.
 Otherwise if proportion_agree >= 0.625 and last_four_num >= 2, then Tit-for-Tat.
 Otherwise, Defect.
However, if the opponent has Cooperated the last streak_needed turns, then the strategy deviates from the usual strategy, and instead Defects. (We call such deviation an “aberration”.) In the turn immediately after an aberration, the strategy doesn’t override, even if there’s a streak of Cooperations. Two turns after an aberration, the strategy: Restarts the Cooperation streak (never looking before this turn); Cooperates; and changes streak_needed to:
floor(20.0 * num_abb_def / num_abb_coop) + 1
Here num_abb_def is 2 + the number of times that the opponent Defected in the turn after an aberration, and num_abb_coop is 2 + the number of times that the opponent Cooperated in response to an aberration.
Names:
 RichardHufford: [Axelrod1980b]
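The three base decision rules (before the aberration override) can be sketched as follows (illustrative; names are our own):

```python
C, D = "C", "D"

def hufford_decision(proportion_agree, last_four_num, opp_last):
    """Sketch of RichardHufford's base rule; the aberration logic
    described above is not modelled here."""
    if proportion_agree > 0.9 and last_four_num >= 4:
        return C
    if proportion_agree >= 0.625 and last_four_num >= 2:
        return opp_last  # play Tit-for-Tat
    return D
```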

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'RichardHufford'¶

class
axelrod.strategies.axelrod_second.
Tester
→ None[source]¶ Submitted to Axelrod’s second tournament by David Gladstein.
This strategy is a TFT variant that attempts to exploit certain strategies. It defects on the first move. If the opponent ever defects, TESTER ‘apologizes’ by cooperating and then plays TFT for the rest of the game. Otherwise TESTER alternates cooperation and defection.
This strategy came 46th in Axelrod’s second tournament.
Names:
 Tester: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Tester'¶

class
axelrod.strategies.axelrod_second.
Tranquilizer
[source]¶ Submitted to Axelrod’s second tournament by Craig Feathers
Description given in Axelrod’s “More Effective Choice in the Prisoner’s Dilemma” paper: The rule normally cooperates but is ready to defect if the other player defects too often. Thus the rule tends to cooperate for the first dozen or two moves if the other player is cooperating, but then it throws in a defection. If the other player continues to cooperate, then defections become more frequent. But as long as Tranquilizer is maintaining an average payoff of at least 2.25 points per move, it will never defect twice in succession and it will not defect more than one-quarter of the time.
This implementation is based on the reverse engineering of the Fortran strategy K67R from Axelrod’s second tournament. Reverse engineered by: Owen Campbell, Will Guo and Mansour Hakem.
The strategy starts by cooperating and has 3 states.
At the start of the strategy it updates its states:
It counts the number of consecutive defections by the opponent.
If it was in state 2 it moves to state 0 and calculates the following quantities two_turns_after_good_defection_ratio and two_turns_after_good_defection_ratio_count.
Formula for:
two_turns_after_good_defection_ratio:
self.two_turns_after_good_defection_ratio = ( ((self.two_turns_after_good_defection_ratio * self.two_turns_after_good_defection_ratio_count) + (3 - (3 * self.dict[opponent.history[-1]])) + (2 * self.dict[self.history[-1]]) - (self.dict[opponent.history[-1]] * self.dict[self.history[-1]])) / (self.two_turns_after_good_defection_ratio_count + 1) )
two_turns_after_good_defection_ratio_count:
two_turns_after_good_defection_ratio_count = two_turns_after_good_defection_ratio_count + 1
If it was in state 1 it moves to state 2 and calculates the following quantities one_turn_after_good_defection_ratio and one_turn_after_good_defection_ratio_count.
Formula for:
one_turn_after_good_defection_ratio:
self.one_turn_after_good_defection_ratio = ( ((self.one_turn_after_good_defection_ratio * self.one_turn_after_good_defection_ratio_count) + (3 - (3 * self.dict[opponent.history[-1]])) + (2 * self.dict[self.history[-1]]) - (self.dict[opponent.history[-1]] * self.dict[self.history[-1]])) / (self.one_turn_after_good_defection_ratio_count + 1) )
one_turn_after_good_defection_ratio_count:
one_turn_after_good_defection_ratio_count = one_turn_after_good_defection_ratio_count + 1
If after this it is in state 1 or 2 then it cooperates.
If it is in state 0 it will potentially perform 1 of the 2 following stochastic tests:
1. If average score per turn is greater than 2.25 then it calculates a value of probability:
probability = ( (.95 - (((self.one_turn_after_good_defection_ratio) + (self.two_turns_after_good_defection_ratio) - 5) / 15)) + (1 / ((len(self.history) + 1) ** 2)) - (self.dict[opponent.history[-1]] / 4) )
and will cooperate if a random sampled number is less than that value of probability. If it does not cooperate then the strategy moves to state 1 and defects.
2. If average score per turn is greater than 1.75 but less than 2.25 then it calculates a value of probability:
probability = ( (.25 + ((opponent.cooperations + 1) / (len(self.history) + 1))) - (self.opponent_consecutive_defections * .25) + ((current_score[0] - current_score[1]) / 100) + (4 / (len(self.history) + 1)) )
and will cooperate if a random sampled number is less than that value of probability. If not, it defects.
If none of the above holds the player simply plays tit for tat.
Tranquilizer came in 27th place in Axelrod’s second tournament.
Names:
 Tranquilizer: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Tranquilizer'¶

class
axelrod.strategies.axelrod_second.
Weiner
[source]¶ Strategy submitted to Axelrod’s second tournament by Herb Weiner (K41R), and came in seventh in that tournament.
Play Tit-for-Tat with a chance for forgiveness and a defective override.
The chance for forgiveness happens only if forgive_flag is raised (flag discussed below). If raised and the turn number is greater than grudge, then override Tit-for-Tat with Cooperation. grudge is a variable that starts at 0 and increments by 20 with each forgiven Defect (a Defect that is overridden through the forgiveness logic). forgive_flag is lowered whether or not the logic is overridden.
The variable defect_padding increments with each opponent Defect, but resets to zero with each opponent Cooperate (or forgive_flag lowering), so that it roughly counts Defects between Cooperates. Whenever the opponent Cooperates, if defect_padding (before resetting) is odd, then we raise forgive_flag for the next turn.
Finally a defective override is assessed after forgiveness. If five or more of the opponent’s last twelve actions are Defects, then Defect. This will overrule a forgiveness, but doesn’t undo the lowering of forgive_flag. Note that “last twelve actions” doesn’t count the most recent action. (The original code updates the history after checking for the defect override.)
Names:
 Weiner: [Axelrod1980b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Weiner'¶

class
axelrod.strategies.axelrod_second.
White
[source]¶ Strategy submitted to Axelrod’s second tournament by Edward C White (K72R) and came in thirteenth in that tournament.
 If the opponent Cooperated last turn or in the first ten turns, then Cooperate.
 Otherwise Defect if and only if:
 floor(log(turn)) * opponent Defections >= turn
Names:
 White: [Axelrod1980b]
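A sketch of White's rule (illustrative; assumes the natural logarithm, 1-indexed turns, and that "the first ten turns" means turns 1-10):

```python
import math

C, D = "C", "D"

def white_move(turn, opp_last, opp_defections):
    """Sketch of White's rule: cooperate early or after an opponent
    cooperation, otherwise apply the floor(log(turn)) test."""
    if turn <= 10 or opp_last == C:
        return C
    return D if math.floor(math.log(turn)) * opp_defections >= turn else C
```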

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'White'¶

class
axelrod.strategies.axelrod_second.
WmAdams
[source]¶ Strategy submitted to Axelrod’s second tournament by William Adams (K44R), and came in fifth in that tournament.
Count the number of opponent defections after their first move, call it c_defect. Defect if c_defect equals 4, 7, or 9. If c_defect > 9, then defect immediately after the opponent defects with probability = (0.5)^(c_defect - 1). Otherwise cooperate.
Names:
 WmAdams: [Axelrod1980b]
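A sketch of the rule (illustrative; the function name, move encoding, and exponent c_defect - 1 as read above are our own rendering):

```python
import random

C, D = "C", "D"

def wmadams_move(c_defect, opp_last, rng=random.random):
    """Sketch of WmAdams' rule; c_defect counts the opponent's
    defections after their first move."""
    if c_defect in (4, 7, 9):
        return D
    if c_defect > 9 and opp_last == D:
        # defect with probability 0.5 ** (c_defect - 1)
        return D if rng() < 0.5 ** (c_defect - 1) else C
    return C
```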

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'WmAdams'¶

class
axelrod.strategies.axelrod_second.
Yamachi
→ None[source]¶ Strategy submitted to Axelrod’s second tournament by Brian Yamachi (K64R) and came in seventeenth in that tournament.
The strategy keeps track of play history through a variable called count_them_us_them, which is a dict indexed by (X, Y, Z), where X is an opponent’s move and Y and Z are the following moves by this player and the opponent, respectively. Each turn, we look at our opponent’s move two turns ago, call it X, and our move last turn, call it Y. If (X, Y, C) has occurred at least as often as (X, Y, D), then Cooperate; otherwise Defect. [Note that this reflects the likelihood of Cooperations or Defections in the opponent’s previous move; we don’t update count_them_us_them with the previous move until the next turn.]
Starting with the 41st turn, there’s a possibility to override this behavior. If portion_defect is between 45% and 55% (exclusive), then Defect, where portion_defect equals number of opponent defects plus 0.5 divided by the turn number (indexed by 1). When overriding this way, still record count_them_us_them as though the strategy didn’t override.
Names:
 Yamachi: [Axelrod1980b]
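The lookup described above can be sketched as follows (illustrative; counts is the (X, Y, Z)-indexed dict, and ties cooperate):

```python
C, D = "C", "D"

def yamachi_choice(counts, x, y):
    """Sketch of Yamachi's decision: compare how often the opponent
    followed situation (x, y) with a Cooperation versus a Defection."""
    if counts.get((x, y, C), 0) >= counts.get((x, y, D), 0):
        return C
    return D
```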

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Yamachi'¶

class
axelrod.strategies.backstabber.
BackStabber
[source]¶ Forgives the first 3 defections but on the fourth will defect forever. Defects on the last 2 rounds unconditionally.
Names:
 Backstabber: Original name by Thomas Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'BackStabber'¶

original_class
¶ alias of
BackStabber

strategy
(opponent)¶

class
axelrod.strategies.backstabber.
DoubleCrosser
[source]¶ Forgives the first 3 defections but on the fourth will defect forever. Defects on the last 2 rounds unconditionally.
If 8 <= current round <= 180, and the opponent did not defect in the first 7 rounds, the player will only defect after the opponent has defected twice in a row.
Names:
 Double Crosser: Original name by Thomas Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'DoubleCrosser'¶

original_class
¶ alias of
DoubleCrosser

strategy
(opponent)¶

class
axelrod.strategies.better_and_better.
BetterAndBetter
[source]¶ Defects with probability (1000 - current turn) / 1000. Therefore it is less and less likely to defect as the round goes on.
Names:
 Better and Better: [Prison1998]
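The rule is a one-liner; a sketch (illustrative, with 1-indexed turns):

```python
import random

C, D = "C", "D"

def better_and_better_move(turn, rng=random.random):
    """Sketch of BetterAndBetter: defect with probability
    (1000 - turn) / 1000, so defection fades as turns pass."""
    return D if rng() < (1000 - turn) / 1000 else C
```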

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Better and Better'¶

class
axelrod.strategies.bush_mosteller.
BushMosteller
(c_prob: float = 0.5, d_prob: float = 0.5, aspiration_level_divider: float = 3.0, learning_rate: float = 0.5) → None[source]¶ A player based on the Bush-Mosteller reinforcement learning algorithm; it decides what to play depending only on its own previous payoffs.
The probability of playing C or D is updated using a stimulus, which represents a win or a loss of value based on the payoff of its previous play. The more a play is rewarded over the rounds, the more the player is tempted to use it.
Names:
 Bush Mosteller: [Luis2008]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Bush Mosteller'¶

class
axelrod.strategies.calculator.
Calculator
→ None[source]¶ Plays like (Hard) Joss for the first 20 rounds. If periodic behavior is detected, defect forever. Otherwise play TFT.
Names:
 Calculator: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Calculator'¶

class
axelrod.strategies.cooperator.
Cooperator
[source]¶ A player who only ever cooperates.
Names:
 Cooperator: [Axelrod1984]
 ALLC: [Press2012]
 Always cooperate: [Mittal2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 0, 'stochastic': False}¶

name
= 'Cooperator'¶

class
axelrod.strategies.cooperator.
TrickyCooperator
[source]¶ A cooperator that is trying to be tricky.
Names:
 Tricky Cooperator: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Tricky Cooperator'¶

class
axelrod.strategies.cycler.
AntiCycler
→ None[source]¶ A player that follows a sequence of plays that contains no cycles: CDD CD CCD CCCD CCCCD …
Names:
 Anti Cycler: Original name by Marc Harper
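The acyclic sequence as stated can be generated like this (illustrative sketch, following the sequence exactly as written above):

```python
def anticycler_moves(n):
    """Sketch of AntiCycler's sequence: 'CDD', then blocks of k
    cooperations followed by one defection, for k = 1, 2, 3, ..."""
    moves = list("CDD")
    k = 1
    while len(moves) < n:
        moves += ["C"] * k + ["D"]
        k += 1
    return "".join(moves[:n])
```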

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'AntiCycler'¶

class
axelrod.strategies.cycler.
Cycler
(cycle: str = 'CCD') → None[source]¶ A player that repeats a given sequence indefinitely.
Names:
 Cycler: Original name by Marc Harper
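Repeating a fixed sequence is a direct fit for itertools.cycle; a sketch (illustrative, not the library's implementation):

```python
from itertools import cycle

def cycler_moves(sequence, n):
    """Sketch of Cycler: repeat `sequence` indefinitely and return
    the first n moves as a string."""
    gen = cycle(sequence)
    return "".join(next(gen) for _ in range(n))
```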

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Cycler'¶

class
axelrod.strategies.cycler.
CyclerCCCCCD
→ None[source]¶ Cycles C, C, C, C, C, D
Names:
 Cycler CCCCCD: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'Cycler CCCCCD'¶

class
axelrod.strategies.cycler.
CyclerCCCD
→ None[source]¶ Cycles C, C, C, D
Names:
 Cycler CCCD: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'Cycler CCCD'¶

class
axelrod.strategies.cycler.
CyclerCCCDCD
→ None[source]¶ Cycles C, C, C, D, C, D
Names:
 Cycler CCCDCD: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'Cycler CCCDCD'¶

class
axelrod.strategies.cycler.
CyclerCCD
→ None[source]¶ Cycles C, C, D
Names:
 Cycler CCD: Original name by Marc Harper
 Periodic player CCD: [Mittal2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Cycler CCD'¶

class
axelrod.strategies.cycler.
CyclerDC
→ None[source]¶ Cycles D, C
Names:
 Cycler DC: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Cycler DC'¶

class
axelrod.strategies.cycler.
CyclerDDC
→ None[source]¶ Cycles D, D, C
Names:
 Cycler DDC: Original name by Marc Harper
 Periodic player DDC: [Mittal2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Cycler DDC'¶
The player class in this module does not obey the standard rules of the IPD (as indicated by its classifier). We do not recommend putting a lot of time into optimising it.

class
axelrod.strategies.darwin.
Darwin
→ None[source]¶ A strategy which accumulates a record (the ‘genome’) of what the most favourable response in the previous round should have been, and naively assumes that this will remain the correct response at the same round of future trials.
This ‘genome’ is preserved between opponents, rounds and repetitions of the tournament. It becomes a characteristic of the type and so a single version of this is shared by all instances for each loading of the class.
As this results in information being preserved between tournaments, this is classified as a cheating strategy!
If no record yet exists, the opponent’s response from the previous round is returned.
Names:
 Darwin: Original name by Paul Slavin

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': True, 'memory_depth': inf, 'stochastic': False}¶

static
foil_strategy_inspection
() → axelrod.action.Action[source]¶ Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

genome
= [C]¶

name
= 'Darwin'¶

strategy
(opponent: axelrod.player.Player) → axelrod.action.Action[source]¶ This is a placeholder strategy.

valid_callers
= ['play']¶

class
axelrod.strategies.dbs.
DBS
(discount_factor=0.75, promotion_threshold=3, violation_threshold=4, reject_threshold=3, tree_depth=5)[source]¶ A strategy that learns the opponent’s strategy and uses symbolic noise detection for detecting whether anomalies in player’s behavior are deliberate or accidental. From the learned opponent’s strategy, a tree search is used to choose the best move.
Default values for the parameters are the suggested values in the article. When noise increases you can try reducing violation_threshold and reject_threshold.
Names
 Desired Belief Strategy: [Au2006]

classifier
= {'inspects_source': False, 'long_run_time': True, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

compute_prob_rule
(outcome, alpha=1)[source]¶ Uses the game history to compute the probability of the opponent playing C in the given outcome situation (example: outcome = (C, C)). When alpha = 1, the result is approximately equal to the frequency with which the opponent plays C after that outcome. alpha is a discount factor that gives more weight to recent events than earlier ones.
Parameters
outcome: tuple of two actions.Action alpha: int, optional. Discount factor. Default is 1.

name
= 'DBS'¶

should_demote
(r_minus, violation_threshold=4)[source]¶ Checks if the number of successive violations of a deterministic rule (in the opponent’s behavior) exceeds the userdefined violation_threshold.

should_promote
(r_plus, promotion_threshold=3)[source]¶ This function determines if the move r_plus is a deterministic behavior of the opponent, and then returns True, or if r_plus is due to a random behavior (or noise) which would require a probabilistic rule, in which case it returns False.
To do so it looks at the game history: if the opponent played the same move each of the last k times it was in the same situation as in r_plus, then r_plus is considered a deterministic rule (where k is the user-defined promotion_threshold).
Parameters
 r_plus: tuple of (tuple of actions.Action, actions.Action)
 example: ((C, C), D) r_plus represents one outcome of the history, and the following move played by the opponent.
 promotion_threshold: int, optional
 Number of successive observations needed to promote an opponent behavior as a deterministic rule. Default is 3.

class
axelrod.strategies.dbs.
DeterministicNode
(action1, action2, depth)[source]¶ Nodes (C, C), (C, D), (D, C), or (D, D) with deterministic choice for siblings.

class
axelrod.strategies.dbs.
Node
[source]¶ Nodes used to build a tree for the treesearch procedure. The tree has Deterministic and Stochastic nodes, as the opponent’s strategy is learned as a probability distribution.

class
axelrod.strategies.dbs.
StochasticNode
(own_action, pC, depth)[source]¶ A node that has a probability pC of reaching each sibling. A StochasticNode can be written (C, X) or (D, X), with X = C with probability pC, else X = D.

axelrod.strategies.dbs.
create_policy
(pCC, pCD, pDC, pDD)[source]¶ Creates a dict that represents a Policy. As defined in the reference, a Policy is a set of (prev_move, p) where p is the probability to cooperate after prev_move, where prev_move can be (C, C), (C, D), (D, C) or (D, D).
Parameters
 pCC, pCD, pDC, pDD : float
 Must be between 0 and 1.
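Following the description above, the returned policy is simply a dict keyed by the four previous joint outcomes (a sketch consistent with the stated signature):

```python
C, D = "C", "D"

def create_policy(pCC, pCD, pDC, pDD):
    """Map each previous joint outcome (our move, opponent's move)
    to the probability of cooperating after it."""
    return {(C, C): pCC, (C, D): pCD, (D, C): pDC, (D, D): pDD}
```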

axelrod.strategies.dbs.
minimax_tree_search
(begin_node, policy, max_depth)[source]¶ Tree search function (minimax search procedure) for the tree (built by recursion) corresponding to the opponent’s policy, and solves it. Returns a tuple of two floats that are the utility of playing C, and the utility of playing D.

axelrod.strategies.dbs.
move_gen
(outcome, policy, depth_search_tree=5)[source]¶ Returns the best move considering opponent’s policy and last move, using treesearch procedure.

class
axelrod.strategies.defector.
Defector
[source]¶ A player who only ever defects.
Names:
 Defector: [Axelrod1984]
 ALLD: [Press2012]
 Always defect: [Mittal2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 0, 'stochastic': False}¶

name
= 'Defector'¶

class
axelrod.strategies.defector.
TrickyDefector
[source]¶ A defector that is trying to be tricky.
Names:
 Tricky Defector: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Tricky Defector'¶

class
axelrod.strategies.doubler.
Doubler
[source]¶ Cooperates except when the opponent has defected and the opponent’s cooperation count is less than twice their defection count.
Names:
 Doubler: [Prison1998]
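The condition is simple enough to state in one function (illustrative sketch; names are our own):

```python
C, D = "C", "D"

def doubler_move(opp_last, opp_cooperations, opp_defections):
    """Sketch of Doubler: defect only when the opponent just
    defected and has cooperated less than twice as often as
    they have defected."""
    if opp_last == D and opp_cooperations < 2 * opp_defections:
        return D
    return C
```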

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Doubler'¶

class
axelrod.strategies.finite_state_machines.
EvolvedFSM16
→ None[source]¶ A 16 state FSM player trained with an evolutionary algorithm.
Names:
 Evolved FSM 16: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 16, 'stochastic': False}¶

name
= 'Evolved FSM 16'¶

class
axelrod.strategies.finite_state_machines.
EvolvedFSM16Noise05
→ None[source]¶ A 16 state FSM player trained with an evolutionary algorithm with noisy matches (noise=0.05).
Names:
 Evolved FSM 16 Noise 05: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 16, 'stochastic': False}¶

name
= 'Evolved FSM 16 Noise 05'¶

class
axelrod.strategies.finite_state_machines.
EvolvedFSM4
→ None[source]¶ A 4 state FSM player trained with an evolutionary algorithm.
Names:
 Evolved FSM 4: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 4, 'stochastic': False}¶

name
= 'Evolved FSM 4'¶

class
axelrod.strategies.finite_state_machines.
FSMPlayer
(transitions: tuple = ((1, C, 1, C), (1, D, 1, D)), initial_state: int = 1, initial_action: axelrod.action.Action = C) → None[source]¶ Abstract base class for finite state machine players.

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'FSM Player'¶


class
axelrod.strategies.finite_state_machines.
Fortress3
→ None[source]¶ Finite state machine player specified in http://DOI.org/10.1109/CEC.2006.1688322.
Note that the description in http://www.grahamkendall.com/papers/lhk2011.pdf is not correct.
Names:
 Fortress 3: [Ashlock2006b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'Fortress3'¶

class
axelrod.strategies.finite_state_machines.
Fortress4
→ None[source]¶ Finite state machine player specified in http://DOI.org/10.1109/CEC.2006.1688322.
Note that the description in http://www.grahamkendall.com/papers/lhk2011.pdf is not correct.
Names:
 Fortress 4: [Ashlock2006b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 4, 'stochastic': False}¶

name
= 'Fortress4'¶

class
axelrod.strategies.finite_state_machines.
Predator
→ None[source]¶ Finite state machine player specified in http://DOI.org/10.1109/CEC.2006.1688322.
Names:
 Predator: [Ashlock2006b]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 9, 'stochastic': False}¶

name
= 'Predator'¶

class
axelrod.strategies.finite_state_machines.
Pun1
→ None[source]¶ FSM player described in [Ashlock2006].
Names:
 Pun1: [Ashlock2006]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Pun1'¶

class
axelrod.strategies.finite_state_machines.
Raider
→ None[source]¶ FSM player described in http://DOI.org/10.1109/FOCI.2014.7007818.
Names
 Raider: [Ashlock2014]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'Raider'¶

class
axelrod.strategies.finite_state_machines.
Ripoff
→ None[source]¶ FSM player described in http://DOI.org/10.1109/TEVC.2008.920675.
Names
 Ripoff: [Ashlock2008]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Ripoff'¶

class
axelrod.strategies.finite_state_machines.
SimpleFSM
(transitions: tuple, initial_state: int) → None[source]¶ Simple implementation of a finite state machine that transitions between states based on the last round of play.
https://en.wikipedia.org/wiki/Finite-state_machine
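A single transition step can be sketched as follows (illustrative; it mirrors the (state, opponent_action, next_state, own_action) transition tuples shown in the FSMPlayer signature above):

```python
C, D = "C", "D"

def fsm_move(transitions, state, opp_action):
    """Sketch of one FSM step: look up (state, opponent_action) and
    return the next state together with this player's action."""
    table = {(s, a): (ns, out) for s, a, ns, out in transitions}
    next_state, own_action = table[(state, opp_action)]
    return next_state, own_action
```

For example, the default Tit-for-Tat transitions ((1, C, 1, C), (1, D, 1, D)) always stay in state 1 and echo the opponent's move.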

move
(opponent_action: axelrod.action.Action) → axelrod.action.Action[source]¶ Computes the response move and changes state.

state
¶

state_transitions
¶


class
axelrod.strategies.finite_state_machines.
SolutionB1
→ None[source]¶ FSM player described in http://DOI.org/10.1109/TCIAIG.2014.2326012.
Names
 Solution B1: [Ashlock2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'SolutionB1'¶

class
axelrod.strategies.finite_state_machines.
SolutionB5
→ None[source]¶ FSM player described in http://DOI.org/10.1109/TCIAIG.2014.2326012.
Names
 Solution B5: [Ashlock2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'SolutionB5'¶

class
axelrod.strategies.finite_state_machines.
TF1
→ None[source]¶ A FSM player trained to maximize Moran fixation probabilities.
Names:
 TF1: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'TF1'¶

class
axelrod.strategies.finite_state_machines.
TF2
→ None[source]¶ A FSM player trained to maximize Moran fixation probabilities.
Names:
 TF2: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'TF2'¶

class
axelrod.strategies.finite_state_machines.
TF3
→ None[source]¶ A FSM player trained to maximize Moran fixation probabilities.
Names:
 TF3: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'TF3'¶

class
axelrod.strategies.finite_state_machines.
Thumper
→ None[source]¶ FSM player described in http://DOI.org/10.1109/TEVC.2008.920675.
Names
 Thumper: [Ashlock2008]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Thumper'¶

class
axelrod.strategies.forgiver.
Forgiver
[source]¶ A player starts by cooperating, but will defect if at any point the opponent has defected more than 10 percent of the time.
Names:
 Forgiver: Original name by Thomas Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Forgiver'¶
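The 10 percent rule above can be sketched as follows; the helper name and the strict ">" comparison at exactly 10 percent are assumptions, not the library's exact code:

```python
# Hedged sketch of Forgiver's rule: defect once the opponent's defection
# rate exceeds 10 percent of the rounds played so far.
def forgiver_move(opponent_history):
    if not opponent_history:
        return "C"  # start by cooperating
    defection_rate = opponent_history.count("D") / len(opponent_history)
    return "D" if defection_rate > 0.10 else "C"

print(forgiver_move(["C"] * 9 + ["D"]))      # exactly 10% -> still cooperates
print(forgiver_move(["C"] * 8 + ["D"] * 2))  # 20% -> defects
```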

class
axelrod.strategies.forgiver.
ForgivingTitForTat
[source]¶ A player starts by cooperating, but will defect if at any point the opponent has defected more than 10 percent of the time and their most recent decision was to defect.
Names:
 Forgiving Tit For Tat: Original name by Thomas Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Forgiving Tit For Tat'¶
Stochastic variants of lookup table based strategies, trained with particle swarm algorithms.
 For the original see:
 https://gist.github.com/GDKO/60c3d0fd423598f3c4e4

class
axelrod.strategies.gambler.
Gambler
(lookup_dict: dict = None, initial_actions: tuple = None, pattern: Any = None, parameters: axelrod.strategies.lookerup.Plays = None) → None[source]¶ A stochastic version of LookerUp which will select randomly an action in some cases.
Names:
 Gambler: Original name by Georgios Koutsovoulos

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Gambler'¶

class
axelrod.strategies.gambler.
PSOGambler1_1_1
→ None[source]¶ A 1x1x1 PSOGambler trained with pyswarm.
Names:
 PSO Gambler 1_1_1: Original name by Marc Harper

name
= 'PSO Gambler 1_1_1'¶

class
axelrod.strategies.gambler.
PSOGambler2_2_2
→ None[source]¶ A 2x2x2 PSOGambler trained with a particle swarm algorithm (implemented in pyswarm). Original version by Georgios Koutsovoulos.
Names:
 PSO Gambler 2_2_2: Original name by Marc Harper

name
= 'PSO Gambler 2_2_2'¶

class
axelrod.strategies.gambler.
PSOGambler2_2_2_Noise05
→ None[source]¶ A 2x2x2 PSOGambler trained with pyswarm with noise=0.05.
Names:
 PSO Gambler 2_2_2 Noise 05: Original name by Marc Harper

name
= 'PSO Gambler 2_2_2 Noise 05'¶

class
axelrod.strategies.gambler.
PSOGamblerMem1
→ None[source]¶ A 1x1x0 PSOGambler trained with pyswarm. This is the ‘optimal’ memory one strategy trained against the set of short run time strategies in the Axelrod library.
Names:
 PSO Gambler Mem1: Original name by Marc Harper

name
= 'PSO Gambler Mem1'¶

class
axelrod.strategies.gambler.
ZDMem2
→ None[source]¶ A memory two generalization of a zero determinant player.
Names:
 ZDMem2: Original name by Marc Harper
 Unnamed: [LiS2014]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': True}¶

name
= 'ZDMem2'¶
The player classes in this module do not obey standard rules of the IPD (as indicated by their classifier). We do not recommend putting a lot of time into optimising them.

class
axelrod.strategies.geller.
Geller
[source]¶ Observes what the player will do in the next round and adjusts.
If unable to do this: will play randomly.
This code is inspired by Matthew Williams’ talk “Cheating at rock-paper-scissors — metaprogramming in Python” given at Django Weekend Cardiff in February 2014.
His code is here: https://github.com/mattjw/rps_metaprogramming and there’s some more info here: http://www.mattjw.net/2014/02/rps-metaprogramming/
This code is way simpler than Matt’s, as in this exercise we already have access to the opponent instance, so don’t need to go hunting for it in the stack. Instead we can just call it to see what it’s going to play, and return a result based on that.
This is almost certainly cheating, and more than likely against the spirit of the ‘competition’ :)
Names:
 Geller: Original name by Martin Chorley (@martinjc)

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

static
foil_strategy_inspection
() → axelrod.action.Action[source]¶ Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name
= 'Geller'¶

class
axelrod.strategies.geller.
GellerCooperator
[source]¶ Observes what the player will do (like
Geller
) but if unable to will cooperate.
Names:
 Geller Cooperator: Original name by Karol Langner

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

static
foil_strategy_inspection
() → axelrod.action.Action[source]¶ Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name
= 'Geller Cooperator'¶

class
axelrod.strategies.geller.
GellerDefector
[source]¶ Observes what the player will do (like
Geller
) but if unable to will defect.
Names:
 Geller Defector: Original name by Karol Langner

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

static
foil_strategy_inspection
() → axelrod.action.Action[source]¶ Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name
= 'Geller Defector'¶

class
axelrod.strategies.gobymajority.
GoByMajority
(memory_depth: Union[int, float] = inf, soft: bool = True) → None[source]¶ Submitted to Axelrod’s second tournament by Gail Grisell. It came 23rd and was written in 10 lines of BASIC.
A player examines the history of the opponent: if the opponent has more defections than cooperations then the player defects.
In case of equal number of defections and cooperations this player will Cooperate. Passing the soft=False keyword argument when initialising will create a HardGoByMajority which Defects in case of equality.
An optional memory attribute will limit the number of turns remembered (by default this is 0)
Names:
 Go By Majority: [Axelrod1984]
 Grisell: [Axelrod1980b]
 Soft Majority: [Mittal2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Go By Majority'¶

strategy
(opponent: axelrod.player.Player) → axelrod.action.Action[source]¶ This is affected by the history of the opponent.
As long as the opponent cooperates at least as often as they defect then the player will cooperate. If at any point the opponent has more defections than cooperations in memory the player defects.
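The majority rule with its soft/hard tie-break can be sketched as follows; the function and parameter names are illustrative assumptions:

```python
# Hedged sketch of GoByMajority: defect when the opponent's remembered
# defections outnumber their cooperations; soft cooperates on ties,
# hard defects on ties.
def majority_move(opponent_history, soft=True, memory_depth=None):
    if memory_depth is not None:
        opponent_history = opponent_history[-memory_depth:]
    defections = opponent_history.count("D")
    cooperations = opponent_history.count("C")
    if defections == cooperations:
        return "C" if soft else "D"
    return "D" if defections > cooperations else "C"

print(majority_move(["C", "D"]))              # tie -> C (soft)
print(majority_move(["C", "D"], soft=False))  # tie -> D (hard)
```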

class
axelrod.strategies.gobymajority.
GoByMajority10
→ None[source]¶ GoByMajority player with a memory of 10.
Names:
 Go By Majority 10: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Go By Majority 10'¶

class
axelrod.strategies.gobymajority.
GoByMajority20
→ None[source]¶ GoByMajority player with a memory of 20.
Names:
 Go By Majority 20: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 20, 'stochastic': False}¶

name
= 'Go By Majority 20'¶

class
axelrod.strategies.gobymajority.
GoByMajority40
→ None[source]¶ GoByMajority player with a memory of 40.
Names:
 Go By Majority 40: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 40, 'stochastic': False}¶

name
= 'Go By Majority 40'¶

class
axelrod.strategies.gobymajority.
GoByMajority5
→ None[source]¶ GoByMajority player with a memory of 5.
Names:
 Go By Majority 5: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'Go By Majority 5'¶

class
axelrod.strategies.gobymajority.
HardGoByMajority
(memory_depth: Union[int, float] = inf) → None[source]¶ A player examines the history of the opponent: if the opponent has more defections than cooperations then the player defects. In case of equal number of defections and cooperations this player will Defect.
An optional memory attribute will limit the number of turns remembered (by default this is 0)
Names:
 Hard Majority: [Mittal2009]

name
= 'Hard Go By Majority'¶

class
axelrod.strategies.gobymajority.
HardGoByMajority10
→ None[source]¶ HardGoByMajority player with a memory of 10.
Names:
 Hard Go By Majority 10: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Hard Go By Majority 10'¶

class
axelrod.strategies.gobymajority.
HardGoByMajority20
→ None[source]¶ HardGoByMajority player with a memory of 20.
Names:
 Hard Go By Majority 20: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 20, 'stochastic': False}¶

name
= 'Hard Go By Majority 20'¶

class
axelrod.strategies.gobymajority.
HardGoByMajority40
→ None[source]¶ HardGoByMajority player with a memory of 40.
Names:
 Hard Go By Majority 40: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 40, 'stochastic': False}¶

name
= 'Hard Go By Majority 40'¶

class
axelrod.strategies.gobymajority.
HardGoByMajority5
→ None[source]¶ HardGoByMajority player with a memory of 5.
Names:
 Hard Go By Majority 5: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'Hard Go By Majority 5'¶

class
axelrod.strategies.gradualkiller.
GradualKiller
[source]¶ It begins by defecting in the first five moves, then cooperates two times. It then defects all the time if the opponent has defected in moves 6 and 7, else cooperates all the time. Initially designed to stop Gradual from defeating TitForTat in a 3 Player tournament.
Names
 Gradual Killer: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Gradual Killer'¶

original_class
¶ alias of
GradualKiller

strategy
(opponent)¶

class
axelrod.strategies.grudger.
Aggravater
[source]¶ Grudger, except that it defects on the first 3 turns
Names
 Aggravater: Original name by Thomas Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Aggravater'¶

class
axelrod.strategies.grudger.
EasyGo
[source]¶ A player starts by defecting, but will cooperate if at any point the opponent has defected.
Names:
 Easy Go: [Prison1998]
 Reverse Grudger (RGRIM): [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'EasyGo'¶

class
axelrod.strategies.grudger.
ForgetfulGrudger
→ None[source]¶ A player starts by cooperating, but will defect if at any point the opponent has defected; it forgets after mem_length matches.
Names:
 Forgetful Grudger: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Forgetful Grudger'¶

class
axelrod.strategies.grudger.
GeneralSoftGrudger
(n: int = 1, d: int = 4, c: int = 2) → None[source]¶ A generalization of the SoftGrudger strategy. SoftGrudger punishes by playing: D, D, D, D, C, C. after a defection by the opponent. GeneralSoftGrudger only punishes after its opponent defects a specified amount of times consecutively. The punishment is in the form of a series of defections followed by a ‘penance’ of a series of consecutive cooperations.
Names:
 General Soft Grudger: Original Name by J. Taylor Smith

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'General Soft Grudger'¶

class
axelrod.strategies.grudger.
Grudger
[source]¶ A player starts by cooperating, but will defect if at any point the opponent has defected.
This strategy came 7th in Axelrod’s original tournament.
Names:
 Friedman’s strategy: [Axelrod1980]
 Grudger: [Li2011]
 Grim: [Berg2015]
 Grim Trigger: [Banks1990]
 Spite: [Beaufils1997]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Grudger'¶

class
axelrod.strategies.grudger.
GrudgerAlternator
[source]¶ A player starts by cooperating until the opponent’s first defection, then alternates D and C.
Names:
 c_then_per_dc: [Prison1998]
 Grudger Alternator: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'GrudgerAlternator'¶

class
axelrod.strategies.grudger.
OppositeGrudger
[source]¶ A player starts by defecting, but will cooperate if at any point the opponent has cooperated.
Names:
 Opposite Grudger: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Opposite Grudger'¶

class
axelrod.strategies.grudger.
SoftGrudger
→ None[source]¶ A modification of the Grudger strategy. Instead of punishing by always defecting: punishes by playing: D, D, D, D, C, C. (Will continue to cooperate afterwards).
Names:
 Soft Grudger (SGRIM): [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 6, 'stochastic': False}¶

name
= 'Soft Grudger'¶

class
axelrod.strategies.grumpy.
Grumpy
(starting_state: str = 'Nice', grumpy_threshold: int = 10, nice_threshold: int = 10) → None[source]¶ A player that defects after a certain level of grumpiness. Grumpiness increases when the opponent defects and decreases when the opponent cooperates.
Names:
 Grumpy: Original name by Jason Young

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Grumpy'¶

strategy
(opponent: axelrod.player.Player) → axelrod.action.Action[source]¶ A player that gets grumpier the more the opposition defects, and nicer the more they cooperate.
Starts off Nice, but becomes grumpy once the grumpiness threshold is hit. Won’t become nice once that grumpy threshold is hit, but must reach a much lower threshold before it becomes nice again.
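The hysteresis described above can be sketched as a small class; the attribute names, the signs of the thresholds, and the strict comparisons are assumptions for illustration, not the library's implementation:

```python
# Hedged sketch of Grumpy's mood hysteresis: grumpiness rises on opponent
# defections and falls on cooperations; the state flips Nice -> Grumpy past
# one threshold and only flips back below a lower one.
class GrumpyMood:
    def __init__(self, grumpy_threshold=10, nice_threshold=-10):
        self.grumpiness = 0
        self.state = "Nice"
        self.grumpy_threshold = grumpy_threshold
        self.nice_threshold = nice_threshold

    def move(self, opponent_last):
        self.grumpiness += 1 if opponent_last == "D" else -1
        if self.state == "Nice" and self.grumpiness > self.grumpy_threshold:
            self.state = "Grumpy"
        elif self.state == "Grumpy" and self.grumpiness < self.nice_threshold:
            self.state = "Nice"
        return "D" if self.state == "Grumpy" else "C"

mood = GrumpyMood(grumpy_threshold=2, nice_threshold=-2)
moves = [mood.move("D") for _ in range(3)] + [mood.move("C") for _ in range(6)]
print(moves)  # ['C', 'C', 'D', 'D', 'D', 'D', 'D', 'D', 'C']
```

Note the asymmetry: after turning Grumpy, the player keeps defecting through several cooperations until grumpiness drops below the much lower nice threshold.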

class
axelrod.strategies.handshake.
Handshake
(initial_plays: List[axelrod.action.Action] = None) → None[source]¶ Starts with C, D. If the opponent plays the same way, cooperate forever, else defect forever.
Names:
 Handshake: [Robson1990]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Handshake'¶

class
axelrod.strategies.hmm.
EvolvedHMM5
→ None[source]¶ An HMM-based player with five hidden states trained with an evolutionary algorithm.
Names:
 Evolved HMM 5: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': True}¶

name
= 'Evolved HMM 5'¶

class
axelrod.strategies.hmm.
HMMPlayer
(transitions_C=None, transitions_D=None, emission_probabilities=None, initial_state=0, initial_action=C) → None[source]¶ Abstract base class for Hidden Markov Model players.
Names
 HMM Player: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'HMM Player'¶

class
axelrod.strategies.hmm.
SimpleHMM
(transitions_C, transitions_D, emission_probabilities, initial_state) → None[source]¶ Implementation of a basic Hidden Markov Model. We assume that the transition matrix is conditioned on the opponent’s last action, so there are two transition matrices. Emission distributions are stored as Bernoulli probabilities for each state. This is essentially a stochastic FSM.

axelrod.strategies.hmm.
is_stochastic_matrix
(m, ep=1e-08) → bool[source]¶ Checks that the matrix m (a list of lists) is a stochastic matrix.
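A row-stochastic check matching that signature can be sketched as follows; the exact tolerance handling here is an assumption, and the helper is named differently to avoid implying it is the library's code:

```python
# Hedged sketch: a matrix is row-stochastic if every entry lies in [0, 1]
# and every row sums to 1, both within tolerance ep.
def is_row_stochastic(m, ep=1e-08):
    for row in m:
        if any(p < -ep or p > 1 + ep for p in row):
            return False
        if abs(sum(row) - 1.0) > ep:
            return False
    return True

print(is_row_stochastic([[0.5, 0.5], [0.1, 0.9]]))  # True
print(is_row_stochastic([[0.6, 0.5], [0.1, 0.9]]))  # False (row sums to 1.1)
```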

class
axelrod.strategies.hunter.
AlternatorHunter
→ None[source]¶ A player who hunts for alternators.
Names:
 Alternator Hunter: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Alternator Hunter'¶

class
axelrod.strategies.hunter.
CooperatorHunter
[source]¶ A player who hunts for cooperators.
Names:
 Cooperator Hunter: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Cooperator Hunter'¶

class
axelrod.strategies.hunter.
CycleHunter
→ None[source]¶ Hunts strategies that play cyclically, like any of the Cyclers, Alternator, etc.
Names:
 Cycle Hunter: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Cycle Hunter'¶

class
axelrod.strategies.hunter.
DefectorHunter
[source]¶ A player who hunts for defectors.
Names:
 Defector Hunter: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Defector Hunter'¶

class
axelrod.strategies.hunter.
EventualCycleHunter
→ None[source]¶ Hunts strategies that eventually play cyclically.
Names:
 Eventual Cycle Hunter: Original name by Marc Harper

name
= 'Eventual Cycle Hunter'¶

class
axelrod.strategies.hunter.
MathConstantHunter
[source]¶ A player who hunts for mathematical constant players.
Names:
 Math Constant Hunter: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Math Constant Hunter'¶

strategy
(opponent: axelrod.player.Player) → axelrod.action.Action[source]¶ Check whether the number of cooperations in the first and second halves of the history are close. The variance of the uniform distribution (1/4) is a reasonable delta but use something lower for certainty and avoiding false positives. This approach will also detect a lot of random players.


class
axelrod.strategies.hunter.
RandomHunter
→ None[source]¶ A player who hunts for random players.
Names:
 Random Hunter: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Random Hunter'¶

class
axelrod.strategies.inverse.
Inverse
[source]¶ A player who defects with a probability that diminishes relative to how long ago the opponent defected.
Names:
 Inverse: Original Name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Inverse'¶

class
axelrod.strategies.lookerup.
EvolvedLookerUp1_1_1
→ None[source]¶ A 1 1 1 Lookerup trained with an evolutionary algorithm.
Names:
 Evolved Lookerup 1 1 1: Original name by Marc Harper

name
= 'EvolvedLookerUp1_1_1'¶

class
axelrod.strategies.lookerup.
EvolvedLookerUp2_2_2
→ None[source]¶ A 2 2 2 Lookerup trained with an evolutionary algorithm.
Names:
 Evolved Lookerup 2 2 2: Original name by Marc Harper

name
= 'EvolvedLookerUp2_2_2'¶

class
axelrod.strategies.lookerup.
LookerUp
(lookup_dict: dict = None, initial_actions: tuple = None, pattern: Any = None, parameters: axelrod.strategies.lookerup.Plays = None) → None[source]¶ This strategy uses a LookupTable to decide its next action. If there is not enough history to use the table, it calls from a list of self.initial_actions.
If self_depth=2, op_depth=3, op_openings_depth=5, LookerUp finds the last 2 plays of self, the last 3 plays of opponent and the opening 5 plays of opponent. It then looks those up on the LookupTable and returns the appropriate action. If 5 rounds have not been played (the minimum required for op_openings_depth), it calls from self.initial_actions.
LookerUp can be instantiated with a dictionary. The dictionary uses tuple(tuple, tuple, tuple) or Plays as keys. For example, with:
self_plays: depth=2
op_plays: depth=1
op_openings: depth=0:
{Plays((C, C), (C), ()): C,
 Plays((C, C), (D), ()): D,
 Plays((C, D), (C), ()): D,   <- example below
 Plays((C, D), (D), ()): D,
 Plays((D, C), (C), ()): C,
 Plays((D, C), (D), ()): D,
 Plays((D, D), (C), ()): C,
 Plays((D, D), (D), ()): D}
From the above table, if the player last played C, D and the opponent last played C (here the initial opponent play is ignored) then this round, the player would play D.
The dictionary must contain all possible permutations of C’s and D’s.
LookerUp can also be instantiated with pattern=str/tuple of actions, and:
parameters=Plays( self_plays=player_depth: int, op_plays=op_depth: int, op_openings=op_openings_depth: int)
It will create keys of len=2 ** (sum(parameters)) and map the pattern to the keys.
initial_actions is a tuple such as (C, C, D). A table needs initial actions equal to max(self_plays depth, opponent_plays depth, opponent_initial_plays depth). If provided initial_actions is too long, the extra will be ignored. If provided initial_actions is too short, the shortfall will be made up with C’s.
Some well-known strategies can be expressed as special cases. For example, Cooperator is given by the dict (all history is ignored and the player always plays C):
{Plays((), (), ()) : C}
TitForTat is given by (the only history that matters is the opponent’s last play):
{Plays((), (D,), ()): D, Plays((), (C,), ()): C}
LookerUp’s LookupTable defaults to TitForTat. The initial_actions defaults to playing C.
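The table-lookup step itself can be sketched as follows; the helper name and the depth-slicing details are assumptions for illustration:

```python
# Hedged sketch of the LookerUp lookup: build a key from the last
# self_depth own plays, the last op_depth opponent plays, and the first
# openings_depth opponent plays, then read the action from the table.
C, D = "C", "D"

def looker_up_move(table, my_history, opp_history,
                   self_depth, op_depth, openings_depth):
    key = (
        tuple(my_history[-self_depth:]) if self_depth else (),
        tuple(opp_history[-op_depth:]) if op_depth else (),
        tuple(opp_history[:openings_depth]) if openings_depth else (),
    )
    return table[key]

# Tit For Tat as a lookup table: only the opponent's last play matters.
tft_table = {((), (C,), ()): C, ((), (D,), ()): D}
print(looker_up_move(tft_table, [C, C], [C, D], 0, 1, 0))  # D
```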
Names:
 Lookerup: Original name by Martin Jones

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

default_tft_lookup_table
= {Plays(self_plays=(), op_plays=(D,), op_openings=()): D, Plays(self_plays=(), op_plays=(C,), op_openings=()): C}¶

lookup_dict
¶

lookup_table_display
(sort_by: tuple = ('op_openings', 'self_plays', 'op_plays')) → str[source]¶ Returns a string for printing lookup_table info in specified order.
Parameters: sort_by – only_elements=’self_plays’, ‘op_plays’, ‘op_openings’

name
= 'LookerUp'¶

class
axelrod.strategies.lookerup.
LookupTable
(lookup_dict: dict) → None[source]¶ LookerUp and its children use this object to determine their next actions.
It is an object that creates a table of all possible plays to a specified depth and the action to be returned for each combination of plays. The “get” method returns the appropriate response. For the table containing:
...
Plays(self_plays=(C, C), op_plays=(C, D), op_openings=(D, C)): D
Plays(self_plays=(C, C), op_plays=(C, D), op_openings=(D, D)): C
...
with player.history[-2:]=[C, C], opponent.history[-2:]=[C, D] and opponent.history[:2]=[D, D], calling LookupTable.get(plays=(C, C), op_plays=(C, D), op_openings=(D, D)) will return C.
Instantiate the table with a lookup_dict. This is {(self_plays_tuple, op_plays_tuple, op_openings_tuple): action, …}. It must contain every possible permutation of C’s and D’s for the above tuples. So:
good_dict = {((C,), (C,), ()): C, ((C,), (D,), ()): C, ((D,), (C,), ()): D, ((D,), (D,), ()): C}
bad_dict = {((C,), (C,), ()): C, ((C,), (D,), ()): C, ((D,), (C,), ()): D}
LookupTable.from_pattern() creates an ordered list of keys for you and maps the pattern to the keys.:
LookupTable.from_pattern(pattern=(C, D, D, C), player_depth=0, op_depth=1, op_openings_depth=1)
creates the dictionary:
{Plays(self_plays=(), op_plays=(C), op_openings=(C)): C,
 Plays(self_plays=(), op_plays=(C), op_openings=(D)): D,
 Plays(self_plays=(), op_plays=(D), op_openings=(C)): D,
 Plays(self_plays=(), op_plays=(D), op_openings=(D)): C}
and then returns a LookupTable with that dictionary.
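The key generation and pattern mapping can be sketched as follows; the helper names are assumptions, and the keys are generated with C sorting before D, matching the example above:

```python
# Hedged sketch of from_pattern: enumerate all key tuples in C-before-D
# order, then zip the pattern onto them to build the lookup dictionary.
from itertools import product

C, D = "C", "D"

def make_keys(player_depth, op_depth, op_openings_depth):
    return [
        (s, o, g)
        for s in product((C, D), repeat=player_depth)
        for o in product((C, D), repeat=op_depth)
        for g in product((C, D), repeat=op_openings_depth)
    ]

def table_from_pattern(pattern, player_depth, op_depth, op_openings_depth):
    keys = make_keys(player_depth, op_depth, op_openings_depth)
    assert len(pattern) == len(keys), "pattern must cover every key"
    return dict(zip(keys, pattern))

table = table_from_pattern((C, D, D, C), 0, 1, 1)
print(table[((), (C,), (D,))])  # D
```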

dictionary
¶

display
(sort_by: tuple = ('op_openings', 'self_plays', 'op_plays')) → str[source]¶ Returns a string for printing lookup_table info in specified order.
Parameters: sort_by – only_elements=’self_plays’, ‘op_plays’, ‘op_openings’

classmethod
from_pattern
(pattern: tuple, player_depth: int, op_depth: int, op_openings_depth: int)[source]¶

op_depth
¶

op_openings_depth
¶

player_depth
¶

table_depth
¶


class
axelrod.strategies.lookerup.
Plays
(self_plays, op_plays, op_openings)¶ 
op_openings
¶ Alias for field number 2

op_plays
¶ Alias for field number 1

self_plays
¶ Alias for field number 0


class
axelrod.strategies.lookerup.
Winner12
→ None[source]¶ A lookup table based strategy.
Names:
 Winner12: [Mathieu2015]

name
= 'Winner12'¶

class
axelrod.strategies.lookerup.
Winner21
→ None[source]¶ A lookup table based strategy.
Names:
 Winner21: [Mathieu2015]

name
= 'Winner21'¶

axelrod.strategies.lookerup.
create_lookup_table_keys
(player_depth: int, op_depth: int, op_openings_depth: int) → list[source]¶ Returns a list of Plays that has all possible permutations of C’s and D’s for each specified depth. The list is in order, C < D, sorted by ((player_tuple), (op_tuple), (op_openings_tuple)). create_lookup_table_keys(2, 1, 0) returns:
[Plays(self_plays=(C, C), op_plays=(C,), op_openings=()),
 Plays(self_plays=(C, C), op_plays=(D,), op_openings=()),
 Plays(self_plays=(C, D), op_plays=(C,), op_openings=()),
 Plays(self_plays=(C, D), op_plays=(D,), op_openings=()),
 Plays(self_plays=(D, C), op_plays=(C,), op_openings=()),
 Plays(self_plays=(D, C), op_plays=(D,), op_openings=()),
 Plays(self_plays=(D, D), op_plays=(C,), op_openings=()),
 Plays(self_plays=(D, D), op_plays=(D,), op_openings=())]

axelrod.strategies.lookerup.
get_last_n_plays
(player: axelrod.player.Player, depth: int) → tuple[source]¶ Returns the last N plays of player as a tuple.

axelrod.strategies.lookerup.
make_keys_into_plays
(lookup_table: dict) → dict[source]¶ Returns a dict where all keys are Plays.

class
axelrod.strategies.mathematicalconstants.
CotoDeRatio
[source]¶ The player will always aim to bring the ratio of cooperations to defections closer to the ratio given in a subclass.
Names:
 Co to Do Ratio: Original Name by Timothy Standen

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶
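One plausible reading of the ratio-chasing rule can be sketched as follows. The helper name, the strict ">" comparison, and even which history is consulted (own, opponent's, or joint) are assumptions; this is an interpretation, not the library's implementation:

```python
# Hedged sketch: defect when the current C:D ratio already exceeds the
# target constant, otherwise cooperate, nudging the ratio toward the target.
def ratio_move(history, target_ratio):
    cooperations = history.count("C")
    defections = history.count("D")
    if defections == 0:
        return "C"
    return "D" if cooperations / defections > target_ratio else "C"

GOLDEN = 1.618033988749895
print(ratio_move([], GOLDEN))                    # C
print(ratio_move(["C", "C", "C", "D"], GOLDEN))  # D (3/1 > phi)
```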

class
axelrod.strategies.mathematicalconstants.
Golden
[source]¶ The player will always aim to bring the ratio of cooperations to defections closer to the golden mean
Names:
 Golden: Original Name by Timothy Standen

name
= '$\\phi$'¶

ratio
= 1.618033988749895¶

class
axelrod.strategies.mathematicalconstants.
Pi
[source]¶ The player will always aim to bring the ratio of cooperations to defections closer to pi.
Names:
 Pi: Original Name by Timothy Standen

name
= '$\\pi$'¶

ratio
= 3.141592653589793¶

class
axelrod.strategies.mathematicalconstants.
e
[source]¶ The player will always aim to bring the ratio of cooperations to defections closer to e.
Names:
 e: Original Name by Timothy Standen

name
= '$e$'¶

ratio
= 2.718281828459045¶
Memory Two strategies.

class
axelrod.strategies.memorytwo.
AON2
→ None[source]¶ AON2, a memory two strategy introduced in [Hilbe2017]. It belongs to the AONk (all-or-none) family of strategies. These strategies were designed to satisfy the three following properties:
1. Mutually Cooperative. A strategy is mutually cooperative if there are histories for which the strategy prescribes to cooperate, and if it continues to cooperate after rounds with mutual cooperation (provided the last k actions of the focal player were actually consistent).
2. Error correcting. A strategy is error correcting after at most k rounds if, after any history, it generally takes a group of players at most k + 1 rounds to reestablish mutual cooperation.
3. Retaliating. A strategy is retaliating for at least k rounds if, after rounds in which the focal player cooperated while the coplayer defected, the strategy responds by defecting the following k rounds.
In [Hilbe2017] the following vectors are reported as “equivalent” to AON2 with their respective self-cooperation rates (note that these are not the same):
1. [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], self-cooperation rate: 0.952
2. [1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], self-cooperation rate: 0.951
3. [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], self-cooperation rate: 0.951
4. [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1], self-cooperation rate: 0.952
AON2 is implemented using vector 1 due to its self-cooperation rate.
In essence it is a strategy that starts off by cooperating and will cooperate again only after the states (CC, CC), (CD, CD), (DC, DC), (DD, DD).
Names:
 AON2: [Hilbe2017]
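Vector 1 reduces to a compact rule: cooperate only when both players' last two moves coincide. A minimal sketch of that rule (the function name is ours, not the library's):

```python
def aon2_move(my_last_two, opp_last_two):
    # All-or-none condition for AON2 (vector 1): cooperate only after
    # the states (CC, CC), (CD, CD), (DC, DC) and (DD, DD), i.e. when
    # both players' last two moves were identical.
    return "C" if tuple(my_last_two) == tuple(opp_last_two) else "D"

aon2_move(("C", "C"), ("C", "C"))  # mutual cooperation continues: "C"
aon2_move(("C", "C"), ("C", "D"))  # any mismatch triggers defection: "D"
```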

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'AON2'¶

class
axelrod.strategies.memorytwo.
DelayedAON1
→ None[source]¶ Delayed AON1, a memory-two strategy also introduced in [Hilbe2017], which belongs to the AONk family. Note that AON1 is equivalent to Win-Stay Lose-Shift.
In [Hilbe2017] the following vectors are reported as “equivalent” to Delayed AON1 with their respective self-cooperation rates (note that these are not the same):
1. [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1], self-cooperation rate: 0.952
2. [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1], self-cooperation rate: 0.970
3. [1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1], self-cooperation rate: 0.971
Delayed AON1 is implemented using vector 3 due to its self-cooperation rate.
In essence it is a strategy that starts off by cooperating and will cooperate again only after the states (CC, CC), (CD, CD), (CD, DD), (DD, CD), (DC, DC) and (DD, DD).
Names:
 Delayed AON1: [Hilbe2017]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Delayed AON1'¶

class
axelrod.strategies.memorytwo.
MEM2
→ None[source]¶ A memory-two player that switches between TFT, TFTT, and ALLD.
Note that the reference claims that this is a memory two strategy but in fact it is infinite memory. This is because the player plays as ALLD if ALLD has ever been selected twice, which can only be known if the entire history of play is accessible.
Names:
 MEM2: [Li2014]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'MEM2'¶

class
axelrod.strategies.memorytwo.
MemoryTwoPlayer
(sixteen_vector: Tuple[float, ...] = None, initial: axelrod.action.Action = C) → None[source]¶ Uses a sixteen-vector for strategies based on the 16 conditional probabilities P(X | I, J, K, L) where X, I, J, K, L in [C, D] and I, J are the player’s last two moves and K, L are the opponent’s last two moves. These conditional probabilities are the following:
1. P(C | CC, CC) 2. P(C | CC, CD) 3. P(C | CC, DC) 4. P(C | CC, DD)
5. P(C | CD, CC) 6. P(C | CD, CD) 7. P(C | CD, DC) 8. P(C | CD, DD)
9. P(C | DC, CC) 10. P(C | DC, CD) 11. P(C | DC, DC) 12. P(C | DC, DD)
13. P(C | DD, CC) 14. P(C | DD, CD) 15. P(C | DD, DC) 16. P(C | DD, DD)
Cooperator is set as the default player if sixteen_vector is not given.
Names
 Memory Two: [Hilbe2017]
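The lookup can be sketched as below, with states ordered as in the enumeration above (the opponent's pair varying fastest). This is an illustrative reconstruction; the helper names are ours, not the library's:

```python
import itertools
import random

ACTIONS = ("C", "D")
# States in docstring order: the player's last two moves, then the
# opponent's last two, with the opponent's pair varying fastest.
STATES = [(mine, theirs)
          for mine in itertools.product(ACTIONS, repeat=2)
          for theirs in itertools.product(ACTIONS, repeat=2)]

def memory_two_move(sixteen_vector, my_last_two, opp_last_two, rng=random):
    # Look up P(C | state) in the sixteen-vector and sample the move.
    p_cooperate = sixteen_vector[STATES.index((my_last_two, opp_last_two))]
    return "C" if rng.random() < p_cooperate else "D"

# AON2's vector 1 from above recovers the all-or-none behaviour:
AON2 = (1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1)
memory_two_move(AON2, ("C", "C"), ("C", "C"))  # deterministic "C"
```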

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Generic Memory Two Player'¶
Memory One strategies. Note that there are Memory One strategies in other files, including titfortat.py and zero_determinant.py.

class
axelrod.strategies.memoryone.
ALLCorALLD
[source]¶ This strategy is at the parameter extreme of the ZD strategies (phi = 0). It simply repeats its last move, and so mimics ALLC or ALLD after round one. If the tournament is noisy, there will be long runs of C and D.
The starting move is chosen at random, cooperating with probability 0.6; this was an arbitrary choice at implementation time.
Names:
 ALLC or ALLD: Original name by Marc Harper
 Repeat: [Akin2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'ALLCorALLD'¶

class
axelrod.strategies.memoryone.
FirmButFair
→ None[source]¶ A strategy that cooperates on the first move, and cooperates except after receiving a sucker payoff.
Names:
 Firm But Fair: [Frean1994]

name
= 'Firm But Fair'¶

class
axelrod.strategies.memoryone.
GTFT
(p: float = None) → None[source]¶ Generous Tit For Tat Strategy.
Names:
 Generous Tit For Tat: [Nowak1993]
 Naive peace maker: [Gaudesi2016]
 Soft Joss: [Gaudesi2016]
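GTFT's generosity (the probability of cooperating after an opponent defection) is commonly derived from the payoff values. A sketch under the standard IPD payoffs R=3, P=1, S=0, T=5; this is the formula from the Nowak and Sigmund literature, which we assume the library follows when p is not supplied:

```python
def gtft_generosity(R=3, P=1, S=0, T=5):
    # Generosity probability for GTFT:
    # min(1 - (T - R) / (R - S), (R - P) / (T - P)).
    # Payoff values are the standard IPD defaults.
    return min(1 - (T - R) / (R - S), (R - P) / (T - P))

gtft_generosity()  # 1/3 with the standard payoffs
```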

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'GTFT'¶

class
axelrod.strategies.memoryone.
MemoryOnePlayer
(four_vector: Tuple[float, float, float, float] = None, initial: axelrod.action.Action = C) → None[source]¶ Uses a four-vector for strategies based on the last round of play, (P(C|CC), P(C|CD), P(C|DC), P(C|DD)). Win-Stay Lose-Shift is set as the default player if four_vector is not given. Intended to be used as an abstract base class or to at least be supplied with an initializing four_vector.
Names
 Memory One: [Nowak1990]
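A four-vector player can be sketched directly; the vectors for Tit For Tat and Win-Stay Lose-Shift make the encoding concrete (the helper names are ours, not the library's):

```python
import random

def memory_one_move(four_vector, my_last, opp_last, rng=random):
    # four_vector = (P(C|CC), P(C|CD), P(C|DC), P(C|DD)),
    # conditioned on (my last move, opponent's last move).
    index = {"CC": 0, "CD": 1, "DC": 2, "DD": 3}[my_last + opp_last]
    return "C" if rng.random() < four_vector[index] else "D"

TIT_FOR_TAT = (1, 0, 1, 0)          # copy the opponent's last move
WIN_STAY_LOSE_SHIFT = (1, 0, 0, 1)  # repeat after CC or DD, else switch

memory_one_move(TIT_FOR_TAT, "C", "D")  # deterministic "D"
```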

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Generic Memory One Player'¶

class
axelrod.strategies.memoryone.
ReactivePlayer
(probabilities: Tuple[float, float]) → None[source]¶ A generic reactive player. Defined by 2 probabilities conditional on the opponent’s last move: P(C|C), P(C|D).
Names:
 Reactive: [Nowak1989]

name
= 'Reactive Player'¶

class
axelrod.strategies.memoryone.
SoftJoss
(q: float = 0.9) → None[source]¶ Defects with probability 0.9 when the opponent defects, otherwise emulates Tit For Tat.
Names:
 Soft Joss: [Prison1998]

name
= 'Soft Joss'¶

class
axelrod.strategies.memoryone.
StochasticCooperator
→ None[source]¶ Stochastic Cooperator.
Names:
 Stochastic Cooperator: [Adami2013]

name
= 'Stochastic Cooperator'¶

class
axelrod.strategies.memoryone.
StochasticWSLS
(ep: float = 0.05) → None[source]¶ Stochastic WSLS, similar to Generous TFT. Note that this is not the same as the Stochastic WSLS described in [Amaral2016]; that strategy is a modification of WSLS that learns from the performance of other strategies.
Names:
 Stochastic WSLS: Original name by Marc Harper

name
= 'Stochastic WSLS'¶

class
axelrod.strategies.memoryone.
WinShiftLoseStay
(initial: axelrod.action.Action = D) → None[source]¶ Win-Shift Lose-Stay, also called Reverse Pavlov.
Names:
 WSLS: [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Win-Shift Lose-Stay'¶

class
axelrod.strategies.memoryone.
WinStayLoseShift
(initial: axelrod.action.Action = C) → None[source]¶ Win-Stay Lose-Shift, also called Pavlov.
Names:
 Win Stay Lose Shift: [Nowak1993]
 WSLS: [Stewart2012]
 Pavlov: [Kraines1989]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Win-Stay Lose-Shift'¶

class
axelrod.strategies.meta.
MemoryDecay
(p_memory_delete: float = 0.1, p_memory_alter: float = 0.03, loss_value: float = -2, gain_value: float = 1, memory: list = None, start_strategy: axelrod.player.Player = <class 'axelrod.strategies.titfortat.TitForTat'>, start_strategy_duration: int = 15)[source]¶ A player utilizes the (default) Tit for Tat strategy for the first (default) 15 turns, at the same time memorizing the opponent’s decisions. After the 15 turns have passed, the player calculates a ‘net cooperation score’ (NCS) for their opponent, weighing decisions to Cooperate as (default) 1, and to Defect as (default) -2. If the opponent’s NCS is below 0, the player defects; otherwise, they cooperate.
The player’s memories of the opponent’s decisions have a random chance to be altered (i.e., a C decision becomes D or vice versa; default probability is 0.03) or deleted (default probability is 0.1).
It is possible to pass a different axelrod player class to change the initial player behavior.
Name: Memory Decay
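The scoring step can be sketched as below. The function name and the plain-list memory are illustrative simplifications of the class, and we assume the Defect weight is negative (-2) so that the score can drop below zero, consistent with the "NCS below 0" condition:

```python
def ncs_move(memory, gain_value=1, loss_value=-2):
    # Net cooperation score: each remembered C counts gain_value,
    # each remembered D counts loss_value; defect on a negative total.
    score = sum(gain_value if move == "C" else loss_value for move in memory)
    return "D" if score < 0 else "C"

ncs_move(["C", "C", "D"])  # score 0 -> "C"
ncs_move(["C", "D", "D"])  # score -3 -> "D"
```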

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

gain_loss_translate
()[source]¶ Translates the actions (D and C) to numeric values (loss_value and gain_value).

name
= 'Memory Decay'¶


class
axelrod.strategies.meta.
MetaHunter
[source]¶ A player who uses a selection of hunters.
Names
 Meta Hunter: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

static
meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Hunter'¶

class
axelrod.strategies.meta.
MetaHunterAggressive
(team=None)[source]¶ A player who uses a selection of hunters.
Names
 Meta Hunter Aggressive: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

static
meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Hunter Aggressive'¶

class
axelrod.strategies.meta.
MetaMajority
(team=None)[source]¶ A player who goes by the majority vote of all other nonmeta players.
Names:
 Meta Majority: Original name by Karol Langner

static
meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Majority'¶

class
axelrod.strategies.meta.
MetaMajorityFiniteMemory
[source]¶ MetaMajority with the team of Finite Memory Players
Names
 Meta Majority Finite Memory: Original name by Marc Harper

name
= 'Meta Majority Finite Memory'¶

class
axelrod.strategies.meta.
MetaMajorityLongMemory
[source]¶ MetaMajority with the team of Long (infinite) Memory Players
Names
 Meta Majority Long Memory: Original name by Marc Harper

name
= 'Meta Majority Long Memory'¶

class
axelrod.strategies.meta.
MetaMajorityMemoryOne
[source]¶ MetaMajority with the team of Memory One players
Names
 Meta Majority Memory One: Original name by Marc Harper

name
= 'Meta Majority Memory One'¶

class
axelrod.strategies.meta.
MetaMinority
(team=None)[source]¶ A player who goes by the minority vote of all other nonmeta players.
Names:
 Meta Minority: Original name by Karol Langner

static
meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Minority'¶

class
axelrod.strategies.meta.
MetaMixer
(team=None, distribution=None)[source]¶ A player who randomly switches between a team of players. If no distribution is passed then the player will uniformly choose between sub players.
In essence this is creating a Mixed strategy.
Parameters
 team : list of strategy classes, optional
 Team of strategies that are to be randomly played. If none is passed, will select the ordinary strategies.
 distribution : list representing a probability distribution, optional
 This gives the distribution from which to select the players. If none is passed, will select uniformly.
Names
 Meta Mixer: Original name by Vince Knight
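The sampling step amounts to a weighted choice over the team. A stdlib sketch (the real class resolves strategy classes into players and samples among their proposed moves; the function name here is ours):

```python
import random

def meta_mixer_choice(team_moves, distribution=None, rng=random):
    # Sample one sub-player's move; uniform when no distribution given.
    if distribution is None:
        distribution = [1 / len(team_moves)] * len(team_moves)
    return rng.choices(team_moves, weights=distribution, k=1)[0]

meta_mixer_choice(["C", "D"], distribution=[1.0, 0.0])  # always "C"
```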

classifier
= {'inspects_source': False, 'long_run_time': True, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

meta_strategy
(results, opponent)[source]¶ Uses numpy.random.choice to sample with weights

name
= 'Meta Mixer'¶

class
axelrod.strategies.meta.
MetaPlayer
(team=None)[source]¶ A generic player that has its own team of players.
Names:
 Meta Player: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': True, 'makes_use_of': {'length', 'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Player'¶

class
axelrod.strategies.meta.
MetaWinner
(team=None)[source]¶ A player who goes by the strategy of the current winner.
Names:
 Meta Winner: Original name by Karol Langner

meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Winner'¶

class
axelrod.strategies.meta.
MetaWinnerDeterministic
[source]¶ Meta Winner with the team of Deterministic Players.
Names
 Meta Winner Deterministic: Original name by Marc Harper

name
= 'Meta Winner Deterministic'¶

class
axelrod.strategies.meta.
MetaWinnerEnsemble
(team=None)[source]¶ A variant of MetaWinner that chooses one of the top scoring strategies at random against each opponent. Note this strategy is always stochastic regardless of the team.
Names:
 Meta Winner Ensemble: Original name by Marc Harper

meta_strategy
(results, opponent)[source]¶ Determine the meta result based on results of all players. Override this function in child classes.

name
= 'Meta Winner Ensemble'¶

class
axelrod.strategies.meta.
MetaWinnerFiniteMemory
[source]¶ MetaWinner with the team of Finite Memory Players
Names
 Meta Winner Finite Memory: Original name by Marc Harper

name
= 'Meta Winner Finite Memory'¶

class
axelrod.strategies.meta.
MetaWinnerLongMemory
[source]¶ MetaWinner with the team of Long (infinite) Memory Players
Names
 Meta Winner Long Memory: Original name by Marc Harper

name
= 'Meta Winner Long Memory'¶

class
axelrod.strategies.meta.
MetaWinnerMemoryOne
[source]¶ MetaWinner with the team of Memory One players
Names
 Meta Winner Memory One: Original name by Marc Harper

name
= 'Meta Winner Memory One'¶

class
axelrod.strategies.meta.
MetaWinnerStochastic
[source]¶ Meta Winner with the team of Stochastic Players.
Names
 Meta Winner Stochastic: Original name by Marc Harper

name
= 'Meta Winner Stochastic'¶

class
axelrod.strategies.meta.
NMWEDeterministic
[source]¶ Nice Meta Winner Ensemble with the team of Deterministic Players.
Names
 Nice Meta Winner Ensemble Deterministic: Original name by Marc Harper

name
= 'NMWE Deterministic'¶

class
axelrod.strategies.meta.
NMWEFiniteMemory
[source]¶ Nice Meta Winner Ensemble with the team of Finite Memory Players.
Names
 Nice Meta Winner Ensemble Finite Memory: Original name by Marc Harper

name
= 'NMWE Finite Memory'¶

class
axelrod.strategies.meta.
NMWELongMemory
[source]¶ Nice Meta Winner Ensemble with the team of Long Memory Players.
Names
 Nice Meta Winner Ensemble Long Memory: Original name by Marc Harper

name
= 'NMWE Long Memory'¶

class
axelrod.strategies.meta.
NMWEMemoryOne
[source]¶ Nice Meta Winner Ensemble with the team of Memory One Players.
Names
 Nice Meta Winner Ensemble Memory One: Original name by Marc Harper

name
= 'NMWE Memory One'¶

class
axelrod.strategies.meta.
NMWEStochastic
[source]¶ Nice Meta Winner Ensemble with the team of Stochastic Players.
Names
 Nice Meta Winner Ensemble Stochastic: Original name by Marc Harper

name
= 'NMWE Stochastic'¶

class
axelrod.strategies.meta.
NiceMetaWinner
(team=None)¶ A player who goes by the strategy of the current winner.
Names:
 Meta Winner: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': True, 'makes_use_of': {'length', 'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Nice Meta Winner'¶

original_class
¶ alias of
MetaWinner

strategy
(opponent)¶

class
axelrod.strategies.meta.
NiceMetaWinnerEnsemble
(team=None)¶ A variant of MetaWinner that chooses one of the top scoring strategies at random against each opponent. Note this strategy is always stochastic regardless of the team.
Names:
 Meta Winner Ensemble: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': True, 'makes_use_of': {'length', 'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Nice Meta Winner Ensemble'¶

original_class
¶ alias of
MetaWinnerEnsemble

strategy
(opponent)¶

class
axelrod.strategies.mindcontrol.
MindBender
[source]¶ A player that changes the opponent’s strategy by modifying the internal dictionary.
Names
 Mind Bender: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': True, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Mind Bender'¶

class
axelrod.strategies.mindcontrol.
MindController
[source]¶ A player that changes the opponent’s strategy to cooperate.
Names
 Mind Controller: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': True, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Mind Controller'¶

class
axelrod.strategies.mindcontrol.
MindWarper
[source]¶ A player that changes the opponent’s strategy but blocks changes to its own.
Names
 Mind Warper: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': True, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'Mind Warper'¶
The player classes in this module do not obey standard rules of the IPD (as indicated by their classifier). We do not recommend putting a lot of time into optimising them.

class
axelrod.strategies.mindreader.
MindReader
[source]¶ A player that looks ahead at what the opponent will do and decides what to do.
Names:
 Mind reader: Original name by Jason Young

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

static
foil_strategy_inspection
() → axelrod.action.Action[source]¶ Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name
= 'Mind Reader'¶

class
axelrod.strategies.mindreader.
MirrorMindReader
[source]¶ A player that will mirror whatever strategy it is playing against by cheating and calling the opponent’s strategy function instead of its own.
Names:
 Protected Mind reader: Original name by Brice Fernandes

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': True, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

static
foil_strategy_inspection
() → axelrod.action.Action[source]¶ Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name
= 'Mirror Mind Reader'¶

class
axelrod.strategies.mindreader.
ProtectedMindReader
[source]¶ A player that looks ahead at what the opponent will do and decides what to do. It is also protected from mind control strategies
Names:
 Protected Mind reader: Original name by Jason Young

classifier
= {'inspects_source': True, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': True, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Protected Mind Reader'¶

class
axelrod.strategies.mutual.
Desperate
[source]¶ A player that only cooperates after mutual defection.
Names:
 Desperate: [Berg2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Desperate'¶

class
axelrod.strategies.mutual.
Hopeless
[source]¶ A player that only defects after mutual cooperation.
Names:
 Hopeless: [Berg2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Hopeless'¶

class
axelrod.strategies.mutual.
Willing
[source]¶ A player that only defects after mutual defection.
Names:
 Willing: [Berg2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Willing'¶

class
axelrod.strategies.negation.
Negation
[source]¶ A player that cooperates or defects randomly on the first move, then simply plays the opposite of the opponent’s last move thereafter.
Names:
 Negation: [PD2017]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Negation'¶

class
axelrod.strategies.oncebitten.
FoolMeForever
[source]¶ Fool me once, shame on me. Teach a man to fool me and I’ll be fooled for the rest of my life.
Names:
 Fool Me Forever: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Fool Me Forever'¶

class
axelrod.strategies.oncebitten.
FoolMeOnce
[source]¶ Forgives one D then retaliates forever on a second D.
Names:
 Fool me once: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Fool Me Once'¶

class
axelrod.strategies.oncebitten.
ForgetfulFoolMeOnce
(forget_probability: float = 0.05) → None[source]¶ Forgives one D then retaliates forever on a second D. Sometimes randomly forgets the defection count, and so keeps a secondary count separate from the standard count in Player.
Names:
 Forgetful Fool Me Once: Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Forgetful Fool Me Once'¶

class
axelrod.strategies.oncebitten.
OnceBitten
→ None[source]¶ Cooperates once when the opponent defects, but if they defect twice in a row, behaves like a forgetful grudger, defecting for 10 turns.
Names:
 Once Bitten: Original name by Holly Marissa

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 12, 'stochastic': False}¶

name
= 'Once Bitten'¶

class
axelrod.strategies.prober.
CollectiveStrategy
[source]¶ Defined in [Li2009]. ‘It always cooperates in the first move and defects in the second move. If the opponent also cooperates in the first move and defects in the second move, CS will cooperate until the opponent defects. Otherwise, CS will always defect.’
Names:
 Collective Strategy: [Li2009]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'CollectiveStrategy'¶

class
axelrod.strategies.prober.
HardProber
[source]¶ Plays D, D, C, C initially. Defects forever if opponent cooperated in moves 2 and 3. Otherwise plays TFT.
Names:
 Hard Prober: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Hard Prober'¶

class
axelrod.strategies.prober.
NaiveProber
(p: float = 0.1) → None[source]¶ Like Tit For Tat, but it occasionally defects with a small probability.
Names:
 Naive Prober: [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Naive Prober'¶

class
axelrod.strategies.prober.
Prober
[source]¶ Plays D, C, C initially. Defects forever if opponent cooperated in moves 2 and 3. Otherwise plays TFT.
Names:
 Prober: [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Prober'¶

class
axelrod.strategies.prober.
Prober2
[source]¶ Plays D, C, C initially. Cooperates forever if opponent played D then C in moves 2 and 3. Otherwise plays TFT.
Names:
 Prober 2: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Prober 2'¶

class
axelrod.strategies.prober.
Prober3
[source]¶ Plays D, C initially. Defects forever if opponent played C in move 2. Otherwise plays TFT.
Names:
 Prober 3: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Prober 3'¶

class
axelrod.strategies.prober.
Prober4
→ None[source]¶ Plays C, C, D, C, D, D, D, C, C, D, C, D, C, C, D, C, D, D, C, D initially. Counts retaliating and provocative defections of the opponent. If the absolute difference between the counts is smaller than or equal to 2, defects forever. Otherwise plays C for the next 5 turns and TFT for the rest of the game.
Names:
 Prober 4: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Prober 4'¶

class
axelrod.strategies.prober.
RemorsefulProber
(p: float = 0.1) → None[source]¶ Like Naive Prober, but it remembers if the opponent responds to a random defection with a defection by being remorseful and cooperating.
For reference see: [Li2011]. A more complete description is given in “The Selfish Gene” (https://books.google.co.uk/books?id=ekonDAAAQBAJ):
“Remorseful Prober remembers whether it has just spontaneously defected, and whether the result was prompt retaliation. If so, it ‘remorsefully’ allows its opponent ‘one free hit’ without retaliating.”
Names:
 Remorseful Prober: [Li2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': True}¶

name
= 'Remorseful Prober'¶

class
axelrod.strategies.punisher.
InversePunisher
→ None[source]¶ An inverted version of Punisher. The player starts by cooperating but will defect if at any point the opponent has defected, forgetting after mem_length matches, with 1 <= mem_length <= 20. This time mem_length is proportional to the amount of time the opponent has played C.
Names:
 Inverse Punisher: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Inverse Punisher'¶

class
axelrod.strategies.punisher.
LevelPunisher
[source]¶ A player starts by cooperating; however, after 10 rounds it will defect if at any point the proportion of defections by an opponent is greater than 20%.
Names:
 Level Punisher: [Eckhart2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Level Punisher'¶

class
axelrod.strategies.punisher.
Punisher
→ None[source]¶ A player starts by cooperating but will defect if at any point the opponent has defected, forgetting after mem_length matches, with 1 <= mem_length <= 20 proportional to the amount of time the opponent has played D, punishing that player for playing D too often.
Names:
 Punisher: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Punisher'¶

class
axelrod.strategies.punisher.
TrickyLevelPunisher
[source]¶ A player starts by cooperating; however, after 10, 50 and 100 rounds it will defect if at any point the percentage of defections by an opponent is greater than 20%, 10% and 5% respectively.
Names:
 Tricky Level Punisher: [Eckhart2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Tricky Level Punisher'¶

class
axelrod.strategies.qlearner.
ArrogantQLearner
→ None[source]¶ A player who learns the best strategies through the q-learning algorithm.
This Q learner jumps to quick conclusions and cares about the future.
Names:
 Arrogant Q Learner: Original name by Geraint Palmer

discount_rate
= 0.1¶

learning_rate
= 0.9¶

name
= 'Arrogant QLearner'¶

class
axelrod.strategies.qlearner.
CautiousQLearner
→ None[source]¶ A player who learns the best strategies through the q-learning algorithm.
This Q learner is slower to come to conclusions and wants to look ahead more.
Names:
 Cautious Q Learner: Original name by Geraint Palmer

discount_rate
= 0.1¶

learning_rate
= 0.1¶

name
= 'Cautious QLearner'¶

class
axelrod.strategies.qlearner.
HesitantQLearner
→ None[source]¶ A player who learns the best strategies through the q-learning algorithm.
This Q learner is slower to come to conclusions and does not look ahead much.
Names:
 Hesitant Q Learner: Original name by Geraint Palmer

discount_rate
= 0.9¶

learning_rate
= 0.1¶

name
= 'Hesitant QLearner'¶

class
axelrod.strategies.qlearner.
RiskyQLearner
→ None[source]¶ A player who learns the best strategies through the q-learning algorithm.
This Q learner is quick to come to conclusions and doesn’t care about the future.
Names:
 Risky Q Learner: Original name by Geraint Palmer

action_selection_parameter
= 0.1¶

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

discount_rate
= 0.9¶

find_reward
(opponent: axelrod.player.Player) → Dict[axelrod.action.Action, Dict[axelrod.action.Action, Union[int, float]]][source]¶ Finds the reward gained on the last iteration

find_state
(opponent: axelrod.player.Player) → str[source]¶ Finds my_state (the opponent’s last n moves + its previous proportion of playing C) as a hashable state

learning_rate
= 0.9¶

memory_length
= 12¶

name
= 'Risky QLearner'¶

perform_q_learning
(prev_state: str, state: str, action: axelrod.action.Action, reward)[source]¶ Performs the q-learning algorithm
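The update performed here is standard tabular Q-learning. A generic sketch using RiskyQLearner's rates shown above (learning rate 0.9, discount rate 0.9) and a plain dict as the Q table; the function name and table layout are ours, not the library's:

```python
def q_update(q, prev_state, action, reward, state,
             learning_rate=0.9, discount_rate=0.9):
    # Standard tabular Q-learning step:
    # Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    best_next = max(q.get(state, {"C": 0.0, "D": 0.0}).values())
    old = q.setdefault(prev_state, {"C": 0.0, "D": 0.0})[action]
    q[prev_state][action] = old + learning_rate * (
        reward + discount_rate * best_next - old)
    return q

q = q_update({}, "s0", "C", reward=1.0, state="s1")
# q["s0"]["C"] is now 0.9 * (1.0 + 0.9 * 0 - 0) = 0.9
```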

class
axelrod.strategies.rand.
Random
(p: float = 0.5) → None[source]¶ A player who randomly chooses between cooperating and defecting.
This strategy came 15th in Axelrod’s original tournament.
Names:
 Random: [Axelrod1980]
 Lunatic: [Tzafestas2000]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 0, 'stochastic': True}¶

name
= 'Random'¶

class
axelrod.strategies.resurrection.
DoubleResurrection
[source]¶ A player starts by cooperating and defects if the number of rounds played by the player is greater than five and the last five rounds are cooperations.
If the last five rounds were defections, the player cooperates.
Names:
 DoubleResurrection: [Eckhart2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'DoubleResurrection'¶

class
axelrod.strategies.resurrection.
Resurrection
[source]¶ A player starts by cooperating and defects if the number of rounds played by the player is greater than five and the last five rounds are defections.
Otherwise, the strategy plays like Tit For Tat.
Names:
 Resurrection: [Eckhart2015]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 5, 'stochastic': False}¶

name
= 'Resurrection'¶

class
axelrod.strategies.retaliate.
LimitedRetaliate
(retaliation_threshold: float = 0.1, retaliation_limit: int = 20) → None[source]¶ A player that cooperates unless the opponent defects and wins. It will then retaliate by defecting. It stops when either it has beaten the opponent 10 times more often than it has lost, or it reaches the retaliation limit (20 defections).
Names:
 Limited Retaliate: Original name by Owen Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Limited Retaliate'¶

class
axelrod.strategies.retaliate.
LimitedRetaliate2
(retaliation_threshold: float = 0.08, retaliation_limit: int = 15) → None[source]¶ LimitedRetaliate player with a threshold of 8 percent and a retaliation limit of 15.
Names:
 Limited Retaliate 2: Original name by Owen Campbell

name
= 'Limited Retaliate 2'¶

class
axelrod.strategies.retaliate.
LimitedRetaliate3
(retaliation_threshold: float = 0.05, retaliation_limit: int = 20) → None[source]¶ LimitedRetaliate player with a threshold of 5 percent and a retaliation limit of 20.
Names:
 Limited Retaliate 3: Original name by Owen Campbell

name
= 'Limited Retaliate 3'¶

class
axelrod.strategies.retaliate.
Retaliate
(retaliation_threshold: float = 0.1) → None[source]¶ A player that starts by cooperating but will retaliate once the number of rounds the opponent has won exceeds 10 percent of the number of the player’s defections.
Names:
 Retaliate: Original name by Owen Campbell

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Retaliate'¶
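The retaliation condition can be sketched as follows (the function and its bookkeeping are a hypothetical illustration of the documented rule; the library tracks these counts on the player object):

```python
def retaliate_move(my_history, opp_history, retaliation_threshold=0.1):
    # The opponent "wins" a round by defecting while we cooperate.
    opponent_wins = sum(
        1 for mine, theirs in zip(my_history, opp_history)
        if mine == "C" and theirs == "D"
    )
    my_defections = my_history.count("D")
    # Retaliate once the opponent's wins exceed threshold * our defection count.
    return "D" if opponent_wins > my_defections * retaliation_threshold else "C"

retaliate_move([], [])        # "C": starts by cooperating
retaliate_move(["C"], ["D"])  # "D": a single loss already triggers retaliation
```

Retaliate2 and Retaliate3 are the same rule with thresholds of 0.08 and 0.05 respectively.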

class
axelrod.strategies.retaliate.
Retaliate2
(retaliation_threshold: float = 0.08) → None[source]¶ Retaliate player with a threshold of 8 percent.
Names:
 Retaliate 2: Original name by Owen Campbell

name
= 'Retaliate 2'¶

class
axelrod.strategies.retaliate.
Retaliate3
(retaliation_threshold: float = 0.05) → None[source]¶ Retaliate player with a threshold of 5 percent.
Names:
 Retaliate 3: Original name by Owen Campbell

name
= 'Retaliate 3'¶

class
axelrod.strategies.sequence_player.
SequencePlayer
(generator_function: function, generator_args: Tuple = ()) → None[source]¶ Abstract base class for players that use a generated sequence to determine their plays.
Names:
 Sequence Player: Original name by Marc Harper

class
axelrod.strategies.sequence_player.
ThueMorse
→ None[source]¶ A player who cooperates or defects according to the Thue-Morse sequence. The first few terms of the Thue-Morse sequence are: 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 …
Thue-Morse sequence: http://mathworld.wolfram.com/ThueMorseSequence.html
Names:
 Thue Morse: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'ThueMorse'¶
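The sequence itself has a simple closed form: the n-th term is the parity of the number of 1-bits in the binary expansion of n. A minimal sketch (not the library’s generator function):

```python
def thue_morse(n):
    # The n-th Thue-Morse term is the parity of the count of 1-bits in n.
    return bin(n).count("1") % 2

[thue_morse(i) for i in range(16)]
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```

Mapping each term through Python truth values (as meta_strategy describes) yields the player's C/D choices; ThueMorseInverse flips each term.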

class
axelrod.strategies.sequence_player.
ThueMorseInverse
→ None[source]¶ A player who plays the inverse of the Thue-Morse sequence.
Names:
 Inverse Thue Morse: Original name by Geraint Palmer

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

meta_strategy
(value: int) → axelrod.action.Action[source]¶ Determines how to map the sequence value to cooperate or defect. By default, treat values like python truth values. Override in child classes for alternate behaviors.

name
= 'ThueMorseInverse'¶

class
axelrod.strategies.shortmem.
ShortMem
[source]¶ A player starts by always cooperating for the first 10 moves.
From the tenth round on, the player analyzes the last ten actions and compares the opponent’s numbers of defections and cooperations as percentages. If cooperation occurs 30% more than defection, it will cooperate. If defection occurs 30% more than cooperation, it will defect. Otherwise, it follows the Tit For Tat algorithm.
Names:
 ShortMem: [Andre2013]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 10, 'stochastic': False}¶

name
= 'ShortMem'¶
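One reading of the “30% more” rule, taking it as a difference of shares over the ten-move window (a hypothetical sketch, not the library’s code):

```python
def shortmem_move(opp_history):
    # Cooperate unconditionally for the first 10 moves.
    if len(opp_history) < 10:
        return "C"
    window = opp_history[-10:]
    c_share = window.count("C") / 10
    d_share = window.count("D") / 10
    if c_share - d_share >= 0.3:
        return "C"
    if d_share - c_share >= 0.3:
        return "D"
    return opp_history[-1]  # otherwise fall back to Tit For Tat

shortmem_move(["C"] * 10)  # "C"
shortmem_move(["D"] * 10)  # "D"
```

Note the memory_depth of 10 in the classifier matches the sliding window.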

class
axelrod.strategies.selfsteem.
SelfSteem
[source]¶ This strategy is based on the feeling with the same name. It is modeled on the sine curve f = sin(2 * pi * n / 10), which varies with the current iteration n.
If f > 0.95, the ‘ego’ of the algorithm is inflated; it always defects. If 0.95 > abs(f) > 0.3, rational behavior; it follows the Tit For Tat algorithm. If abs(f) < 0.3, random behavior. If f < -0.95, the algorithm is at rock bottom; it always cooperates.
Furthermore, the algorithm implements a retaliation policy: if the opponent defects, the sine curve is shifted. But due to a lack of further information, this implementation does not include a sine phase change.
Names:
 SelfSteem: [Andre2013]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'SelfSteem'¶
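The banded rule can be sketched as follows (helper name and argument layout are illustrative; the curve shift after opponent defections is omitted here, as in the library):

```python
import math
import random

def selfsteem_move(n, opp_last, rng=random.random):
    # f oscillates with the current iteration n (period 10).
    f = math.sin(2 * math.pi * n / 10)
    if f > 0.95:
        return "D"  # inflated ego: always defect
    if f < -0.95:
        return "C"  # rock bottom: always cooperate
    if abs(f) < 0.3:
        return "C" if rng() < 0.5 else "D"  # random behavior
    return opp_last  # rational band: Tit For Tat

selfsteem_move(2, "C")  # "D": sin(0.4 * pi) ~ 0.951 > 0.95
selfsteem_move(7, "D")  # "C": sin(1.4 * pi) ~ -0.951 < -0.95
```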

class
axelrod.strategies.stalker.
Stalker
[source]¶ This is a strategy which is only influenced by the score. Its behavior is based on three values: the very_bad_score (all rounds in defection), the very_good_score (all rounds in cooperation), and the wish_score (the average of the very_bad_score and the very_good_score).
It starts with cooperation.
 If current_average_score > very_good_score, it defects
 If current_average_score lies in (wish_score, very_good_score) it cooperates
 If current_average_score > 2, it cooperates
 If current_average_score lies in (1, 2), it defects
 In the remaining case, current_average_score < 1, it behaves randomly.
 It defects in the last round
Names:
 Stalker: [Andre2013]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length', 'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Stalker'¶

strategy
(opponent)¶

class
axelrod.strategies.titfortat.
AdaptiveTitForTat
(rate: float = 0.5) → None[source]¶ ATFT - Adaptive Tit For Tat (Basic Model)
Algorithm
if (opponent played C in the last cycle) then world = world + r*(1-world) else world = world + r*(0-world). If (world >= 0.5) play C, else play D
Attributes
 world : float [0.0, 1.0], set to 0.5
 continuous variable representing the world’s image: 1.0 - total cooperation, 0.0 - total defection, other values - something in between. Updated every round; the starting value shouldn’t matter as long as it’s >= 0.5
Parameters
 rate : float [0.0, 1.0], default=0.5
 adaptation rate - r in the Algorithm above; a smaller value means more gradual behaviour that is robust to perturbations
Names:
 Adaptive Tit For Tat: [Tzafestas2000]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Adaptive Tit For Tat'¶

strategy
(opponent: axelrod.player.Player) → axelrod.action.Action[source]¶ This is a placeholder strategy.

world
= 0.5¶
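The world update in the Algorithm above is an exponential moving average toward 1 (after opponent cooperation) or 0 (after defection). A minimal sketch (hypothetical helper name):

```python
def atft_world_update(world, opponent_cooperated, rate=0.5):
    # Move world a fraction `rate` of the way toward the target.
    target = 1.0 if opponent_cooperated else 0.0
    return world + rate * (target - world)

world = 0.5
world = atft_world_update(world, True)   # 0.75  -> play C (world >= 0.5)
world = atft_world_update(world, False)  # 0.375 -> play D
```

A smaller rate makes world drift more slowly, which is what makes the strategy robust to occasional (e.g. noisy) defections.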

class
axelrod.strategies.titfortat.
Alexei
[source]¶ Plays similarly to Tit For Tat, but always defects on the last turn.
Names:
 Alexei: [LessWrong2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Alexei'¶

strategy
(opponent)¶

class
axelrod.strategies.titfortat.
AntiTitForTat
[source]¶ A strategy that plays the opposite of the opponent’s previous move. This is similar to Bully, except that the first move is cooperation.
Names:
 Anti Tit For Tat: [Hilbe2013]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Anti Tit For Tat'¶

class
axelrod.strategies.titfortat.
Bully
[source]¶ A player that behaves opposite to Tit For Tat, including first move.
Starts by defecting and then does the opposite of opponent’s previous move. This is the complete opposite of Tit For Tat, also called Bully in the literature.
Names:
 Reverse Tit For Tat: [Nachbar1992]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Bully'¶

class
axelrod.strategies.titfortat.
ContriteTitForTat
[source]¶ A player that corresponds to Tit For Tat if there is no noise. In the case of a noisy match: if the opponent defects as a result of a noisy defection then ContriteTitForTat will become ‘contrite’ until it successfully cooperates.
Names:
 Contrite Tit For Tat: [Axelrod1995]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Contrite Tit For Tat'¶

original_class
¶ alias of
ContriteTitForTat

strategy
(opponent)¶

class
axelrod.strategies.titfortat.
DynamicTwoTitsForTat
[source]¶ A player starts by cooperating and then punishes its opponent’s defections with defections, but with a dynamic bias towards cooperating based on the opponent’s ratio of cooperations to total moves (i.e. this player’s current probability of cooperating regardless of the opponent’s last move, aka forgiveness).
Names:
 Dynamic Two Tits For Tat: Original name by Grant GarrettGrossman.

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Dynamic Two Tits For Tat'¶

class
axelrod.strategies.titfortat.
EugineNier
[source]¶ Plays similarly to Tit For Tat, but with two conditions: 1) always defects on the last move; 2) if the other player defects five times, switches to all defections.
Names:
 Eugine Nier: [LessWrong2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'EugineNier'¶

original_class
¶ alias of
EugineNier

strategy
(opponent)¶

class
axelrod.strategies.titfortat.
Gradual
→ None[source]¶ A player that punishes defections with a growing number of defections but after punishing enters a calming state and cooperates no matter what the opponent does for two rounds.
Names:
 Gradual: [Beaufils1997]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Gradual'¶

class
axelrod.strategies.titfortat.
HardTitFor2Tats
[source]¶ A variant of Tit For Two Tats that uses a longer history for retaliation.
Names:
 Hard Tit For Two Tats: [Stewart2012]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'Hard Tit For 2 Tats'¶

class
axelrod.strategies.titfortat.
HardTitForTat
[source]¶ A variant of Tit For Tat that uses a longer history for retaliation.
Names:
 Hard Tit For Tat: [PD2017]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 3, 'stochastic': False}¶

name
= 'Hard Tit For Tat'¶

class
axelrod.strategies.titfortat.
Michaelos
[source]¶ Plays similarly to Tit For Tat with two exceptions: 1) defects on the last turn; 2) after its own defection and the opponent’s cooperation, cooperates 50 percent of the time and, the other 50 percent of the time, always defects for the rest of the game.
Names:
 Michaelos: [LessWrong2011]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

decorator
= <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>¶

name
= 'Michaelos'¶

strategy
(opponent)¶

class
axelrod.strategies.titfortat.
NTitsForMTats
(N: int = 3, M: int = 2) → None[source]¶ A parameterizable Tit For Tat. The arguments are: 1) M: the number of defections before retaliation; 2) N: the number of retaliations.
Names:
 N Tit(s) For M Tat(s): Original name by Marc Harper

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'N Tit(s) For M Tat(s)'¶

class
axelrod.strategies.titfortat.
OmegaTFT
(deadlock_threshold: int = 3, randomness_threshold: int = 8) → None[source]¶ OmegaTFT modifies Tit For Tat in two ways: it checks for deadlock loops of alternating rounds of (C, D) and (D, C) and attempts to break them, and it uses a more sophisticated retaliation mechanism that is noise tolerant.
Names:
 OmegaTFT: [Slany2007]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Omega TFT'¶

class
axelrod.strategies.titfortat.
RandomTitForTat
(p: float = 0.5) → None[source]¶ A player starts by cooperating and then follows by copying its opponent (tit for tat style). From then on, the player alternates between copying its opponent and responding randomly every other iteration.
Name:
 Random TitForTat: Original name by Zachary M. Taylor

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'Random Tit for Tat'¶

class
axelrod.strategies.titfortat.
SlowTitForTwoTats2
[source]¶ A player plays C twice, then, if the opponent plays the same move twice, plays that move; otherwise plays its previous move.
Names:
 Slow Tit For Tat: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Slow Tit For Two Tats 2'¶

class
axelrod.strategies.titfortat.
SneakyTitForTat
[source]¶ Tries defecting once and repents if punished.
Names:
 Sneaky Tit For Tat: Original name by Karol Langner

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Sneaky Tit For Tat'¶

class
axelrod.strategies.titfortat.
SpitefulTitForTat
→ None[source]¶ A player starts by cooperating and then mimics the previous action of the opponent until the opponent defects twice in a row, at which point the player always defects.
Names:
 Spiteful Tit For Tat: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'Spiteful Tit For Tat'¶

class
axelrod.strategies.titfortat.
SuspiciousTitForTat
[source]¶ A variant of Tit For Tat that starts off with a defection.
Names:
 Suspicious Tit For Tat: [Hilbe2013]
 Mistrust: [Beaufils1997]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Suspicious Tit For Tat'¶

class
axelrod.strategies.titfortat.
TitFor2Tats
[source]¶ A player starts by cooperating and then defects only after two defects by opponent.
Names:
 Tit for two Tats: [Axelrod1984]
 Slow tit for two tats: Original name by Ranjini Das

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Tit For 2 Tats'¶

class
axelrod.strategies.titfortat.
TitForTat
[source]¶ A player starts by cooperating and then mimics the previous action of the opponent.
This strategy was referred to as the ‘simplest’ strategy submitted to Axelrod’s first tournament. It came first.
Note that the code for this strategy is written in a fairly verbose way. This is done so that it can serve as an example strategy for those who might be new to Python.
Names:
 Rapoport’s strategy: [Axelrod1980]
 TitForTat: [Axelrod1980]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': False}¶

name
= 'Tit For Tat'¶

class
axelrod.strategies.titfortat.
TwoTitsForTat
[source]¶ A player starts by cooperating and replies to each defect by two defections.
Names:
 Two Tits for Tats: [Axelrod1984]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 2, 'stochastic': False}¶

name
= 'Two Tits For Tat'¶

class
axelrod.strategies.verybad.
VeryBad
[source]¶ It cooperates in the first three rounds, and uses probability (it implements a memory which stores the opponent’s moves) to decide whether to cooperate or defect. Due to a lack of information as to what that probability refers to in this context, probability P(X) refers to Count(X) / Total_Moves in this implementation: P(C) = Cooperations / Total_Moves, P(D) = Defections / Total_Moves = 1 - P(C)
Names:
 VeryBad: [Andre2013]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': False}¶

name
= 'VeryBad'¶
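The empirical probabilities named above reduce to simple frequency counts. A minimal sketch (hypothetical helper; the library stores the memory on the player):

```python
def empirical_probabilities(opponent_moves):
    # P(C) = Cooperations / Total_Moves, P(D) = 1 - P(C)
    total = len(opponent_moves)
    p_c = opponent_moves.count("C") / total
    return p_c, 1.0 - p_c

empirical_probabilities(["C", "D", "C", "C"])  # (0.75, 0.25)
```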

class
axelrod.strategies.worse_and_worse.
KnowledgeableWorseAndWorse
[source]¶ This strategy is based on ‘Worse And Worse’ but will defect with probability of ‘current turn / total no. of turns’.
Names:
 Knowledgeable Worse and Worse: Original name by Adam Pohl

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'length'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Knowledgeable Worse and Worse'¶

class
axelrod.strategies.worse_and_worse.
WorseAndWorse
[source]¶ Defects with probability of ‘current turn / 1000’. Therefore it is more and more likely to defect as the round goes on.
Source code available at the download tab of [Prison1998]
Names:
 Worse and Worse: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Worse and Worse'¶
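The rule is a single Bernoulli draw per turn with a probability that grows linearly in the turn number. A minimal sketch (hypothetical helper; the rng parameter is injected here only to make the example deterministic):

```python
import random

def worse_and_worse_move(turn, rng=random.random):
    # Defect with probability turn / 1000, so defection grows more likely each turn.
    return "D" if rng() < turn / 1000 else "C"

worse_and_worse_move(100, rng=lambda: 0.5)  # "C": 0.5 >= 0.1
worse_and_worse_move(900, rng=lambda: 0.5)  # "D": 0.5 < 0.9
```

The variants below change only the probability: WorseAndWorse2 uses (current turn - 20) / current turn after 20 turns of Tit For Tat, and WorseAndWorse3 uses the opponent's defection fraction.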

class
axelrod.strategies.worse_and_worse.
WorseAndWorse2
[source]¶ Plays as tit for tat during the first 20 moves. Then defects with probability (current turn - 20) / current turn. Therefore it is more and more likely to defect as the round goes on.
Names:
 Worse and Worse 2: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Worse and Worse 2'¶

class
axelrod.strategies.worse_and_worse.
WorseAndWorse3
[source]¶ Cooperates in the first turn. Then defects with probability no. of opponent defections / (current turn - 1). Therefore it is more likely to defect when the opponent defects for a larger proportion of the turns.
Names:
 Worse and Worse 3: [Prison1998]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': set(), 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': inf, 'stochastic': True}¶

name
= 'Worse and Worse 3'¶

class
axelrod.strategies.zero_determinant.
LRPlayer
(phi: float = 0.2, s: float = 0.1, l: float = 1) → None[source]¶ Abstraction for Linear Relation players. These players enforce a linear difference in stationary payoffs s * (S_xy - l) = S_yx - l, with 0 <= l <= R. The parameter s is called the slope and the parameter l the baseline payoff. For extortionate strategies, the extortion factor is the inverse of the slope.
This parameterization is Equation 14 in http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0077886. See Figure 2 of the article for a more in-depth explanation.
Names:
 Linear Relation player: [Hilbe2013]

classifier
= {'inspects_source': False, 'long_run_time': False, 'makes_use_of': {'game'}, 'manipulates_source': False, 'manipulates_state': False, 'memory_depth': 1, 'stochastic': True}¶

name
= 'LinearRelation'¶
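The enforced relation pins down the player's stationary payoff as a function of the opponent's. A sketch of the algebra only (hypothetical helper; it does not compute the memory-one probabilities that realize the relation):

```python
def enforced_payoff(s, l, opponent_payoff):
    # Rearranging s * (S_xy - l) = S_yx - l for the player's payoff S_yx.
    return s * (opponent_payoff - l) + l

# Extort2-style parameters: slope s = 0.5, baseline l = 1 (the P payoff),
# so the extortion factor is 1 / s = 2: the player concedes half the
# opponent's surplus over the baseline.
enforced_payoff(0.5, 1, 3)  # 2.0
```

The ZD* subclasses below are LRPlayer with particular (phi, s, l) choices; e.g. the generous variants raise l toward R, the extortionate ones keep l at P.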

class
axelrod.strategies.zero_determinant.
ZDExtort2
(phi: float = 0.1111111111111111, s: float = 0.5) → None[source]¶ An Extortionate Zero Determinant Strategy with l=P.
Names:
 Extort2: [Stewart2012]

name
= 'ZDExtort2'¶

class
axelrod.strategies.zero_determinant.
ZDExtort2v2
(phi: float = 0.125, s: float = 0.5, l: float = 1) → None[source]¶ An Extortionate Zero Determinant Strategy with l=1.
Names:
 EXTORT2: [Kuhn2017]

name
= 'ZDExtort2 v2'¶

class
axelrod.strategies.zero_determinant.
ZDExtort3
(phi: float = 0.11538461538461539, s: float = 0.3333333333333333, l: float = 1) → None[source]¶ An extortionate strategy from Press and Dyson’s paper with an extortion factor of 3.
Names:
 ZDExtort3: Original name by Marc Harper
 Unnamed: [Press2012]

name
= 'ZDExtort3'¶

class
axelrod.strategies.zero_determinant.
ZDExtort4
(phi: float = 0.23529411764705882, s: float = 0.25, l: float = 1) → None[source]¶ An Extortionate Zero Determinant Strategy with l=1, s=1/4. TFT is the other extreme (with l=3, s=1)
Names:
 Extort 4: Original name by Marc Harper

name
= 'ZDExtort4'¶

class
axelrod.strategies.zero_determinant.
ZDExtortion
(phi: float = 0.2, s: float = 0.1, l: float = 1) → None[source]¶ An example ZD Extortion player.
Names:
 ZDExtortion: [Roemheld2013]

name
= 'ZDExtortion'¶

class
axelrod.strategies.zero_determinant.
ZDGTFT2
(phi: float = 0.25, s: float = 0.5) → None[source]¶ A Generous Zero Determinant Strategy with l=R.
Names:
 ZDGTFT2: [Stewart2012]

name
= 'ZDGTFT2'¶

class
axelrod.strategies.zero_determinant.
ZDGen2
(phi: float = 0.125, s: float = 0.5, l: float = 3) → None[source]¶ A Generous Zero Determinant Strategy with l=3.
Names:
 GEN2: [Kuhn2017]

name
= 'ZDGEN2'¶

class
axelrod.strategies.zero_determinant.
ZDMischief
(phi: float = 0.1, s: float = 0.0, l: float = 1) → None[source]¶ An example ZD Mischief player.
Names:
 ZDMischief: [Roemheld2013]

name
= 'ZDMischief'¶

class
axelrod.strategies.zero_determinant.
ZDSet2
(phi: float = 0.25, s: float = 0.0, l: float = 2) → None[source]¶ A Generous Zero Determinant Strategy with l=2.
Names:
 SET2: [Kuhn2017]

name
= 'ZDSET2'¶