Strategies index

Here are the docstrings of all the strategies in the library.
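
All of the classes below implement the common Player interface, so any of them can be dropped into a Match or Tournament. A minimal usage sketch (assuming a recent release in which the strategies and Match are exposed on the top-level axelrod namespace):

    import axelrod as axl

    # Any two strategies from this index can be matched against each other.
    players = (axl.Alternator(), axl.Cooperator())
    match = axl.Match(players, turns=6)
    print(match.play())         # list of (action, action) tuples, one per turn
    print(match.final_score())  # cumulative scores of the two players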

class axelrod.strategies.adaptive.Adaptive(initial_plays: typing.List[axelrod.action.Action] = None) → None[source]

Start with a specific sequence of C and D, then play the strategy that has worked best, recalculated each turn.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': {'game'}, 'inspects_source': False, 'manipulates_state': False}
name = 'Adaptive'
score_last_round(opponent: axelrod.player.Player)[source]
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.alternator.Alternator[source]

A player who alternates between cooperating and defecting.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Alternator'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.ann.ANN(weights: typing.List[float], num_features: int, num_hidden: int) → None[source]

Artificial Neural Network based strategy.

A single layer neural network based strategy, with the following features:

  • Opponent’s first move is C
  • Opponent’s first move is D
  • Opponent’s second move is C
  • Opponent’s second move is D
  • Player’s previous move is C
  • Player’s previous move is D
  • Player’s second previous move is C
  • Player’s second previous move is D
  • Opponent’s previous move is C
  • Opponent’s previous move is D
  • Opponent’s second previous move is C
  • Opponent’s second previous move is D
  • Total opponent cooperations
  • Total opponent defections
  • Total player cooperations
  • Total player defections
  • Round number

Original Source: https://gist.github.com/mojones/550b32c46a8169bb3cd89d917b73111a#file-ann-strategy-test-L60

Names

  • Artificial Neural Network based strategy: Original name by Martin Jones
classifier = {'manipulates_state': False, 'manipulates_source': False, 'stochastic': False, 'long_run_time': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False}
name = 'ANN'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.ann.EvolvedANN → None[source]

A strategy based on a pre-trained neural network with 17 features and a hidden layer of size 10.

Names:

  • Evolved ANN: Original name by Martin Jones.
name = 'Evolved ANN'
class axelrod.strategies.ann.EvolvedANN5 → None[source]

A strategy based on a pre-trained neural network with 17 features and a hidden layer of size 5.

Names:

  • Evolved ANN 5: Original name by Marc Harper.
name = 'Evolved ANN 5'
class axelrod.strategies.ann.EvolvedANNNoise05 → None[source]

A strategy based on a pre-trained neural network with a hidden layer of size 10, trained with noise=0.05.

Names:

  • Evolved ANN Noise 05: Original name by Marc Harper.
name = 'Evolved ANN 5 Noise 05'
axelrod.strategies.ann.activate(bias: typing.List[float], hidden: typing.List[float], output: typing.List[float], inputs: typing.List[int]) → float[source]
Compute the output of the neural network:
output = relu(inputs * hidden_weights + bias) * output_weights
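
Illustratively, the formula above can be written with NumPy as follows. This is only a sketch of the computation (the weight shapes and argument order are assumptions), not the library's internal implementation:

    import numpy as np

    def activate_sketch(bias, hidden_weights, output_weights, inputs):
        # Hidden layer: relu(inputs . hidden_weights + bias), one value per hidden node.
        hidden = np.maximum(0, np.dot(hidden_weights, inputs) + bias)
        # Single scalar output: weighted sum of the hidden activations.
        return float(np.dot(output_weights, hidden))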
axelrod.strategies.ann.compute_features(player: axelrod.player.Player, opponent: axelrod.player.Player) → typing.List[int][source]

Compute history features for Neural Network:

  • Opponent’s first move is C
  • Opponent’s first move is D
  • Opponent’s second move is C
  • Opponent’s second move is D
  • Player’s previous move is C
  • Player’s previous move is D
  • Player’s second previous move is C
  • Player’s second previous move is D
  • Opponent’s previous move is C
  • Opponent’s previous move is D
  • Opponent’s second previous move is C
  • Opponent’s second previous move is D
  • Total opponent cooperations
  • Total opponent defections
  • Total player cooperations
  • Total player defections
  • Round number

axelrod.strategies.ann.split_weights(weights: typing.List[float], num_features: int, num_hidden: int) → typing.Tuple[typing.List[typing.List[float]], typing.List[float], typing.List[float]][source]

Splits the input vector into the NN bias weights and layer parameters.

class axelrod.strategies.apavlov.APavlov2006 → None[source]

APavlov attempts to classify its opponent as one of five strategies: Cooperative, ALLD, STFT, PavlovD, or Random. APavlov then responds in a manner intended to achieve mutual cooperation or to defect against uncooperative opponents.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Adaptive Pavlov 2006'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.apavlov.APavlov2011 → None[source]

APavlov attempts to classify its opponent as one of four strategies: Cooperative, ALLD, STFT, or Random. APavlov then responds in a manner intended to achieve mutual cooperation or to defect against uncooperative opponents.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Adaptive Pavlov 2011'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.appeaser.Appeaser[source]

A player who tries to guess what the opponent wants.

Switches its move every time the opponent plays D: starts with C, then toggles between C and D whenever the opponent defects.

Names:

  • Appeaser: Original Name by Jochen Müller
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Appeaser'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.averagecopier.AverageCopier[source]

The player will cooperate with probability p if the opponent’s cooperation ratio is p. Starts with a random decision.

Names:

  • Average Copier: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Average Copier'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.averagecopier.NiceAverageCopier[source]

Same as Average Copier, but always starts by cooperating.

Names:

  • Nice Average Copier: Original name by Owen Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Nice Average Copier'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Additional strategies from Axelrod’s first tournament.

class axelrod.strategies.axelrod_first.Davis(rounds_to_cooperate: int = 10) → None[source]

Submitted to Axelrod’s first tournament by Morton Davis.

A player starts by cooperating for 10 rounds then plays Grudger, defecting if at any point the opponent has defected.

This strategy came 8th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Davis'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D for the remaining rounds if the opponent ever plays D.

class axelrod.strategies.axelrod_first.Feld(start_coop_prob: float = 1.0, end_coop_prob: float = 0.5, rounds_of_decay: int = 200) → None[source]

Submitted to Axelrod’s first tournament by Scott Feld.

This strategy plays Tit For Tat, always defecting if the opponent defects but cooperating when the opponent cooperates with a gradually decreasing probability until it is only .5.

This strategy came 11th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 200, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Feld'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.Grofman[source]

Submitted to Axelrod’s first tournament by Bernard Grofman.

Cooperates on the first two rounds and returns the opponent’s last action for the next 5 rounds. For the rest of the game Grofman cooperates if both players selected the same action in the previous round, and otherwise cooperates randomly with probability 2/7.

This strategy came 4th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Grofman'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.Joss(p: float = 0.9) → None[source]

Submitted to Axelrod’s first tournament by Johann Joss.

Cooperates with probability 0.9 when the opponent cooperates, otherwise emulates Tit-For-Tat.

This strategy came 12th in Axelrod’s original tournament.

Names:

name = 'Joss'
class axelrod.strategies.axelrod_first.Nydegger → None[source]

Submitted to Axelrod’s first tournament by Rudy Nydegger.

The program begins with tit for tat for the first three moves, except that if it was the only one to cooperate on the first move and the only one to defect on the second move, it defects on the third move. After the third move, its choice is determined from the 3 preceding outcomes in the following manner.

\[A = 16 a_1 + 4 a_2 + a_3\]

Where \(a_i\) depends on the outcome of the \(i\)th preceding round: if both strategies defected, \(a_i=3\); if only the opponent defected, \(a_i=2\); if only this strategy defected, \(a_i=1\); and if both cooperated, \(a_i=0\).

Finally this strategy defects if and only if:

\[A \in \{1, 6, 7, 17, 22, 23, 26, 29, 30, 31, 33, 38, 39, 45, 49, 54, 55, 58, 61\}\]

Thus if all three preceding moves are mutual defection, A = 63 and the rule cooperates. This rule was designed for use in laboratory experiments as a stooge which had a memory and appeared to be trustworthy, potentially cooperative, but not gullible.
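
As a standalone worked example of the formula (not the library’s score_history method, which takes an explicit score map), A can be computed from the last three rounds like this:

    from axelrod.action import Action
    C, D = Action.C, Action.D

    # a_i for a single past round: 3 if both defected, 2 if only the opponent
    # defected, 1 if only this strategy defected, 0 if both cooperated.
    score_map = {(C, C): 0, (C, D): 2, (D, C): 1, (D, D): 3}

    def nydegger_A(last_three_rounds):
        """last_three_rounds: (my_move, opponent_move) pairs, oldest first."""
        a3, a2, a1 = (score_map[outcome] for outcome in last_three_rounds)
        return 16 * a1 + 4 * a2 + a3

    # Three rounds of mutual defection give A = 63, so the rule cooperates.
    assert nydegger_A([(D, D), (D, D), (D, D)]) == 63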

This strategy came 3rd in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Nydegger'
static score_history(my_history: typing.List[axelrod.action.Action], opponent_history: typing.List[axelrod.action.Action], score_map: typing.Dict[typing.Tuple[axelrod.action.Action, axelrod.action.Action], int]) → int[source]

Implements the Nydegger formula A = 16 a_1 + 4 a_2 + a_3

strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.RevisedDowning(revised: bool = True) → None[source]

This strategy attempts to estimate the next move of the opponent by estimating the probability of cooperating given that they defected (\(p(C|D)\)) or cooperated (\(p(C|C)\)) on the previous round. These probabilities are continuously updated during play and the strategy attempts to maximise its long-term payoff. Note that the initial values are \(p(C|C)=p(C|D)=.5\).

Downing is implemented as RevisedDowning. Apparently in the first tournament the strategy was implemented incorrectly and defected on the first two rounds. This can be controlled by setting revised=True to prevent the initial defections.

This strategy came 10th in Axelrod’s original tournament but would have won if it had been implemented correctly.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Revised Downing'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.Shubik → None[source]

Submitted to Axelrod’s first tournament by Martin Shubik.

Plays like Tit-For-Tat with the following modification. After each retaliation, the number of rounds that Shubik retaliates increases by 1.

This strategy came 5th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Shubik'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.SteinAndRapoport(alpha: float = 0.05) → None[source]

This strategy plays a modification of Tit For Tat.

  1. It cooperates for the first 4 moves.
  2. It defects on the last 2 moves.
  3. Every 15 moves it makes use of a chi-squared test to check if the opponent is playing randomly.

This strategy came 6th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 15, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Stein and Rapoport'
original_class

alias of SteinAndRapoport

strategy(opponent)
class axelrod.strategies.axelrod_first.TidemanAndChieruzzi → None[source]

This strategy begins by playing Tit For Tat and then follows the following rules:

  1. Every run of defections played by the opponent increases the number of defections that this strategy retaliates with by 1.

  2. The opponent is given a ‘fresh start’ if:
    • it is 10 points behind this strategy
    • and it has not just started a run of defections
    • and it has been at least 20 rounds since the last ‘fresh start’
    • and there are more than 10 rounds remaining in the match
    • and the total number of defections differs from a 50-50 random sample by at least 3.0 standard deviations.

A ‘fresh start’ is a sequence of two cooperations followed by an assumption that the game has just started (everything is forgotten).

This strategy came 2nd in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': {'length', 'game'}, 'inspects_source': False, 'manipulates_state': False}
name = 'Tideman and Chieruzzi'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.Tullock(rounds_to_cooperate: int = 11) → None[source]

Submitted to Axelrod’s first tournament by Gordon Tullock.

Cooperates for the first 11 rounds then randomly cooperates 10% less often than the opponent has in previous rounds.

This strategy came 13th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 11, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Tullock'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_first.UnnamedStrategy[source]

Apparently written by a grad student in political science whose name was withheld, this strategy cooperates with a given probability P. This probability (which has initial value .3) is updated every 10 rounds based on whether the opponent seems to be random, very cooperative or very uncooperative. Furthermore, if after round 130 the strategy is losing then P is also adjusted.

Fourteenth Place with 282.2 points is a 77-line program by a graduate student of political science whose dissertation is in game theory. This rule has a probability of cooperating, P, which is initially 30% and is updated every 10 moves. P is adjusted if the other player seems random, very cooperative, or very uncooperative. P is also adjusted after move 130 if the rule has a lower score than the other player. Unfortunately, the complex process of adjustment frequently left the probability of cooperation in the 30% to 70% range, and therefore the rule appeared random to many other players.

Names:

Warning: This strategy is not identical to the original strategy (source unavailable) and was written based on published descriptions.

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 0, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Unnamed Strategy'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Additional strategies from Axelrod’s second tournament.

class axelrod.strategies.axelrod_second.Champion[source]

Strategy submitted to Axelrod’s second tournament by Danny Champion.

This player cooperates on the first 10 moves and plays Tit for Tat for the next 15 moves. After 25 moves, the program cooperates unless all of the following are true: the other player defected on the previous move, the other player has cooperated less than 60% of the time, and a random number between 0 and 1 is greater than the other player’s cooperation rate.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
name = 'Champion'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_second.Eatherley[source]

Strategy submitted to Axelrod’s second tournament by Graham Eatherley.

A player that keeps track of how many times in the game the other player defected. After the other player defects, it defects with a probability equal to the ratio of the other’s total defections to the total moves to that point.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Eatherley'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_second.Gladstein → None[source]

Submitted to Axelrod’s second tournament by David Gladstein.

This strategy is also known as Tester and is based on the reverse engineering of the Fortran strategies from Axelrod’s second tournament.

This strategy is a TFT variant that defects on the first round in order to test the opponent’s response. If the opponent ever defects, the strategy ‘apologizes’ by cooperating and then plays TFT for the rest of the game. Otherwise, it defects as much as possible subject to the constraint that the ratio of its defections to moves remains under 0.5, not counting the first defection.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Gladstein'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.axelrod_second.Tester → None[source]

Submitted to Axelrod’s second tournament by David Gladstein.

This strategy is a TFT variant that attempts to exploit certain strategies. It defects on the first move. If the opponent ever defects, TESTER ‘apologizes’ by cooperating and then plays TFT for the rest of the game. Otherwise TESTER alternates cooperation and defection.

This strategy came 46th in Axelrod’s second tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Tester'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.backstabber.BackStabber[source]

Forgives the first 3 defections but on the fourth will defect forever. Defects on the last 2 rounds unconditionally.

Names:

  • Backstabber: Original name by Thomas Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'BackStabber'
original_class

alias of BackStabber

strategy(opponent)
class axelrod.strategies.backstabber.DoubleCrosser[source]

Forgives the first 3 defections but on the fourth will defect forever. Defects on the last 2 rounds unconditionally.

If the current round is between 8 and 180 (inclusive) and the opponent did not defect in the first 7 rounds, the player will only defect after the opponent has defected twice in a row.

Names:

  • Double Crosser: Original name by Thomas Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'DoubleCrosser'
original_class

alias of DoubleCrosser

strategy(opponent)
class axelrod.strategies.better_and_better.BetterAndBetter[source]

Defects with probability (1000 - current turn) / 1000. Therefore it is less and less likely to defect as the game goes on.

Names:
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Better and Better'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
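
For example, the defection probability declines linearly with the turn number; a small self-contained illustration of the formula above:

    def defection_probability(turn):
        # Probability of defecting on the given turn, per the formula above.
        return (1000 - turn) / 1000

    print(defection_probability(1))    # 0.999
    print(defection_probability(500))  # 0.5
    print(defection_probability(995))  # 0.005
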
class axelrod.strategies.calculator.Calculator → None[source]

Plays like (Hard) Joss for the first 20 rounds. If periodic behavior is detected, defect forever. Otherwise play TFT.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
extended_strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
name = 'Calculator'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.cooperator.Cooperator[source]

A player who only ever cooperates.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 0, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cooperator'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.cooperator.TrickyCooperator[source]

A cooperator that is trying to be tricky.

Names:

  • Tricky Cooperator: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Tricky Cooperator'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Almost always cooperates, but will try to trick the opponent by defecting.

Defects once in a while in order to get a better payoff. After 3 rounds, defects if the opponent has not defected within the last 10 rounds (the maximum history depth considered).

class axelrod.strategies.cycler.AntiCycler → None[source]

A player that follows a sequence of plays that contains no cycles: CDD CD CCD CCCD CCCCD ...

Names:

  • Anti Cycler: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'AntiCycler'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.cycler.Cycler(cycle: str = 'CCD') → None[source]

A player that repeats a given sequence indefinitely.

Names:

  • Cycler: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
get_new_itertools_cycle()[source]
name = 'Cycler'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
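
For example, a Cycler repeating C, C, D can be played against Tit For Tat (again assuming the top-level axelrod namespace, as in the earlier sketch):

    import axelrod as axl

    cycler = axl.Cycler(cycle="CCD")
    match = axl.Match((cycler, axl.TitForTat()), turns=9)
    # The first element of each tuple repeats the C, C, D pattern.
    print(match.play())
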
class axelrod.strategies.cycler.CyclerCCCCCD → None[source]

Cycles C, C, C, C, C, D

Names:

  • Cycler CCCCCD: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycler CCCCCD'
class axelrod.strategies.cycler.CyclerCCCD → None[source]

Cycles C, C, C, D

Names:

  • Cycler CCCD: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycler CCCD'
class axelrod.strategies.cycler.CyclerCCCDCD → None[source]

Cycles C, C, C, D, C, D

Names:

  • Cycler CCCDCD: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycler CCCDCD'
class axelrod.strategies.cycler.CyclerCCD → None[source]

Cycles C, C, D

Names:

  • Cycler CCD: Original name by Marc Harper
  • Periodic player CCD: [Mittal2009]
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycler CCD'
class axelrod.strategies.cycler.CyclerDC → None[source]

Cycles D, C

Names:

  • Cycler DC: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycler DC'
class axelrod.strategies.cycler.CyclerDDC → None[source]

Cycles D, D, C

Names:

  • Cycler DDC: Original name by Marc Harper
  • Periodic player DDC: [Mittal2009]
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycler DDC'

The player class in this module does not obey the standard rules of the IPD (as indicated by its classifier). We do not recommend putting a lot of time into optimising it.

class axelrod.strategies.darwin.Darwin → None[source]

A strategy which accumulates a record (the ‘genome’) of what the most favourable response in the previous round should have been, and naively assumes that this will remain the correct response at the same round of future trials.

This ‘genome’ is preserved between opponents, rounds and repetitions of the tournament. It is a characteristic of the class itself, and so a single copy is shared by all instances each time the class is loaded.

As this results in information being preserved between tournaments, this is classified as a cheating strategy!

If no record yet exists, the opponent’s response from the previous round is returned.

Names:

  • Darwin: Original name by Paul Slavin
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': True}
static foil_strategy_inspection() → axelrod.action.Action[source]

Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

genome = [C]
mutate(outcome: tuple, trial: int) → None[source]

Select response according to outcome.

name = 'Darwin'
receive_match_attributes()[source]
reset()[source]

Reset instance properties.

static reset_genome() → None[source]

For use in testing methods.

strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
valid_callers = ['play']
class axelrod.strategies.dbs.DBS(discount_factor=0.75, promotion_threshold=3, violation_threshold=4, reject_threshold=3, tree_depth=5)[source]

A strategy that learns the opponent’s strategy and uses symbolic noise detection for detecting whether anomalies in player’s behavior are deliberate or accidental. From the learned opponent’s strategy, a tree search is used to choose the best move.

Default values for the parameters are the values suggested in the article. When noise increases you can try lowering violation_threshold and reject_threshold.

Names

classifier = {'long_run_time': True, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
compute_prob_rule(outcome, alpha=1)[source]

Uses the game history to compute the probability of the opponent playing C in the given outcome situation (for example outcome = (C, C)). When alpha = 1, the result is approximately equal to the frequency with which the opponent has cooperated after that outcome. alpha is a discount factor that gives more weight to recent events than earlier ones.

Parameters

outcome: tuple of two actions.Action
alpha: int, optional. Discount factor. Default is 1.

name = 'DBS'
should_demote(r_minus, violation_threshold=4)[source]

Checks if the number of successive violations of a deterministic rule (in the opponent’s behavior) exceeds the user-defined violation_threshold.

should_promote(r_plus, promotion_threshold=3)[source]

Determines whether the move r_plus is deterministic behaviour by the opponent, in which case it returns True, or is due to random behaviour (or noise) that would require a probabilistic rule, in which case it returns False.

To do so it looks into the game history: if, on the last k occasions the opponent was in the same situation as in r_plus, it played the same thing, then r_plus is considered a deterministic rule (where k is the user-defined promotion_threshold).

Parameters

r_plus: tuple of (tuple of actions.Action, actions.Action)
example: ((C, C), D) r_plus represents one outcome of the history, and the following move played by the opponent.
promotion_threshold: int, optional
Number of successive observations needed to promote an opponent behavior as a deterministic rule. Default is 3.
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
update_history_by_cond(opponent_history)[source]

Updates self.history_by_cond after each turn of the game.

class axelrod.strategies.dbs.DeterministicNode(action1, action2, depth)[source]

Nodes (C, C), (C, D), (D, C), or (D, D) with deterministic choice for siblings.

get_siblings(policy)[source]

Returns the sibling nodes of the current DeterministicNode. Builds 2 siblings, (C, X) and (D, X), that are StochasticNodes. Those siblings are of the same depth as the current node. Their probabilities pC are defined by the policy argument.

get_value()[source]
is_stochastic()[source]

Returns True if self is a StochasticNode.

class axelrod.strategies.dbs.Node[source]

Nodes used to build a tree for the tree-search procedure. The tree has Deterministic and Stochastic nodes, as the opponent’s strategy is learned as a probability distribution.

get_siblings()[source]
is_stochastic()[source]
class axelrod.strategies.dbs.StochasticNode(own_action, pC, depth)[source]

A node that has a probability pC of reaching each sibling. A StochasticNode can be written (C, X) or (D, X), with X = C with probability pC, else X = D.

get_siblings()[source]

Returns the sibling nodes of the current StochasticNode. There are two siblings, which are DeterministicNodes; their depth is equal to the current node’s depth + 1.

is_stochastic()[source]

Returns True if self is a StochasticNode.

axelrod.strategies.dbs.action_to_int(action)[source]
axelrod.strategies.dbs.create_policy(pCC, pCD, pDC, pDD)[source]

Creates a dict that represents a Policy. As defined in the reference, a Policy is a set of (prev_move, p) pairs where p is the probability of cooperating after prev_move, and prev_move can be (C, C), (C, D), (D, C) or (D, D).

Parameters

pCC, pCD, pDC, pDD : float
Must be between 0 and 1.
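
For instance, the returned policy maps each of the four previous-round outcomes to a cooperation probability. A small sketch based on the description above (the exact shape of the returned dict is assumed from that description):

    from axelrod.action import Action
    from axelrod.strategies.dbs import create_policy

    C, D = Action.C, Action.D

    # Probabilities of cooperating after (C, C), (C, D), (D, C) and (D, D).
    policy = create_policy(1, 1, 0, 0)
    # Expected form: {(C, C): 1, (C, D): 1, (D, C): 0, (D, D): 0}
    print(policy)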

Tree-search function (a minimax search procedure) that builds, by recursion, the tree corresponding to the opponent’s policy and solves it. Returns a tuple of two floats: the utility of playing C and the utility of playing D.

axelrod.strategies.dbs.move_gen(outcome, policy, depth_search_tree=5)[source]

Returns the best move considering opponent’s policy and last move, using tree-search procedure.

class axelrod.strategies.defector.Defector[source]

A player who only ever defects.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 0, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Defector'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.defector.TrickyDefector[source]

A defector that is trying to be tricky.

Names:

  • Tricky Defector: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Tricky Defector'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Almost always defects, but will try to trick the opponent into cooperating.

Defect if opponent has cooperated at least once in the past and has defected for the last 3 turns in a row.

class axelrod.strategies.doubler.Doubler[source]

Cooperates except when the opponent has defected and the opponent’s cooperation count is less than twice their defection count.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Doubler'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.finite_state_machines.EvolvedFSM16 → None[source]

A 16 state FSM player trained with an evolutionary algorithm.

Names:

  • Evolved FSM 16: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 16, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Evolved FSM 16'
class axelrod.strategies.finite_state_machines.EvolvedFSM16Noise05 → None[source]

A 16 state FSM player trained with an evolutionary algorithm with noisy matches (noise=0.05).

Names:

  • Evolved FSM 16 Noise 05: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 16, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Evolved FSM 16 Noise 05'
class axelrod.strategies.finite_state_machines.EvolvedFSM4 → None[source]

A 4 state FSM player trained with an evolutionary algorithm.

Names:

  • Evolved FSM 4: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 4, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Evolved FSM 4'
class axelrod.strategies.finite_state_machines.FSMPlayer(transitions: tuple = ((1, C, 1, C), (1, D, 1, D)), initial_state: int = 1, initial_action: axelrod.action.Action = C) → None[source]

Abstract base class for finite state machine players.

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'FSM Player'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.finite_state_machines.Fortress3 → None[source]

Finite state machine player specified in http://DOI.org/10.1109/CEC.2006.1688322.

Note that the description in http://www.graham-kendall.com/papers/lhk2011.pdf is not correct.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Fortress3'
class axelrod.strategies.finite_state_machines.Fortress4 → None[source]

Finite state machine player specified in http://DOI.org/10.1109/CEC.2006.1688322.

Note that the description in http://www.graham-kendall.com/papers/lhk2011.pdf is not correct.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 4, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Fortress4'
class axelrod.strategies.finite_state_machines.Predator → None[source]

Finite state machine player specified in http://DOI.org/10.1109/CEC.2006.1688322.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 9, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Predator'
class axelrod.strategies.finite_state_machines.Pun1 → None[source]

FSM player described in [Ashlock2006].

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Pun1'
class axelrod.strategies.finite_state_machines.Raider → None[source]

FSM player described in http://DOI.org/10.1109/FOCI.2014.7007818.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Raider'
class axelrod.strategies.finite_state_machines.Ripoff → None[source]

FSM player described in http://DOI.org/10.1109/TEVC.2008.920675.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Ripoff'
class axelrod.strategies.finite_state_machines.SimpleFSM(transitions: tuple, initial_state: int) → None[source]

Simple implementation of a finite state machine that transitions between states based on the last round of play.

https://en.wikipedia.org/wiki/Finite-state_machine

move(opponent_action: axelrod.action.Action) → axelrod.action.Action[source]

Computes the response move and changes state.

state
state_transitions
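
Using the transition format shown for FSMPlayer above, i.e. tuples of (state, opponent’s last action, next state, own action), a single-state machine that simply mirrors the opponent can be driven directly. A small sketch:

    from axelrod.action import Action
    from axelrod.strategies.finite_state_machines import SimpleFSM

    C, D = Action.C, Action.D

    # One state that replies with whatever the opponent just played (Tit For Tat).
    transitions = ((1, C, 1, C), (1, D, 1, D))
    fsm = SimpleFSM(transitions=transitions, initial_state=1)

    print(fsm.move(C))  # C
    print(fsm.move(D))  # D
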
class axelrod.strategies.finite_state_machines.SolutionB1 → None[source]

FSM player described in http://DOI.org/10.1109/TCIAIG.2014.2326012.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'SolutionB1'
class axelrod.strategies.finite_state_machines.SolutionB5 → None[source]

FSM player described in http://DOI.org/10.1109/TCIAIG.2014.2326012.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'SolutionB5'
class axelrod.strategies.finite_state_machines.TF1 → None[source]

A FSM player trained to maximize Moran fixation probabilities.

Names:

  • TF1: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'TF1'
class axelrod.strategies.finite_state_machines.TF2 → None[source]

A FSM player trained to maximize Moran fixation probabilities.

Names:

  • TF2: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'TF2'
class axelrod.strategies.finite_state_machines.TF3 → None[source]

A FSM player trained to maximize Moran fixation probabilities.

Names:

  • TF3: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'TF3'
class axelrod.strategies.finite_state_machines.Thumper → None[source]

FSM player described in http://DOI.org/10.1109/TEVC.2008.920675.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Thumper'
class axelrod.strategies.forgiver.Forgiver[source]

A player starts by cooperating however will defect if at any point the opponent has defected more than 10 percent of the time

Names:

  • Forgiver: Original name by Thomas Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Forgiver'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D if the opponent has defected more than 10 percent of the time.

class axelrod.strategies.forgiver.ForgivingTitForTat[source]

A player starts by cooperating however will defect if at any point, the opponent has defected more than 10 percent of the time, and their most recent decision was defect.

Names:

  • Forgiving Tit For Tat: Original name by Thomas Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Forgiving Tit For Tat'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D if the opponent has defected more than 10 percent of the time and their most recent decision was defect.

Stochastic variants of Lookup table based-strategies, trained with particle swarm algorithms.

For the original see:
https://gist.github.com/GDKO/60c3d0fd423598f3c4e4
class axelrod.strategies.gambler.Gambler(lookup_dict: dict = None, initial_actions: tuple = None, pattern: typing.Any = None, parameters: axelrod.strategies.lookerup.Plays = None) → None[source]

A stochastic version of LookerUp which will randomly select an action in some cases.

Names:

  • Gambler: Original name by Georgios Koutsovoulos
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Gambler'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.gambler.PSOGambler1_1_1 → None[source]

A 1x1x1 PSOGambler trained with pyswarm.

Names:

  • PSO Gambler 1_1_1: Original name by Marc Harper
name = 'PSO Gambler 1_1_1'
class axelrod.strategies.gambler.PSOGambler2_2_2 → None[source]

A 2x2x2 PSOGambler trained with a particle swarm algorithm (implemented in pyswarm). Original version by Georgios Koutsovoulos.

Names:

  • PSO Gambler 2_2_2: Original name by Marc Harper
name = 'PSO Gambler 2_2_2'
class axelrod.strategies.gambler.PSOGambler2_2_2_Noise05 → None[source]

A 2x2x2 PSOGambler trained with pyswarm with noise=0.05.

Names:

  • PSO Gambler 2_2_2 Noise 05: Original name by Marc Harper
name = 'PSO Gambler 2_2_2 Noise 05'
class axelrod.strategies.gambler.PSOGamblerMem1 → None[source]

A 1x1x0 PSOGambler trained with pyswarm. This is the ‘optimal’ memory-one strategy trained against the set of short run time strategies in the Axelrod library.

Names:

  • PSO Gambler Mem1: Original name by Marc Harper
name = 'PSO Gambler Mem1'
class axelrod.strategies.gambler.ZDMem2 → None[source]

A memory two generalization of a zero determinant player.

Names:

  • ZDMem2: Original name by Marc Harper
  • Unnamed [LiS2014]
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'ZD-Mem2'

The player classes in this module do not obey the standard rules of the IPD (as indicated by their classifier). We do not recommend putting a lot of time into optimising them.

class axelrod.strategies.geller.Geller[source]

Observes what the player will do in the next round and adjust.

If unable to do this, it will play randomly.

This code is inspired by Matthew Williams’ talk “Cheating at rock-paper-scissors — meta-programming in Python” given at Django Weekend Cardiff in February 2014.

His code is here: https://github.com/mattjw/rps_metaprogramming and there’s some more info here: http://www.mattjw.net/2014/02/rps-metaprogramming/

This code is way simpler than Matt’s, as in this exercise we already have access to the opponent instance, so we don’t need to go hunting for it in the stack. Instead we can just call it to see what it’s going to play, and return a result based on that.

This is almost certainly cheating, and more than likely against the spirit of the ‘competition’ :-)

Names:

  • Geller: Original name by Martin Chorley (@martinjc)
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': -1, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': False}
static foil_strategy_inspection() → axelrod.action.Action[source]

Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name = 'Geller'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Looks at what the opponent will play in the next round and chooses the action that gives the least jail time, which is equivalent to playing the same action that the opponent will play.

class axelrod.strategies.geller.GellerCooperator[source]

Observes what the player will do (like Geller) but if unable to will cooperate.

Names:

  • Geller Cooperator: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': -1, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': False}
static foil_strategy_inspection() → axelrod.action.Action[source]

Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name = 'Geller Cooperator'
class axelrod.strategies.geller.GellerDefector[source]

Observes what the player will do (like Geller) but if unable to will defect.

Names:

  • Geller Defector: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': -1, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': False}
static foil_strategy_inspection() → axelrod.action.Action[source]

Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name = 'Geller Defector'
class axelrod.strategies.gobymajority.GoByMajority(memory_depth: typing.Union[int, float] = inf, soft: bool = True) → None[source]

A player examines the history of the opponent: if the opponent has more defections than cooperations then the player defects.

In case of equal number of defections and cooperations this player will Cooperate. Passing the soft=False keyword argument when initialising will create a HardGoByMajority which Defects in case of equality.

An optional memory attribute will limit the number of turns remembered (by default this is 0).

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Go By Majority'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

This is affected by the history of the opponent.

As long as the opponent cooperates at least as often as they defect then the player will cooperate. If at any point the opponent has more defections than cooperations in memory the player defects.
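
A short illustration of the soft and memory_depth parameters described above (assuming the top-level axelrod namespace, as in the earlier sketches):

    import axelrod as axl

    soft = axl.GoByMajority()                    # cooperates when counts are tied
    hard = axl.GoByMajority(soft=False)          # behaves like HardGoByMajority: defects on ties
    limited = axl.GoByMajority(memory_depth=10)  # only considers the last 10 turns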

class axelrod.strategies.gobymajority.GoByMajority10 → None[source]

GoByMajority player with a memory of 10.

Names:

  • Go By Majority 10: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Go By Majority 10'
class axelrod.strategies.gobymajority.GoByMajority20 → None[source]

GoByMajority player with a memory of 20.

Names:

  • Go By Majority 20: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 20, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Go By Majority 20'
class axelrod.strategies.gobymajority.GoByMajority40 → None[source]

GoByMajority player with a memory of 40.

Names:

  • Go By Majority 40: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 40, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Go By Majority 40'
class axelrod.strategies.gobymajority.GoByMajority5 → None[source]

GoByMajority player with a memory of 5.

Names:

  • Go By Majority 5: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Go By Majority 5'
class axelrod.strategies.gobymajority.HardGoByMajority(memory_depth: typing.Union[int, float] = inf) → None[source]

A player examines the history of the opponent: if the opponent has more defections than cooperations then the player defects. In case of equal number of defections and cooperations this player will Defect.

An optional memory attribute will limit the number of turns remembered (by default this is 0).

Names:
name = 'Hard Go By Majority'
class axelrod.strategies.gobymajority.HardGoByMajority10 → None[source]

HardGoByMajority player with a memory of 10.

Names:

  • Hard Go By Majority 10: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Go By Majority 10'
class axelrod.strategies.gobymajority.HardGoByMajority20 → None[source]

HardGoByMajority player with a memory of 20.

Names:

  • Hard Go By Majority 20: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 20, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Go By Majority 20'
class axelrod.strategies.gobymajority.HardGoByMajority40 → None[source]

HardGoByMajority player with a memory of 40.

Names:

  • Hard Go By Majority 40: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 40, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Go By Majority 40'
class axelrod.strategies.gobymajority.HardGoByMajority5 → None[source]

HardGoByMajority player with a memory of 5.

Names:

  • Hard Go By Majority 5: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Go By Majority 5'
class axelrod.strategies.gradualkiller.GradualKiller[source]

It begins by defecting in the first five moves, then cooperates twice. It then defects for the rest of the game if the opponent defected on moves 6 and 7, and otherwise cooperates for the rest of the game. Initially designed to stop Gradual from defeating TitForTat in a 3 Player tournament.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Gradual Killer'
original_class

alias of GradualKiller

strategy(opponent)
class axelrod.strategies.grudger.Aggravater[source]

Grudger, except that it defects on the first 3 turns

Names

  • Aggravater: Original name by Thomas Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Aggravater'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.grudger.EasyGo[source]

A player starts by defecting however will cooperate if at any point the opponent has defected.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'EasyGo'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing D, then plays C for the remaining rounds if the opponent ever plays D.

class axelrod.strategies.grudger.ForgetfulGrudger → None[source]

A player starts by cooperating, but will defect if at any point the opponent has defected; it forgets after mem_length rounds.

Names:

  • Forgetful Grudger: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Forgetful Grudger'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D for mem_length rounds if the opponent ever plays D.

class axelrod.strategies.grudger.GeneralSoftGrudger(n: int = 1, d: int = 4, c: int = 2) → None[source]

A generalization of the SoftGrudger strategy. SoftGrudger punishes by playing D, D, D, D, C, C after a defection by the opponent. GeneralSoftGrudger only punishes after its opponent defects a specified number of times consecutively. The punishment is in the form of a series of defections followed by a ‘penance’ of a series of consecutive cooperations.

Names:

  • General Soft Grudger: Original Name by J. Taylor Smith
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'General Soft Grudger'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Punishes after its opponent defects ‘n’ times consecutively. The punishment is in the form of ‘d’ defections followed by a penance of ‘c’ consecutive cooperations.
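
For example, a variant that waits for two consecutive defections, retaliates with three defections and then offers a single cooperation as penance can be constructed with the parameters from the signature above (assuming the top-level axelrod namespace):

    import axelrod as axl

    # n: consecutive opponent defections that trigger punishment,
    # d: number of punishing defections, c: number of 'penance' cooperations.
    player = axl.GeneralSoftGrudger(n=2, d=3, c=1)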

class axelrod.strategies.grudger.Grudger[source]

A player starts by cooperating however will defect if at any point the opponent has defected.

This strategy came 7th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Grudger'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D for the remaining rounds if the opponent ever plays D.

class axelrod.strategies.grudger.GrudgerAlternator[source]

A player starts by cooperating until the opponent’s first defection, then alternates D-C.

Names:

  • c_then_per_dc: [Prison1998]
  • Grudger Alternator: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'GrudgerAlternator'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays Alternator for the remaining rounds if the opponent ever plays D.

class axelrod.strategies.grudger.OppositeGrudger[source]

A player starts by defecting however will cooperate if at any point the opponent has cooperated.

Names:

  • Opposite Grudger: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Opposite Grudger'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing D, then plays C for the remaining rounds if the opponent ever plays C.

class axelrod.strategies.grudger.SoftGrudger → None[source]

A modification of the Grudger strategy. Instead of punishing by always defecting, it punishes by playing D, D, D, D, C, C, and will continue to cooperate afterwards.

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 6, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Soft Grudger'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D, D, D, D, C, C in response to a defection.

class axelrod.strategies.grumpy.Grumpy(starting_state: str = 'Nice', grumpy_threshold: int = 10, nice_threshold: int = -10) → None[source]

A player that defects after a certain level of grumpiness. Grumpiness increases when the opponent defects and decreases when the opponent co-operates.

Names:

  • Grumpy: Original name by Jason Young
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Grumpy'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

A player that gets grumpier the more the opposition defects, and nicer the more they cooperate.

Starts off Nice, but becomes grumpy once the grumpiness threshold is hit. It does not become nice again as soon as it drops back below that threshold; it must reach the much lower nice threshold before it becomes nice again.

class axelrod.strategies.handshake.Handshake(initial_plays: typing.List[axelrod.action.Action] = None) → None[source]

Starts with C, D. If the opponent plays the same way, cooperate forever, else defect forever.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Handshake'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.hmm.EvolvedHMM5 → None[source]

An HMM-based player with five hidden states trained with an evolutionary algorithm.

Names:

  • Evolved HMM 5: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Evolved HMM 5'
class axelrod.strategies.hmm.HMMPlayer(transitions_C=None, transitions_D=None, emission_probabilities=None, initial_state=0, initial_action=C) → None[source]

Abstract base class for Hidden Markov Model players.

Names

  • HMM Player: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
is_stochastic() → bool[source]

Determines if the player is stochastic.

name = 'HMM Player'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.hmm.SimpleHMM(transitions_C, transitions_D, emission_probabilities, initial_state) → None[source]

Implementation of a basic Hidden Markov Model. We assume that the transition matrix is conditioned on the opponent’s last action, so there are two transition matrices. Emission distributions are stored as Bernoulli probabilities for each state. This is essentially a stochastic FSM.

https://en.wikipedia.org/wiki/Hidden_Markov_model

is_well_formed() → bool[source]
Determines if the HMM parameters are well-formed:
  • Both matrices are stochastic
  • Emissions probabilities are in [0, 1]
  • The initial state is valid.
move(opponent_action: axelrod.action.Action) → axelrod.action.Action[source]

Changes state and computes the response action.

Parameters
opponent_action: Axelrod.Action
The opponent’s last action.
axelrod.strategies.hmm.is_stochastic_matrix(m, ep=1e-08) → bool[source]

Checks that the matrix m (a list of lists) is a stochastic matrix.
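
For intuition, a row-stochastic matrix has non-negative entries and rows summing to 1 within a small tolerance. A minimal sketch of such a check (an illustration only, not necessarily the library's exact implementation; the helper name is hypothetical):

def rows_are_stochastic(m, ep=1e-08):
    # Each row must be non-negative and sum to 1 within tolerance ep.
    for row in m:
        if any(x < 0 for x in row):
            return False
        if abs(sum(row) - 1) > ep:
            return False
    return True

rows_are_stochastic([[0.7, 0.3], [0.2, 0.8]])  # True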

class axelrod.strategies.hunter.AlternatorHunter → None[source]

A player who hunts for alternators.

Names:

  • Alternator Hunter: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Alternator Hunter'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.hunter.CooperatorHunter[source]

A player who hunts for cooperators.

Names:

  • Cooperator Hunter: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cooperator Hunter'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.hunter.CycleHunter → None[source]

Hunts strategies that play cyclically, like any of the Cyclers, Alternator, etc.

Names:

  • Cycle Hunter: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Cycle Hunter'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.hunter.DefectorHunter[source]

A player who hunts for defectors.

Names:

  • Defector Hunter: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Defector Hunter'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.hunter.EventualCycleHunter → None[source]

Hunts strategies that eventually play cyclically.

Names:

  • Eventual Cycle Hunter: Original name by Marc Harper
name = 'Eventual Cycle Hunter'
strategy(opponent: axelrod.player.Player) → None[source]
class axelrod.strategies.hunter.MathConstantHunter[source]

A player who hunts for mathematical constant players.

Names:

  • Math Constant Hunter: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Math Constant Hunter'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Checks whether the numbers of cooperations in the first and second halves of the history are close. The variance of the uniform distribution (1/4) is a reasonable delta, but use something lower for certainty and to avoid false positives. This approach will also detect a lot of random players.

class axelrod.strategies.hunter.RandomHunter → None[source]

A player who hunts for random players.

Names:

  • Random Hunter: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Random Hunter'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

A random player is unpredictable, which means the conditional frequencies of cooperation after cooperation, and of defection after defection, should be close to 50%... although how close is debatable.

axelrod.strategies.hunter.is_alternator(history: typing.List[axelrod.action.Action]) → bool[source]
class axelrod.strategies.inverse.Inverse[source]

A player who defects with a probability that diminishes relative to how long ago the opponent defected.

Names:

  • Inverse: Original Name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Inverse'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Looks at the opponent's history to see if they have defected.

If so, the player's probability of defecting is inversely proportional to how long ago that defection occurred.

class axelrod.strategies.lookerup.EvolvedLookerUp1_1_1 → None[source]

A 1 1 1 Lookerup trained with an evolutionary algorithm.

Names:

  • Evolved Lookerup 1 1 1: Original name by Marc Harper
name = 'EvolvedLookerUp1_1_1'
class axelrod.strategies.lookerup.EvolvedLookerUp2_2_2 → None[source]

A 2 2 2 Lookerup trained with an evolutionary algorithm.

Names:

  • Evolved Lookerup 2 2 2: Original name by Marc Harper
name = 'EvolvedLookerUp2_2_2'
class axelrod.strategies.lookerup.LookerUp(lookup_dict: dict = None, initial_actions: tuple = None, pattern: typing.Any = None, parameters: axelrod.strategies.lookerup.Plays = None) → None[source]

This strategy uses a LookupTable to decide its next action. If there is not enough history to use the table, it calls from a list of self.initial_actions.

If self_depth=2, op_depth=3 and op_openings_depth=5, then LookerUp finds the last 2 plays of self, the last 3 plays of the opponent and the opening 5 plays of the opponent. It then looks those up in the LookupTable and returns the appropriate action. If 5 rounds have not yet been played (the minimum required for op_openings_depth), it calls from self.initial_actions.

LookerUp can be instantiated with a dictionary. The dictionary uses tuple(tuple, tuple, tuple) or Plays as keys. For example, with:

  • self_plays: depth=2

  • op_plays: depth=1

  • op_openings: depth=0:

    {Plays((C, C), (C,), ()): C,
     Plays((C, C), (D,), ()): D,
     Plays((C, D), (C,), ()): D,  <- example below
     Plays((C, D), (D,), ()): D,
     Plays((D, C), (C,), ()): C,
     Plays((D, C), (D,), ()): D,
     Plays((D, D), (C,), ()): C,
     Plays((D, D), (D,), ()): D}
    

From the above table, if the player last played C, D and the opponent last played C (here the initial opponent play is ignored) then this round, the player would play D.

The dictionary must contain all possible permutations of C’s and D’s.

LookerUp can also be instantiated with pattern=str/tuple of actions, and:

parameters=Plays(
    self_plays=player_depth: int,
    op_plays=op_depth: int,
    op_openings=op_openings_depth: int)

It will create a list of keys of length 2 ** sum(parameters) and map the pattern to those keys.

initial_actions is a tuple such as (C, C, D). A table needs initial actions equal to max(self_plays depth, opponent_plays depth, opponent_initial_plays depth). If provided initial_actions is too long, the extra will be ignored. If provided initial_actions is too short, the shortfall will be made up with C’s.

Some well-known strategies can be expressed as special cases; for example Cooperator is given by the dict (All history is ignored and always play C):

{Plays((), (), ()) : C}

Tit-For-Tat is given by (The only history that is important is the opponent’s last play.):

{Plays((), (D,), ()): D,
 Plays((), (C,), ()): C}

LookerUp’s LookupTable defaults to Tit-For-Tat. The initial_actions defaults to playing C.

Names:

  • Lookerup: Original name by Martin Jones
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
default_tft_lookup_table = {Plays(self_plays=(), op_plays=(C,), op_openings=()): C, Plays(self_plays=(), op_plays=(D,), op_openings=()): D}
lookup_dict
lookup_table_display(sort_by: tuple = ('op_openings', 'self_plays', 'op_plays')) → str[source]

Returns a string for printing lookup_table info in specified order.

Parameters: sort_by – only_elements='self_plays', 'op_plays', 'op_openings'
name = 'LookerUp'
strategy(opponent: axelrod.player.Player) → Reaction[source]
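
For example, the Tit-For-Tat table quoted above can be passed in directly. A minimal sketch, assuming the module paths shown in the signatures above:

from axelrod.action import Action
from axelrod.strategies.lookerup import LookerUp, Plays

C, D = Action.C, Action.D

# The Tit-For-Tat lookup table from the docstring above, with one initial cooperation.
tft_table = {Plays((), (C,), ()): C,
             Plays((), (D,), ()): D}
player = LookerUp(lookup_dict=tft_table, initial_actions=(C,))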
class axelrod.strategies.lookerup.LookupTable(lookup_dict: dict) → None[source]

LookerUp and its children use this object to determine their next actions.

It is an object that creates a table of all possible plays to a specified depth and the action to be returned for each combination of plays. The “get” method returns the appropriate response. For the table containing:

...
Plays(self_plays=(C, C), op_plays=(C, D), op_openings=(D, C)): D
Plays(self_plays=(C, C), op_plays=(C, D), op_openings=(D, D)): C
...

with: player.history[-2:]=[C, C] and opponent.history[-2:]=[C, D] and opponent.history[:2]=[D, D], calling LookupTable.get(plays=(C, C), op_plays=(C, D), op_openings=(D, D)) will return C.

Instantiate the table with a lookup_dict. This is {(self_plays_tuple, op_plays_tuple, op_openings_tuple): action, ...}. It must contain every possible permutation of C's and D's for the above tuples. So:

good_dict = {((C,), (C,), ()): C,
             ((C,), (D,), ()): C,
             ((D,), (C,), ()): D,
             ((D,), (D,), ()): C}

bad_dict = {((C,), (C,), ()): C,
            ((C,), (D,), ()): C,
            ((D,), (C,), ()): D}

LookupTable.from_pattern() creates an ordered list of keys for you and maps the pattern to the keys:

LookupTable.from_pattern(pattern=(C, D, D, C),
    player_depth=0, op_depth=1, op_openings_depth=1
)

creates the dictionary:

{Plays(self_plays=(), op_plays=(C,), op_openings=(C,)): C,
 Plays(self_plays=(), op_plays=(C,), op_openings=(D,)): D,
 Plays(self_plays=(), op_plays=(D,), op_openings=(C,)): D,
 Plays(self_plays=(), op_plays=(D,), op_openings=(D,)): C}

and then returns a LookupTable with that dictionary.

dictionary
display(sort_by: tuple = ('op_openings', 'self_plays', 'op_plays')) → str[source]

Returns a string for printing lookup_table info in specified order.

Parameters: sort_by – only_elements='self_plays', 'op_plays', 'op_openings'
classmethod from_pattern(pattern: tuple, player_depth: int, op_depth: int, op_openings_depth: int)[source]
get(plays: tuple, op_plays: tuple, op_openings: tuple) → typing.Any[source]
op_depth
op_openings_depth
player_depth
table_depth
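
Putting the pieces above together, a brief sketch (assuming the module path shown in the signatures above):

from axelrod.action import Action
from axelrod.strategies.lookerup import LookupTable

C, D = Action.C, Action.D

# The from_pattern example quoted above, followed by a lookup.
table = LookupTable.from_pattern(pattern=(C, D, D, C),
                                 player_depth=0, op_depth=1, op_openings_depth=1)
table.get(plays=(), op_plays=(D,), op_openings=(C,))  # D, per the generated dictionary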
class axelrod.strategies.lookerup.Plays(self_plays, op_plays, op_openings)
op_openings

Alias for field number 2

op_plays

Alias for field number 1

self_plays

Alias for field number 0

class axelrod.strategies.lookerup.Winner12 → None[source]

A lookup table based strategy.

Names:

name = 'Winner12'
class axelrod.strategies.lookerup.Winner21 → None[source]

A lookup table based strategy.

Names:

name = 'Winner21'
axelrod.strategies.lookerup.create_lookup_table_keys(player_depth: int, op_depth: int, op_openings_depth: int) → list[source]

Returns a list of Plays that has all possible permutations of C's and D's for each specified depth. The list is in order, C < D, sorted by ((player_tuple), (op_tuple), (op_openings_tuple)). create_lookup_table_keys(player_depth=2, op_depth=1, op_openings_depth=0) returns:

[Plays(self_plays=(C, C), op_plays=(C,), op_openings=()),
 Plays(self_plays=(C, C), op_plays=(D,), op_openings=()),
 Plays(self_plays=(C, D), op_plays=(C,), op_openings=()),
 Plays(self_plays=(C, D), op_plays=(D,), op_openings=()),
 Plays(self_plays=(D, C), op_plays=(C,), op_openings=()),
 Plays(self_plays=(D, C), op_plays=(D,), op_openings=()),
 Plays(self_plays=(D, D), op_plays=(C,), op_openings=()),
 Plays(self_plays=(D, D), op_plays=(D,), op_openings=())]
axelrod.strategies.lookerup.get_last_n_plays(player: axelrod.player.Player, depth: int) → tuple[source]

Returns the last N plays of player as a tuple.

axelrod.strategies.lookerup.make_keys_into_plays(lookup_table: dict) → dict[source]

Returns a dict where all keys are Plays.

class axelrod.strategies.mathematicalconstants.CotoDeRatio[source]

The player will always aim to bring the ratio of co-operations to defections closer to the ratio given in a subclass.

Names:

  • Co to Do Ratio: Original Name by Timothy Standen
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.mathematicalconstants.Golden[source]

The player will always aim to bring the ratio of co-operations to defections closer to the golden mean.

Names:

  • Golden: Original Name by Timothy Standen
name = '$\\phi$'
ratio = 1.618033988749895
class axelrod.strategies.mathematicalconstants.Pi[source]

The player will always aim to bring the ratio of co-operations to defections closer to pi.

Names:

  • Pi: Original Name by Timothy Standen
name = '$\\pi$'
ratio = 3.141592653589793
class axelrod.strategies.mathematicalconstants.e[source]

The player will always aim to bring the ratio of co-operations to defections closer to e.

Names:

  • e: Original Name by Timothy Standen
name = '$e$'
ratio = 2.718281828459045
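
A rough sketch of the idea behind the CotoDeRatio family above (an interpretation for illustration only, not the library's exact code; the helper and its arguments are hypothetical): track the cooperation-to-defection ratio and play whichever action moves it towards the target constant.

def coto_de_ratio_move(cooperations, defections, target_ratio):
    # Cooperating raises the ratio, defecting lowers it; steer towards the target.
    if defections == 0:
        return 'D'  # avoid division by zero early on
    if cooperations / defections < target_ratio:
        return 'C'
    return 'D'

coto_de_ratio_move(3, 2, 1.618033988749895)  # 'C', since 1.5 is below the golden mean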
class axelrod.strategies.memorytwo.MEM2 → None[source]

A memory-two player that switches between TFT, TFTT, and ALLD.

Note that the reference claims that this is a memory-two strategy but in fact it has infinite memory. This is because the player plays as ALLD if ALLD has ever been selected twice, which can only be known if the entire history of play is accessible.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'MEM2'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Memory One strategies. Note that there are Memory One strategies in other files, including titfortat.py and zero_determinant.py.

class axelrod.strategies.memoryone.ALLCorALLD[source]

This strategy is at the parameter extreme of the ZD strategies (phi = 0). It simply repeats its last move, and so mimics ALLC or ALLD after round one. If the tournament is noisy, there will be long runs of C and D.

For now, the starting choice is random, cooperating with probability 0.6; this was an arbitrary choice at implementation time.

Names:

  • ALLC or ALLD: Original name by Marc Harper
  • Repeat: [Akin2015]
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'ALLCorALLD'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.memoryone.FirmButFair → None[source]

A strategy that cooperates on the first move, and cooperates except after receiving a sucker payoff.

Names:

name = 'Firm But Fair'
class axelrod.strategies.memoryone.GTFT(p: float = None) → None[source]

Generous Tit For Tat Strategy.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': {'game'}, 'inspects_source': False, 'manipulates_state': False}
name = 'GTFT'
receive_match_attributes()[source]
class axelrod.strategies.memoryone.MemoryOnePlayer(four_vector: typing.Tuple[float, float, float, float] = None, initial: axelrod.action.Action = C) → None[source]

Uses a four-vector for strategies based on the last round of play, (P(C|CC), P(C|CD), P(C|DC), P(C|DD)); defaults to Win-Stay Lose-Shift. Intended to be used as an abstract base class or at least to be supplied with an initializing four_vector.

Names

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Generic Memory One Player'
set_four_vector(four_vector: typing.Tuple[float, float, float, float])[source]
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
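
For example, Tit For Tat corresponds to the four-vector (P(C|CC), P(C|CD), P(C|DC), P(C|DD)) = (1, 0, 1, 0). A minimal sketch, assuming the module path shown above and the usual top-level import; the opponent and match length are arbitrary choices:

import axelrod as axl
from axelrod.strategies.memoryone import MemoryOnePlayer

# Cooperate after the opponent's C, defect after the opponent's D: Tit For Tat.
player = MemoryOnePlayer(four_vector=(1, 0, 1, 0))
match = axl.Match((player, axl.Random()), turns=10)
match.play()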
class axelrod.strategies.memoryone.ReactivePlayer(probabilities: typing.Tuple[float, float]) → None[source]

A generic reactive player. Defined by 2 probabilities conditional on the opponent’s last move: P(C|C), P(C|D).

Names:

name = 'Reactive Player'
class axelrod.strategies.memoryone.SoftJoss(q: float = 0.9) → None[source]

Defects with probability 0.9 when the opponent defects, otherwise emulates Tit-For-Tat.

Names:

name = 'Soft Joss'
class axelrod.strategies.memoryone.StochasticCooperator → None[source]

Stochastic Cooperator.

Names:

name = 'Stochastic Cooperator'
class axelrod.strategies.memoryone.StochasticWSLS(ep: float = 0.05) → None[source]

Stochastic WSLS, similar to Generous TFT. Note that this is not the same as the Stochastic WSLS described in [Amaral2016]; that strategy is a modification of WSLS that learns from the performance of other strategies.

Names:

  • Stochastic WSLS: Original name by Marc Harper
name = 'Stochastic WSLS'
class axelrod.strategies.memoryone.WinShiftLoseStay(initial: axelrod.action.Action = D) → None[source]

Win-Shift Lose-Stay, also called Reverse Pavlov.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Win-Shift Lose-Stay'
class axelrod.strategies.memoryone.WinStayLoseShift(initial: axelrod.action.Action = C) → None[source]

Win-Stay Lose-Shift, also called Pavlov.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Win-Stay Lose-Shift'
class axelrod.strategies.meta.MetaHunter[source]

A player who uses a selection of hunters.

Names

  • Meta Hunter: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
static meta_strategy(results, opponent)[source]
name = 'Meta Hunter'
class axelrod.strategies.meta.MetaHunterAggressive(team=None)[source]

A player who uses a selection of hunters.

Names

  • Meta Hunter Aggressive: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
static meta_strategy(results, opponent)[source]
name = 'Meta Hunter Aggressive'
class axelrod.strategies.meta.MetaMajority(team=None)[source]

A player who goes by the majority vote of all other non-meta players.

Names:

  • Meta Majority: Original name by Karol Langner
static meta_strategy(results, opponent)[source]
name = 'Meta Majority'
class axelrod.strategies.meta.MetaMajorityFiniteMemory[source]

MetaMajority with the team of Finite Memory Players

Names

  • Meta Majority Finite Memory: Original name by Marc Harper
name = 'Meta Majority Finite Memory'
class axelrod.strategies.meta.MetaMajorityLongMemory[source]

MetaMajority with the team of Long (infinite) Memory Players

Names

  • Meta Majority Long Memory: Original name by Marc Harper
name = 'Meta Majority Long Memory'
class axelrod.strategies.meta.MetaMajorityMemoryOne[source]

MetaMajority with the team of Memory One players

Names

  • Meta Majority Memory One: Original name by Marc Harper
name = 'Meta Majority Memory One'
class axelrod.strategies.meta.MetaMinority(team=None)[source]

A player who goes by the minority vote of all other non-meta players.

Names:

  • Meta Minority: Original name by Karol Langner
static meta_strategy(results, opponent)[source]
name = 'Meta Minority'
class axelrod.strategies.meta.MetaMixer(team=None, distribution=None)[source]

A player who randomly switches between a team of players. If no distribution is passed then the player will uniformly choose between sub players.

In essence this is creating a Mixed strategy.

Parameters

team : list of strategy classes, optional
Team of strategies that are to be randomly played. If none is passed, the ordinary strategies will be selected.
distribution : list representing a probability distribution, optional
This gives the distribution from which to select the players. If none is passed, players are selected uniformly.

Names

  • Meta Mixer: Original name by Vince Knight
classifier = {'long_run_time': True, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
meta_strategy(results, opponent)[source]

Uses the numpy.random.choice function to sample with weights.

name = 'Meta Mixer'
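
A brief sketch of instantiation, assuming the usual top-level import of the axelrod package; the team members and weights are arbitrary choices:

import axelrod as axl

# Play TitForTat 70% of the time and Cooperator 30% of the time.
player = axl.MetaMixer(team=[axl.TitForTat, axl.Cooperator],
                       distribution=[0.7, 0.3])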
class axelrod.strategies.meta.MetaPlayer(team=None)[source]

A generic player that has its own team of players.

Names:

  • Meta Player: Original name by Karol Langner
classifier = {'long_run_time': True, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length', 'game'}, 'inspects_source': False, 'manipulates_state': False}
meta_strategy(results, opponent)[source]

Determine the meta result based on results of all players. Override this function in child classes.

name = 'Meta Player'
strategy(opponent)[source]
class axelrod.strategies.meta.MetaWinner(team=None)[source]

A player who goes by the strategy of the current winner.

Names:

  • Meta Winner: Original name by Karol Langner
meta_strategy(results, opponent)[source]
name = 'Meta Winner'
class axelrod.strategies.meta.MetaWinnerDeterministic[source]

Meta Winner with the team of Deterministic Players.

Names

  • Meta Winner Deterministic: Original name by Marc Harper
name = 'Meta Winner Deterministic'
class axelrod.strategies.meta.MetaWinnerEnsemble(team=None)[source]

A variant of MetaWinner that chooses one of the top scoring strategies at random against each opponent. Note this strategy is always stochastic regardless of the team.

Names:

  • Meta Winner Ensemble: Original name by Marc Harper
meta_strategy(results, opponent)[source]
name = 'Meta Winner Ensemble'
class axelrod.strategies.meta.MetaWinnerFiniteMemory[source]

MetaWinner with the team of Finite Memory Players

Names

  • Meta Winner Finite Memory: Original name by Marc Harper
name = 'Meta Winner Finite Memory'
class axelrod.strategies.meta.MetaWinnerLongMemory[source]

MetaWinner with the team of Long (infinite) Memory Players

Names

  • Meta Winner Long Memory: Original name by Marc Harper
name = 'Meta Winner Long Memory'
class axelrod.strategies.meta.MetaWinnerMemoryOne[source]

MetaWinner with the team of Memory One players

Names

  • Meta Winner Memory One: Original name by Marc Harper
name = 'Meta Winner Memory One'
class axelrod.strategies.meta.MetaWinnerStochastic[source]

Meta Winner with the team of Stochastic Players.

Names

  • Meta Winner Stochastic: Original name by Marc Harper
name = 'Meta Winner Stochastic'
class axelrod.strategies.meta.NMWEDeterministic[source]

Nice Meta Winner Ensemble with the team of Deterministic Players.

Names

  • Nice Meta Winner Ensemble Deterministic: Original name by Marc Harper
name = 'NMWE Deterministic'
class axelrod.strategies.meta.NMWEFiniteMemory[source]

Nice Meta Winner Ensemble with the team of Finite Memory Players.

Names

  • Nice Meta Winner Ensemble Finite Memory: Original name by Marc Harper
name = 'NMWE Finite Memory'
class axelrod.strategies.meta.NMWELongMemory[source]

Nice Meta Winner Ensemble with the team of Long Memory Players.

Names

  • Nice Meta Winner Ensemble Long Memory: Original name by Marc Harper
name = 'NMWE Long Memory'
class axelrod.strategies.meta.NMWEMemoryOne[source]

Nice Meta Winner Ensemble with the team of Memory One Players.

Names

  • Nice Meta Winner Ensemble Memory One: Original name by Marc Harper
name = 'NMWE Memory One'
class axelrod.strategies.meta.NMWEStochastic[source]

Nice Meta Winner Ensemble with the team of Stochastic Players.

Names

  • Nice Meta Winner Ensemble Stochastic: Original name by Marc Harper
name = 'NMWE Stochastic'
class axelrod.strategies.meta.NiceMetaWinner(team=None)

A player who goes by the strategy of the current winner.

Names:

  • Meta Winner: Original name by Karol Langner
classifier = {'long_run_time': True, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length', 'game'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Nice Meta Winner'
original_class

alias of MetaWinner

strategy(opponent)
class axelrod.strategies.meta.NiceMetaWinnerEnsemble(team=None)

A variant of MetaWinner that chooses one of the top scoring strategies at random against each opponent. Note this strategy is always stochastic regardless of the team.

Names:

  • Meta Winner Ensemble: Original name by Marc Harper
classifier = {'long_run_time': True, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length', 'game'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Nice Meta Winner Ensemble'
original_class

alias of MetaWinnerEnsemble

strategy(opponent)
class axelrod.strategies.mindcontrol.MindBender[source]

A player that changes the opponent’s strategy by modifying the internal dictionary.

Names

  • Mind Bender: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': True, 'stochastic': False, 'memory_depth': -10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Mind Bender'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.mindcontrol.MindController[source]

A player that changes the opponents strategy to cooperate.

Names

  • Mind Controller: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': True, 'stochastic': False, 'memory_depth': -10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Mind Controller'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Alters the opponent's strategy method to be a lambda function which always returns C. This player will then always play D to take advantage of this.

class axelrod.strategies.mindcontrol.MindWarper[source]

A player that changes the opponent’s strategy but blocks changes to its own.

Names

  • Mind Warper: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': True, 'stochastic': False, 'memory_depth': -10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Mind Warper'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

The player classes in this module do not obey the standard rules of the IPD (as indicated by their classifier). We do not recommend putting a lot of time into optimising them.

class axelrod.strategies.mindreader.MindReader[source]

A player that looks ahead at what the opponent will do and decides what to do.

Names:

  • Mind reader: Original name by Jason Young
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': -10, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': False}
static foil_strategy_inspection() → axelrod.action.Action[source]

Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name = 'Mind Reader'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Pretends to play the opponent a number of times before each match. The primary purpose is to look far enough ahead to see whether a defection will be punished by the opponent.

class axelrod.strategies.mindreader.MirrorMindReader[source]

A player that will mirror whatever strategy it is playing against by cheating and calling the opponent’s strategy function instead of its own.

Names:

  • Protected Mind reader: Original name by Brice Fernandes
classifier = {'long_run_time': False, 'manipulates_source': True, 'stochastic': False, 'memory_depth': -10, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': False}
static foil_strategy_inspection() → axelrod.action.Action[source]

Foils _strategy_utils.inspect_strategy and _strategy_utils.look_ahead

name = 'Mirror Mind Reader'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Will read the mind of the opponent and play the opponent’s strategy.

class axelrod.strategies.mindreader.ProtectedMindReader[source]

A player that looks ahead at what the opponent will do and decides what to do. It is also protected from mind control strategies.

Names:

  • Protected Mind reader: Original name by Jason Young
classifier = {'long_run_time': False, 'manipulates_source': True, 'stochastic': False, 'memory_depth': -10, 'makes_use_of': set(), 'inspects_source': True, 'manipulates_state': False}
name = 'Protected Mind Reader'
class axelrod.strategies.mutual.Desperate[source]

A player that only cooperates after mutual defection.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Desperate'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.mutual.Hopeless[source]

A player that only defects after mutual cooperation.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hopeless'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.mutual.Willing[source]

A player that only defects after mutual defection.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Willing'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.negation.Negation[source]

A player who randomly cooperates or defects on the first move, then simply plays the opposite of the opponent's last move thereafter.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Negation'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.oncebitten.FoolMeForever[source]

Fool me once, shame on me. Teach a man to fool me and I’ll be fooled for the rest of my life.

Names:

  • Fool Me Forever: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Fool Me Forever'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.oncebitten.FoolMeOnce[source]

Forgives one D then retaliates forever on a second D.

Names:

  • Fool me once: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Fool Me Once'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.oncebitten.ForgetfulFoolMeOnce(forget_probability: float = 0.05) → None[source]

Forgives one D then retaliates forever on a second D. Sometimes randomly forgets the defection count, and so keeps a secondary count separate from the standard count in Player.

Names:

  • Forgetful Fool Me Once: Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Forgetful Fool Me Once'
reset()[source]
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.oncebitten.OnceBitten → None[source]

Cooperates once when the opponent defects, but if they defect twice in a row it defaults to a forgetful grudger, defecting for 10 turns.

Names:

  • Once Bitten: Original name by Holly Marissa
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 12, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Once Bitten'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C, then plays D for mem_length rounds if the opponent ever plays D twice in a row.

class axelrod.strategies.prober.CollectiveStrategy[source]

Defined in [Li2009]. ‘It always cooperates in the first move and defects in the second move. If the opponent also cooperates in the first move and defects in the second move, CS will cooperate until the opponent defects. Otherwise, CS will always defect.’

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'CollectiveStrategy'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.HardProber[source]

Plays D, D, C, C initially. Defects forever if opponent cooperated in moves 2 and 3. Otherwise plays TFT.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Prober'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.NaiveProber(p: float = 0.1) → None[source]

Like tit-for-tat, but it occasionally defects with a small probability.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Naive Prober'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.Prober[source]

Plays D, C, C initially. Defects forever if opponent cooperated in moves 2 and 3. Otherwise plays TFT.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Prober'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.Prober2[source]

Plays D, C, C initially. Cooperates forever if opponent played D then C in moves 2 and 3. Otherwise plays TFT.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Prober 2'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.Prober3[source]

Plays D, C initially. Defects forever if the opponent played C in move 2. Otherwise plays TFT.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Prober 3'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.Prober4 → None[source]

Plays C, C, D, C, D, D, D, C, C, D, C, D, C, C, D, C, D, D, C, D initially. Counts the retaliating and provocative defections of the opponent. If the absolute difference between the counts is smaller than or equal to 2, defects forever. Otherwise plays C for the next 5 turns and TFT for the rest of the game.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Prober 4'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.prober.RemorsefulProber(p: float = 0.1) → None[source]

Like Naive Prober, but it remembers if the opponent responds to a random defection with a defection by being remorseful and cooperating.

For reference see: [Li2011]. A more complete description is given in “The Selfish Gene” (https://books.google.co.uk/books?id=ekonDAAAQBAJ):

“Remorseful Prober remembers whether it has just spontaneously defected, and whether the result was prompt retaliation. If so, it ‘remorsefully’ allows its opponent ‘one free hit’ without retaliating.”

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Remorseful Prober'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.punisher.InversePunisher → None[source]

An inverted version of Punisher. The player starts by cooperating but will defect if at any point the opponent has defected, and forgets after mem_length matches, with 1 <= mem_length <= 20. This time mem_length is proportional to the amount of time the opponent has played C.

Names:

  • Inverse Punisher: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Inverse Punisher'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C; then, if the opponent ever plays D, plays D for a number of rounds proportional to the opponent's historical percentage of playing C.

class axelrod.strategies.punisher.LevelPunisher[source]

A player who starts by cooperating; after 10 rounds it will defect if at any point the opponent's proportion of defections is greater than 20%.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Level Punisher'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.punisher.Punisher → None[source]

A player who starts by cooperating but will defect if at any point the opponent has defected; it forgets after mem_length matches, with 1 <= mem_length <= 20 proportional to the amount of time the opponent has played D, punishing that player for playing D too often.

Names:

  • Punisher: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Punisher'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Begins by playing C; then, if the opponent ever plays D, plays D for a number of rounds proportional to the opponent's historical percentage of playing D.

class axelrod.strategies.qlearner.ArrogantQLearner → None[source]

A player who learns the best strategies through the q-learning algorithm.

This Q learner jumps to quick conclusions and cares about the future.

Names:

  • Arrogant Q Learner: Original name by Geraint Palmer
discount_rate = 0.1
learning_rate = 0.9
name = 'Arrogant QLearner'
class axelrod.strategies.qlearner.CautiousQLearner → None[source]

A player who learns the best strategies through the q-learning algorithm.

This Q learner is slower to come to conclusions and wants to look ahead more.

Names:

  • Cautious Q Learner: Original name by Geraint Palmer
discount_rate = 0.1
learning_rate = 0.1
name = 'Cautious QLearner'
class axelrod.strategies.qlearner.HesitantQLearner → None[source]

A player who learns the best strategies through the q-learning algorithm.

This Q learner is slower to come to conclusions and does not look ahead much.

Names:

  • Hesitant Q Learner: Original name by Geraint Palmer
discount_rate = 0.9
learning_rate = 0.1
name = 'Hesitant QLearner'
class axelrod.strategies.qlearner.RiskyQLearner → None[source]

A player who learns the best strategies through the q-learning algorithm.

This Q learner is quick to come to conclusions and doesn’t care about the future.

Names:

  • Risky Q Learner: Original name by Geraint Palmer
action_selection_parameter = 0.1
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'game'}, 'inspects_source': False, 'manipulates_state': False}
discount_rate = 0.9
find_reward(opponent: axelrod.player.Player) → typing.Dict[axelrod.action.Action, typing.Dict[axelrod.action.Action, typing.Union[int, float]]][source]

Finds the reward gained on the last iteration

find_state(opponent: axelrod.player.Player) → str[source]

Finds my_state (the opponent's last n moves plus its previous proportion of playing C) as a hashable state.

learning_rate = 0.9
memory_length = 12
name = 'Risky QLearner'
perform_q_learning(prev_state: str, state: str, action: axelrod.action.Action, reward)[source]

Performs the qlearning algorithm

receive_match_attributes()[source]
select_action(state: str) → axelrod.action.Action[source]

Selects the action based on the epsilon-soft policy

strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

Runs the Q-learning algorithm while the tournament is running.
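
For background, a textbook Q-learning update of the kind referred to above looks like the following sketch (an illustration of the general technique, not the library's exact perform_q_learning code; the helper name is hypothetical):

def q_update(Q, prev_state, action, reward, state, learning_rate, discount_rate):
    # Move Q(prev_state, action) towards reward + discount_rate * max_a Q(state, a).
    best_next = max(Q.get(state, {'C': 0, 'D': 0}).values())
    old = Q.setdefault(prev_state, {'C': 0, 'D': 0})[action]
    Q[prev_state][action] = old + learning_rate * (reward + discount_rate * best_next - old)
    return Q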

class axelrod.strategies.rand.Random(p: float = 0.5) → None[source]

A player who randomly chooses between cooperating and defecting.

This strategy came 15th in Axelrod’s original tournament.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 0, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Random'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.resurrection.DoubleResurrection[source]

A player starts by cooperating and defects if the number of rounds played by the player is greater than five and the last five rounds are cooperations.

If the last five rounds were defections, the player cooperates.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'DoubleResurrection'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.resurrection.Resurrection[source]

A player starts by cooperating and defects if the number of rounds played by the player is greater than five and the last five rounds are defections.

Otherwise, the strategy plays like Tit-for-tat.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 5, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Resurrection'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.retaliate.LimitedRetaliate(retaliation_threshold: float = 0.1, retaliation_limit: int = 20) → None[source]

A player that cooperates unless the opponent defects and wins. It will then retaliate by defecting. It stops when either it has beaten the opponent 10 times more often than it has lost, or it reaches the retaliation limit (20 defections).

Names:

  • Limited Retaliate: Original name by Owen Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Limited Retaliate'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

If the opponent has played D to my C more often than x% of the time that I’ve done the same to him, retaliate by playing D but stop doing so once I’ve hit the retaliation limit.

class axelrod.strategies.retaliate.LimitedRetaliate2(retaliation_threshold: float = 0.08, retaliation_limit: int = 15) → None[source]

LimitedRetaliate player with a threshold of 8 percent and a retaliation limit of 15.

Names:

  • Limited Retaliate 2: Original name by Owen Campbell
name = 'Limited Retaliate 2'
class axelrod.strategies.retaliate.LimitedRetaliate3(retaliation_threshold: float = 0.05, retaliation_limit: int = 20) → None[source]

LimitedRetaliate player with a threshold of 5 percent and a retaliation limit of 20.

Names:

  • Limited Retaliate 3: Original name by Owen Campbell
name = 'Limited Retaliate 3'
class axelrod.strategies.retaliate.Retaliate(retaliation_threshold: float = 0.1) → None[source]

A player who starts by cooperating but will retaliate once the opponent has played D to the player's C more than 10 percent as often as the player has done the same to the opponent.

Names:

  • Retaliate: Original name by Owen Campbell
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Retaliate'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

If the opponent has played D to my C more often than x% of the time that I’ve done the same to him, play D. Otherwise, play C.

class axelrod.strategies.retaliate.Retaliate2(retaliation_threshold: float = 0.08) → None[source]

Retaliate player with a threshold of 8 percent.

Names:

  • Retaliate 2: Original name by Owen Campbell
name = 'Retaliate 2'
class axelrod.strategies.retaliate.Retaliate3(retaliation_threshold: float = 0.05) → None[source]

Retaliate player with a threshold of 5 percent.

Names:

  • Retaliate 3: Original name by Owen Campbell
name = 'Retaliate 3'
class axelrod.strategies.sequence_player.SequencePlayer(generator_function: function, generator_args: typing.Tuple = ()) → None[source]

Abstract base class for players that use a generated sequence to determine their plays.

Names:

  • Sequence Player: Original name by Marc Harper
meta_strategy(value: int) → None[source]

Determines how to map the sequence value to cooperate or defect. By default, treat values like python truth values. Override in child classes for alternate behaviors.

strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.sequence_player.ThueMorse → None[source]

A player who cooperates or defects according to the Thue-Morse sequence. The first few terms of the Thue-Morse sequence are: 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 . . .

Thue-Morse sequence: http://mathworld.wolfram.com/Thue-MorseSequence.html

Names:

  • Thue Morse: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'ThueMorse'
class axelrod.strategies.sequence_player.ThueMorseInverse → None[source]

A player who plays the inverse of the Thue-Morse sequence.

Names:

  • Inverse Thue Morse: Original name by Geraint Palmer
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
meta_strategy(value: int) → axelrod.action.Action[source]
name = 'ThueMorseInverse'
class axelrod.strategies.shortmem.ShortMem[source]

A player starts by always cooperating for the first 10 moves.

From the tenth round on, the player analyzes the last ten actions and compares the opponent's numbers of defections and cooperations as percentages. If cooperation occurs 30% more often than defection, it will cooperate. If defection occurs 30% more often than cooperation, the player will defect. Otherwise, the player follows the TitForTat algorithm.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 10, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'ShortMem'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
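
A sketch of the last-ten-rounds rule described above (an illustration only, not the library's exact code; the helper and its arguments are hypothetical):

def short_mem_move(opponent_last_ten, tft_move):
    # opponent_last_ten: the opponent's last ten moves as 'C'/'D'; tft_move: the Tit For Tat fallback.
    coop = opponent_last_ten.count('C') / 10
    defect = opponent_last_ten.count('D') / 10
    if coop - defect > 0.3:
        return 'C'
    if defect - coop > 0.3:
        return 'D'
    return tft_move

short_mem_move(list('CCCCCCCCDD'), 'C')  # 'C': 80% cooperation vs 20% defection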
class axelrod.strategies.selfsteem.SelfSteem[source]

This strategy is based on the feeling with the same name. It is modeled on the sine curve f = sin(2 * pi * n / 10), which varies with the current iteration n.

If f > 0.95, the 'ego' of the algorithm is inflated: it always defects. If 0.95 > abs(f) > 0.3, it behaves rationally and follows the TitForTat algorithm. If 0.3 > f > -0.3, it behaves randomly. If f < -0.95, the algorithm is at rock bottom: it always cooperates.

Furthermore, the algorithm implements a retaliation policy: if the opponent defects, the sine curve is shifted. But due to lack of further information, this implementation does not include a sine phase change.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'SelfSteem'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
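
The thresholds above can be read as a piecewise rule on f = sin(2 * pi * n / 10). A sketch of that decision logic only, omitting the retaliation shift (an illustration, not the library's exact code; the helper and its arguments are hypothetical):

import math
import random

def selfsteem_move(n, tft_move):
    # n is the current round number; tft_move is what Tit For Tat would play here.
    f = math.sin(2 * math.pi * n / 10)
    if f > 0.95:
        return 'D'                    # inflated ego: always defect
    if f < -0.95:
        return 'C'                    # rock bottom: always cooperate
    if abs(f) > 0.3:
        return tft_move               # rational band: follow Tit For Tat
    return random.choice(['C', 'D'])  # middle band: random behaviour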
class axelrod.strategies.stalker.Stalker → None[source]

This is a strategy which is only influenced by the score. Its behavior is based on three values: the very_bad_score (all rounds in defection), the very_good_score (all rounds in cooperation), and the wish_score (the average of the very_bad and very_good scores).

It starts with cooperation.

  • If current_average_score > very_good_score, it defects
  • If current_average_score lies in (wish_score, very_good_score), it cooperates
  • If current_average_score > 2, it cooperates
  • If current_average_score lies in (1, 2), it defects
  • In the remaining case, current_average_score < 1, it behaves randomly
  • It defects in the last round

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length', 'game'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Stalker'
original_class

alias of Stalker

strategy(opponent)
class axelrod.strategies.titfortat.AdaptiveTitForTat(rate: float = 0.5) → None[source]

ATFT - Adaptive Tit For Tat (Basic Model)

Algorithm

if (opponent played C in the last cycle) then
    world = world + r * (1 - world)
else
    world = world + r * (0 - world)

If (world >= 0.5) play C, else play D

Attributes

world : float [0.0, 1.0], set to 0.5
A continuous variable representing the world's image: 1.0 means total cooperation, 0.0 means total defection, and other values are something in between. It is updated every round; the starting value shouldn't matter as long as it is >= 0.5.

Parameters

rate : float [0.0, 1.0], default=0.5
The adaptation rate, r in the Algorithm above. A smaller value means more gradual behaviour that is more robust to perturbations.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Adaptive Tit For Tat'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
world = 0.5
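
The Algorithm above translates directly into a short update rule. A sketch based only on the pseudocode shown (not the library's exact implementation; the helper name is hypothetical):

def atft_update(world, opponent_last_move, rate):
    # world drifts towards 1.0 after an opponent cooperation and towards 0.0 after a defection.
    if opponent_last_move == 'C':
        world = world + rate * (1 - world)
    else:
        world = world + rate * (0 - world)
    move = 'C' if world >= 0.5 else 'D'
    return world, move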
class axelrod.strategies.titfortat.Alexei[source]

Plays similarly to Tit-for-Tat, but always defects on the last turn.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Alexei'
original_class

alias of Alexei

strategy(opponent)
class axelrod.strategies.titfortat.AntiTitForTat[source]

A strategy that plays the opposite of the opponent's previous move. This is similar to Bully, except that the first move is cooperation.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Anti Tit For Tat'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.Bully[source]

A player that behaves opposite to Tit For Tat, including first move.

Starts by defecting and then does the opposite of opponent’s previous move. This is the complete opposite of Tit For Tat, also called Bully in the literature.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Bully'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.ContriteTitForTat[source]

A player that corresponds to Tit For Tat if there is no noise. In the case of a noisy match: if the opponent defects as a result of a noisy defection then ContriteTitForTat will become ‘contrite’ until it successfully cooperates.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Contrite Tit For Tat'
original_class

alias of ContriteTitForTat

strategy(opponent)
class axelrod.strategies.titfortat.DynamicTwoTitsForTat[source]

A player that starts by cooperating and then punishes its opponent's defections with defections, but with a dynamic bias towards cooperating based on the opponent's ratio of cooperations to total moves (that is, the opponent's current empirical probability of cooperating, treated as a forgiveness rate).

Names:

  • Dynamic Two Tits For Tat: Original name by Grant Garrett-Grossman.
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Dynamic Two Tits For Tat'
static strategy(opponent)[source]
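
An illustrative reading of the forgiveness bias described for Dynamic Two Tits For Tat (a sketch under the assumption that a defection in the opponent's last two moves triggers the biased choice; not the library's exact code):

import random

def dynamic_two_tits_move(opponent_history):
    # Cooperate on the first move.
    if not opponent_history:
        return "C"
    # Punish a recent defection, but forgive with probability equal to the
    # opponent's ratio of cooperations to total moves.
    if "D" in opponent_history[-2:]:
        forgiveness = opponent_history.count("C") / len(opponent_history)
        return "C" if random.random() < forgiveness else "D"
    return "C"
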
class axelrod.strategies.titfortat.EugineNier[source]

Plays similarly to Tit-for-Tat, but with two conditions: 1) always defects on the last move; 2) if the other player defects five times, switches to always defecting.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'EugineNier'
original_class

alias of EugineNier

strategy(opponent)
class axelrod.strategies.titfortat.Gradual → None[source]

A player that punishes defections with a growing number of defections, but after punishing enters a calming state and cooperates for two rounds no matter what the opponent does.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Gradual'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
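
A rough sketch of the punish-then-calm cycle described for Gradual (illustrative only; the library tracks this with internal counters): the punishment triggered by the opponent's nth defection lasts n defections and is followed by two calming cooperations.

def gradual_punishment(opponent_defections_so_far):
    # Episode triggered by the nth opponent defection: n defections
    # followed by two unconditional cooperations.
    n = opponent_defections_so_far
    return ["D"] * n + ["C", "C"]

print(gradual_punishment(3))   # ['D', 'D', 'D', 'C', 'C']
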
class axelrod.strategies.titfortat.HardTitFor2Tats[source]

A variant of Tit For Two Tats that uses a longer history for retaliation.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Tit For 2 Tats'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.HardTitForTat[source]

A variant of Tit For Tat that uses a longer history for retaliation.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 3, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Hard Tit For Tat'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.Michaelos[source]

Plays similarly to Tit-for-Tat with two exceptions: 1) defects on the last turn; 2) after its own defection and the opponent's cooperation, it cooperates 50 percent of the time and, the other 50 percent of the time, defects for the rest of the game.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
decorator = <axelrod.strategy_transformers.StrategyTransformerFactory.<locals>.Decorator object>
name = 'Michaelos'
original_class

alias of Michaelos

strategy(opponent)
class axelrod.strategies.titfortat.NTitsForMTats(N: int = 3, M: int = 2) → None[source]

A parameterizable Tit-for-Tat. The arguments are: 1) M: the number of defections before retaliation; 2) N: the number of retaliations.

Names:

  • N Tit(s) For M Tat(s): Original name by Marc Harper
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'N Tit(s) For M Tat(s)'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
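
A small sketch of the N-for-M bookkeeping (an illustrative helper, not part of the library API): once the opponent's last M moves are all defections, the player owes N retaliatory defections.

def n_tits_for_m_tats_move(opponent_history, retaliations_left, N=3, M=2):
    # Returns (move, retaliations_left).
    if retaliations_left > 0:
        return "D", retaliations_left - 1
    if len(opponent_history) >= M and set(opponent_history[-M:]) == {"D"}:
        return "D", N - 1          # start an N-defection retaliation
    return "C", 0
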
class axelrod.strategies.titfortat.OmegaTFT(deadlock_threshold: int = 3, randomness_threshold: int = 8) → None[source]

OmegaTFT modifies Tit For Tat in two ways: it checks for deadlock loops of alternating rounds of (C, D) and (D, C) and attempts to break them, and it uses a more sophisticated retaliation mechanism that is noise tolerant.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Omega TFT'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.SlowTitForTwoTats2[source]

A player that plays C twice, then, if the opponent has played the same move twice in a row, plays that move; otherwise it repeats its own previous move.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Slow Tit For Two Tats 2'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.SneakyTitForTat[source]

Tries defecting once and repents if punished.

Names:

  • Sneaky Tit For Tat: Original name by Karol Langner
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Sneaky Tit For Tat'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.SpitefulTitForTat → None[source]

A player that starts by cooperating and then mimics the opponent's previous action until the opponent defects twice in a row, at which point the player always defects.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Spiteful Tit For Tat'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
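
A minimal sketch of the grudge behaviour described for Spiteful Tit For Tat (illustrative only):

def spiteful_tft_move(grudge, opponent_history):
    # Returns (move, grudge): mimic the opponent until they defect twice in
    # a row, then defect forever.
    if not opponent_history:
        return "C", False
    if grudge or opponent_history[-2:] == ["D", "D"]:
        return "D", True
    return opponent_history[-1], grudge
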
class axelrod.strategies.titfortat.SuspiciousTitForTat[source]

A variant of Tit For Tat that starts off with a defection.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Suspicious Tit For Tat'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.TitFor2Tats[source]

A player that starts by cooperating and then defects only after two defections by the opponent.

Names:

  • Tit for two Tats: [Axelrod1984]
  • Slow tit for two tats: Original name by Ranjini Das
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Tit For 2 Tats'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.titfortat.TitForTat[source]

A player starts by cooperating and then mimics the previous action of the opponent.

This strategy was referred to as the ‘simplest’ strategy submitted to Axelrod’s first tournament. It came first.

Note that the code for this strategy is written in a fairly verbose way. This is done so that it can serve as an example strategy for those who might be new to Python.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 1, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Tit For Tat'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]

This is the actual strategy
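
Since the docstring positions Tit For Tat as an introductory example, here is a stripped-down sketch of the same decision rule in plain Python (the class in the library carries more scaffolding than this):

def tit_for_tat_move(opponent_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    if not opponent_history:
        return "C"
    return opponent_history[-1]

assert tit_for_tat_move([]) == "C"
assert tit_for_tat_move(["C", "D"]) == "D"
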

class axelrod.strategies.titfortat.TwoTitsForTat[source]

A player that starts by cooperating and replies to each defection with two defections.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': 2, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Two Tits For Tat'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.verybad.VeryBad[source]

It cooperates in the first three rounds and then uses probability (it implements a memory, which stores the opponent's moves) to decide whether to cooperate or defect. Due to a lack of information as to what that probability refers to in this context, in this implementation probability P(X) refers to Count(X) / Total_Moves:

P(C) = Cooperations / Total_Moves
P(D) = Defections / Total_Moves = 1 - P(C)

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': False, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'VeryBad'
static strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
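
An illustrative reading of the probability rule above for VeryBad (a sketch under the stated definition of P(C); the tie-breaking choice is an assumption, not confirmed by the docstring):

def very_bad_move(opponent_history):
    total = len(opponent_history)
    # Cooperate in the first three rounds.
    if total < 3:
        return "C"
    # P(C) = Cooperations / Total_Moves, as defined above.
    p_c = opponent_history.count("C") / total
    if p_c > 0.5:
        return "C"
    if p_c < 0.5:
        return "D"
    return opponent_history[-1]    # assumed tie-break: copy the last move
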
class axelrod.strategies.worse_and_worse.KnowledgeableWorseAndWorse[source]

This strategy is based on 'Worse And Worse' but defects with probability 'current turn / total number of turns'.

Names:
  • Knowledgeable Worse and Worse: Original name by Adam Pohl
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': {'length'}, 'inspects_source': False, 'manipulates_state': False}
name = 'Knowledgeable Worse and Worse'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.worse_and_worse.WorseAndWorse[source]

Defects with probability 'current turn / 1000'. Therefore it is more and more likely to defect as the match goes on.

Source code available at the download tab of [Prison1998]

Names:
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Worse and Worse'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
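
A sketch of the rule above for Worse and Worse (illustrative; the library draws its random numbers through its own seeded machinery):

import random

def worse_and_worse_move(current_turn):
    # Defect with probability current_turn / 1000.
    return "D" if random.random() < current_turn / 1000 else "C"
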
class axelrod.strategies.worse_and_worse.WorseAndWorse2[source]

Plays as Tit For Tat during the first 20 moves, then defects with probability (current turn - 20) / current turn. Therefore it is more and more likely to defect as the match goes on.

Names:
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Worse and Worse 2'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
class axelrod.strategies.worse_and_worse.WorseAndWorse3[source]

Cooperates in the first turn, then defects with probability (number of opponent defections) / (current turn - 1). Therefore it is more likely to defect when the opponent has defected in a larger proportion of the turns.

Names:
classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': inf, 'makes_use_of': set(), 'inspects_source': False, 'manipulates_state': False}
name = 'Worse and Worse 3'
strategy(opponent: axelrod.player.Player) → axelrod.action.Action[source]
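
And a sketch of the Worse and Worse 3 rule (illustrative only):

import random

def worse_and_worse_3_move(current_turn, opponent_defections):
    # Cooperate on the first turn.
    if current_turn == 1:
        return "C"
    # Then defect with probability opponent_defections / (current_turn - 1).
    p_defect = opponent_defections / (current_turn - 1)
    return "D" if random.random() < p_defect else "C"
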
class axelrod.strategies.zero_determinant.LRPlayer(four_vector: typing.Tuple[float, float, float, float] = None, initial: axelrod.action.Action = C) → None[source]

Abstraction for Linear Relation players. These players enforce a linear difference in stationary payoffs s * (S_xy - l) = S_yx - l, with 0 <= l <= R. The parameter s is called the slope and the parameter l the baseline payoff. For extortionate strategies, the extortion factor is the inverse of the slope.

This parameterization is Equation 14 in http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0077886. See Figure 2 of the article for a more in-depth explanation.

Names:

classifier = {'long_run_time': False, 'manipulates_source': False, 'stochastic': True, 'memory_depth': 1, 'makes_use_of': {'game'}, 'inspects_source': False, 'manipulates_state': False}
name = 'LinearRelation'
receive_match_attributes(phi: float = 0, s: float = None, l: float = None)[source]

Parameters

phi, s, l: floats
Parameters used to compute the four-vector according to the parameterization of the strategies below.
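
Restating the enforced relation in LaTeX, with \chi denoting the extortion factor (the inverse of the slope, as noted above):

s \, (S_{xy} - l) = S_{yx} - l, \qquad 0 \le l \le R, \qquad \chi = \frac{1}{s}
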
class axelrod.strategies.zero_determinant.ZDExtort2(phi: float = 0.1111111111111111, s: float = 0.5) → None[source]

An Extortionate Zero Determinant Strategy with l=P.

Names:

name = 'ZD-Extort-2'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDExtort2v2(phi: float = 0.125, s: float = 0.5, l: float = 1) → None[source]

An Extortionate Zero Determinant Strategy with l=1.

Names:

name = 'ZD-Extort-2 v2'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDExtort3(phi: float = 0.11538461538461539, s: float = 0.3333333333333333, l: float = 1) → None[source]

An extortionate strategy from Press and Dyson's paper with an extortion factor of 3.

Names:

  • ZDExtort3: Original name by Marc Harper
  • Unnamed: [Press2012]
name = 'ZD-Extort3'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDExtort4(phi: float = 0.23529411764705882, s: float = 0.25, l: float = 1) → None[source]

An Extortionate Zero Determinant Strategy with l=1, s=1/4. TFT is the other extreme (with l=3, s=1).

Names:

  • Extort 4: Original name by Marc Harper
name = 'ZD-Extort-4'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDExtortion(phi: float = 0.2, s: float = 0.1, l: float = 1) → None[source]

An example ZD Extortion player.

Names:

name = 'ZD-Extortion'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDGTFT2(phi: float = 0.25, s: float = 0.5) → None[source]

A Generous Zero Determinant Strategy with l=R.

Names:

name = 'ZD-GTFT-2'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDGen2(phi: float = 0.125, s: float = 0.5, l: float = 3) → None[source]

A Generous Zero Determinant Strategy with l=3.

Names:

name = 'ZD-GEN-2'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDMischief(phi: float = 0.1, s: float = 0.0, l: float = 1) → None[source]

An example ZD Mischief player.

Names:

name = 'ZD-Mischief'
receive_match_attributes()[source]
class axelrod.strategies.zero_determinant.ZDSet2(phi: float = 0.25, s: float = 0.0, l: float = 2) → None[source]

A Generous Zero Determinant Strategy with l=2.

Names:

name = 'ZD-SET-2'
receive_match_attributes()[source]
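
As a quick usage illustration (assuming a standard axelrod installation; the class names are those listed above), the ZD players are instantiated and matched like any other strategy:

import axelrod as axl

players = (axl.ZDExtort2(), axl.ZDGTFT2())
match = axl.Match(players, turns=200)
match.play()                            # list of (Action, Action) pairs
print(match.final_score_per_turn())     # average payoff per turn for each player
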