Prisoner's Dilemma (PD)

The prisoner's dilemma is a canonical example of a game analyzed in game theory
that shows why two individuals might not cooperate, even if it appears that it
is in their best interest to do so. It was originally framed by Merrill Flood
and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game
with prison sentence payoffs and gave it the "prisoner's dilemma" name (Poundstone,
1992). A classic example of the prisoner's dilemma (PD) is presented as follows:

Two men are arrested, but the police do not possess enough information for a
conviction. Following the separation of the two men, the police offer both a
similar deal—if one testifies against his partner (defects/betrays), and the
other remains silent (cooperates/assists), the betrayer goes free and the one
that remains silent receives the full one-year sentence. If both remain silent,
both are sentenced to only one month in jail for a minor charge. If each 'rats
out' the other, each receives a three-month sentence. Each prisoner must choose
either to betray or remain silent; neither learns the other's decision until
both have chosen. What should they do? If each player is concerned only with
minimizing his own time in jail, the game becomes a non-zero-sum game in which
the two players may either assist or betray one another, each prisoner caring
solely about maximizing his own reward. The striking symmetry of the problem is
that the individually rational decision leads each to betray the other, even
though each would fare better if both cooperated.

In the standard version of this game, cooperation is strictly dominated by
betrayal, and as a result, the only possible outcome of the game is for both
prisoners to betray the other: regardless of what the other prisoner chooses,
each always gains a greater payoff by betraying. Because betrayal is always
more beneficial than cooperation, all purely rational, self-interested
prisoners would betray the other. In reality, however, humans display a
systematic bias towards cooperative behavior in the prisoner's dilemma and
similar games, much more so than predicted by a theory based only on rationally
self-interested action.

In the iterated version, the game is played repeatedly, and consequently both
prisoners continually have an opportunity to penalize the other for previous
decisions. If the number of rounds is known to both players in advance, then by
backward induction two rational prisoners will betray each other in every
round.

In casual usage, the label "prisoner's dilemma" may be applied to situations not
strictly matching the formal criteria of the classic or iterative games, for
instance, those in which two entities could gain important benefits from
cooperating or suffer from the failure to do so, but find it merely difficult or
expensive, not necessarily impossible, to coordinate their activities to achieve
cooperation.

Strategy for the classic prisoners' dilemma

The normal game is shown below:

                                     | Prisoner B stays silent (cooperates)      | Prisoner B betrays (defects)
Prisoner A stays silent (cooperates) | Each serves 1 month                       | Prisoner A: 1 year; Prisoner B: goes free
Prisoner A betrays (defects)         | Prisoner A: goes free; Prisoner B: 1 year | Each serves 3 months

Here, regardless of what the other decides, each prisoner gets a higher payoff
by betraying the other. For example, according to the payoffs above, no matter
what Prisoner B chooses, Prisoner A is better off 'ratting him out' (defecting)
than staying silent (cooperating): if B stays silent, A goes free rather than
serving a month, and if B betrays, A serves three months rather than a year.
Prisoner A should therefore betray. The game is symmetric, so Prisoner B should
reason the same way. Since both rationally decide to defect, each receives a
lower reward than if both had stayed silent. Traditional game theory thus
leaves both players worse off than if each had chosen to lessen the sentence of
his accomplice at the cost of more jail time himself.
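This dominance argument can be checked mechanically. The sketch below (Python,
an illustration of ours rather than anything from the original sources) encodes
the jail sentences as negative payoffs, so that larger numbers are better, and
confirms that betrayal is the better choice against either decision by the
other prisoner.

```python
# Jail sentences from the story, in months, written as negative payoffs
# so that "higher is better". Keys are (A's choice, B's choice); values
# are (A's payoff, B's payoff).
payoffs = {
    ("silent", "silent"): (-1, -1),    # each serves 1 month
    ("silent", "betray"): (-12, 0),    # A serves 1 year, B goes free
    ("betray", "silent"): (0, -12),    # A goes free, B serves 1 year
    ("betray", "betray"): (-3, -3),    # each serves 3 months
}

# Whatever B does, A's payoff from betraying exceeds A's payoff from silence.
for b_choice in ("silent", "betray"):
    assert payoffs[("betray", b_choice)][0] > payoffs[("silent", b_choice)][0]

print("Betrayal strictly dominates silence for Prisoner A.")
```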

Generalized form

The structure of the traditional Prisoners’ Dilemma can
be analyzed by removing its original prisoner setting. Suppose that the two
players are represented by colors, red and blue, and that each player chooses to
either "Cooperate" or "Defect".

If both players play "Cooperate" they both get the payoff A. If Blue plays
"Defect" while Red plays "Cooperate" then Blue gets B while Red gets C.
Symmetrically, if Blue plays "Cooperate" while Red plays "Defect" then Blue gets
payoff C while Red gets payoff B. If both players play "Defect" they both get
the payoff D.

In terms of general point values:

Canonical PD payoff matrix

          | Cooperate | Defect
Cooperate | A, A      | C, B
Defect    | B, C      | D, D

To be a prisoner's dilemma, the following must be true:

B > A > D > C

The condition A > D means that the "Both Cooperate" outcome is better for each
player than the "Both Defect" outcome, while B > A and D > C together make
defection the strictly better choice against either opponent move, so the
"Both Defect" outcome is the one which will actually result.
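The same conditions can be verified programmatically. The following Python
sketch uses illustrative values of our own choosing that satisfy
B > A > D > C, and confirms that "Defect" is the unique best response to
either opponent move, making (Defect, Defect) the only Nash equilibrium even
though mutual cooperation pays more.

```python
from itertools import product

# Example values satisfying the prisoner's dilemma ordering B > A > D > C.
A, B, C, D = 3, 5, 0, 1

# Row player's payoff for (own move, opponent's move).
payoff = {("C", "C"): A, ("C", "D"): C, ("D", "C"): B, ("D", "D"): D}

def best_responses(opp):
    top = max(payoff[(m, opp)] for m in "CD")
    return {m for m in "CD" if payoff[(m, opp)] == top}

# "Defect" is the unique best response to either opponent move...
assert all(best_responses(opp) == {"D"} for opp in "CD")

# ...so (D, D) is the only Nash equilibrium of the symmetric game,
# even though mutual cooperation pays A > D to each player.
nash = [(r, c) for r, c in product("CD", repeat=2)
        if r in best_responses(c) and c in best_responses(r)]
print(nash)  # [('D', 'D')]
```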

It is not necessary for a Prisoner's Dilemma to be strictly symmetric as in the
above example, merely that the choices which are individually optimal (and
strongly dominant) result in an equilibrium which is socially inferior.

The iterated prisoners' dilemma

If two players play prisoners' dilemma
more than once in succession and they remember previous actions of their
opponent and change their strategy accordingly, the game is called iterated
prisoners' dilemma.

In addition to the general form above, the iterative version also requires that
2A > B + C, to prevent alternating cooperation and defection giving a greater
reward than mutual cooperation.
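A quick numeric check shows why this extra condition matters: a pair that
alternates defection and cooperation earns B + C per player over two rounds,
which must fall short of the 2A earned by sustained mutual cooperation. The
values below are illustrative choices of ours.

```python
# Why the iterated game also requires 2A > B + C: over two rounds, a
# pair that alternates (Defect, Cooperate) and (Cooperate, Defect)
# earns each player B + C in total, versus 2A for mutual cooperation.
A, B, C, D = 3, 5, 0, 1
assert B > A > D > C          # prisoner's dilemma ordering
assert 2 * A > B + C          # 6 > 5: alternating does not pay
print((B + C) / 2, "per round by alternating vs", A, "by cooperating")
```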

The iterated prisoners' dilemma game is fundamental to certain theories of human
cooperation and trust. On the assumption that the game can model transactions
between two people requiring trust, cooperative behaviour in populations may be
modeled by a multi-player, iterated, version of the game. It has, consequently,
fascinated many scholars over the years. In 1975, Grofman and Pool estimated the
count of scholarly articles devoted to it at over 2,000. The iterated prisoners'
dilemma has also been referred to as the "Peace-War game".

If the game is played exactly N times and both players know this, then it is
always game theoretically optimal to defect in all rounds. The only possible
Nash equilibrium is to always defect. The proof is inductive: one might as well
defect on the last turn, since the opponent will not have a chance to punish the
player. Therefore, both will defect on the last turn. Thus, the player might as
well defect on the second-to-last turn, since the opponent will defect on the
last no matter what is done, and so on. The same applies if the game length is
unknown but has a known upper limit.

Unlike the standard prisoners' dilemma, in the iterated prisoners' dilemma the
defection strategy is counter-intuitive and fails badly to predict the behavior
of human players. Within standard economic theory, though, this is the only
correct answer. The superrational strategy in the iterated prisoners' dilemma
with fixed N is to cooperate against a superrational opponent, and in the limit
of large N, experimental results on strategies agree with the superrational
version, not the game-theoretic rational one.

For cooperation to emerge between game theoretic rational players, the total
number of rounds N must be random, or at least unknown to the players. In this
case always defect may no longer be a strictly dominant strategy, only a Nash
equilibrium. Among the results shown by Robert Aumann in a 1959 paper is that
rational players repeatedly interacting in indefinitely long games can sustain
the cooperative outcome.

Strategy for the iterated prisoners' dilemma

Interest in the iterated
prisoners' dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution
of Cooperation (1984). In it he reports on a tournament he organized of the
N-step prisoners' dilemma (with N fixed) in which participants choose their
moves again and again and have memory of their previous encounters. Axelrod
invited academic colleagues all over the world to devise
computer strategies to compete in an IPD tournament. The programs that were
entered varied widely in algorithmic complexity, initial hostility, capacity for
forgiveness, and so forth.

Axelrod discovered that when these encounters were repeated over a long period
of time with many players, each with different strategies, greedy strategies
tended to do very poorly in the long run while more altruistic strategies did
better, as judged purely by self-interest. He used this to show a possible
mechanism for the evolution of altruistic behaviour from mechanisms that are
initially purely selfish, by natural selection.

The best deterministic strategy was found to be tit for tat, which Anatol
Rapoport developed and entered into the tournament. It was the simplest of any
program entered, containing only four lines of BASIC, and won the contest. The
strategy is simply to cooperate on the first iteration of the game; after that,
the player does what his or her opponent did on the previous move. Depending on
the situation, a slightly better strategy can be "tit for tat with forgiveness."
When the opponent defects, on the next move, the player sometimes cooperates
anyway, with a small probability (around 1–5%). This allows for occasional
recovery from getting trapped in a cycle of defections. The exact probability
depends on the line-up of opponents.
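Both strategies are short enough to state as code. The sketch below is a
minimal Python rendering of the description above; the function names and the
1% forgiveness probability are illustrative choices, not Rapoport's original
four-line BASIC program.

```python
import random

def tit_for_tat(my_history, opp_history):
    """Cooperate on the first move; thereafter copy the opponent's
    previous move."""
    return "C" if not opp_history else opp_history[-1]

def tit_for_tat_with_forgiveness(my_history, opp_history, p_forgive=0.01):
    """Like tit for tat, but after an opponent defection cooperate
    anyway with a small probability, allowing recovery from a cycle
    of mutual defections."""
    if not opp_history:
        return "C"
    if opp_history[-1] == "D" and random.random() < p_forgive:
        return "C"
    return opp_history[-1]
```

Passing the full move histories rather than just the last move is a design
choice: it lets the same interface serve strategies that need longer memory.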

By analysing the top-scoring strategies, Axelrod stated several conditions
necessary for a strategy to be successful.

Nice

The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; a purely selfish strategy therefore has purely self-interested reasons never to "cheat" on its opponent first.

Retaliating

However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players.

Forgiving

Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.

Non-envious

The last quality is being non-envious, that is not striving to score more than
the opponent (note that a "nice" strategy can never score more than the
opponent).

The optimal (points-maximizing) strategy for the one-time PD game is simply
defection; as explained above, this is true whatever the composition of
opponents may be. However, in the iterated-PD game the optimal strategy depends
upon the strategies of likely opponents, and how they will react to defections
and cooperations. For example, consider a population where everyone defects
every time, except for a single individual following the tit for tat strategy.
That individual is at a slight disadvantage because of the loss on the first
turn. In such a population, the optimal strategy for that individual is to
defect every time. In a population with a certain percentage of always-defectors
and the rest being tit for tat players, the optimal strategy for an individual
depends on the percentage, and on the length of the game.

A strategy called Pavlov (an example of Win-Stay, Lose-Switch) cooperates at the
first iteration and whenever the player and co-player did the same thing at the
previous iteration; Pavlov defects when the player and co-player did different
things at the previous iteration. For a certain range of parameters, Pavlov
beats all other strategies by giving preferential treatment to co-players which
resemble Pavlov.
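Pavlov's rule is equally compact. Below is a minimal Python sketch of the
description above (again our rendering, not a canonical implementation),
following the same history-based interface as the tit for tat sketch earlier.

```python
def pavlov(my_history, opp_history):
    """Win-Stay, Lose-Shift: cooperate on the first move and whenever
    both players did the same thing last round; defect when the two
    previous moves differed."""
    if not my_history:
        return "C"
    return "C" if my_history[-1] == opp_history[-1] else "D"
```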

Deriving the optimal strategy is generally done in two ways:

Bayesian Nash Equilibrium: If the statistical distribution of opposing
strategies can be determined (e.g. 50% tit for tat, 50% always cooperate) an
optimal counter-strategy can be derived analytically.

Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit for tat players (see for instance Chess 1988), but there is no analytic proof that this will always occur.
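The second approach can be illustrated with a toy simulation. The sketch below
is a simplification of our own, not any published experiment: payoffs, round
counts, and the survive-and-double reproduction rule are arbitrary, and there
is no mutation step. It runs a round-robin iterated PD over a mixed population
and keeps the top-scoring half each generation; with these particular
parameters the population converges to tit for tat.

```python
A, B, C, D = 3, 5, 0, 1   # payoffs satisfying B > A > D > C and 2A > B + C
PAYOFF = {("C", "C"): (A, A), ("C", "D"): (C, B),
          ("D", "C"): (B, C), ("D", "D"): (D, D)}

def play_match(s1, s2, rounds=50):
    """Total scores for two strategies over one iterated game."""
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        total1 += p1; total2 += p2
        h1.append(m1); h2.append(m2)
    return total1, total2

def always_defect(mine, theirs): return "D"
def always_cooperate(mine, theirs): return "C"
def tit_for_tat(mine, theirs): return "C" if not theirs else theirs[-1]

def next_generation(population):
    """Round-robin tournament; the top-scoring half each leave two
    offspring (a crude stand-in for a genetic algorithm, no mutation)."""
    totals = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = play_match(population[i], population[j])
            totals[i] += si; totals[j] += sj
    ranked = sorted(range(len(population)), key=lambda i: -totals[i])
    survivors = [population[i] for i in ranked[: len(population) // 2]]
    return survivors + survivors

population = [always_defect] * 10 + [always_cooperate] * 10 + [tit_for_tat] * 10
for _ in range(20):
    population = next_generation(population)
print({s.__name__ for s in population})  # converges to {'tit_for_tat'} here
```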

Although tit for tat is considered to be the most robust basic strategy, a team
from Southampton University in England (led by Professor Nicholas Jennings
and consisting of Rajdeep Dash, Sarvapali Ramchurn, Alex Rogers, Perukrishnen
Vytelingum) introduced a new strategy at the 20th-anniversary iterated
prisoners' dilemma competition, which proved to be more successful than tit for
tat. This strategy relied on cooperation between programs to achieve the highest
number of points for a single program. The University submitted 60 programs to
the competition, which were designed to recognize each other through a series of
five to ten moves at the start. Once this recognition was made, one program
would always cooperate and the other would always defect, assuring the maximum
number of points for the defector. If the program realized that it was playing a
non-Southampton player, it would continuously defect in an attempt to minimize
the score of the competing program. As a result,[8] this strategy ended up
taking the top three positions in the competition, as well as a number of
positions towards the bottom.

This strategy takes advantage of the fact that multiple entries were allowed in
this particular competition and that the performance of a team was measured by
that of the highest-scoring player (meaning that the use of self-sacrificing
players was a form of minmaxing). In a competition where one has control of only
a single player, tit for tat is certainly a better strategy. Because of this new
rule, this competition also has little theoretical significance when analysing
single agent strategies as compared to Axelrod's seminal tournament. However, it
provided the framework for analysing how to achieve cooperative strategies in
multi-agent frameworks, especially in the presence of noise. In fact, long
before this new-rules tournament was played, Richard Dawkins in his book The
Selfish Gene pointed out the possibility of such strategies winning if multiple
entries were allowed, but he remarked that most probably Axelrod would not have
allowed them if they had been submitted. The strategy also relied on
circumventing the prisoners' dilemma's rule that no communication is allowed
between the two players: when the Southampton programs engaged in their
opening "ten move dance" to recognize one another, they only reinforced how
valuable communication can be in shifting the balance of the game.

Continuous iterated prisoners' dilemma

Most work on the iterated
prisoners' dilemma has focused on the discrete case, in which players either
cooperate or defect, because this model is relatively simple to analyze.
However, some researchers have looked at models of the continuous iterated
prisoners' dilemma, in which players are able to make a variable contribution to
the other player. Le and Boyd[9] found that in such situations, cooperation is
much harder to evolve than in the discrete iterated prisoners' dilemma. The
basic intuition for this result is straightforward: in a continuous prisoners'
dilemma, if a population starts off in a non-cooperative equilibrium, players
who are only marginally more cooperative than non-cooperators get little benefit
from assorting with one another. By contrast, in a discrete prisoners' dilemma,
tit for tat cooperators get a big payoff boost from assorting with one another
in a non-cooperative equilibrium, relative to non-cooperators. Since nature
arguably offers more opportunities for variable cooperation rather than a strict
dichotomy of cooperation or defection, the continuous prisoners' dilemma may
help explain why real-life examples of tit for tat-like cooperation are
extremely rare in nature (e.g., Hammerstein), even though tit for tat seems
robust in theoretical models.
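The intuition can be made concrete with a toy payoff scheme. The sketch below
is not Le and Boyd's model, just a minimal continuous payoff of our own: a
contribution level x in [0, 1] costs the giver COST * x and delivers
BENEFIT * x to the partner. It shows that near-zero cooperators gain almost
nothing from assorting, while full cooperators gain a lot.

```python
# A deliberately minimal continuous prisoners' dilemma payoff, for
# illustration only (Le and Boyd's actual model is richer). Each player
# chooses a cooperation level x in [0, 1]; contributing costs COST * x
# and delivers BENEFIT * x to the partner, with BENEFIT > COST.
BENEFIT, COST = 2.0, 1.0

def payoff(my_level, opp_level):
    return BENEFIT * opp_level - COST * my_level

# Marginal cooperators assorting with one another barely beat a pair of
# pure defectors...
print(payoff(0.05, 0.05) - payoff(0.0, 0.0))   # 0.05
# ...whereas in the discrete game, paired tit-for-tat players jump
# straight to the full mutual-cooperation payoff.
print(payoff(1.0, 1.0) - payoff(0.0, 0.0))     # 1.0
```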

Real-life examples

These particular examples, involving prisoners and bag
switching and so forth, may seem contrived, but there are in fact many examples
in human interaction as well as interactions in nature that have the same payoff
matrix. The prisoner's dilemma is therefore of interest to the social sciences
such as economics, politics and sociology, as well as to the biological sciences
such as ethology and evolutionary biology. Many natural processes have been
abstracted into models in which living beings are engaged in endless games of
prisoner's dilemma. This wide applicability of the PD gives the game its
substantial importance.

In environmental studies

In environmental studies, the PD is evident in crises such as
global climate change. All countries will benefit from a stable climate, but any
single country is often hesitant to curb CO2 emissions. The immediate benefit
to an individual country of maintaining current behavior is perceived to be
greater than the eventual benefit to all countries if behavior were changed,
which explains the current impasse concerning climate change.

In psychology

In addiction research/behavioral economics, George Ainslie
points out that addiction can be cast as an intertemporal PD problem between
the present and future selves of the addict. In this case, defecting means
relapsing, and it is easy to see that not defecting both today and in the future
is by far the best outcome, and that defecting both today and in the future is
the worst outcome. The case where one abstains today but relapses in the future
is clearly a bad outcome—in some sense the discipline and self-sacrifice
involved in abstaining today have been "wasted" because the future relapse means
that the addict is right back where he started and will have to start over
(which is quite demoralizing, and makes starting over more difficult). The final
case, where one engages in the addictive behavior today while abstaining
"tomorrow" will be familiar to anyone who has struggled with an addiction. The
problem here is that (as in other PDs) there is an obvious benefit to defecting
"today", but tomorrow one will face the same PD, and the same obvious benefit
will be present then, ultimately leading to an endless string of defections.

John Gottman, in the research described in his book The Science of Trust,
defines good relationships as those where partners know not to enter the (D,D)
cell, or at least not to get dynamically stuck there in a loop.

In economics

Advertising is sometimes cited as a real life example of the
prisoner’s dilemma. When cigarette advertising was legal in the United States,
competing cigarette manufacturers had to decide how much money to spend on
advertising. The effectiveness of Firm A’s advertising was partially determined
by the advertising conducted by Firm B. Likewise, the profit derived from
advertising for Firm B is affected by the advertising conducted by Firm A. If
both Firm A and Firm B chose to advertise during a given period the advertising
cancels out, receipts remain constant, and expenses increase due to the cost of
advertising. Both firms would benefit from a reduction in advertising. However,
should Firm B choose not to advertise, Firm A could benefit greatly by
advertising. Nevertheless, the optimal amount of advertising by one firm depends
on how much advertising the other undertakes. As the best strategy is dependent
on what the other firm chooses there is no dominant strategy, which makes it
slightly different than a prisoner's dilemma. The outcome is similar, though, in
that both firms would be better off were they to advertise less than in the
equilibrium. Sometimes cooperative behaviors do emerge in business situations.
For instance, cigarette manufacturers endorsed the creation of laws banning
cigarette advertising, understanding that this would reduce costs and increase
profits across the industry. This analysis is likely to be pertinent in many
other business situations involving advertising.

Another example of the prisoner's dilemma in economics is competition-oriented
objectives. When firms are aware of the activities of their competitors,
they tend to pursue policies that are designed to oust their competitors as
opposed to maximizing the performance of the firm. This approach impedes the
firm from functioning at its maximum capacity because it limits the scope of the
strategies employed by the firms.

Without enforceable agreements, members of a cartel are also involved in a
(multi-player) prisoners' dilemma. 'Cooperating' typically means keeping
prices at a pre-agreed minimum level. 'Defecting' means selling under this
minimum level, instantly taking business (and profits) from other cartel
members. Anti-trust authorities want potential cartel members to mutually
defect, ensuring the lowest possible prices for consumers.

Multiplayer dilemmas

Many real-life dilemmas involve multiple players.
Although metaphorical, Hardin's tragedy of the commons may be viewed as an
example of a multi-player generalization of the PD: Each villager makes a choice
for personal gain or restraint. The collective reward for unanimous (or even
frequent) defection is very low payoffs (representing the destruction of the
"commons"). The commons are not always exploited: William Poundstone, in a book
about the prisoner's dilemma (see References below), describes a situation in
New Zealand where newspaper boxes are left unlocked. It is possible for people
to take a paper without paying (defecting) but very few do, feeling that if they
do not pay then neither will others, destroying the system. Subsequent research
by Elinor Ostrom, winner of the 2009 Sveriges Riksbank Prize in Economic
Sciences in Memory of Alfred Nobel, hypothesized that the tragedy of the commons
is oversimplified, with the negative outcome influenced by outside influences.
Without complicating pressures, groups communicate and manage the commons among
themselves for their mutual benefit, enforcing social norms to preserve the
resource and achieve the maximum good for the group, an example of effecting the
best case outcome for PD.

The Cold War

The Cold War can be modelled as a Prisoner's Dilemma situation. During the
Cold War the opposing alliances of NATO and the Warsaw Pact both had the
choice to arm or disarm. From each side's point of view:
Disarming whilst your opponent continues to arm would have led to military
inferiority and possible annihilation. If both sides chose to arm, neither could
afford to attack each other, but at the high cost of maintaining and developing
a nuclear arsenal. If both sides chose to disarm, war would be avoided and there
would be no costs. If your opponent disarmed while you continue to arm, then you
achieve superiority.

Although the 'best' overall outcome is for both sides to disarm, the rational
course for both sides is to arm. This is indeed what happened, and both sides
poured enormous resources into military research and armament for the next
thirty years until the dissolution of the Soviet Union broke the deadlock.

Related games

Closed-bag exchange

Douglas Hofstadter once suggested that people often find problems such as the
PD easier to understand when illustrated in the form of a simple game, or
trade-off. One of several examples he used was "closed bag exchange":

Two people meet and exchange closed bags, with the understanding that one of
them contains money, and the other contains a purchase. Either player can choose
to honor the deal by putting into his or her bag what he or she agreed, or he or
she can defect by handing over an empty bag.

In this game, defection is always the best course, implying that rational
agents will never play. However, in this case both players cooperating and
both players defecting give the same result, assuming there are no gains from
trade, so the chances of mutual cooperation, even in repeated games, are slim.

Friend or Foe?

Friend or Foe? is a game show that aired from 2002 to 2005
on the Game Show Network in the United States. It is an example of the
prisoner's dilemma game tested by real people, but in an artificial setting. On
the game show, three pairs of people compete. As each pair is eliminated, it
plays a game similar to the prisoner's dilemma to determine how the winnings are
split. If they both cooperate (Friend), they share the winnings 50–50. If one
cooperates and the other defects (Foe), the defector gets all the winnings and
the cooperator gets nothing. If both defect, both leave with nothing. Notice
that the payoff matrix is slightly different from the standard one given above,
as the payouts for the "both defect" and the "cooperate while the opponent
defects" cases are identical. This makes the "both defect" case a weak
equilibrium, compared with being a strict equilibrium in the standard prisoner's
dilemma. If you know your opponent is going to vote Foe, then your choice does
not affect your winnings. In a certain sense, Friend or Foe has a payoff model
between prisoner's dilemma and the game of Chicken.

The payoff matrix is

          | Cooperate | Defect
Cooperate | 1, 1      | 0, 2
Defect    | 2, 0      | 0, 0

This payoff matrix has also been used on the British television programmes Trust
Me, Shafted, The Bank Job and Golden Balls. The latter show has been analyzed by
a team of economists. See: Split or Steal? Cooperative Behavior When the Stakes
are Large.
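The weak-equilibrium point can be checked directly from the Friend or Foe
matrix above; the small Python check below is our encoding of it.

```python
# Row player's payoff for (own move, opponent's move), from the
# Friend or Foe matrix above.
payoff = {("C", "C"): 1, ("C", "D"): 0, ("D", "C"): 2, ("D", "D"): 0}

# Against a cooperator, defecting is strictly better (2 > 1)...
assert payoff[("D", "C")] > payoff[("C", "C")]
# ...but against a defector the two moves tie (0 == 0), so "both
# defect" is only a weak equilibrium rather than a strict one.
assert payoff[("D", "D")] == payoff[("C", "D")]
```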

Iterated Snowdrift

A modified version of the PD changes the payoff matrix to reduce the risk a cooperator runs when the partner defects. This may better reflect real-world scenarios: "For example two scientists collaborating on a report would benefit if the other worked harder. But when your collaborator doesn’t do any work, it’s probably better for you to do all the work yourself. You’ll still end up with a completed project."

Example Snowdrift Payouts (A, B)

             | A cooperates | A defects
B cooperates | 200, 200     | 300, 0
B defects    | 0, 300       | 0, 0

Example PD Payouts (A, B)

             | A cooperates | A defects
B cooperates | 200, 200     | 300, -100
B defects    | -100, 300    | 0, 0
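The difference between the two matrices shows up in how much a cooperator
loses against a defecting partner. The comparison below is our encoding of
Player A's payoffs from the two example tables above.

```python
# Player A's payoff for (A's move, B's move), read off the two example
# tables above.
snowdrift = {("C", "C"): 200, ("C", "D"): 0,    ("D", "C"): 300, ("D", "D"): 0}
pd        = {("C", "C"): 200, ("C", "D"): -100, ("D", "C"): 300, ("D", "D"): 0}

def cost_of_cooperating_against_defector(payoff):
    """A's loss, relative to defecting, when B defects."""
    return payoff[("D", "D")] - payoff[("C", "D")]

print(cost_of_cooperating_against_defector(snowdrift))  # 0: cooperation is safe
print(cost_of_cooperating_against_defector(pd))         # 100: cooperation is risky
```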
