Applications of Game Theory in Networking and
Complexity Theory
Karl Lassy
University of Connecticut
December 2, 2015
Abstract
Game theory has been a brilliant way of modeling social situations, especially in
economics. In the past 20 years, game theory has also made a large impact in the field
of computer science. The ability to model and analyze problems in game theoretic
terms has significantly helped in analyzing outcomes and finding solutions to problems
which would otherwise be unsolvable. This paper explores several applications of game
theory in computer science.
In this paper, we introduce the fundamental concepts of game theory and explore
two major concepts in computer science which extend notions of game theory. The
first, computer networking, shows how game theory helps us plan ahead in order to
prevent and minimize the harm of cyber attacks. The second concept, Yao’s principle,
shows how we can use game theory to find relations in the complexities of deterministic
and randomized algorithms.
1 Introduction
Game theory is the study of strategic interaction between parties. Game theory has been a prevalent
field in mathematics since the 18th century, and more recently its concepts have
been extended to topics within computer science. In this paper we will first
present the fundamental definitions of basic game theory. Following this, we will explore a particular
application of game theory in network security. Finally, we will show how game theory allows
us to find a relation between deterministic and randomized algorithms. These extensions of
game theory concepts into computer science ultimately demonstrate the importance of the field.
While the power of game theory is often taken for granted, we show that certain problems
would be unsolvable without the ability to represent them in game-theoretic terms.
2 History
The roots of game theory date back to as early as 1713. Francis Waldegrave sent a letter
describing a minimax mixed-strategy solution to the card game le Her. However, it wasn't
until the 20th century that significant advances in game theory took place. In the years
1921-1927, Émile Borel published four notes on strategic games. He gave the first formulation
of a mixed strategy and gave minimax solutions to two-person games with three and five
strategies. Unable to solve for any other number of strategies, he initially claimed
that finding a minimax solution for more strategies was impossible. Later, in 1927, he considered it to be
possible but was unable to prove or disprove it. In 1928, John von Neumann proved the
minimax theorem in his paper, Zur Theorie der Gesellschaftsspiele. To this day, this proof
is considered one of the greatest feats in game theory. The minimax theorem states that
every two-person zero-sum game with a finite number of pure strategies has precisely one
pay-off vector. The proof of the minimax theorem has led to various discoveries, including Yao's
principle, which will be discussed in depth later on. So far, twelve game theorists have earned
a Nobel Prize. One of them, John Nash, is considered one of the most influential
contributors to the field. In 1950, Nash proposed the Nash equilibrium, an outcome of a game
in which no player can improve their payoff by unilaterally changing their strategy. We will
discuss the Nash equilibrium in more depth later on. While the roots of game theory
have been around as early as the 18th century, the extension to computer science is still in
its infancy. [9]
3 Preliminaries
We will define various fundamental concepts which will help create a foundation to better
understand the applications of game theory. Formally, a game is a situation
between some number of agents in which each agent has a choice of actions. The action of each
agent not only affects the other agents in the situation but also contributes to the overall
outcome of the situation. While game theory as a whole studies these situations in their
entirety, the main interest of those studying the theory is usually the strategic aspect behind
them.
3.1 Game Models
3.1.1 Extensive Form
Extensive form is the ideal way to represent a game when a visual representation helps in understanding the game.
Definition 1. An extensive form of a game contains a complete description of the set of
players, who moves when, the players’ choices, the information the players have, and the
payoff function for each of the players’ choices. [4]
Typically, the extensive form of a game will be much more informal and is helpful in
understanding the game but impractical to use in mathematical equations.
3.1.2 Normal Form
When we give a formal definition of a game, we will likely use normal form to represent the
game as it is very concise and can be easily interpreted.
Definition 2. A normal-form game G is a tuple (P, A, u), where P is a finite set of players,
A contains the sets of actions available to each player, and u is the utility function of the
game. [8]
To be more precise, we can think of A as a set containing the action sets A_1, ..., A_i, ..., A_n,
where A_i is the set of actions available to player i. In the same way, we can define u
as a set of functions u_1, ..., u_i, ..., u_n, where u_i : A → R determines the payoff to player i of each action
vector a ∈ A.
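As an illustration of this definition, a small normal-form game can be written out directly. The following is a minimal sketch of a hypothetical two-player game; the actions ("C", "D") and all payoff numbers are invented for illustration and do not come from the paper.

```python
# Sketch of Definition 2: a hypothetical two-player normal-form game
# G = (P, A, u) stored as explicit Python data. Actions and payoffs are
# invented for illustration.
P = [0, 1]                    # the finite set of players
A = [["C", "D"], ["C", "D"]]  # A_i: the action set of player i

# u maps each joint action a = (a_0, a_1) to a payoff vector (u_0, u_1)
u = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def payoff(i, a):
    """The utility u_i(a) of joint action a for player i."""
    return u[a][i]

print(payoff(0, ("D", "C")))  # prints 5
```

Representing u as a dictionary keyed by action vectors mirrors the formal definition: the payoff function is defined on the joint action set, not on individual players' choices in isolation.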
3.2 Game Types
There are two particular types of games we will look at, cooperative games and non-
cooperative games. In a cooperative game, agents can work together or in a coalition to
produce the best outcome for all. Non-cooperative games are games in which agents must
act purely out of self-interest, without knowing what actions the other agents
will take. A game representing a car dealership would be a cooperative game, as the
salesman works with the potential buyer to reach a deal that is optimal for both of
them. The majority of games we see in the real world are non-cooperative, as many
social situations require the agents in a game to make independent decisions. [7]
3.2.1 Cold War Model
An example of a non-cooperative game is a model of the Cold War. We can think of the
United States and the Soviet Union as the two players. Both players have the same two
options: they can choose to be either passive or aggressive. We can create a matrix to
represent all possible outcomes of the game. [7]
Figure 1: Visualization of the outcomes for all combinations of actions
Each player ranks each of the four outcomes. We call a player's strategy a
dominant strategy if it results in a better payoff regardless of the opponent's strategy. In
this game, choosing to be aggressive is a dominant strategy for both players because
it offers the best outcome for the lowest risk. The resulting outcome, in which both players
are aggressive, is known as a Nash Equilibrium. [7]
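The dominant-strategy reasoning above can be checked mechanically. The sketch below encodes a hypothetical payoff matrix for the Cold War game (the numbers are illustrative assumptions, since the paper's Figure 1 gives only ranked outcomes) and searches it for dominant strategies and pure-strategy Nash equilibria.

```python
from itertools import product

# Hypothetical payoffs for the Cold War game: each entry maps
# (US action, USSR action) to (US payoff, USSR payoff). The numbers are
# invented, chosen only so that "aggressive" dominates "passive".
actions = ["passive", "aggressive"]
u = {
    ("passive", "passive"):       (3, 3),
    ("passive", "aggressive"):    (0, 4),
    ("aggressive", "passive"):    (4, 0),
    ("aggressive", "aggressive"): (1, 1),
}

def is_dominant(player, s):
    """True if strategy s is at least as good as every alternative for
    `player` against every opponent strategy."""
    alternatives = [a for a in actions if a != s]
    for opp in actions:
        for alt in alternatives:
            prof_s = (s, opp) if player == 0 else (opp, s)
            prof_alt = (alt, opp) if player == 0 else (opp, alt)
            if u[prof_s][player] < u[prof_alt][player]:
                return False
    return True

def pure_nash():
    """All pure-strategy profiles where no unilateral deviation helps."""
    equilibria = []
    for prof in product(actions, repeat=2):
        stable = all(
            u[prof][p] >= u[(d, prof[1]) if p == 0 else (prof[0], d)][p]
            for p in (0, 1) for d in actions
        )
        if stable:
            equilibria.append(prof)
    return equilibria

print(is_dominant(0, "aggressive"))  # True
print(pure_nash())                   # [('aggressive', 'aggressive')]
```

With these payoffs, the only profile stable against unilateral deviation is (aggressive, aggressive), matching the outcome described above.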
3.3 Nash Equilibrium
Definition 3. A Nash Equilibrium is an outcome in a game such that a unilateral change
by any player will result in a worse payoff for that player. [7]
Before we define the Nash equilibrium mathematically, we will define a game G. Let G =
(P, A, u) be a non-cooperative game. In this case,
• P is the finite set of players
• A_i is the set of actions for player i
• A = ⊗_{i=1}^{n} A_i is the set of joint actions
A mixed strategy for player i is a probability distribution over the action set A_i, and we write
δ(A_i) for the set of such distributions. As a result, the set of joint mixed strategies can be denoted

    δ(A) = ⊗_{i=1}^{n} δ(A_i).
A joint mixed strategy z ∈ δ(A) is a tuple of n vectors z_1, ..., z_n, where each vector z_i is a
probability distribution over the actions of player i. With this information, we can
define the utility of a joint mixed strategy z for player i by

    u_i(z) = E_{a~z}[u_i(a)]
           = Σ_{a∈A} z(a) · u_i(a)
           = Σ_{a∈A} ( Π_{j=1}^{n} z_j(a_j) ) · u_i(a)    (1)
Now that we have defined all the components of G, we can define the Nash equilibrium
mathematically.
Definition 4. A joint mixed strategy z* ∈ δ(A) is a Nash equilibrium if and only if ∀i ∈ P
and ∀y_i ∈ δ(A_i),

    u_i(z*) ≥ u_i(z*_{-i}, y_i),

where (z*_{-i}, y_i) denotes z* with player i's strategy replaced by y_i. [6]
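Definition 4 can be verified numerically for a concrete game. The sketch below uses matching pennies, a standard textbook example not taken from the paper, and checks that the uniform joint mixed strategy satisfies the equilibrium inequality. Checking pure deviations suffices, since any mixed deviation is a convex combination of pure ones and so cannot beat the best pure deviation.

```python
from itertools import product

# Numerical check of Definition 4 for matching pennies (a standard
# textbook game, NOT from the paper): player 0 wins when the coins match,
# player 1 wins when they differ. Its unique Nash equilibrium is the
# uniform joint mixed strategy z* = ((1/2, 1/2), (1/2, 1/2)).
actions = [0, 1]  # heads / tails

def u(i, a):
    """Utility u_i(a) of the joint pure action a = (a0, a1)."""
    match = 1 if a[0] == a[1] else -1
    return match if i == 0 else -match

def expected_u(i, z):
    """u_i(z) = sum_a (prod_j z_j(a_j)) * u_i(a), as in Equation (1)."""
    return sum(z[0][a0] * z[1][a1] * u(i, (a0, a1))
               for a0, a1 in product(actions, repeat=2))

z_star = [(0.5, 0.5), (0.5, 0.5)]

# Verify u_i(z*) >= u_i(z*_{-i}, y_i) for every pure deviation y_i.
for i in (0, 1):
    base = expected_u(i, z_star)
    for a in actions:
        deviation = list(z_star)
        deviation[i] = tuple(1.0 if b == a else 0.0 for b in actions)
        assert expected_u(i, deviation) <= base + 1e-12
print("z* satisfies Definition 4")
```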
4 Network Security Solutions using Game Theory
Cybersecurity is evolving faster than ever in response to the increase in cyber
attacks. As computer systems become larger and more powerful, cyber attacks are
taking larger financial tolls on companies. With the cost of cyber attacks growing,
companies are taking whatever preventative measures are necessary to stop future attacks.
Recently, game models have been used to analyze the strengths of network systems. We
can imagine a cybersecurity attack as a two person non-cooperative game with incomplete
information. Player P1 is the hacker, and player P2 is the cybersecurity team. Because the
gains of P1 will not always be equal to the costs of P2, a stochastic game is best suited for
the scenario.
Definition 5. A two-player stochastic game is a tuple (S, A_1, A_2, Q, R_1, R_2, τ), where
• S = {β_1, ..., β_n} is the set of states.
• A_i = {a^i_1, ..., a^i_n} is the finite set of actions available to player i.
• Q : S × A_1 × A_2 × S → [0, 1] is the state transition function.
• R_i : S × A_1 × A_2 → R is the return function for player i.
• 0 < τ ≤ 1 is the discount factor: a reward earned at the current state is worth its full
value, while a reward earned one transition later is worth only the fraction τ of its value
at the current state.
We will also need to define a set of probability vectors to represent the probability of
a given action φ at a particular state s. We will represent it as follows.

    P^n = { p ∈ R^n | Σ_{i=1}^{n} p_i = 1, p_i ≥ 0 }    (2)
We denote by π^k : S → P^{M_k} a strategy for player k, where M_k is the number of actions
available to player k. At a state s, the probability that player k chooses some action φ is
written π^k(s, φ). These values can be stored in a vector for each state such that

    π^k(s) = ( π^k(s, φ_1), ..., π^k(s, φ_{M_k}) ).    (3)
Having defined a strategy, we can distinguish between mixed and pure strategies based on
their probability values: in a mixed strategy, π^k(s, φ) < 1 for every action φ, while in a pure
strategy, π^k(s, φ) = 1 for exactly one action. Having defined the stochastic game, the main
goal in applying it to computer networks is to determine the Nash equilibrium of the game in
order to determine the optimal action for P2 against any attack by P1. At any given time t,
the game is in state s_t, and the return to player k at time t is r^k_t. We can then express the
expected return as the vector v^k_{π1,π2}, where

    v^k_{π1,π2} = ( v^k_{π1,π2}(β_1), ..., v^k_{π1,π2}(β_n) )    (4)
and for each state s ∈ S = {β_1, ..., β_n},

    v^k_{π1,π2}(s) = E_{π1,π2}{ r^k_t + τ r^k_{t+1} + τ^2 r^k_{t+2} + ... + τ^H r^k_{t+H} }    (5)
                   = E_{π1,π2}{ Σ_{h=0}^{H} τ^h r^k_{t+h} }    (6)
Note that H represents the boundaries of the game and will typically equal ∞ since we
consider the game of cybersecurity an ongoing conflict. Using the non-linear program
NLP-1 created by Filar and Vrieze, we can compute the Nash equilibrium of the stochastic game.
By using this general approach, we can determine the best response by the cybersecurity
team to any cyber attack as long as we have a way of classifying the attacks. The most useful
way to classify attacks is by the time of recovery for the network. Knowing this information
can significantly help cybersecurity teams prepare in advance for attacks and can minimize
the harm done by attacks. [5]
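For a fixed, already-realized sequence of returns r_t, ..., r_{t+H}, the expectation in Equations (5) and (6) collapses to a finite discounted sum. A minimal sketch, with invented reward values and an invented discount factor:

```python
# Sketch of the discounted sum inside Equations (5)-(6). The rewards and
# discount factor tau are illustrative values, not from the paper.
def discounted_return(rewards, tau):
    """v = sum_{h=0}^{H} tau^h * r_{t+h} over a finite horizon H."""
    return sum((tau ** h) * r for h, r in enumerate(rewards))

rewards = [10.0, 10.0, 10.0]            # r_t, r_{t+1}, r_{t+2}
print(discounted_return(rewards, 0.5))  # 10 + 5 + 2.5 = 17.5
```

With τ < 1 the sum converges even as the horizon H grows toward infinity, which is why the infinite-horizon case mentioned above is well defined.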
5 Game Theory for Algorithm Analysis
5.1 Preliminaries
The ability to model outcomes using game theory has been helpful in various areas of com-
puter science. One particular application is in the study of computational complexity. Com-
putational complexity deals with studying the runtime and memory usage of algorithms.
Algorithms studied in complexity theory are typically categorized by the way they solve a
given problem. In order to better understand the application of game theory, we will
define two types of algorithms. Algorithm A solves some problem p by producing the same
output every time it is run. Algorithm B also solves problem p, but its output is
unpredictable and varies between runs. We call A a deterministic algorithm and B a random-
ized algorithm. Because the output of B is nondeterministic, it is often difficult to compare
the complexities of randomized algorithms. Modeling using game theory can help solve this
problem. In 1977, Andrew Yao introduced the idea of comparing the complexities of de-
terministic algorithms to randomized algorithms [3]. To do this, Yao utilized the minimax
theorem, a fundamental theorem of game theory discovered by John von Neumann. We will
define a payoff matrix P and two sets of mixed strategies X and Y for players a and b, respectively.
5.1.1 John von Neumann Minimax Theorem
Theorem 1 (John von Neumann Minimax Theorem). Given an m × n payoff matrix P for
a two-player zero-sum game, there exists a pair of optimal mixed strategies for both
players.
We will formally define the value of a mixed strategy pair (x, y) as

    V(x, y) := Σ_{i=1}^{m} Σ_{j=1}^{n} x_i p_{i,j} y_j.    (7)
Note that in Equation 7, p_{i,j} represents the value in the payoff matrix P at location (i, j).
Having defined V(x, y), we can now define an equilibrium point.
Definition 6. Given a pair of mixed strategies (x̂, ŷ), the pair is considered an equilibrium
point if and only if

    max_{x∈X_m} V(x, ŷ) = V(x̂, ŷ) = min_{y∈Y_n} V(x̂, y).    (8)

[1]
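Equations (7) and (8) can be evaluated directly for a small zero-sum game. The sketch below uses matching pennies as an illustrative payoff matrix (a standard example, not taken from the paper) and confirms that the uniform strategy pair is an equilibrium point; checking pure deviations suffices because V is linear in each player's strategy.

```python
# Sketch of Equation (7): V(x, y) = sum_i sum_j x_i * p_ij * y_j, the
# bilinear form over the payoff matrix P. P holds the row player's
# payoffs for matching pennies; values are illustrative.
P = [[ 1.0, -1.0],
     [-1.0,  1.0]]

def V(x, y):
    return sum(x[i] * P[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

x_hat = [0.5, 0.5]
y_hat = [0.5, 0.5]

# Equation (8): at an equilibrium point the row player cannot raise V
# and the column player cannot lower it. Pure deviations suffice since
# V is linear in each argument.
pure = [[1.0, 0.0], [0.0, 1.0]]
row_best = max(V(e, y_hat) for e in pure)
col_best = min(V(x_hat, e) for e in pure)
assert row_best == V(x_hat, y_hat) == col_best == 0.0
print("equilibrium verified, value", V(x_hat, y_hat))
```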
5.2 Yao’s Minimax Principle
Andrew Yao extended von Neumann's minimax theorem to the comparison of algorithms.
We will define A_n to be the finite set of all deterministic algorithms which solve an arbitrary
problem p on the set of inputs I_n. We will let k_n represent the cost, or runtime, of an
algorithm in A_n on some input in I_n, such that

    k_n : A_n × I_n → N.
We denote the runtime of algorithm A ∈ A_n on input I ∈ I_n as k(A, I). Note that Yao's
principle does not directly define a randomized algorithm. However, given the set of all
deterministic algorithms A_n for a problem p, a randomized algorithm R can be represented by
a probability distribution σ_n on A_n. Using this representation, we can define R as

    R = Prob_{σ_n}[A].    (9)
In game-theoretic terms, we will define a game G = (A, I, k). G will be a two-player zero-
sum game with incomplete information. The first player, P1, selects an algorithm from the set
A. The second player, P2, selects an input from the set I. P1 tries to minimize the cost of
running the selected algorithm on the selected input, while P2 tries to maximize it. We
can think of G as a matrix game where each location (i, j) in the matrix contains the value
k(A_i, I_j). Each player's pure strategies are simply the elements of A and I, respectively.
We create a random strategy by applying probability distributions φ and τ to the sets A and
I, respectively, which yields random selections A_φ and I_τ. The expected cost of a random
selection can be represented as

    E[k(A_φ, I_τ)] = Σ_{(a,i)∈A×I} Prob_φ[a] · Prob_τ[i] · k(a, i).    (10)
Applying von Neumann's theorem, and recalling that P1 minimizes the cost while P2 maximizes
it, we see that

    min_φ max_τ E[k(A_φ, I_τ)] = max_τ min_φ E[k(A_φ, I_τ)].    (11)
Because the inner optimization against a fixed mixed strategy is always attained at a pure
strategy, we can rewrite Equation 11 in the following form:

    min_φ max_{i∈I} E[k(A_φ, i)] = max_τ min_{a∈A} E[k(a, I_τ)].    (12)
As a corollary to this reformulation, we can write
Corollary 1.1. Let G = (A, I, k) be a game in which A_φ and I_τ are mixed strategies. Then

    min_{a∈A} E[k(a, I_τ)] ≤ max_{i∈I} E[k(A_φ, i)].    (13)

Equation 13 shows that the worst-case expected runtime of any randomized algorithm is at
least the expected runtime of the best (lowest-cost) deterministic algorithm against any fixed
input distribution. [2]
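Corollary 1.1 can be illustrated numerically on a small, made-up cost matrix k(a, i); the values below are invented purely for illustration. The sketch checks the inequality over a grid of probability distributions for both players.

```python
# Numerical sketch of Corollary 1.1 on an invented 2x2 cost matrix
# k[a][i] (rows: deterministic algorithms, columns: inputs).
k = [[1.0, 4.0],
     [3.0, 2.0]]
algs, inputs = range(2), range(2)

def lhs(tau):
    """min_a E[k(a, I_tau)]: best deterministic cost against input
    distribution tau."""
    return min(sum(tau[i] * k[a][i] for i in inputs) for a in algs)

def rhs(phi):
    """max_i E[k(A_phi, i)]: worst-case expected cost of the randomized
    algorithm given by distribution phi over the deterministic ones."""
    return max(sum(phi[a] * k[a][i] for a in algs) for i in inputs)

# The inequality lhs(tau) <= rhs(phi) must hold for EVERY pair of
# distributions; sample a grid of them.
grid = [(p / 10, 1 - p / 10) for p in range(11)]
assert all(lhs(tau) <= rhs(phi) for tau in grid for phi in grid)
print("Yao's inequality holds on all sampled distribution pairs")
```

This is exactly how the corollary is used in practice: exhibiting one hard input distribution τ gives, via lhs(τ), a lower bound on the worst-case expected cost of every randomized algorithm.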
6 Conclusion
In this paper, we explored two problems in computer science which can be solved using
game modeling. The first problem in the realm of cybersecurity used games to represent the
ongoing conflict between hackers and cybersecurity teams. By finding the Nash equilibrium
for the game, we defined a strategy for producing the optimal result for the security team for
any given cyber attack. Following this, we described a method for comparing randomized
and deterministic algorithms. To model this situation, we described a game in which one
player chooses an input, and another player chooses a deterministic algorithm. This game
represents a randomized algorithm, and from it we were able to develop relations between
a randomized algorithm and an equivalent deterministic algorithm for some problem p. Ul-
timately, we have shown that in roughly twenty years, game theory has led to significant
discoveries in computer science. The applications discussed in this paper attest to the signifi-
cance of game theory. As we look to the future, game theory will be beneficial to evolving
fields in computer science.
References
[1] Von Neumann minimax theorem. http://www.math.udel.edu/~angell/minimax.pdf.
[2] Yao's minimax principle. http://math.uni-heidelberg.de/logic/md/lehre/ra08-yaominimax-41.pdf. [Online; accessed 1-December-2015].
[3] B. Kapron, An introduction to game theory. University of Connecticut Math Club Talk, 2015.
[4] J. Levin, Extensive form games.
[5] K.-W. Lye and J. M. Wing, Game strategies in network security, International Journal of Information Security, 4 (2005), pp. 71–86.
[6] Y. Mansour, Computational game theory, lecture 6: April 25, 2006. http://www.math.tau.ac.il/~mansour/course_games/2006/lecture6.pdf.
[7] M. Minn-Thu-Aye, Andrew Chi-Chih Yao. http://amturing.acm.org/award_winners/yao_1611524.cfm, 2000.
[8] N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, Algorithmic game theory, vol. 1, Cambridge University Press, 2007.
[9] P. Walker, A chronology of game theory.