Markov Theory
1. MARKOV THEORY
DEFINITION 3.1:
A stochastic process, {X(t), t ∈ T}, is a collection of random
variables. That is, for each t ∈ T, X(t) is a random variable. The
index t is often referred to as time and, as a result, we refer to X(t)
as the state of the process at time t. The set T is called the index
set of the process.
DEFINITION 3.2:
When T is a countable set, the stochastic process is said to be a
discrete-time process. If T is an interval of the real line, the
stochastic process is said to be a continuous-time process.
DEFINITION 3.3:
The state space of a stochastic process is defined as the set of
all possible values that the random variables X(t) can assume.
THUS, A STOCHASTIC PROCESS IS A FAMILY OF RANDOM
VARIABLES THAT DESCRIBES THE EVOLUTION THROUGH
TIME OF SOME (PHYSICAL) PROCESS.
MARKOV THEORY EDGAR L. DE CASTRO PAGE 1
2. DISCRETE-TIME PROCESSES
DEFINITION 3.4:
An epoch is a point in time at which the system is observed. The
states correspond to the possible conditions observed. A
transition is a change of state. A record of the observed states
through time is called a realization of the process.
DEFINITION 3.5:
A transition diagram is a pictorial map in which the states are
represented by points and transitions by arrows.
TRANSITION DIAGRAM FOR THREE STATES
3. DEFINITION 3.6:
The process of transition can be visualized as a random walk of
the particle over the transition diagram. A virtual transition is
one where the new state is the same as the old. A real transition
is a genuine change of state.
THE RANDOM WALK MODEL
Consider a discrete-time process whose state space is given by the
integers i = 0, ±1, ±2, .... The discrete-time process is said to
be a random walk if, for some number 0 < p < 1,

$$P_{i,i+1} = p = 1 - P_{i,i-1}, \quad i = 0, \pm 1, \pm 2, \ldots$$

The random walk may be thought of as a model for an
individual walking on a straight line who at each point of time
either takes one step to the right with probability p or one step to
the left with probability 1 - p.
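The random walk model above can be simulated directly. A minimal sketch (the function name and parameters are illustrative, not from the notes):

```python
import random

def simulate_random_walk(p, steps, start=0, seed=1):
    """Simulate a random walk: +1 with probability p, -1 with probability 1 - p."""
    rng = random.Random(seed)
    position = start
    path = [position]
    for _ in range(steps):
        position += 1 if rng.random() < p else -1
        path.append(position)
    return path

path = simulate_random_walk(p=0.5, steps=10)
```

Each such path is one realization of the process in the sense of Definition 3.4.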
4. THE MARKOV CHAIN
DEFINITION 3.7:
A Markov chain is a discrete-time stochastic process in which
the current state of each random variable X_i depends only on the
previous state. The word chain suggests the linking of the random
variables to their immediately adjacent neighbors in the sequence.
Markov is the Russian mathematician who developed the process
around the beginning of the 20th century.
TRANSITION PROBABILITY (P_ij) - the probability of a
transition from state i to state j after one period.

TRANSITION MATRIX (P) - the matrix of transition
probabilities.
$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{bmatrix}$$
5. ASSUMPTIONS OF THE MARKOV CHAIN
1. THE MARKOV ASSUMPTION
The knowledge of the state at any time is sufficient to predict
the future of the process. Or, given the present, the future is
independent of the past and the process is "forgetful."
2. THE STATIONARITY ASSUMPTION
The probability mechanism is assumed to be stable.
CHAPMAN-KOLMOGOROV EQUATIONS
Let $P_{ij}^{(n)}$ = the n-step transition probability, i.e., the probability
that a process in state i will be in state j after n
additional transitions.

$$P_{ij}^{(n)} = P\{X_{n+m} = j \mid X_m = i\}, \quad n \ge 0,\ i, j \ge 0$$

The Chapman-Kolmogorov equations provide a method for
calculating these n-step transition probabilities.
$$P_{ij}^{(n+m)} = \sum_{k=0}^{\infty} P_{ik}^{(n)} P_{kj}^{(m)}, \quad \text{for all } n, m \ge 0, \text{ all } i, j$$
Formally, we derive:
6. $$P_{ij}^{(n+m)} = P\{X_{n+m} = j \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\}\, P\{X_n = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} P_{kj}^{(m)} P_{ik}^{(n)}$$
If we let $P^{(n)}$ denote the matrix of n-step transition probabilities
$P_{ij}^{(n)}$, then

$$P^{(n+m)} = P^{(n)} \cdot P^{(m)}$$

where the dot represents matrix multiplication. Hence, in
particular:

$$P^{(2)} = P^{(1+1)} = P \cdot P = P^2$$
And by induction:
$$P^{(n)} = P^{(n-1+1)} = P^{(n-1)} \cdot P = P^n$$
That is, the n-step transition matrix is obtained by multiplying
matrix P by itself n times. Therefore the N-step transition matrix is
given by:
7. $$P^{(N)} = \begin{bmatrix} P_{11}^{(N)} & P_{12}^{(N)} & \cdots & P_{1n}^{(N)} \\ P_{21}^{(N)} & P_{22}^{(N)} & \cdots & P_{2n}^{(N)} \\ \vdots & \vdots & \ddots & \vdots \\ P_{n1}^{(N)} & P_{n2}^{(N)} & \cdots & P_{nn}^{(N)} \end{bmatrix}$$
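The identity $P^{(N)} = P^N$ can be checked numerically by repeated multiplication. A minimal sketch, using a hypothetical two-state transition matrix (not from the notes):

```python
def mat_mul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step_matrix(P, n):
    """n-step transition matrix: P multiplied by itself n times."""
    result = P
    for _ in range(n - 1):
        result = mat_mul(result, P)
    return result

# illustrative two-state chain (states 0 and 1)
P = [[0.9, 0.1],
     [0.5, 0.5]]
P2 = n_step_matrix(P, 2)  # two-step transition probabilities
```

Note that every row of $P^{(N)}$ still sums to 1, as any transition matrix must.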
FIRST PASSAGE AND FIRST RETURN
PROBABILITIES

Let $f_{ij}^{(N)}$ = first passage probability
= probability of reaching state j from state i for
the first time in N steps.
$f_{ii}^{(N)}$ = first return probability if i = j

$$f_{ij}^{(N)} = P\{X_N = j, X_{N-1} \ne j, X_{N-2} \ne j, \ldots, X_1 \ne j \mid X_0 = i\}$$

$$f_{ij}^{(1)} = p_{ij}$$

$$f_{ij}^{(N)} = P_{ij}^{(N)} - \sum_{k=1}^{N-1} f_{ij}^{(k)} P_{jj}^{(N-k)}$$
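The recursion for $f_{ij}^{(N)}$ translates directly into code. A sketch using an illustrative two-state matrix (not from the notes):

```python
def mat_mul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def first_passage(P, i, j, N):
    """f_ij^(N) = P_ij^(N) - sum_{k=1}^{N-1} f_ij^(k) * P_jj^(N-k)."""
    powers = {1: P}                    # powers[n] holds the n-step matrix P^n
    for n in range(2, N + 1):
        powers[n] = mat_mul(powers[n - 1], P)
    f = {1: P[i][j]}                   # base case: f^(1) = p_ij
    for n in range(2, N + 1):
        f[n] = powers[n][i][j] - sum(f[k] * powers[n - k][j][j]
                                     for k in range(1, n))
    return f[N]

P = [[0.9, 0.1],
     [0.5, 0.5]]
# first reach state 1 from state 0 in exactly 2 steps: stay, then move = 0.9 * 0.1
f2 = first_passage(P, 0, 1, 2)
```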
8. CLASSIFICATION OF STATES

For fixed i and j, the $f_{ij}^{(N)}$ are nonnegative numbers such that

$$\sum_{N=1}^{\infty} f_{ij}^{(N)} \le 1$$

When the sum does equal 1, the $f_{ij}^{(N)}$ can be considered as a
probability distribution for the random variable: first passage time.
If i = j and

$$\sum_{N=1}^{\infty} f_{ii}^{(N)} = 1$$

then state i is called a recurrent state, because this condition
implies that once the process is in state i, it will return to state i.
A special case of the recurrent state is the absorbing state. A state
is said to be an absorbing state if the one-step transition
probability $p_{ii} = 1$. Thus, if a state is absorbing, the process will
never leave once it enters.
If

$$\sum_{N=1}^{\infty} f_{ii}^{(N)} < 1$$

then state i is called a transient state, because this condition implies
that once the process is in state i, there is a strictly positive
probability that it will never return to i.
9. Let $M_{ij}$ = expected first passage time from i to j

$$M_{ij} = \begin{cases} \infty & \text{if } \displaystyle\sum_{N=1}^{\infty} f_{ij}^{(N)} < 1 \\[6pt] \displaystyle\sum_{N=1}^{\infty} N f_{ij}^{(N)} & \text{if } \displaystyle\sum_{N=1}^{\infty} f_{ij}^{(N)} = 1 \end{cases}$$
[$M_{ij}$ exists only if the states are recurrent]
Whenever

$$\sum_{N=1}^{\infty} f_{ij}^{(N)} = 1,$$

then

$$M_{ij} = 1 + \sum_{k \ne j} p_{ik} M_{kj}$$
When j = i, the expected first passage time is called the first
recurrence time. If $M_{ii} = \infty$, it is called a null recurrent
state. If $M_{ii} < \infty$, it is called a positive recurrent state. In a
finite Markov chain, there are no null recurrent states (only
positive recurrent states and transient states).
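For a small chain, the equations $M_{ij} = 1 + \sum_{k \ne j} p_{ik} M_{kj}$ can be solved by hand. A sketch for an illustrative two-state chain (the matrix is an assumption, not from the notes):

```python
# illustrative two-state chain
P = [[0.9, 0.1],
     [0.5, 0.5]]

# M_01 = 1 + p_00 * M_01  =>  M_01 = 1 / (1 - p_00)
M01 = 1 / (1 - P[0][0])
# M_10 = 1 + p_11 * M_10  =>  M_10 = 1 / (1 - p_11)
M10 = 1 / (1 - P[1][1])
# first recurrence time: M_00 = 1 + p_01 * M_10
M00 = 1 + P[0][1] * M10
```

For this matrix M_00 = 1.2, which agrees with the classical identity $M_{ii} = 1/\pi_i$ for ergodic chains (here $\pi_0 = 5/6$).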
10. State j is accessible from state i if $P_{ij}^{(n)} > 0$ for some $n \ge 0$. If j is
accessible from i and i is accessible from j, then the two states
communicate. In general:
(1) any state communicates with itself.
(2) if state i communicates with state j, then state j communicates
with state i.
(3) if state i communicates with state j and state j communicates
with state k, then state i communicates with state k.
If all states communicate, the Markov chain is irreducible. In a
finite Markov chain, the members of a class are either all transient
states or all positive recurrent states. A state i is said to have
period t (t > 1) if $P_{ii}^{(n)} = 0$ whenever n is not divisible by t, and t
is the largest integer with this property. If a state has period 1, it
is called an aperiodic state. If state i in a class is aperiodic, then
all states in the class are aperiodic. Positive recurrent states that
are aperiodic are called ergodic states.
11. ERGODIC MARKOV CHAINS

STEADY STATE PROBABILITIES (LIMITING
PROBABILITIES)

Let $\pi_j = \lim_{N \to \infty} P_{ij}^{(N)}$

As N grows large:

$$P^N \to \begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_n \\ \pi_1 & \pi_2 & \cdots & \pi_n \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_n \end{bmatrix}$$

As long as the process is ergodic, such a limit exists.

$$P^{(N)} = P^{(N-1)} \cdot P$$

$$\lim_{N \to \infty} P^{(N)} = \lim_{N \to \infty} P^{(N-1)} \cdot P$$

$$\begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_n \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_n \end{bmatrix} = \begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_n \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_n \end{bmatrix} \cdot P$$

$$\pi = \pi \cdot P \qquad \text{or, transposed,} \qquad \pi^T = P^T \cdot \pi^T$$

[This system possesses an infinite number of solutions.]
12. The normalizing equation

$$\sum_{\text{all } i} \pi_i = 1$$

is used to identify the one solution which will qualify as a
probability distribution.
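The steady-state vector can be approximated by iterating $\pi \leftarrow \pi \cdot P$, which converges for ergodic chains. A sketch with an illustrative two-state matrix (an assumption, not from the notes):

```python
def steady_state(P, iters=200):
    """Approximate the limiting distribution by repeatedly applying pi <- pi * P."""
    n = len(P)
    pi = [1.0 / n] * n                   # any starting distribution works
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = steady_state(P)
```

Solving $\pi = \pi P$ with $\pi_1 + \pi_2 = 1$ by hand gives $\pi = (5/6,\ 1/6)$ for this matrix, matching the iteration.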
ABSORBING MARKOV CHAINS

Let

$$P = \left[\begin{array}{cccc|ccc} p_{11} & p_{12} & \cdots & p_{1k} & p_{1,k+1} & \cdots \\ p_{21} & p_{22} & \cdots & p_{2k} & p_{2,k+1} & \cdots \\ \vdots & \vdots & & \vdots & \vdots & \\ p_{k1} & p_{k2} & \cdots & p_{kk} & p_{k,k+1} & \cdots \\ \hline 0 & 0 & \cdots & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 1 \end{array}\right]$$

The partitioned matrix is given by:

$$P = \begin{bmatrix} Q & R \\ \mathbf{0} & I \end{bmatrix}$$

where Q holds the transition probabilities among the k transient
states, R holds the transition probabilities from the transient states
to the absorbing states, $\mathbf{0}$ is a zero matrix, and I is an identity
matrix.
13. Let $e_{ij}$ = mean number of times that transient state j is occupied,
given that the initial state is i, before absorption

E = corresponding matrix

Then,

$$i \ne j: \quad e_{ij} = \sum_{v=1}^{k} p_{iv} e_{vj}$$

$$i = j: \quad e_{ij} = 1 + \sum_{v=1}^{k} p_{iv} e_{vj}$$
In matrix form:

$$E = I + QE$$
$$E - QE = I$$
$$(I - Q)E = I$$
$$E = (I - Q)^{-1}$$

Let $d_i$ = expected total number of transitions until absorption

$$d_i = \sum_{j=1}^{k} e_{ij}$$
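$E = (I - Q)^{-1}$ can be computed with a small Gauss-Jordan inverse. A sketch for the symmetric random walk on {0, 1, 2, 3} with absorbing barriers at 0 and 3 (an illustrative chain, not from the notes); the transient states are 1 and 2:

```python
def mat_inverse(M):
    """Gauss-Jordan inverse of a small square matrix (no pivoting;
    adequate for small well-conditioned matrices)."""
    n = len(M)
    A = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        pivot = A[col][col]
        A[col] = [x / pivot for x in A[col]]
        for r in range(n):
            if r != col:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Q: transitions among transient states 1 and 2 (step left/right w.p. 1/2)
Q = [[0.0, 0.5],
     [0.5, 0.0]]
I2 = [[1.0, 0.0],
      [0.0, 1.0]]
E = mat_inverse([[I2[i][j] - Q[i][j] for j in range(2)] for i in range(2)])
d = [sum(row) for row in E]   # expected transitions until absorption
```

From state 1, E gives 4/3 expected visits to state 1 and 2/3 to state 2, so $d_1 = 2$ transitions on average before absorption.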
14. ABSORPTION PROBABILITY - probability of entering an
absorbing state

Let $A_{ij}$ = probability that the process ever enters absorbing state j,
given that the initial state is i.

$$A_{ij} = p_{ij} + \sum_{v=1}^{k} p_{iv} A_{vj}$$

In matrix form:

A = matrix of $A_{ij}$ (not necessarily square)

[where the number of rows is the number of transient states and
the number of columns is the number of absorbing states]

Examining matrix A:

$$A = R + QA$$
$$A - QA = R$$
$$(I - Q)A = R$$
$$A = (I - Q)^{-1} R$$
CONDITIONAL MEAN FIRST PASSAGE TIME - expected number of
transitions which will occur before an absorbing state is entered

$$A_{ij} M_{ij} = A_{ij} + \sum_{k \ne j} p_{ik} A_{kj} M_{kj}$$
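For the same illustrative random walk on {0, 1, 2, 3} with absorbing barriers (an assumed example chain, not from the notes), $A = (I - Q)^{-1} R$ gives the absorption probabilities; the helpers are repeated so the sketch runs on its own:

```python
def mat_mul(A, B):
    """Multiply an (m x n) matrix by an (n x p) matrix, both lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_inverse(M):
    """Gauss-Jordan inverse of a small square matrix (no pivoting)."""
    n = len(M)
    A = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        pivot = A[col][col]
        A[col] = [x / pivot for x in A[col]]
        for r in range(n):
            if r != col:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# transient states 1, 2; absorbing states 0, 3
Q = [[0.0, 0.5],
     [0.5, 0.0]]
R = [[0.5, 0.0],    # from state 1: left into 0, never directly into 3
     [0.0, 0.5]]    # from state 2: right into 3, never directly into 0
IQ = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)] for i in range(2)]
A = mat_mul(mat_inverse(IQ), R)   # A[i][j]: absorb in state j, starting from transient state i
```

From state 1 the walk is absorbed at 0 with probability 2/3 and at 3 with probability 1/3, matching the classical gambler's-ruin result.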