MARKOV THEORY

DEFINITION 3.1:

A stochastic process, {X(t), t ∈ T}, is a collection of random
variables. That is, for each t ∈ T, X(t) is a random variable. The
index t is often referred to as time and, as a result, we refer to X(t)
as the state of the process at time t. The set T is called the index
set of the process.

DEFINITION 3.2:

When T is a countable set, the stochastic process is said to be a
discrete-time process. If T is an interval of the real line, the
stochastic process is said to be a continuous-time process.

DEFINITION 3.3:




The state space of a stochastic process is defined as the set of
all possible values that the random variables X(t) can assume.

THUS, A STOCHASTIC PROCESS IS A FAMILY OF RANDOM
VARIABLES THAT DESCRIBES THE EVOLUTION THROUGH
TIME OF SOME (PHYSICAL) PROCESS.
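A minimal sketch of Definition 3.1 in Python: the two-state process, its state space, and the specific states "up" and "down" are hypothetical illustrations, not from the notes. Each X(t) is a random variable, and the observed sequence is one realization.

```python
import random

random.seed(0)  # fixed seed so the realization is reproducible

# A discrete-time stochastic process {X(t), t in T} with index set
# T = {0, 1, 2, ...} and (assumed) state space {"up", "down"}:
# for each t, X(t) is a random variable.
def X(t):
    """State of the hypothetical process at time t: a fair coin flip."""
    return random.choice(["up", "down"])

# One realization of the process: the record of observed states over time.
realization = [X(t) for t in range(5)]
print(realization)
```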







               MARKOV THEORY                                                                                           EDGAR L. DE CASTRO                                                                  PAGE 1

DISCRETE-TIME PROCESSES

DEFINITION 3.4:

An epoch is a point in time at which the system is observed. The
states correspond to the possible conditions observed. A
transition is a change of state. A record of the observed states
through time is called a realization of the process.


DEFINITION 3.5:

A transition diagram is a pictorial map in which the states are
represented by points and transitions by arrows.




[Figure: transition diagram for three states]




DEFINITION 3.6:

     The process of transition can be visualized as a random walk of
     the particle over the transition diagram. A virtual transition is
     one where the new state is the same as the old. A real transition
     is a genuine change of state.


     THE RANDOM WALK MODEL

     Consider a discrete-time process whose state space is given by the
     integers i = 0, ±1, ±2, ... . The discrete-time process is said to
     be a random walk if, for some number 0 < p < 1,

          P_{i,i+1} = p = 1 - P_{i,i-1},    i = 0, ±1, ±2, ...


     The random walk may be thought of as a model for an
     individual walking on a straight line who, at each point of time,
     either takes one step to the right with probability p or one step to
     the left with probability 1 - p.
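The walk described above can be sketched in a few lines of Python. The parameter values and the fixed seed are arbitrary choices for illustration.

```python
import random

def random_walk(p, n_steps, start=0, seed=42):
    """Simulate a random walk: step +1 with probability p, -1 otherwise."""
    rng = random.Random(seed)
    position = start
    path = [position]
    for _ in range(n_steps):
        position += 1 if rng.random() < p else -1
        path.append(position)
    return path

path = random_walk(p=0.5, n_steps=10)
print(path)  # one realization; every consecutive step differs by exactly 1
```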




THE MARKOV CHAIN

DEFINITION 3.7:

A Markov chain is a discrete-time stochastic process in which
the current state of each random variable X_i depends only on the
previous state. The word chain suggests the linking of the random
variables to their immediately adjacent neighbors in the sequence.
Markov is the Russian mathematician who developed the process
around the beginning of the 20th century.

TRANSITION PROBABILITY (p_ij) - the probability of a
transition from state i to state j after one period.



TRANSITION MATRIX (P) - the matrix of transition
probabilities.

        | p11  p12  ...  p1n |
    P = | p21  p22  ...  p2n |
        |  .    .         .  |
        | pn1  pn2  ...  pnn |
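A transition matrix can be written down directly as a list of rows. The 3-state matrix below is a made-up example; the check that every row is a probability distribution (nonnegative entries summing to 1) applies to any transition matrix.

```python
# Hypothetical 3-state transition matrix: P[i][j] = p_ij is the
# probability of moving from state i to state j in one period.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.4, 0.6],
]

# Every row of a transition matrix must be a probability distribution.
for i, row in enumerate(P):
    assert all(p >= 0 for p in row), f"negative entry in row {i}"
    assert abs(sum(row) - 1.0) < 1e-12, f"row {i} does not sum to 1"
```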

ASSUMPTIONS OF THE MARKOV CHAIN

1. THE MARKOV ASSUMPTION:
   The knowledge of the state at any time is sufficient to predict
   the future of the process. Or, given the present, the future is
   independent of the past and the process is "forgetful."

2. THE STATIONARITY ASSUMPTION:
   The probability mechanism is assumed to be stable.

CHAPMAN-KOLMOGOROV EQUATIONS

Let p_ij^(n) = the n-step transition probability, i.e., the probability
               that a process in state i will be in state j after n
               additional transitions.

     p_ij^(n) = P{X_{n+m} = j | X_m = i},    n ≥ 0, i, j ≥ 0

The Chapman-Kolmogorov equations provide a method for
calculating these n-step transition probabilities.

     p_ij^(n+m) = Σ_{k=0}^{∞} p_ik^(n) p_kj^(m),    n, m ≥ 0, all i, j



     Formally, we derive:




     p_ij^(n+m) = P{X_{n+m} = j | X_0 = i}

                = Σ_{k=0}^{∞} P{X_{n+m} = j, X_n = k | X_0 = i}

                = Σ_{k=0}^{∞} P{X_{n+m} = j | X_n = k, X_0 = i} P{X_n = k | X_0 = i}

                = Σ_{k=0}^{∞} p_kj^(m) p_ik^(n)




If we let P^(n) denote the matrix of n-step transition probabilities
p_ij^(n), then

     P^(n+m) = P^(n) · P^(m)

where the dot represents matrix multiplication. Hence, in
particular:

     P^(2) = P^(1+1) = P · P = P^2

And by induction:

     P^(n) = P^(n-1+1) = P^(n-1) · P = P^n

That is, the n-step transition matrix is obtained by multiplying
matrix P by itself n times. Therefore the N-step transition matrix is
given by:




             | p11^(N)  p12^(N)  ...  p1n^(N) |
    P^(N) =  | p21^(N)  p22^(N)  ...  p2n^(N) |
             |   .        .             .     |
             | pn1^(N)  pn2^(N)  ...  pnn^(N) |
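The identity P^(n+m) = P^(n) · P^(m) is easy to verify numerically. Below is a pure-Python sketch; the 2-state matrix is an invented example, not from the notes.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-step transition matrix P^(n): P multiplied by itself n times."""
    result = P
    for _ in range(n - 1):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1],
     [0.4, 0.6]]

P2, P3, P5 = mat_pow(P, 2), mat_pow(P, 3), mat_pow(P, 5)

# Chapman-Kolmogorov in matrix form: P^(2+3) = P^(2) . P^(3)
lhs, rhs = P5, mat_mul(P2, P3)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Note that each power of P is again a transition matrix: its rows still sum to 1.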

FIRST PASSAGE AND FIRST RETURN PROBABILITIES

Let f_ij^(N) = first passage probability
             = probability of reaching state j from state i for
               the first time in N steps.

     f_ii^(N) = first return probability if i = j

     f_ij^(N) = P{X_N = j, X_{N-1} ≠ j, X_{N-2} ≠ j, ..., X_1 ≠ j | X_0 = i}

     f_ij^(1) = p_ij

     f_ij^(N) = p_ij^(N) - Σ_{k=1}^{N-1} f_ij^(k) p_jj^(N-k)
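The recursion above translates directly into code. This is a sketch under an assumed 2-state example matrix; `first_passage` is a hypothetical helper name.

```python
def first_passage(P, i, j, N_max):
    """f_ij^(N) = p_ij^(N) - sum_{k=1}^{N-1} f_ij^(k) * p_jj^(N-k)."""
    n = len(P)
    # n-step transition matrices P^(1), ..., P^(N_max)
    powers = [P]
    for _ in range(N_max - 1):
        powers.append([[sum(powers[-1][a][c] * P[c][b] for c in range(n))
                        for b in range(n)] for a in range(n)])
    f = []
    for N in range(1, N_max + 1):
        val = powers[N - 1][i][j] - sum(f[k - 1] * powers[N - k - 1][j][j]
                                        for k in range(1, N))
        f.append(val)
    return f

P = [[0.5, 0.5],
     [0.2, 0.8]]
f01 = first_passage(P, 0, 1, 20)

assert abs(f01[0] - 0.5) < 1e-12    # f_01^(1) = p_01
# f_01^(2): stay at state 0 once, then move: 0.5 * 0.5 = 0.25
assert abs(f01[1] - 0.25) < 1e-12
```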




CLASSIFICATION OF STATES


For fixed i and j, the f_ij^(N) are nonnegative numbers such that

     Σ_{N=1}^{∞} f_ij^(N) ≤ 1

When the sum does equal 1, f_ij^(N) can be considered as a
probability distribution for the random variable: first passage time.

If i = j and

     Σ_{N=1}^{∞} f_ii^(N) = 1

then state i is called a recurrent state because this condition
implies that once the process is in state i, it will return to state i.

A special case of the recurrent state is the absorbing state. A state
i is said to be an absorbing state if the one-step transition
probability p_ii = 1. Thus, if a state is absorbing, the process will
never leave once it enters. If

     Σ_{N=1}^{∞} f_ii^(N) < 1

then state i is called a transient state because this condition implies
that once the process is in state i, there is a strictly positive
probability that it will never return to i.
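Absorbing states are the easiest to detect in code: by the definition above, they are exactly the states with p_ii = 1. The 4-state gambler's-ruin style matrix below is a hypothetical example.

```python
def absorbing_states(P, tol=1e-12):
    """States i with p_ii = 1: once entered, the process never leaves."""
    return [i for i in range(len(P)) if abs(P[i][i] - 1.0) < tol]

# Hypothetical chain: states 0 and 3 are absorbing, 1 and 2 are transient.
P = [
    [1.0, 0.0, 0.0, 0.0],
    [0.4, 0.0, 0.6, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.0, 0.0, 1.0],
]
print(absorbing_states(P))
```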

Let M_ij = expected first passage time from i to j

            ∞                        if Σ_{N=1}^{∞} f_ij^(N) < 1
    M_ij =
            Σ_{N=1}^{∞} N f_ij^(N)   if Σ_{N=1}^{∞} f_ij^(N) = 1

    [M_ij exists only if the states are recurrent]

Whenever

     Σ_{N=1}^{∞} f_ij^(N) = 1

then

     M_ij = 1 + Σ_{k≠j} p_ik M_kj
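The equations M_ij = 1 + Σ_{k≠j} p_ik M_kj form a linear system in the unknowns M_ij for fixed j. A minimal sketch, solving it by fixed-point iteration (which converges for a finite ergodic chain); the 2-state matrix is an invented example.

```python
def expected_first_passage(P, j, iters=10000):
    """Solve M_ij = 1 + sum_{k != j} p_ik * M_kj by fixed-point iteration."""
    n = len(P)
    M = [0.0] * n
    for _ in range(iters):
        M = [1.0 + sum(P[i][k] * M[k] for k in range(n) if k != j)
             for i in range(n)]
    return M

P = [[0.5, 0.5],
     [0.2, 0.8]]
M = expected_first_passage(P, j=1)

# By hand: M_01 = 1 + p_00 * M_01  =>  M_01 = 1 / 0.5 = 2
assert abs(M[0] - 2.0) < 1e-9
# M_11 = 1 + p_10 * M_01 = 1 + 0.2 * 2 = 1.4 (the first recurrence time)
assert abs(M[1] - 1.4) < 1e-9
```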


When j = i, the expected first passage time is called the first
recurrence time. If M_ii = ∞, it is called a null recurrent
state. If M_ii < ∞, it is called a positive recurrent state. In a
finite Markov chain, there are no null recurrent states (only
positive recurrent states and transient states).




State j is accessible from state i if p_ij^(n) > 0 for some n ≥ 0. If j is
accessible from i and i is accessible from j, then the two states
communicate. In general:


(1) any state communicates with itself.
(2) if state i communicates with state j, then state j communicates
    with state i.
(3) if state i communicates with state j and state j communicates
    with state k, then state i communicates with state k.


If all states communicate, the Markov chain is irreducible. In a
finite Markov chain, the members of a class are either all transient
states or all positive recurrent states. A state i is said to have
period t (t > 1) if p_ii^(n) = 0 whenever n is not divisible by t, and t
is the largest integer with this property. If a state has period 1, it
is called an aperiodic state. If state i in a class is aperiodic, then
all states in the class are aperiodic. Positive recurrent states that
are aperiodic are called ergodic states.
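Accessibility and communication can be computed as reachability in the one-step transition graph; one standard way is Warshall's transitive-closure algorithm. The 3-state matrix is a made-up reducible example.

```python
def accessible(P, tol=1e-15):
    """reach[i][j] is True iff j is accessible from i
    (p_ij^(n) > 0 for some n >= 0; n = 0 makes every state
    accessible from itself)."""
    n = len(P)
    reach = [[i == j or P[i][j] > tol for j in range(n)] for i in range(n)]
    # Warshall's algorithm: transitive closure of the one-step graph.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Hypothetical chain: {0, 1} communicate; state 2 is isolated from them.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
r = accessible(P)

assert r[0][1] and r[1][0]   # states 0 and 1 communicate
assert not r[0][2]           # state 2 is not accessible from 0
irreducible = all(all(row) for row in r)
assert not irreducible       # so this chain is not irreducible
```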




ERGODIC MARKOV CHAINS

STEADY STATE PROBABILITIES (LIMITING PROBABILITIES)

Let π_j = lim_{N→∞} p_ij^(N)

As N grows large:

            | π1  π2  ...  πn |
    P^N  →  | π1  π2  ...  πn |
            |  .   .        . |
            | π1  π2  ...  πn |

As long as the process is ergodic, such a limit exists.

    P^(N) = P^(N-1) · P

    lim_{N→∞} P^(N) = lim_{N→∞} P^(N-1) · P

    | π1  π2  ...  πn |   | π1  π2  ...  πn |
    |  .   .        . | = |  .   .        . | · P
    | π1  π2  ...  πn |   | π1  π2  ...  πn |

    π = π · P

    π^T = P^T · π^T

[This system possesses an infinite number of solutions.]


The normalizing equation

     Σ_{all i} π_i = 1

is used to identify the one solution which will qualify as a
probability distribution.
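The steady-state vector can be approximated by iterating π ← π · P (mirroring the limit P^N above) and then applying the normalizing equation. A sketch with an invented 2-state ergodic matrix:

```python
def steady_state(P, iters=10000):
    """Approximate pi with pi = pi . P by repeated multiplication, then
    apply the normalizing equation sum_i pi_i = 1."""
    n = len(P)
    pi = [1.0 / n] * n                  # any positive starting vector works
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    total = sum(pi)
    return [x / total for x in pi]

P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = steady_state(P)

# Balance check by hand: 0.1 * pi_0 = 0.4 * pi_1 and pi_0 + pi_1 = 1,
# so pi = (0.8, 0.2).
assert abs(sum(pi) - 1.0) < 1e-12
assert abs(pi[0] - 0.8) < 1e-9
```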

ABSORBING MARKOV CHAINS

Let the states be numbered so that states 1, ..., k are transient and
states k+1, ..., n are absorbing. The transition matrix then takes the
form:

        | p11  p12  ...  p1k | p1,k+1  ...  p1n |
        | p21  p22  ...  p2k | p2,k+1  ...  p2n |
        |  .    .         .  |   .           .  |
    P = | pk1  pk2  ...  pkk | pk,k+1  ...  pkn |
        |--------------------+------------------|
        |  0    0   ...   0  |   1     ...   0  |
        |  .    .         .  |   .           .  |
        |  0    0   ...   0  |   0     ...   1  |

The partitioned matrix is given by:

    P = | Q  R |
        | 0  I |

where Q (k x k) holds the transitions among transient states, R holds
the transitions from transient states to absorbing states, 0 is a zero
matrix, and I is an identity matrix.
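As a sketch with made-up numbers: once the transient states are listed first, the Q and R blocks can be read directly off the full transition matrix.

```python
import numpy as np

# Illustrative 4-state absorbing chain (made-up values):
# states 0, 1 are transient; states 2, 3 are absorbing.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.4, 0.3, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

k = 2                 # number of transient states
Q = P[:k, :k]         # transient -> transient block
R = P[:k, k:]         # transient -> absorbing block
print(Q)
print(R)
```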




Let eij = mean number of times that transient state j is occupied
          before absorption, given that the initial state is i
    E   = the corresponding matrix

Then,

    i ≠ j :  eij = Σ (v = 1 to k) piv evj

    i = j :  eij = 1 + Σ (v = 1 to k) piv evj


In matrix form:

    E = I + QE
    E - QE = I
    (I - Q)E = I
    E = (I - Q)^-1

Let di = expected total number of transitions until absorption, given
that the initial state is transient state i. Then

    di = Σ (j = 1 to k) eij






ABSORPTION PROBABILITY - the probability of entering an absorbing state

Let Aij = probability that the process ever enters absorbing state j,
          given that the initial state is i.

    Aij = pij + Σ (v = 1 to k) piv Avj
In matrix form, let

    A = matrix of the Aij (not necessarily square),

where the number of rows is the number of transient states and the
number of columns is the number of absorbing states.

Examining matrix A:

    A = R + QA
    A - QA = R
    (I - Q)A = R
    A = (I - Q)^-1 R
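A sketch with illustrative Q and R blocks (made-up values); each row of A should sum to 1, since absorption is certain in a finite absorbing chain.

```python
import numpy as np

# Illustrative blocks of an absorbing chain (2 transient, 2 absorbing states).
Q = np.array([[0.5, 0.2],
              [0.1, 0.4]])
R = np.array([[0.2, 0.1],
              [0.3, 0.2]])

k = Q.shape[0]
A = np.linalg.inv(np.eye(k) - Q) @ R   # A = (I - Q)^-1 R
print(A)                               # absorption probabilities
```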

CONDITIONAL MEAN FIRST PASSAGE TIME (Mij) - the expected number of
transitions which will occur before an absorbing state is entered,
given that the process starts in state i and is absorbed in state j.
It satisfies:

    Aij Mij = Aij + Σ (k ≠ j) pik Akj Mkj
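One way to solve the relation above, sketched with the same illustrative Q and R as before: writing uij = Aij Mij, the terms for absorbing states k ≠ j vanish (their Akj = 0), so each column of U satisfies (I - Q)U = A and the Mij come out of the same fundamental matrix.

```python
import numpy as np

# Illustrative blocks of an absorbing chain (2 transient, 2 absorbing states).
Q = np.array([[0.5, 0.2],
              [0.1, 0.4]])
R = np.array([[0.2, 0.1],
              [0.3, 0.2]])

k = Q.shape[0]
E = np.linalg.inv(np.eye(k) - Q)   # fundamental matrix (I - Q)^-1
A = E @ R                          # absorption probabilities
U = E @ A                          # U[i, j] = Aij * Mij, from (I - Q)U = A
M = U / A                          # conditional mean first passage times
print(M)
```

As a consistency check, mixing the conditional means with the absorption probabilities recovers the unconditional mean time to absorption: Σj Aij Mij = di.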





MARKOV THEORY                                                                    EDGAR L. DE CASTRO                                                                                    PAGE 14

                                                                      ..'..',. .
                                                                            .
                                                                          .,
                                                                            .'

Weitere ähnliche Inhalte

Was ist angesagt?

Was ist angesagt? (20)

Top Down and Bottom Up Design Model
Top Down and Bottom Up Design ModelTop Down and Bottom Up Design Model
Top Down and Bottom Up Design Model
 
Chap 4 markov chains
Chap 4   markov chainsChap 4   markov chains
Chap 4 markov chains
 
Markov Chains
Markov ChainsMarkov Chains
Markov Chains
 
Markov Chain and its Analysis
Markov Chain and its Analysis Markov Chain and its Analysis
Markov Chain and its Analysis
 
Industrial management
Industrial managementIndustrial management
Industrial management
 
Markov presentation
Markov presentationMarkov presentation
Markov presentation
 
Roles of project managers in oe
Roles of project managers in oeRoles of project managers in oe
Roles of project managers in oe
 
Uniform Distribution
Uniform DistributionUniform Distribution
Uniform Distribution
 
GAUSS ELIMINATION METHOD
 GAUSS ELIMINATION METHOD GAUSS ELIMINATION METHOD
GAUSS ELIMINATION METHOD
 
MIS: Project Management Systems
MIS: Project Management SystemsMIS: Project Management Systems
MIS: Project Management Systems
 
4 Phases of Project Management Cycle
4 Phases of Project Management Cycle4 Phases of Project Management Cycle
4 Phases of Project Management Cycle
 
Project Management Life Cycle
Project Management Life CycleProject Management Life Cycle
Project Management Life Cycle
 
Hidden Markov Model
Hidden Markov Model Hidden Markov Model
Hidden Markov Model
 
Waterfall Methodology
Waterfall MethodologyWaterfall Methodology
Waterfall Methodology
 
Hidden markov model ppt
Hidden markov model pptHidden markov model ppt
Hidden markov model ppt
 
Markov chain
Markov chainMarkov chain
Markov chain
 
Functional analysis
Functional analysis Functional analysis
Functional analysis
 
Markov process
Markov processMarkov process
Markov process
 
Fixed point iteration
Fixed point iterationFixed point iteration
Fixed point iteration
 
Probability Formula sheet
Probability Formula sheetProbability Formula sheet
Probability Formula sheet
 

Andere mochten auch

Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulationMissAnam
 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulationAnurag Jaiswal
 
Search Engine Marketing
Search Engine Marketing Search Engine Marketing
Search Engine Marketing Mehul Rasadiya
 
Monte Carlo Simulations
Monte Carlo SimulationsMonte Carlo Simulations
Monte Carlo Simulationsgfbreaux
 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulationRajesh Piryani
 

Andere mochten auch (7)

Markov chain
Markov chainMarkov chain
Markov chain
 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
 
Monte carlo
Monte carloMonte carlo
Monte carlo
 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
 
Search Engine Marketing
Search Engine Marketing Search Engine Marketing
Search Engine Marketing
 
Monte Carlo Simulations
Monte Carlo SimulationsMonte Carlo Simulations
Monte Carlo Simulations
 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
 

Ähnlich wie Markov theory

1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)pmloscholte
 
Standard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues LetterStandard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues LetterVishal Gondal
 
Process flow map
Process flow mapProcess flow map
Process flow mapadimak
 
Process flow map
Process flow map Process flow map
Process flow map adimak
 

Ähnlich wie Markov theory (6)

Mock DUI
Mock DUIMock DUI
Mock DUI
 
Bag filters
Bag filtersBag filters
Bag filters
 
1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)
 
Standard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues LetterStandard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues Letter
 
Process flow map
Process flow mapProcess flow map
Process flow map
 
Process flow map
Process flow map Process flow map
Process flow map
 

Mehr von De La Salle University-Manila

Chapter3 general principles of discrete event simulation
Chapter3   general principles of discrete event simulationChapter3   general principles of discrete event simulation
Chapter3 general principles of discrete event simulationDe La Salle University-Manila
 

Mehr von De La Salle University-Manila (20)

Queueing theory
Queueing theoryQueueing theory
Queueing theory
 
Queueing theory
Queueing theoryQueueing theory
Queueing theory
 
Queuing problems
Queuing problemsQueuing problems
Queuing problems
 
Verfication and validation of simulation models
Verfication and validation of simulation modelsVerfication and validation of simulation models
Verfication and validation of simulation models
 
Markov exercises
Markov exercisesMarkov exercises
Markov exercises
 
Game theory problem set
Game theory problem setGame theory problem set
Game theory problem set
 
Game theory
Game theoryGame theory
Game theory
 
Decision theory Problems
Decision theory ProblemsDecision theory Problems
Decision theory Problems
 
Decision theory handouts
Decision theory handoutsDecision theory handouts
Decision theory handouts
 
Sequential decisionmaking
Sequential decisionmakingSequential decisionmaking
Sequential decisionmaking
 
Decision theory
Decision theoryDecision theory
Decision theory
 
Decision theory blockwood
Decision theory blockwoodDecision theory blockwood
Decision theory blockwood
 
Decision theory
Decision theoryDecision theory
Decision theory
 
Random variate generation
Random variate generationRandom variate generation
Random variate generation
 
Random number generation
Random number generationRandom number generation
Random number generation
 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
 
Input modeling
Input modelingInput modeling
Input modeling
 
Conceptual modeling
Conceptual modelingConceptual modeling
Conceptual modeling
 
Chapter3 general principles of discrete event simulation
Chapter3   general principles of discrete event simulationChapter3   general principles of discrete event simulation
Chapter3 general principles of discrete event simulation
 
Comparison and evaluation of alternative designs
Comparison and evaluation of alternative designsComparison and evaluation of alternative designs
Comparison and evaluation of alternative designs
 

Kürzlich hochgeladen

How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17Celine George
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...KokoStevan
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxVishalSingh1417
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Shubhangi Sonawane
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docxPoojaSen20
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhikauryashika82
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docxPoojaSen20
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxAreebaZafar22
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.christianmathematics
 

Kürzlich hochgeladen (20)

How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 

Markov theory

  • 1. ], ,oi Mp",RKOVTHEORY 744 ~ ~Q~ '7,,~ U a4dtUlt~"1 .. 14tUe ., ~ '* 'N41foi '/411'8 ti/". DEFINITION 3.1: A stochastic process, {x( t ), t E T}, is a collectionof random variables. That is, for each t E T:. X(t) is a random variable. The index t is often referred to as time and asa result, we refer to X( t) as the state of the process at.time ~..The set T is called the index set of the process. DEFINITION 3.2: When T is a countable set, the stochastic process is said to be a discrete-time process. [f T is an interval of the real line, the stochasticprocess is said to be continuous time- process. DEFINITION: 3.3: . The state space of a stochastic process is defined as the set of all possible values that the random variables X(t) can assulne. THUS, STOCHASTIC A PROCESS ISA fAMILY Of RANDOM VARIABLESTHAT DESCRIBES THE EVOLUTIONTHROUGH , TIME OF SOME (PHYSICAL) PROCESS. 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111III111III11 MARKOV THEORY EDGAR L. DE CASTRO PAGE 1 .. .. ..
  • 2. DISCRETE-TIME PROCESSES DEFINITION 3.4: An epoch is a point in time at which the system is observed. The states correspond ,to the possible conditions observed. A transition is a change of state. A record of the observed states through time is caned a realization of the process. DEFINITION 3.5: A transition diagram is a pictorial map in which the states are represented by points and transition by arrows. o TRANSITION DIAGRAM FOR THREE STATES 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111IIIIII11 MARKOVTHEORY EDGARL. DECASTRO PAGE2 ',., '' .'" ::,.: .... . . ,'.. ,", <
  • 3. DEFINITION 3.6: The process of transition can be visualized as a random walk of the particle over the transition diagram. A virtual transition is one where the new state is the same as the old. A real transition is a genuine ?hange of state. THE RANDOM WALK MODEL Consider a discrete time process whose state space is given by the integers i = O,:f: 1,:f: 2, The discrete time process is said to be a random walk, if for some number 0 < P < 1, lj,i+l = P = 1..1li,i-I i = 0,:1:1,:J:2,. .. The random walk may be thought of as being a model for an individual walking on a straight line who at each point of time either takes one step to the right with probability p and one step to the left with probability 1 - p. I1I11111I11III1111111111111111111111111111111111111111I11I1III11111111111111111111111111111111111111111111111II1111I1111I111111111111111111111111111111111111111111I11I1I1I1IIII1111 MARKOV THEORY EDGAR L. DE CASTRO PAGE 3 " ", . , .. " ,, . t: ,'. . " .,
  • 4. THE MARKOV CHAIN DEFINITION 3.7: A markov chain is a discrete time stochastic process in which the current state of each random variable Xi depends only on the previous state. The word chain suggests the linking of the random variables to their immediately adjacent neighbors in the sequence. Markov is the Russian mathematician who developed the process around the beginning of the 20th century. TRANSITION PROBABILITY (Pij) - the probability of a transition from state i to state j after one period. . TRANSITION MATRIX (P) - the matrix of transition probabilities. PII PI2 ... PIn P22 ... P2n P = P21 . . I . . . . . . . . . . . . PDI Pn2 ... Pnn .. t:",' ,.. " 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 II 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 MARKOV THEORY EDGAR L. DE CASTRO PAGE 4 .. ... .. ...... .'. .', . ',' . . ..".
  • 5. ASSUMPTIONS OF THE MARKOV CHAIN t. THEMARKOV ASSUMPTION, The knowledge of the state at any time is sufficient to predict the future of the process. Or, given the present, the future is independent of the parts and the process is "forgetful." 2. THE STATIONARITYASSUMPTION The probability mechanism is assumed as stable. CHAPMAN-KOLMOGOROV EQUATIONS Let PDI1) = the n step transition probability, i.e., the probability that a process in state i will be in state j after n additional transitions. pJn) = P{Xn+m = jlXnl = i}, n > 0, i,j > 0 The Chapman-Kolmogorov equations provide a method for calculating these n-step transition probabilities. 00 P(n+m) - o - L...pIk rkJ .(n)n(~n)in ' m> 0 all i,J ~' ij k=O -, ' Formally, we derive: 11111111111111111111111111111111111111111111111111111111IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII!IIIIIIIIIIIIIIIIIIIII1II1I11I11III1I1111111111111111111111111111111111111111111111I1111111 MARKOV THEORY EDGAR r..,DE CASTRO PAGE 5 "' . '. . ," " ' . . .: . -.:' ," , ,
  • 6. 00. . LP{Xn+m = j,Xn = kixo = i} k=O 00 = LP{Xn+m = jlXn = k,Xo = i}P{Xn = klXo = i} k=O - " n(m) p ik - .i- rkj 00 (n) k=O . If we let p~n) denote the matrix of n-step transition probabiIities p,(n) then 1] , p(n+m) = p(n) - p(trt) where the dot represents matrix multiplication. Hence, in particular: p(2) = p(l+l) = p- p = p2 And by induction: p(o) = p(n-l+1) = pn-I - p = pO That is, the n-step transition matrix is obtained by multiplying matrix P by itself n times. Therefore the N-step transition matrix is given by: 1II111I1I111III111111111111111111111111I1111111111111111111111111111111111111111111111111111111111111111111111I1I11111I11111II111111111111111111111111111111111111111IIIIIIIIIIilili MARKOV THEORY EDGARL. DECASTRO PAGE 6 0.
  • 7. rp(N) p(N) 12 ... In pCl'J) l 111(N) p(N) ... p(N) p(N) = I P21 . 22 . . 2n . . . . . . . . . FIRST PASSAGE AND FIRST RETURN PROBABILITIES Let f~N) = first passage probability = probability of reaching state j from state i for the first time in N steps. f~N)= first return probability if i = j fi~N) = P{XN = j,XN-I :I:j,XN-2 :I:j,...,Xf :I:jlXo = i} f.(I) = p.. lJ IJ N-l f,(N)= p.(N)- IJ IJ ~ ~ IJ f.(k)p(N-k) JJ k=1 111111111111111111111111111111111111111111111111111111" 11111111111" 11111111111111111111111111111111111" 111111111111111111111111111111111111111""" 11/1111111" 11111111111I11111 fv1ARKOVTHEORY EDGAR L. DE CASTRO PAGE 7 . '. I" .... . ".:' . ...
  • 8. CLAS~IFICATION OF STATES For fixed i andj, the fi~N) are nonnegative numbers such that When the sum does equal 1, fi~N) can be considered as a probability distribution for the random variable: first passage time If i =j and 00 L f(N) - IJ - 1 N=1 then state i is caned a recurrent state because this condition implies that once the process is in state i, it will return to state i. A special case of the recuuent state is the absorbingstate. A state is said to be an absorb,lng state if the one step transition probability Pij = 1. Thus, if a state is absorbing, the process win never leave once it enters. If 00 L f(N) <t N=] 1J then state i is called a transient state because this condition implies that once the process is in state i, there is a strictly positive probability that it will never return to i. I1111II1I11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111I1111111111111111111111111111111111111111I1I1II1II1IIIII11 MARKOV THEORY EDGARL. DECASTRO PAGE 8 '" , .. .:
• 9.
Let M_{ij} = expected first passage time from i to j:

M_{ij} = \infty                              if \sum_{N=1}^{\infty} f_{ij}^{(N)} < 1

M_{ij} = \sum_{N=1}^{\infty} N f_{ij}^{(N)}   if \sum_{N=1}^{\infty} f_{ij}^{(N)} = 1

[M_{ij} exists only if the states are recurrent.]

Whenever

\sum_{N=1}^{\infty} f_{ij}^{(N)} = 1

then

M_{ij} = 1 + \sum_{k \neq j} p_{ik} M_{kj}

When j = i, the expected first passage time is called the first recurrence time. If M_{ii} = \infty, state i is called a null recurrent state; if M_{ii} < \infty, it is called a positive recurrent state. In a finite Markov chain, there are no null recurrent states (only positive recurrent states and transient states).
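For a fixed target state j, the equations M_ij = 1 + Σ_{k≠j} p_ik M_kj are linear in the unknowns M_ij, so they can be solved directly. A sketch, assuming a hypothetical two-state chain:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Solve M_ij = 1 + sum_{k != j} p_ik M_kj for all i at once:
# zeroing column j drops the k = j terms, leaving (I - P~) m = 1.
def mean_first_passage(P, j):
    n = len(P)
    Pt = P.copy()
    Pt[:, j] = 0.0
    return np.linalg.solve(np.eye(n) - Pt, np.ones(n))

m = mean_first_passage(P, 1)
print(m)   # m[1] is the first recurrence time of state 1
```

For this chain the stationary distribution is (4/7, 3/7), and the computed recurrence time of state 1 is 7/3, matching the standard identity M_ii = 1/pi_i.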
• 10.
State j is accessible from state i if p_{ij}^{(n)} > 0 for some n \geq 0. If j is accessible from i and i is accessible from j, then the two states communicate. In general:

(1) any state communicates with itself;
(2) if state i communicates with state j, then state j communicates with state i;
(3) if state i communicates with state j and state j communicates with state k, then state i communicates with state k.

If all states communicate, the Markov chain is irreducible. In a finite Markov chain, the members of a class are either all transient states or all positive recurrent states.

A state i is said to have period t (t > 1) if p_{ii}^{(n)} = 0 whenever n is not divisible by t, and t is the largest integer with this property. If a state has period 1, it is called an aperiodic state. If state i in a class is aperiodic, then all states in the class are aperiodic.

Positive recurrent states that are aperiodic are called ergodic states.
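The period of a state can be computed as the greatest common divisor of all n with p_ii^(n) > 0. A minimal sketch, assuming a toy chain that deterministically alternates between its two states (so each state should have period 2):

```python
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# gcd of all step counts n (up to n_max) at which return to i is possible.
def period(P, i, n_max=20):
    steps = []
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            steps.append(n)
    return reduce(gcd, steps)

print(period(P, 0))   # 2: returns to state 0 only at even steps
```

Truncating at n_max is a practical shortcut; for a finite chain the gcd stabilizes once enough return times have been seen.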
• 11.
ERGODIC MARKOV CHAINS

STEADY-STATE PROBABILITIES (LIMITING PROBABILITIES)

Let

\pi_j = \lim_{N \to \infty} p_{ij}^{(N)}

As N grows large:

P^N \to
\begin{pmatrix}
\pi_1 & \pi_2 & \cdots & \pi_n \\
\pi_1 & \pi_2 & \cdots & \pi_n \\
\vdots & \vdots & & \vdots \\
\pi_1 & \pi_2 & \cdots & \pi_n
\end{pmatrix}

As long as the process is ergodic, such a limit exists. Since

P^{(N)} = P^{(N-1)} \cdot P

taking limits on both sides,

\lim_{N \to \infty} P^{(N)} = \lim_{N \to \infty} P^{(N-1)} \cdot P

gives

\pi = \pi \cdot P, \qquad \text{equivalently} \qquad \pi^T = P^T \cdot \pi^T

[This system by itself possesses an infinite number of solutions.]
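In practice one solves pi = pi P together with the normalizing condition that the pi_i sum to 1 (which picks out the unique probability solution). A sketch with a hypothetical two-state chain:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
n = len(P)

# (P^T - I) pi = 0 is rank-deficient; replace its last equation
# with the normalization row sum(pi) = 1.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                       # stationary distribution
assert np.allclose(pi @ P, pi)  # pi really is a fixed point of P
```

For this chain pi = (4/7, 3/7), which also matches the rows that P^N converges to when raised to a large power.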
• 12.
The normalizing equation

\sum_{i} \pi_i = 1

is used to identify the one solution which will qualify as a probability distribution.

ABSORBING MARKOV CHAINS

Number the states so that the k transient states come first and the absorbing states last:

P =
\begin{pmatrix}
p_{11} & p_{12} & \cdots & p_{1k} & p_{1,k+1} & \cdots \\
p_{21} & p_{22} & \cdots & p_{2k} & p_{2,k+1} & \cdots \\
\vdots &        &        & \vdots & \vdots    &        \\
p_{k1} & p_{k2} & \cdots & p_{kk} & p_{k,k+1} & \cdots \\
0      & 0      & \cdots & 0      & 1         & \cdots \\
\vdots &        &        & \vdots & \vdots    & \ddots \\
0      & 0      & \cdots & 0      & 0         & \cdots
\end{pmatrix}

The partitioned matrix is given by:

P =
\begin{pmatrix}
Q & R \\
0 & I
\end{pmatrix}

where Q holds the transitions among the transient states, R the transitions from transient to absorbing states, 0 is a zero matrix, and I an identity matrix.
• 13.
Let e_{ij} = mean number of times that transient state j is occupied before absorption, given that the initial state is i, and let E be the corresponding matrix. Then:

i \neq j:  e_{ij} = \sum_{v=1}^{k} p_{iv} e_{vj}

i = j:     e_{jj} = 1 + \sum_{v=1}^{k} p_{jv} e_{vj}

In matrix form:

E = I + QE
E - QE = I
(I - Q)E = I
E = (I - Q)^{-1}

Let d_i = expected total number of transitions until absorption:

d_i = \sum_{j=1}^{k} e_{ij}
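The fundamental matrix E = (I - Q)^{-1} is one matrix inversion away once the chain is in the partitioned form. A sketch for a hypothetical chain with two transient states (0, 1) and one absorbing state (2); the Q and R blocks below are illustrative:

```python
import numpy as np

# Partitioned form P = [[Q, R], [0, I]]: Q among transient states,
# R from transient to absorbing states.
Q = np.array([[0.2, 0.5],
              [0.3, 0.4]])
R = np.array([[0.3],
              [0.3]])

E = np.linalg.inv(np.eye(2) - Q)   # e_ij: expected visits to j from i
d = E.sum(axis=1)                  # d_i: expected transitions to absorption
print(E)
print(d)
```

Each row of Q and R together sums to 1, as the corresponding rows of P must; here both transient states take 10/3 steps on average before absorption.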
• 14.
ABSORPTION PROBABILITY - probability of entering an absorbing state

Let A_{ij} = probability that the process ever enters absorbing state j, given that the initial state is i:

A_{ij} = p_{ij} + \sum_{v=1}^{k} p_{iv} A_{vj}

In matrix form, let A be the matrix of the A_{ij} (not necessarily square): the number of rows is the number of transient states and the number of columns is the number of absorbing states. Examining matrix A:

A = R + QA
A - QA = R
(I - Q)A = R
A = (I - Q)^{-1} R

CONDITIONAL MEAN FIRST PASSAGE TIME - number of transitions which will occur before an absorbing state is entered:

A_{ij} M_{ij} = A_{ij} + \sum_{k \neq j} p_{ik} A_{kj} M_{kj}
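The closed form A = (I - Q)^{-1} R can be checked on a small example. A sketch with a hypothetical chain of two transient states (0, 1) and two absorbing states (2, 3); the numbers are illustrative:

```python
import numpy as np

Q = np.array([[0.2, 0.3],
              [0.3, 0.2]])
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])

# A[i, j] = probability of eventual absorption in state j from state i.
A = np.linalg.inv(np.eye(2) - Q) @ R
print(A)

# In a finite absorbing chain, absorption is certain, so rows sum to 1.
assert np.allclose(A.sum(axis=1), 1.0)
```

By symmetry of this particular Q and R, each transient state is more likely to be absorbed into "its own" absorbing state (probability 8/11) than into the other (3/11).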