Introduction and State Space Models
            Reminder on some Monte Carlo methods
                 Particle Markov Chain Monte Carlo
                                            SMC2




    SMC2 : A sequential Monte Carlo algorithm with
      particle Markov chain Monte Carlo updates

         N. CHOPIN1 , P.E. JACOB2 , & O. PAPASPILIOPOULOS3


                           MCB seminar, March 9th, 2011



     1 ENSAE-CREST
     2 CREST & Université Paris Dauphine, funded by AXA Research
     3 Universitat Pompeu Fabra
N. CHOPIN, P.E. JACOB, & O. PAPASPILIOPOULOS         SMC2          1/ 72


Outline



  1   Introduction and State Space Models


  2   Reminder on some Monte Carlo methods


  3   Particle Markov Chain Monte Carlo


  4   SMC2








State Space Models




  Context
  In these models:
         we observe some data Y1:T = (Y1 , . . . , YT ),
         we suppose that they depend on some hidden states X1:T .






State Space Models


  A system of equations
         Hidden states: p(x1 |θ) = µθ (x1 ) and, for t ≥ 1,

                       p(xt+1 |x1:t , θ) = p(xt+1 |xt , θ) = fθ (xt+1 |xt )

         Observations:

                      p(yt |y1:t−1 , x1:t , θ) = p(yt |xt , θ) = gθ (yt |xt )

         Parameter: θ ∈ Θ, prior p(θ).






State Space Models


  Some interesting distributions
  Bayesian inference focuses on:

                                              p(θ|y1:T )

  Filtering (traditionally) focuses on:

                                   ∀t ∈ [1, T ]       pθ (xt |y1:t )

  Smoothing (traditionally) focuses on:

                                  ∀t ∈ [1, T ]        pθ (xt |y1:T )




State Space Models



  Some interesting distributions [spoiler]
  PMCMC methods provide a sample from:

                                          p(θ, x1:T |y1:T )

  SMC2 provides a sample from:

                                 ∀t ∈ [1, T ]         p(θ, x1:t |y1:t )






Examples



  Local level

                 yt    = xt + σV εt ,     εt ∼ N (0, 1),
                 xt+1  = xt + σW ηt ,     ηt ∼ N (0, 1),
                 x0    ∼ N (0, 1)

  Here: θ = (σV , σW ). The model is linear and Gaussian.
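As a quick illustration, a minimal Python sketch that simulates data from this local level model (function and variable names are ours):

```python
import numpy as np

def simulate_local_level(T, sigma_v, sigma_w, seed=0):
    """Simulate hidden states x_{0:T} and observations y_{1:T} from the
    local level model: y_t = x_t + sigma_V*eps_t, x_{t+1} = x_t + sigma_W*eta_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(T + 1)
    x[0] = rng.normal()                           # x_0 ~ N(0, 1)
    for t in range(1, T + 1):
        x[t] = x[t - 1] + sigma_w * rng.normal()  # random-walk hidden state
    y = x[1:] + sigma_v * rng.normal(size=T)      # noisy observations
    return x, y

x, y = simulate_local_level(100, sigma_v=0.5, sigma_w=0.2)
```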






Examples



  Stochastic Volatility (simple)

                 yt | xt  ∼ N (0, e^{xt })
                 xt       = µ + ρ(xt−1 − µ) + σ εt
                 x0       = µ0

  Here: θ = (µ, ρ, σ), or can include µ0 .






Examples



  Population growth model

                 yt         = nt + σW εt
                 log nt+1   = log nt + b0 + b1 (nt )^{b2} + σ ηt
                 log n0     = µ0

  Here: θ = (b0 , b1 , b2 , σ, σW ), or can include µ0 .






Examples

  Stochastic Volatility (sophisticated)

       yt = µ + β vt + vt^{1/2} εt ,    t ≥ 1

       k ∼ Poi(λξ 2 /ω 2 ),    c1:k ∼ U (t, t + 1) i.i.d.,    e1:k ∼ Exp(ξ/ω 2 ) i.i.d.

       zt+1 = e^{−λ} zt + Σ_{j=1}^{k} e^{−λ(t+1−cj )} ej

       vt+1 = (1/λ) [ zt − zt+1 + Σ_{j=1}^{k} ej ]

       xt+1 = (vt+1 , zt+1 )


Examples


     Figure: The S&P 500 data from 03/01/2005 to 21/12/2007;
     (a) observations, (b) squared observations.



Examples

  Athletics records model

    g (y1:2,t |µt , ξ, σ) = {1 − G (y2,t |µt , ξ, σ)} Π_{i=1}^{2} [ g (yi,t |µt , ξ, σ) / {1 − G (yi,t |µt , ξ, σ)} ]

                 xt = (µt , µ̇t ) ,    xt+1 | xt , ν ∼ N (F xt , Q) ,
  with
                 F = ( 1 1 ; 0 1 )    and    Q = ν 2 ( 1/3 1/2 ; 1/2 1 )

                 G (y |µ, ξ, σ) = 1 − exp( −[1 − ξ (y − µ)/σ]_+^{−1/ξ} )




Examples

  Figure: Best two times (in seconds) of each year, in women’s 3000 metres
  events between 1976 and 2010.




Why are these models challenging?


  It’s all about dimensions. . .

      pθ (x1:T |y1:T ) = pθ (y1:T |x1:T ) pθ (x1:T ) / pθ (y1:T ) ∝ pθ (y1:T |x1:T ) pθ (x1:T )

  . . . even if it’s not obvious:

           p(θ|y1:T ) ∝ p(y1:T |θ) p(θ)

                      = [ ∫_{X^T} p(y1:T |x1:T , θ) p(x1:T |θ) dx1:T ] p(θ)

  The distribution pθ (x1:T |y1:T ) lives on a space whose dimension grows with T .








Metropolis-Hastings algorithm
  A popular method to sample from a distribution π.

  Algorithm 1 Metropolis-Hastings algorithm
    1: Set some x (1)
    2: for i = 2 to N do
    3:   Propose x∗ ∼ q(·|x (i−1) )
    4:   Compute the ratio:

                 α = min( 1, [π(x∗ ) q(x (i−1) |x∗ )] / [π(x (i−1) ) q(x∗ |x (i−1) )] )

    5:   Set x (i) = x∗ with probability α, otherwise set x (i) = x (i−1)
    6: end for
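A minimal random-walk instance of Algorithm 1 in Python (a sketch; the target is a standard normal known only up to a constant, and all names are ours):

```python
import numpy as np

def metropolis_hastings(log_pi, x0, n_iter, step=1.0, seed=0):
    """Random-walk MH: q(.|x) = N(x, step^2) is symmetric, so the
    q-ratio cancels and alpha reduces to the pi-ratio."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    chain[0] = x0
    for i in range(1, n_iter):
        prop = chain[i - 1] + step * rng.normal()        # propose x* ~ q(.|x^(i-1))
        log_alpha = log_pi(prop) - log_pi(chain[i - 1])  # log of the MH ratio
        if np.log(rng.uniform()) < log_alpha:
            chain[i] = prop                              # accept with probability alpha
        else:
            chain[i] = chain[i - 1]                      # otherwise keep x^(i-1)
    return chain

# Target pi = N(0, 1), evaluated only up to a multiplicative constant.
chain = metropolis_hastings(lambda x: -0.5 * x ** 2, x0=0.0, n_iter=20_000)
```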




Metropolis-Hastings algorithm


  Requirements
      π can be evaluated point-wise, up to a multiplicative constant.
         x is low-dimensional, otherwise designing q gets tedious or
         even impossible.

  Back to SSM
       p(θ|y1:T ) cannot be evaluated point-wise.
          pθ (x1:T |y1:T ) and p(x1:T , θ|y1:T ) are high-dimensional, and
          cannot necessarily be evaluated point-wise either.





Gibbs sampling

  Suppose the target distribution π is defined on X d .

  Algorithm 2 Gibbs sampling
    1: Set some x_{1:d}^{(1)}
    2: for i = 2 to N do
    3:   for j = 1 to d do
    4:      Draw x_j^{(i)} ∼ π(xj | x_{1:j−1}^{(i)} , x_{j+1:d}^{(i−1)} )
    5:   end for
    6: end for

  It breaks a high-dimensional sampling problem into many
  low-dimensional sampling problems!
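As an illustration, a sketch of Algorithm 2 for a bivariate Gaussian target, where both full conditionals are available exactly (names ours):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    """Gibbs sampler targeting N(0, [[1, rho], [rho, 1]]), alternating
    between the two exact full conditionals."""
    rng = np.random.default_rng(seed)
    draws = np.empty((n_iter, 2))
    x1, x2 = 0.0, 0.0
    sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n_iter):
        x1 = rng.normal(rho * x2, sd)   # x1 | x2 ~ N(rho*x2, 1 - rho^2)
        x2 = rng.normal(rho * x1, sd)   # x2 | x1 ~ N(rho*x1, 1 - rho^2)
        draws[i] = x1, x2
    return draws

draws = gibbs_bivariate_normal(rho=0.9, n_iter=50_000)
```

With rho = 0.9 the two components are strongly correlated and the chain already mixes noticeably more slowly, which previews the difficulty for state space models below.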



Gibbs sampling


  Requirements
         Conditional distributions π(xj |x1:j−1 , xj+1:d ) can be sampled
         from; otherwise, use MH within Gibbs.
         The components xj are not too strongly correlated with one another.

  Back to SSM
      The hidden states x1:T are typically strongly correlated with one
      another.
         If the target is p(θ, x1:T |y1:T ), θ is also strongly correlated with
         x1:T .





Sequential Monte Carlo for filtering


  Context
      Suppose we are interested in pθ (x1:T |y1:T ), with θ known.
         We want to get a sample x_{1:T}^{(i)} , i ∈ [1, N ], from it.

  General idea
      We introduce the following sequence of distributions:

                                      {pθ (x1:t |y1:t ), t ∈ [1, T ]}

         Sample recursively from pθ (x1:t |y1:t ) to pθ (x1:t+1 |y1:t+1 ).





Sequential Monte Carlo for filtering

  Definition
  A particle filter is just a collection of weighted points, called
  particles.

  Particles
  Writing (w^{(i)} , x^{(i)} )_{i=1}^{N} ∼ π means that the empirical distribution

                 Σ_{i=1}^{N} w^{(i)} δ_{x^{(i)}} (dx)

  converges towards π when N → +∞.




Sequential Monte Carlo for filtering


  Importance Sampling
  Suppose
                 (w_1^{(i)} , x^{(i)} )_{i=1}^{N} ∼ π1

  and define
                 w_2^{(i)} = w_1^{(i)} × π2 (x^{(i)} ) / π1 (x^{(i)} )

  Then
                 (w_2^{(i)} , x^{(i)} )_{i=1}^{N} ∼ π2

  under some common-sense assumptions on π1 and π2 .
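A small numerical sketch of this reweighting identity, taking π1 = N(0, 2²) and π2 = N(1, 1) (an arbitrary choice of ours):

```python
import numpy as np

def log_normal_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

rng = np.random.default_rng(0)
N = 100_000

# pi_1 = N(0, 2^2) plays the role of the current distribution, pi_2 = N(1, 1) the target.
x = rng.normal(0.0, 2.0, size=N)
w1 = np.full(N, 1.0 / N)                     # equally weighted sample from pi_1

# w2^(i) = w1^(i) * pi_2(x^(i)) / pi_1(x^(i)), then renormalise
w2 = w1 * np.exp(log_normal_pdf(x, 1.0, 1.0) - log_normal_pdf(x, 0.0, 2.0))
w2 /= w2.sum()

mean_pi2 = np.sum(w2 * x)                    # weighted estimate of E_{pi_2}[X]
```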





Sequential Monte Carlo for filtering

  From one time-step to the next
  Suppose
                 (w_t^{(i)} , x_{1:t}^{(i)} )_{i=1}^{N} ∼ pθ (x1:t |y1:t )

  We want
                 (w_{t+1}^{(i)} , x_{1:t+1}^{(i)} )_{i=1}^{N} ∼ pθ (x1:t+1 |y1:t+1 )

  Decomposition

           pθ (x1:t+1 |y1:t+1 ) ∝ pθ (yt+1 |xt+1 ) pθ (xt+1 |xt ) pθ (x1:t |y1:t )
                                ∝ gθ (yt+1 |xt+1 ) fθ (xt+1 |xt ) pθ (x1:t |y1:t )




Sequential Monte Carlo for filtering




  Proposal
  Propose x_{t+1}^{(i)} ∼ qθ (xt+1 | x1:t = x_{1:t}^{(i)} , y1:t+1 ). Then:

       (w_t^{(i)} , (x_{1:t}^{(i)} , x_{t+1}^{(i)} ))_{i=1}^{N} ∼ qθ (xt+1 |x1:t , y1:t+1 ) pθ (x1:t |y1:t )






Sequential Monte Carlo for filtering



  Reweighting

       w_{t+1}^{(i)} = w_t^{(i)} × [ gθ (yt+1 |x_{t+1}^{(i)} ) fθ (x_{t+1}^{(i)} |x_t^{(i)} ) ] / qθ (x_{t+1}^{(i)} |x_{1:t}^{(i)} , y1:t+1 )

  and finally we have

       (w_{t+1}^{(i)} , x_{1:t+1}^{(i)} )_{i=1}^{N} ∼ pθ (x1:t+1 |y1:t+1 )






Sequential Monte Carlo for filtering


  Resampling
  To fight weight degeneracy we introduce a resampling step.

  Notation
  A family of probability distributions on {1, . . . , N}^N :

       a ∼ r (·|w ),    for w ∈ [0, 1]^N such that Σ_{i=1}^{N} w^{(i)} = 1

  The variables (a_{t−1}^{(i)} )_{i=1}^{N} are the indices of the parents of (x_{1:t}^{(i)} )_{i=1}^{N} .






Sequential Monte Carlo for filtering


  Algorithm 3 Sequential Monte Carlo algorithm
    1: Propose x_1^{(i)} ∼ µθ (·)
    2: Compute the weights w_1^{(i)}
    3: for t = 2 to T do
    4:   Resample a_{t−1} ∼ r (·|w_{t−1} )
    5:   Propose x_t^{(i)} ∼ qθ (·| x_{1:t−1}^{a_{t−1}^{(i)}} , y1:t ), let x_{1:t}^{(i)} = (x_{1:t−1}^{a_{t−1}^{(i)}} , x_t^{(i)} )
    6:   Compute the weights w_t^{(i)}
    7: end for






Sequential Monte Carlo for filtering




                 Figure: Three weighted trajectories x1:t at time t.






Sequential Monte Carlo for filtering




             Figure: Three proposed trajectories x1:t+1 at time t + 1.






Sequential Monte Carlo for filtering




             Figure: Three reweighted trajectories x1:t+1 at time t + 1.






Sequential Monte Carlo for filtering


  Output
  In the end we get particles:
                 (w_T^{(i)} , x_{1:T}^{(i)} )_{i=1}^{N} ∼ pθ (x1:T |y1:T )


  Requirements
         Proposal kernels qθ (·|x1:t−1 , y1:t ) from which we can sample.
         Weight functions which we can evaluate point-wise.
         These proposal kernels and weight functions must result in
         properly weighted samples.





Sequential Monte Carlo for filtering


  Marginal likelihood
  A side effect of the SMC algorithm is that we can approximate the
  marginal likelihood ZT :

                 ZT = p(y1:T |θ)

  with the following unbiased estimate:

                 Ẑ_T^N = Π_{t=1}^{T} [ (1/N) Σ_{i=1}^{N} w_t^{(i)} ],    Ẑ_T^N →_P ZT as N → ∞
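Putting the pieces together for the local level model of the examples: a bootstrap particle filter (qθ = fθ, so the incremental weight is gθ(yt |xt)) that returns log Ẑ_T^N. A sketch, with names ours:

```python
import numpy as np

def bootstrap_filter(y, theta, N=1000, seed=0):
    """Bootstrap particle filter for the local level model:
    q_theta = f_theta, so the incremental weight is g_theta(y_t|x_t).
    Returns log Z_T^N, the log of the unbiased likelihood estimate."""
    sigma_v, sigma_w = theta
    rng = np.random.default_rng(seed)
    x = rng.normal(size=N)                        # x_0 ~ N(0, 1)
    log_Z = 0.0
    for yt in y:
        x = x + sigma_w * rng.normal(size=N)      # propagate through f_theta
        # log g_theta(y_t | x_t) for a N(x_t, sigma_v^2) observation density
        logw = -0.5 * ((yt - x) / sigma_v) ** 2 - np.log(sigma_v * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        log_Z += m + np.log(w.mean())             # accumulate log (1/N) sum_i w_t^(i)
        x = x[rng.choice(N, size=N, p=w / w.sum())]   # multinomial resampling
    return log_Z

rng = np.random.default_rng(1)
y_data = np.cumsum(0.2 * rng.normal(size=50)) + 0.5 * rng.normal(size=50)
log_Z = bootstrap_filter(y_data, theta=(0.5, 0.2), N=2000)
```

This sketch resamples at every step; in practice resampling is usually triggered only when the weights degenerate.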








Reference



  Particle Markov Chain Monte Carlo methods
  is an article by Andrieu, Doucet, Holenstein,
  JRSS B., 2010, 72(3):269–342

  Motivation
  Bayesian inference in state space models:

                                          p(θ, x1:T |y1:T )






Idealized Metropolis–Hastings for SSM



  If only. . .
  . . . we could evaluate p(θ|y1:T ) ∝ p(θ)p(y1:T |θ) up to a multiplicative
  constant, we could run a MH algorithm with acceptance rate:

       α(θ^{(i)} , θ′ ) = min( 1, [p(θ′ ) p(y1:T |θ′ ) q(θ^{(i)} |θ′ )] / [p(θ^{(i)} ) p(y1:T |θ^{(i)} ) q(θ′ |θ^{(i)} )] )





Valid Metropolis–Hastings for SSM ??



  Plug in estimates
  However, we have Ẑ_T^N (θ) ≈ p(y1:T |θ) by running a SMC algorithm,
  and we can try to run a MH algorithm with acceptance rate:

       α(θ^{(i)} , θ′ ) = min( 1, [p(θ′ ) Ẑ_T^N (θ′ ) q(θ^{(i)} |θ′ )] / [p(θ^{(i)} ) Ẑ_T^N (θ^{(i)} ) q(θ′ |θ^{(i)} )] )






The Beauty of Particle MCMC


  “Exact approximation”
  It turns out this is a valid MH algorithm that targets exactly p(θ|y1:T ),
  regardless of the number N of particles used in the SMC algorithm
  that provides the estimates Ẑ_T^N (θ) at each iteration.

  State estimation
  In fact the PMCMC algorithms provide samples from
  p(θ, x1:T |y1:T ), and not only from the posterior distribution of the
  parameters.






Particle Metropolis-Hastings

  Algorithm 4 Particle Metropolis-Hastings algorithm
    1: Set some θ^{(1)}
    2: Run a SMC algorithm, keep Ẑ_T^N (θ^{(1)} ), draw a trajectory x_{1:T}^{(1)}
    3: for i = 2 to I do
    4:   Propose θ′ ∼ q(·|θ^{(i−1)} )
    5:   Run a SMC algorithm, keep Ẑ_T^N (θ′ ), draw a trajectory x′_{1:T}
    6:   Compute the ratio:

           α(θ^{(i−1)} , θ′ ) = min( 1, [p(θ′ ) Ẑ_T^N (θ′ ) q(θ^{(i−1)} |θ′ )] / [p(θ^{(i−1)} ) Ẑ_T^N (θ^{(i−1)} ) q(θ′ |θ^{(i−1)} )] )

    7:   Set θ^{(i)} = θ′ , x_{1:T}^{(i)} = x′_{1:T} with probability α, otherwise keep
         the previous values
    8: end for
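A compact sketch of Algorithm 4 for the local level model of the examples, with a flat prior on (0, ∞)² and a bootstrap filter providing log Ẑ_T^N (all names and tuning choices are ours; additive constants in the log-weights are dropped since they cancel in the ratio):

```python
import numpy as np

def pf_loglik(theta, y, N=500, rng=None):
    """Bootstrap-filter estimate of log p(y_{1:T}|theta) for the local
    level model, theta = (sigma_v, sigma_w); additive constants dropped."""
    sigma_v, sigma_w = theta
    rng = np.random.default_rng(rng)
    x = rng.normal(size=N)                       # x_0 ~ N(0, 1)
    log_Z = 0.0
    for yt in y:
        x = x + sigma_w * rng.normal(size=N)     # propose from f_theta
        logw = -0.5 * ((yt - x) / sigma_v) ** 2 - np.log(sigma_v)
        m = logw.max()
        w = np.exp(logw - m)
        log_Z += m + np.log(w.mean())            # log of (1/N) sum_i w_t^(i)
        x = x[rng.choice(N, size=N, p=w / w.sum())]
    return log_Z

def particle_mh(y, theta0, n_iter, step=0.1, seed=0):
    """Particle MH: plug the unbiased SMC estimate into the MH ratio;
    the chain still targets p(theta|y_{1:T}) exactly, for any N."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = pf_loglik(theta, y, rng=rng)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=2)
        if np.all(prop > 0):                        # flat prior on (0, inf)^2
            ll_prop = pf_loglik(prop, y, rng=rng)   # fresh SMC run at theta'
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop           # accept
        chain[i] = theta
    return chain

rng = np.random.default_rng(2)
y_data = np.cumsum(0.2 * rng.normal(size=30)) + 0.5 * rng.normal(size=30)
chain = particle_mh(y_data, theta0=(0.5, 0.2), n_iter=100)
```

Note the stored log-likelihood estimate is reused until the next acceptance, which is what makes the extended-target argument go through.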


Why does it work?

  Variables generated by SMC
         ∀t ∈ [1, T ]:   x_t = (x_t^{(1)} , . . . , x_t^{(N)} )
         ∀t ∈ [1, T − 1]:   a_t = (a_t^{(1)} , . . . , a_t^{(N)} )

  Joint distribution

  ψ(x_1 , . . . , x_T , a_1 , . . . , a_{T−1} ) = [ Π_{i=1}^{N} qθ (x_1^{(i)} ) ]
                        × Π_{t=2}^{T} [ r (a_{t−1} |w_{t−1} ) Π_{i=1}^{N} qθ (x_t^{(i)} | x_{1:t−1}^{a_{t−1}^{(i)}} ) ]




Why does it work?



  Extended proposal distribution
  The PMH proposes: a new parameter θ′ , a trajectory x_{1:T}^{k} , and
  the rest of the variables generated by the SMC.

       q^N (θ′ , k, x_1 , . . . , x_T , a_1 , . . . , a_{T−1} )
            = q(θ′ |θ^{(i)} ) w_T^{k} ψ_{θ′} (x_1 , . . . , x_T , a_1 , . . . , a_{T−1} )






Why does it work?

  Extended target distribution

       π̃^N (θ, k, x_1 , . . . , x_T , a_1 , . . . , a_{T−1} )
           = [ p(θ, x_{1:T}^{k} |y1:T ) / N^T ]
             × ψ_θ (x_1 , . . . , x_T , a_1 , . . . , a_{T−1} )
               / [ qθ (x_1^{b_1^k} ) Π_{t=2}^{T} r (b_{t−1}^{k} |w_{t−1} ) qθ (x_t^{b_t^k} | x_{1:t−1}^{b_{t−1}^k} ) ]

  with b_{1:T}^{k} the index history of particle x_{1:T}^{(k)} .

  Valid algorithm
  From the explicit form of the extended distributions, showing that
  PMH is a standard MH algorithm becomes straightforward.



Particle MCMC: conclusion


  Remarks
     It is exact regardless of N . . .
         . . . however a sufficient number N of particles is required to
         get decent acceptance rates.
         SMC methods are considered expensive, but easy to
         parallelize.
         Applies to a broad class of models.
         More sophisticated SMC and MCMC methods can be used,
         and result in more sophisticated Particle MCMC methods.





Outline



  1   Introduction and State Space Models


  2   Reminder on some Monte Carlo methods


  3   Particle Markov Chain Monte Carlo


  4   SMC2






Our idea. . .



  . . . was to use the same, very powerful “extended distribution”
  framework to build an SMC sampler instead of an MCMC algorithm.
  Foreseen benefits
       to sample more efficiently from the posterior distribution
       p(θ|y1:T ),
       to sample sequentially from p(θ|y1 ), p(θ|y1 , y2 ), . . . p(θ|y1:T ).

  And, as it turns out, it allows even a bit more.






Idealized SMC sampler for SSM

  Algorithm 5 Iterated Batch Importance Sampling
    1:   Sample from the prior θ(m) ∼ p(·) for m ∈ [1, Nθ ]
    2:   Set ω (m) ← 1
    3:   for t = 1 to T do
    4:     Compute ut (θ(m) ) = p(yt |y1:t−1 , θ(m) )
    5:     Update ω (m) ← ω (m) × ut (θ(m) )
    6:     if some degeneracy criterion is met then
    7:        Resample the particles, reset the weights ω (m) ← 1
    8:        Move the particles using a Markov kernel leaving the
              current target distribution invariant
    9:     end if
  10:    end for
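The idealized IBIS loop above can be sketched as follows. `prior_sample`, `incr_loglik` and `move_kernel` are hypothetical callables standing in for the model-specific ingredients; in particular, `incr_loglik` is assumed to evaluate p(yt | y1:t−1, θ) exactly, which is what makes this version idealized (and generally intractable for state space models).

```python
import numpy as np

def ibis(prior_sample, incr_loglik, T, N_theta, move_kernel,
         ess_threshold=0.5, rng=None):
    """Idealized IBIS sketch; not the py-smc2 implementation.
    prior_sample(n) draws n theta-particles from the prior;
    incr_loglik(thetas, t) returns log p(y_t | y_{1:t-1}, theta) per particle;
    move_kernel(thetas, t) is an MCMC kernel invariant for p(theta | y_{1:t+1})."""
    rng = np.random.default_rng() if rng is None else rng
    thetas = prior_sample(N_theta)              # theta^(m) ~ p(.)
    logw = np.zeros(N_theta)                    # omega^(m) <- 1, on log scale
    for t in range(T):
        logw += incr_loglik(thetas, t)          # omega <- omega * p(y_t | y_{1:t-1}, theta)
        w = np.exp(logw - logw.max())
        ess = w.sum() ** 2 / (w ** 2).sum()     # effective sample size
        if ess < ess_threshold * N_theta:       # degeneracy criterion
            idx = rng.choice(N_theta, size=N_theta, p=w / w.sum())
            thetas = move_kernel(thetas[idx], t)  # rejuvenation move
            logw[:] = 0.0                       # reset weights
    return thetas, logw
```

The ESS-based trigger is one common choice of degeneracy criterion; any criterion that fires when the weights become too uneven would do.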




Valid SMC sampler for SSM ??



  Plug in estimates
  Similarly to PMCMC methods, we want to replace
  p(yt |y1:t−1 , θ(m) ) with an unbiased estimate, and see what
  happens.

  SMC everywhere
  We associate Nx x-particles to each of the Nθ θ-particles.






Valid SMC sampler for SSM ??


  Marginal likelihood
  Remember, a side effect of the SMC algorithm is that we can
  approximate the incremental likelihood:
                    (1/Nx) Σ_{i=1}^{Nx} w_t^{(i,m)}  ≈  p(yt | y1:t−1 , θ^(m))


  Move steps
  Instead of simple MH kernels, use PMH kernels.
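One bootstrap-filter step for a single θ-particle can be sketched as below; it returns the log of the unbiased estimate (1/Nx) Σ_i w_t^(i) of p(yt | y1:t−1, θ). `prop_sample` and `log_g` are hypothetical stand-ins for the model's transition proposal q_θ and observation density; this is a generic bootstrap step, not the py-smc2 implementation.

```python
import numpy as np

def pf_step(x, logW, y_t, prop_sample, log_g, rng):
    """One bootstrap particle-filter step for one theta-particle.
    x: (Nx,) current x-particles; logW: (Nx,) current log-weights.
    Returns new particles, new log-weights, and log of the unbiased
    incremental-likelihood estimate (1/Nx) * sum_i w_t^(i)."""
    Nx = x.shape[0]
    W = np.exp(logW - logW.max())
    W /= W.sum()
    idx = rng.choice(Nx, size=Nx, p=W)       # resample ancestors a_{t-1}
    x_new = prop_sample(x[idx])              # propagate x_t ~ q_theta(. | x_{t-1})
    logw = log_g(y_t, x_new)                 # incremental weights w_t^(i)
    m = logw.max()                           # stable log of the mean weight
    log_Zt = m + np.log(np.mean(np.exp(logw - m)))
    return x_new, logw, log_Zt
```

Accumulating `log_Zt` over t gives the log of the unbiased full-likelihood estimate used by the θ-weights.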





Why does it work?



  A simple idea. . .
  . . . especially after the PMCMC article.

  Still. . .
  . . . some work had to be done to justify the validity of the
  algorithm.

  In short, it leads to a standard SMC sampler on a sequence of
  extended distributions πt (proposition 1 of the article).






Why does it work?



  Additional notations
       h_t^n denotes the index history of x_t^n, that is, h_t^n(t) = n and,
       recursively, h_t^n(s) = a_s^{h_t^n(s+1)} for s = t−1, . . . , 1.
       x_{1:t}^n denotes the state trajectory finishing in x_t^n, that is:

                  x_{1:t}^n(s) = x_s^{h_t^n(s)},   for s = 1, . . . , t.






Why does it work?

  Here is what the distribution πt looks like:
        π_t(θ, x_{1:t}^{1:Nx}, a_{1:t−1}^{1:Nx}) = p(θ|y1:t )

            × (1/Nx) Σ_{n=1}^{Nx} { [ p(x_{1:t}^n | θ, y1:t ) / Nx^{t−1} ]

                × ∏_{i=1, i≠h_t^n(1)}^{Nx} q_{1,θ}(x_1^i)

                × ∏_{s=2}^{t} ∏_{i=1, i≠h_t^n(s)}^{Nx} W_{s−1,θ}^{a_{s−1}^i} q_{s,θ}(x_s^i | x_{s−1}^{a_{s−1}^i}) }






Why does it work?


  PMCMC move steps
  These steps are valid because the PMCMC invariant distribution π̃_t,
  defined on
                       (θ, k, x_{1:t}^{1:Nx}, a_{1:t−1}^{1:Nx}),
  is such that π_t is the marginal distribution of
                         (θ, x_{1:t}^{1:Nx}, a_{1:t−1}^{1:Nx})

  with respect to π̃_t.

  (Sections 3.2, 3.3 of the article)




Benefits




  Explicit form of the distribution
  It makes it possible to prove the validity of the algorithm, but also:
         to get samples from p(θ, x1:t |y1:t ),
         to validate an automatic calibration of Nx .






Benefits

  Drawing trajectories
  If for every θ-particle θ^(m) one draws an index n(m) in {1, . . . Nx }
  with probability W_t^{n(m),m} (the normalized x-weights), then the
  weighted sample:

                    (ω^m , θ^m , x_{1:t}^{n(m),m})_{m∈1:Nθ}

  follows p(θ, x1:t |y1:t ).

  Memory cost
  One needs to store the x-trajectories if one wants to make inference
  about x1:t (smoothing).
  If the interest is only in parameter inference (θ), filtering (xt ) and
  prediction (yt+1 ), there is no need to store the trajectories.



Benefits

  Estimating functionals of the states
  We have a test function h and want to estimate E [h(θ, x1:t )|y1:t ].
  Estimator:

        (1 / Σ_{m=1}^{Nθ} ω^m) Σ_{m=1}^{Nθ} ω^m h(θ^m , x_{1:t}^{n(m),m}).

  Rao–Blackwellized estimator:

        (1 / Σ_{m=1}^{Nθ} ω^m) Σ_{m=1}^{Nθ} ω^m Σ_{n=1}^{Nx} W_{t,θ^m}^n h(θ^m , x_{1:t}^{n,m}).

  (Section 3.4 of the article)
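Both estimators can be written compactly. The array layout below (θ-weights `omega`, row-normalized x-weights `W`, precomputed values `h_vals`) is an assumption made for illustration, not the paper's notation.

```python
import numpy as np

def functional_estimates(omega, W, h_vals, rng):
    """Two estimators of E[h(theta, x_{1:t}) | y_{1:t}] from an SMC^2 sample.
    omega: (N_theta,) theta-weights; W: (N_theta, N_x) normalized x-weights;
    h_vals: (N_theta, N_x) values h(theta^m, x^{n,m}_{1:t})."""
    omega = omega / omega.sum()
    # Rao-Blackwellized: average over all x-trajectories with weights W
    rb = np.sum(omega[:, None] * W * h_vals)
    # Single-draw: pick one trajectory per theta-particle with prob W^{n,m}
    n = np.array([rng.choice(W.shape[1], p=W[m]) for m in range(W.shape[0])])
    single = np.sum(omega * h_vals[np.arange(W.shape[0]), n])
    return single, rb
```

The Rao–Blackwellized version has lower variance since it integrates out the trajectory draw.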




Benefits

  Evidence
  The evidence of the data given the model is defined as:

                    p(y1:t ) = ∏_{s=1}^{t} p(ys |y1:s−1 )

  And it can be used to compare models. SMC2 provides the
  following estimate:

        L̂_t = (1 / Σ_{m=1}^{Nθ} ω^m) Σ_{m=1}^{Nθ} ω^m p̂(yt |y1:t−1 , θ^m )

  (Section 3.5 of the article)
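The evidence increment above is just a weighted average of the per-particle likelihood estimates; on the log scale it can be computed stably as below. `log_omega` and `log_p_hat` are assumed arrays of the current θ-weights and estimates p̂(yt | y1:t−1, θ^m) (an illustrative sketch, not the py-smc2 API).

```python
import numpy as np

def log_evidence_increment(log_omega, log_p_hat):
    """SMC^2 estimate of log p(y_t | y_{1:t-1}): omega-weighted average of
    the per-theta-particle estimates p_hat(y_t | y_{1:t-1}, theta^m)."""
    a = log_omega.max()                      # guard against underflow
    w = np.exp(log_omega - a)
    incr = np.sum(w * np.exp(log_p_hat)) / w.sum()
    return np.log(incr)

# log p(y_{1:t}) is then the running sum of the increments over t.
```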



Benefits

  Exchange importance sampling step
  Launch a new SMC for each θ-particle, with Ñx x-particles. Joint
  distribution:

        π_t(θ, x_{1:t}^{1:Nx}, a_{1:t−1}^{1:Nx}) ψ_{t,θ}(x̃_{1:t}^{1:Ñx}, ã_{1:t−1}^{1:Ñx})

  Retain the new x-particles and drop the old ones, updating the
  θ-weights with:

    u_t^exch(θ, x_{1:t}^{1:Nx}, a_{1:t−1}^{1:Nx}, x̃_{1:t}^{1:Ñx}, ã_{1:t−1}^{1:Ñx})
        = Ẑ_t(θ, x̃_{1:t}^{1:Ñx}, ã_{1:t−1}^{1:Ñx}) / Ẑ_t(θ, x_{1:t}^{1:Nx}, a_{1:t−1}^{1:Nx})

  (Section 3.6 of the article)
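On the log scale the exchange weight update is just a difference of normalizing-constant estimates. In the sketch below, `run_new_smc` is a hypothetical callable that runs a fresh particle filter with Ñx x-particles for a given θ and returns its x-particle system together with log Ẑ_t.

```python
import numpy as np

def exchange_step(theta_particles, log_omega, logZ_old, run_new_smc):
    """Exchange step sketch: for each theta-particle, run a fresh SMC and
    reweight by the ratio of normalizing-constant estimates (log scale)."""
    new_systems = []
    log_omega_new = log_omega.copy()
    for m, theta in enumerate(theta_particles):
        system, logZ_new = run_new_smc(theta)
        log_omega_new[m] += logZ_new - logZ_old[m]  # u_t^exch = Z_new / Z_old
        new_systems.append(system)                  # drop the old x-particles
    return new_systems, log_omega_new
```

This is the mechanism behind the automatic increase of Nx shown later: Ñx can differ from Nx.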



Warning




  Plug-in estimates
  Not every SMC sampler can be turned into an SMC2 algorithm by
  replacing the exact weights with estimates: the estimates have to be
  unbiased!






Warning

  Example
  For instance, if instead of using the sequence of distributions:

                              {p(θ|y1:t )}_{t=1}^{T}

  one wants to use the “tempered” sequence:

                             {p(θ|y1:T )^{γ_k}}_{k=1}^{K}

  with (γ_k ) an increasing sequence from 0 to 1, then one should find
  unbiased estimates of p(θ|y1:T )^{γ_k −γ_{k−1}} to plug into the
  idealized SMC sampler.




Numerical illustrations

  Stochastic Volatility (sophisticated)
                  yt = µ + βvt + vt^{1/2} εt ,   t ≥ 1

      k ∼ Poi(λξ²/ω²),   c1:k ∼ iid U(t, t + 1),   e1:k ∼ iid Exp(ξ/ω²)

      zt+1 = e^{−λ} zt + Σ_{j=1}^{k} e^{−λ(t+1−c_j)} e_j

      vt+1 = (1/λ) [ zt − zt+1 + Σ_{j=1}^{k} e_j ]

      xt+1 = (vt+1 , zt+1 )



Numerical illustrations



  Figure: Squared observations (synthetic data set), acceptance rates, and
  illustration of the automatic increase of Nx .





Numerical illustrations



  [Four panels: posterior densities of µ for T = 250, 500, 750, 1000.]




              Figure: Concentration of the posterior distribution for parameter µ.






Numerical illustrations




  Multifactor model

    yt = µ + βvt + vt^{1/2} εt + ρ1 Σ_{j=1}^{k1} e1,j + ρ2 Σ_{j=1}^{k2} e2,j
           − ξ(w ρ1 λ1 + (1 − w )ρ2 λ2 )






Numerical illustrations




  [Two panels: (a) S&P500 squared observations over time; (b) evidence
  compared to the one-factor model, for the multifactor model without
  leverage and with leverage.]

  Figure: S&P500 squared observations, and log-evidence comparison
  between models (relative to the one-factor model).


Numerical illustrations

  Athletics records model

    g(y1:2,t |µt , ξ, σ) = {1 − G (y2,t |µt , ξ, σ)}
                             × ∏_{i=1}^{2} [ g(yi,t |µt , ξ, σ) / (1 − G (yi,t |µt , ξ, σ)) ]

            xt = (µt , µ̇t )′ ,      xt+1 | xt , ν ∼ N (F xt , Q) ,
  with
            F = [ 1 1 ; 0 1 ]   and   Q = ν² [ 1/3 1/2 ; 1/2 1 ]

            G (y |µ, ξ, σ) = 1 − exp{ −[ 1 − ξ (y − µ)/σ ]_+^{−1/ξ} }
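The cdf G above (a generalized-extreme-value-type form) transcribes directly into code. A minimal sketch, assuming ξ ≠ 0 and σ > 0 (an illustration, not taken from the py-smc2 package):

```python
import numpy as np

def gev_cdf(y, mu, xi, sigma):
    """CDF G(y | mu, xi, sigma) of the athletics-records model:
    G(y) = 1 - exp(-[1 - xi*(y - mu)/sigma]_+^(-1/xi)), for xi != 0."""
    z = 1.0 - xi * (y - mu) / sigma
    z = np.maximum(z, 0.0)                     # [.]_+ truncation
    with np.errstate(divide="ignore"):
        return 1.0 - np.exp(-np.power(z, -1.0 / xi))
```

For ξ > 0 the cdf increases with y, as faster (smaller) times become less likely; probabilities such as p_t^y later in the talk are integrals of this cdf against the posterior.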




Numerical illustrations




  Figure: Best two times of each year, in women’s 3000 metres events
  between 1976 and 2010.




Numerical illustrations

  Motivating question
  How unlikely is Wang Junxia’s record in 1993?

  A smoothing problem
  We want to estimate the likelihood of Wang Junxia’s record in
  1993, given that we observe a better time than the previous world
  record. We want to use all the observations from 1976 to 2010 to
  answer the question.

  Note
  We exclude observations from the year 1993.




Numerical illustrations



  Some probabilities of interest

      p_t^y = P(yt ≤ y | y1976:2010 )

            = ∫_Θ ∫_X G (y |µt , θ) p(µt |y1976:2010 , θ) p(θ|y1976:2010 ) dµt dθ

  The interest lies in p_{1993}^{486.11} , p_{1993}^{502.62} and
  p_t^{cond} := p_t^{486.11} / p_t^{502.62} .






Numerical illustrations



  Figure: Estimates of the probabilities of interest: (top) p_t^{502.62},
  (middle) p_t^{cond} and (bottom) p_t^{486.11}, obtained with the SMC2
  algorithm. The y-axis is in log scale, and the dotted line indicates the
  year 1993 which motivated the study.


Conclusion



  A powerful framework
         The SMC2 framework makes it possible to obtain various
         quantities of interest, in a quite generic and “black-box” way.
         It extends the PMCMC framework introduced by Andrieu,
         Doucet and Holenstein.
         A package is available:
                           http://code.google.com/p/py-smc2/.






Acknowledgments


         N. Chopin is supported by the ANR grant
         ANR-008-BLAN-0218 “BigMC” of the French Ministry of
         research.
         P.E. Jacob is supported by a PhD fellowship from the AXA
         Research Fund.
         O. Papaspiliopoulos would like to acknowledge financial
         support by the Spanish government through a “Ramon y
         Cajal” fellowship and grant MTM2009-09063.
  The authors are thankful to Arnaud Doucet (University of British
  Columbia) and to Gareth W. Peters (University of New South
  Wales) for useful comments.




Bibliography

  SMC2 : A sequential Monte Carlo algorithm with particle Markov
  chain Monte Carlo updates, N. Chopin, P.E. Jacob, O.
  Papaspiliopoulos, submitted
  Main references:
         Particle Markov Chain Monte Carlo methods, C. Andrieu, A.
         Doucet, R. Holenstein, JRSS B., 2010, 72(3):269–342
         The pseudo-marginal approach for efficient computation, C.
         Andrieu, G.O. Roberts, Ann. Statist., 2009, 37, 697–725
         Random weight particle filtering of continuous time processes,
         P. Fearnhead, O. Papaspiliopoulos, G.O. Roberts, A. Stuart,
         JRSS B., 2010, 72:497–513
         Feynman-Kac Formulae, P. Del Moral, Springer


Applying Hidden Markov Models to Bioinformatics
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methods
 
Statistical inference of generative network models - Tiago P. Peixoto
Statistical inference of generative network models - Tiago P. PeixotoStatistical inference of generative network models - Tiago P. Peixoto
Statistical inference of generative network models - Tiago P. Peixoto
 
Introduction to HMMs in Bioinformatics
Introduction to HMMs in BioinformaticsIntroduction to HMMs in Bioinformatics
Introduction to HMMs in Bioinformatics
 
Stock Market Analysis Markov Models
Stock Market Analysis Markov ModelsStock Market Analysis Markov Models
Stock Market Analysis Markov Models
 
Search Engine Friendly Web Design: Designing For People Who Use Search Engine...
Search Engine Friendly Web Design: Designing For People Who Use Search Engine...Search Engine Friendly Web Design: Designing For People Who Use Search Engine...
Search Engine Friendly Web Design: Designing For People Who Use Search Engine...
 
Hidden Markov Model & Stock Prediction
Hidden Markov Model & Stock PredictionHidden Markov Model & Stock Prediction
Hidden Markov Model & Stock Prediction
 
Richard Everitt's slides
Richard Everitt's slidesRichard Everitt's slides
Richard Everitt's slides
 
HIDDEN MARKOV MODEL AND ITS APPLICATION
HIDDEN MARKOV MODEL AND ITS APPLICATIONHIDDEN MARKOV MODEL AND ITS APPLICATION
HIDDEN MARKOV MODEL AND ITS APPLICATION
 
Data Science - Part XIII - Hidden Markov Models
Data Science - Part XIII - Hidden Markov ModelsData Science - Part XIII - Hidden Markov Models
Data Science - Part XIII - Hidden Markov Models
 
Stock Market Prediction using Hidden Markov Models and Investor sentiment
Stock Market Prediction using Hidden Markov Models and Investor sentimentStock Market Prediction using Hidden Markov Models and Investor sentiment
Stock Market Prediction using Hidden Markov Models and Investor sentiment
 
3 recursive bayesian estimation
3  recursive bayesian estimation3  recursive bayesian estimation
3 recursive bayesian estimation
 
10 Steps to Great SEO: InfusionCon 2011
10 Steps to Great SEO: InfusionCon 201110 Steps to Great SEO: InfusionCon 2011
10 Steps to Great SEO: InfusionCon 2011
 
Automatic speech recognition
Automatic speech recognitionAutomatic speech recognition
Automatic speech recognition
 
Death by PowerPoint
Death by PowerPointDeath by PowerPoint
Death by PowerPoint
 

Ähnlich wie Presentation MCB seminar 09032011

Suppression of correlated electron escape in double ionization in strong lase...
Suppression of correlated electron escape in double ionization in strong lase...Suppression of correlated electron escape in double ionization in strong lase...
Suppression of correlated electron escape in double ionization in strong lase...
Jakub Prauzner-Bechcicki
 
JamesDPearce_MonoX
JamesDPearce_MonoXJamesDPearce_MonoX
JamesDPearce_MonoX
James Pearce
 
Pore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic FieldsPore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic Fields
Alexander Sagidullin
 
Master's presentation (English)
Master's presentation (English)Master's presentation (English)
Master's presentation (English)
Alexander Tsupko
 
Pres110811
Pres110811Pres110811
Pres110811
shotlub
 

Ähnlich wie Presentation MCB seminar 09032011 (20)

Suppression of correlated electron escape in double ionization in strong lase...
Suppression of correlated electron escape in double ionization in strong lase...Suppression of correlated electron escape in double ionization in strong lase...
Suppression of correlated electron escape in double ionization in strong lase...
 
Hidden markovmodel
Hidden markovmodelHidden markovmodel
Hidden markovmodel
 
JamesDPearce_MonoX
JamesDPearce_MonoXJamesDPearce_MonoX
JamesDPearce_MonoX
 
Ph ddefence
Ph ddefencePh ddefence
Ph ddefence
 
Hidden Markov Model
Hidden Markov Model Hidden Markov Model
Hidden Markov Model
 
Kinetic pathways to the isotropic-nematic phase transformation: a mean field ...
Kinetic pathways to the isotropic-nematic phase transformation: a mean field ...Kinetic pathways to the isotropic-nematic phase transformation: a mean field ...
Kinetic pathways to the isotropic-nematic phase transformation: a mean field ...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
What happens when the Kolmogorov-Zakharov spectrum is nonlocal?
What happens when the Kolmogorov-Zakharov spectrum is nonlocal?What happens when the Kolmogorov-Zakharov spectrum is nonlocal?
What happens when the Kolmogorov-Zakharov spectrum is nonlocal?
 
Nucleating Nematic Droplets
Nucleating Nematic DropletsNucleating Nematic Droplets
Nucleating Nematic Droplets
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Pore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic FieldsPore Geometry from the Internal Magnetic Fields
Pore Geometry from the Internal Magnetic Fields
 
A Closed-Form Expression for Queuing Delay in Rayleigh Fading Channels Using ...
A Closed-Form Expression for Queuing Delay in Rayleigh Fading Channels Using ...A Closed-Form Expression for Queuing Delay in Rayleigh Fading Channels Using ...
A Closed-Form Expression for Queuing Delay in Rayleigh Fading Channels Using ...
 
Geohydrology ii (3)
Geohydrology ii (3)Geohydrology ii (3)
Geohydrology ii (3)
 
Particle filtering
Particle filteringParticle filtering
Particle filtering
 
Solar Cells Lecture 4: What is Different about Thin-Film Solar Cells?
Solar Cells Lecture 4: What is Different about Thin-Film Solar Cells?Solar Cells Lecture 4: What is Different about Thin-Film Solar Cells?
Solar Cells Lecture 4: What is Different about Thin-Film Solar Cells?
 
Master's presentation (English)
Master's presentation (English)Master's presentation (English)
Master's presentation (English)
 
Future CMB Experiments
Future CMB ExperimentsFuture CMB Experiments
Future CMB Experiments
 
Nonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo MethodNonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo Method
 
Pres110811
Pres110811Pres110811
Pres110811
 
Markov chain
Markov chainMarkov chain
Markov chain
 

Mehr von Pierre Jacob

Mehr von Pierre Jacob (18)

Talk at CIRM on Poisson equation and debiasing techniques
Talk at CIRM on Poisson equation and debiasing techniquesTalk at CIRM on Poisson equation and debiasing techniques
Talk at CIRM on Poisson equation and debiasing techniques
 
ISBA 2022 Susie Bayarri lecture
ISBA 2022 Susie Bayarri lectureISBA 2022 Susie Bayarri lecture
ISBA 2022 Susie Bayarri lecture
 
Couplings of Markov chains and the Poisson equation
Couplings of Markov chains and the Poisson equation Couplings of Markov chains and the Poisson equation
Couplings of Markov chains and the Poisson equation
 
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problemsMonte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problems
 
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problemsMonte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problems
 
Markov chain Monte Carlo methods and some attempts at parallelizing them
Markov chain Monte Carlo methods and some attempts at parallelizing themMarkov chain Monte Carlo methods and some attempts at parallelizing them
Markov chain Monte Carlo methods and some attempts at parallelizing them
 
Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplings
 
Unbiased Markov chain Monte Carlo methods
Unbiased Markov chain Monte Carlo methods Unbiased Markov chain Monte Carlo methods
Unbiased Markov chain Monte Carlo methods
 
Recent developments on unbiased MCMC
Recent developments on unbiased MCMCRecent developments on unbiased MCMC
Recent developments on unbiased MCMC
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Current limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov modelsCurrent limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov models
 
On non-negative unbiased estimators
On non-negative unbiased estimatorsOn non-negative unbiased estimators
On non-negative unbiased estimators
 
Path storage in the particle filter
Path storage in the particle filterPath storage in the particle filter
Path storage in the particle filter
 
Density exploration methods
Density exploration methodsDensity exploration methods
Density exploration methods
 
PAWL - GPU meeting @ Warwick
PAWL - GPU meeting @ WarwickPAWL - GPU meeting @ Warwick
PAWL - GPU meeting @ Warwick
 

Kürzlich hochgeladen

1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
AnaAcapella
 

Kürzlich hochgeladen (20)

Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
Magic bus Group work1and 2 (Team 3).pptx
Magic bus Group work1and 2 (Team 3).pptxMagic bus Group work1and 2 (Team 3).pptx
Magic bus Group work1and 2 (Team 3).pptx
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 

Presentation MCB seminar 09032011

  • 5. State Space Models: A system of equations
    Hidden states: p(x1|θ) = µθ(x1), and for t ≥ 1,
        p(xt+1|x1:t, θ) = p(xt+1|xt, θ) = fθ(xt+1|xt).
    Observations:
        p(yt|y1:t−1, x1:t, θ) = p(yt|xt, θ) = gθ(yt|xt).
    Parameter: θ ∈ Θ, with prior p(θ).
  • 6. State Space Models: Some interesting distributions
    Bayesian inference focuses on p(θ|y1:T).
    Filtering (traditionally) focuses on pθ(xt|y1:t), for all t ∈ [1, T].
    Smoothing (traditionally) focuses on pθ(xt|y1:T), for all t ∈ [1, T].
  • 7. State Space Models: Some interesting distributions [spoiler]
    PMCMC methods provide a sample from p(θ, x1:T|y1:T).
    SMC2 provides a sample from p(θ, x1:t|y1:t), for all t ∈ [1, T].
  • 8. Examples: Local level
        yt   = xt + σV εt,    εt ∼ N(0, 1),
        xt+1 = xt + σW ηt,    ηt ∼ N(0, 1),
        x0   ∼ N(0, 1).
    Here θ = (σV, σW). The model is linear and Gaussian.
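The local level model above is easy to simulate, which is useful for testing the filters discussed later. A minimal sketch (the parameter values are illustrative, not from the slides):

```python
# Simulate the local level model: x0 ~ N(0,1), x_{t+1} = x_t + sigma_W * eta_t,
# y_t = x_t + sigma_V * eps_t. Parameter values are illustrative.
import numpy as np

def simulate_local_level(T, sigma_V=0.5, sigma_W=1.0, seed=0):
    """Draw x_{0:T} and y_{1:T} from the linear Gaussian local level model."""
    rng = np.random.default_rng(seed)
    x = np.empty(T + 1)
    y = np.empty(T)
    x[0] = rng.normal()                               # x0 ~ N(0, 1)
    for t in range(1, T + 1):
        x[t] = x[t - 1] + sigma_W * rng.normal()      # state transition
        y[t - 1] = x[t] + sigma_V * rng.normal()      # observation of x_t
    return x, y

x_states, y_obs = simulate_local_level(200)
```

Because the model is linear and Gaussian, the exact filtering distributions are available via the Kalman filter, which makes it a convenient benchmark for particle methods.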
  • 9. Examples: Stochastic Volatility (simple)
        yt|xt ∼ N(0, e^{xt}),
        xt = µ + ρ(xt−1 − µ) + σεt,
        x0 = µ0.
    Here θ = (µ, ρ, σ), and can also include µ0.
  • 10. Examples: Population growth model
        yt = nt + σW εt,
        log nt+1 = log nt + b0 + b1 (nt)^{b2} + σ ηt,
        log n0 = µ0.
    Here θ = (b0, b1, b2, σ, σW), and can also include µ0.
  • 11. Examples: Stochastic Volatility (sophisticated)
        yt = µ + βvt + vt^{1/2} εt,    t ≥ 1,
        k ∼ Poi(λξ²/ω²),    c1:k ∼ iid U(t, t + 1),    e1:k ∼ iid Exp(ξ/ω²),
        zt+1 = e^{−λ} zt + Σ_{j=1}^{k} e^{−λ(t+1−cj)} ej,
        vt+1 = (1/λ) [zt − zt+1 + Σ_{j=1}^{k} ej],
        xt+1 = (vt+1, zt+1).
  • 12. Examples. [Figure: The S&P 500 data from 03/01/2005 to 21/12/2007; panel (a) shows the observations and panel (b) the squared observations, against time.]
  • 13. Examples: Athletics records model
        g(y1:2,t|µt, ξ, σ) = {1 − G(y2,t|µt, ξ, σ)} Π_{i=1}^{2} g(yi,t|µt, ξ, σ) / {1 − G(yi,t|µt, ξ, σ)},
        xt = (µt, µ̇t),    xt+1 | xt, ν ∼ N(Fxt, Q),    with
        F = ( 1  1 ; 0  1 )    and    Q = ν² ( 1/3  1/2 ; 1/2  1 ),
        G(y|µ, ξ, σ) = 1 − exp( −[1 − ξ (y − µ)/σ]_+^{−1/ξ} ).
  • 14. Examples. [Figure: Best two times of each year, in women's 3000 metres events between 1976 and 2010; times (in seconds) against year.]
  • 15. Why are those models challenging?
    It's all about dimensions. . .
        pθ(x1:T|y1:T) = pθ(y1:T|x1:T) pθ(x1:T) / pθ(y1:T) ∝ pθ(y1:T|x1:T) pθ(x1:T)
    . . . even if it's not obvious:
        p(θ|y1:T) ∝ p(y1:T|θ) p(θ) = [ ∫_{X^T} p(y1:T|x1:T, θ) p(x1:T|θ) dx1:T ] p(θ).
  • 16. Outline
    1 Introduction and State Space Models
    2 Reminder on some Monte Carlo methods
    3 Particle Markov Chain Monte Carlo
    4 SMC2
  • 17. Metropolis–Hastings algorithm
    A popular method to sample from a distribution π.
    Algorithm 1: Metropolis–Hastings
     1: Set some x(1)
     2: for i = 2 to N do
     3:   Propose x* ∼ q(·|x(i−1))
     4:   Compute the ratio:
            α = min{ 1, [π(x*) q(x(i−1)|x*)] / [π(x(i−1)) q(x*|x(i−1))] }
     5:   Set x(i) = x* with probability α; otherwise set x(i) = x(i−1)
     6: end for
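Algorithm 1 can be sketched in a few lines. A minimal random-walk version for a toy target, a standard normal known only up to a constant (the step size and iteration count are illustrative); with a symmetric proposal, the q terms in the ratio cancel:

```python
# Random-walk Metropolis-Hastings for a toy target pi, given via log_pi
# up to an additive constant (as the slide notes, a normalizing constant
# is not needed). Step size and iteration count are illustrative.
import numpy as np

def metropolis_hastings(log_pi, x0, n_iters, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iters)
    chain[0] = x0
    for i in range(1, n_iters):
        prop = chain[i - 1] + step * rng.normal()   # symmetric proposal q
        # log acceptance ratio; the q terms cancel for a symmetric proposal
        log_alpha = log_pi(prop) - log_pi(chain[i - 1])
        if np.log(rng.uniform()) < log_alpha:
            chain[i] = prop                          # accept
        else:
            chain[i] = chain[i - 1]                  # reject, keep current
    return chain

# Target: standard normal, log density -x^2/2 up to a constant
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=3.0, n_iters=20000)
```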
  • 18. Metropolis–Hastings algorithm
    Requirements:
      π can be evaluated point-wise, up to a multiplicative constant;
      x is low-dimensional, otherwise designing q gets tedious or even impossible.
    Back to SSM:
      p(θ|y1:T) cannot be evaluated point-wise;
      pθ(x1:T|y1:T) and p(x1:T, θ|y1:T) are high-dimensional, and cannot necessarily be computed point-wise either.
  • 19. Gibbs sampling
    Suppose the target distribution π is defined on X^d.
    Algorithm 2: Gibbs sampling
     1: Set some x1:d(1)
     2: for i = 2 to N do
     3:   for j = 1 to d do
     4:     Draw xj(i) ∼ π(xj | x1:j−1(i), xj+1:d(i−1))
     5:   end for
     6: end for
    It breaks one high-dimensional sampling problem into many low-dimensional sampling problems!
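As a concrete instance of Algorithm 2, consider a toy bivariate normal target with correlation ρ, for which both full conditionals are Gaussian in closed form (the value of ρ and the chain length are illustrative):

```python
# Gibbs sampler for a bivariate normal target with zero means, unit
# variances and correlation rho; each conditional is N(rho*other, 1-rho^2).
import numpy as np

def gibbs_bivariate_normal(rho, n_iters, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty((n_iters, 2))
    x1, x2 = 0.0, 0.0
    s = np.sqrt(1.0 - rho**2)                  # conditional standard deviation
    for i in range(n_iters):
        x1 = rho * x2 + s * rng.normal()       # draw x1 | x2
        x2 = rho * x1 + s * rng.normal()       # draw x2 | x1
        samples[i] = (x1, x2)
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_iters=20000)
```

The larger ρ is, the more correlated the components and the slower the chain mixes, which is exactly the difficulty the next slide points out for the hidden states of an SSM.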
  • 20. Gibbs sampling
    Requirements:
      the conditional distributions π(xj|x1:j−1, xj+1:d) can be sampled from (otherwise, use MH within Gibbs);
      the components xj are not too correlated with one another.
    Back to SSM:
      the hidden states x1:T are typically very correlated with one another;
      if the target is p(θ, x1:T|y1:T), θ is also very correlated with x1:T.
  • 21. Sequential Monte Carlo for filtering
    Context: suppose we are interested in pθ(x1:T|y1:T), with θ known.
    We want to get a sample x1:T(i), i ∈ [1, N], from it.
    General idea: introduce the sequence of distributions {pθ(x1:t|y1:t), t ∈ [1, T]},
    and sample recursively from pθ(x1:t|y1:t) to pθ(x1:t+1|y1:t+1).
  • 22. Sequential Monte Carlo for filtering
    Definition: a particle filter is just a collection of weighted points, called particles.
    Particles: writing (w(i), x(i))_{i=1..N} ∼ π means that the empirical distribution
        Σ_{i=1}^{N} w(i) δ_{x(i)}(dx)
    converges towards π when N → +∞.
  • 23. Sequential Monte Carlo for filtering
    Importance Sampling: suppose (w1(i), x(i))_{i=1..N} ∼ π1, and define
        w2(i) = w1(i) × π2(x(i)) / π1(x(i));
    then (w2(i), x(i))_{i=1..N} ∼ π2, under some common-sense assumptions on π1 and π2.
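The reweighting identity above can be checked numerically on a toy pair of distributions, here π1 = N(0, 1) and π2 = N(1, 1) (this choice is illustrative): draws from π1, reweighted by π2/π1 and self-normalized, estimate π2-expectations.

```python
# Importance sampling: reweight draws from pi1 = N(0,1) so that they
# target pi2 = N(1,1), then estimate E_pi2[X] = 1. Self-normalized
# weights make the normalizing constants of pi1 and pi2 irrelevant.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.normal(size=N)                        # x^(i) ~ pi1
log_w = -0.5 * (x - 1.0)**2 + 0.5 * x**2      # log pi2(x) - log pi1(x)
w = np.exp(log_w - log_w.max())               # stabilize before exponentiating
w /= w.sum()                                  # self-normalized weights
estimate = np.sum(w * x)                      # approximates E_pi2[X] = 1
```

The "common-sense assumptions" amount to π1 dominating π2 (π1 puts mass wherever π2 does), so the weight function is well defined.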
  • 24. Sequential Monte Carlo for filtering
    From one time-step to the next: suppose (wt(i), x1:t(i))_{i=1..N} ∼ pθ(x1:t|y1:t);
    we want (wt+1(i), x1:t+1(i))_{i=1..N} ∼ pθ(x1:t+1|y1:t+1).
    Decomposition:
        pθ(x1:t+1|y1:t+1) ∝ pθ(yt+1|xt+1) pθ(xt+1|xt) pθ(x1:t|y1:t)
                          ∝ gθ(yt+1|xt+1) fθ(xt+1|xt) pθ(x1:t|y1:t).
  • 25. Sequential Monte Carlo for filtering
    Proposal: propose xt+1(i) ∼ qθ(xt+1 | x1:t = x1:t(i), y1:t+1). Then:
        (wt(i), (x1:t(i), xt+1(i)))_{i=1..N} ∼ qθ(xt+1|x1:t, y1:t+1) pθ(x1:t|y1:t).
  • 26. Sequential Monte Carlo for filtering
    Reweighting:
        wt+1(i) = wt(i) × gθ(yt+1|xt+1(i)) fθ(xt+1(i)|xt(i)) / qθ(xt+1(i)|x1:t(i), y1:t+1),
    and finally we have (wt+1(i), x1:t+1(i))_{i=1..N} ∼ pθ(x1:t+1|y1:t+1).
  • 27. Sequential Monte Carlo for filtering
    Resampling: to fight weight degeneracy we introduce a resampling step.
    Notation: a family of probability distributions on {1, . . . , N}^N,
        a ∼ r(·|w), for w ∈ [0, 1]^N such that Σ_{i=1}^{N} w(i) = 1.
    The variables (at−1(i))_{i=1..N} are the indices of the parents of (x1:t(i))_{i=1..N}.
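The simplest choice of r(·|w) is multinomial resampling: draw the N parent indices i.i.d. from the categorical distribution defined by the normalized weights. A minimal sketch (the example weights are illustrative; in practice lower-variance schemes such as systematic resampling are often preferred):

```python
# Multinomial resampling: parent indices a^(i) drawn i.i.d. proportionally
# to the normalized weights w. Heavily weighted particles get duplicated,
# lightly weighted ones tend to die out.
import numpy as np

def multinomial_resample(weights, rng):
    """Return N parent indices, N = len(weights), drawn with probabilities w."""
    N = len(weights)
    return rng.choice(N, size=N, p=weights)

rng = np.random.default_rng(0)
w = np.array([0.7, 0.2, 0.1])
ancestors = multinomial_resample(w, rng)
```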
  • 28. Sequential Monte Carlo for filtering
    Algorithm 3: Sequential Monte Carlo
     1: Propose x1(i) ∼ µθ(·)
     2: Compute the weights w1(i)
     3: for t = 2 to T do
     4:   Resample: draw at−1 ∼ r(·|wt−1)
     5:   Propose xt(i) ∼ qθ(·|x1:t−1^{at−1(i)}, y1:t), and let x1:t(i) = (x1:t−1^{at−1(i)}, xt(i))
     6:   Compute the updated weights wt(i)
     7: end for
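Algorithm 3 can be sketched for the local level model of slide 8, using the simplest "bootstrap" choice qθ = fθ, so the weight reduces to gθ(yt|xt); it also accumulates the log of the likelihood estimate that slide 33 discusses (the data and parameter values here are illustrative):

```python
# Bootstrap particle filter for the local level model: propose from the
# transition f_theta, weight by g_theta, resample multinomially, and
# accumulate log Z_T-hat along the way. Values are illustrative.
import numpy as np

def bootstrap_filter(y, N, sigma_V, sigma_W, seed=0):
    """Return the log of the SMC estimate of p(y_{1:T} | theta)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=N)                         # x0^(i) ~ N(0, 1)
    log_Z = 0.0
    for yt in y:
        x = x + sigma_W * rng.normal(size=N)       # propose from f_theta
        # log g_theta(y_t | x_t) for a N(x_t, sigma_V^2) observation
        logw = -0.5 * ((yt - x) / sigma_V)**2 - 0.5 * np.log(2 * np.pi * sigma_V**2)
        m = logw.max()                             # log-sum-exp for stability
        w = np.exp(logw - m)
        log_Z += m + np.log(w.mean())              # log of (1/N) sum_i w_t^(i)
        x = x[rng.choice(N, size=N, p=w / w.sum())]  # multinomial resampling
    return log_Z

log_Z_hat = bootstrap_filter(np.array([0.5, -0.2, 0.9, 0.1]),
                             N=1000, sigma_V=0.5, sigma_W=1.0)
```

Resampling at every step, as here, is the simplest scheme; adaptive variants resample only when the effective sample size drops too low.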
  • 29. Sequential Monte Carlo for filtering. [Figure: Three weighted trajectories x1:t at time t.]
  • 30. Sequential Monte Carlo for filtering. [Figure: Three proposed trajectories x1:t+1 at time t + 1.]
  • 31. Sequential Monte Carlo for filtering. [Figure: Three reweighted trajectories x1:t+1 at time t + 1.]
  • 32. Sequential Monte Carlo for filtering
    Output: in the end we get particles
        (wT(i), x1:T(i))_{i=1..N} ∼ pθ(x1:T|y1:T).
    Requirements:
      proposal kernels qθ(·|x1:t−1, y1:t) from which we can sample;
      weight functions which we can evaluate point-wise;
      these proposal kernels and weight functions must result in properly weighted samples.
  • 33. Sequential Monte Carlo for filtering
    Marginal likelihood: a side effect of the SMC algorithm is that we can approximate
    the marginal likelihood ZT = p(y1:T|θ) with the unbiased estimate
        ẐT^N = Π_{t=1}^{T} [ (1/N) Σ_{i=1}^{N} wt(i) ],
    which converges in probability to ZT as N → ∞.
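The unbiasedness of ẐT^N can be checked numerically in a case where ZT is known in closed form. With T = 1 in the local level setting, x1 ∼ N(0, 1) and y1 | x1 ∼ N(x1, σV²), so p(y1) is the N(0, 1 + σV²) density; the following sketch (numeric settings are illustrative) averages many independent estimates Ẑ = (1/N) Σi w(i):

```python
# Monte Carlo check of the unbiasedness of the SMC likelihood estimate
# on a one-step example: x1 ~ N(0,1), y1 | x1 ~ N(x1, sigma_V^2),
# so Z = p(y1) is the N(0, 1 + sigma_V^2) density at y1.
import numpy as np

rng = np.random.default_rng(0)
sigma_V, y1, N = 0.8, 0.4, 1000

def Z_hat():
    x = rng.normal(size=N)                               # x^(i) ~ mu_theta
    w = np.exp(-0.5 * ((y1 - x) / sigma_V)**2) / (sigma_V * np.sqrt(2 * np.pi))
    return w.mean()                                      # (1/N) sum of weights

estimates = np.array([Z_hat() for _ in range(2000)])
Z_exact = np.exp(-0.5 * y1**2 / (1 + sigma_V**2)) / np.sqrt(2 * np.pi * (1 + sigma_V**2))
```

Each Ẑ is noisy, but their average matches Z exactly in expectation; it is this unbiasedness, not consistency in N, that the PMCMC constructions of the next section rely on.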
  • 34. Outline
    1 Introduction and State Space Models
    2 Reminder on some Monte Carlo methods
    3 Particle Markov Chain Monte Carlo
    4 SMC2
  • 35. Particle Markov Chain Monte Carlo
    Reference: "Particle Markov chain Monte Carlo methods", Andrieu, Doucet & Holenstein, JRSS B, 2010, 72(3):269–342.
    Motivation: Bayesian inference in state space models, i.e. sampling from p(θ, x1:T|y1:T).
  • 36. Idealized Metropolis–Hastings for SSM
    If only. . . we had p(θ|y1:T) ∝ p(θ) p(y1:T|θ) up to a multiplicative constant,
    we could run a MH algorithm with acceptance rate:
        α(θ(i), θ′) = min{ 1, [p(θ′) p(y1:T|θ′) q(θ(i)|θ′)] / [p(θ(i)) p(y1:T|θ(i)) q(θ′|θ(i))] }.
  • 37. Valid Metropolis–Hastings for SSM??
    Plug in estimates: however, we do have ẐT^N(θ) ≈ p(y1:T|θ) by running an SMC algorithm,
    and we can try to run a MH algorithm with acceptance rate:
        α(θ(i), θ′) = min{ 1, [p(θ′) ẐT^N(θ′) q(θ(i)|θ′)] / [p(θ(i)) ẐT^N(θ(i)) q(θ′|θ(i))] }.
• 38. The Beauty of Particle MCMC

"Exact approximation"
It turns out this is a valid MH algorithm that targets exactly p(\theta \mid y_{1:T}), regardless of the number N of particles used in the SMC algorithm that provides the estimates \hat{Z}_T^N(\theta) at each iteration.

State estimation
In fact the PMCMC algorithms provide samples from p(\theta, x_{1:T} \mid y_{1:T}), not only from the posterior distribution of the parameters.
• 39. Particle Metropolis-Hastings

Algorithm 4: Particle Metropolis-Hastings
1: Set some \theta^{(1)}
2: Run an SMC algorithm, keep \hat{Z}_T^N(\theta^{(1)}), draw a trajectory x_{1:T}^{(1)}
3: for i = 2 to I do
4:   Propose \theta' \sim q(\cdot \mid \theta^{(i-1)})
5:   Run an SMC algorithm, keep \hat{Z}_T^N(\theta'), draw a trajectory x'_{1:T}
6:   Compute the ratio:
       \alpha(\theta^{(i-1)}, \theta') = \min\left(1, \frac{p(\theta')\, \hat{Z}_T^N(\theta')\, q(\theta^{(i-1)} \mid \theta')}{p(\theta^{(i-1)})\, \hat{Z}_T^N(\theta^{(i-1)})\, q(\theta' \mid \theta^{(i-1)})}\right)
7:   With probability \alpha, set \theta^{(i)} = \theta' and x_{1:T}^{(i)} = x'_{1:T}; otherwise keep the previous values
8: end for
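Algorithm 4 can be sketched on a toy problem. This is not the authors' implementation: here the hypothetical log_z_hat function mimics an SMC estimate by perturbing the exact Gaussian likelihood of a single observation with mean-one Exp(1) noise (unbiasedness on the natural scale is what the theory requires); a real implementation would run a particle filter at each proposed theta. Note the stored estimate lz is recycled, never recomputed, which is essential for validity:

```python
import math
import random

rng = random.Random(1)

def log_prior(theta):
    # theta ~ N(0, 10^2) a priori (an arbitrary choice for this toy)
    return -0.5 * theta ** 2 / 100.0

def log_z_hat(theta):
    # Stand-in for \hat{Z}_T^N(theta): exact log-likelihood of y = 1.2
    # under N(theta, 1), times mean-one Exp(1) noise
    return -0.5 * (1.2 - theta) ** 2 + math.log(rng.expovariate(1.0))

def particle_mh(n_iter, step=1.0):
    theta = 0.0
    lz = log_z_hat(theta)        # stored estimate for the current state
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.gauss(0.0, 1.0)   # symmetric random walk
        lz_prop = log_z_hat(prop)
        log_alpha = log_prior(prop) + lz_prop - log_prior(theta) - lz
        if rng.random() < math.exp(min(0.0, log_alpha)):
            theta, lz = prop, lz_prop   # accept theta AND its estimate
        chain.append(theta)
    return chain
```

The noisier the estimate, the "stickier" the chain, but its invariant distribution is still the exact posterior.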
• 40. Why does it work?

Variables generated by SMC
    \forall t \in [1, T]:\quad \mathbf{x}_t = (x_t^{(1)}, \ldots, x_t^{(N)})
    \forall t \in [1, T-1]:\quad \mathbf{a}_t = (a_t^{(1)}, \ldots, a_t^{(N)})

Joint distribution
    \psi(\mathbf{x}_1, \ldots, \mathbf{x}_T, \mathbf{a}_1, \ldots, \mathbf{a}_{T-1}) = \prod_{i=1}^{N} q_\theta(x_1^{(i)}) \times \prod_{t=2}^{T} \prod_{i=1}^{N} r(a_{t-1}^{(i)} \mid w_{t-1})\, q_\theta(x_t^{(i)} \mid x_{1:t-1}^{a_{1:t-1}^{(i)}})
• 41. Why does it work?

Extended proposal distribution
The PMH proposes: a new parameter \theta', a trajectory index k' and trajectory x'_{1:T}, and the rest of the variables generated by the SMC:
    q^N(\theta', k', \mathbf{x}_1, \ldots, \mathbf{x}_T, \mathbf{a}_1, \ldots, \mathbf{a}_{T-1}) = q(\theta' \mid \theta^{(i)})\, w_T^{k'}\, \psi(\mathbf{x}_1, \ldots, \mathbf{x}_T, \mathbf{a}_1, \ldots, \mathbf{a}_{T-1})
• 42. Why does it work?

Extended target distribution
    \tilde{\pi}^N(\theta, k, \mathbf{x}_1, \ldots, \mathbf{x}_T, \mathbf{a}_1, \ldots, \mathbf{a}_{T-1}) = \frac{p(\theta, x_{1:T}^{(k)} \mid y_{1:T})}{N^T} \cdot \frac{\psi_\theta(\mathbf{x}_1, \ldots, \mathbf{x}_T, \mathbf{a}_1, \ldots, \mathbf{a}_{T-1})}{q_\theta(x_1^{b_1^k}) \prod_{t=2}^{T} r(b_{t-1}^k \mid w_{t-1})\, q_\theta(x_t^{b_t^k} \mid x_{1:t-1}^{b_{1:t-1}^k})}
with b_{1:T}^k the index history of particle x_{1:T}^{(k)}.

Valid algorithm
From the explicit form of the extended distributions, showing that PMH is a standard MH algorithm becomes straightforward.
• 43. Particle MCMC: conclusion

Remarks
- It is exact regardless of N...
- ...however a sufficient number N of particles is required to get decent acceptance rates.
- SMC methods are considered expensive, but are easy to parallelize.
- Applies to a broad class of models.
- More sophisticated SMC and MCMC methods can be used, resulting in more sophisticated Particle MCMC methods.
• 44. Outline
1 Introduction and State Space Models
2 Reminder on some Monte Carlo methods
3 Particle Markov Chain Monte Carlo
4 SMC2
• 45. Our idea...
...was to use the same, very powerful "extended distribution" framework to build an SMC sampler instead of an MCMC algorithm.

Foreseen benefits
- to sample more efficiently from the posterior distribution p(\theta \mid y_{1:T}),
- to sample sequentially from p(\theta \mid y_1), p(\theta \mid y_{1:2}), ..., p(\theta \mid y_{1:T});
and it turns out it allows even a bit more.
• 46. Idealized SMC sampler for SSM

Algorithm 5: Iterated Batch Importance Sampling
1: Sample \theta^{(m)} \sim p(\cdot) from the prior, for m \in [1, N_\theta]
2: Set \omega^{(m)} \leftarrow 1
3: for t = 1 to T do
4:   Compute u_t(\theta^{(m)}) = p(y_t \mid y_{1:t-1}, \theta^{(m)})
5:   Update \omega^{(m)} \leftarrow \omega^{(m)} \times u_t(\theta^{(m)})
6:   if some degeneracy criterion is met then
7:     Resample the particles, reset the weights \omega^{(m)} \leftarrow 1
8:     Move the particles using a Markov kernel leaving the distribution invariant
9:   end if
10: end for
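Algorithm 5 can be sketched end to end on a conjugate toy model where the incremental likelihood is available exactly, which is precisely the "idealized" setting of the slide. Everything below (the Gaussian model, the ESS degeneracy criterion, the random-walk MH move) is an illustrative assumption, not the authors' code:

```python
import math
import random

rng = random.Random(0)

def ibis(y, n_theta=200, ess_threshold=0.5):
    """Idealised IBIS for y_t ~ N(theta, 1) with prior theta ~ N(0, 1),
    where p(y_t | y_{1:t-1}, theta) = N(y_t; theta, 1) exactly.
    Returns the weighted posterior-mean estimate of theta."""
    thetas = [rng.gauss(0.0, 1.0) for _ in range(n_theta)]   # prior draws
    log_w = [0.0] * n_theta

    def log_post(theta, t):   # unnormalised log p(theta | y_{1:t})
        return -0.5 * theta ** 2 - 0.5 * sum((ys - theta) ** 2 for ys in y[:t])

    for t, y_t in enumerate(y, start=1):
        for m in range(n_theta):            # reweight by incremental likelihood
            log_w[m] += -0.5 * (y_t - thetas[m]) ** 2
        mx = max(log_w)
        w = [math.exp(lw - mx) for lw in log_w]
        ess = sum(w) ** 2 / sum(x * x for x in w)
        if ess < ess_threshold * n_theta:   # degeneracy criterion
            thetas = rng.choices(thetas, weights=w, k=n_theta)  # resample
            log_w = [0.0] * n_theta                             # reset weights
            for m in range(n_theta):        # move: MH targeting p(theta | y_{1:t})
                prop = thetas[m] + 0.5 * rng.gauss(0.0, 1.0)
                if math.log(rng.random()) < log_post(prop, t) - log_post(thetas[m], t):
                    thetas[m] = prop
    mx = max(log_w)
    w = [math.exp(lw - mx) for lw in log_w]
    return sum(wi * th for wi, th in zip(w, thetas)) / sum(w)
```

On twenty observations equal to 1.0, the estimate should land near the exact posterior mean 20/21.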
• 47. Valid SMC sampler for SSM??

Plug-in estimates
Similarly to PMCMC methods, we want to replace p(y_t \mid y_{1:t-1}, \theta^{(m)}) with an unbiased estimate, and see what happens.

SMC everywhere
We associate N_x x-particles with each of the N_\theta \theta-particles.
• 48. Valid SMC sampler for SSM??

Marginal likelihood
Remember, a side effect of the SMC algorithm is that we can approximate the incremental likelihood:
    \frac{1}{N_x} \sum_{i=1}^{N_x} w_t^{(i,m)} \approx p(y_t \mid y_{1:t-1}, \theta^{(m)})

Move steps
Instead of simple MH kernels, use PMH kernels.
• 49. Why does it work?

A simple idea...
...especially after the PMCMC article.

Still...
...some work had to be done to justify the validity of the algorithm. In short, it leads to a standard SMC sampler on a sequence of extended distributions \pi_t (Proposition 1 of the article).
• 50. Why does it work?

Additional notations
- h_t^n denotes the index history of x_t^n, that is, h_t^n(t) = n and h_t^n(s) = a_s^{h_t^n(s+1)} recursively, for s = t-1, \ldots, 1.
- x_{1:t}^n denotes the state trajectory finishing in x_t^n, that is, x_{1:t}^n(s) = x_s^{h_t^n(s)} for s = 1, \ldots, t.
• 51. Why does it work?

Here is what the distribution \pi_t looks like:
    \pi_t(\theta, x_{1:t}^{1:N_x}, a_{1:t-1}^{1:N_x}) = p(\theta \mid y_{1:t}) \times \frac{1}{N_x} \sum_{n=1}^{N_x} \Biggl[ \frac{p(x_{1:t}^n \mid \theta, y_{1:t})}{N_x^{t-1}} \times \prod_{\substack{i=1 \\ i \neq h_t^n(1)}}^{N_x} q_{1,\theta}(x_1^i) \times \prod_{s=2}^{t} \prod_{\substack{i=1 \\ i \neq h_t^n(s)}}^{N_x} W_{s-1,\theta}^{a_{s-1}^i}\, q_{s,\theta}(x_s^i \mid x_{s-1}^{a_{s-1}^i}) \Biggr]
• 52. Why does it work?

PMCMC move steps
These steps are valid because the PMCMC invariant distribution \tilde{\pi}_t, defined on (\theta, k, x_{1:t}^{1:N_x}, a_{1:t-1}^{1:N_x}), is such that \pi_t is the marginal distribution of (\theta, x_{1:t}^{1:N_x}, a_{1:t-1}^{1:N_x}) with respect to \tilde{\pi}_t. (Sections 3.2 and 3.3 of the article.)
• 53. Benefits

Explicit form of the distribution
It allows us to prove the validity of the algorithm, but also:
- to get samples from p(\theta, x_{1:t} \mid y_{1:t}),
- to validate an automatic calibration of N_x.
• 54. Benefits

Drawing trajectories
If for every \theta-particle \theta^{(m)} one draws an index n(m) uniformly on \{1, \ldots, N_x\}, then the weighted sample:
    (\omega^m, \theta^m, x_{1:t}^{n(m),m})_{m \in 1:N_\theta}
follows p(\theta, x_{1:t} \mid y_{1:t}).

Memory cost
The x-trajectories need to be stored only if one wants to make inference about x_{1:t} (smoothing). If the interest is only in parameter inference (\theta), filtering (x_t) or prediction (y_{t+1}), there is no need to store the trajectories.
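The trajectory-drawing step above is a one-liner per theta-particle. A sketch with hypothetical container types (a list of theta-weights and, per theta-particle, a list of its stored x-trajectories):

```python
import random

def draw_joint_sample(omega, trajectories, rng):
    """Pair each theta-particle m with one of its x-trajectories, drawn
    uniformly on {1, ..., N_x} as on the slide, keeping the theta-weight."""
    sample = []
    for m, trajs in enumerate(trajectories):
        n = rng.randrange(len(trajs))           # the index n(m)
        sample.append((omega[m], m, trajs[n]))
    return sample
```

The resulting weighted tuples can then be fed to any weighted-sample estimator of functions of (theta, x_{1:t}).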
• 55. Benefits

Estimating functionals of the states
We have a test function h and want to estimate E[h(\theta, x_{1:t}) \mid y_{1:t}].
Estimator:
    \frac{1}{\sum_{m=1}^{N_\theta} \omega^m} \sum_{m=1}^{N_\theta} \omega^m\, h(\theta^m, x_{1:t}^{n(m),m})
Rao-Blackwellized estimator:
    \frac{1}{\sum_{m=1}^{N_\theta} \omega^m} \sum_{m=1}^{N_\theta} \omega^m \sum_{n=1}^{N_x} W_{t,\theta^m}^n\, h(\theta^m, x_{1:t}^{n,m})
(Section 3.4 of the article.)
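The Rao-Blackwellized estimator averages h over all N_x trajectories of each theta-particle instead of a single drawn one. A sketch over plain nested lists (the argument layout is an assumption of this example):

```python
def rao_blackwellised_estimate(omega, W, h_vals):
    """Rao-Blackwellised estimate of E[h(theta, x_{1:t}) | y_{1:t}].

    omega[m]     : weight of theta-particle m
    W[m][n]      : normalised weight of x-trajectory n under theta-particle m
    h_vals[m][n] : h evaluated at (theta^m, x^{n,m}_{1:t})
    """
    num = sum(
        om * sum(Wn * hn for Wn, hn in zip(Wm, hm))
        for om, Wm, hm in zip(omega, W, h_vals)
    )
    return num / sum(omega)
```

With equal theta-weights, W = [[0.5, 0.5], [1.0, 0.0]] and h values [[1, 3], [2, 4]], the inner averages are 2 and 2, so the estimate is 2.0.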
• 56. Benefits

Evidence
The evidence of the data given the model is defined as:
    p(y_{1:t}) = \prod_{s=1}^{t} p(y_s \mid y_{1:s-1})
and it can be used to compare models. SMC2 provides the following estimate:
    \hat{L}_t = \frac{1}{\sum_{m=1}^{N_\theta} \omega^m} \sum_{m=1}^{N_\theta} \omega^m\, \hat{p}(y_t \mid y_{1:t-1}, \theta^m)
(Section 3.5 of the article.)
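The evidence estimate above is a weighted average of per-particle incremental likelihood estimates, accumulated multiplicatively over time (in practice on the log scale). A sketch:

```python
import math

def evidence_increment(omega, inc_lik_hat):
    """SMC2 estimate L_t of p(y_t | y_{1:t-1}): weighted average of the
    per-theta-particle incremental likelihood estimates."""
    return sum(om * l for om, l in zip(omega, inc_lik_hat)) / sum(omega)

def log_evidence(increments):
    """log p(y_{1:t}) estimated as the sum of the log increments."""
    return sum(math.log(l) for l in increments)
```

Comparing two models then amounts to comparing their accumulated log-evidence trajectories, as in the S&P500 illustration later in the deck.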
• 57. Benefits

Exchange importance sampling step
Launch a new SMC for each \theta-particle, with \tilde{N}_x x-particles. Joint distribution:
    \pi_t(\theta, x_{1:t}^{1:N_x}, a_{1:t-1}^{1:N_x})\, \psi_{t,\theta}(\tilde{x}_{1:t}^{1:\tilde{N}_x}, \tilde{a}_{1:t-1}^{1:\tilde{N}_x})
Retain the new x-particles and drop the old ones, updating the \theta-weights with:
    u_t^{\mathrm{exch}}(\theta, x_{1:t}^{1:N_x}, a_{1:t-1}^{1:N_x}, \tilde{x}_{1:t}^{1:\tilde{N}_x}, \tilde{a}_{1:t-1}^{1:\tilde{N}_x}) = \frac{\hat{Z}_t(\theta, \tilde{x}_{1:t}^{1:\tilde{N}_x}, \tilde{a}_{1:t-1}^{1:\tilde{N}_x})}{\hat{Z}_t(\theta, x_{1:t}^{1:N_x}, a_{1:t-1}^{1:N_x})}
(Section 3.6 of the article.)
• 58. Warning

Plug-in estimates
Not every SMC sampler can be turned into an SMC2 algorithm by replacing the exact weights with estimates: the estimates have to be unbiased!
• 59. Warning

Example
For instance, if instead of using the sequence of distributions \{p(\theta \mid y_{1:t})\}_{t=1}^{T} one wants to use the "tempered" sequence \{p(\theta \mid y_{1:T})^{\gamma_k}\}_{k=1}^{K}, with \gamma_k an increasing sequence from 0 to 1, then one should find unbiased estimates of p(\theta \mid y_{1:T})^{\gamma_k - \gamma_{k-1}} to plug into the idealized SMC sampler.
• 60. Numerical illustrations

Stochastic Volatility (sophisticated)
    y_t = \mu + \beta v_t + v_t^{1/2} \epsilon_t, \quad t \geq 1
    k \sim \mathrm{Poi}(\lambda \xi^2 / \omega^2), \quad c_{1:k} \overset{\mathrm{iid}}{\sim} \mathcal{U}(t, t+1), \quad e_{1:k} \overset{\mathrm{iid}}{\sim} \mathrm{Exp}(\xi / \omega^2)
    z_{t+1} = e^{-\lambda} z_t + \sum_{j=1}^{k} e^{-\lambda(t+1-c_j)} e_j
    v_{t+1} = \frac{1}{\lambda} \left[ z_t - z_{t+1} + \sum_{j=1}^{k} e_j \right]
    x_{t+1} = (v_{t+1}, z_{t+1})
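The state transition above can be simulated directly from the displayed equations. A sketch with a small Knuth-style Poisson sampler (the Python standard library has none) and an explicit rng argument for reproducibility; the helper name _poisson and the argument layout are choices of this example:

```python
import math
import random

def _poisson(mean, rng):
    # Knuth's method; adequate for the small means arising here
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sv_transition(z_t, t, lam, xi, omega2, rng):
    """One step of the Levy-driven SV state transition: k jumps arrive at
    uniform times c_j in (t, t+1) with Exp-distributed sizes e_j."""
    k = _poisson(lam * xi ** 2 / omega2, rng)
    c = [t + rng.random() for _ in range(k)]                # c_j ~ U(t, t+1)
    e = [rng.expovariate(xi / omega2) for _ in range(k)]    # e_j ~ Exp(xi/omega2)
    z_next = math.exp(-lam) * z_t + sum(
        math.exp(-lam * (t + 1 - cj)) * ej for cj, ej in zip(c, e))
    v_next = (z_t - z_next + sum(e)) / lam
    return v_next, z_next
```

By construction both the actual volatility v and the latent level z remain nonnegative, which is a handy sanity check when debugging a particle filter built on this transition.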
• 61. Numerical illustrations

Figure: Squared observations (synthetic data set), acceptance rates, and illustration of the automatic increase of N_x. (Panels: (a) squared observations over time; (b) acceptance rates over iterations; (c) N_x over iterations.)
• 62. Numerical illustrations

Figure: Concentration of the posterior distribution for parameter \mu, shown at T = 250, 500, 750 and 1000.
• 63. Numerical illustrations

Multifactor model
    y_t = \mu + \beta v_t + v_t^{1/2} \epsilon_t + \rho_1 \sum_{j=1}^{k_1} e_{1,j} + \rho_2 \sum_{j=1}^{k_2} e_{2,j} - \xi (w \rho_1 \lambda_1 + (1-w) \rho_2 \lambda_2)
• 64. Numerical illustrations

Evidence compared to the one-factor model

Figure: S&P500 squared observations, and log-evidence comparison between models (multi-factor with and without leverage, relative to the one-factor model).
• 65. Numerical illustrations

Athletics records model
    g(y_{1:2,t} \mid \mu_t, \xi, \sigma) = \{1 - G(y_{2,t} \mid \mu_t, \xi, \sigma)\} \prod_{i=1}^{2} \frac{g(y_{i,t} \mid \mu_t, \xi, \sigma)}{1 - G(y_{i,t} \mid \mu_t, \xi, \sigma)}
    x_t = (\mu_t, \dot{\mu}_t)', \quad x_{t+1} \mid x_t, \nu \sim \mathcal{N}(F x_t, Q), \quad \text{with}
    F = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad Q = \nu^2 \begin{pmatrix} 1/3 & 1/2 \\ 1/2 & 1 \end{pmatrix}
    G(y \mid \mu, \xi, \sigma) = 1 - \exp\left\{ - \left[ 1 - \xi \left( \frac{y - \mu}{\sigma} \right) \right]_+^{-1/\xi} \right\}
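The cdf G is the main numerical ingredient of the observation density above. A sketch for xi != 0, where the behaviour past the endpoint of the support (the [.]_+ truncation) is resolved by the sign of xi; this endpoint handling is our reading of the convention, not spelled out on the slide:

```python
import math

def gev_cdf(y, mu, xi, sigma):
    """G(y | mu, xi, sigma) = 1 - exp(-[1 - xi (y - mu)/sigma]_+^(-1/xi))."""
    u = 1.0 - xi * (y - mu) / sigma
    if u <= 0.0:
        # past the endpoint of the support: for xi > 0 the cdf saturates
        # at 1, while for xi < 0 the exponent flips sign and the cdf is 0
        return 1.0 if xi > 0 else 0.0
    return 1.0 - math.exp(-u ** (-1.0 / xi))
```

At y = mu the bracket equals 1, so G(mu) = 1 - exp(-1) regardless of xi, a convenient spot-check.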
• 66. Numerical illustrations

Figure: Best two times (in seconds) of each year, in women's 3000 metres events between 1976 and 2010.
• 67. Numerical illustrations

Motivating question
How unlikely is Wang Junxia's record of 1993?

A smoothing problem
We want to estimate the likelihood of Wang Junxia's record in 1993, given that we observe a better time than the previous world record. We want to use all the observations from 1976 to 2010 to answer the question.

Note
We exclude the observations from the year 1993.
• 68. Numerical illustrations

Some probabilities of interest
    p_t^y = P(y_t \leq y \mid y_{1976:2010}) = \int_\Theta \int_\mathcal{X} G(y \mid \mu_t, \theta)\, p(\mu_t \mid y_{1976:2010}, \theta)\, p(\theta \mid y_{1976:2010})\, d\mu_t\, d\theta
    p_t^{\mathrm{cond}} := p_t^{486.11} / p_t^{502.62}
The interest lies in p_{1993}^{486.11}, p_{1993}^{502.62} and p_t^{\mathrm{cond}}.
• 69. Numerical illustrations

Figure: Estimates of the probabilities of interest, (top) p_t^{502.62}, (middle) p_t^{\mathrm{cond}} and (bottom) p_t^{486.11}, obtained with the SMC2 algorithm. The y-axis is in log scale, and the dotted line indicates the year 1993 which motivated the study.
• 70. Conclusion

A powerful framework
The SMC2 framework allows us to obtain various quantities of interest in a quite generic, "black-box" way. It extends the PMCMC framework introduced by Andrieu, Doucet and Holenstein.

A package is available: http://code.google.com/p/py-smc2/.
• 71. Acknowledgments

N. Chopin is supported by the ANR grant ANR-008-BLAN-0218 "BigMC" of the French Ministry of Research. P.E. Jacob is supported by a PhD fellowship from the AXA Research Fund. O. Papaspiliopoulos acknowledges financial support from the Spanish government through a "Ramon y Cajal" fellowship and grant MTM2009-09063. The authors are thankful to Arnaud Doucet (University of British Columbia) and Gareth W. Peters (University of New South Wales) for useful comments.
• 72. Bibliography

SMC2: A sequential Monte Carlo algorithm with particle Markov chain Monte Carlo updates, N. Chopin, P.E. Jacob and O. Papaspiliopoulos, submitted.

Main references:
- Particle Markov chain Monte Carlo methods, C. Andrieu, A. Doucet and R. Holenstein, JRSS B, 2010, 72(3):269-342.
- The pseudo-marginal approach for efficient Monte Carlo computations, C. Andrieu and G.O. Roberts, Ann. Statist., 2009, 37:697-725.
- Random-weight particle filtering of continuous time processes, P. Fearnhead, O. Papaspiliopoulos, G.O. Roberts and A. Stuart, JRSS B, 2010, 72:497-513.
- Feynman-Kac Formulae, P. Del Moral, Springer.