DISCUSSION
                     of
Bayesian Computation via empirical likelihood

     Stefano Cabras, stefano.cabras@uc3m.es
        Universidad Carlos III de Madrid (Spain)
           Università di Cagliari (Italy)


                Padova, 21-Mar-2013
Summary
   ◮   Problem:
         ◮   a statistical model f(y | θ);
         ◮   a prior π(θ) on θ;
   ◮   we want to obtain the posterior

                             π_N(θ | y) ∝ L_N(θ)π(θ).

   ◮   BUT
         ◮   IF L_N(θ) is not available:
               ◮   THEN long live ABC;
         ◮   IF it is not even possible to simulate from f(y | θ):
               ◮   THEN replace L_N(θ) with L_EL(θ)
                   (the proposed BC_el procedure):

                             π(θ | y) ∝ L_EL(θ) × π(θ).
... what remains of f(y | θ) ?

     ◮   Recall that the Empirical Likelihood is defined, for an iid
         sample, by means of a set of constraints:

                             E_{f(y|θ)}[h(Y, θ)] = 0.

     ◮   The relation between θ and the observations Y is model
         conditioned and expressed by h(Y, θ);

     ◮   Constraints are model driven, so there is still a timid trace
         of f(y | θ) in BC_el.
     ◮   Examples (a sketch of the EL computation follows this slide):
           ◮   The coalescent model example is illuminating in suggesting
               the score of the pairwise likelihood;
           ◮   The residuals in GARCH models.
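Since the constraint E_{f(y|θ)}[h(Y, θ)] = 0 does all the work here, a minimal sketch of how L_EL(θ) can be profiled for a scalar constraint may help. It uses Owen's Lagrange-multiplier construction; the name log_el, the default h, and the numerical tolerances are illustrative assumptions, not the talk's implementation.

import numpy as np
from scipy.optimize import brentq

def log_el(theta, y, h=lambda y, t: y - t):
    """Log empirical-likelihood ratio at theta for a scalar
    constraint E[h(Y, theta)] = 0 (Owen's profile construction).
    y is a 1-d numpy array of iid observations."""
    hv = h(y, theta)
    # The EL is zero unless 0 lies strictly inside the range of h values.
    if hv.min() >= 0.0 or hv.max() <= 0.0:
        return -np.inf
    # Solve sum_i h_i / (1 + lam h_i) = 0 for the multiplier lam;
    # admissible lam keeps every weight positive: 1 + lam h_i > 0.
    lo = -1.0 / hv.max() + 1e-10
    hi = -1.0 / hv.min() - 1e-10
    lam = brentq(lambda l: np.sum(hv / (1.0 + l * hv)), lo, hi)
    # Weights are w_i = 1 / (n (1 + lam h_i)), so log prod(n w_i) is:
    return -np.sum(np.log1p(lam * hv))

The pseudo-posterior of the Summary slide is then proportional to exp(log_el(θ, y, h)) × π(θ).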
... a suggestion




               What if we do not even know h(·) ?
... how to elicit h(·) automatically

     ◮   Set h(Y, θ) = Y − g(θ), where

                             g(θ) = E_{f(y|θ)}(Y | θ)

         is the regression function of Y | θ;

     ◮   g(θ) should be replaced by an estimator ĝ(θ).
How to estimate g(θ) ?

     ◮   Use a once-and-for-all pilot-run simulation study:¹

    1. Consider a grid (or regular lattice) of θ made of M points:
       θ_1, . . . , θ_M
    2. Simulate the corresponding y_1, . . . , y_M
    3. Regress y_1, . . . , y_M on θ_1, . . . , θ_M, obtaining ĝ(θ)
       (a minimal sketch of these steps follows below).

      ¹ ... similar to Fearnhead and Prangle (JRSS-B, 2012) or Cabras,
   Castellanos, Ruli (ERCIM 2012, Oviedo).
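Here is a minimal sketch of the three pilot-run steps for the talk's running example y ∼ N(|θ|, 1); the grid bounds, the value of M, the Gaussian-kernel smoother, and the name g_hat are illustrative assumptions, not the talk's code.

import numpy as np

rng = np.random.default_rng(0)
M = 1000
theta_grid = np.linspace(-10.0, 10.0, M)     # 1. grid of M theta values
y_sim = rng.normal(np.abs(theta_grid), 1.0)  # 2. simulate one y per theta

def g_hat(theta, bandwidth=0.5):
    """3. Nadaraya-Watson regression of y_sim on theta_grid,
    estimating g(theta) = E[Y | theta] from the pilot run."""
    w = np.exp(-0.5 * ((theta_grid - theta) / bandwidth) ** 2)
    return np.sum(w * y_sim) / np.sum(w)

With ĝ in hand, the automatically elicited constraint is simply h(Y, θ) = Y − g_hat(θ).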
... example: y ∼ N(|θ|, 1)
   For a pilot run of M = 1000 we have ĝ(θ) = |θ|.

   [Figure "Pilot−Run s.s.": simulated y (0 to 10) plotted against
    θ (−10 to 10), with the fitted regression curve g(θ).]
... example: y ∼ N(|θ|, 1)
   Suppose we draw a sample of n = 100 from θ = 2:

   [Figure "Histogram of y": frequencies (0 to 20) of the observed y
    over the range 0 to 4.]
... example: y ∼ N(|θ|, 1)
   The Empirical Likelihood looks like this:

   [Figure: Emp. Lik. (1.0 to 2.5) as a function of θ over −4 to 4.]
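Putting the earlier sketches together reproduces a curve like the one above; this snippet assumes log_el and g_hat as defined in the previous sketches, and is again illustrative rather than the talk's code.

import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=100)  # n = 100 draws of y ~ N(|theta|, 1) at theta = 2
thetas = np.linspace(-4.0, 4.0, 201)
h = lambda y, t: y - g_hat(t)           # constraint elicited from the pilot run
log_L_EL = np.array([log_el(t, y_obs, h) for t in thetas])
# With g_hat(theta) close to |theta|, the curve is symmetric in theta,
# so L_EL cannot tell theta = 2 from theta = -2: this anticipates the 1st Point.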
1st Point: Do we necessarily have to use f(y | θ) ?

     ◮   The above data may be drawn from (e.g.) a Half Normal;
     ◮   How is this reflected in BC_el ?
           ◮   For given data y;
           ◮   and h(Y, θ) fixed;
           ◮   L_EL(θ) is the same regardless of f(y | θ).

                            Can we ignore f(y | θ) ?
2nd Point: Sample free vs Simulation free

     ◮   The Empirical Likelihood is "simulation free" but not "sample
         free", i.e.
           ◮   L_EL(θ) → L_N(θ) as n → ∞,
           ◮   implying π(θ | y) → π_N(θ | y) asymptotically in n.
     ◮   ABC is "sample free" but not "simulation free", i.e.
           ◮   π(θ | ρ(s(y), s_obs) < ǫ) → π_N(θ | y) as ǫ → 0,
           ◮   implying convergence in the number of simulations if s(y)
               were sufficient.

                    A quick answer recommends BC_el,
                                  BUT
                would a small sample recommend ABC ?
3rd Point: How to validate a pseudo-posterior
π(θ | y) ∝ L_EL(θ) × π(θ) ?
    ◮   The use of pseudo-likelihoods is not new in the Bayesian
        setting:
          ◮   Empirical Likelihoods:
                ◮   Lazar (Biometrika, 2003): examples and coverages of C.I.;
                ◮   Mengersen et al. (PNAS, 2012): examples and coverages
                    of C.I.;
                ◮   ...
          ◮   Modified Likelihoods:
                ◮   Ventura et al. (JASA, 2009): second-order matching
                    properties;
                ◮   Chang and Mukerjee (Stat. & Prob. Letters, 2006): examples;
                ◮   ...
          ◮   Quasi-Likelihoods:
                ◮   Lin (Statist. Methodol., 2006): examples;
                ◮   Greco et al. (JSPI, 2008): robustness properties;
                ◮   Ventura et al. (JSPI, 2010): examples and coverages of C.I.;
                ◮   ...
3rd Point: How to validate a pseudo-posterior
π(θ | y) ∝ L_EL(θ) × π(θ) ?

    ◮   Monahan & Boos (Biometrika, 1992) proposed a notion of
        validity:

     π(θ | y) should obey the laws of probability in a fashion that is
          consistent with statements derived from Bayes' rule.

    ◮   Very difficult! (a coverage-based reading is sketched below)

     How to validate the pseudo-posterior π(θ | y) when this is not
                              possible ?
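When simulation from f(y | θ) is feasible, one Monte Carlo reading of the Monahan & Boos notion is a coverage check: with draws from the joint π(θ)f(y | θ), the pseudo-posterior CDF evaluated at the true θ should look Uniform(0, 1). A minimal sketch, where prior_draw, simulate_data and pseudo_posterior_cdf are hypothetical user-supplied callables:

import numpy as np
from scipy.stats import kstest

def coverage_check(prior_draw, simulate_data, pseudo_posterior_cdf, B=500):
    """Monahan & Boos-style validity check: with theta_b ~ prior and
    y_b ~ f(y | theta_b), U_b = H(theta_b | y_b) should be Uniform(0, 1)."""
    u = np.empty(B)
    for b in range(B):
        theta = prior_draw()                   # theta ~ pi(theta)
        y = simulate_data(theta)               # y ~ f(y | theta)
        u[b] = pseudo_posterior_cdf(theta, y)  # H(theta | y), user-supplied
    return kstest(u, "uniform")                # large departures flag invalidity

Of course, this check presumes the very simulations from f(y | θ) that BC_el is designed to avoid, which is exactly the difficulty the slide points to.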
... Last point: ABC is still a terrific tool

     ◮   ... a lot of references:
           ◮   Statistical journals;
           ◮   Twitter;
           ◮   Xi'an's blog ( xianblog.wordpress.com )
     ◮   ... it is tailored to Approximate L_N(θ).

                          Where is the A in BC_el ?
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

Weitere ähnliche Inhalte

Was ist angesagt?

Is ABC a new empirical Bayes approach?
Is ABC a new empirical Bayes approach?Is ABC a new empirical Bayes approach?
Is ABC a new empirical Bayes approach?Christian Robert
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationChristian Robert
 
My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...Umberto Picchini
 
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...Umberto Picchini
 
Intro to Approximate Bayesian Computation (ABC)
Intro to Approximate Bayesian Computation (ABC)Intro to Approximate Bayesian Computation (ABC)
Intro to Approximate Bayesian Computation (ABC)Umberto Picchini
 
Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...Umberto Picchini
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
Considerate Approaches to ABC Model Selection
Considerate Approaches to ABC Model SelectionConsiderate Approaches to ABC Model Selection
Considerate Approaches to ABC Model SelectionMichael Stumpf
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...Umberto Picchini
 
Approximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsApproximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsChristian Robert
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes FactorsChristian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methodsChristian Robert
 

Was ist angesagt? (20)

Is ABC a new empirical Bayes approach?
Is ABC a new empirical Bayes approach?Is ABC a new empirical Bayes approach?
Is ABC a new empirical Bayes approach?
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimation
 
My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...
 
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
 
Intro to Approximate Bayesian Computation (ABC)
Intro to Approximate Bayesian Computation (ABC)Intro to Approximate Bayesian Computation (ABC)
Intro to Approximate Bayesian Computation (ABC)
 
Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...Accelerated approximate Bayesian computation with applications to protein fol...
Accelerated approximate Bayesian computation with applications to protein fol...
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
Considerate Approaches to ABC Model Selection
Considerate Approaches to ABC Model SelectionConsiderate Approaches to ABC Model Selection
Considerate Approaches to ABC Model Selection
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
von Mises lecture, Berlin
von Mises lecture, Berlinvon Mises lecture, Berlin
von Mises lecture, Berlin
 
Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...
 
Approximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsApproximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forests
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
Desy
DesyDesy
Desy
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Bayesian computation with INLA
Bayesian computation with INLABayesian computation with INLA
Bayesian computation with INLA
 
Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methods
 

Ähnlich wie Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsStefano Cabras
 
ABC short course: survey chapter
ABC short course: survey chapterABC short course: survey chapter
ABC short course: survey chapterChristian Robert
 
Machine learning (9)
Machine learning (9)Machine learning (9)
Machine learning (9)NYversity
 
Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Tomasz Kusmierczyk
 
Jensen's inequality, EM 알고리즘
Jensen's inequality, EM 알고리즘 Jensen's inequality, EM 알고리즘
Jensen's inequality, EM 알고리즘 Jungkyu Lee
 
Conceptual Introduction to Gaussian Processes
Conceptual Introduction to Gaussian ProcessesConceptual Introduction to Gaussian Processes
Conceptual Introduction to Gaussian ProcessesJuanPabloCarbajal3
 
P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2S.Shayan Daneshvar
 
Algebras for programming languages
Algebras for programming languagesAlgebras for programming languages
Algebras for programming languagesYoshihiro Mizoguchi
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Pierre Jacob
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Pierre Jacob
 
Cs229 notes8
Cs229 notes8Cs229 notes8
Cs229 notes8VuTran231
 
STOMA FULL SLIDE (probability of IISc bangalore)
STOMA FULL SLIDE (probability of IISc bangalore)STOMA FULL SLIDE (probability of IISc bangalore)
STOMA FULL SLIDE (probability of IISc bangalore)2010111
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Pierre Jacob
 
Optimization of probabilistic argumentation with Markov processes
Optimization of probabilistic argumentation with Markov processesOptimization of probabilistic argumentation with Markov processes
Optimization of probabilistic argumentation with Markov processesEmmanuel Hadoux
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Pierre Jacob
 
Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...
Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...
Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...Cristiano Longo
 
A Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier DetectionA Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier DetectionKonkuk University, Korea
 
Machine learning (1)
Machine learning (1)Machine learning (1)
Machine learning (1)NYversity
 

Ähnlich wie Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013 (20)

Lecture_9.pdf
Lecture_9.pdfLecture_9.pdf
Lecture_9.pdf
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
ABC short course: survey chapter
ABC short course: survey chapterABC short course: survey chapter
ABC short course: survey chapter
 
Machine learning (9)
Machine learning (9)Machine learning (9)
Machine learning (9)
 
Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Introduction to modern Variational Inference.
Introduction to modern Variational Inference.
 
Jensen's inequality, EM 알고리즘
Jensen's inequality, EM 알고리즘 Jensen's inequality, EM 알고리즘
Jensen's inequality, EM 알고리즘
 
Conceptual Introduction to Gaussian Processes
Conceptual Introduction to Gaussian ProcessesConceptual Introduction to Gaussian Processes
Conceptual Introduction to Gaussian Processes
 
P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2
 
Algebras for programming languages
Algebras for programming languagesAlgebras for programming languages
Algebras for programming languages
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Cs229 notes8
Cs229 notes8Cs229 notes8
Cs229 notes8
 
STOMA FULL SLIDE (probability of IISc bangalore)
STOMA FULL SLIDE (probability of IISc bangalore)STOMA FULL SLIDE (probability of IISc bangalore)
STOMA FULL SLIDE (probability of IISc bangalore)
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Optimization of probabilistic argumentation with Markov processes
Optimization of probabilistic argumentation with Markov processesOptimization of probabilistic argumentation with Markov processes
Optimization of probabilistic argumentation with Markov processes
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...
Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...
Herbrand-satisfiability of a Quantified Set-theoretical Fragment (Cantone, Lo...
 
A Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier DetectionA Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier Detection
 
Machine learning (1)
Machine learning (1)Machine learning (1)
Machine learning (1)
 
Congrès SMAI 2019
Congrès SMAI 2019Congrès SMAI 2019
Congrès SMAI 2019
 

Mehr von Christian Robert

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distancesChristian Robert
 

Mehr von Christian Robert (20)

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 

Kürzlich hochgeladen

Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxRamakrishna Reddy Bijjam
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.pptRamjanShidvankar
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...Nguyen Thanh Tu Collection
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxDenish Jangid
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docxPoojaSen20
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin ClassesCeline George
 
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural ResourcesEnergy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural ResourcesShubhangi Sonawane
 
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-IIFood Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-IIShubhangi Sonawane
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfAyushMahapatra5
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Shubhangi Sonawane
 

Kürzlich hochgeladen (20)

Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural ResourcesEnergy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
Energy Resources. ( B. Pharmacy, 1st Year, Sem-II) Natural Resources
 
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-IIFood Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
 

Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013

  • 1. DISCUSSION of Bayesian Computation via empirical likelihood Stefano Cabras, stefano.cabras@uc3m.es Universidad Carlos III de Madrid (Spain) Universit` di Cagliari (Italy) a Padova, 21-Mar-2013
  • 2. Summary ◮ Problem:
  • 3. Summary ◮ Problem: ◮ a statistical model f (y | θ); ◮ a prior π(θ) on θ;
  • 4. Summary ◮ Problem: ◮ a statistical model f (y | θ); ◮ a prior π(θ) on θ; ◮ we want to obtain the posterior πN (θ | y ) ∝ LN (θ)π(θ).
  • 5. Summary ◮ Problem: ◮ a statistical model f (y | θ); ◮ a prior π(θ) on θ; ◮ we want to obtain the posterior πN (θ | y ) ∝ LN (θ)π(θ). ◮ BUT
  • 6. Summary ◮ Problem: ◮ a statistical model f (y | θ); ◮ a prior π(θ) on θ; ◮ we want to obtain the posterior πN (θ | y ) ∝ LN (θ)π(θ). ◮ BUT ◮ IF LN (θ) is not available: ◮ THEN all life ABC;
  • 7. Summary ◮ Problem: ◮ a statistical model f (y | θ); ◮ a prior π(θ) on θ; ◮ we want to obtain the posterior πN (θ | y ) ∝ LN (θ)π(θ). ◮ BUT ◮ IF LN (θ) is not available: ◮ THEN all life ABC; ◮ IF it is not even possible to simulate from f (y | θ):
  • 8. Summary ◮ Problem: ◮ a statistical model f (y | θ); ◮ a prior π(θ) on θ; ◮ we want to obtain the posterior πN (θ | y ) ∝ LN (θ)π(θ). ◮ BUT ◮ IF LN (θ) is not available: ◮ THEN all life ABC; ◮ IF it is not even possible to simulate from f (y | θ): ◮ THEN replace LN (θ) with LEL (θ) (the proposed BCel procedure): π(θ|y ) ∝ LEL (θ) × π(θ). .
  • 9. ... what remains about the f (y | θ) ?
  • 10. ... what remains about the f (y | θ) ? ◮ Recall that the Empirical Likelihood is defined, for iid sample, by means of a set of constraints: Ef (y |θ) [h(Y , θ)] = 0.
  • 11. ... what remains about the f (y | θ) ? ◮ Recall that the Empirical Likelihood is defined, for iid sample, by means of a set of constraints: Ef (y |θ) [h(Y , θ)] = 0. ◮ The relation between θ and obs. Y is model conditioned and expressed by h(Y , θ);
  • 12. ... what remains about the f (y | θ) ? ◮ Recall that the Empirical Likelihood is defined, for iid sample, by means of a set of constraints: Ef (y |θ) [h(Y , θ)] = 0. ◮ The relation between θ and obs. Y is model conditioned and expressed by h(Y , θ); ◮ Constraints are model driven and so there is still a timid trace of f (y | θ) in BCel .
  • 13. ... what remains about the f (y | θ) ? ◮ Recall that the Empirical Likelihood is defined, for iid sample, by means of a set of constraints: Ef (y |θ) [h(Y , θ)] = 0. ◮ The relation between θ and obs. Y is model conditioned and expressed by h(Y , θ); ◮ Constraints are model driven and so there is still a timid trace of f (y | θ) in BCel . ◮ Examples:
  • 14. ... what remains about the f (y | θ) ? ◮ Recall that the Empirical Likelihood is defined, for iid sample, by means of a set of constraints: Ef (y |θ) [h(Y , θ)] = 0. ◮ The relation between θ and obs. Y is model conditioned and expressed by h(Y , θ); ◮ Constraints are model driven and so there is still a timid trace of f (y | θ) in BCel . ◮ Examples: ◮ The coalescent model example is illuminating in suggesting the score of the pairwise likelihood;
  • 15. ... what remains about the f (y | θ) ? ◮ Recall that the Empirical Likelihood is defined, for iid sample, by means of a set of constraints: Ef (y |θ) [h(Y , θ)] = 0. ◮ The relation between θ and obs. Y is model conditioned and expressed by h(Y , θ); ◮ Constraints are model driven and so there is still a timid trace of f (y | θ) in BCel . ◮ Examples: ◮ The coalescent model example is illuminating in suggesting the score of the pairwise likelihood; ◮ The residuals in GARCH models.
  • 16. ... a suggestion What if we do not even known h(·) ?
  • 17. ... how to elicit h(·) automatically
  • 18. ... how to elicit h(·) automatically
  • 19. ... how to elicit h(·) automatically ◮ Set h(Y , θ) = Y − g (θ), where g (θ) = Ef (y |θ) (Y |θ), is the regression function of Y |θ;
  • 20. ... how to elicit h(·) automatically ◮ Set h(Y , θ) = Y − g (θ), where g (θ) = Ef (y |θ) (Y |θ), is the regression function of Y |θ; ◮ g (θ) should be replaced by an estimator g (θ).
  • 21. How to estimate g (θ) ? 1 ... similar to Fearnhead, P. and D. Prangle (JRRS-B, 2012) or Cabras, Castellanos, Ruli (Ercim-2012, Oviedo).
  • 22. How to estimate g (θ) ? ◮ Use a once forever pilot-run simulation study: 1 1 ... similar to Fearnhead, P. and D. Prangle (JRRS-B, 2012) or Cabras, Castellanos, Ruli (Ercim-2012, Oviedo).
  • 23. How to estimate g (θ) ? ◮ Use a once forever pilot-run simulation study: 1 1. Consider a grid (or regular lattice) of θ made by M points: θ1 , . . . , θM 1 ... similar to Fearnhead, P. and D. Prangle (JRRS-B, 2012) or Cabras, Castellanos, Ruli (Ercim-2012, Oviedo).
  • 24. How to estimate g (θ) ? ◮ Use a once forever pilot-run simulation study: 1 1. Consider a grid (or regular lattice) of θ made by M points: θ1 , . . . , θM 2. Simulate the corresponding y1 , . . . , yM 1 ... similar to Fearnhead, P. and D. Prangle (JRRS-B, 2012) or Cabras, Castellanos, Ruli (Ercim-2012, Oviedo).
  • 25. How to estimate g (θ) ? ◮ Use a once forever pilot-run simulation study: 1 1. Consider a grid (or regular lattice) of θ made by M points: θ1 , . . . , θM 2. Simulate the corresponding y1 , . . . , yM 3. Regress y1 , . . . , yM on θ 1 , . . . , θ M obtaining g (θ). 1 ... similar to Fearnhead, P. and D. Prangle (JRRS-B, 2012) or Cabras, Castellanos, Ruli (Ercim-2012, Oviedo).
  • 26. ... example: y ∼ N(|θ|, 1) For a pilot run of M = 1000 we have g (θ) = |θ|. ˆ Pilot−Run s.s. g(θ) 10 y 5 0 −10 −5 0 5 10 θ
  • 27. ... example: y ∼ N(|θ|, 1) Suppose to draw a n = 100 sample from θ = 2: Histogram of y 20 15 Frequency 10 5 0 0 1 2 3 4 y
  • 28. ... example: y ∼ N(|θ|, 1) The Empirical Likelihood is this 2.5 2.0 Emp. Lik. 1.5 1.0 −4 −2 0 2 4 θ
  • 29. 1st Point: Do we need necessarily have to use f (y | θ) ?
  • 30. 1st Point: Do we need necessarily have to use f (y | θ) ? ◮ The above data maybe drawn from a (e.g.) a Half Normal;
  • 31. 1st Point: Do we need necessarily have to use f (y | θ) ? ◮ The above data maybe drawn from a (e.g.) a Half Normal; ◮ How this is reflected in the BCel ?
  • 32. 1st Point: Do we need necessarily have to use f (y | θ) ? ◮ The above data maybe drawn from a (e.g.) a Half Normal; ◮ How this is reflected in the BCel ? ◮ For a given data y;
  • 33. 1st Point: Do we need necessarily have to use f (y | θ) ? ◮ The above data maybe drawn from a (e.g.) a Half Normal; ◮ How this is reflected in the BCel ? ◮ For a given data y; ◮ and h(Y , θ) fixed;
  • 34. 1st Point: Do we need necessarily have to use f (y | θ) ? ◮ The above data maybe drawn from a (e.g.) a Half Normal; ◮ How this is reflected in the BCel ? ◮ For a given data y; ◮ and h(Y , θ) fixed; ◮ the LEL (θ) is the same regardless of f (y | θ).
  • 35. 1st Point: Do we need necessarily have to use f (y | θ) ? ◮ The above data maybe drawn from a (e.g.) a Half Normal; ◮ How this is reflected in the BCel ? ◮ For a given data y; ◮ and h(Y , θ) fixed; ◮ the LEL (θ) is the same regardless of f (y | θ). Can we ignore f (y | θ) ?
• 41. 2nd Point: sample free vs simulation free
   ◮ The Empirical Likelihood is "simulation free" but not "sample free", i.e.
        ◮ LEL(θ) → LN(θ) as n → ∞,
        ◮ implying π(θ | y) → πN(θ | y) asymptotically in n.
   ◮ The ABC is "sample free" but not "simulation free", i.e.
        ◮ π(θ | ρ(s(y), s_obs) < ε) → πN(θ | y) as ε → 0,
        ◮ implying convergence in the number of simulations if s(y) were sufficient.
   A quick answer recommends BCel, BUT would a small sample recommend ABC ?
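For contrast, a minimal rejection-ABC sketch for the same toy model (the prior N(0, 3²), the summary s(y) = ȳ, and the tolerance ε are all my own illustrative choices). Here ȳ | θ ∼ N(|θ|, 1/n) exactly, so the summary can be simulated directly:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    y_obs = rng.normal(loc=2.0, scale=1.0, size=n)            # observed data, theta = 2
    s_obs = y_obs.mean()                                      # summary statistic

    theta_prop = rng.normal(0.0, 3.0, size=200_000)           # proposals from the prior
    s_sim = rng.normal(np.abs(theta_prop), 1.0 / np.sqrt(n))  # ybar | theta, exact here
    eps = 0.05
    accepted = theta_prop[np.abs(s_sim - s_obs) < eps]        # rejection step
    print(len(accepted), np.quantile(accepted, [0.025, 0.5, 0.975]))

Each tightening of ε buys accuracy at the price of more simulations, which is the trade-off the slide is pointing at.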
• 54. 3rd Point: how to validate a pseudo-posterior π(θ | y) ∝ LEL(θ) × π(θ) ?
   ◮ The use of pseudo-likelihoods is not new in the Bayesian setting:
        ◮ Empirical Likelihoods:
             ◮ Lazar (Biometrika, 2003): examples and coverages of C.I.;
             ◮ Mengersen et al. (PNAS, 2012): examples and coverages of C.I.;
             ◮ ...
        ◮ Modified Likelihoods:
             ◮ Ventura et al. (JASA, 2009): second-order matching properties;
             ◮ Chang and Mukerjee (Stat. & Prob. Letters, 2006): examples;
             ◮ ...
        ◮ Quasi-Likelihoods:
             ◮ Lin (Statist. Methodol., 2006): examples;
             ◮ Greco et al. (JSPI, 2008): robustness properties;
             ◮ Ventura et al. (JSPI, 2010): examples and coverages of C.I.;
             ◮ ...
• 58. 3rd Point: how to validate a pseudo-posterior π(θ | y) ∝ LEL(θ) × π(θ) ?
   ◮ Monahan & Boos (Biometrika, 1992) proposed a notion of validity: π(θ | y) should obey the laws of probability in a fashion that is consistent with statements derived from Bayes' rule.
   ◮ Very difficult! How do we validate the pseudo-posterior π(θ | y) when this is not possible ?
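Operationally (on my reading of Monahan & Boos), validity can be checked by coverage: drawing (θ, y) from the joint π(θ)f(y | θ), the posterior CDF evaluated at the true θ should be Uniform(0, 1). A minimal sketch of that check for the toy model, under my own illustrative choices (prior N(0, 3²), grid, number of replicates); log_el is as in the earlier sketch:

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import kstest

    rng = np.random.default_rng(2)

    def log_el(theta, y):                          # as in the sketch above
        h = y - abs(theta)
        if h.min() >= 0 or h.max() <= 0:
            return -np.inf
        lam = brentq(lambda l: np.sum(h / (1.0 + l * h)),
                     -1.0 / h.max() + 1e-10, -1.0 / h.min() - 1e-10)
        return -np.sum(np.log1p(lam * h))

    grid = np.linspace(-10.0, 10.0, 401)
    H = []
    for _ in range(200):
        theta0 = rng.normal(0.0, 3.0)                       # theta ~ prior
        y = rng.normal(abs(theta0), 1.0, size=100)          # y | theta
        logpost = np.array([log_el(t, y) for t in grid]) - 0.5 * (grid / 3.0) ** 2
        post = np.exp(logpost - logpost.max())
        post /= post.sum()                                  # normalise on the grid
        H.append(post[grid <= theta0].sum())                # pseudo-posterior CDF at theta0
    print(kstest(H, "uniform"))

For this toy model the check is expected to fail: the constraint does not identify the sign of θ (cf. the 1st Point), so H tends to cluster near 0.25 and 0.75 instead of spreading uniformly.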
• 65. ... Last point: the ABC is still a terrific tool
   ◮ ... a lot of references:
        ◮ Statistical Journals;
        ◮ Twitter;
        ◮ Xi'an's blog ( xianblog.wordpress.com );
   ◮ ... it is tailored to Approximate LN(θ).
   Where is the A in BCel ?