Sequential Quasi-Monte Carlo: From the Curse of
Dimensionality to High-Dimensional Filtering
Problems?
Mathieu Gerber
University of Bristol, School of Mathematics
Based on joint works with Nicolas Chopin (ENSAE/CREST)
SAMSI Opening Workshop on QMC
August 30, 2017
In this talk
I am first going to describe the kind of integration problems SQMC
is designed to solve.
I will then show that SQMC suffers from the curse of
dimensionality.
I will finally give some evidence that SQMC may be useful in
high-dimensional settings and provide ideas to pursue the research
in this direction.
Starting point: QMC integration
We consider in this talk the problem of computing
$$I(f) = \int_{[0,1]^s} f(u)\,du.$$
QMC integration approximates $I(f)$ by
$$I_N(f) = \frac{1}{N} \sum_{n=1}^{N} f(u^n), \qquad \text{where } u^{1:N} \text{ is a QMC point set.}$$
If $u^{1:N}$ is a scrambled net, and provided that $f$ is sufficiently smooth (Owen 1997),
$$\mathrm{Var}\big(I_N(f)\big) = O\big(N^{-3} \{\log N\}^{s-1}\big).$$
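To make the estimator above concrete, here is a minimal sketch using SciPy's scrambled Sobol' generator; the test function f and all parameter values are illustrative assumptions (the slides do not prescribe an implementation).

```python
import numpy as np
from scipy.stats import qmc

def rqmc_estimate(f, s, m, n_reps=20, seed=0):
    """Average n_reps independently scrambled Sobol' estimates of
    I(f) = int_{[0,1]^s} f(u) du, each based on N = 2**m points."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_reps):
        sampler = qmc.Sobol(d=s, scramble=True, seed=rng)
        u = sampler.random_base2(m=m)           # N = 2**m points in [0,1]^s
        estimates.append(f(u).mean())
    estimates = np.array(estimates)
    return estimates.mean(), estimates.var()    # estimate and its RQMC variance

# Smooth test function f(u) = prod_i (1 + (u_i - 1/2)); its exact integral is 1.
f = lambda u: np.prod(1.0 + (u - 0.5), axis=1)
est, var = rqmc_estimate(f, s=5, m=10)
```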
Dimension versus effective dimension
RQMC-based integration methods may converge at a much faster rate than $O(N^{-3}\{\log N\}^{s-1})$ if the effective dimension of $f$ is small.
Trivial example: $f(u) = \sum_{i=1}^{s} f_i(u_i)$.
In general, "sophisticated" QMC-based integration methods are usually needed to take advantage of the low effective dimension of $f$.
SQMC: Set-up
SQMC is designed to approximate high-dimensional integrals $I(f)$ for $f$ of the form
$$f(u_{0:t}) = \varphi(u_{0:t})\, Q_t(u_{0:t}), \qquad u_{0:t} = (u_0, \ldots, u_t) \in [0,1]^{d(t+1)},$$
where $Q_t(u_{0:t})$ is the p.d.f. on $[0,1]^{d(t+1)}$ defined by
$$Q_t(u_{0:t}) = \frac{1}{Z_t}\, m_0(u_0)\, G_0(u_0) \prod_{s=1}^{t} m_s(u_s \mid u_{s-1})\, G_s(u_{s-1}, u_s),$$
with
$m_0(u_0)\,du_0$ a probability measure on $[0,1]^d$,
$m_s(u_s \mid u_{s-1})\,du_s$ a Markov kernel acting from $[0,1]^d$ into itself,
$G_0(u_0) > 0$ and $G_s(u_{s-1}, u_s) > 0$,
$Z_t$ a normalizing constant.
Jargon: $Q_t(u)\,du$ is called a Feynman-Kac measure.
Motivation: Inference in state-space models
State-space models consider an unobserved Markov chain $(x_t)_{t \geq 0}$,
$$x_0 \sim \eta_0(x_0)\,dx_0, \qquad x_t \mid x_{t-1} \sim q_t(x_t \mid x_{t-1})\,dx_t,$$
taking values in $\mathcal{X} = [0,1]^d$, and an observed process $(y_t)_{t \geq 0}$,
$$y_t \mid x_t \sim g_t(y_t \mid x_t)\,dy_t.$$
Typically, we are interested in recovering $p(x_t \mid y_{0:t})$ (filtering distribution) or $p(x_{0:t} \mid y_{0:t})$ (smoothing distribution).
Many applications in engineering (tracking), finance (stochastic volatility), epidemiology, ecology, neurosciences, etc.
Remark: There is little loss of generality in assuming that $\mathcal{X} = [0,1]^d$.
Feynman-Kac measure and state-space models
Taking e.g. $m_0(x_0) = \eta_0(x_0)$ and
$$m_s(x_s \mid x_{s-1}) = q_s(x_s \mid x_{s-1}), \qquad G_s(x_{s-1}, x_s) := g_s(y_s \mid x_s), \quad s \geq 1,$$
we see that $Q_t(x_{0:t})\,dx_{0:t} = p(x_{0:t} \mid y_{0:t})\,dx_{0:t}$.
Computing $I(f)$ with $f = \varphi(u)\, Q_t(u)$ amounts to computing the smoothing expectation
$$Q_t(\varphi) := \mathbb{E}[\varphi(x_{0:t}) \mid y_{0:t}].$$
Important particular case: When $\varphi(u) = \varphi(u_t)$, computing $I(f)$ amounts to computing the filtering expectation
$$Q_t(\varphi) := \mathbb{E}[\varphi(x_t) \mid y_{0:t}].$$
Remark: This is an integration problem of dimension $s = d(t + 1)$.
The Monte Carlo solution: Sequential Monte Carlo (or
particle filtering)
Recall that our goal is to compute
$$Q_t(\varphi) = \int_{[0,1]^{d(t+1)}} \varphi(u_t)\, Q_t(u_t)\,du_t,$$
which is a high-dimensional problem.
However, this integral can be "efficiently" computed thanks to the following recursive property of $Q_t$:
$$Q_t(u_t) = \frac{1}{l_t} \int_{[0,1]^d} m_t(u_t \mid u_{t-1})\, G_t(u_{t-1}, u_t)\, Q_{t-1}(u_{t-1})\,du_{t-1}.$$
Monte Carlo algorithms used to approximate integrals of this kind are known as sequential Monte Carlo samplers (or particle filters).
Sequential Monte Carlo
Operations must be performed for all $n \in 1:N$.
At time 0,
(a) Generate $x_0^n \sim m_0(dx_0)$
(b) Compute $W_0^n = G_0(x_0^n) \big/ \sum_{m=1}^{N} G_0(x_0^m)$
Recursively, for time $t = 1:T$,
(a) Generate $a_{t-1}^n \sim \mathcal{M}(W_{t-1}^{1:N})$, the multinomial distribution that produces outcome $m$ with probability $W_{t-1}^m$ [resampling step]
(b) Generate $x_t^n \sim m_t\big(x_{t-1}^{a_{t-1}^n}, dx_t\big)$ [mutation step]
(c) Compute $W_t^n = G_t\big(x_{t-1}^{a_{t-1}^n}, x_t^n\big) \big/ \sum_{m=1}^{N} G_t\big(x_{t-1}^{a_{t-1}^m}, x_t^m\big)$
Output at time $t \geq 0$:
$$Q_t^N(dx_t) := \sum_{n=1}^{N} W_t^n\, \delta_{x_t^n}(dx_t) \approx Q_t(dx_t).$$
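As a concrete illustration of this recursion, here is a minimal sketch in Python (NumPy); the one-dimensional linear Gaussian model used to exercise it is an illustrative assumption, not one of the models from the talk.

```python
import numpy as np

def particle_filter(y, m0, mt, Gt, N, seed=0):
    """Minimal SMC as on this slide: multinomial resampling, mutation,
    reweighting. m0(rng, N) samples x_0^{1:N}; mt(rng, x) mutates each
    particle; Gt(t, x) evaluates the potential G_t at the particles x."""
    rng = np.random.default_rng(seed)
    x = m0(rng, N)
    w = Gt(0, x)
    w /= w.sum()
    means = [np.sum(w * x)]                      # Q_t^N applied to identity
    for t in range(1, len(y)):
        a = rng.choice(N, size=N, p=w)           # (a) resampling step
        x = mt(rng, x[a])                        # (b) mutation step
        w = Gt(t, x)                             # (c) reweighting
        w /= w.sum()
        means.append(np.sum(w * x))
    return np.array(means)

# Illustrative model: x_t = 0.9 x_{t-1} + U_t, y_t = x_t + V_t, all Gaussian.
rho, sx, sy = 0.9, 0.5, 0.3
rng = np.random.default_rng(1)
T = 50
x_true = np.zeros(T); y = np.zeros(T)
for t in range(T):
    x_true[t] = (rho * x_true[t - 1] if t else 0.0) + sx * rng.standard_normal()
    y[t] = x_true[t] + sy * rng.standard_normal()

filt = particle_filter(
    y,
    m0=lambda rng, N: sx * rng.standard_normal(N),
    mt=lambda rng, x: rho * x + sx * rng.standard_normal(x.size),
    Gt=lambda t, x: np.exp(-0.5 * ((y[t] - x) / sy) ** 2),
    N=1000,
)
```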
What do I mean by "efficient"?
Time uniform bound:
$$\sup_{t \geq 0}\, \sup_{\varphi \in \mathcal{F}}\, \mathbb{E}\big[\,|Q_t^N(\varphi) - Q_t(\varphi)|^p\,\big]^{1/p} \leq \frac{C}{N^{1/2}}.$$
Central limit theorem:
$$\sqrt{N}\,\big(Q_t^N(\varphi) - Q_t(\varphi)\big) \Rightarrow \mathcal{N}_1(0, \sigma^2_{t,\varphi}).$$
Law of large numbers, etc.
See the book by Del Moral (2004).
Sequential quasi-Monte Carlo
SQMC is a QMC version of SMC.
Each iteration is based on a QMC point set of dimension $d + 1$, where the first component is used for the resampling step and the remaining ones for the mutation.
The resampling step of SQMC requires sorting the particles using the Hilbert space-filling curve $H : [0,1] \to [0,1]^d$.
The cost of SQMC is $O(N \log N)$.
We notably show that SQMC based on scrambled nets is such that
$$\mathrm{MSE}\big[Q_t^N(\varphi)\big] = o(N^{-1}), \qquad \forall \varphi \in \mathcal{C}_b\big((0,1)^d\big).$$
Related approach: Array-RQMC of L'Ecuyer et al. (2006).
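For intuition about the Hilbert sort, here is a minimal sketch built on the third-party hilbertcurve package; the package choice, the grid resolution p, and the helper name hilbert_sort are assumptions, and the actual SQMC resampling step is more involved than this ordering alone.

```python
import numpy as np
from hilbertcurve.hilbertcurve import HilbertCurve  # pip install hilbertcurve

def hilbert_sort(x, p=10):
    """Order N particles x in [0,1)^d by their position along the Hilbert
    curve, discretised on a grid with 2**p cells per axis. This ordering is
    used before inverting the resampling CDF against the first QMC coordinate."""
    n, d = x.shape
    hc = HilbertCurve(p, d)
    grid = np.minimum((x * 2**p).astype(int), 2**p - 1)   # map to grid cells
    keys = hc.distances_from_points(grid.tolist())        # index along the curve
    return np.argsort(keys)

x = np.random.default_rng(0).random((8, 2))
order = hilbert_sort(x)   # permutation sorting the particles along the curve
```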
An example: Stochastic volatility model
Model is
$$y_t = S_t^{1/2} \epsilon_t, \qquad x_t = \mu + \Phi(x_{t-1} - \mu) + \Psi^{1/2} \nu_t,$$
with correlated noise terms $(\epsilon_t, \nu_t) \sim \mathcal{N}_{2d}(0, C)$, and where
$$S_t = \mathrm{diag}\big(\exp(x_{t,1}), \cdots, \exp(x_{t,d})\big).$$
Parameters are set to their true values and we compare SQMC with SMC for the estimation of the log-likelihood function (i.e. $\log Z_T$).
SQMC is implemented using nested scrambled Sobol' sequences as input.
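A sketch of the data-generating process for this model; all parameter values below are illustrative assumptions (the talk does not report the ones used).

```python
import numpy as np

def simulate_sv(T, mu, Phi, Psi, C, seed=0):
    """Simulate the multivariate stochastic volatility model above:
    y_t = S_t^{1/2} eps_t,  x_t = mu + Phi (x_{t-1} - mu) + Psi^{1/2} nu_t,
    with (eps_t, nu_t) ~ N_{2d}(0, C) and S_t = diag(exp(x_t))."""
    rng = np.random.default_rng(seed)
    d = mu.size
    Lc = np.linalg.cholesky(C)          # correlated noise factor
    Psi_half = np.linalg.cholesky(Psi)  # Psi^{1/2}
    x = np.zeros((T, d)); y = np.zeros((T, d))
    x_prev = mu.copy()
    for t in range(T):
        z = Lc @ rng.standard_normal(2 * d)
        eps, nu = z[:d], z[d:]
        x[t] = mu + Phi @ (x_prev - mu) + Psi_half @ nu
        y[t] = np.exp(0.5 * x[t]) * eps      # S_t^{1/2} eps_t
        x_prev = x[t]
    return x, y

d = 2
mu = np.zeros(d); Phi = 0.9 * np.eye(d); Psi = 0.1 * np.eye(d)
C = np.eye(2 * d) + 0.3 * (np.ones((2 * d, 2 * d)) - np.eye(2 * d)) / (2 * d)
x, y = simulate_sv(T=400, mu=mu, Phi=Phi, Psi=Psi, C=C)
```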
Simulation results for d ∈ {1, 2, 4, 10}
[Figure: gain factor of SQMC over SMC (log10 scale) as a function of the number of particles (log10 scale), for d = 1, 2, 4, 10.]
Log-likelihood evaluation (based on T = 400 data points and 200 independent SMC and SQMC runs). Remark: Integration is on a space of dimension dT.
SQMC and the curse of dimensionality: regularity of the
Hilbert curve
The Hilbert curve is Hölder continuous with Hölder exponent 1/d. Hence, as d increases the smoothness of the curve deteriorates very quickly.
We have recently established that
$$\mathrm{Var}\left(\frac{1}{N} \sum_{n=1}^{N} \varphi\big(x_{t-1}^{a_{t-1}^n}\big) \;\Big|\; x_{t-1}^{1:N}\right) \leq \frac{C_d}{N^{1 + \frac{1}{d}}}.$$
Key message: the dimension of the resampling step has a major impact on the performance of SQMC.
Remark: 1/d is the best possible exponent for a continuous measure-preserving mapping $f : [0,1] \to [0,1]^d$ (Jaffard and Nicolay, 2006).
Toward a new implementation of SQMC
The key limitation of SQMC when d increases is its resampling step, which introduces a noise of size $N^{-1-1/d}$.
Natural question: Can we come up with another implementation to bypass this problem?
Good news: Yes, it is possible to implement SQMC such that only univariate resampling steps are needed.
Bad news: This increases the running time from $O(N \log N)$ to $O(N^2)$.
Good news: As explained below, this quadratic cost is not an issue when dealing with high-dimensional state-space models.
Filtering in high-dimensional spaces
Particle filters suffer from the curse of dimensionality because they
rely on importance sampling.
To perform particle filtering in high-dimensional spaces we need:
the model to have some special structure (low "effective dimension");
algorithms able to exploit this special structure.
We focus below on the algorithm proposed by Beskos et al. (2014) / Naesseth et al. (2016), which is one of the two known particle filter algorithms whose error is stable w.r.t. d (for some models).
PF of Beskos et al. (2014), Naesseth et al. (2016)
The basic idea is to incorporate information coming from the observation $y_t$ progressively as we sample the components of $x_t$.
To this end, at each time step and for each particle we run an internal particle filter based on $M \geq 1$ particles.
Each internal particle filter aims at approximating the "optimal" proposal distribution $m_t^{\mathrm{opt}}(x_{t-1}, dx_t)$, where $M$ controls the quality of the approximation.
However, for any $M \geq 1$ the algorithm is valid, in the sense that at any time $t$ it converges to the filtering distribution as $N \to +\infty$.
Each step of the algorithm costs $O(NM^2d^2)$ operations; that is, the cost of the internal filters is quadratic in $M$.
PF in high-dimension: A naive SQMC version
To get some insight into why SQMC may be useful for solving high-dimensional filtering problems, we compare below the original algorithm with a version where SQMC is used inside the internal filters.
The external filter is a plain Monte Carlo filter, and thus we cannot hope that the variance converges faster than $N^{-1}$.
The idea is that with QMC a smaller value of $M$ is needed to get a "good" approximation of the "optimal" proposal distribution $m_t^{\mathrm{opt}}(x_{t-1}, dx_t)$.
Toy example: Linear Gaussian model
We consider the following model:
$$x_t = \tfrac{1}{2}\, x_{t-2} + \epsilon_t, \quad \epsilon_t \sim f(\epsilon_t)\,d\epsilon_t, \qquad y_t = x_t + \nu_t, \quad \nu_t \sim \mathcal{N}_d(0, \sigma^2 I_d),$$
where
$$f(\epsilon) \propto \exp\left(-\frac{\tau}{2} \sum_{i=1}^{d} \epsilon_{t,i}^2 \;-\; \frac{\lambda}{2} \sum_{i=2}^{d} \big(\epsilon_{t,i} - \epsilon_{t,i-1}\big)^2\right).$$
In this case, each internal particle filter amounts to running a particle filter in dimension 1.
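The density $f$ above is that of a Gaussian Markov random field: the exponent equals $-\frac{1}{2}\epsilon^\top Q\,\epsilon$ with tridiagonal precision matrix $Q = \tau I_d + \lambda D^\top D$, where $D$ is the first-difference operator. A minimal simulation sketch, under illustrative parameter values:

```python
import numpy as np

def sample_eps(rng, d, tau, lam, n=1):
    """Draw from f(eps) ∝ exp(-tau/2 * sum_i eps_i^2
                              - lam/2 * sum_i (eps_i - eps_{i-1})^2),
    i.e. N(0, Q^{-1}) with tridiagonal precision Q = tau*I + lam*D'D."""
    D = np.diff(np.eye(d), axis=0)            # (d-1) x d first differences
    Q = tau * np.eye(d) + lam * D.T @ D
    L = np.linalg.cholesky(Q)                 # Q = L L'
    z = rng.standard_normal((d, n))
    return np.linalg.solve(L.T, z).T          # L^{-T} z ~ N(0, Q^{-1})

def simulate(T, d, tau, lam, sigma, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((T, d)); y = np.zeros((T, d))
    for t in range(T):
        eps = sample_eps(rng, d, tau, lam)[0]
        x[t] = 0.5 * x[t - 2] + eps if t >= 2 else eps   # x_t = x_{t-2}/2 + eps_t
        y[t] = x[t] + sigma * rng.standard_normal(d)
    return x, y

x, y = simulate(T=25, d=256, tau=1.0, lam=1.0, sigma=0.5)
```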
Linear Gaussian model: Simulation results (d = 256)
[Figure: variance over time of the estimates, with and without QMC.]
Estimation of $\mathbb{E}[x_{t,1} \mid y_{0:t}]$. We take N = 100 and M = 32 in the simulations.
A more interesting example: Spatio-temporal model
[Figure: the components $X_{t,1}, \ldots, X_{t,16}$ arranged on a two-dimensional lattice, illustrating the spatial dependence structure of the model.]
Spatio-temporal model: Some remarks
Because of the complex dependence structure among the components of $x_t$, the internal filters are not classical particle filters.
For the internal filters we use the algorithm proposed by Lindsten et al. (2017), as in Naesseth et al. (2016).
In the QMC version, SQMC is used only in the first step of the internal filters.
Spatio-temporal model: Simulation results (d = 64)
[Figure: variance over time of the estimates, with and without QMC.]
Estimation of $\mathbb{E}[x_{t,1} \mid y_{0:t}]$: We take N = 50 and M = 32 in the simulations.
Variance reduction vs. running time reduction
The use of SQMC allows us to reduce the variance in these high-dimensional filtering problems.
The variance reductions brought by SQMC are not impressive. However, each iteration of this algorithm costs $O(NM^2d)$.
Roughly speaking, reducing the variance by a factor of 2 with Monte Carlo would require doubling $N$, and thus an additional cost of $N M^2 d$ operations per iteration.
For the Gaussian model $M^2d = 262\,144$ while, for the spatio-temporal model, $M^2d = 65\,536$.
In both cases, reducing the variance by a factor 2 significantly reduces the running time.
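A trivial check of the $M^2d$ figures quoted above:

```python
M = 32
print(M**2 * 256)   # Gaussian model (d = 256): 262144 operations per particle
print(M**2 * 64)    # spatio-temporal model (d = 64): 65536 operations per particle
```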
SMC in high-dimension and the modified SQMC algorithm
To break the d-dimensional resampling step, the aforementioned modified implementation of SQMC moves from $x_{t-1}$ to $x_t$ component by component.
To break the curse of dimensionality due to importance resampling, particle filtering in high dimension requires incorporating information coming from $y_t$ progressively as we sample the components of $x_t$.
Hence, what SQMC is ideal for is exactly what is needed to perform particle filtering in high dimension!
S(Q)MC in high-dimension: A new idea
One can use the aforementioned implementation of SMC/SQMC for the internal filters.
The resulting algorithm
1. Seems to have some clear advantages over the Beskos et al. (2014) / Naesseth et al. (2016) algorithm.
2. Seems SQMC-friendly, in the sense that it could be implemented so that only 1-dimensional resampling steps are needed.
3. Is such that QMC can be efficiently used for both the internal and external filters (plain QMC algorithm).
4. Costs $O(NM^2d)$, like the Beskos et al. (2014) / Naesseth et al. (2016) algorithm.
Some questions
Some practical questions:
How does this algorithm perform in practice (MC and QMC)?
For the QMC version there is a trade-off between the dimension of the resampling steps and the dimension of the QMC point sets used as input. What is a good choice?
For the spatio-temporal model different implementations are possible. What is a good choice?
Some theoretical questions:
Theoretical validity for any fixed $M \geq 1$?
Stability as d increases? Can we borrow the results of Beskos et al. (2014) to say something about this?
Convergence rate?
Conclusion
SQMC is a QMC version of particle filtering that converges faster than $N^{-1/2}$.
In practice, we observe that SQMC
1. Converges faster than SMC when d is small (say d = 1, 2, 3).
2. Yields important gains in terms of running time when d is large, so that "high-dimensional" particle filters have to be used.
3. Is, in general, not so useful for moderate values of d.
Point 2 is probably the most interesting application of SQMC, and I proposed in this talk an idea to pursue the research in this direction.
Beyond the QMC motivation, the development of Monte Carlo particle filters to solve high-dimensional filtering problems is of great interest.
QMC and computational statistics
Most people who work in computational statistics do not believe in QMC.
I think that the main reasons are:
People who work in statistics care about variance reduction, but only up to a certain level.
Existing successful applications of QMC to statistical problems are all for problems that are considered
1. Easy (and thus not very exciting...)
2. Solved (see my first point)
High-dimensional particle filtering is both a complicated and an unsolved problem, and could potentially be a good problem to convince statisticians that QMC is actually useful...