Jittered Sampling: Bounds and Problems
Stefan Steinerberger
joint with Florian Pausinger (Belfast), Manas Rachh (Yale)
Florian Pausinger (Belfast) and Manas Rachh (Yale)
QMC: the standard Dogma
Star discrepancy:
$$D_N^*(X) = \sup_{R \subset [0,1]^d} \left| \frac{\#\{i : x_i \in R\}}{N} - |R| \right|$$
This is a good quantity to minimize because

Theorem (Koksma-Hlawka)
$$\left| \int_{[0,1]^d} f(x)\,dx - \frac{1}{N} \sum_{n=1}^{N} f(x_n) \right| \lesssim D_N^*(X) \cdot \operatorname{var}(f).$$
In particular: the error depends only on the oscillation of f.
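To make the definition concrete: for finitely many points the supremum over anchored boxes is attained at corners built from the point coordinates, so small instances can be checked by brute force. A minimal sketch (mine, not from the talk; `star_discrepancy` is an illustrative helper name):

```python
from itertools import product

def star_discrepancy(points):
    """Brute-force D*_N over anchored boxes [0, q]: the supremum is attained
    (as a closed- or open-box count) when each corner coordinate is a point
    coordinate or 1."""
    n, d = len(points), len(points[0])
    coords = [sorted({p[k] for p in points} | {1.0}) for k in range(d)]
    best = 0.0
    for q in product(*coords):
        vol = 1.0
        for qk in q:
            vol *= qk
        closed = sum(all(p[k] <= q[k] for k in range(d)) for p in points)
        opened = sum(all(p[k] < q[k] for k in range(d)) for p in points)
        best = max(best, abs(closed / n - vol), abs(opened / n - vol))
    return best

# a single point at the center of the square has star discrepancy 3/4
print(star_discrepancy([(0.5, 0.5)]))  # → 0.75
```

The runtime is exponential in d, which is fine for toy examples but hints at why high-dimensional discrepancy is computationally delicate.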
QMC: the standard Dogma
Star discrepancy:
$$D_N^*(X) = \sup_{R \subset [0,1]^d} \left| \frac{\#\{i : x_i \in R\}}{N} - |R| \right|$$
Two competing conjectures (an emotionally charged subject):
$$D_N^* \gtrsim \frac{(\log N)^{d-1}}{N} \qquad \text{or} \qquad D_N^* \gtrsim \frac{(\log N)^{d/2}}{N}.$$
There are many clever constructions of point sets that achieve
$$D_N^* \lesssim \frac{(\log N)^{d-1}}{N}.$$
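One classical construction of this kind (not named on the slide) is the van der Corput sequence: mirror the base-b digits of n across the radix point. In base 2 it already achieves the one-dimensional rate O(log N / N).

```python
def van_der_corput(n, base=2):
    """Radical inverse of n: reverse its base-`base` digits across the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

print([van_der_corput(n) for n in range(1, 8)])
# → [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Taking one such sequence per coordinate, in different prime bases, gives the Halton construction in higher dimensions.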
QMC: the standard Dogma
$$D_N^* \gtrsim \frac{(\log N)^{d-1}}{N} \qquad \text{or} \qquad D_N^* \gtrsim \frac{(\log N)^{d/2}}{N}.$$
How would one actually try to prove this? Open for 80+ years,
that sounds bad.
Small ball conjecture seems spiritually related.
Interlude: the small ball conjecture
Figure: a Haar function, taking the values +1 and −1 on alternating halves of its rectangle.
Haar functions $h_R$ on rectangles $R$.
Interlude: the small ball conjecture
All dyadic rectangles of area $2^{-2}$.
Interlude: the small ball conjecture
Small ball conjecture, Talagrand (1994)
For all choices of signs $\varepsilon_R \in \{-1, 1\}$,
$$\Big\| \sum_{|R| = 2^{-n}} \varepsilon_R h_R \Big\|_{L^\infty} \gtrsim n^{d/2}.$$
1. Talagrand cared about the behavior of the Brownian sheet.
2. The lower bound $\gtrsim n^{(d-1)/2}$ is easy.
3. The case d = 2 is the only one that has been settled: three proofs, due to M. Talagrand, V. Temlyakov (via Riesz products), and a beautiful one by Bilyk & Feldheim.
4. Only partial results in $d \ge 3$ (Bilyk, Lacey, etc.)
Interlude: the small ball conjecture
Small ball conjecture, Talagrand (1994)
For all choices of signs $\varepsilon_R \in \{-1, 1\}$,
$$\Big\| \sum_{|R| = 2^{-n}} \varepsilon_R h_R \Big\|_{L^\infty} \gtrsim n^{d/2}.$$
A recent surprise
Theorem (Noah Kravitz, arXiv:1712.01206)
For any choice of signs $\varepsilon_R$ and any integer $0 \le k \le n+1$,
$$\left| \left\{ x \in [0,1)^2 : \sum_{|R| = 2^{-n}} \varepsilon_R h_R = n + 1 - 2k \right\} \right| = \frac{1}{2^{n+1}} \binom{n+1}{k}.$$
Problem with the Standard Dogma
Star discrepancy:
$$D_N^*(X) = \sup_{R \subset [0,1]^d} \left| \frac{\#\{i : x_i \in R\}}{N} - |R| \right|$$
The constructions achieving
$$D_N^* \lesssim \frac{(\log N)^{d-1}}{N}$$
start being effective around $N \approx d^d$ (actually a bit larger even).
More or less totally useless in high dimensions.
Monte Carlo strikes back
Star discrepancy:
$$D_N^*(X) = \sup_{R \subset [0,1]^d} \left| \frac{\#\{i : x_i \in R\}}{N} - |R| \right|$$
We want error bounds in N and d!

Theorem (Heinrich, Novak, Wasilkowski, Wozniakowski, 2002)
There are points with
$$D_N^*(X) \lesssim \sqrt{\frac{d}{N}}.$$
This is still the best result. (Aistleitner 2011: constant c = 10.)
How do you get these points? Monte Carlo.
Jittered Sampling
If we already agree to distribute points randomly, we might just as
well distribute them randomly in a clever way.
Figure: 25 jittered points, one uniform random point in each cell of a 5 × 5 grid.
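In code, the construction is one line per cell: split $[0,1]^d$ into $m^d$ congruent cubes and drop one uniform point into each (a sketch of the standard grid-based variant pictured above; `jittered_sample` is my own helper name).

```python
import random
from itertools import product

def jittered_sample(m, d, rng=random):
    """One uniformly distributed point in each of the m^d subcubes of [0,1]^d."""
    return [tuple((c + rng.random()) / m for c in cell)
            for cell in product(range(m), repeat=d)]

pts = jittered_sample(5, 2, random.Random(0))  # 25 points, one per cell of the 5 x 5 grid
```

Each point is uniform in its own cell, so the union is a valid (but negatively correlated) sample of N = m^d points from the cube.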
Bellhouse, 1981
Cook, Porter & Carpenter, 1984
A Recent Application in Compressed Sensing (Nov 2015)
Theorem (Beck, 1987)
$$\mathbb{E}\, D_N^*(\text{jittered sampling}) \le C_d\, \frac{(\log N)^{\frac{1}{2}}}{N^{\frac{1}{2} + \frac{1}{2d}}}$$
Figure: a jittered sampling point set, one point per grid cell.
Theorem (Beck, 1987)
$$\mathbb{E}\, D_N^*(\text{jittered sampling}) \le C_d\, \frac{(\log N)^{\frac{1}{2}}}{N^{\frac{1}{2} + \frac{1}{2d}}}$$
- a very general result for many different discrepancies
- L2-based discrepancies (Chen & Travaglini, 2009)
- Problem: same old constant C_d (might be huge; the way the proof proceeds, it will be MASSIVE)
Theorem (Pausinger and S., 2015)
For N sufficiently large (depending on d),
$$\frac{1}{10}\,\frac{d}{N^{\frac{1}{2} + \frac{1}{2d}}} \;\le\; \mathbb{E}\, D_N^*(P) \;\le\; \frac{\sqrt{d}\,(\log N)^{\frac{1}{2}}}{N^{\frac{1}{2} + \frac{1}{2d}}}.$$
- 'sufficiently large' is bad (more about this later)
- the lower bound can probably be improved
- the upper bound not by much
How the proof works
Figure: a jittered point set.
How the proof works
Figure: the same jittered point set, with one point removed.
How the proof works
Figure: a few points in a thin strip of cells.
Maximize discrepancy over a $\sqrt{N}$-dimensional set in $[0, N^{-1/2}]$:
$$D_N \sim \frac{\sqrt{\sqrt{N}}}{\sqrt{N}} \cdot \frac{1}{\sqrt{N}} = \frac{1}{N^{3/4}}.$$
- lose a logarithm
- union bound on the other cubes
Figure: boundary regions labeled "large" and "small".
The large contribution comes from codimension-1 sets.
In d dimensions, we therefore expect the main contribution to the discrepancy to behave like
$$D_N \sim \frac{\sqrt{N^{\frac{d-1}{d}}}}{N^{\frac{d-1}{d}}} \cdot \frac{1}{N^{\frac{1}{d}}} = \frac{1}{N^{\frac{d-1}{2d}}} \cdot \frac{1}{N^{\frac{1}{d}}} = \frac{1}{N^{\frac{d+1}{2d}}}.$$
Of course, there is also a log. Adding up this quantity d times (because there are d fat slices of codimension 1) gives us an upper bound of
$$D_N \lesssim d\,\sqrt{\log N}\; \frac{1}{N^{\frac{d+1}{2d}}}.$$
We want to improve this a bit: standard Bernstein inequalities aren't enough.
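The exponent bookkeeping above is mechanical and easy to double-check with exact rational arithmetic (a sanity check, not part of the slides):

```python
from fractions import Fraction

def main_exponent(d):
    """Exponent of N in the expected contribution of one codimension-1 slice."""
    cells = Fraction(d - 1, d)   # a slice contains ~N^{(d-1)/d} cells
    sqrt_fluct = -cells / 2      # sqrt(#cells)/#cells = N^{-(d-1)/(2d)}
    width = -Fraction(1, d)      # the slice has thickness N^{-1/d}
    return sqrt_fluct + width

assert all(main_exponent(d) == -Fraction(d + 1, 2 * d) for d in range(2, 100))
print("exponent identity holds: (d-1)/(2d) + 1/d = (d+1)/(2d)")
```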
Sharp Dvoretzky-Kiefer-Wolfowitz inequality (Massart, 1990)
If $z_1, z_2, \ldots, z_k$ are independently and uniformly distributed random variables in $[0,1]$, then
$$\mathbb{P}\left( \sup_{0 \le z \le 1} \left| \frac{\#\{1 \le \ell \le k : 0 \le z_\ell \le z\}}{k} - z \right| > \varepsilon \right) \le 2 e^{-2 k \varepsilon^2}.$$
limit → Brownian bridge → Kolmogorov-Smirnov distribution
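The inequality is easy to probe empirically. The sketch below computes the exact sup-deviation of an empirical CDF and compares a simulated tail probability with the bound $2e^{-2k\varepsilon^2}$ (my own sanity check, not from the talk; note the constant is sharp, so the two values are close):

```python
import math
import random

def sup_deviation(sample):
    """sup over z of |F_k(z) - z| for the empirical CDF of a sample in [0,1];
    the supremum occurs at a sample point, approached from the left or right."""
    zs = sorted(sample)
    k = len(zs)
    return max(max(abs((i + 1) / k - z), abs(i / k - z)) for i, z in enumerate(zs))

rng = random.Random(1)
k, eps, trials = 50, 0.2, 2000
hits = sum(sup_deviation([rng.random() for _ in range(k)]) > eps for _ in range(trials))
empirical = hits / trials
bound = 2 * math.exp(-2 * k * eps ** 2)
print(f"P(sup > {eps}) ~ {empirical:.4f}, Massart bound {bound:.4f}")
```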
Refining estimates
This yields a refined Bernstein inequality for very quickly decaying
expectations.
Rumors!
Figure: Benjamin Doerr (Ecole Polytechnique, Paris)
Benjamin Doerr has probably removed a $\sqrt{\log d}$ (?). Sadly, still not effective for small N (?).
What partition gives the best jittered sampling?
You want to decompose $[0,1]^2$ into 4 sets such that the associated jittered sampling construction is as effective as possible. How?
Figure: four points, one in each part of a candidate partition of $[0,1]^2$.
Is this good? Is this bad? Will the optimal partition even be into 4 parts of the same volume? We don't actually know.
Jittered sampling always improves: variance reduction
Decompose $[0,1]^d$ into sets of equal measure,
$$[0,1]^d = \bigcup_{i=1}^{N} \Omega_i \quad \text{such that} \quad \forall\, 1 \le i \le N : |\Omega_i| = \frac{1}{N},$$
and measure using the $L_2$ discrepancy
$$L_2(A) := \left( \int_{[0,1]^d} \left| \frac{\#\big(A \cap [0,x]\big)}{\#A} - |[0,x]| \right|^2 dx \right)^{\frac{1}{2}}.$$

Observation (Pausinger and S., 2015)
$$\mathbb{E}\, L_2(\text{Jittered Sampling}_\Omega)^2 \le \mathbb{E}\, L_2(\text{Purely random}_N)^2.$$
Main Idea: Variance Reduction
(What happens in $L^3$?)
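The observation can be checked numerically via Warnock's closed-form formula for the squared $L_2$ star discrepancy; for N i.i.d. uniform points the exact expectation is $(2^{-d} - 3^{-d})/N$. A sketch for the standard grid partition in d = 2 (my illustration, not from the talk):

```python
import random
from math import prod

def l2_star_disc_sq(points):
    """Warnock's formula for L2(A)^2 of a point set A in [0,1]^d."""
    n, d = len(points), len(points[0])
    pair = sum(prod(1 - max(p[k], q[k]) for k in range(d))
               for p in points for q in points) / n ** 2
    cross = 2 / n * sum(prod((1 - p[k] ** 2) / 2 for k in range(d)) for p in points)
    return pair - cross + 3.0 ** -d

rng = random.Random(2)
m, trials = 3, 400  # N = 9 points in [0,1]^2

jit = sum(l2_star_disc_sq([((i + rng.random()) / m, (j + rng.random()) / m)
                           for i in range(m) for j in range(m)])
          for _ in range(trials)) / trials
mc = sum(l2_star_disc_sq([(rng.random(), rng.random()) for _ in range(m * m)])
         for _ in range(trials)) / trials
exact_mc = (2 ** -2 - 3 ** -2) / 9  # known expectation for purely random points
print(jit, "<", mc, "~", exact_mc)
```

The jittered mean comes out strictly smaller: conditioning one point to each cell removes the between-cell component of the variance, which is exactly the variance-reduction mechanism behind the observation.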
How to select 2 points: expected squared L2 discrepancy
Figure: candidate subdivisions of $[0,1]^2$ into two regions, with expected squared L2 discrepancies 0.0694 (MC), 0.0638, 0.0555, 0.05, 0.0471, 0.0470.
Theorem (Florian Pausinger, Manas Rachh, S.)
Among all splittings of a domain given by a function y = f(x) with symmetry around x = y, the following subdivision is optimal.
Figure: the optimal subdivision, with expected squared L2 discrepancy 0.04617.
The Most Nonlinear Integral Equation I’ve Ever Seen
Theorem (Florian Pausinger, Manas Rachh, S.)
Any optimal monotonically decreasing function g(x) whose graph is symmetric about y = x satisfies, for $0 \le x \le g^{-1}(0)$,
$$\big(1 - 2p - 4x g(x)\big)\big(1 - g(x)\big) + (4p - 1)\,x\,\frac{1 - g(x)^2}{4} - \int_{g(x)}^{g^{-1}(0)} (1 - y)\, g(y)\, dy$$
$$\quad + g'(x)\left[\big(1 - 2p - 4x g(x)\big)(1 - x) + (4p - 1)\, g(x)\,\frac{1 - x^2}{4} - \int_{x}^{g^{-1}(0)} (1 - y)\, g(y)\, dy\right] = 0.$$
Question. How to do 3 points in [0, 1]2? Simple rules?
Many thanks!
Weitere ähnliche Inhalte

Was ist angesagt?

accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
olli0601
 

Was ist angesagt? (20)

Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Nested sampling
Nested samplingNested sampling
Nested sampling
 
2018 MUMS Fall Course - Bayesian inference for model calibration in UQ - Ralp...
2018 MUMS Fall Course - Bayesian inference for model calibration in UQ - Ralp...2018 MUMS Fall Course - Bayesian inference for model calibration in UQ - Ralp...
2018 MUMS Fall Course - Bayesian inference for model calibration in UQ - Ralp...
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
ABC in Venezia
ABC in VeneziaABC in Venezia
ABC in Venezia
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
 
Delayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsDelayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithms
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
 
CLIM Fall 2017 Course: Statistics for Climate Research, Estimating Curves and...
CLIM Fall 2017 Course: Statistics for Climate Research, Estimating Curves and...CLIM Fall 2017 Course: Statistics for Climate Research, Estimating Curves and...
CLIM Fall 2017 Course: Statistics for Climate Research, Estimating Curves and...
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
QMC Opening Workshop, High Accuracy Algorithms for Interpolating and Integrat...
QMC Opening Workshop, High Accuracy Algorithms for Interpolating and Integrat...QMC Opening Workshop, High Accuracy Algorithms for Interpolating and Integrat...
QMC Opening Workshop, High Accuracy Algorithms for Interpolating and Integrat...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 

Ähnlich wie QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop, Jittered Sampling: Bounds & Problems - Stefan Steinberger, Dec 14, 2017

Ähnlich wie QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop, Jittered Sampling: Bounds & Problems - Stefan Steinberger, Dec 14, 2017 (20)

Density theorems for Euclidean point configurations
Density theorems for Euclidean point configurationsDensity theorems for Euclidean point configurations
Density theorems for Euclidean point configurations
 
Jurnal informatika
Jurnal informatika Jurnal informatika
Jurnal informatika
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
A Szemeredi-type theorem for subsets of the unit cube
A Szemeredi-type theorem for subsets of the unit cubeA Szemeredi-type theorem for subsets of the unit cube
A Szemeredi-type theorem for subsets of the unit cube
 
Density theorems for anisotropic point configurations
Density theorems for anisotropic point configurationsDensity theorems for anisotropic point configurations
Density theorems for anisotropic point configurations
 
Divergence clustering
Divergence clusteringDivergence clustering
Divergence clustering
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
Proof Techniques
Proof TechniquesProof Techniques
Proof Techniques
 
Lecture5
Lecture5Lecture5
Lecture5
 
Modeling the Dynamics of SGD by Stochastic Differential Equation
Modeling the Dynamics of SGD by Stochastic Differential EquationModeling the Dynamics of SGD by Stochastic Differential Equation
Modeling the Dynamics of SGD by Stochastic Differential Equation
 
A Mathematically Derived Number of Resamplings for Noisy Optimization (GECCO2...
A Mathematically Derived Number of Resamplings for Noisy Optimization (GECCO2...A Mathematically Derived Number of Resamplings for Noisy Optimization (GECCO2...
A Mathematically Derived Number of Resamplings for Noisy Optimization (GECCO2...
 
Quantum chaos of generic systems - Marko Robnik
Quantum chaos of generic systems - Marko RobnikQuantum chaos of generic systems - Marko Robnik
Quantum chaos of generic systems - Marko Robnik
 
Divide and conquer
Divide and conquerDivide and conquer
Divide and conquer
 
Adaline and Madaline.ppt
Adaline and Madaline.pptAdaline and Madaline.ppt
Adaline and Madaline.ppt
 
4 litvak
4 litvak4 litvak
4 litvak
 
Decomposition and Denoising for moment sequences using convex optimization
Decomposition and Denoising for moment sequences using convex optimizationDecomposition and Denoising for moment sequences using convex optimization
Decomposition and Denoising for moment sequences using convex optimization
 
Prime numbers boundary
Prime numbers boundary Prime numbers boundary
Prime numbers boundary
 
Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...
Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...
Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...
 
EXPECTED NUMBER OF LEVEL CROSSINGS OF A RANDOM TRIGONOMETRIC POLYNOMIAL
EXPECTED NUMBER OF LEVEL CROSSINGS OF A RANDOM TRIGONOMETRIC POLYNOMIALEXPECTED NUMBER OF LEVEL CROSSINGS OF A RANDOM TRIGONOMETRIC POLYNOMIAL
EXPECTED NUMBER OF LEVEL CROSSINGS OF A RANDOM TRIGONOMETRIC POLYNOMIAL
 

Mehr von The Statistical and Applied Mathematical Sciences Institute

Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
The Statistical and Applied Mathematical Sciences Institute
 
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
The Statistical and Applied Mathematical Sciences Institute
 
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
The Statistical and Applied Mathematical Sciences Institute
 
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
The Statistical and Applied Mathematical Sciences Institute
 
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
The Statistical and Applied Mathematical Sciences Institute
 
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
The Statistical and Applied Mathematical Sciences Institute
 

Mehr von The Statistical and Applied Mathematical Sciences Institute (20)

Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
 
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
 
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
 
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
 
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
 
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
 
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
 
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
 
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
 
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
 
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
 
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
 
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
 
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
 
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
 
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
 
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
 
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
 
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
 
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
 

Kürzlich hochgeladen

Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
ZurliaSoop
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
KarakKing
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
heathfieldcps1
 

Kürzlich hochgeladen (20)

Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
Wellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptxWellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptx
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptxOn_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptx
 
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptxHMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Interdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxInterdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptx
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
 
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptxHMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
 
Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)
 
REMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxREMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptx
 

QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop, Jittered Sampling: Bounds & Problems - Stefan Steinberger, Dec 14, 2017

  • 1. Jittered Sampling: Bounds and Problems Stefan Steinerberger joint with Florian Pausinger (Belfast), Manas Rachh (Yale)
  • 2. Florian Pausinger (Belfast) and Manas Rachh (Yale)
  • 3. QMC: the standard Dogma Star discrepancy. D⇤ N(X) = sup R⇢[0,1]d # {i : xi 2 R} N |R| This is a good quantity to minimize because Theorem (Koksma-Hlawka) Z [0,1]d f (x)dx 1 N NX n=1 f (xn) . (D⇤ N) (var(f )) . In particular: error only depends on the oscillation of f .
  • 4. QMC: the standard Dogma Star discrepancy. D⇤ N(X) = sup R⇢[0,1]d # {i : xi 2 R} N |R| Two competing conjectures (emotionally charged subject) D⇤ N & (log N)d 1 N or D⇤ N & (log N)d/2 N . There are many clever constructions of point set that achieve D⇤ N . (log N)d 1 N .
  • 5.
  • 6. QMC: the standard Dogma D⇤ N & (log N)d 1 N or D⇤ N & (log N)d/2 N . How would one actually try to prove this? Open for 80+ years, that sounds bad. Small ball conjecture seems spiritually related.
  • 7. Interlude: the small ball conjecture +1 1 1 +1 +1 1 1 +1 Haar functions hR on rectangles R.
  • 8. Interlude: the small ball conjecture All dyadic rectangles of area 2 2.
  • 9. Interlude: the small ball conjecture Small ball conjecture, Talagrand (1994) For all choices of sign "R 2 { 1, 1} X |R|=2 n "RhR L1 & nd/2 . 1. Talagrand cared about behavior of the Brownian sheet. 2. The lower bound & n(d 1)/2 is easy. 3. The case d = 2 is the only one that has been settled: three proofs due to M. Talagrand, V. Temlyakov (via Riesz products) and a beautiful one by Bilyk & Feldheim. 4. Only partial results in d 3 (Bilyk, Lacey, etc.)
  • 10. Interlude: the small ball conjecture Small ball conjecture, Talagrand (1994) For all choices of sign "R 2 { 1, 1} X |R|=2 n "RhR L1 & nd/2 . A recent surprise Theorem (Noah Kravitz, arXiv:1712.01206) For any choice of signs "R and any integer 0  k  n + 1, 8 < : x 2 [0, 1)2 : X |R|=2 n "RhR = n + 1 2k 9 = ; = 1 2n+1 ✓ n + 1 k ◆ .
  • 11. Problem with the Standard Dogma Star discrepancy. D⇤ N(X) = sup R⇢[0,1]d # {i : xi 2 R} N |R| The constructions achieving D⇤ N . (log N)d 1 N start being e↵ective around N dd (actually a bit larger even). More or less totally useless in high dimensions.
  • 12. Monte Carlo strikes back. Star discrepancy: $D^*_N(X) = \sup_{R \subset [0,1]^d} \left| \frac{\#\{i : x_i \in R\}}{N} - |R| \right|$. We want error bounds in both $N$ and $d$! (Heinrich, Novak, Wasilkowski, Wozniakowski, 2002): There are points with $D^*_N(X) \lesssim \sqrt{\frac{d}{N}}$. This is still the best result (Aistleitner 2011: constant $c = 10$). How do you get these points? Monte Carlo.
  • 13. Jittered Sampling. If we already agree to distribute points randomly, we might just as well distribute them randomly in a clever way. [Figure: a 5×5 grid of squares with one random point in each square.]
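A minimal sketch of the construction in NumPy (the function name and the choice of a cubic grid partition are mine; the slides do not fix an implementation): partition $[0,1]^d$ into $m^d$ congruent subcubes and place one uniform point in each.

```python
import numpy as np

def jittered_sample(m, d, rng=None):
    """N = m^d points: one uniform random point in each of the m^d
    axis-parallel subcubes of side length 1/m partitioning [0,1]^d."""
    rng = np.random.default_rng(rng)
    # lower-left corners of the subcubes, as integer multiples of 1/m
    corners = np.stack(
        np.meshgrid(*[np.arange(m)] * d, indexing="ij"), axis=-1
    ).reshape(-1, d)
    return (corners + rng.random((m ** d, d))) / m

pts = jittered_sample(5, 2, rng=0)  # N = 25 points in the unit square
```

Each point is uniform in its own cell, so the point set is "random, but spread out": no cell is ever empty and none contains two points.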
  • 15. Cook, Porter & Carpenter, 1984
  • 16. Cook, Porter & Carpenter, 1984
  • 17. A Recent Application in Compressed Sensing (Nov 2015)
  • 18. Theorem (Beck, 1987): $\mathbb{E}\, D^*_N(\text{jittered sampling}) \le C_d\, (\log N)^{\frac12}\, N^{-\frac12 - \frac{1}{2d}}$. [Figure: 5×5 jittered sampling point set.]
  • 19. Theorem (Beck, 1987): $\mathbb{E}\, D^*_N(\text{jittered sampling}) \le C_d\, (\log N)^{\frac12}\, N^{-\frac12 - \frac{1}{2d}}$. (i) a very general result for many different discrepancies; (ii) $L^2$-based discrepancies (Chen & Travaglini, 2009); (iii) Problem: same old constant $C_d$ (might be huge; the way the proof proceeds, it will be MASSIVE).
  • 20. Theorem (Pausinger and S., 2015): For $N$ sufficiently large (depending on $d$), $\frac{d}{10}\, N^{-\frac12 - \frac{1}{2d}} \le \mathbb{E}\, D^*_N(P) \le \sqrt{d}\,(\log N)^{\frac12}\, N^{-\frac12 - \frac{1}{2d}}$. (i) 'sufficiently large' is bad (talk about this later); (ii) the lower bound can probably be improved; (iii) the upper bound not by much.
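The gap between the plain Monte Carlo rate $N^{-1/2}$ and the jittered rate $N^{-\frac12-\frac{1}{2d}}$ is visible numerically even at $N = 25$. The sketch below (my own heuristic, not code from the talk) approximates $D^*_N$ in $d = 2$ by scanning anchored boxes whose upper-right corner lies on the grid spanned by the point coordinates:

```python
import numpy as np

def star_disc_2d(pts):
    """Approximate D*_N in d = 2: scan anchored boxes with upper-right corner
    on the grid spanned by the point coordinates and 1, checking both closed
    and open boxes to account for boundary effects."""
    n = len(pts)
    xs = np.append(pts[:, 0], 1.0)
    ys = np.append(pts[:, 1], 1.0)
    best = 0.0
    for x in xs:
        for y in ys:
            closed = np.count_nonzero((pts[:, 0] <= x) & (pts[:, 1] <= y)) / n
            open_ = np.count_nonzero((pts[:, 0] < x) & (pts[:, 1] < y)) / n
            best = max(best, closed - x * y, x * y - open_)
    return best

rng = np.random.default_rng(0)
m, n, trials = 5, 25, 100
corners = np.stack(np.meshgrid(np.arange(m), np.arange(m), indexing="ij"), -1).reshape(-1, 2)
jit = np.mean([star_disc_2d((corners + rng.random((n, 2))) / m) for _ in range(trials)])
mc = np.mean([star_disc_2d(rng.random((n, 2))) for _ in range(trials)])
print(f"jittered {jit:.3f} vs pure MC {mc:.3f}")
```

Averaged over trials, the jittered point set has visibly smaller star discrepancy than an i.i.d. uniform sample of the same size.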
  • 21. How the proof works. [Figure: 5×5 jittered sampling point set.]
  • 22. How the proof works. [Figure: jittered point set.]
  • 23. How the proof works. [Figure: one slice of squares cut by the boundary of a test box.] Maximize the discrepancy over a $\sqrt{N}$-dimensional set in $[0, N^{-1/2}]$: $D_N \sim \frac{\sqrt{\sqrt{N}}}{\sqrt{N}} \cdot \frac{1}{\sqrt{N}} = \frac{1}{N^{3/4}}$. (i) lose a logarithm; (ii) union bound on the other cubes.
  • 25. In $d$ dimensions, we therefore expect the main contribution to the discrepancy to behave like $D_N \sim \frac{\sqrt{N^{\frac{d-1}{d}}}}{N^{\frac{d-1}{d}}} \cdot \frac{1}{N^{\frac{1}{d}}} = \frac{1}{N^{\frac{d-1}{2d}}} \cdot \frac{1}{N^{\frac{1}{d}}} = \frac{1}{N^{\frac{d+1}{2d}}}$. Of course, there is also a log. Adding up this quantity $d$ times (because there are $d$ fat slices of codimension 1) gives us an upper bound of $D_N \lesssim d\,\sqrt{\log N}\; N^{-\frac{d+1}{2d}}$. We want to improve this a bit: standard Bernstein inequalities aren't enough.
  • 26. Sharp Dvoretzky-Kiefer-Wolfowitz inequality (Massart, 1990): If $z_1, z_2, \dots, z_k$ are independently and uniformly distributed random variables in $[0,1]$, then $\mathbb{P}\left( \sup_{0 \le z \le 1} \left| \frac{\#\{1 \le \ell \le k : 0 \le z_\ell \le z\}}{k} - z \right| > \varepsilon \right) \le 2 e^{-2k\varepsilon^2}$. In the limit: Brownian bridge, Kolmogorov-Smirnov distribution.
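A quick numerical sanity check of Massart's bound (the parameters $k$, $\varepsilon$ and trial count are arbitrary choices of mine for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
k, eps, trials = 200, 0.1, 2000
exceed = 0
for _ in range(trials):
    z = np.sort(rng.random(k))
    i = np.arange(1, k + 1)
    # sup_z |F_k(z) - z| is attained at the order statistics:
    # F_k jumps to i/k at z_(i) and equals (i-1)/k just below it
    dev = max(np.max(i / k - z), np.max(z - (i - 1) / k))
    exceed += dev > eps
bound = 2 * np.exp(-2 * k * eps ** 2)
print(f"empirical {exceed / trials:.4f} vs bound {bound:.4f}")
```

Massart's point is that the constant 2 cannot be improved, so for these parameters the empirical exceedance frequency lands quite close to the bound rather than far below it.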
  • 27. Refining estimates This yields a refined Bernstein inequality for very quickly decaying expectations.
  • 28. Rumors! [Figure: Benjamin Doerr (École Polytechnique, Paris).] Benjamin Doerr probably removed a $\sqrt{\log d}$ (?). Sadly, still not effective for small $N$ (?).
  • 29. What partition gives the best jittered sampling? You want to decompose $[0,1]^2$ into 4 sets such that the associated jittered sampling construction is as effective as possible. How? [Figure: a partition of the square into four cells with one point in each.] Is this good? Is this bad? Will it be into 4 parts of the same volume? We don't actually know.
  • 30. Jittered sampling always improves: variance reduction. Decompose $[0,1]^d$ into sets of equal measure, $[0,1]^d = \bigcup_{i=1}^{N} \Omega_i$ with $|\Omega_i| = \frac{1}{N}$ for all $1 \le i \le N$, and measure using the $L^2$ discrepancy $L_2(A) := \left( \int_{[0,1]^d} \left( \frac{\#(A \cap [0,x])}{\#A} - |[0,x]| \right)^2 dx \right)^{\frac12}$. Observation (Pausinger and S., 2015): $\mathbb{E}\, L_2(\text{Jittered Sampling}_\Omega)^2 \le \mathbb{E}\, L_2(\text{Purely random}_N)^2$.
  • 31. Main Idea: Variance Reduction (What happens in $L^3$?)
  • 32. How to select 2 points: expected squared $L^2$ discrepancy. [Figure: candidate partitions of $[0,1]^2$ into two sets, with expected squared $L^2$ discrepancies 0.0694 (MC), 0.0638, 0.0555, 0.05, 0.0470, 0.0471.]
  • 33. Theorem (Florian Pausinger, Manas Rachh, S.): Among all splittings of a domain given by a function $y = f(x)$ with symmetry around $x = y$, the following subdivision is optimal. [Figure: the optimal subdivision; expected squared $L^2$ discrepancy 0.04617.]
  • 34. The Most Nonlinear Integral Equation I've Ever Seen. Theorem (Florian Pausinger, Manas Rachh, S.): Any optimal monotonically decreasing function $g(x)$ whose graph is symmetric about $y = x$ satisfies, for $0 \le x \le g^{-1}(0)$,
$$\left(1 - 2p - 4x g(x)\right)\left(1 - g(x)\right) + (4p-1)x - \frac{1 - g(x)^2}{4} - \int_{g(x)}^{g^{-1}(0)} (1-y)\, g'(y)\, dy$$
$$+\; g'(x)\left[\left(1 - 2p - 4x g(x)\right)(1 - x) + (4p-1)\, g(x) - \frac{1 - x^2}{4} - \int_{x}^{g^{-1}(0)} (1-y)\, g(y)\, dy\right] = 0.$$
Question: How to do 3 points in $[0,1]^2$? Simple rules?