A computational framework based on Transported Meshfree Methods

P.G. LeFloch (CNRS), J.M. Mercier (MPG-Partners)
Métabief, 16 January 2020
Foundations: local integration with Monte-Carlo methods

Monte-Carlo estimates: consider the following family of worst-case error estimates
(µ a probability measure, Y = (y_1, ..., y_N) ∈ R^{N×D}):

    | ∫_{R^D} φ(x) dµ − (1/N) Σ_{n=1}^N φ(y_n) |  ≤  E(Y, H_µ) ‖φ‖_{H_µ},

where H_µ is a µ-weighted Hilbert (or Banach) functional space.

1. Classical example 1: Y i.i.d. Then E(Y, H_µ) ∼ 1/√N with H_µ ∼ L²(R^D, |x|² dµ)
   (statistics: law of large numbers). This is the most used convergence rate in
   the finance industry.
2. Classical example 2: Y a Sobol sequence, µ = dx_Ω, Ω = [0, 1]^D. Then
   H_K = BV(Ω) (bounded variation) and E(Y, H_K) ≥ ln(N)^{D−1}/N
   (Koksma–Hlawka sharp-estimate conjecture).
3. Other examples: quantizers, wavelets, deep feed-forward neural networks, ...
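As a quick numerical illustration of classical example 1 (ours, not from the talk), here is a plain i.i.d. Monte-Carlo sketch; the measure µ, the integrand φ, and the sample sizes are illustrative choices:

```python
import math
import random

def mc_estimate(phi, sampler, n, seed=0):
    """Plain Monte-Carlo estimate of the integral of phi against mu,
    using n i.i.d. samples drawn by `sampler`."""
    rng = random.Random(seed)
    return sum(phi(sampler(rng)) for _ in range(n)) / n

# Illustrative choices: mu = uniform law on [0, 1], phi(x) = x^2,
# so the exact integral is 1/3.
phi = lambda x: x * x
sampler = lambda rng: rng.random()

def mean_abs_error(n, trials=20):
    """Average |estimate - 1/3| over independent seeds."""
    return sum(abs(mc_estimate(phi, sampler, n, seed=s) - 1.0 / 3.0)
               for s in range(trials)) / trials
```

Averaged over seeds, the error should shrink roughly like 1/√N when N grows, which is the rate quoted above.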
A general approach using kernel methods

1. You have a problem involving a probability measure µ, and you guess that the
   solution belongs to a weighted functional space H_µ.
2. Identify the admissible kernel K(x, y) generating it (RKHS theory): H_µ ≡ H_K.
   Examples of classically used kernels: ReLU, convolutional kernels, Wendland
   functions, ...
3. Pick (i.i.d.) samples y_1, ..., y_N. Then you can measure your integration
   error using

       | ∫_{R^D} φ(x) dµ − (1/N) Σ_{n=1}^N φ(y_n) |  ≤  E(Y, H_K) ‖φ‖_{H_K},

   where

       E²(Y, H_K) = ∫_{R^{2D}} K(x, y) dµ(x) dµ(y)
                    + (1/N²) Σ_{n,m=1}^N K(y_n, y_m)
                    − (2/N) Σ_{n=1}^N ∫_{R^D} K(x, y_n) dµ(x).

4. You can optimize the error by computing sharp discrepancy sequences and the
   optimal discrepancy error:

       Ȳ = arg inf_{Y ∈ R^{D×N}} E(Y, H_K),     E_{H_K}(N, D) = E(Ȳ, H_K).
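A minimal sketch of the squared discrepancy E²(Y, H_K) (ours, not the talk's code), for µ = dx on Ω = [0, 1] and an illustrative Gaussian kernel; the two integrals are approximated by a midpoint rule:

```python
import math

def k_gauss(x, y, tau=0.3):
    # Illustrative Gaussian kernel; the talk also considers Matérn,
    # multiquadric, Wendland, ... kernels.
    return math.exp(-((x - y) ** 2) / (2.0 * tau ** 2))

def discrepancy_sq(Y, m=400):
    """E^2(Y, H_K) for mu = dx on [0, 1]:
       int int K dmu dmu + (1/N^2) sum K(y_n, y_m) - (2/N) sum int K(., y_n) dmu,
       with both integrals replaced by an m-point midpoint rule."""
    xs = [(i + 0.5) / m for i in range(m)]
    n = len(Y)
    term1 = sum(k_gauss(a, b) for a in xs for b in xs) / (m * m)
    term2 = sum(k_gauss(a, b) for a in Y for b in Y) / (n * n)
    term3 = 2.0 * sum(k_gauss(a, b) for a in xs for b in Y) / (m * n)
    return term1 + term2 - term3

spread = [(i + 0.5) / 8 for i in range(8)]  # well-spread point set
clump = [0.5] * 8                           # degenerate point set
```

A well-spread Y yields a much smaller discrepancy than a clumped one, which is exactly what the arg-inf in step 4 exploits.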
Our local kernels: lattice-based and transported kernels

For our purposes, we crafted two kinds of kernels:

1. Lattice-based kernels (suited to Lebesgue-type measures µ = dx_Ω). Let L be a
   lattice and L* its dual lattice. Consider any discrete function satisfying
   φ(α*) ∈ ℓ¹(L*), φ(α*) ≥ 0, φ(0) = 1, and define

       K_per(x, y) = (1/|L|) Σ_{α* ∈ L*} φ(α*) exp(2iπ⟨x − y, α*⟩).

   [Figure: surface plots of the Matérn, multiquadric, Gaussian, and truncated
   lattice kernels.]

2. Transported kernels: for a transport map S : Ω → R^D, set
   K_tra(x, y) = K(S(x), S(y)).

   [Figure: surface plots of the corresponding transported Matérn, Gaussian,
   multiquadric, and truncated kernels.]
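Both constructions can be sketched in a few lines (a 1-D toy with L = Z, hence L* = Z and |L| = 1; the weight φ and the transport map S below are illustrative choices, not the talk's):

```python
import math

def k_per(x, y, tau=5.0, cutoff=50):
    """Lattice-based kernel on L = Z (so L* = Z, |L| = 1), truncated at |a| <= cutoff:
         K_per(x, y) = sum_{a in L*} phi(a) exp(2 i pi <x - y, a>),
       with phi(a) = 1 / (1 + 4 pi^2 a^2 / tau^2): summable, nonnegative, phi(0) = 1."""
    phi = lambda a: 1.0 / (1.0 + 4.0 * math.pi ** 2 * a ** 2 / tau ** 2)
    # phi is even, so the complex sum collapses to a real cosine series.
    s = phi(0)
    for a in range(1, cutoff + 1):
        s += 2.0 * phi(a) * math.cos(2.0 * math.pi * a * (x - y))
    return s

def make_transported_kernel(k, s_map):
    """Transported kernel K_tra(x, y) = K(S(x), S(y)) for a transport map S."""
    return lambda x, y: k(s_map(x), s_map(y))

# Illustrative transport: the logit sends Omega = (0, 1) onto R.
k_gauss = lambda x, y: math.exp(-0.5 * (x - y) ** 2)
logit = lambda u: math.log(u / (1.0 - u))
k_tra = make_transported_kernel(k_gauss, logit)
```

By construction K_per is 1-periodic in x − y, and K_tra inherits symmetry and positive-definiteness from the base kernel K.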
Example I: Monte-Carlo integration with the Matérn kernel

1. Kernel, random and computed sequences Y; N = 256, D = 2.

   [Figure: the Matérn kernel surface; 256 random points on [0, 1]²; the
   computed points for the lattice Matérn kernel.]

2. Optimal discrepancy error → a Koksma–Hlawka-type estimate:

       E_{H_K}(N, D) ∼ (1/N) Σ_{n>N} φ(α*_n) ∼ ln(N)^{D−1}/N,
       φ(α) = Π_{d=1}^D  2 / (1 + 4π²α_d²/τ_D²).

3. E(Y, H_K) random vs. E(Y, H_K) computed vs. theoretical E_{H_K}(N, D):

   Random:        D=1     D=16    D=128
       N=16       0.228   0.304   0.319
       N=128      0.117   0.111   0.115
       N=512      0.035   0.054   0.059

   Computed:      D=1     D=16    D=128
       N=16       0.062   0.211   0.223
       N=128      0.008   0.069   0.077
       N=512      0.002   0.034   0.049

   Theoretical:   D=1     D=16    D=128
       N=16       0.062   0.288   0.323
       N=128      0.008   0.077   0.105
       N=512      0.002   0.034   0.043
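"Computed" points of this kind can be mimicked by a naive greedy minimisation of E²(Y, H_K) over a candidate grid — a toy stand-in for the arg-inf, not the authors' optimiser (1-D, an illustrative Gaussian kernel, µ = dx on [0, 1]):

```python
import math

def k_gauss(x, y, tau=0.3):
    return math.exp(-((x - y) ** 2) / (2.0 * tau ** 2))

# Quadrature nodes and the constant double-integral term, computed once.
M = 200
XS = [(i + 0.5) / M for i in range(M)]
T1 = sum(k_gauss(a, b) for a in XS for b in XS) / (M * M)

def discrepancy_sq(Y):
    """Squared discrepancy E^2(Y, H_K) on [0, 1], midpoint rule for the integrals."""
    n = len(Y)
    t2 = sum(k_gauss(a, b) for a in Y for b in Y) / (n * n)
    t3 = 2.0 * sum(k_gauss(a, b) for a in XS for b in Y) / (M * n)
    return T1 + t2 - t3

def greedy_sequence(n_points, candidates):
    """Grow Y one point at a time, always adding the candidate that yields the
    smallest E^2(Y, H_K) -- a crude sequential version of the arg-inf."""
    Y = []
    for _ in range(n_points):
        Y.append(min(candidates, key=lambda c: discrepancy_sq(Y + [c])))
    return Y

grid = [(i + 0.5) / 32 for i in range(32)]
Y8 = greedy_sequence(8, grid)
```

Even this crude scheme spreads the points out across the domain and beats a degenerate point cloud by a wide margin, mirroring the random-vs-computed gap in the table.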
Application: machine learning

1. Setting: consider a set of observations

       (y_1, P_1), ..., (y_N, P_N) ∈ (R^D × R^M)^N.

2. Interpolation: pick a kernel K(x, y), denote by H_K its native space, and
   consider a continuous function P(y) such that

       ⟨P, δ_{y_n}⟩ = P(y_n) ∼ P_n.

   One can further optimize by computing Y (∼ learning).
3. Extrapolation: one can then extrapolate with the error bound

       | ∫_{R^D} P(x) dµ − (1/N) Σ_{n=1}^N P(y_n) |  ≤  E_{H_K}(N, D) ‖P‖_{H_K},

   i.e. µ ∼ (1/N) Σ_{n=1}^N δ_{y_n}.
4. Two very similar applications:
   1. (y_1, P_1), ..., (y_N, P_N) are prices and implied volatilities (e.g. call
      options under the SABR model): pricing.
   2. (y_1, P_1), ..., (y_N, P_N) are pictures of dogs and cats: a classifier.
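The interpolation step can be sketched as solving the linear system K a = P for the coefficients of P(x) = Σ_n a_n K(x, y_n) (our illustration: 1-D data, a Gaussian kernel, and a tiny dense solver; not the talk's implementation):

```python
import math

def k(x, y, tau=0.5):
    return math.exp(-((x - y) ** 2) / (2.0 * tau ** 2))

def solve(A, b):
    """Small dense linear solver (Gaussian elimination with partial pivoting)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def interpolate(ys, Ps):
    """Kernel interpolant P(x) = sum_n a_n K(x, y_n) with P(y_n) = P_n."""
    A = [[k(a, b) for b in ys] for a in ys]
    coeff = solve(A, Ps)
    return lambda x: sum(c * k(x, y) for c, y in zip(coeff, ys))

ys, Ps = [0.0, 0.5, 1.0], [1.0, 0.2, -0.4]
P = interpolate(ys, Ps)
```

The interpolant reproduces the observations exactly at the nodes y_n; "learning" in the sense above would additionally move the y_n themselves.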
Application to time-dependent PDEs

[Video: Navier–Stokes simulation.]

1. Consider a time-dependent probability measure µ(t, x) and a kernel K_t(x, y).
   We can define sharp discrepancy sequences t → (y_1(t), ..., y_N(t)).
2. For PDEs, we can try to compute these sequences. For instance, consider the
   Navier–Stokes equations (hyperbolic equations):

       ∂_t µ + ∇·(vµ) = 0,    ∂_t(µv) + ∇·(µv ⊗ v) = −∇p + ∇·(µΣ),
       ∇·v = 0  (or energy conservation for non-Newtonian fluids),

   together with Dirichlet / Neumann boundary conditions. We obtain a numerical
   scheme sharing some similarities with SPH (smoothed-particle hydrodynamics):
   these are LAGRANGIAN MESHFREE METHODS.
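The Lagrangian viewpoint in a toy 1-D sketch (our illustration, explicit Euler, a made-up velocity field): moving the particles along dy/dt = v(y) transports the empirical measure (1/N) Σ δ_{y_n(t)} according to ∂_t µ + ∇·(vµ) = 0.

```python
def advect(points, v, dt, steps):
    """Move the particle cloud y_n(t) along dy/dt = v(y) with explicit Euler.
    The empirical measure carried by the points then solves the transport
    equation d/dt mu + div(v mu) = 0 -- the Lagrangian, meshfree viewpoint."""
    for _ in range(steps):
        points = [y + dt * v(y) for y in points]
    return points

# Toy 1-D velocity field v(y) = -y: every particle relaxes toward 0,
# each Euler step multiplying it by (1 - dt).
pts = advect([1.0, -2.0, 0.5], v=lambda y: -y, dt=0.01, steps=100)
```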
Application to industrial finance

Consider µ(t, x) the solution to a non-linear hyperbolic-parabolic Fokker–Planck
equation:

    ∂_t µ − Lµ = 0,    Lµ = ∇·(bµ) + ∇²·(Aµ),    A := (1/2) σσ^T.

1. FORWARD: compute µ(t) ∼ (1/N)(δ_{y_1(t)} + ... + δ_{y_N(t)}) as sharp
   discrepancy sequences:

       | ∫_{R^D} φ(x) dµ(t, x) − (1/N) Σ_{n=1}^N φ(y_n(t)) |
           ≤  E(Y(t), H_{K_t}) ‖φ‖_{H_{K_t}}.

2. CHECK the optimal rate: E(Y(t), H_{K_t}) ∼ E_{H_{K_t}}(N, D).
3. BACKWARD: interpret t → y_n(t), n = 1, ..., N, as a moving, transported PDE
   grid (a tree). Solve the Kolmogorov equation on it. Error estimate:

       | ∫_{R^D} P(t, ·) dµ(t, ·) − (1/N) Σ_{n=1}^N P(t, y_n(t)) |
           ≤  E_{H_{K_t}}(N, D) ‖P(t, ·)‖_{H_{K_t}}.
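The FORWARD step can be sketched with an Euler–Maruyama particle cloud (our illustration on an Ornstein–Uhlenbeck toy; plain i.i.d. particles here, where the talk would use optimised sharp discrepancy sequences):

```python
import math
import random

def euler_maruyama(y0, b, sigma, dt, steps, rng):
    """One path of dY = b(Y) dt + sigma(Y) dW. A cloud of N such paths gives
    mu(t) ~ (1/N)(delta_{y_1(t)} + ... + delta_{y_N(t)}), the particle
    approximation of the Fokker-Planck solution."""
    y = y0
    for _ in range(steps):
        y = y + b(y) * dt + sigma(y) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return y

rng = random.Random(1)
# Ornstein-Uhlenbeck toy: b(y) = -y, sigma = 1; at t = 3 the law of Y is
# close to its stationary Gaussian with mean 0 and variance 1/2.
cloud = [euler_maruyama(0.0, lambda y: -y, lambda y: 1.0, 0.01, 300, rng)
         for _ in range(1000)]
```

The empirical mean and variance of the cloud approximate the moments of µ(t); the talk's point is that optimised particle positions reach the same accuracy with far fewer particles.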
Illustration: the 2D SABR process, widely used in finance

The SABR process reads

    d(F_t, α_t)^T = ρ [ α_t F_t^β   0 ;  0   ν α_t ] (dW¹_t, dW²_t)^T,

with 0 ≤ β ≤ 1, ν ≥ 0, ρ ∈ R^{2×2}. The Fokker–Planck equation associated with
SABR is

    ∂_t µ + L*µ = 0,    L*µ = ∇²·( (x₂²/2) ρ [ x₁^{2β}  0 ;  0  ν² ] ρ^T µ ).

[Video: SABR200 simulation.]
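A minimal SABR path sketch (ours, not the talk's code), with a scalar correlation ρ in place of the slide's 2×2 matrix and a crude floor keeping F positive:

```python
import math
import random

def sabr_step(F, a, beta, nu, rho, dt, rng):
    """One time step of the 2-D SABR dynamics
         dF = a * F^beta dW1,   da = nu * a dW2,   corr(dW1, dW2) = rho.
    F uses an Euler step (with a crude positivity floor); a, being geometric
    Brownian motion, uses the exact log-normal step."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    F = max(F + a * F ** beta * math.sqrt(dt) * z1, 1e-12)
    a = a * math.exp(nu * math.sqrt(dt) * z2 - 0.5 * nu ** 2 * dt)
    return F, a

rng = random.Random(7)
F, a = 1.0, 0.2
for _ in range(100):
    F, a = sabr_step(F, a, beta=0.5, nu=0.3, rho=-0.5, dt=0.01, rng=rng)
```

A cloud of such paths is the Monte-Carlo counterpart of the transported particle measure shown in the video.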
The curse of dimensionality

The CURSE of dimensionality in finance: price and manage a complex option written
on several underlyings.

1. Step 1: compute a measure solution µ(t, x) to a Fokker–Planck equation in
   large dimension; calibrate it.
2. Step 2: solve a Kolmogorov equation backward in large dimension; denote the
   solution P(t, x).
3. Step 3: compute various metrics on the solution, for instance VaR or XVA for
   regulatory purposes, or future deltas / gammas / implied vols for hedging
   purposes.
4. The result? We can compute the solution P(t, x) to any order of accuracy:

       | ∫_{R^D} P(t, ·) dµ(t, ·) − (1/N) Σ_{n=1}^N P(t, y_n(t)) |
           ≤  ‖P(t, ·)‖_{H_K} / N^α,

   where α ≥ 1/2 is any number. Choose it according to your desired electricity
   bill! But beware of smoothing effects in high dimensions: H_K contains less
   information as the dimension grows. Some problems, for instance optimal
   stopping problems, are intrinsically cursed.
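The rate N^{−α} can be sanity-checked against the tables of Example I by fitting α in err(N) ≈ C/N^α from successive error pairs (our small helper, applied here to the D = 1 "computed" column):

```python
import math

def estimated_order(errors):
    """alpha estimates from successive (N, err) pairs, assuming err(N) ~ C / N^alpha:
       alpha = log(err_0 / err_1) / log(N_1 / N_0)."""
    return [math.log(e0 / e1) / math.log(n1 / n0)
            for (n0, e0), (n1, e1) in zip(errors, errors[1:])]

# D = 1 column of the 'computed' table in Example I: N = 16, 128, 512.
orders = estimated_order([(16, 0.062), (128, 0.008), (512, 0.002)])
```

For that column the fitted orders are close to 1, i.e. well above the i.i.d. rate α = 1/2, consistent with the ln(N)^{D−1}/N estimate in dimension 1.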
Academic tests, business cases

1. Academic works: finance, non-linear hyperbolic systems.
   1. Revisiting the method of characteristics via a convex hull algorithm:
      explicit solutions to high-dimensional conservation laws with non-convex
      fluxes.
   2. Numerical results using CoDeFi: a benchmark of TMM methods for classical
      pricing problems.
2. Business cases (done):
   1. Hedging Strategies for Net Interest Income and Economic Values of Equity
      (http://dx.doi.org/10.2139/ssrn.3454813, with S. Miryusupov).
   2. Computing metrics for a big portfolio of autocalls depending on several
      underlyings (unpublished).
3. Under work:
   1. McKean–Vlasov equations (stochastic volatility modeling).
   2. ISDA Standard Initial Margin: XVA computations based on sensitivities
      (delta / vega / gamma, ...).
   3. The IBOR / RFR rates transition à la Lyashenko–Mercurio.
   4. Strategies for liquidity risk: Hamilton–Jacobi–Bellman equations in high
      dimensions.
Summary and Conclusions

We presented in this talk:

1. New, sharp error estimates for Monte-Carlo methods...
2. ...that can be used in a wide variety of contexts to perform a sharp error
   analysis.
3. A new method for the numerical simulation of PDEs: Transported Meshfree
   Methods...
4. ...that can be used in a wide variety of applications (hyperbolic / parabolic
   equations, artificial intelligence, etc.)...
5. ...for which the error analysis applies: we can guarantee a worst-case error
   estimate, and we can check that this error matches an optimal convergence
   rate.
6. ...Thus we can argue that our numerical methods reach nearly optimal
   algorithmic complexity.