Numerical methods for stochastic systems subject
to generalized Lévy noise
by
Mengdi Zheng
Sc.B. in Physics, Zhejiang University; Hangzhou, Zhejiang, China, 2008
Sc.M. in Physics, Brown University; Providence, RI, USA, 2010
Sc.M. in Applied Math, Brown University; Providence, RI, USA, 2011
A dissertation submitted in partial fulfillment of the
requirements for the degree of Doctor of Philosophy
in The Division of Applied Mathematics at Brown University
PROVIDENCE, RHODE ISLAND
April 2015
© Copyright 2015 by Mengdi Zheng
This dissertation by Mengdi Zheng is accepted in its present form
by The Division of Applied Mathematics as satisfying the
dissertation requirement for the degree of Doctor of Philosophy.
Date
George Em Karniadakis, Ph.D., Advisor
Recommended to the Graduate Council
Date
Hui Wang, Ph.D., Reader
Date
Xiaoliang Wan, Ph.D., Reader
Approved by the Graduate Council
Date
Peter Weber, Dean of the Graduate School
Vitae
Born on September 04, 1986 in Hangzhou, Zhejiang, China.
Education
• Sc.M. in Applied Math, Brown University; Providence, RI, USA, 2011
• Sc.M. in Physics, Brown University; Providence, RI, USA, 2010
• Sc.B. in Physics, Zhejiang University; Hangzhou, Zhejiang, China, 2008
Publications
• M. Zheng, G.E. Karniadakis, 'Numerical Methods for SPDEs Driven by Multi-dimensional Lévy Jump Processes', in preparation.
• M. Zheng, B. Rozovsky, G.E. Karniadakis, 'Adaptive Wick-Malliavin Approximation to Nonlinear SPDEs with Discrete Random Variables', SIAM J. Sci. Comput., revised.
• M. Zheng, G.E. Karniadakis, 'Numerical Methods for SPDEs with Tempered Stable Processes', SIAM J. Sci. Comput., accepted.
• M. Zheng, X. Wan, G.E. Karniadakis, 'Adaptive Multi-element Polynomial Chaos with Discrete Measure: Algorithms and Application to SPDEs', Applied Numerical Mathematics (2015), pp. 91–110. doi:10.1016/j.apnum.2014.11.006
Acknowledgements
I would like to thank my advisor, Professor George Karniadakis, for his great support and guidance throughout all my years of graduate school. I would also like to thank my committee, Professor Hui Wang and Professor Xiaoliang Wan, for taking the time to read my thesis.
In addition, I would like to thank the many collaborators I have had the opportunity to work with on various projects. In particular, I thank Professor Xiaoliang Wan for his patience in answering all of my questions and for his advice and help during our work on adaptive multi-element stochastic collocation methods. I thank Professor Boris Rozovsky for offering his innovative ideas and educational discussions on our work on the Wick-Malliavin approximation for nonlinear stochastic partial differential equations driven by discrete random variables.
I would like to gratefully acknowledge the support from the NSF/DMS (grant DMS-0915077) and the Air Force MURI (grant FA9550-09-1-0613).
Lastly, I thank all my friends, and all current and former members of the CRUNCH group for their company and encouragement. I would like to thank all of the wonderful professors and staff at the Division of Applied Mathematics for making graduate school a rewarding experience.
Contents
Vitae iv
Acknowledgements vi
1 Introduction 1
1.1 Motivation 2
1.1.1 Computational limitations for UQ of nonlinear SPDEs 3
1.1.2 Computational limitations for UQ of SPDEs driven by Lévy jump processes 4
1.2 Introduction of TαS Lévy jump processes 5
1.3 Organization of the thesis 7
2 Simulation of Lévy jump processes 9
2.1 Random walk approximation to Poisson processes 10
2.2 KL expansion for Poisson processes 11
2.3 Compound Poisson approximation to Lévy jump processes 13
2.4 Series representation of Lévy jump processes 18
3 Adaptive multi-element polynomial chaos with discrete measure: Algorithms and applications to SPDEs 20
3.1 Notation 21
3.2 Generation of orthogonal polynomials for discrete measures 22
3.2.1 Nowak method 23
3.2.2 Stieltjes method 24
3.2.3 Fischer method 25
3.2.4 Modified Chebyshev method 26
3.2.5 Lanczos method 28
3.2.6 Gaussian quadrature rule associated with a discrete measure 30
3.2.7 Orthogonality tests of numerically generated polynomials 31
3.3 Discussion of the error of numerical integration 34
3.3.1 Theorem of numerical integration on discrete measure 34
3.3.2 Testing numerical integration with one RV 41
3.3.3 Testing numerical integration with multiple RVs on sparse grids 42
3.4 Application to stochastic reaction equation and KdV equation 46
3.4.1 Reaction equation with discrete random coefficients 46
3.4.2 KdV equation with random forcing 48
3.5 Conclusion 56
4 Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs 58
4.1 Notation 59
4.2 WM approximation 59
4.2.1 WM series expansion 60
4.2.2 WM propagators 64
4.3 Moment statistics by WM approximation of stochastic reaction equations 67
4.3.1 Reaction equation with one RV 67
4.3.2 Reaction equation with multiple RVs 70
4.4 Moment statistics by WM approximation of stochastic Burgers equations 72
4.4.1 Burgers equation with one RV 72
4.4.2 Burgers equation with multiple RVs 75
4.5 Adaptive WM method 77
4.6 Computational complexity 78
4.6.1 Burgers equation with one RV 79
4.6.2 Burgers equation with d RVs 82
4.7 Conclusions 84
5 Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes 86
5.1 Literature review of Lévy flights 87
5.2 Notation 89
5.3 Stochastic models driven by tempered stable white noises 89
5.4 Background of TαS processes 91
5.5 Numerical simulation of 1D TαS processes 94
5.5.1 Simulation of 1D TαS processes by CP approximation 94
5.5.2 Simulation of 1D TαS processes by series representation 97
5.5.3 Example: simulation of inverse Gaussian subordinators by CP approximation and series representation 97
5.6 Simulation of stochastic reaction-diffusion model driven by TαS white noises 100
5.6.1 Comparing CP approximation and series representation in MC 101
5.6.2 Comparing CP approximation and series representation in PCM 102
5.6.3 Comparing MC and PCM in CP approximation or series representation 108
5.7 Simulation of 1D stochastic overdamped Langevin equation driven by TαS white noises 109
5.7.1 Generalized FP equations for overdamped Langevin equations with TαS white noises 110
5.7.2 Simulating density by CP approximation 115
5.7.3 Simulating density by TFPDEs 116
5.8 Conclusions 118
6 Numerical methods for SPDEs with additive multi-dimensional Lévy jump processes 121
6.1 Literature review of generalized FP equations 122
6.2 Notation 124
6.3 Diffusion model driven by multi-dimensional Lévy jump process 124
6.4 Simulating multi-dimensional Lévy pure jump processes 127
6.4.1 LePage's series representation with radial decomposition of Lévy measure 128
6.4.2 Series representation with Lévy copula 130
6.5 Generalized FP equation for SODEs with correlated Lévy jump processes and ANOVA decomposition of joint PDF 141
6.6 Heat equation driven by bivariate Lévy jump process in LePage's representation 148
6.6.1 Exact moments 148
6.6.2 Simulating the moment statistics by PCM/S 150
6.6.3 Simulating the joint PDF P(u1, u2, t) by the generalized FP equation 154
6.6.4 Simulating moment statistics by TFPDE and PCM/S 156
6.7 Heat equation driven by bivariate TS Clayton Lévy jump process 157
6.7.1 Exact moments 157
6.7.2 Simulating the moment statistics by PCM/S 161
6.7.3 Simulating the joint PDF P(u1, u2, t) by the generalized FP equation 163
6.7.4 Simulating moment statistics by TFPDE and PCM/S 164
6.8 Heat equation driven by 10-dimensional Lévy jump processes in LePage's representation 166
6.8.1 Heat equation driven by 10-dimensional Lévy jump processes from MC/S 166
6.8.2 Heat equation driven by 10-dimensional Lévy jump processes from PCM/S 168
6.8.3 Simulating the joint PDF P(u1, u2, ..., u10) by the ANOVA decomposition of the generalized FP equation 170
6.8.4 Simulating the moment statistics by 2D-ANOVA-FP with dimension d = 4, 6, 10, 14 182
6.9 Conclusions 184
7 Summary and future work 188
7.1 Summary 189
7.2 Future work 191
List of Tables
4.1 For gPC with different orders P and WM with a fixed order of P = 3, Q = 2 in reaction equation (4.23) with one Poisson RV (λ = 0.5, y0 = 1, k(ξ) = c0(ξ; λ)/2! + c1(ξ; λ)/3! + c2(ξ; λ)/4!, σ = 0.1, RK4 scheme with time step dt = 1e−4), we compare: (1) the computational complexity ratio to evaluate k(t, ξ)y(t; ω) between gPC and WM (upper); (2) the CPU time ratio to compute k(t, ξ)y(t; ω) between gPC and WM (lower). We simulated in Matlab on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz. 69
4.2 Computational complexity ratio to evaluate the u ∂u/∂x term in the Burgers equation with d RVs between WM and gPC, as C(P, Q)^d/(P + 1)^{3d}: here we take the WM order as Q = P − 1, and gPC with order P, in different dimensions d = 2, 3, and 50. 83
5.1 MC/CP vs. MC/S: error l2u2(T) of the solution for Equation (5.1) versus the number of samples s with λ = 10 (upper) and λ = 1 (lower). T = 1, c = 0.1, α = 0.5, = 0.1, µ = 2 (upper and lower). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2]; temporal discretization: first-order Euler scheme in (5.22) with time step ∆t = 1 × 10⁻⁵. In the CP approximation: RelTol = 1 × 10⁻⁸ for integration in U(δ). 102
List of Figures
2.1 Empirical CDF of KL expansion RVs Y1, ..., YM with M = 10 KL expansion terms, for a centered Poisson process (Nt − λt) of λ = 10, Tmax = 1, with s = 10000 samples, and N = 200 points on the time domain [0, 1]. 13
2.2 Exact sample path vs. sample path approximated by the KL expansion: when λ is smaller, the sample path is better approximated. (Brownian motion is the limiting case for a centered Poisson process with very large birth rate.) 14
2.3 Exact mean vs. mean by KL expansion: when λ is larger, the KL representation seems to be better. 14
2.4 Exact 2nd moment vs. 2nd moment by KL expansion with sampled coefficients. The 2nd moments are not as well approximated as the mean. 14
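The KL construction referenced in the captions above (Figures 2.1–2.4) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis code: it uses the fact that the centered Poisson process Nt − λt has covariance λ min(s, t), so its KL eigenpairs on [0, T] are the classical Wiener-process ones scaled by λ; the grid sizes and sample count below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, M, N, s = 10.0, 1.0, 10, 200, 2000
t = np.linspace(T / N, T, N)                  # time grid on (0, T]

# Sample paths of the centered Poisson process N_t - lam*t:
# cumulative sums of independent Poisson increments, then centered.
dN = rng.poisson(lam * T / N, size=(s, N))
X = np.cumsum(dN, axis=1) - lam * t           # shape (s, N)

# Cov(X_s, X_t) = lam*min(s, t), so the KL eigenpairs are the
# Wiener-process ones scaled by lam:
#   phi_k(t) = sqrt(2/T) sin((k - 1/2) pi t / T),
#   eig_k    = lam * T^2 / ((k - 1/2)^2 pi^2).
k = np.arange(1, M + 1)
phi = np.sqrt(2.0 / T) * np.sin((k[:, None] - 0.5) * np.pi * t[None, :] / T)
eig = lam * T**2 / ((k - 0.5) ** 2 * np.pi**2)

# KL coefficients Y_k = eig_k^{-1/2} * integral_0^T X(t) phi_k(t) dt,
# approximated by a Riemann sum on the grid.
Y = (X @ phi.T) * (T / N) / np.sqrt(eig)

# The Y_k should be approximately centered with unit variance.
print(np.abs(Y.mean(axis=0)).max(), Y.var(axis=0)[:3])
```

Plotting the empirical CDFs of the resulting Y_k against the Gaussian CDF reproduces the kind of comparison shown in Figure 2.1: for small λ the KL random variables of a centered Poisson process are distinctly non-Gaussian.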
3.1 Orthogonality defined in (3.27) with respect to the polynomial order i up to 20 with Binomial distributions. 32
3.2 CPU time to evaluate orthogonality for Binomial distributions. 33
3.3 Minimum polynomial order i (vertical axis) such that orth(i) is greater than a threshold value. 34
3.4 Left: GENZ1 functions with different values of c and w; Right: h-convergence of ME-PCM for function GENZ1. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness. c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos method is employed to compute the orthogonal polynomials. 42
3.5 Left: GENZ4 functions with different values of c and w; Right: h-convergence of ME-PCM for function GENZ4. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness. c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos method is employed for numerical orthogonality. 43
3.6 Non-nested sparse grid points with respect to sparseness parameter k = 3, 4, 5, 6 for random variables ξ1, ξ2 ∼ Bino(10, 1/2), where the one-dimensional quadrature formula is based on the Gauss quadrature rule. 44
3.7 Convergence of sparse grids and tensor product grids to approximate E[fi(ξ1, ξ2)], where ξ1 and ξ2 are two i.i.d. random variables associated with a distribution Bino(10, 1/2). Left: f1 is GENZ1; Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method. 45
3.8 Convergence of sparse grids and tensor product grids to approximate E[fi(ξ1, ξ2, ..., ξ8)], where ξ1, ..., ξ8 are eight i.i.d. random variables associated with a distribution Bino(10, 1/2). Left: f1 is GENZ1; Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method. 45
3.9 p-convergence of PCM with respect to errors defined in equations (3.54) and (3.55) for the reaction equation with t = 1, y0 = 1. ξ is associated with a negative binomial distribution with c = 1/2 and β = 1. Orthogonal polynomials are generated by the Stieltjes method. 47
3.10 Left: exact solution of the KdV equation (3.65) at time t = 0, 1. Right: the pointwise error for the soliton at time t = 1. 49
3.11 p-convergence of PCM with respect to errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5 and σ = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: ξ ∼ Pois(10); Right: ξ ∼ Bino(n = 5, p = 1/2). aPC stands for arbitrary Polynomial Chaos, which is Polynomial Chaos with respect to an arbitrary measure. Orthogonal polynomials are generated by Fischer's method. 50
3.12 h-convergence of ME-PCM with respect to errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1.05, a = 1, x0 = −5, σ = 0.2, and ξ ∼ Bino(n = 120, p = 1/2), with 200 Fourier collocation points on the spatial domain [−30, 30], where two collocation points are employed in each element. Orthogonal polynomials are generated by the Fischer method (left) and the Stieltjes method (right). 51
3.13 Adapted mesh with five elements with respect to the Pois(40) distribution. 52
3.14 p-convergence of ME-PCM on a uniform mesh and an adapted mesh with respect to errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5, σ = 0.2, and ξ ∼ Pois(40), with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: errors of the mean. Right: errors of the second moment. Orthogonal polynomials are generated by the Nowak method. 53
3.15 ξ1, ξ2 ∼ Bino(10, 1/2): convergence of sparse grids and tensor product grids with respect to errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method. 54
3.16 ξ1 ∼ Bino(10, 1/2) and ξ2 ∼ N(0, 1): convergence of sparse grids and tensor product grids with respect to errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method. 55
3.17 Convergence of sparse grids and tensor product grids with respect to errors defined in equations (3.67) and (3.68) for problem (3.70), where t = 0.5, a = 0.5, x0 = −5, σi = 0.1 and ξi ∼ Bino(5, 1/2), i = 1, 2, ..., 8, with 300 Fourier collocation points on the spatial domain [−50, 50]. Orthogonal polynomials are generated by the Lanczos method. 56
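Several of the captions above rely on numerically generated orthogonal polynomials for a discrete measure. As a complement, the Stieltjes procedure (Section 3.2.2) can be sketched as follows; this is an illustrative sketch, not the thesis implementation, and the Bino(10, 1/2) measure and degree cutoff are assumptions made for the example:

```python
import numpy as np
from math import comb

# Discrete measure: support points and probability weights of Bino(10, 1/2)
# (an assumed example; the thesis tests several discrete distributions).
n, p = 10, 0.5
x = np.arange(n + 1, dtype=float)
w = np.array([comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(n + 1)])

def stieltjes(x, w, deg):
    """Monic orthogonal polynomials pi_0..pi_deg of the measure (x, w), via
    the three-term recurrence pi_{k+1} = (x - a_k) pi_k - b_k pi_{k-1},
    with a_k, b_k computed from inner products on the support points."""
    P = np.zeros((deg + 1, len(x)))   # P[k] holds pi_k evaluated on x
    P[0] = 1.0
    prev = np.zeros_like(x)
    for k in range(deg):
        nrm = np.sum(w * P[k] ** 2)
        a = np.sum(w * x * P[k] ** 2) / nrm
        b = 0.0 if k == 0 else nrm / np.sum(w * P[k - 1] ** 2)
        P[k + 1] = (x - a) * P[k] - b * prev
        prev = P[k]
    return P

P = stieltjes(x, w, 5)
# Orthogonality test (cf. the orth(i) diagnostic in the captions above):
# off-diagonal entries of the Gram matrix should vanish to machine precision.
G = P @ (w[:, None] * P.T)
off = np.max(np.abs(G - np.diag(np.diag(G))))
print(off)
```

For a measure with m + 1 support points, at most m + 1 such polynomials exist, which is why the degree cutoff matters; the resulting recurrence coefficients also yield the Gauss quadrature rule of Section 3.2.6 via the Golub-Welsch eigenvalue problem.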
4.1 Reaction equation with one Poisson RV ξ ∼ Pois(λ) (d = 1): errors versus final time T defined in (4.34) for different WM orders Q in equation (4.27), with polynomial order P = 10, y0 = 1, λ = 0.5. We used the RK4 scheme with time step dt = 1e−4; k(ξ) = c0(ξ; λ)/2! + c1(ξ; λ)/3! + c2(ξ; λ)/4!, σ = 0.1 (left); k(ξ) = c0(ξ; λ)/0! + c1(ξ; λ)/3! + c2(ξ; λ)/6!, σ = 1 (right). 68
4.2 Reaction equation with five Poisson RVs ξ1,...,5 ∼ Pois(λ) (d = 5): error defined in (4.34) with respect to time, for different WM orders Q, with parameters: λ = 1, σ = 0.5, y0 = 1, polynomial order P = 4, RK2 scheme with time step dt = 1e−3, and k(ξ1, ξ2, ..., ξ5, t) = Σ_{i=1}^{5} cos(it) c1(ξi) in equation (4.23). 70
4.3 Reaction equation with one Poisson RV ξ1 ∼ Pois(λ) and one Binomial RV ξ2 ∼ Bino(N, p) (d = 2): error defined in (4.34) with respect to time, for different WM orders Q, with parameters: λ = 1, σ = 0.1, N = 10, p = 1/2, y0 = 1, polynomial order P = 10, RK4 scheme with time step dt = 1e−4, and k(ξ1, ξ2, t) = c1(ξ1)k1(ξ2) in equation (4.23). 71
4.4 Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): l2u2(T) error defined in (6.62) versus time, with respect to different WM orders Q. Here we take in equation (4.32): polynomial expansion order P = 6, λ = 1, ν = 1/2, σ = 0.1, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 2e−4, and 100 Fourier collocation points on [−π, π]. 73
4.5 P-convergence for the Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): errors defined in equation (6.62) versus polynomial expansion order P, for different WM orders Q, and by the probabilistic collocation method (PCM) with P + 1 points, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 5e−4, 100 Fourier collocation points on [−π, π], σ = 0.5 (left), and σ = 1 (right). 73
4.6 Q-convergence for the Burgers equation with one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): errors defined in equation (6.62) versus WM order Q, for different polynomial orders P, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 5e−4, 100 Fourier collocation points on [−π, π], σ = 0.5 (left), and σ = 1 (right). The dashed lines serve as a reference for the convergence rate. 74
4.7 Burgers equation with three Poisson RVs ξ1,2,3 ∼ Pois(λ) (d = 3): error defined in equation (6.62) with respect to time, for different WM orders Q, with parameters: λ = 0.1, σ = 0.1, y0 = 1, ν = 1/100, polynomial order P = 2, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 2.5e−4. 76
4.8 Reaction equation with P-adaptivity and two Poisson RVs ξ1,2 ∼ Pois(λ) (d = 2): error defined in (4.34) with two Poisson RVs by computing the WM propagator in equation (4.27) with respect to time by the RK2 method with: fixed WM order Q = 1, y0 = 1, ξ1,2 ∼ Pois(1), a(ξ1, ξ2, t) = c1(ξ1; λ)c1(ξ2; λ), for fixed polynomial order P (dashed lines), for varied polynomial order P (solid lines), for σ = 0.1 (left), and σ = 1 (right). Adaptive criterion values are: l2err(t) ≤ 1e−8 (left), and l2err(t) ≤ 1e−6 (right). 77
4.9 Burgers equation with P-Q-adaptivity and one Poisson RV ξ ∼ Pois(λ) (d = 1, ψ1(x, t) = 1): error defined in equation (6.62) by computing the WM propagator in equation (4.32) with the IMEX (RK2/Crank-Nicolson) method (λ = 1, ν = 1/2, time step dt = 2e−4). Fixed polynomial order P = 6, σ = 1, and Q is varied (left); fixed WM order Q = 3, σ = 0.1, and P is varied (right). Adaptive criterion value is: l2u2(T) ≤ 1e−10 (left and right). 78
4.10 Terms in Σ_{p=0}^{Q} Σ_{i=0}^{P} û_i (∂û_{k+2p−i}/∂x) K_{i,k+2p−i,p} for each PDE in the WM propagator for the Burgers equation with one RV in equation (4.38), denoted by dots on the grids: here P = 4, Q = 1/2, k = 0, 1, 2, 3, 4. Each grid represents a PDE in the WM propagator, labeled by k. Each dot represents a term in the sum Σ_{p=0}^{Q} Σ_{i=0}^{P} û_i (∂û_{k+2p−i}/∂x) K_{i,k+2p−i,p}. The small index next to the dot is for p, the x direction is the index i for û_i, and the y direction is the index k + 2p − i in ∂û_{k+2p−i}/∂x. The dots on the same diagonal line have the same index p. 81
4.11 The total number of terms of the form û_{m1...md} (∂/∂x) û_{k1+2p1−m1,...,kd+2pd−md} K_{m1,k1+2p1−m1,p1} ... K_{md,kd+2pd−md,pd} in the WM propagator for the Burgers equation with d RVs, as C(P, Q)^d: for dimensions d = 2 (left) and d = 3 (right). Here we assume P1 = ... = Pd = P and Q1 = ... = Qd = Q. 83
5.1 Empirical histograms of an IG subordinator (α = 1/2) simulated via the CP approximation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation contains s = 10⁶ samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); they have different jump truncation sizes δ = 0.1 (left, dotted, CPU time 1450s), δ = 0.02 (middle, dotted, CPU time 5710s), and δ = 0.005 (right, dotted, CPU time 38531s). The reference PDFs are plotted in red solid lines; the one-sample K-S test values are calculated for each plot; the RelTol of integration in U(δ) and b_δ is 1 × 10⁻⁸. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab. 99
5.2 Empirical histograms of an IG subordinator (α = 1/2) simulated via the series representation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation is done on the time domain [0, 0.5] and contains s = 10⁶ samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); they have different numbers of truncations in the series: Qs = 10 (left, dotted, CPU time 129s), Qs = 100 (middle, dotted, CPU time 338s), and Qs = 1000 (right, dotted, CPU time 2574s). The reference PDFs are plotted in red solid lines; the one-sample K-S test values are calculated for each plot. These runs were done on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz in Matlab. 99
5.3 PCM/CP vs. PCM/S: error l2u2(T) of the solution for Equation (5.1) versus the number of jumps Qcp (in PCM/CP) or Qs (in PCM/S) with λ = 10 (left) and λ = 1 (right). T = 1, c = 0.1, α = 0.5, = 0.1, µ = 2, Nx = 500 Fourier collocation points on [0, 2] (left and right). In PCM/CP: RelTol = 1 × 10⁻¹⁰ for the integration in U(δ). In PCM/S: RelTol = 1 × 10⁻⁸ for the integration of E[((αδ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α})²]. 107
5.4 PCM vs. MC: error l2u2(T) of the solution for Equation (5.1) versus the number of samples s obtained by MC/CP and PCM/CP with δ = 0.01 (left) and by MC/S with Qs = 10 and PCM/S (right). T = 1, c = 0.1, α = 0.5, λ = 1, = 0.1, µ = 2 (left and right). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2] (left and right); temporal discretization: first-order Euler scheme in (5.22) with time step ∆t = 1 × 10⁻⁵ (left and right). In both MC/CP and PCM/CP: RelTol = 1 × 10⁻⁸ for the integration in U(δ). 109
5.5 Zoomed-in density Pts(t, x) plots for the solution of Equation (5.2) at different times obtained from solving Equation (5.37) for α = 0.5 (left) and Equation (5.42) for α = 1.5 (right): σ = 0.4, x0 = 1, c = 1, λ = 10 (left); σ = 0.1, x0 = 1, c = 0.01, λ = 0.01 (right). We have Nx = 2000 equidistant spatial points on [−12, 12] (left); Nx = 2000 points on [−20, 20] (right). The time step is ∆t = 1 × 10⁻⁴ (left) and ∆t = 1 × 10⁻⁵ (right). The initial conditions are approximated by δ_D^20 (left and right). 114
5.6 Density/CP vs. PCM/CP with the same δ: errors err1st and err2nd of the solution for Equation (5.2) versus time obtained by the density Equation (5.36) with CP approximation and by PCM/CP in Equation (5.55). c = 0.5, α = 0.95, λ = 10, σ = 0.01, x0 = 1 (left); c = 0.01, α = 1.6, λ = 0.1, σ = 0.02, x0 = 1 (right). In the density/CP: RK2 with time step ∆t = 2 × 10⁻³, 1000 Fourier collocation points on [−12, 12] in space, δ = 0.012, RelTol = 1 × 10⁻⁸ for U(δ), and initial condition given by δ_D^20 (left and right). In the PCM/CP: the same δ = 0.012 as in the density/CP. 116
5.7 TFPDE vs. PCM/CP: error err2nd of the solution for Equation (5.2) versus time with λ = 10 (left) and λ = 1 (right). Problems we are solving: α = 0.5, c = 2, σ = 0.1, x0 = 1 (left and right). For PCM/CP: RelTol = 1 × 10⁻⁸ for U(δ) (left and right). For the TFPDE: finite difference scheme in (5.47) with ∆t = 2.5 × 10⁻⁵, Nx equidistant points on [−12, 12], initial condition given by δ_D^40 (left and right). 118
5.8 Zoomed-in plots of the density Pts(x, T) by solving the TFPDE (5.37) and the empirical histogram by MC/CP at T = 0.5 (left) and T = 1 (right): α = 0.5, c = 1, λ = 1, x0 = 1 and σ = 0.01 (left and right). In the MC/CP: sample size s = 10⁵, 316 bins, δ = 0.01, RelTol = 1 × 10⁻⁸ for U(δ), time step ∆t = 1 × 10⁻³ (left and right). In the TFPDE: finite difference scheme given in (5.47) with ∆t = 1 × 10⁻⁵ in time, Nx = 2000 equidistant points on [−12, 12] in space, and the initial conditions approximated by δ_D^40 (left and right). We perform the one-sample K-S tests here to test how well the two methods match. 119
6.1 An illustration of the applications of multi-dimensional Lévy jump models in mathematical finance. 127
6.2 Three ways to correlate Lévy pure jump processes. 128
6.3 The Lévy measures of bivariate tempered stable Clayton processes with different dependence strengths (described by the correlation length τ) between their L1 and L2 components. 133
6.4 The Lévy measures of bivariate tempered stable Clayton processes with different dependence strengths (described by the correlation length τ) between their L1^{++} and L2^{++} components (only in the ++ corner). It shows how the dependence structure changes with respect to the parameter τ in the Clayton family of copulas. 134
6.5 Trajectories of the components L1^{++}(t) (in blue) and L2^{++}(t) (in green) whose dependence is described by a Clayton copula with dependence structure parameter τ. Observe how the trajectories get more similar when τ increases. 137
6.6 Sample path of (L1, L2) with marginal Lévy measure given by equation (6.14) and Lévy copula given by (6.13), with each component, such as F^{++}, given by a Clayton copula with parameter τ. Observe that when τ is bigger, the 'flipping' motion happens more symmetrically, because there is an equal chance for jumps to have the same sign with the same size, and for jumps to have opposite signs with the same size. 139
6.7 Sample paths of bivariate tempered stable Clayton Lévy jump processes (L1, L2) simulated by the series representation given in Equation (6.30). We simulate two sample paths for each value of τ. 140
6.8 An illustration of the three methods used in this thesis to solve for the moment statistics of Equation (6.1). 140
6.9 An illustration of the three methods used in this thesis to solve for the moment statistics of Equation (6.1). 147
6.10 An illustration of the three methods used in this thesis to solve for the moment statistics of Equation (6.1). 148
6.11 PCM/S (probabilistic) vs. MC/S (probabilistic): error l2u2(t) of the solution for Equation (6.1) with a bivariate pure jump Lévy process with the Lévy measure in radial decomposition given by Equation (6.9), versus the number of samples s obtained by MC/S and PCM/S (left), and versus the number of collocation points per RV obtained by PCM/S with a fixed number of truncations Q in Equation (6.10) (right). t = 1, c = 1, α = 0.5, λ = 5, µ = 0.01, NSR = 16.0% (left and right). In MC/S: first-order Euler scheme with time step ∆t = 1 × 10⁻³ (right). 151
6.12 PCM/series rep. vs. exact: T = 1. We test the noise/signal (variance/mean) ratio to be 4% at T = 1. 152
6.13 PCM/series d-convergence and Q-convergence at T = 1. We test the noise/signal (variance/mean) ratio to be 4% at t = 1. The l2u2 error is defined as l2u2(t) = ||E_ex[u²(x, t; ω)] − E_num[u²(x, t; ω)]||_{L²([0,2])} / ||E_ex[u²(x, t; ω)]||_{L²([0,2])}. 153
6.14 MC vs. exact: T = 1. Choice of parameters for this problem: we evaluated the moment statistics numerically with integration relative tolerance 10⁻⁸. With this set of parameters, we test the noise/signal (variance/mean) ratio to be 4% at T = 1. 153
6.15 MC vs. exact: T = 2. Choice of parameters for this problem: we evaluated the moment statistics numerically with integration relative tolerance 10⁻⁸. With this set of parameters, we test the noise/signal (variance/mean) ratio to be 10% at T = 2. 154
6.16 FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t)
of the SODE system in Equation (6.59) from the FP Equation (6.41)
(3D contour plot), joint histogram by MC/S (2D contour plot on the
x-y plane), and horizontal (subfigure) and vertical (subfigure) slices at
the peaks of the density surface from the FP equation and MC/S.
Final time is t = 1 (left, NSR = 16.0%) and t = 1.5 (right). c = 1,
α = 0.5, λ = 5, µ = 0.01. In MC/S: first-order Euler scheme with
time step Δt = 1 × 10^-3, 200 bins in both the u1 and u2 directions,
Q = 40, sample size s = 10^6. In FP: initial condition is given by MC
data at t0 = 0.5, RK2 scheme with time step Δt = 4 × 10^-3. . . . . 155
6.17 TFPDE (deterministic) vs. PCM/S (probabilistic): error l2u2(t) of
the solution for Equation (6.1) with a bivariate pure jump Lévy
process with the Lévy measure in radial decomposition given by
Equation (6.9), obtained by PCM/S in Equation (6.64) (stochastic
approach) and by the TFPDE in Equation (6.41) (deterministic
approach), versus time. α = 0.5, λ = 5, µ = 0.001 (left and right).
c = 0.1 (left); c = 1 (right). In TFPDE: initial condition is given by
δ^G_2000 in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^-3. . . . . 156
6.18 Exact mean, variance, and NSR versus time. The noise/signal ratio
is 10% at T = 0.5. . . . . 160
6.19 PCM/S (probabilistic) vs. MC/S (stochastic): error l2u2(t) of the
solution for Equation (6.1) driven by a bivariate TS Clayton Lévy
process with the Lévy measure given in Section 1.2.2 versus the number
of truncations Q in the series representation (6.32) by PCM/S (left)
and versus the number of samples s in MC/S with the series
representation (6.30) by computing Equation (6.59) (right). t = 1,
α = 0.5, λ = 5, µ = 0.01, τ = 1 (left and right). c = 0.1, NSR = 10.1%
(right). In MC/S: first-order Euler scheme with time step
Δt = 1 × 10^-2 (right). . . . . 162
6.20 Q-convergence (with various λ) of PCM/S in Equation (6.64): α = 0.5,
µ = 0.01, RelTol of the integration of the moments of the jump sizes
is 1e-8. . . . . 162
6.21 FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t)
of the SODE system in Equation (6.59) from the FP Equation (6.40)
(three-dimensional contour plot), joint histogram by MC/S (2D contour
plot on the x-y plane), and horizontal (left, subfigure) and vertical
(right, subfigure) slices at the peaks of the density surfaces from the
FP equation and MC/S. Final time t = 1 (left) and t = 1.5 (right).
c = 0.5, α = 0.5, λ = 5, µ = 0.005, τ = 1 (left and right). In MC/S:
first-order Euler scheme with time step Δt = 0.02, Q = 2 in the series
representation (6.30), sample size s = 10^4. 40 bins in both the u1 and
u2 directions (left); 20 bins in both the u1 and u2 directions (right).
In FP: initial condition is given by δ^G_1000 in Equation (6.67), RK2
scheme with time step Δt = 4 × 10^-3. . . . . 164
6.22 TFPDE (deterministic) vs. PCM/S (stochastic): error l2u2(t) of the
solution for Equation (6.1) driven by a bivariate TS Clayton Lévy
process with the Lévy measure given in Section 1.2.2 versus time,
obtained by PCM/S in Equation (6.81) (stochastic approach) and by
the TFPDE (6.40) (deterministic approach). c = 1, α = 0.5, λ = 5,
µ = 0.01 (left and right). c = 0.05, µ = 0.001 (left). c = 1, µ = 0.005
(right). In TFPDE: initial condition is given by δ^G_1000 in Equation
(6.67), RK2 scheme with time step Δt = 4 × 10^-3. . . . . 165
6.23 S-convergence in MC/S with 10-dimensional Lévy jump processes:
difference in E[u^2] (left) between different sample sizes s and s = 10^6
(as a reference). The heat equation (6.1) is driven by a 10-dimensional
jump process with the Lévy measure (6.9) and solved by MC/S with
the series representation (6.10). We show the L2 norm of these
differences versus s (right). Final time T = 1, c = 0.1, α = 0.5,
λ = 10, µ = 0.01, time step Δt = 4 × 10^-3, and Q = 10. The NSR at
T = 1 is 6.62%. . . . . 167
6.24 Samples of (u1, u2) (left) and joint PDF of (u1, u2, ..., u10) on the
(u1, u2) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01,
dt = 4e-3 (first-order Euler scheme), T = 1, Q = 10 (number of
truncations in the series representation), and sample size s = 10^6. . . . . 167
6.25 Samples of (u9, u10) (left) and joint PDF of (u1, u2, ..., u10) on the
(u9, u10) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01,
dt = 4e-3 (first-order Euler scheme), T = 1, Q = 10 (number of
truncations in the series representation), and sample size s = 10^6. . . . . 168
6.26 First two moments of the solution of the heat equation (6.1) driven by
a 10-dimensional jump process with the Lévy measure (6.9), obtained
by MC/S with the series representation (6.10), at final time T = 0.5
(left) and T = 1 (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e-3
(first-order Euler scheme), Q = 10, and sample size s = 10^6. . . . . 169
6.27 Q-convergence in PCM/S with 10-dimensional Lévy jump processes:
difference in E[u^2] (left) between different series truncation orders Q
and Q = 16 (as a reference). The heat equation (6.1) is driven by a
10-dimensional jump process with the Lévy measure (6.9) and solved
by MC/S with the series representation (6.10). We show the L2 norm
of these differences versus Q (right). Final time T = 1, c = 0.1,
α = 0.5, λ = 10, µ = 0.01. The NSR at T = 1 is 6.62%. . . . . 169
6.28 MC/S vs. PCM/S with 10-dimensional Lévy jump processes: difference
between E[u^2] computed from MC/S and that computed from PCM/S
at final time T = 0.5 (left) and T = 1 (right). The heat equation (6.1)
is driven by a 10-dimensional jump process with the Lévy measure
(6.9) and solved by MC/S with the series representation (6.10).
c = 0.1, α = 0.5, λ = 10, µ = 0.01. In MC/S: time step Δt = 4 × 10^-3,
Q = 10. In PCM/S: Q = 16. . . . . 170
6.29 The function in Equation (6.82) with d = 2 (upper and lower left)
and its ANOVA approximation with an effective dimension of two
(upper and lower right). A = 0.5, d = 2. . . . . 173
6.30 The function in Equation (6.82) with d = 2 (upper and lower left)
and its ANOVA approximation with an effective dimension of two
(upper and lower right). A = 0.1, d = 2. . . . . 173
6.31 The function in Equation (6.82) with d = 2 (upper and lower left)
and its ANOVA approximation with an effective dimension of two
(upper and lower right). A = 0.01, d = 2. . . . . 174
6.32 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Lévy jump
processes: the mean (left) of the solution of the heat equation (6.1)
driven by a 10-dimensional jump process with the Lévy measure (6.9),
computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The L2
norms of the differences in E[u] between these three methods are
plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10^-4.
In 1D-ANOVA-FP: Δt = 4 × 10^-3 in RK2, M = 30 elements, q = 4
GLL points on each element. In 2D-ANOVA-FP: Δt = 4 × 10^-3 in
RK2, M = 5 elements in each direction, q^2 = 16 GLL points on each
element. In PCM/S: Q = 10 in the series representation (6.10). Initial
condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4,
Δt = 4 × 10^-3. NSR ≈ 18.24% at T = 1. . . . . 175
6.33 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Lévy jump
processes: the second moment (left) of the solution of the heat equation
(6.1) driven by a 10-dimensional jump process with the Lévy measure
(6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S.
The L2 norms of the differences in E[u^2] between these three methods
are plotted versus final time T (right). c = 1, α = 0.5, λ = 10,
µ = 10^-4. In 1D-ANOVA-FP: Δt = 4 × 10^-3 in RK2, M = 30
elements, q = 4 GLL points on each element. In 2D-ANOVA-FP:
Δt = 4 × 10^-3 in RK2, M = 5 elements in each direction, q^2 = 16 GLL
points on each element. Initial condition of ANOVA-FP: MC/S data
at t0 = 0.5, s = 1 × 10^4, Δt = 4 × 10^-3. In PCM/S: Q = 10 in the
series representation (6.10). NSR ≈ 18.24% at T = 1. . . . . 176
6.34 Evolution of the marginal distributions pi(xi, t) at final times
t = 0.6, ..., 1. c = 1, α = 0.5, λ = 10, µ = 10^-4. Initial condition from
MC: t0 = 0.5, s = 10^4, dt = 4 × 10^-3, Q = 10. 1D-ANOVA-FP: RK2
with time step dt = 4 × 10^-3, 30 elements with 4 GLL points on each
element. . . . . 177
6.35 The mean E[u] at different final times by PCM (Q = 10) and by
solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10,
µ = 1e-4. Initial condition from MC: s = 10^4, dt = 4 × 10^-3, Q = 10.
1D-ANOVA-FP: RK2 with dt = 4 × 10^-3, 30 elements with 4 GLL
points on each element. . . . . 178
6.36 The second moment E[u^2] at different final times by PCM (Q = 10)
and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10,
µ = 1e-4. Initial condition from MC: s = 10^4, dt = 4 × 10^-3, Q = 10.
1D-ANOVA-FP: RK2 with dt = 4 × 10^-3, 30 elements with 4 GLL
points on each element. . . . . 179
6.37 The second moment E[u^2] at different final times by PCM (Q = 10)
and by solving the 2D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10,
µ = 10^-4. Initial condition from MC: s = 10^4, dt = 4 × 10^-3, Q = 10.
2D-ANOVA-FP: RK2 with dt = 4 × 10^-3, 30 elements with 4 GLL
points on each element. . . . . 180
6.38 Left: sensitivity index defined in Equation (6.87) for each pair (i, j),
j ≥ i. Right: sensitivity index defined in Equation (6.88) for each pair
(i, j), j ≥ i. They are computed from the MC data at t0 = 0.5 with
s = 10^4 samples. . . . . 182
6.39 Error growth of 2D-ANOVA-FP in different dimensions d: the error
growth l2u1rel(T; t0) in E[u] defined in Equation (6.91) versus final
time T (left); the error growth l2u2rel(T; t0) in E[u^2] defined in
Equation (6.92) versus T (middle); l2u1rel(T = 1; t0) and
l2u2rel(T = 1; t0) versus dimension d (right). We consider the diffusion
equation (6.1) driven by a d-dimensional jump process with the Lévy
measure (6.9), computed by 2D-ANOVA-FP and PCM/S. c = 1,
α = 0.5, µ = 10^-4 (left, middle, right). In Equation (6.49):
Δt = 4 × 10^-3 in RK2, M = 30 elements, q = 4 GLL points on each
element. In Equation (6.50): Δt = 4 × 10^-3 in RK2, M = 5 elements
in each direction, q^2 = 16 GLL points on each element. Initial
condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4,
Δt = 4 × 10^-3, and Q = 16. In PCM/S: Q = 16 in the series
representation (6.10). NSR ≈ 20.5% at T = 1 for all dimensions
d = 2, 4, 6, 10, 14, 18. These runs were done on an Intel(R) Core(TM)
i5-3470 CPU @ 3.20 GHz in Matlab. . . . . 184
7.1 Summary of thesis . . . . 189
Chapter One
Introduction
1.1 Motivation
Stochastic partial differential equations (SPDEs) are widely used for stochastic
modeling in diverse applications in physics, engineering, biology, and many other
fields, where the sources of uncertainty include random coefficients and stochastic
forcing. Our work is motivated by two considerations: practical applications, and
the shortcomings of past work.
The source of uncertainty can, in practice, be any non-Gaussian process. In many
cases, random parameters are only observed at discrete values, which implies that
a discrete probability measure is more appropriate from the modeling point of view.
More generally, random processes with jumps are of fundamental importance in
stochastic modeling, e.g., stochastic-volatility jump-diffusion models in finance [171],
stochastic simulation algorithms for modeling diffusion, reaction, and taxis in
biology [41], fluid models with jumps [158], quantum-jump models in physics [35], etc.
This motivates our work on simulating SPDEs driven by discrete random variables
(RVs). Nonlinear SPDEs with discrete RVs and jump processes are of practical use,
since sources of stochastic excitation, including uncertain parameters and
boundary/initial conditions, are typically observed at discrete values. Many complex
systems of fundamental and industrial importance are significantly affected by the
underlying fluctuations/variations in random excitations, such as stochastic-volatility
jump-diffusion models in mathematical finance [12, 13, 24, 27, 28, 171], stochastic
simulation algorithms for modeling diffusion, reaction, and taxis in biology [41], the
truncated Lévy flight model in turbulence [85, 106, 121, 158], and quantum-jump
models in physics [35].
An interesting source of uncertainty is Lévy jump processes, such as tempered
α-stable (TαS) processes. TαS processes were introduced in statistical physics to
model turbulence, e.g., the truncated Lévy flight model [85, 106, 121], and in
mathematical finance to model stochastic volatility, e.g., the CGMY model [27, 28].
The empirical distribution of asset prices does not always follow a stable or a
normal distribution: its tail is heavier than that of a normal distribution and thinner
than that of a stable distribution [20]. Therefore, the TαS process was introduced,
as the CGMY model, to modify the Black-Scholes model. More details on white
noise theory for Lévy jump processes with applications to SPDEs and finance can
be found in [18, 120, 96, 97, 124]. Although one-dimensional (1D) jump models in
finance are constructed with Lévy processes [14, 86, 100], many financial models
require multi-dimensional Lévy jump processes with dependent components [33],
such as basket option pricing [94], portfolio optimization [39], and risk scenarios for
portfolios [33]. Historically, multi-dimensional Gaussian models have been widely
applied in finance because of the simplicity of their dependence structures [134];
in some applications, however, we must take jumps in price processes into
account [27, 28].
This work builds on previous work in the field of uncertainty quantification
(UQ), which includes the generalized polynomial chaos method (gPC), the
multi-element generalized polynomial chaos method (ME-gPC), the probabilistic
collocation method (PCM), sparse collocation methods, analysis of variance
(ANOVA), and many other variants (see, e.g., [8, 9, 50, 52, 58, 156] and references
therein).
1.1.1 Computational limitations for UQ of nonlinear SPDEs
Numerically, nonlinear SPDEs with discrete processes are often solved by gPC,
involving a system of coupled deterministic nonlinear equations [169], or by the
probabilistic collocation method (PCM) [50, 170, 177], involving the corresponding
nonlinear PDEs obtained at the collocation points. For stochastic processes with a
short correlation length, the number of RVs required to represent the process can
be extremely large. Therefore, the gPC propagator for a nonlinear SPDE driven by
such a process can consist of a very large number of highly coupled equations.
1.1.2 Computational limitations for UQ of SPDEs driven by Lévy jump processes
For simulations of Lévy jump processes such as TαS processes, we do not know the
distribution of increments explicitly [33], but we may still simulate the trajectories
of TαS processes by the random walk approximation [10]. However, the random
walk approximation does not identify the times and sizes of the large jumps
precisely [139, 140, 141, 142]. In the heavy-tailed case, large jumps contribute more
than small jumps to functionals of a Lévy process. Therefore, in this case, we have
mainly used two other ways to simulate the trajectories of a TαS process numerically:
the compound Poisson (CP) approximation [33] and the series representation [140].
In the CP approximation, we replace the jumps smaller than a certain size δ by their
expectation and treat the remaining process with larger jumps as a CP process [33].
There are six different series representations of Lévy jump processes: the inverse
Lévy measure method [44, 82], LePage's method [92], Bondesson's method [23], the
thinning method [140], the rejection method [139], and the shot noise
method [140, 141]. However, in each representation the number of RVs involved is
very large (on the order of 100). In this work, for TαS processes, we use the shot
noise representation of Lt as the series representation method, because the tail of the
Lévy measure of a TαS process does not have an explicit inverse [142]. Both the CP
approximation and the series approximation converge slowly when the jumps of the
Lévy process are highly concentrated around zero; however, both can be improved
by replacing the small jumps with a Brownian motion [6]. The α-stable distribution
was introduced to model the empirical distribution of asset prices [104], replacing
the normal distribution. In the past literature, the simulation of SDEs or functionals
of TαS processes was mainly done via MC [128]. MC for functionals of TαS processes
is possible after a change of measure that transforms TαS processes into stable
processes [130].
1.2 Introduction to TαS Lévy jump processes
TαS processes were introduced in statistical physics to model turbulence, e.g., the
truncated Lévy flight model [85, 106, 121], and in mathematical finance to model
stochastic volatility, e.g., the CGMY model [27, 28]. Here, we consider a symmetric
TαS process (Lt) as a pure jump Lévy martingale with characteristic triplet
(0, ν, 0) [19, 143] (no drift and no Gaussian part). The Lévy measure is given
by [33]^1:

ν(x) = c e^{-λ|x|} / |x|^{α+1},  0 < α < 2.  (1.1)

This Lévy measure can be interpreted as an Esscher transformation [57] of that of
a stable process, with exponential tilting of the Lévy measure. The parameter
c > 0 alters the intensity of jumps of all sizes; it changes the time scale of the
process. Also, λ > 0 fixes the decay rate of big jumps, while α determines the
relative importance of small jumps in the path of the process^2. The probability
density of Lt at a given time is not available in closed form (except when α = 1/2^3).
^1 In a more general form, the Lévy measure is ν(x) = c_- e^{-λ_-|x|} / |x|^{α+1} I_{x<0} + c_+ e^{-λ_+|x|} / |x|^{α+1} I_{x>0}.
We may have different coefficients c_+, c_-, λ_+, λ_- for the positive and the negative jump parts.
^2 In the case α = 0, Lt is the gamma process.
^3 See inverse Gaussian processes.
The characteristic exponent of Lt is [33]:

Φ(s) = s^{-1} log E[e^{isL_s}] = 2Γ(-α)λ^α c[(1 - is/λ)^α - 1 + isα/λ],  α ≠ 1,  (1.2)

where Γ(x) is the Gamma function and E is the expectation. By taking derivatives
of the characteristic exponent we obtain the mean and variance:

E[L_t] = 0,  Var[L_t] = 2tΓ(2 - α)cλ^{α-2}.  (1.3)
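As a sanity check, Equation (1.3) can be recovered numerically from Equation (1.2): for a Lévy process, log E[e^{isL_t}] = tΦ(s), so E[L_t] = Re(-i t Φ'(0)) and Var[L_t] = -t Φ''(0). The sketch below is our own Python illustration (not code from this thesis; the parameter values are arbitrary), approximating the derivatives by central finite differences:

```python
from scipy.special import gamma

# Parameters of the symmetric TαS process (illustrative values only)
c, lam, alpha = 1.0, 5.0, 0.5
t = 1.0

def phi(s):
    """Characteristic exponent of Eq. (1.2), valid for alpha != 1."""
    return 2.0 * gamma(-alpha) * lam**alpha * c * (
        (1.0 - 1j * s / lam)**alpha - 1.0 + 1j * s * alpha / lam)

# Cumulants from derivatives of phi at s = 0 (central finite differences):
# E[L_t] = Re(-i t phi'(0)),  Var[L_t] = Re(-t phi''(0)).
h = 1e-4
d1 = (phi(h) - phi(-h)) / (2 * h)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2

mean_num = (-1j * t * d1).real
var_num = (-t * d2).real

# Closed form from Eq. (1.3)
var_ex = 2 * t * gamma(2 - alpha) * c * lam**(alpha - 2)
print(mean_num, var_num, var_ex)
```

The finite-difference cumulants agree with the closed forms, using the identity Γ(2 - α) = α(α - 1)Γ(-α).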
In order to derive the second moments of the exact solutions of Equations (5.1) and
(5.2), we introduce the Itô isometry. The jump of L_t is defined by ΔL_t = L_t - L_{t-}.
We define the Poisson random measure N(t, U) as [71, 119, 123]:

N(t, U) = Σ_{0≤s≤t} I_{ΔL_s ∈ U},  U ∈ B(R_0),  Ū ⊂ R_0.  (1.4)

Here R_0 = R \ {0}, and B(R_0) is the σ-algebra generated by the family of all Borel
subsets U ⊂ R such that Ū ⊂ R_0; I_A is an indicator function. The Poisson random
measure N(t, U) counts the number of jumps of size ΔL_s ∈ U up to time t. In order
to introduce the Itô isometry, we define the compensated Poisson random measure
Ñ [71] as:

Ñ(dt, dz) = N(dt, dz) - ν(dz)dt = N(dt, dz) - E[N(dt, dz)].  (1.5)

The TαS process L_t (as a martingale) can also be written as:

L_t = ∫_0^t ∫_{R_0} z Ñ(dτ, dz).  (1.6)

For any t, let F_t be the σ-algebra generated by (L_t, Ñ(ds, dz)), z ∈ R_0, s ≤ t. We
define the filtration F = {F_t, t ≥ 0}. If a stochastic process θ_t(z), t ≥ 0, z ∈ R_0,
is F_t-adapted, we have the following Itô isometry [119]:

E[(∫_0^T ∫_{R_0} θ_t(z) Ñ(dt, dz))^2] = E[∫_0^T ∫_{R_0} θ_t^2(z) ν(dz) dt].  (1.7)
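As a simple illustration of Equation (1.7), take θ_t(z) = z: by Equation (1.6) the left-hand side becomes E[L_T^2] = Var[L_T], while the right-hand side reduces to the deterministic integral T ∫_{R_0} z^2 ν(dz). The Python sketch below (our own illustration; parameter values are arbitrary) evaluates the right-hand side by quadrature and compares it with Equation (1.3):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

c, lam, alpha = 1.0, 5.0, 0.5
T = 1.0

# Lévy measure density of Eq. (1.1): nu(x) = c exp(-lam|x|) / |x|^(alpha+1)
nu = lambda x: c * np.exp(-lam * abs(x)) / abs(x)**(alpha + 1)

# Right-hand side of the isometry with theta_t(z) = z:
# T * int_{R_0} z^2 nu(z) dz, using symmetry of nu
rhs, _ = quad(lambda z: 2 * z**2 * nu(z), 0, np.inf)
rhs *= T

# Left-hand side: E[L_T^2] = Var[L_T] from Eq. (1.3)
lhs = 2 * T * gamma(2 - alpha) * c * lam**(alpha - 2)

print(lhs, rhs)
```

The two sides agree because ∫_0^∞ z^{1-α} e^{-λz} dz = Γ(2 - α) λ^{α-2}.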
1.3 Organization of the thesis
In Chapter 2, we discuss four methods to simulate Lévy jump processes, providing
preliminaries and background information for the reader: 1. random walk
approximation; 2. Karhunen-Loève expansion; 3. compound Poisson approximation;
4. series representation.
In Chapter 3, methods for generating orthogonal polynomial bases with respect
to discrete measures are presented, followed by a discussion of the error of numerical
integration. Numerical solutions of the stochastic reaction equation and the
Korteweg-de Vries (KdV) equation, including adaptive procedures, are explained,
and the work is summarized. In the appendices, we provide more details about the
deterministic KdV equation solver and the adaptive procedure.
In Chapter 4, we define the WM expansion and derive the Wick-Malliavin
propagators for a stochastic reaction equation and a stochastic Burgers equation. We
present several numerical results for SPDEs with one RV and with multiple RVs,
including an adaptive procedure to control the error in time. We also compare the
computational complexity of gPC and WM for the stochastic Burgers equation at
the same level of accuracy, and we provide an iterative algorithm to generate the
coefficients in the WM approximation.
In Chapter 5, we compare the CP approximation and the series representation
of a TαS process. We solve a stochastic reaction-diffusion equation with TαS white
noise via MC and PCM, each with either the CP approximation or the series
representation of the TαS process. We simulate the density evolution for an
overdamped Langevin equation with TαS white noise via the corresponding
generalized FP equations, and we compare the statistics obtained from the FP
equations with those from the MC and PCM methods. We also provide the
algorithms for the rejection method and for the simulation of CP processes, as well
as the probability distributions used to simplify the series representation.
In Chapter 6, by MC, PCM, and FP, we solve the moment statistics of the
solution of a heat equation driven by a 2D Lévy noise in LePage's series
representation; of the same equation driven by a 2D Lévy noise described by a Lévy
copula; and of the heat equation driven by a 10D Lévy noise in LePage's series
representation, where the FP equation is decomposed by the unanchored ANOVA
decomposition. We also examine the error growth versus the dimension of the Lévy
process, and we show how we simplify the multi-dimensional integrals in the FP
equations into 1D and 2D integrals.
In Chapter 7, lastly, we summarize the scope of the SPDEs, the stochastic
processes, and the methods we have experimented with so far. We summarize the
computational cost and accuracy of our numerical experiments, and we suggest
feasible future work on methodology and applications.
Chapter Two
Simulation of Lévy jump processes
In general, there are three ways to generate a Lévy process [140]: random walk
approximation, series representation, and compound Poisson (CP) approximation.
The random walk approximation replaces the continuous-time process by a discrete
random walk on a discrete time sequence, provided the marginal distribution of the
process is known. It is often used to simulate Lévy jump processes with large jumps,
but it does not identify the times and sizes of the large jumps
precisely [139, 140, 141, 142]. We also attempt to simulate a non-Gaussian process
by the Karhunen-Loève (KL) expansion, by computing the covariance kernel and its
eigenfunctions. In the CP approximation, we replace the jumps smaller than a
certain size by their expectation, as a drift term, and treat the remaining process
with large jumps as a CP process [33]. There are six different series representations
of Lévy jump processes: the inverse Lévy measure method [44, 82], LePage's
method [92], Bondesson's method [23], the thinning method [140], the rejection
method [139], and the shot noise method [140, 141].
2.1 Random walk approximation to Poisson processes
For a Lévy jump process L_t on a fixed time grid [t_0, t_1, t_2, ..., t_N], we may
approximate L_t by L_t ≈ Σ_{i=1}^N X_i I_{t_i ≤ t}. When the marginal distribution of
L_t is known, X_i is distributed as L_{t_i - t_{i-1}}. Therefore, on the fixed time grid,
we may generate the RVs X_i by sampling from the known distribution. When L_t is
composed of large jumps with low intensity (rate of jumps), this can be a good
approximation. However, we are mostly interested in Lévy jump processes with
infinite activity (high rates of jumps), so this will not be a good approximation for
the kind of processes we consider, such as tempered α-stable processes.
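When the increment distribution is known in closed form, the random walk approximation is straightforward to implement. The sketch below is our own Python illustration (not thesis code), using the gamma process, i.e., the α = 0 case noted in Section 1.2, whose increment over a step Δt is Gamma-distributed with shape cΔt and scale 1/λ:

```python
import numpy as np

rng = np.random.default_rng(0)
c, lam = 1.0, 5.0        # intensity and tilting parameters (illustrative)
T, N = 1.0, 500          # time horizon and number of grid points
dt = T / N

# Increments X_i distributed as L_{t_i - t_{i-1}}; for the gamma process
# this is Gamma(shape = c*dt, scale = 1/lam).
X = rng.gamma(shape=c * dt, scale=1.0 / lam, size=(5000, N))

# Random walk approximation: L_{t_j} = sum of the increments X_i with t_i <= t_j
paths = np.cumsum(X, axis=1)

# Sanity check against the exact mean E[L_T] = c*T/lam of the gamma process
print(paths[:, -1].mean(), c * T / lam)
```

For a TαS process with 0 < α < 2, by contrast, the increments cannot be sampled this easily, which is why we turn to the CP approximation and series representations.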
2.2 KL expansion for Poisson processes
Let us ļ¬rst take a Poisson process N(t; Ļ‰) with intensity Ī» on a computational time
domain [0, T] as an example. We mimic the KL expansion for Gaussian processes to
simulate non-Gaussian processes as Poisson processes.
• First we calculate the covariance kernel (assuming t' > t):

Cov(N(t; ω), N(t'; ω)) = E[N(t; ω)N(t'; ω)] - E[N(t; ω)]E[N(t'; ω)]
= E[N(t; ω)^2] + E[N(t; ω)]E[N(t' - t; ω)] - E[N(t; ω)]E[N(t'; ω)]
= λt,  t' > t.  (2.1)

Therefore, the covariance kernel is

Cov(N(t; ω), N(t'; ω)) = λ(t ∧ t').  (2.2)
• The eigenvalues and eigenfunctions of this kernel are:

e_k(t) = √2 sin((k - 1/2)πt)  (2.3)

and

λ_k = 1 / ((k - 1/2)^2 π^2),  (2.4)

where k = 1, 2, 3, ...
• The stochastic process N_t approximated by a finite number of terms in the KL
expansion can be written as:

Ñ(t; ω) = λt + Σ_{i=1}^M √(λλ_i) Y_i e_i(t),  (2.5)

where

∫_0^1 e_k^2(t) dt = 1  (2.6)

and

∫_0^T e_k^2(t) dt = T - sin[T(1 - 2k)π] / (π(1 - 2k)),  (2.7)

and the e_k are orthogonal.
• The distribution of the Y_k can be calculated as follows. Given a sample path
ω ∈ Ω,

⟨N(t; ω) - λt, e_k(t)⟩ = (Y_k √λ / (π(k - 1/2))) ⟨e_k(t), e_k(t)⟩
= 2Y_k √λ [T(2k - 1)π - sin((2k - 1)πT)] / (π^2 (2k - 1)^2)
= ⟨N(t; ω), e_k(t)⟩ - (√(2λ)/π^2)[-2πT cos(πT/2) + 4 sin(πT/2)].  (2.8)

Therefore,

Y_k = π^2 (2k - 1)^2 [⟨N(t; ω), e_k(t)⟩ - (√(2λ)/π^2)(-2πT cos(πT/2) + 4 sin(πT/2))]
/ (2√λ [T(2k - 1)π - sin((2k - 1)πT)]).  (2.9)

From each sample path ω, we can calculate the values of Y_1, ..., Y_M; in this way
the distribution of Y_1, ..., Y_M can be sampled. Numerically, if we simulate a large
enough number of samples of a Poisson process (by simulating the jump times and
jump sizes separately), we obtain the empirical distribution of the RVs Y_1, ..., Y_M.
• Now let us see how well the sample paths of the Poisson process N_t are
approximated by the KL expansion.
• Now let us see how well the mean of the Poisson process N_t is approximated
by the KL expansion.
• Now let us see how well the second moment of the Poisson process N_t is
approximated by the KL expansion.

Figure 2.1: Empirical CDF of the KL expansion RVs Y_1, ..., Y_M with M = 10 KL expansion
terms, for a centered Poisson process (N_t - λt) with λ = 10, T_max = 1, s = 10000 samples, and
N = 200 points on the time domain [0, 1].
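The eigenpairs (2.3)-(2.4) and the normalization (2.6) can be verified numerically: on [0, 1] the e_k have unit norm, and the Mercer sum Σ_k λ_k e_k(t) e_k(t') reconstructs the kernel t ∧ t' (so λ times it reconstructs (2.2)). The sketch below is our own Python illustration; the truncation order and grid are arbitrary choices:

```python
import numpy as np

M = 2000                                   # truncation order for the Mercer sum
t = np.linspace(0, 1, 201)
k = np.arange(1, M + 1)

lam_k = 1.0 / ((k - 0.5)**2 * np.pi**2)                    # Eq. (2.4)
e = np.sqrt(2.0) * np.sin(np.outer(t, (k - 0.5) * np.pi))  # Eq. (2.3), shape (len(t), M)

# Unit norm on [0, 1], Eq. (2.6), via the composite trapezoid rule
h = t[1] - t[0]
norms = h * ((e**2).sum(axis=0) - 0.5 * (e[0]**2 + e[-1]**2))

# Mercer reconstruction of the kernel with lambda = 1:
# min(t, t') ~= sum_k lam_k e_k(t) e_k(t')
K = (e * lam_k) @ e.T
K_exact = np.minimum.outer(t, t)

print(norms[:3], np.abs(K - K_exact).max())
```

The maximum reconstruction error decays roughly like 1/M, reflecting the slow decay of the eigenvalues λ_k.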
2.3 Compound Poisson approximation to Lévy jump processes
Let us take a tempered α-stable (TαS) process as an example. TαS processes
were introduced in statistical physics to model turbulence, e.g., the truncated Lévy
flight model [85, 106, 121], and in mathematical finance to model stochastic volatility,
e.g., the CGMY model [27, 28]. Here, we consider a symmetric TαS process (Lt) as
a pure jump Lévy martingale with characteristic triplet (0, ν, 0) [19, 143] (no drift
Figure 2.2: Exact sample path vs. sample path approximated by the KL expansion (10 expansion
terms, T_max = 5; left: λ = 50, right: λ = 1): when λ is smaller, the sample path is better
approximated. (Brownian motion is the limiting case of a centered Poisson process with a very
large birth rate.)
Figure 2.3: Exact mean vs. mean by the KL expansion with sampled coefficients (10 expansion
terms, T_max = 5, 200 samples; left: λ = 50, right: λ = 1): when λ is larger, the KL representation
seems to be better.
Figure 2.4: Exact second moment vs. second moment by the KL expansion with sampled
coefficients (10 expansion terms, T_max = 5, 200 samples; left: λ = 50, right: λ = 1). The second
moments are not as well approximated as the mean.
and no Gaussian part). The Lévy measure is given by [33]^1:

ν(x) = c e^{-λ|x|} / |x|^{α+1},  0 < α < 2.  (2.10)

This Lévy measure can be interpreted as an Esscher transformation [57] of that of
a stable process, with exponential tilting of the Lévy measure. The parameter
c > 0 alters the intensity of jumps of all sizes; it changes the time scale of the
process. Also, λ > 0 fixes the decay rate of big jumps, while α determines the
relative importance of small jumps in the path of the process^2. The probability
density of Lt at a given time is not available in closed form (except when α = 1/2^3).
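Two basic properties of the measure (2.10) are easy to confirm numerically: it satisfies the Lévy-measure condition ∫_{R_0} min(1, x^2) ν(dx) < ∞, while its total mass is infinite, i.e., the intensity of jumps of size at least δ blows up as δ → 0 (infinite activity). A short Python sketch (our own illustration; parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

c, lam, alpha = 1.0, 5.0, 0.5

# Density of the symmetric Lévy measure (2.10), restricted to x > 0
nu = lambda x: c * np.exp(-lam * x) / x**(alpha + 1)

# Lévy measure condition: int min(1, x^2) nu(dx) is finite (symmetry gives factor 2)
small, _ = quad(lambda x: 2 * x**2 * nu(x), 0, 1)   # jumps with |x| <= 1
large, _ = quad(lambda x: 2 * nu(x), 1, np.inf)     # jumps with |x| > 1
print(small + large)

# Infinite activity: the intensity of jumps of size >= delta diverges as delta -> 0
for delta in (1e-1, 1e-2, 1e-3):
    U_delta, _ = quad(nu, delta, np.inf)
    print(delta, U_delta)
```

This divergence as δ → 0 is exactly why the CP approximation below must truncate the small jumps at a finite level δ.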
The characteristic exponent of Lt is [33]:

Φ(s) = s^{-1} log E[e^{isL_s}] = 2Γ(-α)λ^α c[(1 - is/λ)^α - 1 + isα/λ],  α ≠ 1,  (2.11)

where Γ(x) is the Gamma function and E is the expectation. By taking derivatives
of the characteristic exponent we obtain the mean and variance:

E[L_t] = 0,  Var[L_t] = 2tΓ(2 - α)cλ^{α-2}.  (2.12)
In the CP approximation, we simulate the jumps larger than δ as a CP process
and replace the jumps smaller than δ by their expectation, as a drift term [33]. Here
we explain the method for approximating a TαS subordinator X_t (without a
Gaussian part or a drift) with the Lévy measure ν(x) = c e^{-λx} / x^{α+1} I_{x>0}
(positive jumps only); the method can be generalized to a TαS process with both
positive and negative jumps.
^1 In a more general form, the Lévy measure is ν(x) = c_- e^{-λ_-|x|} / |x|^{α+1} I_{x<0} + c_+ e^{-λ_+|x|} / |x|^{α+1} I_{x>0}.
We may have different coefficients c_+, c_-, λ_+, λ_- for the positive and the negative jump parts.
^2 In the case α = 0, Lt is the gamma process.
^3 See inverse Gaussian processes.
The CP approximation X_t^δ of this TαS subordinator X_t is:

X_t ≈ X_t^δ = Σ_{s≤t} ΔX_s I_{ΔX_s ≥ δ} + E[Σ_{s≤t} ΔX_s I_{ΔX_s < δ}]
= Σ_{i=1}^∞ J_i^δ I_{T_i ≤ t} + b^δ t ≈ Σ_{i=1}^{Q_cp} J_i^δ I_{T_i ≤ t} + b^δ t.  (2.13)
We introduce Q_cp here as the number of jumps that occur before time t. The first
term Σ_{i=1}^∞ J_i^δ I_{T_i ≤ t} is a compound Poisson process with jump intensity

U(δ) = c ∫_δ^∞ e^{-λx} dx / x^{α+1}  (2.14)

and jump size distribution p^δ(x) = (1/U(δ)) c e^{-λx} / x^{α+1} I_{x≥δ} for the J_i^δ. The
jump size random variables (RVs) J_i^δ are generated via the rejection method [37].
Below is the algorithm of the rejection method to generate RVs with the distribution
p^δ(x) = (1/U(δ)) c e^{-λx} / x^{α+1} I_{x≥δ} for the CP approximation [37].
The distribution pĪ“
(x) can be bounded by
pĪ“
(x) ā‰¤
Ī“āˆ’Ī±
eāˆ’Ī»Ī“
Ī±U(Ī“)
fĪ“
(x), (2.15)
where fĪ“
(x) = Ī±Ī“āˆ’Ī±
xĪ±+1 Ixā‰„Ī“. The algorithm to generate RVs with distribution pĪ“
(x) =
1
U(Ī“)
ceĪ»x
xĪ±+1 Ixā‰„Ī“ is [33, 37]:
• REPEAT
• Generate independent RVs W and V, uniformly distributed on [0, 1]
• Set X = δ W^{−1/α}
• Set T = c δ^{−α} e^{−λδ} f^δ(X) / (α U(δ) p^δ(X))
• UNTIL V T ≤ 1
• RETURN X.
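A minimal Python sketch of this rejection sampler (function and variable names are ours; numpy assumed). The constant factors cancel, so the test V T ≤ 1 reduces to V ≤ e^{−λ(X−δ)}:

```python
import numpy as np

def sample_jump(alpha, lam, delta, rng):
    """One jump size from p^delta(x) ~ c e^{-lam x} x^{-(alpha+1)}, x >= delta,
    by rejection from the Pareto proposal f^delta(x) = alpha delta^alpha x^{-(alpha+1)}."""
    while True:
        w, v = rng.random(), rng.random()
        x = delta * w ** (-1.0 / alpha)        # Pareto draw: X = delta * W^{-1/alpha}
        if v <= np.exp(-lam * (x - delta)):    # the V*T <= 1 acceptance test
            return x
```

Note that the tempering parameter λ enters only through the acceptance probability, while the intensity constant c drops out of the test entirely.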
Here, T_i is the i-th jump arrival time of a Poisson process with intensity U(δ). The accuracy of the CP approximation can be improved by replacing the small jumps with a Brownian motion [6] when the Lévy measure grows quickly near zero. The second term functions as a drift, b^δ t, resulting from truncating the small jumps; the drift coefficient is b^δ = c ∫_0^δ e^{−λx} x^{−α} dx. This integral diverges when α ≥ 1; therefore, the CP approximation method only applies to TαS processes with 0 < α < 1. In this work, both the intensity U(δ) and the drift b^δ are computed by numerical integration with Gauss quadrature rules [54] at a specified relative tolerance (RelTol).⁴

In general, there are two algorithms to simulate a compound Poisson process [33]: the first simulates the jump times T_i with exponentially distributed RVs and takes the number of jumps Q_cp as large as possible; the second first generates and fixes the number of jumps, and then generates the jump times as uniformly distributed RVs on [0, t]. Algorithms of the second kind, for a CP process whose intensity and jump-size distribution are known in explicit form, are available on a fixed time grid [33]. Here we describe how to simulate the trajectories of a CP process with intensity U(δ) and jump-size distribution ν^δ(x)/U(δ), on a simulation time domain [0, T], at time t. The algorithm to generate sample paths for CP processes without a drift is:

⁴ The RelTol of the numerical integration is defined as |q − Q|/|Q|, where q is the computed value of the integral and Q is the unknown exact value.
• Simulate an RV N from the Poisson distribution with parameter U(δ)T, as the total number of jumps on the interval [0, T].
• Simulate N independent RVs T_i, uniformly distributed on the interval [0, T], as the jump times.
• Simulate N jump sizes Y_i with distribution ν^δ(x)/U(δ).
• The trajectory at time t is then given by Σ_{i=1}^N I_{T_i≤t} Y_i.
In order to simulate the sample paths of a symmetric TαS process with the Lévy measure given in Equation (5.3), we generate two independent TαS subordinators via the CP approximation and subtract one from the other. The accuracy of the CP approximation is determined by the jump truncation size δ. The numerical experiments for this method will be given in Chapter 5.
2.4 Series representation of Lévy jump processes

Let {ε_j}, {η_j}, and {ξ_j} be sequences of i.i.d. RVs such that P(ε_j = ±1) = 1/2, η_j ∼ Exponential(λ), and ξ_j ∼ Uniform(0, 1). Let {Γ_j} be the arrival times of a Poisson process with rate one, and let {U_j} be i.i.d. uniform RVs on [0, T]. Then a TαS process L_t with the Lévy measure given in Equation (5.3) can be represented as [142]:

L_t = Σ_{j=1}^{+∞} ε_j [ (αΓ_j / (2cT))^{−1/α} ∧ η_j ξ_j^{1/α} ] I_{U_j≤t},  0 ≤ t ≤ T.   (2.16)

The series in Equation (5.14) converges almost surely, uniformly in t [139]. In numerical simulations, we truncate the series in Equation (5.14) after Q_s terms. The accuracy of the series-representation approximation is determined by the truncation level Q_s. The numerical experiments for this method will be given in Chapter 5.
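A truncated version of Eq. (2.16) is straightforward to vectorize. A sketch (names ours; numpy assumed — note that numpy's exponential sampler takes the scale 1/λ, not the rate):

```python
import numpy as np

def tas_series(t_grid, alpha, c, lam, T, Q, rng):
    """Truncated series representation, Eq. (2.16), of a symmetric TaS process, Q terms."""
    eps = rng.choice([-1.0, 1.0], size=Q)           # epsilon_j = +/-1 with probability 1/2
    eta = rng.exponential(1.0 / lam, size=Q)        # eta_j ~ Exponential(lam)
    xi = rng.random(Q)                              # xi_j ~ Uniform(0, 1)
    Gam = np.cumsum(rng.exponential(1.0, size=Q))   # Gamma_j: arrival times of a rate-one Poisson process
    U = rng.uniform(0.0, T, size=Q)                 # U_j ~ Uniform(0, T)
    jumps = eps * np.minimum((alpha * Gam / (2.0 * c * T)) ** (-1.0 / alpha),
                             eta * xi ** (1.0 / alpha))
    return np.array([jumps[U <= t].sum() for t in np.atleast_1d(t_grid)])
```

Because the Γ_j are increasing, the retained terms carry the largest jumps, so the truncation error is dominated by the small-jump tail.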
Chapter Three

Adaptive multi-element polynomial chaos with discrete measure: Algorithms and applications to SPDEs
We develop a multi-element probabilistic collocation method (ME-PCM) for arbitrary discrete probability measures with finite moments and apply it to solve partial differential equations with random parameters. The method is based on the numerical construction of orthogonal polynomial bases in terms of a discrete probability measure. To this end, we compare the accuracy and efficiency of five different constructions. We develop an adaptive procedure for the decomposition of the parametric space using a local variance criterion. We then couple the ME-PCM with sparse grids to study the Korteweg-de Vries (KdV) equation subject to random excitation, where the random parameters are associated with either a discrete or a continuous probability measure. Numerical experiments demonstrate that the proposed algorithms lead to high accuracy and efficiency for hybrid (discrete-continuous) random inputs.
3.1 Notation

μ, ν — probability measures of discrete RVs
ξ — discrete RV
P_i(ξ) — generalized polynomial chaos basis function
δ_ij — Kronecker delta
S(μ) — support of the measure μ of the discrete RV ξ
N — size of the support S(μ)
α_i, β_i — coefficients in the three-term recurrence relation of the orthogonal polynomial basis
m_k — the k-th moment of the RV ξ
Γ — integration domain of the discrete RV
W^{m,p}(Γ) — Sobolev space
h — size of an element in multi-element integration
N_es — number of elements in multi-element integration
d — number of quadrature points in the Gauss quadrature rule
B_i — i-th element in the multi-element integration
σ²_i — local variance
3.2 Generation of orthogonal polynomials for discrete measures

Let μ be a positive measure with infinite support S(μ) ⊂ R and finite moments of all orders, i.e.,

∫_S ξ^n μ(dξ) < ∞,  ∀n ∈ N₀,   (3.1)

where N₀ = {0, 1, 2, ...}, and the integral is defined in the Riemann–Stieltjes sense. There exists a unique [54] set of monic orthogonal polynomials {P_i}_{i=0}^∞ with respect to the measure μ such that

∫_S P_i(ξ) P_j(ξ) μ(dξ) = δ_ij γ_i^{−2},  i, j = 0, 1, 2, ...,   (3.2)

where the γ_i ≠ 0 are constants. In particular, the orthogonal polynomials satisfy a three-term recurrence relation [31, 43]

P_{i+1}(ξ) = (ξ − α_i) P_i(ξ) − β_i P_{i−1}(ξ),  i = 0, 1, 2, ....   (3.3)

The uniqueness of the set of orthogonal polynomials with respect to μ can also be derived by constructing such a set starting from P_0(ξ) = 1. We typically choose P_{−1}(ξ) = 0 and β_0 to be a constant. The full set of orthogonal polynomials is then completely determined by the coefficients α_i and β_i.

If the support S(μ) is a finite set of points {τ_1, ..., τ_N}, i.e., μ is a discrete measure defined as

μ = Σ_{i=1}^N λ_i δ_{τ_i},  λ_i > 0,   (3.4)
the corresponding orthogonality condition holds only up to order N − 1 [46, 54], i.e.,

∫_S P_i²(ξ) μ(dξ) = 0,  i ≥ N,   (3.5)

where δ_{τ_i} denotes the empirical measure at τ_i, although by the recurrence relation (3.3) we can generate polynomials of any order greater than N − 1. Furthermore, one way to test whether the coefficients α_i are well approximated is to check the relation [45, 46]

Σ_{i=0}^{N−1} α_i = Σ_{i=1}^N τ_i.   (3.6)

One can prove that the coefficient of ξ^{N−1} in P_N(ξ) is −Σ_{i=0}^{N−1} α_i and that P_N(ξ) = (ξ − τ_1)···(ξ − τ_N); therefore equation (3.6) holds [46].
We subsequently examine five different approaches to generating orthogonal polynomials for a discrete measure and point out the pros and cons of each method. In the Nowak method, the coefficients of the polynomials are derived directly by solving a linear system; in the other four methods, we generate the coefficients α_i and β_i by four different numerical procedures, and the polynomial coefficients then follow from the recurrence relation (3.3).
3.2.1 Nowak method

Define the k-th order moment as

m_k = ∫_S ξ^k μ(dξ),  k = 0, 1, ..., 2d − 1.   (3.7)
The coefficients of the d-th order polynomial P_d(ξ) = Σ_{i=0}^d a_i ξ^i are determined by the following linear system [125]:

⎡ m_0      m_1   ···  m_d      ⎤ ⎡ a_0     ⎤   ⎡ 0 ⎤
⎢ m_1      m_2   ···  m_{d+1}  ⎥ ⎢ a_1     ⎥   ⎢ 0 ⎥
⎢  ⋮        ⋮    ⋱     ⋮       ⎥ ⎢  ⋮      ⎥ = ⎢ ⋮ ⎥   (3.8)
⎢ m_{d−1}  m_d   ···  m_{2d−1} ⎥ ⎢ a_{d−1} ⎥   ⎢ 0 ⎥
⎣ 0        0     ···  1        ⎦ ⎣ a_d     ⎦   ⎣ 1 ⎦

where the (d + 1) × (d + 1) Vandermonde matrix needs to be inverted. Although this method is straightforward to implement, it is well known that the matrix may be ill conditioned when d is very large.

The total computational complexity of solving the linear system in equation (3.8) to generate P_d(ξ) is O(d²).¹
3.2.2 Stieltjes method

The Stieltjes method is based on the following formulas for the coefficients α_i and β_i [54]:

α_i = ∫_S ξ P_i²(ξ) μ(dξ) / ∫_S P_i²(ξ) μ(dξ),   β_i = ∫_S P_i²(ξ) μ(dξ) / ∫_S P_{i−1}²(ξ) μ(dξ),   i = 0, 1, ..., d − 1.   (3.9)

For a discrete measure, the Stieltjes method is quite stable [54, 51]. When the discrete measure has a finite number of elements N in its support, the above formulas are exact. However, if we use the Stieltjes method on a discrete measure with infinite support, e.g., the Poisson distribution, we approximate the measure by a discrete measure with a finite number of points; therefore, each time we iterate for α_i and β_i, error accumulates from neglecting the points with smaller weights. In that case, α_i and β_i may suffer from inaccuracy when i is close to N [54].

The computational complexity of the integral evaluations in equation (3.9) is of order O(N).

¹ Here we notice that the Vandermonde matrix in (3.8) is in Toeplitz form; therefore the computational complexity of solving this linear system is O(d²) [59, 157].
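Since the integrals in (3.9) reduce to finite sums for a discrete measure, the Stieltjes iteration is only a few lines (a sketch, names ours; β_0 is set to the total mass by convention):

```python
import numpy as np

def stieltjes(tau, lam, d):
    """Recurrence coefficients alpha_0..alpha_{d-1}, beta_0..beta_{d-1} via Eq. (3.9)."""
    tau, lam = np.asarray(tau, float), np.asarray(lam, float)
    alpha, beta = np.zeros(d), np.zeros(d)
    p_prev, p = np.zeros_like(tau), np.ones_like(tau)   # P_{-1}, P_0 on the support points
    nrm_prev = 0.0
    for i in range(d):
        nrm = np.sum(lam * p * p)                       # (P_i, P_i)_mu
        alpha[i] = np.sum(lam * tau * p * p) / nrm
        beta[i] = nrm if i == 0 else nrm / nrm_prev
        # advance the recurrence (3.3): P_{i+1} = (xi - alpha_i) P_i - beta_i P_{i-1}
        p, p_prev = (tau - alpha[i]) * p - (0.0 if i == 0 else beta[i]) * p_prev, p
        nrm_prev = nrm
    return alpha, beta
```

For the symmetric two-point measure 0.5δ_{−1} + 0.5δ_{+1} this yields α = (0, 0) and β = (1, 1), as symmetry requires.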
3.2.3 Fischer method

Fischer proposed a procedure for generating the coefficients α_i and β_i by adding data points one by one [45, 46]. Assume that the coefficients α_i and β_i are known for the discrete measure μ = Σ_{i=1}^N λ_i δ_{τ_i}. Then, if we add another data point τ to the discrete measure μ and define a new discrete measure ν = μ + λδ_τ, with λ the weight of the newly added data point τ, the following relations hold [45, 46]:
α^ν_i = α_i + λγ_i² P_i(τ)P_{i+1}(τ) / [1 + λ Σ_{j=0}^{i} γ_j² P_j²(τ)] − λγ_{i−1}² P_i(τ)P_{i−1}(τ) / [1 + λ Σ_{j=0}^{i−1} γ_j² P_j²(τ)],   (3.10)

β^ν_i = β_i [1 + λ Σ_{j=0}^{i−2} γ_j² P_j²(τ)] [1 + λ Σ_{j=0}^{i} γ_j² P_j²(τ)] / [1 + λ Σ_{j=0}^{i−1} γ_j² P_j²(τ)]²   (3.11)

for i < N, and

α^ν_N = τ − λγ_{N−1}² P_N(τ)P_{N−1}(τ) / [1 + λ Σ_{j=0}^{N−1} γ_j² P_j²(τ)],   (3.12)

β^ν_N = λγ_{N−1}² P_N²(τ) [1 + λ Σ_{j=0}^{N−2} γ_j² P_j²(τ)] / [1 + λ Σ_{j=0}^{N−1} γ_j² P_j²(τ)]²,   (3.13)

where α^ν_i and β^ν_i denote the coefficients in the three-term recurrence formula (3.3) for the measure ν. The numerical stability of this algorithm depends on the stability of the recurrence relations above, and on the sequence in which data points are added [46]. For
example, the data points can be added in either ascending or descending order. Fischer's method essentially modifies the available coefficients α_i and β_i using the information contributed by the new data point. This approach is therefore very practical when an empirical distribution of stochastic inputs is altered by an additional possible value. For example, suppose we have already generated d probability collocation points with respect to a given discrete measure with N data points, and we want to add another data point to the measure and generate d new probability collocation points with respect to the new measure. With the Nowak method, we would need to rebuild the moment matrix and invert it again with N + 1 data points; with Fischer's method, we only need to update the 2d values of α_i and β_i, which is more convenient.

Since we generate a new sequence {α_i, β_i} each time a data point is added to the measure, the computational complexity of calculating the coefficients {γ_i², i = 0, ..., d} N times is O(N²).
3.2.4 Modiļ¬ed Chebyshev method
Compared to the Chebyshev method [54], the modiļ¬ed Chebyshev method computes
moments in a diļ¬€erent way. Deļ¬ne the quantities:
Āµi,j =
S
Pi(Ī¾)Ī¾j
Āµ(dĪ¾), i, j = 0, 1, 2, ... (3.14)
Then, the coeļ¬ƒcients Ī±i and Ī²i satisfy [54]:
Ī±0 =
Āµ0,1
Āµ0,0
, Ī²0 = Āµ0,0, Ī±i =
Āµi,i+1
Āµi,i
āˆ’
Āµiāˆ’1,i
Āµiāˆ’1,iāˆ’1
, Ī²i =
Āµi,i
Āµiāˆ’1,iāˆ’1
. (3.15)
27
Note that due to the orthogonality, Āµi,j = 0 when i > j. Starting from the moments
Āµj, Āµi,j can be computed recursively as
Āµi,j = Āµiāˆ’1,j+1 āˆ’ Ī±iāˆ’1Āµiāˆ’1,j āˆ’ Ī²iāˆ’1Āµiāˆ’2,j, (3.16)
with
Āµāˆ’1,0 = 0, Āµ0,j = Āµj, (3.17)
where j = i, i + 1, ..., 2d āˆ’ i āˆ’ 1.
However, this method suffers from the same ill-conditioning effects as the Nowak method [125], because both rely on computing moments. To stabilize the algorithm, we introduce another way of defining moments via polynomials:

μ̂_{i,j} = ∫_S P_i(ξ) p_j(ξ) μ(dξ),   (3.18)

where {p_i(ξ)} is a chosen set of orthogonal polynomials, e.g., Legendre polynomials. Define

ν_i = ∫_S p_i(ξ) μ(dξ).   (3.19)

Since {p_i(ξ)}_{i=0}^∞ is not a set of orthogonal polynomials with respect to the measure μ(dξ), ν_i is in general not equal to zero. In all the following numerical experiments we use Legendre polynomials for {p_i(ξ)}_{i=0}^∞.² Let α̂_i and β̂_i be the coefficients in the three-term recurrence formula associated with the set {p_i} of orthogonal polynomials.

² Legendre polynomials {p_i(ξ)}_{i=0}^∞ are defined on [−1, 1]; therefore, when implementing the modified Chebyshev method, we first scale the measure onto [−1, 1].
Then we initialize the process of building up the coefficients as

μ̂_{−1,j} = 0,  j = 1, 2, ..., 2d − 2,
μ̂_{0,j} = ν_j,  j = 0, 1, ..., 2d − 1,
α_0 = α̂_0 + ν_1/ν_0,  β_0 = ν_0,

and compute the remaining quantities from

μ̂_{i,j} = μ̂_{i−1,j+1} − (α_{i−1} − α̂_j) μ̂_{i−1,j} − β_{i−1} μ̂_{i−2,j} + β̂_j μ̂_{i−1,j−1},   (3.20)

where j = i, i + 1, ..., 2d − i − 1. The coefficients α_i and β_i can then be obtained as

α_i = α̂_i + μ̂_{i,i+1}/μ̂_{i,i} − μ̂_{i−1,i}/μ̂_{i−1,i−1},  β_i = μ̂_{i,i}/μ̂_{i−1,i−1}.   (3.21)

Based on the modified moments, the ill-conditioning associated with raw moments can be mitigated, although the issue can still be severe, especially when we consider orthogonality on infinite intervals.

The computational complexity of generating the μ̂_{i,j} and ν_i is O(N).
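The recursion (3.15)-(3.17) with raw monomial moments (the modified variant simply swaps ξ^j for the auxiliary p_j) can be sketched as follows (names ours, numpy assumed):

```python
import numpy as np

def chebyshev_coeffs(tau, lam, d):
    """alpha, beta from raw moments via Eqs. (3.15)-(3.17) for mu = sum lam_i delta_{tau_i}."""
    tau, lam = np.asarray(tau, float), np.asarray(lam, float)
    mom = np.array([np.sum(lam * tau**j) for j in range(2 * d)])   # mu_j, j = 0..2d-1
    mu = {(-1, j): 0.0 for j in range(2 * d)}                      # mu_{-1,j} = 0
    mu.update({(0, j): mom[j] for j in range(2 * d)})              # mu_{0,j} = mu_j
    alpha, beta = np.zeros(d), np.zeros(d)
    alpha[0], beta[0] = mom[1] / mom[0], mom[0]
    for i in range(1, d):
        for j in range(i, 2 * d - i):                              # Eq. (3.16)
            mu[i, j] = mu[i-1, j+1] - alpha[i-1]*mu[i-1, j] - beta[i-1]*mu[i-2, j]
        alpha[i] = mu[i, i+1]/mu[i, i] - mu[i-1, i]/mu[i-1, i-1]   # Eq. (3.15)
        beta[i] = mu[i, i]/mu[i-1, i-1]
    return alpha, beta
```

On the measure 0.3δ_0 + 0.4δ_1 + 0.3δ_2 this reproduces the Stieltjes result α = (1, 1), β = (1, 0.6), confirming that the two routes agree where the moments are well conditioned.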
3.2.5 Lanczos method

The idea of the Lanczos method is to tridiagonalize a matrix to obtain the coefficients α_j and β_j of the recurrence relation. Suppose the discrete measure is μ = Σ_{i=1}^N λ_i δ_{τ_i}, λ_i > 0. With the weights λ_i and points τ_i in the expression of the measure μ, the first step of this method is to construct the matrix [22]:
⎡ 1     √λ_1  √λ_2  ···  √λ_N ⎤
⎢ √λ_1  τ_1   0     ···  0    ⎥
⎢ √λ_2  0     τ_2   ···  0    ⎥   (3.22)
⎢  ⋮     ⋮     ⋮    ⋱     ⋮   ⎥
⎣ √λ_N  0     0     ···  τ_N  ⎦
After we tridiagonalize it by the Lanczos algorithm, a process that reduces a symmetric matrix to tridiagonal form with unitary transformations [59], we obtain:

⎡ 1     √β_0  0     ···  0       ⎤
⎢ √β_0  α_0   √β_1  ···  0       ⎥
⎢ 0     √β_1  α_1   ···  0       ⎥   (3.23)
⎢  ⋮     ⋮     ⋮    ⋱     ⋮      ⎥
⎣ 0     0     0     ···  α_{N−1} ⎦

where the nonzero entries correspond to the coefficients α_i and β_i. The Lanczos method is motivated by interest in the inverse Sturm–Liouville problem: given some information on the eigenvalues of a highly structured matrix, or of some of its principal sub-matrices, the method generates a symmetric matrix, either Jacobi or banded, in a finite number of steps. It is easy to program but can be considerably slow [22].

The computational complexity of the unitary transformation is O(N²).
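A sketch of this construction (names ours; plain Lanczos with full reorthogonalization for stability, started from the first unit vector, numpy assumed):

```python
import numpy as np

def lanczos_coeffs(tau, lam, d):
    """alpha_0..alpha_{d-1}, beta_0..beta_{d-1} by tridiagonalizing the matrix (3.22)."""
    tau, lam = np.asarray(tau, float), np.asarray(lam, float)
    N = len(tau)
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 1.0
    A[0, 1:] = A[1:, 0] = np.sqrt(lam)            # first row/column: sqrt(lambda_i)
    A[1:, 1:][np.diag_indices(N)] = tau           # diagonal: tau_i
    q = np.zeros(N + 1); q[0] = 1.0               # start from e_1
    Q, a, b = [q], [], []
    for k in range(d + 1):
        w = A @ Q[-1]
        a.append(Q[-1] @ w)                       # diagonal entry of (3.23)
        if k == d:
            break
        for v in Q:                               # full reorthogonalization
            w -= (v @ w) * v
        b.append(np.linalg.norm(w))               # off-diagonal entry: sqrt(beta_k)
        Q.append(w / b[-1])
    # the diagonal of (3.23) is (1, alpha_0, alpha_1, ...); off-diagonal is (sqrt(beta_0), ...)
    return np.array(a[1:]), np.array(b)**2
```

This assumes d ≤ N, so the Krylov space does not degenerate before the requested coefficients are produced.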
3.2.6 Gaussian quadrature rule associated with a discrete measure

Here we describe how to utilize the above five methods to perform numerical integration over a discrete measure, using the Gaussian quadrature rule [60] associated with μ. We consider integrals of the form

∫_S f(ξ) μ(dξ) < ∞.   (3.24)

With respect to μ, we generate the μ-orthogonal polynomials up to order d (d ≤ N − 1), denoted {P_i(ξ)}_{i=0}^d, by one of the five methods. We compute the zeros {ξ_i}_{i=1}^d of P_d(ξ) = a_d ξ^d + a_{d−1} ξ^{d−1} + ... + a_0 as the Gaussian quadrature points, and the Gaussian quadrature weights {w_i}_{i=1}^d as

w_i = (a_d / a_{d−1}) ∫_S P_{d−1}(ξ)² μ(dξ) / (P′_d(ξ_i) P_{d−1}(ξ_i)).   (3.25)

The integral is then approximated numerically by

∫_S f(ξ) μ(dξ) ≈ Σ_{i=1}^d f(ξ_i) w_i.   (3.26)
In the case when the zeros of the polynomial P_d(ξ) do not have explicit formulas, the Newton–Raphson method is used [7, 174], with a specified tolerance of 10⁻¹⁶ (in double precision). To ensure that each search finds a new root, the polynomial deflation method [81] is applied, where the roots already found are factored out of the initial polynomial once they have been determined. All calculations in this work are done in double precision.
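As an alternative to the Newton–Raphson/deflation route described above, the Gauss points and weights can also be read off the eigendecomposition of the symmetric Jacobi matrix built from α_i, β_i (the Golub–Welsch approach — a common cross-check, not the method used for the results in this chapter; names ours):

```python
import numpy as np

def gauss_from_recurrence(alpha, beta):
    """Gauss nodes and weights from recurrence coefficients via the Jacobi matrix."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    off = np.sqrt(beta[1:])                      # off-diagonal: sqrt(beta_1), sqrt(beta_2), ...
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    nodes, V = np.linalg.eigh(J)                 # nodes = eigenvalues of J
    weights = beta[0] * V[0, :]**2               # beta_0 = total mass of the measure
    return nodes, weights

# the symmetric two-point measure 0.5*delta_{-1} + 0.5*delta_{+1} has alpha = (0, 0), beta = (1, 1)
x, w = gauss_from_recurrence([0.0, 0.0], [1.0, 1.0])
```

For this two-point example the rule reproduces the measure exactly: nodes ±1 with weights 0.5 each.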
3.2.7 Orthogonality tests of numerically generated polynomials

To investigate the stability of the five methods, we perform an orthogonality test, where the orthogonality is defined as

orth(i) = (1/i) Σ_{j=0}^{i−1} |∫_S P_i(ξ) P_j(ξ) μ(dξ)| / [∫_S P_j²(ξ) μ(dξ) ∫_S P_i²(ξ) μ(dξ)]^{1/2},  i ≤ N − 1,   (3.27)

for the set {P_j(ξ)}_{j=0}^i of orthogonal polynomials constructed numerically. Note that ∫_S P_i(ξ) P_j(ξ) μ(dξ) ≠ 0, 0 ≤ j < i, for polynomials constructed numerically, due to round-off errors, although they should be orthogonal in exact arithmetic.
We compare the numerical orthogonality given by the aforementioned five methods in figure 3.1 for the following distribution:³

f(k; n, p) = P(ξ = 2k/n − 1) = [n! / (k!(n − k)!)] p^k (1 − p)^{n−k},  k = 0, 1, 2, ..., n.   (3.28)

³ We rescale the support {0, ..., n} of the binomial distribution with parameters (n, p) onto [−1, 1].

We see that the Stieltjes, modified Chebyshev, and Lanczos methods preserve numerical orthogonality best when the polynomial order i is close to N. We notice that when N is large, the numerical orthogonality is preserved up to order roughly 70, indicating the robustness of these three methods. The Nowak method exhibits the worst numerical orthogonality among the five methods, due to the ill-conditioning
Figure 3.1: Orthogonality defined in (3.27) with respect to the polynomial order i, up to i = 20 for the distribution defined in (3.28) with (n = 20, p = 1/2) (left), and up to i = 100 with (n = 100, p = 1/2) (right).
nature of the matrix in equation (3.8). The Fischer method exhibits better numerical orthogonality when the number of data points N in the discrete measure is small; the numerical orthogonality is lost when N is large, which motivates the use of ME-PCM instead of PCM for numerical integration over discrete measures. Our results suggest using the Stieltjes, modified Chebyshev, or Lanczos method for better accuracy.

We also compare the cost by tracking the CPU time needed to evaluate (3.27) in figure 3.2: for a fixed polynomial order i, we track the CPU time with respect to N, the number of points in the discrete measure defined in (3.28); for a fixed N, we track the CPU time with respect to i. We observe that the Stieltjes method has the lowest computational cost, while the Fischer method has the highest. Asymptotically, we observe that the computational complexity of evaluating (3.27) is O(i²) for the Nowak method, O(N) for the Stieltjes method, O(N²) for the Fischer method, O(N) for the modified Chebyshev method, and O(N²) for the Lanczos method.

To conclude, we recommend the Stieltjes method as the most accurate and efficient for generating orthogonal polynomials with respect to discrete measures, especially
Figure 3.2: CPU time (in seconds, Intel Core i5-3470 CPU @ 3.20 GHz, Matlab) to evaluate the orthogonality in (3.27): at order i = 4 for the distribution defined in (3.28) with parameter n and p = 1/2 (left); at order i for the distribution defined in (3.28) with n = 100 and p = 1/2 (right).
when higher orders are required. However, for generating polynomials at lower orders (for ME-PCM), the five methods are equally effective.

We notice from figures 3.1 and 3.2 that the Stieltjes method exhibits the best accuracy and efficiency in generating orthogonal polynomials with respect to a discrete measure μ. Therefore, we investigate the minimum polynomial order i (i ≤ N − 1) at which the orthogonality orth(i) of the Stieltjes method, defined in equation (3.27), exceeds a threshold ε. In figure 3.3 we perform this test on the distribution given by (3.28) with different parameters n (n ≥ i). For practical computations, the highest polynomial order used for polynomial chaos should be less than the minimum i at which orth(i) exceeds the desired ε. The cost of the numerical-orthogonality test is in general negligible compared to the cost of solving a stochastic problem by either Galerkin or collocation approaches; hence we can pay more attention to the accuracy, rather than the cost, of these five methods.
Figure 3.3: Minimum polynomial order i (vertical axis) such that orth(i) defined in (3.27) is greater than a threshold value ε (here ε = 1E−8, 1E−10, 1E−13), for the distribution defined in (3.28) with p = 1/10. Orthogonal polynomials are generated by the Stieltjes method.
3.3 Discussion of the error of numerical integration

3.3.1 Theorem on numerical integration with a discrete measure

In [50], the h-convergence rate of ME-PCM [81] for numerical integration with respect to continuous measures was established in terms of the degree of exactness given by the quadrature rule.

Let us first define the Sobolev space W^{m+1,p}(Γ) as the set of all functions f ∈ L^p(Γ) such that for every multi-index γ with |γ| ≤ m + 1, the weak partial derivative D^γ f belongs to L^p(Γ) [1, 40], i.e.,

W^{m+1,p}(Γ) = {f ∈ L^p(Γ) : D^γ f ∈ L^p(Γ), ∀|γ| ≤ m + 1}.   (3.29)
Here Γ is an open set in R^n and 1 ≤ p ≤ +∞. The natural number m + 1 is called the order of the Sobolev space W^{m+1,p}(Γ). The Sobolev space W^{m+1,∞}(A) in the following theorem is defined for functions f : A → R subject to the norm

‖f‖_{m+1,∞,A} = max_{|γ|≤m+1} ess sup_{ξ∈A} |D^γ f(ξ)|,

and the seminorm is defined as

|f|_{m+1,∞,A} = max_{|γ|=m+1} ess sup_{ξ∈A} |D^γ f(ξ)|,

where A ⊂ R^n, γ ∈ N₀^n, |γ| = γ_1 + ... + γ_n, and m + 1 ∈ N₀.

We first consider a one-dimensional discrete measure μ = Σ_{i=1}^N λ_i δ_{τ_i}, where N is a finite number. For simplicity and without loss of generality, we assume that {τ_i}_{i=1}^N ⊂ (0, 1); otherwise, we can use a linear mapping to map (min{τ_i}_{i=1}^N − c, max{τ_i}_{i=1}^N + c) to (0, 1), with c an arbitrarily small positive number. We then construct an approximation of the Dirac measure as
μ_ε = Σ_{i=1}^N λ_i η^ε_{τ_i},   (3.30)

where ε is a small positive number and η^ε_{τ_i} is defined as

η^ε_{τ_i} = 1/ε if |ξ − τ_i| < ε/2, and 0 otherwise.   (3.31)

First of all, η^ε_{τ_i} defines a continuous measure on (0, 1) with a finite number of discontinuity points, where a uniform distribution is taken on the interval (τ_i − ε/2, τ_i + ε/2).
Second, η^ε_{τ_i} converges to δ_{τ_i} in the weak sense, i.e.,

lim_{ε→0⁺} ∫_0^1 g(ξ) η^ε_{τ_i}(dξ) = ∫_0^1 g(ξ) δ_{τ_i}(dξ),   (3.32)

for all bounded continuous functions g(ξ). We write

lim_{ε→0⁺} η^ε_{τ_i} = δ_{τ_i}.   (3.33)

It is seen that when ε is small enough, the intervals (τ_i − ε/2, τ_i + ε/2) can be mutually disjoint for i = 1, ..., N. Due to linearity, we have

lim_{ε→0⁺} μ_ε = μ,   (3.34)

with convergence defined in the weak sense as before. Then μ_ε is also a continuous measure with a finite number of discontinuity points. The choice of η^ε_{τ_i} is not unique. Another choice is

η^ε_{τ_i} = (1/ε) η((ξ − τ_i)/ε),  where η(ξ) = e^{−1/(1−|ξ|²)} if |ξ| < 1, and 0 otherwise.   (3.35)

Such a choice is smooth. When ε is small enough, the domains defined by |(ξ − τ_i)/ε| < 1 are also mutually disjoint.

We then have the following proposition.
Proposition 1. For the continuous measure μ_ε, let α_{i,ε} and β_{i,ε} denote the coefficients in the three-term recurrence formula (3.3), which is valid for both continuous and discrete measures; for the discrete measure μ, let α_i and β_i denote the corresponding coefficients. Then

lim_{ε→0⁺} α_{i,ε} = α_i,  lim_{ε→0⁺} β_{i,ε} = β_i.   (3.36)

In other words, the monic orthogonal polynomials defined by μ_ε converge to those defined by μ, i.e.,

lim_{ε→0⁺} P_{i,ε}(ξ) = P_i(ξ),   (3.37)

where P_{i,ε} and P_i are the monic polynomials of order i corresponding to μ_ε and μ, respectively.

The coefficients α_{i,ε} and β_{i,ε} are given by the formulas (see equation (3.9))

α_{i,ε} = (ξP_{i,ε}, P_{i,ε})_{μ_ε} / (P_{i,ε}, P_{i,ε})_{μ_ε},  i = 0, 1, 2, ...,   (3.38)
β_{i,ε} = (P_{i,ε}, P_{i,ε})_{μ_ε} / (P_{i−1,ε}, P_{i−1,ε})_{μ_ε},  i = 1, 2, ...,   (3.39)

where (·, ·)_{μ_ε} denotes the inner product with respect to μ_ε. Correspondingly, we have

α_i = (ξP_i, P_i)_μ / (P_i, P_i)_μ,  i = 0, 1, 2, ...,   (3.40)
β_i = (P_i, P_i)_μ / (P_{i−1}, P_{i−1})_μ,  i = 1, 2, ....   (3.41)
By deļ¬nition,
Ī²0,Īµ = (1, 1)ĀµĪµ = 1, Ī²0 = (1, 1)Āµ = 1.
The argument is based on induction. We assume that the equation (3.37) is true
for k = i and k = i āˆ’ 1. When i = 0, this is trivial. To show that equation
(3.37) holds for k = i + 1, we only need to prove equation (3.36) for k = i based
on the observation that Pi+1,Īµ = (Ī¾ āˆ’ Ī±i,Īµ)Pi,Īµ āˆ’ Ī²i,ĪµPiāˆ’1,Īµ. We now show that all
38
inner products in equations (3.38) and (3.39) converges to the corresponding inner
products in equations (3.40) and (3.41) as Īµ ā†’ 0+
. We here only consider (Pi,Īµ, Pi,Īµ)ĀµĪµ
and other inner products can be dealt with in a similar way. We have
(Pi,Īµ, Pi,Īµ)ĀµĪµ = (Pi, Pi)ĀµĪµ + 2(Pi, Pi,Īµ āˆ’ Pi)ĀµĪµ + (Pi,Īµ āˆ’ Pi, Pi,Īµ āˆ’ Pi)ĀµĪµ
We then have (Pi, Pi)ĀµĪµ ā†’ (Pi, Pi)Āµ due to the deļ¬nition of ĀµĪµ. The second term on
the right-hand side can be bounded as
|(Pi, Pi,Īµ āˆ’ Pi)ĀµĪµ | ā‰¤ ess supĪ¾Piess supĪ¾(Pi,Īµ āˆ’ Pi)(1, 1)ĀµĪµ .
According to the assumption that Pi,Īµ ā†’ Pi, the right-hand side of the above in-
equality goes to zero. Similarly, (Pi,Īµ āˆ’ Pi, Pi,Īµ āˆ’ Pi)ĀµĪµ goes to zero. We then have
(Pi,Īµ, Pi,Īµ)ĀµĪµ ā†’ (Pi, Pi)Āµ. The conclusion is then achieved by induction.
Remark 1. As ε → 0⁺, the orthogonal polynomials defined by μ_ε converge to those defined by μ; hence the (Gauss) quadrature points and weights defined by μ_ε also converge to those defined by μ.
We then recall the following theorem for continuous measures.

Theorem 1 ([50]). Suppose f ∈ W^{m+1,∞}(Γ) with Γ = (0, 1)^n, and {B_i}_{i=1}^{N_e} is a non-overlapping mesh of Γ. Let h denote the maximum side length of the elements and Q^Γ_m a quadrature rule with degree of exactness m on the domain Γ (in other words, Q_m exactly integrates polynomials up to order m). Let Q^A_m be the quadrature rule on a subset A ⊂ Γ, corresponding to Q^Γ_m through an affine linear mapping. We define a linear functional on W^{m+1,∞}(A):

E_A(g) ≡ ∫_A g(ξ) μ(dξ) − Q^A_m(g),   (3.42)

whose norm in the dual space of W^{m+1,∞}(A) is defined as

‖E_A‖_{m+1,∞,A} = sup_{‖g‖_{m+1,∞,A}≤1} |E_A(g)|.   (3.43)

Then the following error estimate holds:

|∫_Γ f(ξ) μ(dξ) − Σ_{i=1}^{N_e} Q^{B_i}_m f| ≤ C h^{m+1} ‖E_Γ‖_{m+1,∞,Γ} |f|_{m+1,∞,Γ},   (3.44)

where C is a constant and ‖E_Γ‖_{m+1,∞,Γ} refers to the norm in the dual space of W^{m+1,∞}(Γ), defined in equation (3.43).
For discrete measures, we have the following theorem.

Theorem 2. Suppose the function f satisfies all assumptions required by Theorem 1. We add the following three assumptions for discrete measures: 1) the measure μ can be expressed as a product of n one-dimensional discrete measures, i.e., we consider n independent discrete random variables; 2) the quadrature rule Q^A_m can be generated from the quadrature rules given by the n one-dimensional discrete measures by the tensor product; 3) the number of possible values of the discrete measure μ is finite, and these values lie within Γ. We then have

|∫_Γ f(ξ) μ(dξ) − Σ_{i=1}^{N_e} Q^{B_i}_m f| ≤ C N_es^{−(m+1)} ‖E_Γ‖_{m+1,∞,Γ} |f|_{m+1,∞,Γ},   (3.45)

where N_es denotes the number of integration elements for each random variable.
The argument is based on Theorem 1 and the approximation μ_ε of μ. Since we assume that μ is given by n independent discrete random variables, we can define a continuous approximation (see equation (3.30)) for each one-dimensional discrete measure, and μ_ε can naturally be chosen as the product of these n continuous one-dimensional measures.

We then consider

|∫_Γ f(ξ) μ(dξ) − Σ_{i=1}^{N_e} Q^{B_i}_m f| ≤ |∫_Γ f(ξ) μ(dξ) − ∫_Γ f(ξ) μ_ε(dξ)| + |∫_Γ f(ξ) μ_ε(dξ) − Σ_{i=1}^{N_e} Q^{ε,B_i}_m f| + |Σ_{i=1}^{N_e} Q^{ε,B_i}_m f − Σ_{i=1}^{N_e} Q^{B_i}_m f|,

where Q^{ε,B_i}_m denotes the corresponding quadrature rule for the continuous measure μ_ε. Since we assume that the quadrature rules Q^{ε,B_i}_m and Q^{B_i}_m can be constructed from n one-dimensional quadrature rules, Q^{ε,B_i}_m converges to Q^{B_i}_m as ε goes to zero, based on Proposition 1 and the fact that the construction procedure for Q^{ε,B_i}_m and Q^{B_i}_m to have degree of exactness m is measure independent. For the second term on the right-hand side, Theorem 1 can be applied with a well-defined h because we assume that all possible values of μ lie within Γ; otherwise, this assumption can be achieved by a linear mapping. We then have
|∫_Γ f(ξ) μ_ε(dξ) − Σ_{i=1}^{N_e} Q^{ε,B_i}_m f| ≤ C h^{m+1} ‖E^ε_Γ‖_{m+1,∞,Γ} |f|_{m+1,∞,Γ},   (3.46)

where E^ε_Γ is the linear functional defined with respect to μ_ε. We then let ε → 0⁺. In the error bound given by equation (3.46), only ‖E^ε_Γ‖_{m+1,∞,Γ} is associated with μ_ε. According to its definition, and noting that Q^{ε,A}_m → Q^A_m,

lim_{ε→0} E^ε_A(g) = lim_{ε→0} [∫_A g(ξ) μ_ε(dξ) − Q^{ε,A}_m(g)] = E_A(g),

which is the linear functional with respect to μ. Since μ_ε → μ and Q^{ε,B_i}_m → Q^{B_i}_m, the first and third terms go to zero. However, since we are working with discrete measures, it is not convenient to use the element size; instead we use the number of elements, since h ∝ N_es^{−1}, where N_es denotes the number of elements per side. The conclusion is then reached.
The h-convergence rate of ME-PCM for discrete measures thus takes the form O(N_es^{−(m+1)}). If we employ a Gauss quadrature rule with d points, the degree of exactness is m = 2d − 1, which corresponds to an h-convergence rate of N_es^{−2d}. The extra assumptions in Theorem 2 are actually quite practical: in applications we often consider i.i.d. random variables, and the commonly used quadrature rules for high-dimensional cases, such as the tensor-product rule and sparse grids, are obtained from one-dimensional quadrature rules.
3.3.2 Testing numerical integration with one RV

We now verify the h-convergence rate numerically. We employ the Lanczos method [22] to generate the Gauss quadrature points and approximate integrals of GENZ functions [56] with respect to the binomial distribution Bino(n = 120, p = 1/2) using ME-PCM. We consider the following one-dimensional GENZ functions:

• the GENZ1 function, with oscillatory integrands:

f_1(ξ) = cos(2πw + cξ),   (3.47)

• the GENZ4 function, with Gaussian-like integrands:

f_4(ξ) = exp(−c²(ξ − w)²),   (3.48)
Figure 3.4: Left: GENZ1 functions with different values of c and w; Right: h-convergence of
ME-PCM for function GENZ1. Two Gauss quadrature points, d = 2, are employed in each element,
corresponding to a degree m = 3 of exactness; c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos
method is employed to compute the orthogonal polynomials.
where c and w are constants. Note that both the GENZ1 and GENZ4 functions are
smooth. In this section, we consider the absolute error defined as
$$\left| \int_S f(\xi)\,\mu(d\xi) - \sum_{i=1}^{d} f(\xi_i) w_i \right|,$$
where $\{\xi_i\}$ and $\{w_i\}$ ($i = 1, \dots, d$) are the d Gauss quadrature points and
weights with respect to $\mu$.
In ļ¬gures 3.4 and 3.5, we plot the h-convergence behavior of ME-PCM for GENZ1
and GENZ4 functions, respectively. In each element, two Gauss quadrature points
are employed, corresponding to a degree 3 of exactness, which means that the h-
convergence rate should be Nāˆ’4
es . In ļ¬gures 3.4 and 3.5, we see that when Nes is large
enough, the h-convergence rate of ME-PCM approaches the theoretical prediction,
demonstrated by the reference straight lines CNāˆ’4
es .
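The experiment behind these plots can be sketched as follows. This is a hedged reconstruction in Python/NumPy, not the original code, and the names `two_point_gauss` and `me_pcm` are illustrative: the support of Bino(120, 1/2) is split into $N_{es}$ elements, a two-point Gauss rule is built on each element from its conditional measure, and the absolute error for the GENZ1 integrand is recorded as $N_{es}$ grows.

```python
import math
import numpy as np

# full discrete measure: Bino(120, 1/2)
N = 120
xs = np.arange(N + 1, dtype=float)
ws = np.array([math.comb(N, k) / 2.0**N for k in range(N + 1)])

def two_point_gauss(x, w):
    """Two-point Gauss rule for the (unnormalized) discrete measure on one element."""
    mass = w.sum()
    p = w / mass                          # normalized conditional weights
    a0 = (p * x).sum()                    # alpha_0: conditional mean
    pi1 = x - a0                          # monic pi_1
    b1 = (p * pi1**2).sum()               # beta_1: conditional variance
    a1 = (p * x * pi1**2).sum() / b1      # alpha_1
    J = np.array([[a0, math.sqrt(b1)], [math.sqrt(b1), a1]])
    nodes, V = np.linalg.eigh(J)
    return nodes, mass * V[0, :]**2       # element weights carry the element mass

def me_pcm(f, n_es):
    """ME-PCM approximation of sum_k ws[k]*f(xs[k]) with n_es elements, d = 2 points each."""
    approx = 0.0
    for idx in np.array_split(np.arange(N + 1), n_es):
        nodes, weights = two_point_gauss(xs[idx], ws[idx])
        approx += (weights * f(nodes)).sum()
    return approx

c, w_par = 0.1, 1.0
f1 = lambda x: np.cos(2 * np.pi * w_par + c * x)   # GENZ1 integrand
exact = (ws * f1(xs)).sum()                        # the measure is finite, so this sum is exact
errs = [abs(me_pcm(f1, n) - exact) for n in (1, 2, 4, 8, 16, 32)]
assert errs[-1] < errs[0]   # the error decays as the number of elements grows
```

The exact reference here is simply the full weighted sum over the 121 support points; in the thesis the reference straight lines $C N_{es}^{-4}$ confirm the asymptotic rate, which this sketch only probes qualitatively.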
3.3.3 Testing numerical integration with multiple RVs on
sparse grids
An interesting question is whether the sparse grid approach is as effective for discrete mea-
sures as it is for continuous measures [170], and how it compares to the tensor-
Figure 3.5: Left: GENZ4 functions with different values of c and w; Right: h-convergence of
ME-PCM for function GENZ4. Two Gauss quadrature points, d = 2, are employed in each element,
corresponding to a degree m = 3 of exactness; c = 0.1, w = 1, ξ ∼ Bino(120, 1/2). The Lanczos
method is employed to compute the orthogonal polynomials numerically.
product grids. Let us denote the sparse grid level by k and the dimension by n.
Assume that each random dimension is independent. We apply the Smolyak algo-
rithm [149, 114, 115] to construct sparse grids, i.e.,
$$A(k + n, n) = \sum_{k+1 \le |\mathbf{i}| \le k+n} (-1)^{k+n-|\mathbf{i}|} \binom{n-1}{k+n-|\mathbf{i}|} \left( U^{i_1} \otimes \cdots \otimes U^{i_n} \right), \qquad (3.49)$$
where $A(k + n, n)$ defines a cubature formula with respect to the n-dimensional dis-
crete measure and $U^{i_j}$ defines the quadrature rule of the $i_j$-th level for the j-th dimension
[170].
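For n = 2, the combination formula (3.49) can be sketched as follows. This is an illustrative reconstruction, not the thesis code: we assume the level-l rule $U^l$ is the l-point Gauss rule for Bino(10, 1/2), and the function names are ours. Such a level-k sparse grid with Gauss rules integrates polynomials of total degree up to 2k + 1 exactly, which the sketch verifies for k = 1.

```python
import math
import itertools
import numpy as np

# one-dimensional discrete measure per dimension: Bino(10, 1/2)
xs = np.arange(11, dtype=float)
ws = np.array([math.comb(10, k) / 2.0**10 for k in range(11)])

def gauss_rule(d):
    """d-point Gauss rule for the 1D measure (Stieltjes recurrence + Golub-Welsch)."""
    a, b = np.zeros(d), np.zeros(d)
    p_prev, p_curr = np.zeros_like(xs), np.ones_like(xs)
    nrm_prev = 1.0
    for j in range(d):
        nrm = (ws * p_curr**2).sum()
        a[j] = (ws * xs * p_curr**2).sum() / nrm
        b[j] = nrm if j == 0 else nrm / nrm_prev
        p_prev, p_curr = p_curr, (xs - a[j]) * p_curr - (b[j] if j else 0.0) * p_prev
        nrm_prev = nrm
    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, b[0] * V[0, :]**2

def smolyak_2d(f, k):
    """Smolyak cubature A(k + n, n) of eq. (3.49) for n = 2, with U^l = l-point Gauss rule."""
    n, total = 2, 0.0
    for i1, i2 in itertools.product(range(1, k + 2), repeat=2):  # i_j <= k + 1
        s = i1 + i2
        if not (k + 1 <= s <= k + n):
            continue
        coeff = (-1.0)**(k + n - s) * math.comb(n - 1, k + n - s)
        (x1, w1), (x2, w2) = gauss_rule(i1), gauss_rule(i2)
        total += coeff * (np.outer(w1, w2) * f(x1[:, None], x2[None, :])).sum()
    return total

# the level-k sparse grid is exact for total polynomial degree <= 2k + 1
k = 1
for p, q in [(0, 0), (1, 0), (2, 1), (3, 0), (1, 2)]:
    exact = (ws * xs**p).sum() * (ws * xs**q).sum()
    assert abs(smolyak_2d(lambda x, y: x**p * y**q, k) - exact) < 1e-8
```

For k = 1 the surviving index pairs are (1, 1) with coefficient -1 and (1, 2), (2, 1) with coefficient +1, so the sparse grid combines low-level tensor rules rather than building the full tensor-product grid.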
We use the Gauss quadrature rule to define $U^{i_j}$, which implies that the grids at
different levels are not necessarily nested. Two-dimensional non-nested sparse grid
points are plotted in figure 3.6, where each dimension has the same discrete measure,
the binomial distribution Bino(10, 1/2). We then use sparse grids to approximate the
integrals of the following two GENZ functions with M RVs [56]:
I would like to thank my advisor, Professor George Karniadakis, for his great support and guidance throughout all my years of graduate school. I would also like to thank my committee, Professor Hui Wang and Professor Xiaoliang Wan, for taking the time to read my thesis.

In addition, I would like to thank the many collaborators I have had the opportunity to work with on various projects. In particular, I thank Professor Xiaoliang Wan for his patience in answering all of my questions and for his advice and help during our work on adaptive multi-element stochastic collocation methods. I thank Professor Boris Rozovsky for offering his innovative ideas and educational discussions on our work on the Wick-Malliavin approximation for nonlinear stochastic partial differential equations driven by discrete random variables.

I would like to gratefully acknowledge the support from the NSF/DMS (grant DMS-0915077) and the Airforce MURI (grant FA9550-09-1-0613).

Lastly, I thank all my friends, and all current and former members of the CRUNCH group, for their company and encouragement. I would like to thank all of the wonderful professors and staff at the Division of Applied Mathematics for making graduate school a rewarding experience.
  • 7. Contents Vitae iv Acknowledgments vi 1 Introduction 1 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.1.1 Computational limitations for UQ of nonlinear SPDEs . . . . 3 1.1.2 Computational limitations for UQ of SPDEs driven by LĀ“evy jump processes . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2 Introduction of TĪ±S LĀ“evy jump processes . . . . . . . . . . . . . . . . 5 1.3 Organization of the thesis . . . . . . . . . . . . . . . . . . . . . . . . 7 2 Simulation of LĀ“evy jump processes 9 2.1 Random walk approximation to Poisson processes . . . . . . . . . . . 10 2.2 KL expansion for Poisson processes . . . . . . . . . . . . . . . . . . . 11 2.3 Compound Poisson approximation to LĀ“evy jump processes . . . . . . 13 2.4 Series representation to LĀ“evy jump processes . . . . . . . . . . . . . . 18 3 Adaptive multi-element polynomial chaos with discrete measure: Algorithms and applications to SPDEs 20 3.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.2 Generation of orthogonal polynomials for discrete measures . . . . . . 22 3.2.1 Nowak method . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.2.2 Stieltjes method . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.2.3 Fischer method . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.2.4 Modiļ¬ed Chebyshev method . . . . . . . . . . . . . . . . . . . 26 3.2.5 Lanczos method . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.2.6 Gaussian quadrature rule associated with a discrete measure . 30 3.2.7 Orthogonality tests of numerically generated polynomials . . . 31 3.3 Discussion about the error of numerical integration . . . . . . . . . . 34 3.3.1 Theorem of numerical integration on discrete measure . . . . . 34 vii
3.3.2 Testing numerical integration with one RV . . . 41
3.3.3 Testing numerical integration with multiple RVs on sparse grids . . . 42
3.4 Application to stochastic reaction equation and KdV equation . . . 46
3.4.1 Reaction equation with discrete random coefficients . . . 46
3.4.2 KdV equation with random forcing . . . 48
3.5 Conclusion . . . 56

4 Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs . . . 58
4.1 Notation . . . 59
4.2 WM approximation . . . 59
4.2.1 WM series expansion . . . 60
4.2.2 WM propagators . . . 64
4.3 Moment statistics by WM approximation of stochastic reaction equations . . . 67
4.3.1 Reaction equation with one RV . . . 67
4.3.2 Reaction equation with multiple RVs . . . 70
4.4 Moment statistics by WM approximation of stochastic Burgers equations . . . 72
4.4.1 Burgers equation with one RV . . . 72
4.4.2 Burgers equation with multiple RVs . . . 75
4.5 Adaptive WM method . . . 77
4.6 Computational complexity . . . 78
4.6.1 Burgers equation with one RV . . . 79
4.6.2 Burgers equation with d RVs . . . 82
4.7 Conclusions . . . 84

5 Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes . . . 86
5.1 Literature review of Lévy flights . . . 87
5.2 Notation . . . 89
5.3 Stochastic models driven by tempered stable white noises . . . 89
5.4 Background of TαS processes . . . 91
5.5 Numerical simulation of 1D TαS processes . . . 94
5.5.1 Simulation of 1D TαS processes by CP approximation . . . 94
5.5.2 Simulation of 1D TαS processes by series representation . . . 97
5.5.3 Example: simulation of inverse Gaussian subordinators by CP approximation and series representation . . . 97
5.6 Simulation of stochastic reaction-diffusion model driven by TαS white noises . . . 100
5.6.1 Comparing CP approximation and series representation in MC . . . 101
5.6.2 Comparing CP approximation and series representation in PCM . . . 102
5.6.3 Comparing MC and PCM in CP approximation or series representation . . . 108
5.7 Simulation of 1D stochastic overdamped Langevin equation driven by TαS white noises . . . 109
5.7.1 Generalized FP equations for overdamped Langevin equations with TαS white noises . . . 110
5.7.2 Simulating density by CP approximation . . . 115
5.7.3 Simulating density by TFPDEs . . . 116
5.8 Conclusions . . . 118

6 Numerical methods for SPDEs with additive multi-dimensional Lévy jump processes . . . 121
6.1 Literature review of generalized FP equations . . . 122
6.2 Notation . . . 124
6.3 Diffusion model driven by multi-dimensional Lévy jump process . . . 124
6.4 Simulating multi-dimensional Lévy pure jump processes . . . 127
6.4.1 LePage's series representation with radial decomposition of Lévy measure . . . 128
6.4.2 Series representation with Lévy copula . . . 130
6.5 Generalized FP equation for SODEs with correlated Lévy jump processes and ANOVA decomposition of joint PDF . . . 141
6.6 Heat equation driven by bivariate Lévy jump process in LePage's representation . . . 148
6.6.1 Exact moments . . . 148
6.6.2 Simulating the moment statistics by PCM/S . . . 150
6.6.3 Simulating the joint PDF P(u1, u2, t) by the generalized FP equation . . . 154
6.6.4 Simulating moment statistics by TFPDE and PCM/S . . . 156
6.7 Heat equation driven by bivariate TS Clayton Lévy jump process . . . 157
6.7.1 Exact moments . . . 157
6.7.2 Simulating the moment statistics by PCM/S . . . 161
6.7.3 Simulating the joint PDF P(u1, u2, t) by the generalized FP equation . . . 163
6.7.4 Simulating moment statistics by TFPDE and PCM/S . . . 164
6.8 Heat equation driven by 10-dimensional Lévy jump processes in LePage's representation . . . 166
6.8.1 Heat equation driven by 10-dimensional Lévy jump processes from MC/S . . . 166
6.8.2 Heat equation driven by 10-dimensional Lévy jump processes from PCM/S . . . 168
6.8.3 Simulating the joint PDF P(u1, u2, ..., u10) by the ANOVA decomposition of the generalized FP equation . . . 170
6.8.4 Simulating the moment statistics by 2D-ANOVA-FP with dimension d = 4, 6, 10, 14 . . . 182
6.9 Conclusions . . . 184

7 Summary and future work . . . 188
7.1 Summary . . . 189
7.2 Future work . . . 191
List of Tables

4.1 For gPC with different orders P and WM with a fixed order of P = 3, Q = 2 in the reaction equation (4.23) with one Poisson RV (λ = 0.5, y0 = 1, k(ξ) = c0(ξ;λ)/2! + c1(ξ;λ)/3! + c2(ξ;λ)/4!, σ = 0.1, RK4 scheme with time step dt = 1e-4), we compare: (1) the computational complexity ratio between gPC and WM for evaluating k(t, ξ)y(t; ω) (upper); (2) the CPU time ratio between gPC and WM for computing k(t, ξ)y(t; ω) (lower). We simulated in Matlab on an Intel (R) Core (TM) i5-3470 CPU @ 3.20 GHz. . . . 69

4.2 Computational complexity ratio C(P,Q)^d / (P+1)^{3d} between WM and gPC for evaluating the u ∂u/∂x term in the Burgers equation with d RVs: here we take the WM order as Q = P − 1, and gPC with order P, in dimensions d = 2, 3, and 50. . . . 83

5.1 MC/CP vs. MC/S: error l2u2(T) of the solution of Equation (5.1) versus the number of samples s with λ = 10 (upper) and λ = 1 (lower). T = 1, c = 0.1, α = 0.5, = 0.1, µ = 2 (upper and lower). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2]; temporal discretization: first-order Euler scheme in (5.22) with time step Δt = 1 × 10^-5. In the CP approximation: RelTol = 1 × 10^-8 for integration in U(δ). . . . 102
List of Figures

2.1 Empirical CDF of the KL expansion RVs Y1, ..., YM with M = 10 KL expansion terms, for a centered Poisson process (N_t − λt) with λ = 10, T_max = 1, s = 10000 samples, and N = 200 points on the time domain [0, 1]. . . . 13

2.2 Exact sample path vs. sample path approximated by the KL expansion: when λ is smaller, the sample path is better approximated. (Brownian motion is the limiting case of a centered Poisson process with a very large birth rate.) . . . 14

2.3 Exact mean vs. mean by KL expansion: when λ is larger, the KL representation appears to be better. . . . 14

2.4 Exact 2nd moment vs. 2nd moment by KL expansion with sampled coefficients. The 2nd moments are not as well approximated as the mean. . . . 14

3.1 Orthogonality defined in (3.27) with respect to the polynomial order i up to 20 for Binomial distributions. . . . 32

3.2 CPU time to evaluate orthogonality for Binomial distributions. . . . 33

3.3 Minimum polynomial order i (vertical axis) such that orth(i) is greater than a threshold value. . . . 34

3.4 Left: GENZ1 functions with different values of c and w; Right: h-convergence of ME-PCM for function GENZ1. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness. c = 0.1, w = 1, ξ ~ Bino(120, 1/2). The Lanczos method is employed to compute the orthogonal polynomials. . . . 42

3.5 Left: GENZ4 functions with different values of c and w; Right: h-convergence of ME-PCM for function GENZ4. Two Gauss quadrature points, d = 2, are employed in each element, corresponding to a degree m = 3 of exactness. c = 0.1, w = 1, ξ ~ Bino(120, 1/2). The Lanczos method is employed for numerical orthogonality. . . . 43

3.6 Non-nested sparse grid points with respect to sparseness parameter k = 3, 4, 5, 6 for random variables ξ1, ξ2 ~ Bino(10, 1/2), where the one-dimensional quadrature formula is based on the Gauss quadrature rule. . . . 44

3.7 Convergence of sparse grids and tensor product grids to approximate E[fi(ξ1, ξ2)], where ξ1 and ξ2 are two i.i.d. random variables associated with the distribution Bino(10, 1/2). Left: f1 is GENZ1. Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method. . . . 45
3.8 Convergence of sparse grids and tensor product grids to approximate E[fi(ξ1, ξ2, ..., ξ8)], where ξ1, ..., ξ8 are eight i.i.d. random variables associated with the distribution Bino(10, 1/2). Left: f1 is GENZ1. Right: f4 is GENZ4. Orthogonal polynomials are generated by the Lanczos method. . . . 45

3.9 p-convergence of PCM with respect to the errors defined in equations (3.54) and (3.55) for the reaction equation with t = 1, y0 = 1. ξ is associated with the negative binomial distribution with c = 1/2 and β = 1. Orthogonal polynomials are generated by the Stieltjes method. . . . 47

3.10 Left: exact solution of the KdV equation (3.65) at times t = 0, 1. Right: the pointwise error for the soliton at time t = 1. . . . 49

3.11 p-convergence of PCM with respect to the errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5 and σ = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: ξ ~ Pois(10); Right: ξ ~ Bino(n = 5, p = 1/2). aPC stands for arbitrary Polynomial Chaos, which is Polynomial Chaos with respect to an arbitrary measure. Orthogonal polynomials are generated by Fischer's method. . . . 50

3.12 h-convergence of ME-PCM with respect to the errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1.05, a = 1, x0 = −5, σ = 0.2, and ξ ~ Bino(n = 120, p = 1/2), with 200 Fourier collocation points on the spatial domain [−30, 30], where two collocation points are employed in each element. Orthogonal polynomials are generated by the Fischer method (left) and the Stieltjes method (right). . . . 51

3.13 Adapted mesh with five elements with respect to the Pois(40) distribution. . . . 52

3.14 p-convergence of ME-PCM on a uniform mesh and an adapted mesh with respect to the errors defined in equations (3.67) and (3.68) for the KdV equation with t = 1, a = 1, x0 = −5, σ = 0.2, and ξ ~ Pois(40), with 200 Fourier collocation points on the spatial domain [−30, 30]. Left: errors of the mean. Right: errors of the second moment. Orthogonal polynomials are generated by the Nowak method. . . . 53

3.15 ξ1, ξ2 ~ Bino(10, 1/2): convergence of sparse grids and tensor product grids with respect to the errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method. . . . 54

3.16 ξ1 ~ Bino(10, 1/2) and ξ2 ~ N(0, 1): convergence of sparse grids and tensor product grids with respect to the errors defined in equations (3.67) and (3.68) for problem (3.69), where t = 1, a = 1, x0 = −5, and σ1 = σ2 = 0.2, with 200 Fourier collocation points on the spatial domain [−30, 30]. Orthogonal polynomials are generated by the Lanczos method. . . . 55

3.17 Convergence of sparse grids and tensor product grids with respect to the errors defined in equations (3.67) and (3.68) for problem (3.70), where t = 0.5, a = 0.5, x0 = −5, σi = 0.1 and ξi ~ Bino(5, 1/2), i = 1, 2, ..., 8, with 300 Fourier collocation points on the spatial domain [−50, 50]. Orthogonal polynomials are generated by the Lanczos method. . . . 56
4.1 Reaction equation with one Poisson RV ξ ~ Pois(λ) (d = 1): errors versus final time T, defined in (4.34), for different WM orders Q in equation (4.27), with polynomial order P = 10, y0 = 1, λ = 0.5. We used the RK4 scheme with time step dt = 1e-4; k(ξ) = c0(ξ;λ)/2! + c1(ξ;λ)/3! + c2(ξ;λ)/4!, σ = 0.1 (left); k(ξ) = c0(ξ;λ)/0! + c1(ξ;λ)/3! + c2(ξ;λ)/6!, σ = 1 (right). . . . 68

4.2 Reaction equation with five Poisson RVs ξ1,...,5 ~ Pois(λ) (d = 5): error defined in (4.34) with respect to time, for different WM orders Q, with parameters: λ = 1, σ = 0.5, y0 = 1, polynomial order P = 4, RK2 scheme with time step dt = 1e-3, and k(ξ1, ξ2, ..., ξ5, t) = Σ_{i=1}^{5} cos(it) c1(ξi) in equation (4.23). . . . 70

4.3 Reaction equation with one Poisson RV ξ1 ~ Pois(λ) and one Binomial RV ξ2 ~ Bino(N, p) (d = 2): error defined in (4.34) with respect to time, for different WM orders Q, with parameters: λ = 1, σ = 0.1, N = 10, p = 1/2, y0 = 1, polynomial order P = 10, RK4 scheme with time step dt = 1e-4, and k(ξ1, ξ2, t) = c1(ξ1)k1(ξ2) in equation (4.23). . . . 71

4.4 Burgers equation with one Poisson RV ξ ~ Pois(λ) (d = 1, ψ1(x, t) = 1): l2u2(T) error defined in (6.62) versus time, for different WM orders Q. Here we take in equation (4.32): polynomial expansion order P = 6, λ = 1, ν = 1/2, σ = 0.1, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 2e-4, and 100 Fourier collocation points on [−π, π]. . . . 73

4.5 P-convergence for the Burgers equation with one Poisson RV ξ ~ Pois(λ) (d = 1, ψ1(x, t) = 1): errors defined in equation (6.62) versus polynomial expansion order P, for different WM orders Q, and by the probabilistic collocation method (PCM) with P + 1 points, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (Crank-Nicolson/RK2) scheme with time step dt = 5e-4, 100 Fourier collocation points on [−π, π], σ = 0.5 (left), and σ = 1 (right). . . . 73

4.6 Q-convergence for the Burgers equation with one Poisson RV ξ ~ Pois(λ) (d = 1, ψ1(x, t) = 1): errors defined in equation (6.62) versus WM order Q, for different polynomial orders P, with the following parameters: ν = 1, λ = 1, final time T = 0.5, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 5e-4, 100 Fourier collocation points on [−π, π], σ = 0.5 (left), and σ = 1 (right). The dashed lines serve as a reference for the convergence rate. . . . 74

4.7 Burgers equation with three Poisson RVs ξ1,2,3 ~ Pois(λ) (d = 3): error defined in equation (6.62) with respect to time, for different WM orders Q, with parameters: λ = 0.1, σ = 0.1, y0 = 1, ν = 1/100, polynomial order P = 2, IMEX (RK2/Crank-Nicolson) scheme with time step dt = 2.5e-4. . . . 76

4.8 Reaction equation with P-adaptivity and two Poisson RVs ξ1,2 ~ Pois(λ) (d = 2): error defined in (4.34) by computing the WM propagator in equation (4.27) with respect to time by the RK2 method with: fixed WM order Q = 1, y0 = 1, ξ1,2 ~ Pois(1), a(ξ1, ξ2, t) = c1(ξ1; λ)c1(ξ2; λ), for fixed polynomial order P (dashed lines) and for varied polynomial order P (solid lines), for σ = 0.1 (left) and σ = 1 (right). Adaptive criterion values: l2err(t) ≤ 1e-8 (left), and l2err(t) ≤ 1e-6 (right). . . . 77
4.9 Burgers equation with P-Q-adaptivity and one Poisson RV ξ ~ Pois(λ) (d = 1, ψ1(x, t) = 1): error defined in equation (6.62) by computing the WM propagator in equation (4.32) with the IMEX (RK2/Crank-Nicolson) method (λ = 1, ν = 1/2, time step dt = 2e-4). Fixed polynomial order P = 6, σ = 1, and Q is varied (left); fixed WM order Q = 3, σ = 0.1, and P is varied (right). Adaptive criterion value: l2u2(T) ≤ 1e-10 (left and right). . . . 78

4.10 Terms in Σ_{p=0}^{Q} Σ_{i=0}^{P} û_i (∂û_{k+2p−i}/∂x) K_{i,k+2p−i,p} for each PDE in the WM propagator for the Burgers equation with one RV in equation (4.38) are denoted by dots on the grids: here P = 4, Q = 1, 2, k = 0, 1, 2, 3, 4. Each grid represents a PDE in the WM propagator, labeled by k. Each dot represents a term in the sum Σ_{p=0}^{Q} Σ_{i=0}^{P} û_i (∂û_{k+2p−i}/∂x) K_{i,k+2p−i,p}. The small index next to the dot is p, the x direction is the index i of û_i, and the y direction is the index k + 2p − i in ∂û_{k+2p−i}/∂x. The dots on the same diagonal line have the same index p. . . . 81

4.11 The total number of terms û_{m1,...,md} (∂/∂x) û_{k1+2p1−m1,...,kd+2pd−md} K_{m1,k1+2p1−m1,p1} ... K_{md,kd+2pd−md,pd} in the WM propagator for the Burgers equation with d RVs, as C(P, Q)^d: for dimensions d = 2 (left) and d = 3 (right). Here we assume P1 = ... = Pd = P and Q1 = ... = Qd = Q. . . . 83

5.1 Empirical histograms of an IG subordinator (α = 1/2) simulated via the CP approximation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation contains s = 10^6 samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); the jump truncation sizes are δ = 0.1 (left, dotted, CPU time 1450 s), δ = 0.02 (middle, dotted, CPU time 5710 s), and δ = 0.005 (right, dotted, CPU time 38531 s). The reference PDFs are plotted as red solid lines; the one-sample K-S test values are calculated for each plot; the RelTol of integration in U(δ) and b_δ is 1 × 10^-8. These runs were done on an Intel (R) Core (TM) i5-3470 CPU @ 3.20 GHz in Matlab. . . . 99

5.2 Empirical histograms of an IG subordinator (α = 1/2) simulated via the series representation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation is done on the time domain [0, 0.5] and contains s = 10^6 samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); the numbers of truncations in the series are Qs = 10 (left, dotted, CPU time 129 s), Qs = 100 (middle, dotted, CPU time 338 s), and Qs = 1000 (right, dotted, CPU time 2574 s). The reference PDFs are plotted as red solid lines; the one-sample K-S test values are calculated for each plot. These runs were done on an Intel (R) Core (TM) i5-3470 CPU @ 3.20 GHz in Matlab. . . . 99

5.3 PCM/CP vs. PCM/S: error l2u2(T) of the solution of Equation (5.1) versus the number of jumps Qcp (in PCM/CP) or Qs (in PCM/S) with λ = 10 (left) and λ = 1 (right). T = 1, c = 0.1, α = 0.5, = 0.1, µ = 2, Nx = 500 Fourier collocation points on [0, 2] (left and right). In PCM/CP: RelTol = 1 × 10^-10 for integration in U(δ). In PCM/S: RelTol = 1 × 10^-8 for the integration of E[((αδj/(2cT))^{-1/α} ∧ ηj ξj^{1/α})^2]. . . . 107
5.4 PCM vs. MC: error l2u2(T) of the solution of Equation (5.1) versus the number of samples s, obtained by MC/CP and PCM/CP with δ = 0.01 (left) and by MC/S with Qs = 10 and PCM/S (right). T = 1, c = 0.1, α = 0.5, λ = 1, = 0.1, µ = 2 (left and right). Spatial discretization: Nx = 500 Fourier collocation points on [0, 2] (left and right); temporal discretization: first-order Euler scheme in (5.22) with time step Δt = 1 × 10^-5 (left and right). In both MC/CP and PCM/CP: RelTol = 1 × 10^-8 for integration in U(δ). . . . 109

5.5 Zoomed-in density Pts(t, x) plots for the solution of Equation (5.2) at different times obtained from solving Equation (5.37) for α = 0.5 (left) and Equation (5.42) for α = 1.5 (right): σ = 0.4, x0 = 1, c = 1, λ = 10 (left); σ = 0.1, x0 = 1, c = 0.01, λ = 0.01 (right). We have Nx = 2000 equidistant spatial points on [−12, 12] (left); Nx = 2000 points on [−20, 20] (right). Time step is Δt = 1 × 10^-4 (left) and Δt = 1 × 10^-5 (right). The initial conditions are approximated by δ^D_20 (left and right). . . . 114

5.6 Density/CP vs. PCM/CP with the same δ: errors err1st and err2nd of the solution of Equation (5.2) versus time, obtained by the density Equation (5.36) with CP approximation and by PCM/CP in Equation (5.55). c = 0.5, α = 0.95, λ = 10, σ = 0.01, x0 = 1 (left); c = 0.01, α = 1.6, λ = 0.1, σ = 0.02, x0 = 1 (right). In density/CP: RK2 with time step Δt = 2 × 10^-3, 1000 Fourier collocation points on [−12, 12] in space, δ = 0.012, RelTol = 1 × 10^-8 for U(δ), and initial condition δ^D_20 (left and right). In PCM/CP: the same δ = 0.012 as in density/CP. . . . 116

5.7 TFPDE vs. PCM/CP: error err2nd of the solution of Equation (5.2) versus time with λ = 10 (left) and λ = 1 (right). Problems we are solving: α = 0.5, c = 2, σ = 0.1, x0 = 1 (left and right). For PCM/CP: RelTol = 1 × 10^-8 for U(δ) (left and right). For the TFPDE: finite difference scheme in (5.47) with Δt = 2.5 × 10^-5, Nx equidistant points on [−12, 12], initial condition given by δ^D_40 (left and right). . . . 118

5.8 Zoomed-in plots of the density Pts(x, T) obtained by solving the TFPDE (5.37) and the empirical histogram by MC/CP at T = 0.5 (left) and T = 1 (right): α = 0.5, c = 1, λ = 1, x0 = 1 and σ = 0.01 (left and right). In MC/CP: sample size s = 10^5, 316 bins, δ = 0.01, RelTol = 1 × 10^-8 for U(δ), time step Δt = 1 × 10^-3 (left and right). In the TFPDE: finite difference scheme given in (5.47) with Δt = 1 × 10^-5 in time, Nx = 2000 equidistant points on [−12, 12] in space, and initial conditions approximated by δ^D_40 (left and right). We perform one-sample K-S tests here to check how well the two methods match. . . . 119

6.1 An illustration of the applications of multi-dimensional Lévy jump models in mathematical finance. . . . 127

6.2 Three ways to correlate Lévy pure jump processes. . . . 128

6.3 The Lévy measures of bivariate tempered stable Clayton processes with different dependence strengths (described by the correlation length τ) between their L1 and L2 components. . . . 133
6.4 The Lévy measures of bivariate tempered stable Clayton processes with different dependence strengths (described by the correlation length τ) between their L1^{++} and L2^{++} components (only in the ++ corner). It shows how the dependence structure changes with respect to the parameter τ in the Clayton family of copulas. . . . 134

6.5 Trajectories of the components L1^{++}(t) (in blue) and L2^{++}(t) (in green), whose dependence is described by a Clayton copula with dependence structure parameter τ. Observe how the trajectories become more similar as τ increases. . . . 137

6.6 Sample path of (L1, L2) with marginal Lévy measure given by equation (6.14) and Lévy copula given by (6.13), with each component such as F^{++} given by a Clayton copula with parameter τ. Observe that when τ is bigger, the 'flipping' motion happens more symmetrically, because there is an equal chance for jumps to have the same sign with the same size and to have opposite signs with the same size. . . . 139

6.7 Sample paths of bivariate tempered stable Clayton Lévy jump processes (L1, L2) simulated by the series representation given in Equation (6.30). We simulate two sample paths for each value of τ. . . . 140

6.8 An illustration of the three methods used in this paper to solve the moment statistics of Equation (6.1). . . . 140

6.9 An illustration of the three methods used in this paper to solve the moment statistics of Equation (6.1). . . . 147

6.10 An illustration of the three methods used in this paper to solve the moment statistics of Equation (6.1). . . . 148

6.11 PCM/S (probabilistic) vs. MC/S (probabilistic): error l2u2(t) of the solution of Equation (6.1) with a bivariate pure jump Lévy process whose Lévy measure in radial decomposition is given by Equation (6.9), versus the number of samples s obtained by MC/S and PCM/S (left) and versus the number of collocation points per RV obtained by PCM/S with a fixed number of truncations Q in Equation (6.10) (right). t = 1, c = 1, α = 0.5, λ = 5, µ = 0.01, NSR = 16.0% (left and right). In MC/S: first-order Euler scheme with time step Δt = 1 × 10^-3 (right). . . . 151

6.12 PCM/S vs. exact: T = 1. We test the noise/signal = variance/mean ratio to be 4% at T = 1. . . . 152

6.13 PCM/S d-convergence and Q-convergence at T = 1. We test the noise/signal = variance/mean ratio to be 4% at t = 1. The l2u2 error is defined as l2u2(t) = ||Eex[u^2(x,t;ω)] − Enum[u^2(x,t;ω)]||_{L2([0,2])} / ||Eex[u^2(x,t;ω)]||_{L2([0,2])}. . . . 153

6.14 MC vs. exact: T = 1. Choice of parameters for this problem: we evaluated the moment statistics numerically with an integration relative tolerance of 10^-8. With this set of parameters, we test the noise/signal = variance/mean ratio to be 4% at T = 1. . . . 153

6.15 MC vs. exact: T = 2. Choice of parameters for this problem: we evaluated the moment statistics numerically with an integration relative tolerance of 10^-8. With this set of parameters, we test the noise/signal = variance/mean ratio to be 10% at T = 2. . . . 154
6.16 FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t) of the SODE system in Equation (6.59) from the FP Equation (6.41) (3D contour plot), joint histogram by MC/S (2D contour plot on the x-y plane), and horizontal (subfigure) and vertical (subfigure) slices at the peaks of the density surface from the FP equation and MC/S. Final time is t = 1 (left, NSR = 16.0%) and t = 1.5 (right). c = 1, α = 0.5, λ = 5, µ = 0.01. In MC/S: first-order Euler scheme with time step Δt = 1 × 10^-3, 200 bins in both the u1 and u2 directions, Q = 40, sample size s = 10^6. In FP: initial condition is given by MC data at t0 = 0.5, RK2 scheme with time step Δt = 4 × 10^-3. . . . 155

6.17 TFPDE (deterministic) vs. PCM/S (probabilistic): error l2u2(t) of the solution of Equation (6.1) with a bivariate pure jump Lévy process whose Lévy measure in radial decomposition is given by Equation (6.9), obtained by PCM/S in Equation (6.64) (stochastic approach) and TFPDE in Equation (6.41) (deterministic approach), versus time. α = 0.5, λ = 5, µ = 0.001 (left and right). c = 0.1 (left); c = 1 (right). In TFPDE: initial condition is given by δ^G_2000 in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^-3. . . . 156

6.18 Exact mean, variance, and NSR versus time. The noise/signal ratio is 10% at T = 0.5. . . . 160

6.19 PCM/S (probabilistic) vs. MC/S (stochastic): error l2u2(t) of the solution of Equation (6.1) driven by a bivariate TS Clayton Lévy process with the Lévy measure given in Section 1.2.2, versus the number of truncations Q in the series representation (6.32) by PCM/S (left) and versus the number of samples s in MC/S with the series representation (6.30) by computing Equation (6.59) (right). t = 1, α = 0.5, λ = 5, µ = 0.01, τ = 1 (left and right). c = 0.1, NSR = 10.1% (right). In MC/S: first-order Euler scheme with time step Δt = 1 × 10^-2 (right). . . . 162

6.20 Q-convergence (with various λ) of PCM/S in Equation (6.64): α = 0.5, µ = 0.01, RelTol of integration of the moments of jump sizes is 1e-8. . . . 162

6.21 FP (deterministic) vs. MC/S (probabilistic): joint PDF P(u1, u2, t) of the SODE system in Equation (6.59) from the FP Equation (6.40) (three-dimensional contour plot), joint histogram by MC/S (2D contour plot on the x-y plane), and horizontal (left, subfigure) and vertical (right, subfigure) slices at the peak of the density surfaces from the FP equation and MC/S. Final time t = 1 (left) and t = 1.5 (right). c = 0.5, α = 0.5, λ = 5, µ = 0.005, τ = 1 (left and right). In MC/S: first-order Euler scheme with time step Δt = 0.02, Q = 2 in the series representation (6.30), sample size s = 10^4. 40 bins in both the u1 and u2 directions (left); 20 bins in both the u1 and u2 directions (right). In FP: initial condition is given by δ^G_1000 in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^-3. . . . 164

6.22 TFPDE (deterministic) vs. PCM/S (stochastic): error l2u2(t) of the solution of Equation (6.1) driven by a bivariate TS Clayton Lévy process with the Lévy measure given in Section 1.2.2, versus time, obtained by PCM/S in Equation (6.81) (stochastic approach) and TFPDE (6.40) (deterministic approach). c = 1, α = 0.5, λ = 5, µ = 0.01 (left and right). c = 0.05, µ = 0.001 (left). c = 1, µ = 0.005 (right). In TFPDE: initial condition is given by δ^G_1000 in Equation (6.67), RK2 scheme with time step Δt = 4 × 10^-3. . . . 165
6.23 s-convergence in MC/S with 10-dimensional Lévy jump processes: difference in E[u^2] (left) between different sample sizes s and s = 10^6 (as a reference). The heat equation (6.1) is driven by a 10-dimensional jump process with the Lévy measure (6.9), solved by MC/S with the series representation (6.10). We show the L2 norm of these differences versus s (right). Final time T = 1, c = 0.1, α = 0.5, λ = 10, µ = 0.01, time step Δt = 4 × 10^-3, and Q = 10. The NSR at T = 1 is 6.62%. . . . 167

6.24 Samples of (u1, u2) (left) and joint PDF of (u1, u2, ..., u10) on the (u1, u2) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e-3 (first-order Euler scheme), T = 1, Q = 10 (number of truncations in the series representation), and sample size s = 10^6. . . . 167

6.25 Samples of (u9, u10) (left) and joint PDF of (u1, u2, ..., u10) on the (u9, u10) plane by MC (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e-3 (first-order Euler scheme), T = 1, Q = 10 (number of truncations in the series representation), and sample size s = 10^6. . . . 168

6.26 First two moments of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with the Lévy measure (6.9), obtained by MC/S with the series representation (6.10) at final time T = 0.5 (left) and T = 1 (right): c = 0.1, α = 0.5, λ = 10, µ = 0.01, dt = 4e-3 (first-order Euler scheme), Q = 10, and sample size s = 10^6. . . . 169

6.27 Q-convergence in PCM/S with 10-dimensional Lévy jump processes: difference in E[u^2] (left) between different series truncation orders Q and Q = 16 (as a reference). The heat equation (6.1) is driven by a 10-dimensional jump process with the Lévy measure (6.9), using the series representation (6.10). We show the L2 norm of these differences versus Q (right). Final time T = 1, c = 0.1, α = 0.5, λ = 10, µ = 0.01. The NSR at T = 1 is 6.62%. . . . 169

6.28 MC/S vs. PCM/S with 10-dimensional Lévy jump processes: difference between E[u^2] computed from MC/S and from PCM/S at final time T = 0.5 (left) and T = 1 (right). The heat equation (6.1) is driven by a 10-dimensional jump process with the Lévy measure (6.9), using the series representation (6.10). c = 0.1, α = 0.5, λ = 10, µ = 0.01. In MC/S: time step Δt = 4 × 10^-3, Q = 10. In PCM/S: Q = 16. . . . 170

6.29 The function in Equation (6.82) with d = 2 (upper left and lower left) and its ANOVA approximation with effective dimension two (upper right and lower right). A = 0.5, d = 2. . . . 173

6.30 The function in Equation (6.82) with d = 2 (upper left and lower left) and its ANOVA approximation with effective dimension two (upper right and lower right). A = 0.1, d = 2. . . . 173

6.31 The function in Equation (6.82) with d = 2 (upper left and lower left) and its ANOVA approximation with effective dimension two (upper right and lower right). A = 0.01, d = 2. . . . 174
6.32 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Lévy jump processes: the mean (left) of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with the Lévy measure (6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The L2 norms of the differences in E[u] between these three methods are plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10^-4. In 1D-ANOVA-FP: Δt = 4 × 10^-3 in RK2, M = 30 elements, q = 4 GLL points on each element. In 2D-ANOVA-FP: Δt = 4 × 10^-3 in RK2, M = 5 elements in each direction, q^2 = 16 GLL points on each element. In PCM/S: Q = 10 in the series representation (6.10). Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4, Δt = 4 × 10^-3. NSR ≈ 18.24% at T = 1. . . . 175

6.33 1D-ANOVA-FP vs. 2D-ANOVA-FP with 10-dimensional Lévy jump processes: the second moment (left) of the solution of the heat equation (6.1) driven by a 10-dimensional jump process with the Lévy measure (6.9), computed by 1D-ANOVA-FP, 2D-ANOVA-FP, and PCM/S. The L2 norms of the differences in E[u^2] between these three methods are plotted versus final time T (right). c = 1, α = 0.5, λ = 10, µ = 10^-4. In 1D-ANOVA-FP: Δt = 4 × 10^-3 in RK2, M = 30 elements, q = 4 GLL points on each element. In 2D-ANOVA-FP: Δt = 4 × 10^-3 in RK2, M = 5 elements in each direction, q^2 = 16 GLL points on each element. Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 × 10^4, Δt = 4 × 10^-3. In PCM/S: Q = 10 in the series representation (6.10). NSR ≈ 18.24% at T = 1. . . . 176

6.34 Evolution of the marginal distributions pi(xi, t) at final times t = 0.6, ..., 1. c = 1, α = 0.5, λ = 10, µ = 10^-4. Initial condition from MC: t0 = 0.5, s = 10^4, dt = 4 × 10^-3, Q = 10. 1D-ANOVA-FP: RK2 with time step dt = 4 × 10^-3, 30 elements with 4 GLL points on each element. . . . 177

6.35 The mean E[u] at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 1e-4. Initial condition from MC: s = 10^4, dt = 4 × 10^-3, Q = 10. 1D-ANOVA-FP: RK2 with dt = 4 × 10^-3, 30 elements with 4 GLL points on each element. . . . 178

6.36 The second moment E[u^2] at different final times by PCM (Q = 10) and by solving the 1D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 1e-4. Initial condition from MC: s = 10^4, dt = 4 × 10^-3, Q = 10. 1D-ANOVA-FP: RK2 with dt = 4 × 10^-3, 30 elements with 4 GLL points on each element. . . . 179

6.37 The second moment E[u^2] at different final times by PCM (Q = 10) and by solving the 2D-ANOVA-FP equations. c = 1, α = 0.5, λ = 10, µ = 10^-4. Initial condition from MC: s = 10^4, dt = 4 × 10^-3, Q = 10. 2D-ANOVA-FP: RK2 with dt = 4 × 10^-3, 30 elements with 4 GLL points on each element. . . . 180

6.38 Left: sensitivity index defined in Equation (6.87) for each pair (i, j), j ≥ i. Right: sensitivity index defined in Equation (6.88) for each pair (i, j), j ≥ i. They are computed from the MC data at t0 = 0.5 with s = 10^4 samples. . . . 182
  • 21. 6.39 Error growth by 2D-ANOVA-FP in diļ¬€erent dimension d:the error growth l2u1rel(T; t0) in E[u] deļ¬ned in Equation (6.91) versus ļ¬nal time T (left); the error growth l2u2rel(T; t0) in E[u2 ] deļ¬ned in Equation (6.92) versus T (middle); l2u1rel(T = 1; t0) and l2u2rel(T = 1; t0) versus dimension d (right). We consider the diļ¬€usion equation (6.1) driven by a d-dimensional jump process with a LĀ“evy measure (6.9) computed by 2D-ANOVA-FP, and PCM/S. c = 1, Ī± = 0.5, Āµ = 10āˆ’4 (left, middle, right). In Equation (6.49): t = 4 Ɨ 10āˆ’3 in RK2, M = 30 elements, q = 4 GLL points on each element. In Equation (6.50): t = 4 Ɨ 10āˆ’3 in RK2, M = 5 elements on each direction, q2 = 16 GLL points on each element. Initial condition of ANOVA-FP: MC/S data at t0 = 0.5, s = 1 Ɨ 104 , t = 4 Ɨ 10āˆ’3 , and Q = 16. In PCM/S: Q = 16 in the series representation (6.10). NSR ā‰ˆ 20.5% at T = 1 for all the dimensions d = 2, 4, 6, 10, 14, 18. These runs were done on Intel (R) Core (TM) i5-3470 CPU @ 3.20 GHz in Matlab. . . 184 7.1 Summary of thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 xxi
1.1 Motivation

Stochastic partial differential equations (SPDEs) are widely used for stochastic modeling in diverse applications from physics to engineering, biology, and many other fields, where the sources of uncertainty include random coefficients and stochastic forcing. Our work is motivated both by applications and by the shortcomings of past work. In practice, the source of uncertainty can be any non-Gaussian process. In many cases, the random parameters are only observed at discrete values, which implies that a discrete probability measure is more appropriate from the modeling point of view. More generally, random processes with jumps are of fundamental importance in stochastic modeling; many complex systems of fundamental and industrial importance are significantly affected by the underlying fluctuations/variations in random excitations, e.g., stochastic-volatility jump-diffusion models in mathematical finance [12, 13, 24, 27, 28, 171], stochastic simulation algorithms for modeling diffusion, reaction, and taxis in biology [41], fluid models with jumps and the truncated Lévy flight model in turbulence [85, 106, 121, 158], quantum-jump models in physics [35], etc. This motivates our work on simulating SPDEs driven by discrete random variables (RVs). Nonlinear SPDEs with discrete RVs and jump processes are of practical use, since sources of stochastic excitation, including uncertain parameters and boundary/initial conditions, are typically observed at discrete values. An interesting source of uncertainty is Lévy jump processes, such as tempered
α stable (TαS) processes. TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Lévy flight model [85, 106, 121], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [27, 28]. The empirical distribution of asset prices is not always a stable distribution or a normal distribution: its tail is heavier than that of a normal distribution and thinner than that of a stable distribution [20]. Therefore, the TαS process was introduced, as the CGMY model, to modify the Black-Scholes model. More details on white noise theory for Lévy jump processes, with applications to SPDEs and finance, can be found in [18, 96, 97, 120, 124]. Although one-dimensional (1D) jump models in finance are constructed with Lévy processes [14, 86, 100], many financial models require multi-dimensional Lévy jump processes with dependent components [33], such as basket option pricing [94], portfolio optimization [39], and risk scenarios for portfolios [33]. Historically, multi-dimensional Gaussian models have been widely applied in finance because of the simplicity of their dependence structures [134]; in some applications, however, we must take jumps in price processes into account [27, 28]. This work builds on previous work in the field of uncertainty quantification (UQ), which includes the generalized polynomial chaos method (gPC), the multi-element generalized polynomial chaos method (MEgPC), the probabilistic collocation method (PCM), sparse collocation methods, analysis of variance (ANOVA), and many other variants (see, e.g., [8, 9, 50, 52, 58, 156] and references therein).

1.1.1 Computational limitations for UQ of nonlinear SPDEs

Numerically, nonlinear SPDEs with discrete processes are often solved by gPC, involving a system of coupled deterministic nonlinear equations [169], or by the probabilistic collocation method (PCM) [50, 170, 177], involving the nonlinear corresponding PDEs
obtained at the collocation points. For stochastic processes with short correlation length, the number of RVs required to represent the process can be extremely large. Therefore, the number of equations involved in the gPC propagator for a nonlinear SPDE driven by such a process can be very large, and the equations highly coupled.

1.1.2 Computational limitations for UQ of SPDEs driven by Lévy jump processes

For simulations of Lévy jump processes such as TαS processes, we do not know the distribution of the increments explicitly [33], but we may still simulate the trajectories of TαS processes by the random walk approximation [10]. However, the random walk approximation does not identify the times and sizes of the large jumps precisely [139, 140, 141, 142]. In the heavy-tailed case, large jumps contribute more than small jumps to functionals of a Lévy process. Therefore, in this case, we have mainly used two other ways to simulate the trajectories of a TαS process numerically: the compound Poisson (CP) approximation [33] and the series representation [140]. In the CP approximation, we replace the jumps smaller than a certain size δ by their expectation, and treat the remaining process with larger jumps as a CP process [33]. There are six different series representations of Lévy jump processes: the inverse Lévy measure method [44, 82], LePage's method [92], Bondesson's method [23], the thinning method [140], the rejection method [139], and the shot noise method [140, 141]. However, in each representation, the number of RVs involved is very large (such as 100). In this work, for TαS processes, we will use the shot noise representation of Lt as a series representation method, because the tail of the Lévy measure of a TαS process does not have an explicit inverse [142]. Both the CP and the series approximations converge slowly when the jumps of the Lévy process are highly concentrated around zero; however, both can
be improved by replacing the small jumps by a Brownian motion [6]. The α-stable distribution was introduced to model the empirical distribution of asset prices [104], replacing the normal distribution. In the past literature, the simulation of SDEs or functionals of TαS processes was mainly done via MC [128]. MC for functionals of TαS processes is possible after a change of measure that transforms TαS processes into stable processes [130].

1.2 Introduction to TαS Lévy jump processes

TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Lévy flight model [85, 106, 121], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [27, 28]. Here, we consider a symmetric TαS process (L_t) as a pure jump Lévy martingale with characteristic triplet (0, ν, 0) [19, 143] (no drift and no Gaussian part). The Lévy measure is given by [33]¹:

ν(x) = c e^{−λ|x|} / |x|^{α+1}, 0 < α < 2. (1.1)

This Lévy measure can be interpreted as an Esscher transformation [57] of that of a stable process, with exponential tilting of the Lévy measure. The parameter c > 0 alters the intensity of jumps of all sizes; it changes the time scale of the process. Also, λ > 0 fixes the decay rate of the big jumps, while α determines the relative importance of the small jumps in the path of the process². The probability density of L_t at a given time is not available in closed form (except when α = 1/2³).

¹ In a more general form, the Lévy measure is ν(x) = c₋ e^{−λ₋|x|}/|x|^{α+1} I_{x<0} + c₊ e^{−λ₊|x|}/|x|^{α+1} I_{x>0}; we may have different coefficients c₊, c₋, λ₊, λ₋ on the positive and negative jump parts.
² In the case α = 0, L_t is the gamma process.
³ See inverse Gaussian processes.
The characteristic exponent of L_t is [33]:

Φ(s) = t^{−1} log E[e^{isL_t}] = 2Γ(−α)λ^α c[(1 − is/λ)^α − 1 + isα/λ], α ≠ 1, (1.2)

where Γ(x) is the Gamma function and E denotes expectation. By taking derivatives of the characteristic exponent we obtain the mean and variance:

E[L_t] = 0, Var[L_t] = 2tΓ(2 − α)cλ^{α−2}. (1.3)

In order to derive the second moments of the exact solutions of Equations (5.1) and (5.2), we introduce the Itô isometry. The jump of L_t is defined by ΔL_t = L_t − L_{t⁻}. We define the Poisson random measure N(t, U) as [71, 119, 123]:

N(t, U) = Σ_{0≤s≤t} I_{ΔL_s ∈ U}, U ∈ B(R₀), Ū ⊂ R₀. (1.4)

Here R₀ = R\{0}, and B(R₀) is the σ-algebra generated by the family of all Borel subsets U ⊂ R such that Ū ⊂ R₀; I_A is an indicator function. The Poisson random measure N(t, U) counts the number of jumps of size ΔL_s ∈ U up to time t. In order to introduce the Itô isometry, we define the compensated Poisson random measure Ñ [71] as:

Ñ(dt, dz) = N(dt, dz) − ν(dz)dt = N(dt, dz) − E[N(dt, dz)]. (1.5)

The TαS process L_t (as a martingale) can also be written as:

L_t = ∫₀ᵗ ∫_{R₀} z Ñ(dτ, dz). (1.6)

For any t, let F_t be the σ-algebra generated by (L_t, Ñ(ds, dz)), z ∈ R₀, s ≤ t. We define the filtration to be F = {F_t, t ≥ 0}. If a stochastic process θ_t(z), t ≥ 0, z ∈ R₀
is F_t-adapted, we have the following Itô isometry [119]:

E[(∫₀ᵀ ∫_{R₀} θ_t(z) Ñ(dt, dz))²] = E[∫₀ᵀ ∫_{R₀} θ_t²(z) ν(dz)dt]. (1.7)

1.3 Organization of the thesis

In Chapter 2, we discuss four methods to simulate Lévy jump processes, as preliminaries and background information for the reader: 1. the random walk approximation; 2. the Karhunen-Loève expansion; 3. the compound Poisson approximation; 4. the series representation. In Chapter 3, the methods of generating orthogonal polynomial bases with respect to discrete measures are presented, followed by a discussion of the error of numerical integration. Numerical solutions of the stochastic reaction equation and the Korteweg-de Vries (KdV) equation, including adaptive procedures, are explained, and the work is summarized. In the appendices, we provide more details about the deterministic KdV equation solver and the adaptive procedure. In Chapter 4, we define the WM expansion and derive the Wick-Malliavin propagators for a stochastic reaction equation and a stochastic Burgers equation. We present several numerical results for SPDEs with one RV and with multiple RVs, including an adaptive procedure to control the error in time. We also compare the computational complexity of gPC and WM for the stochastic Burgers equation at the same level of accuracy, and we provide an iterative algorithm to generate the coefficients in the WM approximation. In Chapter 5, we compare the CP approximation and the series representation
of a TαS process. We solve a stochastic reaction-diffusion equation with TαS white noise via MC and PCM, each combined with either the CP approximation or the series representation of the TαS process. We simulate the density evolution for an overdamped Langevin equation with TαS white noise via the corresponding generalized Fokker-Planck (FP) equations, and we compare the statistics obtained from the FP equations with those from the MC and PCM methods. We also provide algorithms for the rejection method and for the simulation of CP processes, as well as the probability distributions used to simplify the series representation. In Chapter 6, by MC, PCM, and FP, we solve for the moment statistics of the solution of a heat equation driven by a 2D Lévy noise in LePage's series representation; of a heat equation driven by a 2D Lévy noise described by a Lévy copula; and of a heat equation driven by a 10D Lévy noise in LePage's series representation, where the FP equation is decomposed by the unanchored ANOVA decomposition. We also examine the error growth versus the dimension of the Lévy process, and we show how we reduce the multi-dimensional integrals in the FP equations to 1D and 2D integrals. In Chapter 7, lastly, we summarize the scope of the SPDEs, the scope of the stochastic processes, and the methods we have experimented with so far. We summarize the computational cost and accuracy of our numerical experiments, and we suggest feasible future work on methodology and applications.
Chapter Two

Simulation of Lévy jump processes
In general, there are three ways to generate a Lévy process [140]: the random walk approximation, the series representation, and the compound Poisson (CP) approximation. The random walk approximation replaces the continuous-time process by a discrete random walk on a discrete time sequence, provided the marginal distribution of the process is known. It is often used to simulate Lévy jump processes with large jumps, but it does not identify the times and sizes of the large jumps precisely [139, 140, 141, 142]. We also attempt here to simulate a non-Gaussian process by the Karhunen-Loève (KL) expansion, by computing the covariance kernel and its eigenfunctions. In the CP approximation, we replace the jumps smaller than a certain size by their expectation, as a drift term, and treat the remaining process with large jumps as a CP process [33]. There are six different series representations of Lévy jump processes: the inverse Lévy measure method [44, 82], LePage's method [92], Bondesson's method [23], the thinning method [140], the rejection method [139], and the shot noise method [140, 141].

2.1 Random walk approximation to Poisson processes

For a Lévy jump process L_t on a fixed time grid [t₀, t₁, t₂, ..., t_N], we may approximate L_t by L_t ≈ Σ_{i=1}^N X_i I{t_i ≤ t}. When the marginal distribution of L_t is known, the distribution of X_i is that of L_{t_i − t_{i−1}}. Therefore, on the fixed time grid, we may generate the RVs X_i by sampling from the known distribution. When L_t is composed of large jumps with low intensity (rate of jumps), this can be a good approximation. However, we are mostly interested in Lévy jump processes with infinite activity (high rates of jumps), for which this is not a good approximation, such as tempered
α stable processes.

2.2 KL expansion for Poisson processes

Let us first take a Poisson process N(t; ω) with intensity λ on a computational time domain [0, T] as an example. We mimic the KL expansion for Gaussian processes in order to simulate non-Gaussian processes such as Poisson processes.

• First we calculate the covariance kernel (assuming t' > t), using the independence of increments:

Cov(N(t; ω), N(t'; ω)) = E[N(t; ω)N(t'; ω)] − E[N(t; ω)]E[N(t'; ω)] = E[N(t; ω)²] + E[N(t; ω)]E[N(t' − t; ω)] − E[N(t; ω)]E[N(t'; ω)] = λt, t' > t. (2.1)

Therefore, the covariance kernel is

Cov(N(t; ω), N(t'; ω)) = λ(t ∧ t'). (2.2)

• The eigenfunctions and eigenvalues of this kernel (on [0, 1]) are

e_k(t) = √2 sin((k − 1/2)πt) (2.3)

and

λ_k = 1 / ((k − 1/2)²π²), (2.4)

where k = 1, 2, 3, ...

• The stochastic process N_t approximated by a finite number of terms in the KL
expansion can be written as

Ñ(t; ω) = λt + Σ_{i=1}^M √(λλ_i) Y_i e_i(t), (2.5)

where

∫₀¹ e_k²(t)dt = 1 (2.6)

and

∫₀ᵀ e_k²(t)dt = T − sin[T(1 − 2k)π] / (π(1 − 2k)), (2.7)

and the e_k are orthogonal.

• The distribution of the Y_k can be obtained as follows. Given a sample path ω ∈ Ω,

⟨N(t; ω) − λt, e_k(t)⟩ = √(λλ_k) Y_k ⟨e_k(t), e_k(t)⟩ = 2Y_k √λ [T(2k − 1)π − sin((2k − 1)πT)] / (π²(2k − 1)²), (2.8)

so that

Y_k = π²(2k − 1)² ⟨N(t; ω) − λt, e_k(t)⟩ / (2√λ [T(2k − 1)π − sin((2k − 1)πT)]). (2.9)

From each sample path ω we can compute the values of Y₁, ..., Y_M; in this way the joint distribution of Y₁, ..., Y_M can be sampled. Numerically, if we simulate a sufficient number of sample paths of a Poisson process (by simulating the jump times and jump sizes separately), we obtain the empirical distribution of the RVs Y₁, ..., Y_M; see Figure 2.1.

• Figures 2.2, 2.3, and 2.4 then show how well the sample paths, the mean, and the second moment of the Poisson process N_t are approximated by the KL expansion.

Figure 2.1: Empirical CDF of the KL expansion RVs Y₁, ..., Y_M with M = 10 KL expansion terms, for a centered Poisson process (N_t − λt) with λ = 10, T_max = 1, s = 10000 samples, and N = 200 points on the time domain [0, 1].

2.3 Compound Poisson approximation to Lévy jump processes

Let us take a tempered α stable (TαS) process as an example here. TαS processes were introduced in statistical physics to model turbulence, e.g., the truncated Lévy flight model [85, 106, 121], and in mathematical finance to model stochastic volatility, e.g., the CGMY model [27, 28]. Here, we consider a symmetric TαS process (L_t) as a pure jump Lévy martingale with characteristic triplet (0, ν, 0) [19, 143] (no drift
Figure 2.2: Exact sample path vs. sample path approximated by the KL expansion (10 expansion terms, T_max = 5; left: λ = 50, right: λ = 1): when λ is smaller, the sample path is better approximated. (Brownian motion is the limiting case of a centered Poisson process with a very large birth rate.)

Figure 2.3: Exact mean vs. mean by the KL expansion with sampled coefficients (10 expansion terms, T_max = 5, 200 samples; left: λ = 50, right: λ = 1): when λ is larger, the KL representation appears better.

Figure 2.4: Exact second moment vs. second moment by the KL expansion with sampled coefficients (10 expansion terms, T_max = 5, 200 samples; left: λ = 50, right: λ = 1). The second moments are not as well approximated as the mean.
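The KL construction above can be sketched numerically. The following is a minimal illustrative sketch, not the thesis code: it assumes the scaling Ñ(t; ω) = λt + Σ_k √(λλ_k) Y_k e_k(t) implied by (2.8), samples the Y_k by projecting centered Poisson paths onto the eigenfunctions, and reconstructs the paths from the truncated expansion. All variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, M, n_samples = 10.0, 1.0, 10, 2000
t = np.linspace(0.0, T, 200)

def poisson_path(rng, lam, t):
    # One sample path N(t; omega): count Poisson arrivals up to each grid time.
    n_jumps = rng.poisson(lam * t[-1])
    jump_times = np.sort(rng.uniform(0.0, t[-1], n_jumps))
    return np.searchsorted(jump_times, t, side="right").astype(float)

# Eigenpairs of the kernel min(t, t') on [0, 1], Eqs. (2.3)-(2.4).
k = np.arange(1, M + 1)
e = np.sqrt(2.0) * np.sin(np.outer(t, (k - 0.5) * np.pi))   # shape (len(t), M)
sqrt_lam_k = 1.0 / ((k - 0.5) * np.pi)                       # sqrt(lambda_k)

# Sample the KL coefficients Y_k by projecting centered paths onto e_k.
paths = np.stack([poisson_path(rng, lam, t) for _ in range(n_samples)])
centered = paths - lam * t
dt = t[1] - t[0]
Y = (centered @ e) * dt / (np.sqrt(lam) * sqrt_lam_k)        # shape (n_samples, M)

# Reconstruct the paths from the truncated expansion, Eq. (2.5).
recon = lam * t + np.sqrt(lam) * (Y * sqrt_lam_k) @ e.T
```

With only M = 10 modes the sample mean of the reconstructed paths stays close to λt, consistent with Figure 2.3.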
and no Gaussian part). The Lévy measure is given by [33]¹:

ν(x) = c e^{−λ|x|} / |x|^{α+1}, 0 < α < 2. (2.10)

This Lévy measure can be interpreted as an Esscher transformation [57] of that of a stable process, with exponential tilting of the Lévy measure. The parameter c > 0 alters the intensity of jumps of all sizes; it changes the time scale of the process. Also, λ > 0 fixes the decay rate of the big jumps, while α determines the relative importance of the small jumps in the path of the process². The probability density of L_t at a given time is not available in closed form (except when α = 1/2³). The characteristic exponent of L_t is [33]:

Φ(s) = t^{−1} log E[e^{isL_t}] = 2Γ(−α)λ^α c[(1 − is/λ)^α − 1 + isα/λ], α ≠ 1, (2.11)

where Γ(x) is the Gamma function and E denotes expectation. By taking derivatives of the characteristic exponent we obtain the mean and variance:

E[L_t] = 0, Var[L_t] = 2tΓ(2 − α)cλ^{α−2}. (2.12)

In the CP approximation, we simulate the jumps larger than δ as a CP process and replace the jumps smaller than δ by their expectation, as a drift term [33]. Here we explain the method for a TαS subordinator X_t (without a Gaussian part and a drift) with Lévy measure ν(x) = c e^{−λx}/x^{α+1} I_{x>0} (positive jumps only); this method can be generalized to a TαS process with both positive and negative jumps.

¹ In a more general form, the Lévy measure is ν(x) = c₋ e^{−λ₋|x|}/|x|^{α+1} I_{x<0} + c₊ e^{−λ₊|x|}/|x|^{α+1} I_{x>0}; we may have different coefficients c₊, c₋, λ₊, λ₋ on the positive and negative jump parts.
² In the case α = 0, L_t is the gamma process.
³ See inverse Gaussian processes.
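The CP approximation just described (keep the jumps above a threshold δ, compensate the truncated small jumps by a drift) can be sketched as follows. This is an illustrative sketch rather than the thesis implementation: it evaluates the tail intensity U(δ) and the small-jump drift b^δ defined below with SciPy's adaptive quadrature (the thesis uses Gauss quadrature with a specified RelTol), and samples the jump sizes with the standard tempered-stable rejection step (Pareto proposal, acceptance probability e^{−λ(X−δ)}). All names and parameter values are ours.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
c, lam, alpha, delta, T = 1.0, 10.0, 0.5, 1e-3, 1.0   # illustrative values

# Intensity of the retained jumps and drift of the truncated small jumps.
U, _ = quad(lambda x: c * np.exp(-lam * x) * x ** (-alpha - 1.0), delta, np.inf)
b, _ = quad(lambda x: c * np.exp(-lam * x) * x ** (-alpha), 0.0, delta)

def jump_size():
    # Rejection sampling for p_delta(x) ~ exp(-lam x) x^(-alpha-1), x >= delta:
    # propose from the Pareto density f_delta(x) = alpha delta^alpha x^(-alpha-1),
    # accept with probability exp(-lam (X - delta)).
    while True:
        X = delta * rng.uniform() ** (-1.0 / alpha)
        if rng.uniform() <= np.exp(-lam * (X - delta)):
            return X

# One CP-approximated subordinator sample at time T: Poisson number of
# retained jumps plus the compensating drift b * T.
n_jumps = rng.poisson(U * T)
X_T = sum(jump_size() for _ in range(n_jumps)) + b * T
```

Note that the drift integral b is finite only for 0 < α < 1, which is why the CP approximation is restricted to that range below.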
The CP approximation X_t^δ of this TαS subordinator X_t is:

X_t ≈ X_t^δ = Σ_{s≤t} ΔX_s I_{ΔX_s ≥ δ} + E[Σ_{s≤t} ΔX_s I_{ΔX_s < δ}] = Σ_{i=1}^∞ J_i^δ I_{T_i ≤ t} + b^δ t ≈ Σ_{i=1}^{Q_cp} J_i^δ I_{T_i ≤ t} + b^δ t, (2.13)

where Q_cp denotes the number of jumps that have occurred before time t. The first term, Σ_{i=1}^∞ J_i^δ I_{T_i ≤ t}, is a compound Poisson process with jump intensity

U(δ) = c ∫_δ^∞ e^{−λx} dx / x^{α+1} (2.14)

and jump size distribution p^δ(x) = (1/U(δ)) c e^{−λx}/x^{α+1} I_{x≥δ} for the J_i^δ. The jump size random variables (RVs) J_i^δ are generated via the rejection method [37]. The distribution p^δ(x) can be bounded by

p^δ(x) ≤ (c δ^{−α} e^{−λδ} / (α U(δ))) f^δ(x), (2.15)

where f^δ(x) = α δ^α / x^{α+1} I_{x≥δ} is a Pareto density. The algorithm of the rejection method to generate RVs with distribution p^δ(x) = (1/U(δ)) c e^{−λx}/x^{α+1} I_{x≥δ} for the CP approximation is [33, 37]:

• REPEAT
• Generate RVs W and V, independent and uniformly distributed on [0, 1]
• Set X = δW^{−1/α}
• Set T = f^δ(X) c δ^{−α} e^{−λδ} / (p^δ(X) α U(δ)); note that T simplifies to e^{λ(X−δ)}, so the candidate X is accepted with probability e^{−λ(X−δ)}
• UNTIL V·T ≤ 1
• RETURN X.

Here, T_i is the i-th jump arrival time of a Poisson process with intensity U(δ). The accuracy of the CP approximation can be improved by replacing the smaller jumps by a Brownian motion [6] when the growth of the Lévy measure near zero is fast. The second term, b^δ t, functions as a drift resulting from truncating the smaller jumps. The drift is

b^δ = c ∫_0^δ e^{−λx} dx / x^α.

This integral diverges when α ≥ 1; therefore, the CP approximation method only applies to TαS processes with 0 < α < 1. In this work, both the intensity U(δ) and the drift b^δ are computed via numerical integration with Gauss quadrature rules [54] with a specified relative tolerance (RelTol)⁴. In general, there are two algorithms to simulate a compound Poisson process [33]: the first is to simulate the jump times T_i by exponentially distributed RVs and take the number of jumps Q_cp as large as possible; the second is to first generate and fix the number of jumps, and then generate the jump times as uniformly distributed RVs on [0, t]. Algorithms for simulating a CP process (of the second kind) on a fixed time grid, with the intensity and the jump size distribution in explicit form, are known [33]. Here we describe how to simulate the trajectories of a CP process with intensity U(δ) and jump size distribution ν_δ(x)/U(δ) on a simulation time domain [0, T] at time t. The algorithm to generate sample paths for CP processes without a drift:

⁴ The RelTol of a numerical integration is defined as |q − Q|/|Q|, where q is the computed value of the integral and Q is the unknown exact value.
• Simulate an RV N from the Poisson distribution with parameter U(δ)T, as the total number of jumps on the interval [0, T].
• Simulate N independent RVs T_i, uniformly distributed on the interval [0, T], as jump times.
• Simulate N jump sizes Y_i with distribution ν_δ(x)/U(δ).
• Then the trajectory at time t is given by Σ_{i=1}^N I_{T_i ≤ t} Y_i.

In order to simulate the sample paths of a symmetric TαS process with the Lévy measure given in Equation (5.3), we generate two independent TαS subordinators via the CP approximation and subtract one from the other. The accuracy of the CP approximation is determined by the jump truncation size δ. The numerical experiments for this method are given in Chapter 5.

2.4 Series representation of Lévy jump processes

Let {ε_j}, {η_j}, and {ξ_j} be sequences of i.i.d. RVs such that P(ε_j = ±1) = 1/2, η_j ~ Exponential(λ), and ξ_j ~ Uniform(0, 1). Let {Γ_j} be the arrival times of a Poisson process with rate one, and let {U_j} be i.i.d. uniform RVs on [0, T]. Then a TαS process L_t with the Lévy measure given in Equation (5.3) can be represented as [142]:

L_t = Σ_{j=1}^{+∞} ε_j [(αΓ_j/(2cT))^{−1/α} ∧ η_j ξ_j^{1/α}] I_{U_j ≤ t}, 0 ≤ t ≤ T. (2.16)

The series in Equation (5.14) converges almost surely, uniformly in t [139]. In numerical simulations, we truncate the series in Equation (5.14) at Q_s terms. The accuracy of
the series representation approximation is determined by the truncation level Q_s. The numerical experiments for this method are given in Chapter 5.
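A minimal sketch of the truncated series (2.16) for a symmetric TαS process follows; the truncation level Q_s, the parameter values, and all names are illustrative, not those used in the thesis experiments. By the symmetry of the ε_j, the truncated sum has mean zero, which the sketch checks by Monte Carlo.

```python
import numpy as np

def talpha_stable_at(rng, t, alpha, c, lam, T, Qs=1000):
    """Truncated shot-noise series (2.16) for a symmetric TalphaS process at time t."""
    eps = rng.choice([-1.0, 1.0], size=Qs)        # symmetric signs epsilon_j
    Gamma = np.cumsum(rng.exponential(1.0, Qs))   # unit-rate Poisson arrival times
    eta = rng.exponential(1.0 / lam, Qs)          # Exponential(lam) variables
    xi = rng.uniform(0.0, 1.0, Qs)                # Uniform(0, 1) variables
    U = rng.uniform(0.0, T, Qs)                   # jump times on [0, T]
    sizes = np.minimum((alpha * Gamma / (2.0 * c * T)) ** (-1.0 / alpha),
                       eta * xi ** (1.0 / alpha))
    return np.sum(eps * sizes * (U <= t))

rng = np.random.default_rng(2)
samples = [talpha_stable_at(rng, 0.5, 0.5, 1.0, 10.0, 1.0) for _ in range(4000)]
print(abs(np.mean(samples)))  # small, since the truncated sum is mean-zero
```

The first factor in the minimum decays like Γ_j^{−1/α}, so for moderate λ the terms beyond a few hundred are negligible, which is why a finite Q_s suffices in practice.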
Chapter Three

Adaptive multi-element polynomial chaos with discrete measure: Algorithms and applications to SPDEs
We develop a multi-element probabilistic collocation method (ME-PCM) for arbitrary discrete probability measures with finite moments, and apply it to solve partial differential equations with random parameters. The method is based on the numerical construction of orthogonal polynomial bases with respect to a discrete probability measure. To this end, we compare the accuracy and efficiency of five different constructions. We develop an adaptive procedure for the decomposition of the parametric space using the local-variance criterion. We then couple the ME-PCM with sparse grids to study the Korteweg-de Vries (KdV) equation subject to random excitation, where the random parameters are associated with either a discrete or a continuous probability measure. Numerical experiments demonstrate that the proposed algorithms lead to high accuracy and efficiency for hybrid (discrete-continuous) random inputs.

3.1 Notation

μ, ν: probability measures of discrete RVs
ξ: discrete RV
P_i(ξ): generalized polynomial chaos basis function
δ_ij: Kronecker delta
S(μ): support of the measure μ of the discrete RV ξ
N: size of the support S(μ)
α_i, β_i: coefficients in the three-term recurrence relation of the orthogonal polynomial basis
m_k: the k-th moment of the RV ξ
Γ: integration domain of the discrete RV
W^{m,p}(Γ): Sobolev space
h: element size in multi-element integration
N_es: number of elements in multi-element integration
d: number of quadrature points in the Gauss quadrature rule
B_i: the i-th element in the multi-element integration
σ_i²: local variance
3.2 Generation of orthogonal polynomials for discrete measures

Let μ be a positive measure with infinite support S(μ) ⊂ R and finite moments of all orders, i.e.,

∫_S ξ^n μ(dξ) < ∞, ∀n ∈ N₀, (3.1)

where N₀ = {0, 1, 2, ...} and the integral is defined as a Riemann-Stieltjes integral. There exists a unique [54] set of orthogonal monic polynomials {P_i}_{i=0}^∞ with respect to the measure μ such that

∫_S P_i(ξ)P_j(ξ)μ(dξ) = δ_ij γ_i^{−2}, i = 0, 1, 2, ..., (3.2)

where the γ_i ≠ 0 are constants. In particular, the orthogonal polynomials satisfy a three-term recurrence relation [31, 43]

P_{i+1}(ξ) = (ξ − α_i)P_i(ξ) − β_i P_{i−1}(ξ), i = 0, 1, 2, ... (3.3)

The uniqueness of the set of orthogonal polynomials with respect to μ can also be derived by constructing such a set of polynomials starting from P₀(ξ) = 1. We typically choose P_{−1}(ξ) = 0 and β₀ to be a constant. Then the full set of orthogonal polynomials is completely determined by the coefficients α_i and β_i. If the support S(μ) is a finite set of data points {τ₁, ..., τ_N}, i.e., μ is a discrete measure defined as

μ = Σ_{i=1}^N λ_i δ_{τ_i}, λ_i > 0, (3.4)
the corresponding orthogonality condition holds only up to order N − 1 [46, 54], i.e.,

∫_S P_i²(ξ)μ(dξ) = 0, i ≥ N, (3.5)

where δ_{τ_i} denotes the point measure at τ_i, although by the recurrence relation (3.3) we can generate polynomials of any order greater than N − 1. Furthermore, one way to test whether the coefficients α_i are well approximated is to check the relation [45, 46]

Σ_{i=0}^{N−1} α_i = Σ_{i=1}^N τ_i. (3.6)

One can prove that the coefficient of ξ^{N−1} in P_N(ξ) is −Σ_{i=0}^{N−1} α_i and that P_N(ξ) = (ξ − τ₁)...(ξ − τ_N); therefore, Equation (3.6) holds [46]. We subsequently examine five different approaches to generating orthogonal polynomials for a discrete measure and point out the pros and cons of each method. In the Nowak method, the coefficients of the polynomials are derived directly by solving a linear system; in the other four methods, we generate the coefficients α_i and β_i by four different numerical procedures, and the coefficients of the polynomials then follow from the recurrence relation (3.3).

3.2.1 Nowak method

Define the k-th order moment as

m_k = ∫_S ξ^k μ(dξ), k = 0, 1, ..., 2d − 1. (3.7)
The coefficients of the d-th order polynomial P_d(ξ) = Σ_{i=0}^d a_i ξ^i are determined by the following linear system [125]:

[ m₀      m₁   ...  m_d     ] [ a₀      ]   [ 0 ]
[ m₁      m₂   ...  m_{d+1} ] [ a₁      ]   [ 0 ]
[ ...     ...  ...  ...     ] [ ...     ] = [ ... ]
[ m_{d−1} m_d  ...  m_{2d−1}] [ a_{d−1} ]   [ 0 ]
[ 0       0    ...  1       ] [ a_d     ]   [ 1 ], (3.8)

where the (d + 1) × (d + 1) moment matrix must be inverted; the last row enforces that P_d is monic (a_d = 1). Although this method is straightforward to implement, it is well known that the matrix may be ill-conditioned when d is large. The total computational complexity of solving the linear system in Equation (3.8) is O(d²) to generate P_d(ξ)¹.

3.2.2 Stieltjes method

The Stieltjes method is based on the following formulas for the coefficients α_i and β_i [54]:

α_i = ∫_S ξP_i²(ξ)μ(dξ) / ∫_S P_i²(ξ)μ(dξ), β_i = ∫_S P_i²(ξ)μ(dξ) / ∫_S P_{i−1}²(ξ)μ(dξ), i = 0, 1, ..., d − 1. (3.9)

For a discrete measure, the Stieltjes method is quite stable [51, 54]. When the discrete measure has a finite number of points N in its support, the above formulas are exact. However, if we use the Stieltjes method on a discrete measure with infinite support, e.g., the Poisson distribution, we approximate the measure by a discrete

¹ Note that the moment matrix is of Hankel (row-reversed Toeplitz) form; therefore, the computational complexity of solving this linear system is O(d²) [59, 157].
measure with a finite number of points; therefore, each time we iterate for $\alpha_i$ and $\beta_i$, error accumulates because the points with smaller weights are neglected. In that case, $\alpha_i$ and $\beta_i$ may suffer from inaccuracy when $i$ is close to $N$ [54]. The computational complexity of the integral evaluations in equation (3.9) is of order $O(N)$.

3.2.3 Fischer method

Fischer proposed a procedure for generating the coefficients $\alpha_i$ and $\beta_i$ by adding data points one by one [45, 46]. Assume that the coefficients $\alpha_i$ and $\beta_i$ are known for the discrete measure $\mu = \sum_{i=1}^{N} \lambda_i \delta_{\tau_i}$. Then, if we add another data point $\tau$ to the discrete measure $\mu$ and define a new discrete measure $\nu = \mu + \lambda \delta_\tau$, with $\lambda$ being the weight of the newly added data point $\tau$, the following relations hold [45, 46]:
\[
\alpha_i^\nu = \alpha_i + \frac{\lambda\,\gamma_i^2\, P_i(\tau) P_{i+1}(\tau)}{1 + \lambda \sum_{j=0}^{i} \gamma_j^2 P_j^2(\tau)} - \frac{\lambda\,\gamma_{i-1}^2\, P_i(\tau) P_{i-1}(\tau)}{1 + \lambda \sum_{j=0}^{i-1} \gamma_j^2 P_j^2(\tau)}, \tag{3.10}
\]
\[
\beta_i^\nu = \beta_i\, \frac{\bigl[1 + \lambda \sum_{j=0}^{i-2} \gamma_j^2 P_j^2(\tau)\bigr]\bigl[1 + \lambda \sum_{j=0}^{i} \gamma_j^2 P_j^2(\tau)\bigr]}{\bigl[1 + \lambda \sum_{j=0}^{i-1} \gamma_j^2 P_j^2(\tau)\bigr]^2}, \tag{3.11}
\]
for $i < N$, and
\[
\alpha_N^\nu = \tau - \frac{\lambda\,\gamma_{N-1}^2\, P_N(\tau) P_{N-1}(\tau)}{1 + \lambda \sum_{j=0}^{N-1} \gamma_j^2 P_j^2(\tau)}, \tag{3.12}
\]
\[
\beta_N^\nu = \frac{\lambda\,\gamma_{N-1}^2\, P_N^2(\tau)\bigl[1 + \lambda \sum_{j=0}^{N-2} \gamma_j^2 P_j^2(\tau)\bigr]}{\bigl[1 + \lambda \sum_{j=0}^{N-1} \gamma_j^2 P_j^2(\tau)\bigr]^2}, \tag{3.13}
\]
where $\alpha_i^\nu$ and $\beta_i^\nu$ indicate the coefficients in the three-term recurrence formula (3.3) for the measure $\nu$. The numerical stability of this algorithm depends on the stability of the recurrence relations above and on the sequence in which the data points are added [46]. For
example, the data points can be added in either ascending or descending order. Fischer's method essentially modifies the available coefficients $\alpha_i$ and $\beta_i$ using the information contributed by the new data point. This approach is therefore very practical when an empirical distribution of stochastic inputs is altered by an additional possible value. For example, suppose we have already generated $d$ probability collocation points with respect to a given discrete measure with $N$ data points, and we want to add another data point to the measure and generate $d$ new probability collocation points with respect to the new measure. Using the Nowak method, we would need to rebuild the moment matrix and invert it again with $N+1$ data points; with Fischer's method, we only need to update the $2d$ values of $\alpha_i$ and $\beta_i$, which is more convenient. Since a new sequence $\{\alpha_i, \beta_i\}$ is generated each time a data point is added to the measure, the computational complexity of calculating the coefficients $\{\gamma_i^2,\ i = 0, \ldots, d\}$ $N$ times is $O(N^2)$.

3.2.4 Modified Chebyshev method

Compared to the Chebyshev method [54], the modified Chebyshev method computes moments in a different way. Define the quantities
\[
\mu_{i,j} = \int_S P_i(\xi)\,\xi^j\,\mu(d\xi), \quad i, j = 0, 1, 2, \ldots \tag{3.14}
\]
Then, the coefficients $\alpha_i$ and $\beta_i$ satisfy [54]:
\[
\alpha_0 = \frac{\mu_{0,1}}{\mu_{0,0}}, \quad \beta_0 = \mu_{0,0}, \quad
\alpha_i = \frac{\mu_{i,i+1}}{\mu_{i,i}} - \frac{\mu_{i-1,i}}{\mu_{i-1,i-1}}, \quad
\beta_i = \frac{\mu_{i,i}}{\mu_{i-1,i-1}}. \tag{3.15}
\]
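Fischer's one-point update (3.10)–(3.13) can be sketched as follows, assuming monic polynomials with $\gamma_j^2 = 1/(P_j, P_j)_\mu = 1/(\beta_0 \beta_1 \cdots \beta_j)$; the function name and the tiny one-point example measure are illustrative only:

```python
import numpy as np

def fischer_update(alpha, beta, tau, lam):
    """Update the monic recurrence coefficients of a measure mu with N support
    points (alpha_0..alpha_{N-1}, beta_0..beta_{N-1}) to those of
    nu = mu + lam*delta_tau, via eqs. (3.10)-(3.13)."""
    N = len(alpha)
    # evaluate the monic P_0..P_N at tau via the recurrence (3.3)
    P = np.empty(N + 1)
    p_prev, p = 0.0, 1.0
    for i in range(N):
        P[i] = p
        p_prev, p = p, (tau - alpha[i]) * p - (beta[i] if i > 0 else 0.0) * p_prev
    P[N] = p
    g2 = 1.0 / np.cumprod(beta)               # gamma_j^2 = 1/(beta_0...beta_j)
    # s[i+2] = 1 + lam * sum_{j<=i} gamma_j^2 P_j(tau)^2, with s_{-1} = s_{-2} = 1
    s = np.concatenate(([1.0, 1.0], 1.0 + lam * np.cumsum(g2 * P[:N] ** 2)))
    a_new, b_new = np.empty(N + 1), np.empty(N + 1)
    for i in range(N):
        t1 = lam * g2[i] * P[i] * P[i + 1] / s[i + 2]
        t2 = lam * g2[i - 1] * P[i] * P[i - 1] / s[i + 1] if i > 0 else 0.0
        a_new[i] = alpha[i] + t1 - t2                         # eq. (3.10)
        b_new[i] = beta[i] * s[i] * s[i + 2] / s[i + 1] ** 2  # eq. (3.11)
    a_new[N] = tau - lam * g2[N - 1] * P[N] * P[N - 1] / s[N + 1]   # eq. (3.12)
    b_new[N] = lam * g2[N - 1] * P[N] ** 2 * s[N] / s[N + 1] ** 2   # eq. (3.13)
    return a_new, b_new

# mu = delta_0 (alpha_0 = 0, beta_0 = 1); add tau = 1 with weight lam = 1:
a, b = fischer_update(np.array([0.0]), np.array([1.0]), tau=1.0, lam=1.0)
print(a, b)   # nu = delta_0 + delta_1: alpha = [0.5, 0.5], beta = [2.0, 0.25]
```

For this example the updated coefficients can be checked by hand against the Stieltjes formulas (3.9) applied directly to $\nu = \delta_0 + \delta_1$.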
Note that, due to orthogonality, $\mu_{i,j} = 0$ when $i > j$. Starting from the moments $\mu_j$, the quantities $\mu_{i,j}$ can be computed recursively as
\[
\mu_{i,j} = \mu_{i-1,j+1} - \alpha_{i-1}\mu_{i-1,j} - \beta_{i-1}\mu_{i-2,j}, \tag{3.16}
\]
with
\[
\mu_{-1,j} = 0, \qquad \mu_{0,j} = \mu_j, \tag{3.17}
\]
where $j = i, i+1, \ldots, 2d-i-1$. However, this method suffers from the same ill-conditioning effects as the Nowak method [125], because both rely on computing moments. To stabilize the algorithm, we introduce another way of defining moments, via polynomials:
\[
\hat{\mu}_{i,j} = \int_S P_i(\xi)\,p_j(\xi)\,\mu(d\xi), \tag{3.18}
\]
where $\{p_i(\xi)\}$ is chosen to be a set of orthogonal polynomials, e.g., Legendre polynomials. Define
\[
\nu_i = \int_S p_i(\xi)\,\mu(d\xi). \tag{3.19}
\]
Since $\{p_i(\xi)\}_{i=0}^{\infty}$ is not a set of orthogonal polynomials with respect to the measure $\mu(d\xi)$, $\nu_i$ is, in general, not equal to zero. For all the following numerical experiments we used the Legendre polynomials for $\{p_i(\xi)\}_{i=0}^{\infty}$ (see Footnote 2). Let $\hat{\alpha}_i$ and $\hat{\beta}_i$ be the coefficients in the three-term recurrence formula associated with the set $\{p_i\}$ of orthogonal polynomials.

Footnote 2: Legendre polynomials $\{p_i(\xi)\}_{i=0}^{\infty}$ are defined on $[-1, 1]$; therefore, in the implementation of the modified Chebyshev method, we first scale the measure onto $[-1, 1]$.
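For reference, the classical (unmodified) Chebyshev recursion, i.e., formulas (3.15) driven by the raw-moment recursion (3.16)–(3.17), can be sketched as below; this is the ill-conditioned variant that the modified moments are designed to improve. The function name and the two-point test measure are illustrative:

```python
import numpy as np

def chebyshev_from_moments(m, d):
    """alpha_0..alpha_{d-1}, beta_0..beta_{d-1} from the raw moments
    m[j] = int xi^j mu(dxi), j = 0..2d-1, via (3.15)-(3.17)."""
    m = np.asarray(m, float)
    alpha, beta = np.zeros(d), np.zeros(d)
    alpha[0], beta[0] = m[1] / m[0], m[0]
    mu_prev = np.zeros(2 * d)        # mu_{-1,j} = 0, eq. (3.17)
    mu_cur = m[:2 * d].copy()        # mu_{0,j} = m_j, eq. (3.17)
    for i in range(1, d):
        mu_next = np.zeros(2 * d)
        for j in range(i, 2 * d - i):   # j = i, ..., 2d - i - 1
            # eq. (3.16)
            mu_next[j] = mu_cur[j + 1] - alpha[i - 1] * mu_cur[j] - beta[i - 1] * mu_prev[j]
        # eq. (3.15)
        alpha[i] = mu_next[i + 1] / mu_next[i] - mu_cur[i] / mu_cur[i - 1]
        beta[i] = mu_next[i] / mu_cur[i - 1]
        mu_prev, mu_cur = mu_cur, mu_next
    return alpha, beta

# moments of mu = (delta_{-1} + delta_{+1})/2 are m = [1, 0, 1, 0]
alpha, beta = chebyshev_from_moments([1.0, 0.0, 1.0, 0.0], d=2)
print(alpha, beta)   # alpha = [0, 0], beta = [1, 1]
```

For this symmetric two-point measure the exact coefficients are $\alpha_0 = \alpha_1 = 0$, $\beta_0 = \beta_1 = 1$, which the recursion reproduces.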
Then, we initialize the process of building up the coefficients as
\[
\hat{\mu}_{-1,j} = 0, \quad j = 1, 2, \ldots, 2d-2, \qquad
\hat{\mu}_{0,j} = \nu_j, \quad j = 0, 1, \ldots, 2d-1,
\]
\[
\alpha_0 = \hat{\alpha}_0 + \frac{\nu_1}{\nu_0}, \qquad \beta_0 = \nu_0,
\]
and compute the following coefficients:
\[
\hat{\mu}_{i,j} = \hat{\mu}_{i-1,j+1} - (\alpha_{i-1} - \hat{\alpha}_j)\,\hat{\mu}_{i-1,j} - \beta_{i-1}\,\hat{\mu}_{i-2,j} + \hat{\beta}_j\,\hat{\mu}_{i-1,j-1}, \tag{3.20}
\]
where $j = i, i+1, \ldots, 2d-i-1$. The coefficients $\alpha_i$ and $\beta_i$ can then be obtained as
\[
\alpha_i = \hat{\alpha}_i + \frac{\hat{\mu}_{i,i+1}}{\hat{\mu}_{i,i}} - \frac{\hat{\mu}_{i-1,i}}{\hat{\mu}_{i-1,i-1}}, \qquad
\beta_i = \frac{\hat{\mu}_{i,i}}{\hat{\mu}_{i-1,i-1}}. \tag{3.21}
\]
With the modified moments, the ill-conditioning issue related to moments is mitigated, although it can still be severe, especially when we consider orthogonality on infinite intervals. The computational complexity of generating $\hat{\mu}_{i,j}$ and $\nu_i$ is $O(N)$.

3.2.5 Lanczos method

The idea of the Lanczos method is to tridiagonalize a matrix to obtain the coefficients $\alpha_j$ and $\beta_j$ of the recurrence relation. Suppose the discrete measure is $\mu = \sum_{i=1}^{N} \lambda_i \delta_{\tau_i}$, $\lambda_i > 0$. With the weights $\lambda_i$ and the points $\tau_i$ in the expression of the measure $\mu$, the
first step of this method is to construct the matrix [22]:
\[
\begin{pmatrix}
1 & \sqrt{\lambda_1} & \sqrt{\lambda_2} & \cdots & \sqrt{\lambda_N} \\
\sqrt{\lambda_1} & \tau_1 & 0 & \cdots & 0 \\
\sqrt{\lambda_2} & 0 & \tau_2 & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
\sqrt{\lambda_N} & 0 & 0 & \cdots & \tau_N
\end{pmatrix}. \tag{3.22}
\]
After we tridiagonalize it by the Lanczos algorithm, a process that reduces a symmetric matrix into tridiagonal form with unitary transformations [59], we obtain
\[
\begin{pmatrix}
1 & \sqrt{\beta_0} & 0 & \cdots & 0 \\
\sqrt{\beta_0} & \alpha_0 & \sqrt{\beta_1} & \cdots & 0 \\
0 & \sqrt{\beta_1} & \alpha_1 & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & \alpha_{N-1}
\end{pmatrix}, \tag{3.23}
\]
where the non-zero entries correspond to the coefficients $\alpha_i$ and $\beta_i$. The Lanczos method is motivated by interest in the inverse Sturm–Liouville problem: given some information on the eigenvalues of a highly structured matrix, or of some of its principal sub-matrices, this method generates a symmetric matrix, either Jacobi or banded, in a finite number of steps. It is easy to program but can be considerably slow [22]. The computational complexity of the unitary transformation is $O(N^2)$.
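Tridiagonalizing the bordered matrix (3.22) is equivalent to running the Lanczos iteration on $\mathrm{diag}(\tau_1, \ldots, \tau_N)$ with starting vector $(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_N})/\sqrt{\beta_0}$. A minimal sketch in that equivalent form (no reorthogonalization, so it is only reliable for small $N$; names and the test measure are illustrative):

```python
import numpy as np

def lanczos_coeffs(tau, lam, d):
    """Recurrence coefficients for mu = sum_i lam[i]*delta_{tau[i]} via a plain
    Lanczos iteration on diag(tau), equivalent to tridiagonalizing (3.22).
    Returns monic alpha_0..alpha_{d-1} and beta_0..beta_{d-1}."""
    tau, lam = np.asarray(tau, float), np.asarray(lam, float)
    beta0 = lam.sum()
    q_prev, q = np.zeros_like(tau), np.sqrt(lam / beta0)   # normalized start vector
    alpha, beta = np.zeros(d), np.zeros(d)
    beta[0] = beta0
    b = 0.0
    for k in range(d):
        w = tau * q - b * q_prev      # apply diag(tau), remove previous direction
        alpha[k] = q @ w              # diagonal entry of (3.23)
        w -= alpha[k] * q
        if k + 1 < d:
            beta[k + 1] = w @ w       # monic beta_{k+1} = (off-diagonal of (3.23))^2
            b = np.sqrt(beta[k + 1])
            q_prev, q = q, w / b
    return alpha, beta

# mu = (delta_{-1} + delta_{+1})/2
alpha, beta = lanczos_coeffs([-1.0, 1.0], [0.5, 0.5], d=2)
print(alpha, beta)   # alpha = [0, 0], beta = [1, 1]
```

Note that the off-diagonal entries of (3.23) are $\sqrt{\beta_k}$, so the monic coefficients are their squares.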
3.2.6 Gaussian quadrature rule associated with a discrete measure

Here we describe how to utilize the above five methods to perform numerical integration over a discrete measure, using the Gaussian quadrature rule [60] associated with $\mu$. We consider integrals of the form
\[
\int_S f(\xi)\,\mu(d\xi) < \infty. \tag{3.24}
\]
With respect to $\mu$, we generate the $\mu$-orthogonal polynomials up to order $d$ ($d \le N-1$), denoted $\{P_i(\xi)\}_{i=0}^{d}$, by one of the five methods. We compute the zeros $\{\xi_i\}_{i=1}^{d}$ of $P_d(\xi) = a_d \xi^d + a_{d-1}\xi^{d-1} + \cdots + a_0$ as the Gaussian quadrature points, and the Gaussian quadrature weights $\{w_i\}_{i=1}^{d}$ by
\[
w_i = \frac{a_d}{a_{d-1}} \cdot \frac{\int_S P_{d-1}^2(\xi)\,\mu(d\xi)}{P_d'(\xi_i)\,P_{d-1}(\xi_i)}. \tag{3.25}
\]
Therefore, the integral is numerically approximated by
\[
\int_S f(\xi)\,\mu(d\xi) \approx \sum_{i=1}^{d} f(\xi_i)\,w_i. \tag{3.26}
\]
When the zeros of the polynomial $P_d(\xi)$ do not have explicit formulas, the Newton–Raphson method is used [7, 174], with a tolerance of $10^{-16}$ (in double precision). In order to ensure that at each search we find a new root, the polynomial deflation method [81] is applied, whereby the roots already found are factored out of the
initial polynomial once they have been determined. All the calculations in this paper are done in double precision.

3.2.7 Orthogonality tests of numerically generated polynomials

To investigate the stability of the five methods, we perform an orthogonality test, where the orthogonality is defined as
\[
\mathrm{orth}(i) = \frac{1}{i} \sum_{j=0}^{i-1} \frac{\bigl| \int_S P_i(\xi) P_j(\xi)\,\mu(d\xi) \bigr|}{\sqrt{\int_S P_j^2(\xi)\,\mu(d\xi)}\,\sqrt{\int_S P_i^2(\xi)\,\mu(d\xi)}}, \quad i \le N-1, \tag{3.27}
\]
for the set $\{P_j(\xi)\}_{j=0}^{i}$ of orthogonal polynomials constructed numerically. Note that $\int_S P_i(\xi)P_j(\xi)\,\mu(d\xi) \neq 0$, $0 \le j < i$, for numerically constructed polynomials due to round-off errors, although they should be orthogonal in theory.
We compare the numerical orthogonality given by the aforementioned five methods in figure 3.1 for the following distribution (see Footnote 3):
\[
f(k; n, p) = P\!\left(\xi = \frac{2k}{n} - 1\right) = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}, \quad k = 0, 1, 2, \ldots, n. \tag{3.28}
\]
We see that the Stieltjes, modified Chebyshev, and Lanczos methods preserve numerical orthogonality best when the polynomial order $i$ is close to $N$. We notice that when $N$ is large, numerical orthogonality is preserved up to order 70, indicating the robustness of these three methods. The Nowak method exhibits the worst numerical orthogonality among the five methods, due to the ill-conditioning

Footnote 3: We rescale the support $\{0, \ldots, n\}$ of the binomial distribution with parameters $(n, p)$ onto $[-1, 1]$.
Figure 3.1: Orthogonality defined in (3.27) with respect to the polynomial order $i$, up to $i = 20$, for the distribution defined in (3.28) with $(n = 20, p = 1/2)$ (left), and up to $i = 100$ with $(n = 100, p = 1/2)$ (right).

nature of the matrix in equation (3.8). The Fischer method exhibits better numerical orthogonality when the number of data points $N$ in the discrete measure is small; the numerical orthogonality is lost when $N$ is large, which serves as a motivation to use ME-PCM instead of PCM for numerical integration over discrete measures. Our results suggest using the Stieltjes, modified Chebyshev, or Lanczos methods for higher accuracy.
We also compare the cost by tracking the CPU time needed to evaluate (3.27), shown in figure 3.2: for a fixed polynomial order $i$, we track the CPU time with respect to $N$, the number of points in the discrete measure defined in (3.28); for a fixed $N$, we track the CPU time with respect to $i$. We observe that the Stieltjes method has the lowest computational cost, while the Fischer method has the highest. Asymptotically, we observe that the computational complexity of evaluating (3.27) is $O(i^2)$ for the Nowak method, $O(N)$ for the Stieltjes method, $O(N^2)$ for the Fischer method, $O(N)$ for the modified Chebyshev method, and $O(N^2)$ for the Lanczos method.
To conclude, we recommend the Stieltjes method as the most accurate and efficient for generating orthogonal polynomials with respect to discrete measures, especially
Figure 3.2: CPU time (in seconds, on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz, in Matlab) to evaluate the orthogonality in (3.27): at order $i = 4$ for the distribution defined in (3.28) with parameter $n$ and $p = 1/2$ (left), and at order $i$ for the distribution defined in (3.28) with $n = 100$ and $p = 1/2$ (right).

when higher orders are required. For generating polynomials at lower orders (as in ME-PCM), however, the five methods are equally effective.
We noticed from figures 3.1 and 3.2 that the Stieltjes method exhibits the best accuracy and efficiency in generating orthogonal polynomials with respect to a discrete measure $\mu$. Therefore, we investigate here the minimum polynomial order $i$ ($i \le N-1$) at which the orthogonality $\mathrm{orth}(i)$, defined in equation (3.27), for the Stieltjes method exceeds a threshold $\epsilon$. In figure 3.3, we perform this test on the distribution given by (3.28) with different values of the parameter $n$ ($n \ge i$). For practical computations, the highest polynomial order $i$ used in a polynomial chaos expansion should be less than the minimum $i$ at which $\mathrm{orth}(i)$ exceeds a desired threshold $\epsilon$. The cost of the numerical orthogonality test is, in general, negligible compared to the cost of solving a stochastic problem by either Galerkin or collocation approaches. Hence, we can pay more attention to the accuracy, rather than the cost, of these five methods.
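Once the recurrence coefficients are available (below they are computed via the Stieltjes formulas (3.9)), the Gauss rule of Section 3.2.6 can also be obtained without explicit root-finding by the Golub–Welsch algorithm: the nodes are the eigenvalues of the symmetric Jacobi matrix built from $\alpha_i$ and $\sqrt{\beta_i}$, and the weights are $\beta_0$ times the squared first components of the eigenvectors. This is an alternative to (3.25), sketched for an illustrative Binomial$(4, 1/2)$ measure:

```python
import numpy as np
from math import comb

def gauss_from_recurrence(alpha, beta):
    """Golub-Welsch: d-point Gauss nodes/weights from the monic recurrence
    coefficients alpha_0..alpha_{d-1}, beta_0..beta_{d-1}."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2

# discrete measure: Binomial(n=4, p=1/2) on the support {0, ..., 4}
tau = np.arange(5.0)
lam = np.array([comb(4, k) for k in range(5)], float) / 16.0

# recurrence coefficients via the Stieltjes formulas (3.9)
d = 3
alpha, beta = np.zeros(d), np.zeros(d)
p_prev, p, nrm_prev = np.zeros(5), np.ones(5), 1.0
for i in range(d):
    nrm = np.sum(lam * p * p)
    alpha[i] = np.sum(lam * tau * p * p) / nrm
    beta[i] = nrm if i == 0 else nrm / nrm_prev
    p_prev, p, nrm_prev = p, (tau - alpha[i]) * p - beta[i] * p_prev, nrm

x, w = gauss_from_recurrence(alpha, beta)
# a 3-point Gauss rule is exact up to degree 2d-1 = 5:
print(np.sum(w), np.sum(w * x), np.sum(w * x ** 2))   # 1.0, 2.0 (mean), 5.0 (E[xi^2])
```

The check uses that for Binomial$(4, 1/2)$ the mean is $2$ and $E[\xi^2] = \mathrm{Var} + \text{mean}^2 = 1 + 4 = 5$, both degree-$\le 2$ polynomials and hence integrated exactly.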
Figure 3.3: Minimum polynomial order $i$ (vertical axis) such that $\mathrm{orth}(i)$ defined in (3.27) is greater than a threshold value $\epsilon$ (here $\epsilon = 10^{-8}, 10^{-10}, 10^{-13}$), for the distribution defined in (3.28) with $p = 1/10$. The orthogonal polynomials are generated by the Stieltjes method.

3.3 Discussion of the error of numerical integration

3.3.1 Theorem on numerical integration for a discrete measure

In [50], the $h$-convergence rate of ME-PCM [81] for numerical integration with respect to continuous measures was established in terms of the degree of exactness of the quadrature rule. Let us first define the Sobolev space $W^{m+1,p}(\Gamma)$ to be the set of all functions $f \in L^p(\Gamma)$ such that for every multi-index $\gamma$ with $|\gamma| \le m+1$, the weak partial derivative $D^\gamma f$ belongs to $L^p(\Gamma)$ [1, 40], i.e.,
\[
W^{m+1,p}(\Gamma) = \{ f \in L^p(\Gamma) : D^\gamma f \in L^p(\Gamma),\ \forall\,|\gamma| \le m+1 \}. \tag{3.29}
\]
Here $\Gamma$ is an open set in $\mathbb{R}^n$ and $1 \le p \le +\infty$. The natural number $m+1$ is called the order of the Sobolev space $W^{m+1,p}(\Gamma)$. The Sobolev space $W^{m+1,\infty}(A)$ in the following theorem is defined for functions $f: A \to \mathbb{R}$ equipped with the norm
\[
\|f\|_{m+1,\infty,A} = \max_{|\gamma| \le m+1} \operatorname{ess\,sup}_{\xi \in A} |D^\gamma f(\xi)|,
\]
and the seminorm
\[
|f|_{m+1,\infty,A} = \max_{|\gamma| = m+1} \operatorname{ess\,sup}_{\xi \in A} |D^\gamma f(\xi)|,
\]
where $A \subset \mathbb{R}^n$, $\gamma \in \mathbb{N}_0^n$, $|\gamma| = \gamma_1 + \cdots + \gamma_n$, and $m+1 \in \mathbb{N}_0$.
We first consider a one-dimensional discrete measure $\mu = \sum_{i=1}^{N} \lambda_i \delta_{\tau_i}$, where $N$ is finite. For simplicity and without loss of generality, we assume that $\{\tau_i\}_{i=1}^{N} \subset (0, 1)$; otherwise, we can use a linear mapping to map $(\min\{\tau_i\}_{i=1}^{N} - c,\ \max\{\tau_i\}_{i=1}^{N} + c)$ to $(0, 1)$, with $c$ an arbitrarily small positive number. We then construct an approximation of the Dirac measure as
\[
\mu_\epsilon = \sum_{i=1}^{N} \lambda_i\, \eta^\epsilon_{\tau_i}, \tag{3.30}
\]
where $\epsilon$ is a small positive number and $\eta^\epsilon_{\tau_i}$ is defined as
\[
\eta^\epsilon_{\tau_i}(\xi) =
\begin{cases}
\dfrac{1}{\epsilon} & \text{if } |\xi - \tau_i| < \epsilon/2, \\[4pt]
0 & \text{otherwise}.
\end{cases} \tag{3.31}
\]
First of all, $\eta^\epsilon_{\tau_i}$ defines a continuous measure on $(0, 1)$ with a finite number of discontinuity points, where a uniform distribution is taken on the interval $(\tau_i - \epsilon/2,\ \tau_i + \epsilon/2)$.
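The concentration of $\eta^\epsilon_\tau$ at $\tau$ as $\epsilon \to 0^+$ is easy to observe numerically: integrating a bounded continuous function $g$ against the uniform density of width $\epsilon$ from (3.31) approaches $g(\tau)$. A minimal sketch; the test function and the value of $\tau$ are arbitrary illustrations:

```python
import numpy as np

def integrate_against_mollifier(g, tau, eps, n=2000):
    """int g(xi) eta^eps_tau(xi) dxi, with eta^eps_tau the uniform density
    1/eps on (tau - eps/2, tau + eps/2) from (3.31); midpoint quadrature."""
    edges = np.linspace(tau - eps / 2, tau + eps / 2, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    return np.mean(g(mid))   # equals (1/eps) * integral of g over the interval

g, tau = np.cos, 0.4
errs = [abs(integrate_against_mollifier(g, tau, eps) - g(tau))
        for eps in (1e-1, 1e-2, 1e-3)]
print(errs)   # decreasing with eps: eta^eps_tau concentrates at tau
```

For a smooth $g$ the discrepancy behaves like $\epsilon^2 |g''(\tau)|/24$, so halving $\epsilon$ roughly quarters the error.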
Second, $\eta^\epsilon_{\tau_i}$ converges to $\delta_{\tau_i}$ in the weak sense, i.e.,
\[
\lim_{\epsilon \to 0^+} \int_0^1 g(\xi)\,\eta^\epsilon_{\tau_i}(d\xi) = \int_0^1 g(\xi)\,\delta_{\tau_i}(d\xi), \tag{3.32}
\]
for all bounded continuous functions $g(\xi)$. We write
\[
\lim_{\epsilon \to 0^+} \eta^\epsilon_{\tau_i} = \delta_{\tau_i}. \tag{3.33}
\]
Note that when $\epsilon$ is small enough, the intervals $(\tau_i - \epsilon/2,\ \tau_i + \epsilon/2)$ are mutually disjoint for $i = 1, \ldots, N$. Due to linearity, we have
\[
\lim_{\epsilon \to 0^+} \mu_\epsilon = \mu, \tag{3.34}
\]
where the convergence is defined in the weak sense as before. Then, $\mu_\epsilon$ is also a continuous measure with a finite number of discontinuity points. The choice of $\eta^\epsilon_{\tau_i}$ is not unique; another choice is
\[
\eta^\epsilon_{\tau_i}(\xi) = \frac{1}{\epsilon}\,\eta\!\left(\frac{\xi - \tau_i}{\epsilon}\right), \qquad
\eta(\xi) =
\begin{cases}
e^{-\frac{1}{1-|\xi|^2}} & \text{if } |\xi| < 1, \\
0 & \text{otherwise},
\end{cases} \tag{3.35}
\]
which is smooth. When $\epsilon$ is small enough, the domains defined by $\bigl|\frac{\xi - \tau_i}{\epsilon}\bigr| < 1$ are also mutually disjoint. We then have the following proposition.

Proposition 1. For the continuous measure $\mu_\epsilon$, let $\alpha_{i,\epsilon}$ and $\beta_{i,\epsilon}$ denote the coefficients in the three-term recurrence formula (3.3), which is valid for both continuous and discrete measures. For the discrete measure $\mu$, we let $\alpha_i$ and $\beta_i$ indicate
the coefficients in the three-term recurrence formula. We then have
\[
\lim_{\epsilon \to 0^+} \alpha_{i,\epsilon} = \alpha_i, \qquad \lim_{\epsilon \to 0^+} \beta_{i,\epsilon} = \beta_i. \tag{3.36}
\]
In other words, the monic orthogonal polynomials defined by $\mu_\epsilon$ converge to those defined by $\mu$, i.e.,
\[
\lim_{\epsilon \to 0^+} P_{i,\epsilon}(\xi) = P_i(\xi), \tag{3.37}
\]
where $P_{i,\epsilon}$ and $P_i$ are the monic polynomials of order $i$ corresponding to $\mu_\epsilon$ and $\mu$, respectively.

The coefficients $\alpha_{i,\epsilon}$ and $\beta_{i,\epsilon}$ are given by the formulas (see equation (3.9))
\[
\alpha_{i,\epsilon} = \frac{(\xi P_{i,\epsilon}, P_{i,\epsilon})_{\mu_\epsilon}}{(P_{i,\epsilon}, P_{i,\epsilon})_{\mu_\epsilon}}, \quad i = 0, 1, 2, \ldots, \tag{3.38}
\]
\[
\beta_{i,\epsilon} = \frac{(P_{i,\epsilon}, P_{i,\epsilon})_{\mu_\epsilon}}{(P_{i-1,\epsilon}, P_{i-1,\epsilon})_{\mu_\epsilon}}, \quad i = 1, 2, \ldots, \tag{3.39}
\]
where $(\cdot, \cdot)_{\mu_\epsilon}$ indicates the inner product with respect to $\mu_\epsilon$. Correspondingly, we have
\[
\alpha_i = \frac{(\xi P_i, P_i)_\mu}{(P_i, P_i)_\mu}, \quad i = 0, 1, 2, \ldots, \tag{3.40}
\]
\[
\beta_i = \frac{(P_i, P_i)_\mu}{(P_{i-1}, P_{i-1})_\mu}, \quad i = 1, 2, \ldots \tag{3.41}
\]
By definition, $\beta_{0,\epsilon} = (1, 1)_{\mu_\epsilon} = 1$ and $\beta_0 = (1, 1)_\mu = 1$.

The argument is based on induction. We assume that equation (3.37) is true for $k = i$ and $k = i-1$; when $i = 0$, this is trivial. To show that equation (3.37) holds for $k = i+1$, we only need to prove equation (3.36) for $k = i$, based on the observation that $P_{i+1,\epsilon} = (\xi - \alpha_{i,\epsilon}) P_{i,\epsilon} - \beta_{i,\epsilon} P_{i-1,\epsilon}$. We now show that all
inner products in equations (3.38) and (3.39) converge to the corresponding inner products in equations (3.40) and (3.41) as $\epsilon \to 0^+$. Here we only consider $(P_{i,\epsilon}, P_{i,\epsilon})_{\mu_\epsilon}$; the other inner products can be dealt with in a similar way. We have
\[
(P_{i,\epsilon}, P_{i,\epsilon})_{\mu_\epsilon} = (P_i, P_i)_{\mu_\epsilon} + 2\,(P_i, P_{i,\epsilon} - P_i)_{\mu_\epsilon} + (P_{i,\epsilon} - P_i,\ P_{i,\epsilon} - P_i)_{\mu_\epsilon}.
\]
We have $(P_i, P_i)_{\mu_\epsilon} \to (P_i, P_i)_\mu$ by the definition of $\mu_\epsilon$. The second term on the right-hand side can be bounded as
\[
|(P_i, P_{i,\epsilon} - P_i)_{\mu_\epsilon}| \le \operatorname{ess\,sup}_\xi |P_i| \cdot \operatorname{ess\,sup}_\xi |P_{i,\epsilon} - P_i| \cdot (1, 1)_{\mu_\epsilon}.
\]
By the assumption that $P_{i,\epsilon} \to P_i$, the right-hand side of the above inequality goes to zero. Similarly, $(P_{i,\epsilon} - P_i,\ P_{i,\epsilon} - P_i)_{\mu_\epsilon}$ goes to zero. We then have $(P_{i,\epsilon}, P_{i,\epsilon})_{\mu_\epsilon} \to (P_i, P_i)_\mu$, and the conclusion follows by induction.

Remark 1. As $\epsilon \to 0^+$, the orthogonal polynomials defined by $\mu_\epsilon$ converge to those defined by $\mu$; hence the (Gauss) quadrature points and weights defined by $\mu_\epsilon$ also converge to those defined by $\mu$.

We now recall the following theorem for continuous measures.

Theorem 1 ([50]). Suppose $f \in W^{m+1,\infty}(\Gamma)$ with $\Gamma = (0,1)^n$, and let $\{B^i\}_{i=1}^{N_e}$ be a non-overlapping mesh of $\Gamma$. Let $h$ indicate the maximum side length of the elements, and let $Q^\Gamma_m$ be a quadrature rule with degree of exactness $m$ in the domain $\Gamma$ (in other words, $Q^\Gamma_m$ exactly integrates polynomials up to order $m$). Let $Q^A_m$ be the quadrature rule on a subset $A \subset \Gamma$ corresponding to $Q^\Gamma_m$ through an affine linear mapping. We define a linear functional on $W^{m+1,\infty}(A)$:
\[
E_A(g) \equiv \int_A g(\xi)\,\mu(d\xi) - Q^A_m(g), \tag{3.42}
\]
whose norm in the dual space of $W^{m+1,\infty}(A)$ is defined as
\[
\|E_A\|_{m+1,\infty,A} = \sup_{\|g\|_{m+1,\infty,A} \le 1} |E_A(g)|. \tag{3.43}
\]
Then, the following error estimate holds:
\[
\left| \int_\Gamma f(\xi)\,\mu(d\xi) - \sum_{i=1}^{N_e} Q^{B^i}_m f \right| \le C\, h^{m+1}\, \|E_\Gamma\|_{m+1,\infty,\Gamma}\, |f|_{m+1,\infty,\Gamma}, \tag{3.44}
\]
where $C$ is a constant and $\|E_\Gamma\|_{m+1,\infty,\Gamma}$ refers to the norm in the dual space of $W^{m+1,\infty}(\Gamma)$, defined in equation (3.43).

For discrete measures, we have the following theorem.

Theorem 2. Suppose the function $f$ satisfies all assumptions required by Theorem 1. We add the following three assumptions for discrete measures: 1) the measure $\mu$ can be expressed as a product of $n$ one-dimensional discrete measures, i.e., we consider $n$ independent discrete random variables; 2) the quadrature rule $Q^A_m$ can be generated from the quadrature rules given by the $n$ one-dimensional discrete measures by the tensor product; 3) the number of possible values of the discrete measure $\mu$ is finite, and they are located within $\Gamma$. We then have
\[
\left| \int_\Gamma f(\xi)\,\mu(d\xi) - \sum_{i=1}^{N_e} Q^{B^i}_m f \right| \le C\, N_{es}^{-(m+1)}\, \|E_\Gamma\|_{m+1,\infty,\Gamma}\, |f|_{m+1,\infty,\Gamma}, \tag{3.45}
\]
where $N_{es}$ indicates the number of integration elements for each random variable.

The argument is based on Theorem 1 and the approximation $\mu_\epsilon$ of $\mu$. Since we assume that $\mu$ is given by $n$ independent discrete random variables, we can define a continuous approximation (see equation (3.30)) for each one-dimensional discrete measure, and $\mu_\epsilon$ can be naturally chosen as the product of these $n$ continuous one-dimensional measures. We then consider
\[
\left| \int_\Gamma f(\xi)\,\mu(d\xi) - \sum_{i=1}^{N_e} Q^{B^i}_m f \right|
\le \left| \int_\Gamma f(\xi)\,\mu(d\xi) - \int_\Gamma f(\xi)\,\mu_\epsilon(d\xi) \right|
+ \left| \int_\Gamma f(\xi)\,\mu_\epsilon(d\xi) - \sum_{i=1}^{N_e} Q^{\epsilon,B^i}_m f \right|
+ \left| \sum_{i=1}^{N_e} Q^{\epsilon,B^i}_m f - \sum_{i=1}^{N_e} Q^{B^i}_m f \right|,
\]
where $Q^{\epsilon,B^i}_m$ denotes the corresponding quadrature rule for the continuous measure $\mu_\epsilon$. Since we assume that the quadrature rules $Q^{\epsilon,B^i}_m$ and $Q^{B^i}_m$ can be constructed from $n$ one-dimensional quadrature rules, $Q^{\epsilon,B^i}_m$ converges to $Q^{B^i}_m$ as $\epsilon$ goes to zero, based on Proposition 1 and the fact that the construction procedure giving $Q^{\epsilon,B^i}_m$ and $Q^{B^i}_m$ a degree of exactness $m$ is measure independent. For the second term on the right-hand side, Theorem 1 can be applied with a well-defined $h$, because we assume that all possible values of $\mu$ are located within $\Gamma$; otherwise, this assumption can be achieved by a linear mapping. We then have
\[
\left| \int_\Gamma f(\xi)\,\mu_\epsilon(d\xi) - \sum_{i=1}^{N_e} Q^{\epsilon,B^i}_m f \right| \le C\, h^{m+1}\, \|E^\epsilon_\Gamma\|_{m+1,\infty,\Gamma}\, |f|_{m+1,\infty,\Gamma}, \tag{3.46}
\]
where $E^\epsilon_\Gamma$ is the linear functional defined with respect to $\mu_\epsilon$. We then let $\epsilon \to 0^+$. In the error bound given by equation (3.46), only $\|E^\epsilon_\Gamma\|_{m+1,\infty,\Gamma}$ is associated with $\mu_\epsilon$. According to its definition, and noting that $Q^{\epsilon,A}_m \to Q^A_m$,
\[
\lim_{\epsilon \to 0} E^\epsilon_A(g) = \lim_{\epsilon \to 0} \left( \int_A g(\xi)\,\mu_\epsilon(d\xi) - Q^{\epsilon,A}_m(g) \right) = E_A(g),
\]
which is a linear functional with respect to $\mu$. Since $\mu_\epsilon \to \mu$ and $Q^{\epsilon,B^i}_m \to Q^{B^i}_m$, the first and third terms go to zero. However, since we are working with discrete
measures, it is not convenient to use the element size; instead, we use the number of elements, since $h \propto N_{es}^{-1}$, where $N_{es}$ indicates the number of elements per side. The conclusion is then reached.

The $h$-convergence rate of ME-PCM for discrete measures thus takes the form $O\bigl(N_{es}^{-(m+1)}\bigr)$. If we employ a Gauss quadrature rule with $d$ points, the degree of exactness is $m = 2d-1$, which corresponds to an $h$-convergence rate of $N_{es}^{-2d}$. The extra assumptions in Theorem 2 are actually quite practical: in applications, we often consider i.i.d. random variables, and the commonly used quadrature rules for high-dimensional cases, such as the tensor-product rule and sparse grids, are obtained from one-dimensional quadrature rules.

3.3.2 Testing numerical integration with one RV

We now verify the $h$-convergence rate numerically. We employ the Lanczos method [22] to generate the Gauss quadrature points. We then approximate integrals of GENZ functions [56] with respect to the binomial distribution $\mathrm{Bino}(n = 120, p = 1/2)$ using ME-PCM. We consider the following one-dimensional GENZ functions:

• the GENZ1 function, with an oscillatory integrand:
\[
f_1(\xi) = \cos(2\pi w + c\xi), \tag{3.47}
\]

• the GENZ4 function, with a Gaussian-like integrand:
\[
f_4(\xi) = \exp\bigl(-c^2 (\xi - w)^2\bigr), \tag{3.48}
\]
Figure 3.4: Left: GENZ1 functions for different values of $c$ and $w$. Right: $h$-convergence of ME-PCM for the GENZ1 function; two Gauss quadrature points, $d = 2$, are employed in each element, corresponding to a degree of exactness $m = 3$; $c = 0.1$, $w = 1$, $\xi \sim \mathrm{Bino}(120, 1/2)$. The Lanczos method is employed to compute the orthogonal polynomials.

where $c$ and $w$ are constants. Note that both the GENZ1 and GENZ4 functions are smooth. In this section, we consider the absolute error defined as
\[
\left| \int_S f(\xi)\,\mu(d\xi) - \sum_{i=1}^{d} f(\xi_i)\,w_i \right|,
\]
where $\{\xi_i\}$ and $\{w_i\}$ ($i = 1, \ldots, d$) are the $d$ Gauss quadrature points and weights with respect to $\mu$. In figures 3.4 and 3.5, we plot the $h$-convergence behavior of ME-PCM for the GENZ1 and GENZ4 functions, respectively. In each element, two Gauss quadrature points are employed, corresponding to a degree 3 of exactness, which means that the $h$-convergence rate should be $N_{es}^{-4}$. In figures 3.4 and 3.5, we see that when $N_{es}$ is large enough, the $h$-convergence rate of ME-PCM approaches the theoretical prediction, demonstrated by the reference straight lines $C\,N_{es}^{-4}$.

3.3.3 Testing numerical integration with multiple RVs on sparse grids

An interesting question is whether the sparse grid approach is as effective for discrete measures as it is for continuous measures [170], and how it compares to tensor
Figure 3.5: Left: GENZ4 functions for different values of $c$ and $w$. Right: $h$-convergence of ME-PCM for the GENZ4 function; two Gauss quadrature points, $d = 2$, are employed in each element, corresponding to a degree of exactness $m = 3$; $c = 0.1$, $w = 1$, $\xi \sim \mathrm{Bino}(120, 1/2)$. The Lanczos method is employed for numerical orthogonality.

product grids. Let us denote the sparse grid level by $k$ and the dimension by $n$, and assume that the random dimensions are independent. We apply the Smolyak algorithm [149, 114, 115] to construct sparse grids, i.e.,
\[
A(k+n, n) = \sum_{k+1 \le |\mathbf{i}| \le k+n} (-1)^{k+n-|\mathbf{i}|} \binom{n-1}{k+n-|\mathbf{i}|} \bigl( U^{i_1} \otimes \cdots \otimes U^{i_n} \bigr), \tag{3.49}
\]
where $A(k+n, n)$ defines a cubature formula with respect to the $n$-dimensional discrete measure and $U^{i_j}$ defines the quadrature rule of level $i_j$ for the $j$-th dimension [170]. We use the Gauss quadrature rule to define $U^{i_j}$, which implies that the grids at different levels are not necessarily nested. Two-dimensional non-nested sparse grid points are plotted in figure 3.6, where each dimension has the same discrete measure, the binomial distribution $\mathrm{Bino}(10, 1/2)$. We then use sparse grids to approximate the integrals of the following two GENZ functions with $M$ RVs [56]: