Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 3: The Lévy-Itô Decomposition

David Applebaum

School of Mathematics and Statistics, University of Sheffield, UK

7th December 2011




Filtrations, Markov Processes and Martingales


We recall the probability space (Ω, F, P) which underlies our
investigations. F contains all possible events in Ω.
When we introduce the arrow of time, it's convenient to be able to
consider only those events which can occur up to and including time t.
We denote by Ft this sub-σ-algebra of F. To be able to consider all
time instants on an equal footing, we define a filtration to be an
increasing family (Ft , t ≥ 0) of sub-σ-algebras of F, i.e.

                                0 ≤ s ≤ t < ∞ ⇒ Fs ⊆ Ft .




A stochastic process X = (X (t), t ≥ 0) is adapted to the given filtration
if each X (t) is Ft -measurable.
e.g. any process is adapted to its natural filtration,

                                F_t^X = σ{X(s); 0 ≤ s ≤ t}.

An adapted process X = (X (t), t ≥ 0) is a Markov process if for all
f ∈ Bb (Rd ), 0 ≤ s ≤ t < ∞,

                       E(f (X (t))|Fs ) = E(f (X (t))|X (s)) (a.s.).                   (0.1)

(i.e. “past” and “future” are independent, given the present).
The transition probabilities of a Markov process are

                           ps,t (x, A) = P(X (t) ∈ A|X (s) = x),


i.e. the probability that the process is in the Borel set A at time t given
that it is at the point x at the earlier time s.
Theorem
If X is a Lévy process (adapted to its own natural filtration) wherein
each X (t) has law qt , then it is a Markov process with transition
probabilities ps,t (x, A) = qt−s (A − x).

Proof. This essentially follows from

          E(f(X(t))|Fs) = E(f(X(s) + X(t) − X(s))|Fs)
                        = ∫_{Rd} f(X(s) + y) qt−s(dy).          □




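To spell out the key step (a short expansion, not on the original slide): since X(t) − X(s) is independent of Fs and has law qt−s, conditioning on Fs integrates it out:

    \begin{align*}
    \mathbb{E}\big(f(X(t)) \mid \mathcal{F}_s\big)
      &= \mathbb{E}\big(f(X(s) + (X(t) - X(s))) \mid \mathcal{F}_s\big) \\
      &= \int_{\mathbb{R}^d} f(X(s) + y)\, q_{t-s}(dy) \;=\; g(X(s)),
      \qquad g(x) := \int_{\mathbb{R}^d} f(x + y)\, q_{t-s}(dy).
    \end{align*}

The right-hand side is a measurable function of X(s) alone, so it also equals E(f(X(t))|X(s)); taking f = 1_A gives p_{s,t}(x, A) = q_{t−s}(A − x).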
Now let X be an adapted process defined on a filtered probability
space which also satisfies the integrability requirement E(|X (t)|) < ∞
for all t ≥ 0.
We say that it is a martingale if for all 0 ≤ s < t < ∞,

                                E(X (t)|Fs ) = X (s)   a.s.

Note that if X is a martingale, then the map t → E(X (t)) is constant.




An adapted Lévy process with zero mean is a martingale (with respect
to its natural filtration)
since in this case, for 0 ≤ s ≤ t < ∞ and using the convenient notation
Es (·) := E(·|Fs ):

                   Es(X(t)) = Es(X(s) + X(t) − X(s))
                            = X(s) + E(X(t) − X(s)) = X(s).


Although there is no good reason why a generic Lévy process should
be a martingale (or even have finite mean), there are some important
examples:




e.g. the processes whose values at time t are

     σB(t), where B(t) is a standard Brownian motion and σ is an r × d
     matrix.
     Ñ(t), where Ñ is a compensated Poisson process with intensity λ.

Some important martingales associated to Lévy processes include:

     exp{i(u, X(t)) − tη(u)}, where u ∈ Rd is fixed.
     |σB(t)|² − tr(A)t, where A = σᵀσ.
     Ñ(t)² − λt.




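As a quick numerical illustration (my own sketch, not part of the slides; it assumes numpy is available), one can check by simulation that the compensated Poisson process Ñ has constant zero mean and that Ñ(t)² − λt also has mean zero for every t:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, n_paths = 2.0, 100_000

    for t in (0.5, 1.0, 2.0):
        N_t = rng.poisson(lam * t, size=n_paths)   # N(t) ~ Poisson(lambda * t)
        Ntilde = N_t - lam * t                     # compensated value at time t
        # Both sample means should be close to 0 for every t
        # (martingale => constant expectation, here equal to 0).
        print(t, Ntilde.mean(), (Ntilde**2 - lam * t).mean())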
Càdlàg Paths



A function f : R+ → Rd is càdlàg if it is continue à droite et limité à
gauche, i.e. right continuous with left limits. Such a function has only
jump discontinuities.
Define f(t−) = lims↑t f(s) and ∆f(t) = f(t) − f(t−). If f is càdlàg, the set
{0 ≤ t ≤ T ; ∆f(t) ≠ 0} is at most countable.
If the filtration satisfies the “usual hypotheses” of right continuity and
completion, then every Lévy process has a càdlàg modification which
is itself a Lévy process.




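A toy illustration of f(t−) and ∆f(t) (my own sketch, assuming numpy): a right-continuous step function, its left limits, and its jumps:

    import numpy as np

    # A toy càdlàg step function: f(t) = sum of the jumps occurring at times <= t.
    jump_times = np.array([0.3, 1.1, 2.5])
    jump_sizes = np.array([1.0, -0.5, 2.0])

    def f(t):
        return jump_sizes[jump_times <= t].sum()    # right continuous: jump at t included

    def f_left(t):
        return jump_sizes[jump_times < t].sum()     # left limit f(t-): jump at t excluded

    for t in (0.2, 0.3, 1.1):
        print(t, f(t), f_left(t), f(t) - f_left(t)) # last column is Delta f(t)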
From now on, we will always make the following assumptions:
     (Ω, F, P) will be a fixed probability space equipped with a filtration
     (Ft , t ≥ 0) which satisfies the “usual hypotheses”.
     Every Lévy process X = (X (t), t ≥ 0) will be assumed to be
     Ft -adapted and have càdlàg sample paths.
     X (t) − X (s) is independent of Fs for all 0 ≤ s < t < ∞.




The Jumps of A Lévy Process - Poisson Random
Measures

The jump process ∆X = (∆X (t), t ≥ 0) associated to a Lévy
process is defined by

                                ∆X (t) = X (t) − X (t−),

for each t ≥ 0.
Theorem
If N is a Lévy process which is increasing (a.s.) and is such that
(∆N(t), t ≥ 0) takes values in {0, 1}, then N is a Poisson process.

Proof. Define a sequence of stopping times recursively by T0 = 0 and
Tn = inf{t > Tn−1 ; N(t) − N(Tn−1) ≠ 0} for each n ∈ N. It
follows from (L2) that the sequence (T1 , T2 − T1 , . . . , Tn − Tn−1 , . . .) is
i.i.d.
By (L2) again, we have for each s, t ≥ 0,

            P(T1 > s + t) = P(N(s) = 0, N(t + s) − N(s) = 0)
                                = P(T1 > s)P(T1 > t)


From the fact that N is increasing (a.s.), it follows easily that the map
t → P(T1 > t) is decreasing and by a straightforward application of
stochastic continuity (L3) we find that the map t → P(T1 > t) is
continuous at t = 0. Hence there exists λ > 0 such that
P(T1 > t) = e−λt for each t ≥ 0.




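The passage from this multiplicative identity to the exponential form is the classical functional-equation argument; filling in the step (not spelled out on the slide):

    Let $g(t) = P(T_1 > t)$, so that $g(s + t) = g(s)g(t)$ for all $s, t \geq 0$,
    $g$ is decreasing, and $g$ is continuous at $0$ with $g(0) = 1$. Iterating gives
    $g(q) = g(1)^q$ for all rational $q \geq 0$, and monotonicity extends this to all
    real $t \geq 0$, so that
    \[
    P(T_1 > t) = e^{-\lambda t}, \qquad \lambda := -\log g(1) \in (0, \infty).
    \]

Here 0 < g(1) < 1: g(1) = 1 would force N ≡ 0, while g(1) = 0 would give g(1/n) = 0 for every n, contradicting continuity of g at 0.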
So T1 has an exponential distribution with parameter λ and

                            P(N(t) = 0) = P(T1 > t) = e^{−λt},

for each t ≥ 0.
Now assume as an inductive hypothesis that P(N(t) = n) = e^{−λt} (λt)^n / n!;
then

P(N(t) = n + 1) = P(Tn+2 > t, Tn+1 ≤ t) = P(Tn+2 > t) − P(Tn+1 > t).

                  But Tn+1 = T1 + (T2 − T1) + · · · + (Tn+1 − Tn)
is the sum of (n + 1) i.i.d. exponential random variables, and so has a
gamma distribution with density

                 f_{Tn+1}(s) = e^{−λs} λ^{n+1} s^n / n!   for s > 0.

The required result follows on integration.                            □


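A simulation sketch of this result (my own, assuming numpy; not on the slides): build N(t) from i.i.d. Exp(λ) inter-arrival times, as in the proof, and compare the empirical law of N(t) with the Poisson probabilities:

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(1)
    lam, t, n_paths = 3.0, 1.0, 200_000

    # 30 inter-arrival gaps per path is ample for lam * t = 3.
    gaps = rng.exponential(1 / lam, size=(n_paths, 30))
    arrivals = gaps.cumsum(axis=1)
    N_t = (arrivals <= t).sum(axis=1)          # N(t) = number of arrivals by time t

    for n in range(6):
        empirical = (N_t == n).mean()
        theoretical = np.exp(-lam * t) * (lam * t) ** n / factorial(n)
        print(n, round(float(empirical), 4), round(float(theoretical), 4))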
The following result shows that ∆X is not a straightforward process to
analyse.
Lemma
If X is a Lévy process, then for fixed t > 0, ∆X (t) = 0 (a.s.).

Proof. Let (t(n), n ∈ N) be a sequence in R+ with t(n) ↑ t as n → ∞;
then since X has càdlàg paths, limn→∞ X(t(n)) = X(t−). However, by
(L3) the sequence (X(t(n)), n ∈ N) converges in probability to X(t),
and so has a subsequence which converges almost surely to X(t).
The result follows by uniqueness of limits.                            □




Much of the analytic difficulty in manipulating Lévy processes arises
from the fact that it is possible for them to have

                            ∑_{0≤s≤t} |∆X(s)| = ∞  a.s.,

and the way in which these difficulties are overcome exploits the fact that
we always have

                            ∑_{0≤s≤t} |∆X(s)|² < ∞  a.s.

We will gain more insight into these ideas as the discussion
progresses.




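A numerical illustration of this dichotomy (my own sketch, assuming numpy; the Lévy measure ν(dx) = x^{−2} dx on (0, 1] is just a convenient toy choice): keeping only the jumps of size ≥ ε on [0, 1] and letting ε ↓ 0, the sum of jump sizes diverges like log(1/ε) while the sum of squares stabilises:

    import numpy as np

    rng = np.random.default_rng(2)
    t = 1.0

    # Toy Levy measure nu(dx) = x^{-2} dx on (0, 1]:
    # integral of x   nu(dx) diverges  -> sum |Delta X|   blows up as eps -> 0,
    # integral of x^2 nu(dx) is finite -> sum |Delta X|^2 converges.
    for eps in (1e-1, 1e-2, 1e-3, 1e-4):
        rate = 1 / eps - 1                     # nu([eps, 1]): intensity of retained jumps
        n_jumps = rng.poisson(rate * t)
        u = rng.uniform(size=n_jumps)
        x = 1 / (1 / eps - u * (1 / eps - 1))  # inverse-CDF sample from nu on [eps, 1]
        print(eps, round(float(x.sum()), 2), round(float((x**2).sum()), 3))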
Rather than exploring ∆X itself further, we will find it more profitable to
count jumps of specified size. More precisely, let 0 ≤ t < ∞ and
A ∈ B(Rd − {0}). Define

        N(t, A) = #{0 ≤ s ≤ t; ∆X(s) ∈ A} = ∑_{0≤s≤t} 1_A(∆X(s)).

Note that for each ω ∈ Ω, t ≥ 0, the set function A → N(t, A)(ω) is a
counting measure on B(Rd − {0}) and hence

                E(N(t, A)) = ∫_Ω N(t, A)(ω) dP(ω)

is a Borel measure on B(Rd − {0}). We write µ(·) = E(N(1, ·)).



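In simulation terms (my own sketch, assuming numpy), for a compound Poisson path N(t, A) is literally the number of jumps up to time t whose size falls in A:

    import numpy as np

    rng = np.random.default_rng(3)
    lam, t = 5.0, 10.0

    # One compound Poisson path on [0, t] with N(0, 1)-distributed jump sizes.
    n_jumps = rng.poisson(lam * t)
    jump_sizes = rng.normal(size=n_jumps)

    # N(t, A) for A = [1, infinity): count the jumps of size >= 1.
    N_t_A = int((jump_sizes >= 1.0).sum())
    print(N_t_A)   # here E N(t, A) = t * mu(A) = t * lam * P(Z >= 1)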
We say that A ∈ B(Rd − {0}) is bounded below if 0 ∉ Ā.

Lemma
If A is bounded below, then N(t, A) < ∞ (a.s.) for all t ≥ 0.

Proof. Define a sequence of stopping times (T_n^A , n ∈ N) by
T_1^A = inf{t > 0; ∆X(t) ∈ A}, and for
n > 1, T_n^A = inf{t > T_{n−1}^A ; ∆X(t) ∈ A}.
Since X has càdlàg paths, we have T_1^A > 0 (a.s.) and lim_{n→∞} T_n^A = ∞
(a.s.).
Indeed, suppose that T_1^A = 0 with non-zero probability and let
N = {ω ∈ Ω : T_1^A(ω) > 0}. Assume that ω ∈ Ω − N. Then given any
u > 0, we can find 0 < δ′ < δ < u and ε > 0 such that
|X(δ)(ω) − X(δ′)(ω)| > ε, and this contradicts the (almost sure) right
continuity of X(·)(ω) at the origin.


Similarly, we assume that lim_{n→∞} T_n^A = T^A < ∞ with non-zero
probability and define M = {ω ∈ Ω : lim_{n→∞} T_n^A(ω) = ∞}. If ω ∈ Ω − M,
then we obtain a contradiction with the fact that X has a left limit
(almost surely) at T^A(ω).
Hence, for each t ≥ 0,

                   N(t, A) = ∑_{n∈N} 1_{T_n^A ≤ t} < ∞  a.s.    □




Be aware that if A fails to be bounded below, then this lemma may no
longer hold, because of the accumulation of large numbers of small
jumps.
The following result should at least be plausible, given Theorem 2 and
Lemma 4.
Theorem
 1   If A is bounded below, then (N(t, A), t ≥ 0) is a Poisson process
     with intensity µ(A).
 2   If A1 , . . . , Am ∈ B(Rd − {0}) are disjoint, then the random variables
     N(t, A1 ), . . . , N(t, Am ) are independent.

It follows immediately that µ(A) < ∞ whenever A is bounded below,
hence the measure µ is σ-finite.



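A simulation check of part (2) (my own sketch, assuming numpy): for a compound Poisson process and disjoint A1, A2, the counts N(t, A1) and N(t, A2) come out uncorrelated (indeed independent) Poisson variables:

    import numpy as np

    rng = np.random.default_rng(4)
    lam, t, n_paths = 5.0, 1.0, 100_000

    counts = rng.poisson(lam * t, size=n_paths)
    N1 = np.empty(n_paths)
    N2 = np.empty(n_paths)
    for i, n in enumerate(counts):
        dX = rng.normal(size=n)          # jump sizes of one path
        N1[i] = (dX >= 1.0).sum()        # N(t, A1), A1 = [1, inf)
        N2[i] = (dX <= -1.0).sum()       # N(t, A2), A2 = (-inf, -1]

    # Correlation ~ 0, and each count has mean ~ variance (Poisson),
    # as the theorem predicts.
    print(np.corrcoef(N1, N2)[0, 1], N1.mean(), N1.var())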
The main properties of N, which we will use extensively in the sequel,
are summarised below:
 1   For each t > 0, ω ∈ Ω, N(t, ·)(ω) is a counting measure on
     B(Rd − {0}).
 2   For each A bounded below, (N(t, A), t ≥ 0) is a Poisson process
     with intensity µ(A) = E(N(1, A)).
 3   The compensator (Ñ(t, A), t ≥ 0), where Ñ(t, A) = N(t, A) − tµ(A)
     for A bounded below, is a martingale-valued measure, i.e.
     for fixed A bounded below, (Ñ(t, A), t ≥ 0) is a martingale.




Poisson Integration

Let f be a Borel measurable function from Rd to Rd and let A be
bounded below; then for each t > 0, ω ∈ Ω, we may define the Poisson
integral of f as a random finite sum by

                ∫_A f(x) N(t, dx)(ω) := ∑_{x∈A} f(x) N(t, {x})(ω).

Note that each ∫_A f(x) N(t, dx) is an Rd-valued random variable and
gives rise to a càdlàg stochastic process, as we vary t.
Now since N(t, {x}) ≠ 0 ⇔ ∆X(u) = x for at least one 0 ≤ u ≤ t, we
have

                ∫_A f(x) N(t, dx) = ∑_{0≤u≤t} f(∆X(u)) 1_A(∆X(u)).        (0.2)



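Concretely (my own sketch, assuming numpy), the Poisson integral is just the finite sum (0.2) over the jumps of a simulated path:

    import numpy as np

    rng = np.random.default_rng(5)
    lam, t = 4.0, 2.0

    # Jumps of one compound Poisson path on [0, t];
    # take f(x) = x^2 and A = {|x| >= 0.5}.
    n = rng.poisson(lam * t)
    dX = rng.normal(size=n)                   # the jump sizes Delta X(u)
    in_A = np.abs(dX) >= 0.5
    poisson_integral = (dX[in_A] ** 2).sum()  # the finite random sum in (0.2)
    print(n, int(in_A.sum()), poisson_integral)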
In the sequel, we will sometimes use µ_A to denote the restriction to A
of the measure µ. In the following theorem, Var stands for variance.
Theorem
Let A be bounded below. Then:
 1   (∫_A f(x) N(t, dx), t ≥ 0) is a compound Poisson process, with
     characteristic function

      E[exp(i(u, ∫_A f(x) N(t, dx)))] = exp(t ∫_{R^d} (e^{i(u,x)} − 1) µ_{f,A}(dx))

     for each u ∈ R^d, where µ_{f,A}(B) := µ(A ∩ f^{−1}(B)), for each
     B ∈ B(R^d).
 2   If f ∈ L¹(A, µ_A), then

      E[∫_A f(x) N(t, dx)] = t ∫_A f(x) µ(dx).

Theorem (continued)
 3   If f ∈ L²(A, µ_A), then

      Var(∫_A f(x) N(t, dx)) = t ∫_A |f(x)|² µ(dx).
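As a sanity check of (2) and (3), one can compare Monte Carlo estimates with the closed-form right-hand sides. The sketch below uses an illustrative, hypothetical choice: µ_A = λ · Uniform[1, 2] (so A = [1, 2] and µ(A) = λ) and f(x) = x, for which t∫_A f dµ = 3λt/2 and t∫_A |f|² dµ = 7λt/3.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, t, n_paths = 3.0, 2.0, 50_000

# Each path: draw N(t, A) ~ Poisson(lam * t), then sum f over the jump sizes.
counts = rng.poisson(lam * t, size=n_paths)
samples = np.array([rng.uniform(1.0, 2.0, size=n).sum() for n in counts])

print(samples.mean(), t * lam * 3 / 2)   # Theorem (2): mean = t * int_A f dmu
print(samples.var(), t * lam * 7 / 3)    # Theorem (3): variance = t * int_A |f|^2 dmu
```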




Proof (part of it!).
1) For simplicity, we will prove this result in the case where
f ∈ L¹(A, µ_A). First let f be a simple function and write f = Σ_{j=1}^n c_j 1_{A_j},
where each c_j ∈ R^d. We can assume, without loss of generality, that
the A_j's are disjoint Borel subsets of A.




By Theorem 5, we find that

  E[exp(i(u, ∫_A f(x) N(t, dx)))] = E[exp(i(u, Σ_{j=1}^n c_j N(t, A_j)))]
                                  = Π_{j=1}^n E[exp(i(u, c_j N(t, A_j)))]
                                  = Π_{j=1}^n exp(t(e^{i(u,c_j)} − 1) µ(A_j))
                                  = exp(t ∫_A (e^{i(u,f(x))} − 1) µ(dx)).




Now for an arbitrary f ∈ L¹(A, µ_A), we can find a sequence of simple
functions converging to f in L¹, and hence a subsequence which
converges to f almost surely. Passing to the limit along this
subsequence in the above yields the required result, via dominated
convergence.
(2) and (3) follow from (1) by differentiation.                       □
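For instance, in dimension one the mean in (2) drops out of (1) by differentiating the characteristic function at u = 0 (a sketch of the standard computation, not spelled out on the slides):

```latex
\phi_t(u) := \mathbb{E}\Big[\exp\Big(iu\int_A f(x)\,N(t,dx)\Big)\Big]
           = \exp\Big(t\int_A \big(e^{iuf(x)}-1\big)\,\mu(dx)\Big),
\qquad
\phi_t'(u) = \phi_t(u)\cdot t\int_A if(x)\,e^{iuf(x)}\,\mu(dx).

% Since \phi_t(0) = 1, evaluating at u = 0 gives
\mathbb{E}\Big[\int_A f(x)\,N(t,dx)\Big] = -i\,\phi_t'(0) = t\int_A f(x)\,\mu(dx);

% differentiating once more at u = 0 yields the second moment, and (3) follows.
```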




It follows from Theorem 6 (2) that a Poisson integral will fail to have a
finite mean if f ∉ L¹(A, µ).
For each f ∈ L¹(A, µ_A), t ≥ 0, we define the compensated Poisson
integral by

      ∫_A f(x) Ñ(t, dx) := ∫_A f(x) N(t, dx) − t ∫_A f(x) µ(dx).

A straightforward argument shows that
(∫_A f(x) Ñ(t, dx), t ≥ 0) is a martingale, and we will use this fact
extensively in the sequel.
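As a quick numerical sketch (same illustrative setup as above: µ_A = λ · Uniform[1, 2], f(x) = x), the compensated integral has mean zero at every fixed t, consistent with the martingale property:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, t, n_paths = 3.0, 2.0, 50_000
compensator = t * lam * 3 / 2             # t * int_A f dmu

counts = rng.poisson(lam * t, size=n_paths)
vals = np.array([rng.uniform(1.0, 2.0, size=n).sum() for n in counts])

# Compensated Poisson integral: int_A f dN~ = int_A f dN - t * int_A f dmu.
compensated = vals - compensator
print(compensated.mean())                 # ~ 0
```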




Note that by Theorem 6 (2) and (3), we can easily deduce the following
two important facts:

      E[exp(i(u, ∫_A f(x) Ñ(t, dx)))]
          = exp(t ∫_{R^d} (e^{i(u,x)} − 1 − i(u, x)) µ_{f,A}(dx)),              (0.3)

for each u ∈ R^d, and, for f ∈ L²(A, µ_A),

      E[|∫_A f(x) Ñ(t, dx)|²] = t ∫_A |f(x)|² µ(dx).                            (0.4)




Processes of Finite Variation


We begin by introducing a useful class of functions. Let
P = {a = t₁ < t₂ < · · · < t_n < t_{n+1} = b} be a partition of the interval
[a, b] in R, and define its mesh to be δ = max_{1≤i≤n} |t_{i+1} − t_i|. We define
the variation Var_P(g) of a càdlàg mapping g : [a, b] → R^d over the
partition P by the prescription

      Var_P(g) = Σ_{i=1}^n |g(t_{i+1}) − g(t_i)|.
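A small numerical sketch of the definition: Var_P(g) over refining uniform partitions of [0, 2π] for the smooth path g(t) = sin t, whose total variation is ∫₀^{2π} |cos t| dt = 4.

```python
import numpy as np

g, a, b = np.sin, 0.0, 2 * np.pi

for n in (10, 100, 1000, 10000):
    ts = np.linspace(a, b, n + 1)            # partition a = t_1 < ... < t_{n+1} = b
    var_P = np.abs(np.diff(g(ts))).sum()     # Var_P(g) = sum_i |g(t_{i+1}) - g(t_i)|
    print(n, var_P)                          # increases towards V(g) = 4
```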




If V(g) = sup_P Var_P(g) < ∞, we say that g has finite variation on
[a, b]. If g is defined on the whole of R (or R⁺), it is said to have finite
variation if it has finite variation on each compact interval.
It is a trivial observation that every non-decreasing g is of finite
variation. Conversely, if g is of finite variation, then it can always be
written as the difference of two non-decreasing functions: to see this,
just write g = (V(g) + g)/2 − (V(g) − g)/2, where V(g)(t) is the variation of g on
[a, t] (both summands are non-decreasing because
|g(t) − g(s)| ≤ V(g)(t) − V(g)(s) for s ≤ t).




Functions of finite variation are important in integration: suppose
that we are given a function g which we are proposing as an integrator;
then, as a minimum, we will want to be able to define the Stieltjes
integral ∫_I f dg for all continuous functions f (where I is some finite
interval). In fact, a necessary and sufficient condition for obtaining such
an integral as a limit of Riemann sums is that g has finite variation.
A stochastic process (X(t), t ≥ 0) is of finite variation if the paths
(X(t)(ω), t ≥ 0) are of finite variation for almost all ω ∈ Ω.
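A minimal sketch of this: for a pure-jump step path g (which has finite variation), the Riemann sums Σ_i f(t_i)(g(t_{i+1}) − g(t_i)) converge to the sum of f at the jump times weighted by the jump sizes. The path below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# g: a cadlag step function on I = [0, 1] with five random jumps.
jump_times = np.sort(rng.uniform(0.0, 1.0, size=5))
jump_sizes = rng.normal(size=5)
g = lambda t: jump_sizes[jump_times <= t].sum()
f = np.cos

for n in (10, 100, 1000, 10000):
    ts = np.linspace(0.0, 1.0, n + 1)
    riemann_sum = sum(f(ts[i]) * (g(ts[i + 1]) - g(ts[i])) for i in range(n))
    print(n, riemann_sum)

# Exact value of int_I f dg for a step integrator:
print((f(jump_times) * jump_sizes).sum())
```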




The following is an important example for us.
Example (Poisson integrals)
Let N be a Poisson random measure with intensity measure µ and let
f : R^d → R^d be Borel measurable. For A bounded below, let
Y = (Y(t), t ≥ 0) be given by Y(t) = ∫_A f(x) N(t, dx); then Y is of finite
variation on [0, t] for each t ≥ 0. To see this, we observe that for all
partitions P of [0, t], we have

      Var_P(Y) ≤ Σ_{0≤s≤t} |f(∆X(s))| 1_A(∆X(s)) < ∞  a.s.,            (0.5)

where X(t) = ∫_A x N(t, dx), for each t ≥ 0.




In fact, a necessary and sufficient condition for a Lévy process to be of
finite variation is that there is no Brownian part (i.e. a = 0 in the
Lévy-Khinchine formula) and ∫_{|x|<1} |x| ν(dx) < ∞.




The Lévy-Itô Decomposition


This is the key result of this lecture.
First, note that for A bounded below, for each t ≥ 0,

      ∫_A x N(t, dx) = Σ_{0≤u≤t} ∆X(u) 1_A(∆X(u))

is the sum of all the jumps taking values in the set A up to the time t.
Since the paths of X are càdlàg, this is clearly a finite random sum. In
particular, ∫_{|x|≥1} x N(t, dx) is the sum of all jumps of size bigger than
one. It is a compound Poisson process and has finite variation, but it may
have no finite moments.
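A small sketch of this first step (the jump data are simulated from an illustrative compound Poisson path, not taken from the lecture): extract the finitely many jumps of size at least one and sum them.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative path on [0, t]: compound Poisson with Laplace jump sizes.
t, rate = 10.0, 3.0
n = rng.poisson(rate * t)
jump_times = np.sort(rng.uniform(0.0, t, size=n))
jump_sizes = rng.laplace(size=n)

big = np.abs(jump_sizes) >= 1.0           # the jumps landing in {|x| >= 1}
print(big.sum(), "jumps with |x| >= 1, at times", np.round(jump_times[big], 2))
print("int_{|x|>=1} x N(t, dx) =", jump_sizes[big].sum())
```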




On the other hand, it can be shown that X(t) − ∫_{|x|≥1} x N(t, dx) is a
Lévy process having finite moments to all orders.
Now let's turn our attention to the small jumps. We study compensated
integrals, which we know are martingales. Introduce the notation

      M(t, A) := ∫_A x Ñ(t, dx)

for t ≥ 0 and A bounded below. For each m ∈ N, let

      B_m = {x ∈ R^d : 1/(m+1) < |x| ≤ 1/m},

and for each n ∈ N, let A_n = ∪_{m=1}^n B_m.




Define

      ∫_{|x|<1} x Ñ(t, dx) := L² − lim_{n→∞} M(t, A_n),

which is a martingale. Moreover, on taking limits in (0.3), we get

      E[exp(i(u, ∫_{|x|<1} x Ñ(t, dx)))] = exp(t ∫_{|x|<1} (e^{i(u,x)} − 1 − i(u, x)) µ(dx)).
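A numerical sketch of why this L² limit exists, with an illustrative one-dimensional Lévy measure ν(dx) = |x|^{−1−α} dx on {0 < |x| < 1}, α = 1.5 (an assumption, playing the role of the intensity measure µ): by (0.4), Var M(t, A_n) = t ∫_{A_n} |x|² ν(dx), and these second moments stay bounded as n grows even though ν(A_n) and ∫_{A_n} |x| ν(dx) diverge, which is exactly why compensation is needed for the small jumps.

```python
# Closed-form annulus moments of nu(dx) = |x|^(-1-alpha) dx on {0 < |x| < 1}.
alpha = 1.5

def moment(p, lo, hi):
    """2 * int_lo^hi x^(p - 1 - alpha) dx: the p-th absolute moment of nu on an annulus."""
    e = p - alpha
    return 2.0 * (hi**e - lo**e) / e

mass = mean_abs = var = 0.0
for m in range(1, 10001):
    lo, hi = 1.0 / (m + 1), 1.0 / m       # the annulus B_m
    mass += moment(0, lo, hi)             # nu(A_n): diverges (infinite activity)
    mean_abs += moment(1, lo, hi)         # int_{A_n} |x| nu(dx): diverges for alpha > 1
    var += moment(2, lo, hi)              # int_{A_n} |x|^2 nu(dx): converges (to 4)
print(mass, mean_abs, var)
```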




Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)
Koc3(dba)

Weitere ähnliche Inhalte

Was ist angesagt?

5. fourier properties
5. fourier properties5. fourier properties
5. fourier propertiesskysunilyadav
 
Optics Fourier Transform Ii
Optics Fourier Transform IiOptics Fourier Transform Ii
Optics Fourier Transform Iidiarmseven
 
InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2Teng Li
 
Weyl's Theorem for Algebraically Totally K - Quasi – Paranormal Operators
Weyl's Theorem for Algebraically Totally K - Quasi – Paranormal OperatorsWeyl's Theorem for Algebraically Totally K - Quasi – Paranormal Operators
Weyl's Theorem for Algebraically Totally K - Quasi – Paranormal OperatorsIOSR Journals
 
Totally R*-Continuous and Totally R*-Irresolute Functions
Totally R*-Continuous and Totally R*-Irresolute FunctionsTotally R*-Continuous and Totally R*-Irresolute Functions
Totally R*-Continuous and Totally R*-Irresolute Functionsinventionjournals
 
Introduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysisIntroduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysis宗翰 謝
 
EM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysisEM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysiszukun
 
Jyokyo-kai-20120605
Jyokyo-kai-20120605Jyokyo-kai-20120605
Jyokyo-kai-20120605ketanaka
 
弱値の半古典論
弱値の半古典論弱値の半古典論
弱値の半古典論tanaka-atushi
 
Eece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transformEece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transformSandilya Sridhara
 
Optics Fourier Transform I
Optics Fourier Transform IOptics Fourier Transform I
Optics Fourier Transform Idiarmseven
 
A Komlo ́sTheorem for general Banach lattices of measurable functions
A Komlo ́sTheorem for general Banach lattices of measurable functionsA Komlo ́sTheorem for general Banach lattices of measurable functions
A Komlo ́sTheorem for general Banach lattices of measurable functionsesasancpe
 
Probabilistic diameter and its properties.
Probabilistic diameter and its properties.Probabilistic diameter and its properties.
Probabilistic diameter and its properties.inventionjournals
 
Expressiveness and Model of the Polymorphic λ Calculus
Expressiveness and Model of the Polymorphic λ CalculusExpressiveness and Model of the Polymorphic λ Calculus
Expressiveness and Model of the Polymorphic λ Calculusevastsdsh
 
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...BRNSS Publication Hub
 
The structure of functions
The structure of functionsThe structure of functions
The structure of functionsSpringer
 
Fourier analysis techniques fourier series
Fourier analysis techniques   fourier seriesFourier analysis techniques   fourier series
Fourier analysis techniques fourier seriesJawad Khan
 
Hamilton-Jacobi approach for second order traffic flow models
Hamilton-Jacobi approach for second order traffic flow modelsHamilton-Jacobi approach for second order traffic flow models
Hamilton-Jacobi approach for second order traffic flow modelsGuillaume Costeseque
 

Was ist angesagt? (20)

5. fourier properties
5. fourier properties5. fourier properties
5. fourier properties
 
Optics Fourier Transform Ii
Optics Fourier Transform IiOptics Fourier Transform Ii
Optics Fourier Transform Ii
 
InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2
 
Weyl's Theorem for Algebraically Totally K - Quasi – Paranormal Operators
Weyl's Theorem for Algebraically Totally K - Quasi – Paranormal OperatorsWeyl's Theorem for Algebraically Totally K - Quasi – Paranormal Operators
Weyl's Theorem for Algebraically Totally K - Quasi – Paranormal Operators
 
Totally R*-Continuous and Totally R*-Irresolute Functions
Totally R*-Continuous and Totally R*-Irresolute FunctionsTotally R*-Continuous and Totally R*-Irresolute Functions
Totally R*-Continuous and Totally R*-Irresolute Functions
 
Introduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysisIntroduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysis
 
EM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysisEM algorithm and its application in probabilistic latent semantic analysis
EM algorithm and its application in probabilistic latent semantic analysis
 
Jyokyo-kai-20120605
Jyokyo-kai-20120605Jyokyo-kai-20120605
Jyokyo-kai-20120605
 
弱値の半古典論
弱値の半古典論弱値の半古典論
弱値の半古典論
 
Eece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transformEece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transform
 
Optics Fourier Transform I
Optics Fourier Transform IOptics Fourier Transform I
Optics Fourier Transform I
 
A Komlo ́sTheorem for general Banach lattices of measurable functions
A Komlo ́sTheorem for general Banach lattices of measurable functionsA Komlo ́sTheorem for general Banach lattices of measurable functions
A Komlo ́sTheorem for general Banach lattices of measurable functions
 
Probabilistic diameter and its properties.
Probabilistic diameter and its properties.Probabilistic diameter and its properties.
Probabilistic diameter and its properties.
 
Expressiveness and Model of the Polymorphic λ Calculus
Expressiveness and Model of the Polymorphic λ CalculusExpressiveness and Model of the Polymorphic λ Calculus
Expressiveness and Model of the Polymorphic λ Calculus
 
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...
New Method for Finding an Optimal Solution of Generalized Fuzzy Transportatio...
 
Properties of Fourier transform
Properties of Fourier transformProperties of Fourier transform
Properties of Fourier transform
 
The structure of functions
The structure of functionsThe structure of functions
The structure of functions
 
Fourier analysis techniques fourier series
Fourier analysis techniques   fourier seriesFourier analysis techniques   fourier series
Fourier analysis techniques fourier series
 
Hamilton-Jacobi approach for second order traffic flow models
Hamilton-Jacobi approach for second order traffic flow modelsHamilton-Jacobi approach for second order traffic flow models
Hamilton-Jacobi approach for second order traffic flow models
 
Ch06 6
Ch06 6Ch06 6
Ch06 6
 

Ähnlich wie Koc3(dba)

Overview of Stochastic Calculus Foundations
Overview of Stochastic Calculus FoundationsOverview of Stochastic Calculus Foundations
Overview of Stochastic Calculus FoundationsAshwin Rao
 
Stochastic Calculus, Summer 2014, July 22,Lecture 7Con.docx
Stochastic Calculus, Summer 2014, July 22,Lecture 7Con.docxStochastic Calculus, Summer 2014, July 22,Lecture 7Con.docx
Stochastic Calculus, Summer 2014, July 22,Lecture 7Con.docxdessiechisomjj4
 
Limit in Dual Space
Limit in Dual SpaceLimit in Dual Space
Limit in Dual SpaceQUESTJOURNAL
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsStefano Cabras
 
Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...Alexander Decker
 
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)IJERA Editor
 
Existance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential EquartionExistance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential Equartioninventionjournals
 
On Twisted Paraproducts and some other Multilinear Singular Integrals
On Twisted Paraproducts and some other Multilinear Singular IntegralsOn Twisted Paraproducts and some other Multilinear Singular Integrals
On Twisted Paraproducts and some other Multilinear Singular IntegralsVjekoslavKovac1
 
Algorithms and Complexity: Cryptography Theory
Algorithms and Complexity: Cryptography TheoryAlgorithms and Complexity: Cryptography Theory
Algorithms and Complexity: Cryptography TheoryAlex Prut
 
Intro probability 4
Intro probability 4Intro probability 4
Intro probability 4Phong Vo
 
the fourier series
the fourier seriesthe fourier series
the fourier seriessafi al amu
 
A generalisation of the ratio-of-uniform algorithm
A generalisation of the ratio-of-uniform algorithmA generalisation of the ratio-of-uniform algorithm
A generalisation of the ratio-of-uniform algorithmChristian Robert
 

Ähnlich wie Koc3(dba) (20)

Ft3 new
Ft3 newFt3 new
Ft3 new
 
Overview of Stochastic Calculus Foundations
Overview of Stochastic Calculus FoundationsOverview of Stochastic Calculus Foundations
Overview of Stochastic Calculus Foundations
 
Koc2(dba)
Koc2(dba)Koc2(dba)
Koc2(dba)
 
Stochastic Calculus, Summer 2014, July 22,Lecture 7Con.docx
Stochastic Calculus, Summer 2014, July 22,Lecture 7Con.docxStochastic Calculus, Summer 2014, July 22,Lecture 7Con.docx
Stochastic Calculus, Summer 2014, July 22,Lecture 7Con.docx
 
Limit in Dual Space
Limit in Dual SpaceLimit in Dual Space
Limit in Dual Space
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...
 
Stochastic Assignment Help
Stochastic Assignment Help Stochastic Assignment Help
Stochastic Assignment Help
 
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
 
Existance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential EquartionExistance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential Equartion
 
On Twisted Paraproducts and some other Multilinear Singular Integrals
On Twisted Paraproducts and some other Multilinear Singular IntegralsOn Twisted Paraproducts and some other Multilinear Singular Integrals
On Twisted Paraproducts and some other Multilinear Singular Integrals
 
Algorithms and Complexity: Cryptography Theory
Algorithms and Complexity: Cryptography TheoryAlgorithms and Complexity: Cryptography Theory
Algorithms and Complexity: Cryptography Theory
 
04_AJMS_254_19.pdf
04_AJMS_254_19.pdf04_AJMS_254_19.pdf
04_AJMS_254_19.pdf
 
Intro probability 4
Intro probability 4Intro probability 4
Intro probability 4
 
smtlecture.7
smtlecture.7smtlecture.7
smtlecture.7
 
the fourier series
the fourier seriesthe fourier series
the fourier series
 
Slides mc gill-v3
Slides mc gill-v3Slides mc gill-v3
Slides mc gill-v3
 
20120140504015 2
20120140504015 220120140504015 2
20120140504015 2
 
Slides mc gill-v4
Slides mc gill-v4Slides mc gill-v4
Slides mc gill-v4
 
A generalisation of the ratio-of-uniform algorithm
A generalisation of the ratio-of-uniform algorithmA generalisation of the ratio-of-uniform algorithm
A generalisation of the ratio-of-uniform algorithm
 

Kürzlich hochgeladen

fca-bsps-decision-letter-redacted (1).pdf
fca-bsps-decision-letter-redacted (1).pdffca-bsps-decision-letter-redacted (1).pdf
fca-bsps-decision-letter-redacted (1).pdfHenry Tapper
 
Amil Baba In Pakistan amil baba in Lahore amil baba in Islamabad amil baba in...
Amil Baba In Pakistan amil baba in Lahore amil baba in Islamabad amil baba in...Amil Baba In Pakistan amil baba in Lahore amil baba in Islamabad amil baba in...
Amil Baba In Pakistan amil baba in Lahore amil baba in Islamabad amil baba in...amilabibi1
 
Vp Girls near me Delhi Call Now or WhatsApp
Vp Girls near me Delhi Call Now or WhatsAppVp Girls near me Delhi Call Now or WhatsApp
Vp Girls near me Delhi Call Now or WhatsAppmiss dipika
 
NO1 WorldWide online istikhara for love marriage vashikaran specialist love p...
NO1 WorldWide online istikhara for love marriage vashikaran specialist love p...NO1 WorldWide online istikhara for love marriage vashikaran specialist love p...
NO1 WorldWide online istikhara for love marriage vashikaran specialist love p...Amil Baba Dawood bangali
 
Stock Market Brief Deck FOR 4/17 video.pdf
Stock Market Brief Deck FOR 4/17 video.pdfStock Market Brief Deck FOR 4/17 video.pdf
Stock Market Brief Deck FOR 4/17 video.pdfMichael Silva
 
Classical Theory of Macroeconomics by Adam Smith
Classical Theory of Macroeconomics by Adam SmithClassical Theory of Macroeconomics by Adam Smith
Classical Theory of Macroeconomics by Adam SmithAdamYassin2
 
government_intervention_in_business_ownership[1].pdf
government_intervention_in_business_ownership[1].pdfgovernment_intervention_in_business_ownership[1].pdf
government_intervention_in_business_ownership[1].pdfshaunmashale756
 
GOODSANDSERVICETAX IN INDIAN ECONOMY IMPACT
GOODSANDSERVICETAX IN INDIAN ECONOMY IMPACTGOODSANDSERVICETAX IN INDIAN ECONOMY IMPACT
GOODSANDSERVICETAX IN INDIAN ECONOMY IMPACTharshitverma1762
 
Authentic No 1 Amil Baba In Pakistan Authentic No 1 Amil Baba In Karachi No 1...
Authentic No 1 Amil Baba In Pakistan Authentic No 1 Amil Baba In Karachi No 1...Authentic No 1 Amil Baba In Pakistan Authentic No 1 Amil Baba In Karachi No 1...
Authentic No 1 Amil Baba In Pakistan Authentic No 1 Amil Baba In Karachi No 1...First NO1 World Amil baba in Faisalabad
 
Stock Market Brief Deck for 4/24/24 .pdf
Stock Market Brief Deck for 4/24/24 .pdfStock Market Brief Deck for 4/24/24 .pdf
Stock Market Brief Deck for 4/24/24 .pdfMichael Silva
 
Stock Market Brief Deck for "this does not happen often".pdf
Stock Market Brief Deck for "this does not happen often".pdfStock Market Brief Deck for "this does not happen often".pdf
Stock Market Brief Deck for "this does not happen often".pdfMichael Silva
 
Call Girls Near Golden Tulip Essential Hotel, New Delhi 9873777170
Call Girls Near Golden Tulip Essential Hotel, New Delhi 9873777170Call Girls Near Golden Tulip Essential Hotel, New Delhi 9873777170
Call Girls Near Golden Tulip Essential Hotel, New Delhi 9873777170Sonam Pathan
 
《加拿大本地办假证-寻找办理Dalhousie毕业证和达尔豪斯大学毕业证书的中介代理》
《加拿大本地办假证-寻找办理Dalhousie毕业证和达尔豪斯大学毕业证书的中介代理》《加拿大本地办假证-寻找办理Dalhousie毕业证和达尔豪斯大学毕业证书的中介代理》
《加拿大本地办假证-寻找办理Dalhousie毕业证和达尔豪斯大学毕业证书的中介代理》rnrncn29
 
The Core Functions of the Bangko Sentral ng Pilipinas
The Core Functions of the Bangko Sentral ng PilipinasThe Core Functions of the Bangko Sentral ng Pilipinas
The Core Functions of the Bangko Sentral ng PilipinasCherylouCamus
 
(中央兰开夏大学毕业证学位证成绩单-案例)
(中央兰开夏大学毕业证学位证成绩单-案例)(中央兰开夏大学毕业证学位证成绩单-案例)
(中央兰开夏大学毕业证学位证成绩单-案例)twfkn8xj
 
Economic Risk Factor Update: April 2024 [SlideShare]
Economic Risk Factor Update: April 2024 [SlideShare]Economic Risk Factor Update: April 2024 [SlideShare]
Economic Risk Factor Update: April 2024 [SlideShare]Commonwealth
 
House of Commons ; CDC schemes overview document
House of Commons ; CDC schemes overview documentHouse of Commons ; CDC schemes overview document
House of Commons ; CDC schemes overview documentHenry Tapper
 
SBP-Market-Operations and market managment
SBP-Market-Operations and market managmentSBP-Market-Operations and market managment
SBP-Market-Operations and market managmentfactical
 
Tenets of Physiocracy History of Economic
Tenets of Physiocracy History of EconomicTenets of Physiocracy History of Economic
Tenets of Physiocracy History of Economiccinemoviesu
 

Kürzlich hochgeladen (20)

fca-bsps-decision-letter-redacted (1).pdf
fca-bsps-decision-letter-redacted (1).pdffca-bsps-decision-letter-redacted (1).pdf
fca-bsps-decision-letter-redacted (1).pdf
 
Amil Baba In Pakistan amil baba in Lahore amil baba in Islamabad amil baba in...
  • 1. Lectures on Lévy Processes and Stochastic Calculus (Koc University). Lecture 3: The Lévy-Itô Decomposition. David Applebaum, School of Mathematics and Statistics, University of Sheffield, UK. 7th December 2011.
  • 2. Filtrations, Markov Processes and Martingales. We recall the probability space (Ω, F, P) which underlies our investigations; F contains all possible events in Ω. When we introduce the arrow of time, it's convenient to be able to consider only those events which can occur up to and including time t. We denote by Ft this sub-σ-algebra of F. To be able to consider all time instants on an equal footing, we define a filtration to be an increasing family (Ft, t ≥ 0) of sub-σ-algebras of F, i.e. 0 ≤ s ≤ t < ∞ ⇒ Fs ⊆ Ft.
  • 6. A stochastic process X = (X(t), t ≥ 0) is adapted to the given filtration if each X(t) is Ft-measurable, e.g. any process is adapted to its natural filtration, FtX = σ{X(s); 0 ≤ s ≤ t}. An adapted process X = (X(t), t ≥ 0) is a Markov process if for all f ∈ Bb(Rd), 0 ≤ s ≤ t < ∞, E(f(X(t))|Fs) = E(f(X(t))|X(s)) (a.s.), (0.1) i.e. "past" and "future" are independent, given the present. The transition probabilities of a Markov process are ps,t(x, A) = P(X(t) ∈ A|X(s) = x), i.e. the probability that the process is in the Borel set A at time t given that it is at the point x at the earlier time s.
  • 11. Theorem. If X is a Lévy process (adapted to its own natural filtration) wherein each X(t) has law qt, then it is a Markov process with transition probabilities ps,t(x, A) = qt−s(A − x). Proof. This essentially follows from E(f(X(t))|Fs) = E(f(X(s) + X(t) − X(s))|Fs) = ∫_{Rd} f(X(s) + y) qt−s(dy). □
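This transition formula is easy to sanity-check in a concrete case. The minimal sketch below (my own illustration, not part of the slides) takes X to be standard one-dimensional Brownian motion, so that qt−s is the N(0, t − s) law, and compares a Monte Carlo estimate of E(f(X(t)) | X(s) = x) with ∫ f(x + y) qt−s(dy); the choices of f, x, s and t are arbitrary.

```python
# Monte Carlo check of p_{s,t}(x, A) = q_{t-s}(A - x) for 1-d Brownian motion,
# where q_{t-s} = N(0, t - s). Illustration only: f, x, s, t are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
s, t, x = 1.0, 2.0, 0.7            # times 0 <= s < t and conditioning point x
f = np.cos                          # a bounded Borel function f in B_b(R)

# E(f(X(t)) | X(s) = x): simulate the independent increment X(t) - X(s).
incr = rng.normal(0.0, np.sqrt(t - s), size=1_000_000)
lhs = f(x + incr).mean()

# Integral of f(x + y) against the Gaussian density of q_{t-s}.
ys = np.linspace(-10.0, 10.0, 200_001)
dens = np.exp(-ys**2 / (2 * (t - s))) / np.sqrt(2 * np.pi * (t - s))
rhs = np.sum(f(x + ys) * dens) * (ys[1] - ys[0])

print(lhs, rhs)   # the two numbers should agree to Monte Carlo accuracy
```

The key point reflected in the code is that conditioning on X(s) = x only requires simulating the increment X(t) − X(s), which is independent of Fs by (L2).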
  • 14. Now let X be an adapted process defined on a filtered probability space which also satisfies the integrability requirement E(|X(t)|) < ∞ for all t ≥ 0. We say that it is a martingale if for all 0 ≤ s < t < ∞, E(X(t)|Fs) = X(s) a.s. Note that if X is a martingale, then the map t → E(X(t)) is constant.
  • 18. An adapted Lévy process with zero mean is a martingale (with respect to its natural filtration) since in this case, for 0 ≤ s ≤ t < ∞ and using the convenient notation Es(·) := E(·|Fs): Es(X(t)) = Es(X(s) + X(t) − X(s)) = X(s) + E(X(t) − X(s)) = X(s). Although there is no good reason why a generic Lévy process should be a martingale (or even have finite mean), there are some important examples:
  • 21. e.g. the processes whose values at time t are σB(t), where B(t) is a standard Brownian motion and σ is an r × d matrix, and Ñ(t), where Ñ is a compensated Poisson process with intensity λ. Some important martingales associated to Lévy processes include: exp{i(u, X(t)) − tη(u)}, where u ∈ Rd is fixed; |σB(t)|² − tr(A)t, where A = σᵀσ; Ñ(t)² − λt.
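The last two examples are easy to test numerically. The following minimal sketch (my own illustration, with arbitrary intensity λ and time points, not part of the slides) estimates E(Ñ(t)) and E(Ñ(t)² − λt) by simulation; since both processes are martingales started at 0, both estimates should sit near zero at every t.

```python
# Empirical check that E(N~(t)) and E(N~(t)^2 - lam*t) are constant (zero) in t,
# where N~(t) = N(t) - lam*t is a compensated Poisson process.
# Illustration only; lam and the time grid are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
lam, n_paths = 3.0, 200_000

for t in (0.5, 1.0, 2.0, 4.0):
    N_t = rng.poisson(lam * t, size=n_paths)           # N(t) ~ Poisson(lam*t)
    comp = N_t - lam * t                                # N~(t)
    print(t, comp.mean(), (comp**2 - lam * t).mean())   # both ~ 0 for every t
```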
  • 28. Càdlàg Paths. A function f : R+ → Rd is càdlàg if it is continue à droite et limité à gauche, i.e. right continuous with left limits. Such a function has only jump discontinuities. Define f(t−) = lims↑t f(s) and ∆f(t) = f(t) − f(t−). If f is càdlàg, then {0 ≤ t ≤ T; ∆f(t) ≠ 0} is at most countable. If the filtration satisfies the "usual hypotheses" of right continuity and completion, then every Lévy process has a càdlàg modification which is itself a Lévy process.
  • 33. From now on, we will always make the following assumptions: (Ω, F, P) will be a fixed probability space equipped with a filtration (Ft, t ≥ 0) which satisfies the "usual hypotheses"; every Lévy process X = (X(t), t ≥ 0) will be assumed to be Ft-adapted and to have càdlàg sample paths; and X(t) − X(s) is independent of Fs for all 0 ≤ s < t < ∞.
  • 37. The Jumps of A Lévy Process - Poisson Random Measures. The jump process ∆X = (∆X(t), t ≥ 0) associated to a Lévy process is defined by ∆X(t) = X(t) − X(t−), for each t ≥ 0. Theorem. If N is a Lévy process which is increasing (a.s.) and is such that (∆N(t), t ≥ 0) takes values in {0, 1}, then N is a Poisson process. Proof. Define a sequence of stopping times recursively by T0 = 0 and Tn = inf{t > Tn−1; N(t) − N(Tn−1) ≠ 0} for each n ∈ N. It follows from (L2) that the sequence (T1, T2 − T1, . . . , Tn − Tn−1, . . .) is i.i.d.
  • 42. By (L2) again, we have for each s, t ≥ 0, P(T1 > s + t) = P(N(s) = 0, N(t + s) − N(s) = 0) = P(T1 > s)P(T1 > t). From the fact that N is increasing (a.s.), it follows easily that the map t → P(T1 > t) is decreasing, and by a straightforward application of stochastic continuity (L3) we find that the map t → P(T1 > t) is continuous at t = 0. Hence there exists λ > 0 such that P(T1 > t) = e^{−λt} for each t ≥ 0.
  • 47. So T1 has an exponential distribution with parameter λ and P(N(t) = 0) = P(T1 > t) = e^{−λt}, for each t ≥ 0. Now assume as an inductive hypothesis that P(N(t) = n) = e^{−λt}(λt)^n/n!; then P(N(t) = n + 1) = P(Tn+2 > t, Tn+1 ≤ t) = P(Tn+2 > t) − P(Tn+1 > t). But Tn+1 = T1 + (T2 − T1) + · · · + (Tn+1 − Tn) is the sum of (n + 1) i.i.d. exponential random variables, and so has a gamma distribution with density f_{Tn+1}(s) = e^{−λs} λ^{n+1} s^n / n! for s > 0. The required result follows on integration. □
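The proof's mechanism, i.i.d. Exp(λ) inter-arrival times whose partial sums are the jump times, can be simulated directly. The sketch below (an added illustration; λ, t and the sample size are arbitrary) builds N(t) by counting arrival times in [0, t] and compares the empirical law of N(t) with the Poisson(λt) mass function e^{−λt}(λt)^n/n! obtained above.

```python
# Simulate a Poisson process via i.i.d. Exp(lam) inter-arrival times and
# compare P(N(t) = n) with the Poisson(lam*t) pmf derived in the proof.
# Illustration only; lam, t and the sample size are arbitrary choices.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)
lam, t, n_paths = 2.0, 1.5, 100_000

gaps = rng.exponential(1.0 / lam, size=(n_paths, 40))  # T_n - T_{n-1}; 40 is ample
arrival_times = gaps.cumsum(axis=1)                    # T_1 < T_2 < ...
N_t = (arrival_times <= t).sum(axis=1)                 # N(t) for each path

for n in range(8):
    empirical = (N_t == n).mean()
    exact = exp(-lam * t) * (lam * t)**n / factorial(n)
    print(n, round(empirical, 4), round(exact, 4))
```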
  • 53. The following result shows that ∆X is not a straightforward process to analyse. Lemma. If X is a Lévy process, then for fixed t > 0, ∆X(t) = 0 (a.s.). Proof. Let (t(n), n ∈ N) be a sequence in R+ with t(n) ↑ t as n → ∞; then since X has càdlàg paths, limn→∞ X(t(n)) = X(t−). However, by (L3) the sequence (X(t(n)), n ∈ N) converges in probability to X(t), and so has a subsequence which converges almost surely to X(t). The result follows by uniqueness of limits. □
  • 60. Much of the analytic difficulty in manipulating Lévy processes arises from the fact that it is possible for them to have ∑_{0≤s≤t} |∆X(s)| = ∞ a.s., and the way in which these difficulties are overcome exploits the fact that we always have ∑_{0≤s≤t} |∆X(s)|² < ∞ a.s. We will gain more insight into these ideas as the discussion progresses.
  • 63. Rather than exploring ∆X itself further, we will find it more profitable to count jumps of specified size. More precisely, let 0 ≤ t < ∞ and A ∈ B(Rd − {0}). Define N(t, A) = #{0 ≤ s ≤ t; ∆X(s) ∈ A} = ∑_{0≤s≤t} 1_A(∆X(s)). Note that for each ω ∈ Ω, t ≥ 0, the set function A → N(t, A)(ω) is a counting measure on B(Rd − {0}) and hence E(N(t, A)) = ∫_Ω N(t, A)(ω) dP(ω) is a Borel measure on B(Rd − {0}). We write µ(·) = E(N(1, ·)).
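Numerically, N(t, A) is just a count over a path's jumps. The sketch below (an added illustration: the driving path is a compound Poisson process whose rate and jump-size law are arbitrary choices) stores jump times and jump sizes and evaluates N(t, A) for a set A = {|x| ≥ 0.5}, which is bounded away from 0.

```python
# Counting jumps of specified size: N(t, A) = #{0 <= s <= t : dX(s) in A}.
# Illustration with one compound Poisson path; rate and jump law are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
lam, t = 5.0, 10.0

n_jumps = rng.poisson(lam * t)                   # number of jumps on [0, t]
jump_times = np.sort(rng.uniform(0.0, t, n_jumps))
jump_sizes = rng.normal(0.0, 1.0, n_jumps)       # dX(s) at each jump time s

def N(t_, indicator_A):
    """N(t_, A), with A described by an indicator function on R - {0}."""
    in_window = jump_times <= t_
    return int(np.sum(indicator_A(jump_sizes[in_window])))

A = lambda x: np.abs(x) >= 0.5                   # A = {|x| >= 0.5}, bounded below
print(N(t, A), "jumps of modulus >= 0.5 up to time", t)
```

For fixed A, varying t_ in the function above traces out exactly the counting process (N(t, A), t ≥ 0) of the next theorem.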
  • 67. We say that A ∈ B(Rd − {0}) is bounded below if 0 ∉ Ā. Lemma. If A is bounded below, then N(t, A) < ∞ (a.s.) for all t ≥ 0. Proof. Define a sequence of stopping times (T_n^A, n ∈ N) by T_1^A = inf{t > 0; ∆X(t) ∈ A}, and for n > 1, T_n^A = inf{t > T_{n−1}^A; ∆X(t) ∈ A}. Since X has càdlàg paths, we have T_1^A > 0 (a.s.) and lim_{n→∞} T_n^A = ∞ (a.s.). Indeed, suppose that T_1^A = 0 with non-zero probability and let N = {ω ∈ Ω : T_1^A(ω) = 0}. Assume that ω ∈ N. Then given any u > 0, we can find 0 < δ′ < δ < u and ε > 0 such that |X(δ)(ω) − X(δ′)(ω)| > ε, and this contradicts the (almost sure) right continuity of X(·)(ω) at the origin.
  • 74. Similarly, suppose that lim_{n→∞} T_n^A = T^A < ∞ with non-zero probability and define M = {ω ∈ Ω : lim_{n→∞} T_n^A(ω) = ∞}. If ω ∈ Ω − M then we obtain a contradiction with the fact that X has a left limit (almost surely) at T^A(ω). Hence, for each t ≥ 0, N(t, A) = ∑_{n∈N} 1_{{T_n^A ≤ t}} < ∞ a.s. □
  • 77. Be aware that if A fails to be bounded below, then this lemma may no longer hold, because of the accumulation of large numbers of small jumps. The following result should at least be plausible, given Theorem 2 and Lemma 4. Theorem. (1) If A is bounded below, then (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A). (2) If A1, . . . , Am ∈ B(Rd − {0}) are disjoint, then the random variables N(t, A1), . . . , N(t, Am) are independent. It follows immediately that µ(A) < ∞ whenever A is bounded below, hence the measure µ is σ-finite.
  • 81. The main properties of N, which we will use extensively in the sequel, are summarised below: (1) For each t > 0, ω ∈ Ω, N(t, ·)(ω) is a counting measure on B(Rd − {0}). (2) For each A bounded below, (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A) = E(N(1, A)). (3) The compensator (Ñ(t, A), t ≥ 0), where Ñ(t, A) = N(t, A) − tµ(A) for A bounded below, is a martingale-valued measure, i.e. for fixed A bounded below, (Ñ(t, A), t ≥ 0) is a martingale.
  • 86. Poisson Integration. Let f be a Borel measurable function from Rd to Rd and let A be bounded below; then for each t > 0, ω ∈ Ω, we may define the Poisson integral of f as a random finite sum by ∫_A f(x) N(t, dx)(ω) := ∑_{x∈A} f(x) N(t, {x})(ω). Note that each ∫_A f(x) N(t, dx) is an Rd-valued random variable and gives rise to a càdlàg stochastic process as we vary t. Now since N(t, {x}) ≠ 0 ⇔ ∆X(u) = x for at least one 0 ≤ u ≤ t, we have ∫_A f(x) N(t, dx) = ∑_{0≤u≤t} f(∆X(u)) 1_A(∆X(u)). (0.2)
  • 91. In the sequel, we will sometimes use µ_A to denote the restriction to A of the measure µ. In the following theorem, Var stands for variance. Theorem. Let A be bounded below; then (∫_A f(x) N(t, dx), t ≥ 0) is a compound Poisson process, with (1) characteristic function E[exp(i(u, ∫_A f(x) N(t, dx)))] = exp(t ∫_{Rd} (e^{i(u,x)} − 1) µ_{f,A}(dx)) for each u ∈ Rd, where µ_{f,A}(B) := µ(A ∩ f^{−1}(B)) for each B ∈ B(Rd); (2) if f ∈ L¹(A, µ_A), then E(∫_A f(x) N(t, dx)) = t ∫_A f(x) µ(dx).
  • 95. Theorem (continued). (3) If f ∈ L²(A, µ_A), then Var(∫_A f(x) N(t, dx)) = t ∫_A |f(x)|² µ(dx).
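Both moment formulas can be verified by simulation. In the sketch below (an added illustration, not from the slides) the restriction µ_A is taken to be λ times the Uniform[1, 2] law on A = [1, 2], so that ∫_A f dµ = λ∫₁² f(x) dx, and the Monte Carlo mean and variance of ∫_A f(x) N(t, dx) are compared with t∫_A f dµ and t∫_A |f|² dµ; the function f and all parameters are arbitrary choices.

```python
# Monte Carlo check of E(int_A f dN(t)) = t int_A f dmu and
# Var(int_A f dN(t)) = t int_A |f|^2 dmu for a compound Poisson example.
# Illustration: mu restricted to A = [1, 2] is lam * Uniform[1, 2]; f arbitrary.
import numpy as np

rng = np.random.default_rng(4)
lam, t, n_paths = 2.0, 3.0, 100_000
f = lambda x: x**2

vals = np.empty(n_paths)
for i, k in enumerate(rng.poisson(lam * t, size=n_paths)):   # k = N(t, A)
    jumps = rng.uniform(1.0, 2.0, size=k)                    # jump sizes in A
    vals[i] = f(jumps).sum()                                 # int_A f(x) N(t, dx)

xs = np.linspace(1.0, 2.0, 100_001)
dx = xs[1] - xs[0]
mean_exact = t * lam * np.sum(f(xs)) * dx      # t int_A f dmu
var_exact = t * lam * np.sum(f(xs)**2) * dx    # t int_A |f|^2 dmu
print(vals.mean(), mean_exact)
print(vals.var(), var_exact)
```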
  • 97. Proof (part of it!). (1) For simplicity, we will prove this result in the case where f ∈ L¹(A, µ_A). First let f be a simple function and write f = ∑_{j=1}^n c_j 1_{A_j}, where each c_j ∈ Rd. We can assume, without loss of generality, that the A_j's are disjoint Borel subsets of A.
  • 100. By Theorem 5, we find that E[exp(i(u, ∫_A f(x) N(t, dx)))] = E[exp(i(u, ∑_{j=1}^n c_j N(t, A_j)))] = ∏_{j=1}^n E[exp(i(u, c_j N(t, A_j)))] = ∏_{j=1}^n exp(t(e^{i(u,c_j)} − 1) µ(A_j)) = exp(t ∫_A (e^{i(u,f(x))} − 1) µ(dx)).
  • 104. Now for an arbitrary f ∈ L¹(A, µ_A), we can find a sequence of simple functions converging to f in L¹, and hence a subsequence which converges to f almost surely. Passing to the limit along this subsequence in the above yields the required result, via dominated convergence. (2) and (3) follow from (1) by differentiation. □
  • 107. It follows from Theorem 6 (2) that a Poisson integral will fail to have a finite mean if f ∉ L¹(A, µ). For each f ∈ L¹(A, µ_A), t ≥ 0, we define the compensated Poisson integral by ∫_A f(x) Ñ(t, dx) = ∫_A f(x) N(t, dx) − t ∫_A f(x) µ(dx). A straightforward argument shows that (∫_A f(x) Ñ(t, dx), t ≥ 0) is a martingale, and we will use this fact extensively in the sequel.
  • 110. Note that by Theorem 6 (2) and (3), we can easily deduce the following two important facts: E[exp(i(u, ∫_A f(x) Ñ(t, dx)))] = exp(t ∫_{Rd} (e^{i(u,x)} − 1 − i(u, x)) µ_{f,A}(dx)), (0.3) for each u ∈ Rd, and, for f ∈ L²(A, µ_A), E(|∫_A f(x) Ñ(t, dx)|²) = t ∫_A |f(x)|² µ(dx). (0.4)
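The isometry (0.4) can be checked in the same illustrative setting as before. The sketch below (again an added example with µ_A = λ·Uniform[1, 2] on A = [1, 2], an arbitrary choice) subtracts the compensator t∫_A f dµ and confirms that the result has mean approximately 0, as a martingale started at 0 must, and second moment approximately t∫_A |f|² dµ.

```python
# Check of (0.4): E|int_A f dN~(t)|^2 = t int_A |f|^2 dmu, where
# int_A f dN~(t) = int_A f dN(t) - t int_A f dmu is the compensated integral.
# Same illustrative setup: mu on A = [1, 2] is taken to be lam * Uniform[1, 2].
import numpy as np

rng = np.random.default_rng(5)
lam, t, n_paths = 2.0, 3.0, 100_000
f = np.sin

xs = np.linspace(1.0, 2.0, 100_001)
dx = xs[1] - xs[0]
compensator = t * lam * np.sum(f(xs)) * dx           # t int_A f dmu

vals = np.empty(n_paths)
for i, k in enumerate(rng.poisson(lam * t, size=n_paths)):
    vals[i] = f(rng.uniform(1.0, 2.0, size=k)).sum() - compensator

print(vals.mean())                                    # ~ 0: mean-zero martingale
print((vals**2).mean(), t * lam * np.sum(f(xs)**2) * dx)   # both sides of (0.4)
```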
  • 112. Processes of Finite Variation We begin by introducing a useful class of functions. Let P = {a = t1 < t2 < · · · < tn < tn+1 = b} be a partition of the interval [a, b] in R, and define its mesh to be δ = max1≤i≤n |ti+1 − ti |. We define the variation VarP (g) of a càdlàg mapping g : [a, b] → Rd over the partition P by the prescription n VarP (g) = |g(ti+1 ) − g(ti )|. i=1 Dave Applebaum (Sheffield UK) Lecture 3 December 2011 28 / 44
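The definition translates directly into code; the following sketch (illustrative, with hypothetical test functions) evaluates Var_P(g) on a given partition.

```python
import numpy as np

def var_over_partition(g, partition):
    """Compute Var_P(g) = sum_i |g(t_{i+1}) - g(t_i)| over the partition points."""
    values = np.array([g(s) for s in partition])
    return np.sum(np.abs(np.diff(values)))

mesh = np.linspace(0.0, 1.0, 1001)                        # a fine partition of [0, 1]
print(var_over_partition(lambda s: s, mesh))              # 1.0: monotone case
print(var_over_partition(lambda s: np.sin(8 * s), mesh))  # ~ int_0^1 |8 cos(8s)| ds
```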
If \(V(g) = \sup_{\mathcal{P}} \mathrm{Var}_{\mathcal{P}}(g) < \infty\), we say that g has finite variation on [a, b]. If g is defined on the whole of ℝ (or ℝ⁺), it is said to have finite variation if it has finite variation on each compact interval. It is a trivial observation that every non-decreasing g is of finite variation. Conversely, if g is of finite variation, then it can always be written as the difference of two non-decreasing functions; to see this, just write
\[
g = \frac{V(g) + g}{2} - \frac{V(g) - g}{2},
\]
where V(g)(t) is the variation of g on [a, t].
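The Jordan decomposition above is easy to verify on a sampled path; the sketch below (our illustration, with a random walk standing in for g on a grid) builds V(g)(t) cumulatively and confirms both halves are non-decreasing and recover g.

```python
import numpy as np

rng = np.random.default_rng(2)
g = np.cumsum(0.05 * rng.normal(size=1000))   # a sampled path g on a grid

V = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(g)))])  # running variation V(g)(t)
up, down = (V + g) / 2, (V - g) / 2           # the two candidate monotone parts

assert np.all(np.diff(up) >= -1e-12)          # (V + g)/2 is non-decreasing
assert np.all(np.diff(down) >= -1e-12)        # (V - g)/2 is non-decreasing
assert np.allclose(g, up - down)              # and their difference is g
```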
Functions of finite variation are important in integration: if we are given a function g which we propose as an integrator, then as a minimum we will want to be able to define the Stieltjes integral \(\int_I f\,dg\) for all continuous functions f (where I is some finite interval). In fact, a necessary and sufficient condition for obtaining such an integral as a limit of Riemann sums is that g has finite variation.

A stochastic process (X(t), t ≥ 0) is of finite variation if the paths (X(t)(ω), t ≥ 0) are of finite variation for almost all ω ∈ Ω.
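As a quick illustration of the Riemann–Stieltjes limit (a sketch under our own choice of f and g, not from the slides), a left-point Riemann sum against a finite-variation integrator converges as the mesh shrinks:

```python
import numpy as np

def stieltjes_sum(f, g, grid):
    """Left-point Riemann-Stieltjes sum: sum_i f(t_i) (g(t_{i+1}) - g(t_i))."""
    s = np.asarray(grid)
    return np.sum(f(s[:-1]) * np.diff(g(s)))

grid = np.linspace(0.0, 1.0, 100_001)
# dg = 2s ds, so the limit is int_0^1 s * 2s ds = 2/3
print(stieltjes_sum(lambda s: s, lambda s: s ** 2, grid))
```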
The following is an important example for us.

Example (Poisson Integrals)
Let N be a Poisson random measure with intensity measure µ and let f : ℝ^d → ℝ^d be Borel measurable. For A bounded below, let Y = (Y(t), t ≥ 0) be given by \(Y(t) = \int_A f(x)\,N(t,dx)\); then Y is of finite variation on [0, t] for each t ≥ 0. To see this, we observe that for all partitions \(\mathcal{P}\) of [0, t], we have
\[
\mathrm{Var}_{\mathcal{P}}(Y) \le \sum_{0 \le s \le t} |f(\Delta X(s))|\,\mathbf{1}_A(\Delta X(s)) < \infty \quad \text{a.s.,} \tag{0.5}
\]
where \(X(t) = \int_A x\,N(t,dx)\) for each t ≥ 0.
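The bound (0.5) is visible in simulation: since Y only moves when ΔX lands in A, its variation is at most the (a.s. finite) sum of the |f(jumps)|. Below is a sketch with hypothetical choices of A, µ and f.

```python
import numpy as np

rng = np.random.default_rng(3)
t, lam = 10.0, 2.0                        # horizon and mu(A), for A = [1, 3] say
n = rng.poisson(lam * t)                  # number of jumps landing in A by time t
jump_times = np.sort(rng.uniform(0.0, t, n))   # the times at which Y moves
jump_sizes = rng.uniform(1.0, 3.0, n)     # the Delta X(s) values lying in A

f = lambda x: x * np.sin(x)               # any Borel measurable f
bound = np.abs(f(jump_sizes)).sum()       # right-hand side of (0.5)
print(n, bound)                           # finitely many jumps => finite variation
```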
In fact, a necessary and sufficient condition for a Lévy process to be of finite variation is that there is no Brownian part (i.e. a = 0 in the Lévy-Khinchine formula), and \(\int_{|x|<1} |x|\,\nu(dx) < \infty\).
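For instance (a standard example, not taken from the slides), a one-dimensional α-stable process has a = 0 and ν(dx) = c|x|^{-1-α} dx, so
\[
\int_{|x|<1} |x|\,\nu(dx) = 2c\int_0^1 x^{-\alpha}\,dx,
\]
which is finite if and only if α < 1: stable processes have finite variation precisely when α < 1.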
The Lévy-Itô Decomposition

This is the key result of this lecture. First, note that for A bounded below, for each t ≥ 0,
\[
\int_A x\,N(t,dx) = \sum_{0 \le u \le t} \Delta X(u)\,\mathbf{1}_A(\Delta X(u))
\]
is the sum of all the jumps taking values in the set A up to the time t. Since the paths of X are càdlàg, this is clearly a finite random sum. In particular, \(\int_{|x| \ge 1} x\,N(t,dx)\) is the sum of all jumps of size bigger than one. It is a compound Poisson process, has finite variation, but may have no finite moments.
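As a quick instance of the last point (our example): take d = 1 and ν(dx) = |x|^{-2} dx on {|x| ≥ 1}. Then
\[
\nu(\{|x| \ge 1\}) = \int_{|x|\ge 1} |x|^{-2}\,dx = 2 < \infty,
\qquad
\int_{|x|\ge 1} |x|\,\nu(dx) = \int_{|x|\ge 1} |x|^{-1}\,dx = \infty,
\]
so the big-jump part is a genuine compound Poisson process whose individual jumps already fail to be integrable.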
On the other hand, it can be shown that \(X(t) - \int_{|x|\ge 1} x\,N(t,dx)\) is a Lévy process having finite moments to all orders. Now let's turn our attention to the small jumps. We study compensated integrals, which we know are martingales. Introduce the notation
\[
M(t, A) := \int_A x\,\tilde{N}(t,dx)
\]
for t ≥ 0 and A bounded below. For each m ∈ ℕ, let
\[
B_m = \left\{x \in \mathbb{R}^d : \frac{1}{m+1} < |x| \le \frac{1}{m}\right\},
\]
and for each n ∈ ℕ, let \(A_n = \bigcup_{m=1}^{n} B_m\).
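Note that A_n = {x : 1/(n+1) < |x| ≤ 1}. The point of the shells is that ν may pile up infinite mass near the origin while the second moment over A_n stays bounded; the sketch below (illustrative only, with an α-stable density standing in for the intensity measure, and using scipy for the quadrature) shows both effects.

```python
from scipy.integrate import quad

alpha = 1.5
nu = lambda x: x ** (-1.0 - alpha)        # alpha-stable density on (0, 1], one side

for n in (1, 2, 5, 10, 100):
    lo = 1.0 / (n + 1)                    # A_n = {1/(n+1) < |x| <= 1}
    mass, _ = quad(lambda x: 2 * nu(x), lo, 1.0)            # nu(A_n): diverges
    second, _ = quad(lambda x: 2 * x * x * nu(x), lo, 1.0)  # int_{A_n} |x|^2 nu(dx)
    print(n, round(mass, 2), round(second, 4))    # mass grows, second moment converges
```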
Define
\[
\int_{|x|<1} x\,\tilde{N}(t,dx) := L^2\text{-}\lim_{n \to \infty} M(t, A_n),
\]
which is a martingale. Moreover, on taking limits in (0.3), we get
\[
E\left[\exp\left(i\left(u, \int_{|x|<1} x\,\tilde{N}(t,dx)\right)\right)\right]
= \exp\left(t\int_{|x|<1}\left(e^{i(u,x)} - 1 - i(u,x)\right)\mu(dx)\right).
\]
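To fill in the step behind this definition: for n > m, M(t, A_n) − M(t, A_m) is the compensated integral over A_n \ A_m = {1/(n+1) < |x| ≤ 1/(m+1)}, so by the isometry (0.4),
\[
E\left(|M(t, A_n) - M(t, A_m)|^2\right) = t\int_{\frac{1}{n+1} < |x| \le \frac{1}{m+1}} |x|^2\,\mu(dx) \to 0 \quad \text{as } m, n \to \infty,
\]
since \(\int_{|x|<1} |x|^2\,\mu(dx) < \infty\) for the intensity measure of a Lévy process. Hence (M(t, A_n), n ∈ ℕ) is Cauchy in L², and the limit is well defined.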