Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 4: Stochastic Integration and Itô's Formula

                                   David Applebaum

School of Mathematics and Statistics, University of Sheffield, UK

8th December 2011




Stochastic Integration


We next give a rather rapid account of stochastic integration in a form
suitable for application to Lévy processes.
Let X = M + C be a semimartingale. The problem of stochastic
integration is to make sense of objects of the form
$$\int_0^t F(s)\,dX(s) := \int_0^t F(s)\,dM(s) + \int_0^t F(s)\,dC(s).$$


The second integral can be well-defined using the usual
Lebesgue-Stieltjes approach. The first one cannot: indeed, if M is a
continuous martingale of finite variation, then M is a.s. constant.




Refer to the martingale part of the Lévy-Itô decomposition. Define a
"martingale-valued measure" by

$$M(t, E) = B(t)\delta_0(E) + \tilde{N}(t, E - \{0\}),$$

for E ∈ B(R^d), where B = (B(t), t ≥ 0) is a one-dimensional Brownian
motion. The following key properties then hold:

- M((s, t], E) = M(t, E) − M(s, E) is independent of F_s, for 0 ≤ s < t < ∞.
- E(M((s, t], E)) = 0.
- E(M((s, t], E)^2) = ρ((s, t], E), where ρ((s, t], E) = (t − s)(δ_0(E) + ν(E − {0})).




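These moment properties are easy to check by simulation when E contains 0 together with a jump set A of finite ν-measure. The sketch below is our own illustration, not part of the lecture; the rate ν(A) = 4 and the time points are arbitrary choices. It samples M((s, t], E) as a Brownian increment plus a compensated Poisson jump count and compares the empirical variance with ρ((s, t], E) = (t − s)(1 + ν(A)).

```python
import numpy as np

# Monte Carlo sketch: M((s,t], E) = (B(t) - B(s)) + Ñ((s,t], A), where
# Ñ((s,t], A) = N((s,t], A) - (t - s)ν(A) is the compensated jump count.
# Its mean should be ≈ 0 and its variance ≈ (t - s)(δ_0(E) + ν(A)).

rng = np.random.default_rng(2)
s, t, nu_A, n = 0.3, 1.0, 4.0, 500_000   # illustrative values only

brownian = rng.normal(0.0, np.sqrt(t - s), n)                  # B(t) - B(s)
compensated = rng.poisson(nu_A * (t - s), n) - nu_A * (t - s)  # Ñ((s,t], A)

M = brownian + compensated
print(M.mean(), M.var())        # ≈ 0 and ≈ (t - s)(1 + ν(A)) = 3.5
```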
We're going to unify the usual stochastic integral with the Poisson
integral, by defining

$$\int_0^t \int_E F(s, x)\,M(ds, dx) := \int_0^t G(s)\,dB(s) + \int_0^t \int_{E - \{0\}} F(s, x)\,\tilde{N}(ds, dx),$$

where G(s) = F(s, 0). Of course, we need some conditions on the
class of integrands.

Fix E ∈ B(R^d) and 0 < T < ∞ and let P denote the smallest σ-algebra
with respect to which all mappings F : [0, T] × E × Ω → R satisfying (1)
and (2) below are measurable.

1. For each 0 ≤ t ≤ T, the mapping (x, ω) → F(t, x, ω) is B(E) ⊗ F_t measurable.
2. For each x ∈ E, ω ∈ Ω, the mapping t → F(t, x, ω) is left continuous.


We call P the predictable σ-algebra. A P-measurable mapping
G : [0, T] × E × Ω → R is then said to be predictable. The definition
extends naturally to the case where [0, T] is replaced by R_+.
Note that by (1), if G is predictable then the process t → G(t, x, ·) is
adapted, for each x ∈ E. If G satisfies (1) and is left continuous, then it
is clearly predictable.




Define H_2(T, E) to be the linear space of all equivalence classes of
mappings F : [0, T] × E × Ω → R which coincide almost everywhere
with respect to ρ × P and which satisfy the following conditions:

- F is predictable,
- $$\int_0^T \int_E E(|F(t, x)|^2)\,\rho(dt, dx) < \infty.$$

It can be shown that H_2(T, E) is a real Hilbert space with respect to
the inner product

$$\langle F, G \rangle_{T,\rho} = \int_0^T \int_E E\big(F(t, x)\,G(t, x)\big)\,\rho(dt, dx).$$




Define S(T, E) to be the linear space of all simple processes in
H_2(T, E), where F is simple if for some m, n ∈ N there exist
0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_{m+1} = T and a family of disjoint Borel
subsets A_1, A_2, ..., A_n of E with each ν(A_i) < ∞ such that

$$F = \sum_{j=1}^{m} \sum_{k=1}^{n} F_{j,k}\, 1_{(t_j, t_{j+1}]}\, 1_{A_k},$$

where each F_{j,k} is a bounded F_{t_j}-measurable random variable. Note
that F is left continuous and B(E) ⊗ F_t measurable, hence it is
predictable. An important fact is that S(T, E) is dense in H_2(T, E).




One of Itô's greatest achievements was the definition of the stochastic
integral I_T(F), for F simple, by separating the "past" from the "future"
within the Riemann sum:

$$I_T(F) = \sum_{j=1}^{m} \sum_{k=1}^{n} F_{j,k}\, M((t_j, t_{j+1}], A_k), \qquad (0.1)$$

so on each time interval [t_j, t_{j+1}], F_{j,k} encapsulates information
obtained by time t_j, while M gives the innovation into the future (t_j, t_{j+1}].




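The effect of evaluating the integrand at the left endpoint of each interval, so that F_{j,k} really is known by time t_j, can be seen in a small simulation. This sketch is our own illustration, not part of the lecture: approximating the integral of B against dB with left-endpoint sums converges to (B(T)^2 − T)/2, while midpoint sums, which peek into the future, converge to B(T)^2/2.

```python
import numpy as np

# Left-endpoint (Itô) versus midpoint sums for the integral of B against dB.
# Only the left-endpoint rule respects the "past vs future" separation.

rng = np.random.default_rng(3)
T, m = 1.0, 100_000
dt = T / m
dB = rng.normal(0.0, np.sqrt(dt), m)          # Brownian increments
B = np.concatenate([[0.0], np.cumsum(dB)])    # B(t_1), ..., B(t_{m+1})

ito = np.sum(B[:-1] * dB)                     # integrand frozen at time t_j
midpoint = np.sum(0.5 * (B[:-1] + B[1:]) * dB)

print(ito, (B[-1]**2 - T) / 2)                # these two agree (Itô)
print(midpoint, B[-1]**2 / 2)                 # and these two (Stratonovich)
```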
Lemma
For each T ≥ 0, F ∈ S(T, E),

$$E(I_T(F)) = 0, \qquad E(I_T(F)^2) = \int_0^T \int_E E(|F(t, x)|^2)\,\rho(dt, dx).$$

Proof. E(I_T(F)) = 0 is a straightforward application of linearity and
independence. The second result is quite messy; we lose nothing
important by just looking at the Brownian case, with d = 1.




So let $F(t) := \sum_{j=1}^{m} F_j 1_{(t_j, t_{j+1}]}$; then $I_T(F) = \sum_{j=1}^{m} F_j (B(t_{j+1}) - B(t_j))$, and

$$I_T(F)^2 = \sum_{j=1}^{m} \sum_{p=1}^{m} F_j F_p (B(t_{j+1}) - B(t_j))(B(t_{p+1}) - B(t_p)).$$

Now fix j and split the second sum into three pieces, corresponding to
p < j, p = j and p > j.




When p < j, F_j F_p (B(t_{p+1}) − B(t_p)) is F_{t_j}-measurable, hence independent of
B(t_{j+1}) − B(t_j), so

$$E[F_j F_p (B(t_{j+1}) - B(t_j))(B(t_{p+1}) - B(t_p))] = E[F_j F_p (B(t_{p+1}) - B(t_p))]\, E(B(t_{j+1}) - B(t_j)) = 0.$$

Exactly the same argument works when p > j.




What remains is the case p = j, and by independence again

$$E(I_T(F)^2) = \sum_{j=1}^{m} E(F_j^2)\, E\big((B(t_{j+1}) - B(t_j))^2\big) = \sum_{j=1}^{m} E(F_j^2)(t_{j+1} - t_j). \qquad \Box$$

We deduce from Lemma 1 that I_T is a linear isometry from S(T, E) into
L^2(Ω, F, P), and hence it extends to an isometric embedding of the
whole of H_2(T, E) into L^2(Ω, F, P). We continue to denote this
extension as I_T and we call I_T(F) the (Itô) stochastic integral of
F ∈ H_2(T, E). When convenient, we will use the Leibniz notation
$I_T(F) := \int_0^T \int_E F(t, x)\,M(dt, dx)$.



So for predictable F satisfying $\int_0^T \int_E E(|F(s, x)|^2)\,\rho(ds, dx) < \infty$, we
can find a sequence (F_n, n ∈ N) of simple processes such that

$$\int_0^T \int_E F(t, x)\,M(dt, dx) = \lim_{n \to \infty} \int_0^T \int_E F_n(t, x)\,M(dt, dx).$$

The limit is taken in the L^2-sense and is independent of the choice of
approximating sequence.




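The isometry E(I_T(F)^2) = ∫∫ E(|F(t, x)|^2) ρ(dt, dx) lends itself to a numerical check. The sketch below is ours, with the hypothetical integrand F(t) = B(t), which is adapted and left continuous; it compares a Monte Carlo estimate of E(I_T(F)^2) with the sum Σ_j E(F_j^2)(t_{j+1} − t_j), both of which should be near T^2/2.

```python
import numpy as np

# Monte Carlo check of the Itô isometry for the Brownian part, using
# left-endpoint Riemann sums with F_j = B(t_j), which is F_{t_j}-measurable.

rng = np.random.default_rng(0)
T, m, n_paths = 1.0, 50, 200_000
t = np.linspace(0.0, T, m + 1)
dt = np.diff(t)

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, m))      # increments per path
B_left = np.hstack([np.zeros((n_paths, 1)),
                    np.cumsum(dB, axis=1)[:, :-1]])       # B(t_j), the "past"

I_T = np.sum(B_left * dB, axis=1)                         # I_T(F), one per path

lhs = np.mean(I_T**2)                                     # E(I_T(F)^2)
rhs = np.sum(np.mean(B_left**2, axis=0) * dt)             # Σ_j E(F_j^2) Δt_j
print(lhs, rhs)                                           # both ≈ T^2/2 = 0.5
```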
The following theorem summarises some useful properties of the
stochastic integral.

Theorem
If F, G ∈ H_2(T, E) and α, β ∈ R, then:
1. I_T(αF + βG) = αI_T(F) + βI_T(G).
2. $E(I_T(F)) = 0, \qquad E(I_T(F)^2) = \int_0^T \int_E E(|F(t, x)|^2)\,\rho(dt, dx).$
3. (I_t(F), t ≥ 0) is F_t-adapted.
4. (I_t(F), t ≥ 0) is a square-integrable martingale.




Proof. (1) and (2) are easy.
For (3), let (F_n, n ∈ N) be a sequence in S(T, E) converging to F; then
each process (I_t(F_n), t ≥ 0) is clearly adapted. Since
I_t(F_n) → I_t(F) in L^2 as n → ∞, we can find a subsequence
(F_{n_k}, k ∈ N) such that I_t(F_{n_k}) → I_t(F) a.s. as k → ∞, and the
required result follows.




(4) Let F ∈ S(T, E) and (without loss of generality) choose
0 < s = t_l < t_{l+1} < t. Then it is easy to see that I_t(F) = I_s(F) + I_{s,t}(F),
and hence E_s(I_t(F)) = I_s(F) + E_s(I_{s,t}(F)) by (3). However,

$$E_s(I_{s,t}(F)) = E_s\left( \sum_{j=l+1}^{m} \sum_{k=1}^{n} F_{j,k}\, M((t_j, t_{j+1}], A_k) \right) = \sum_{j=l+1}^{m} \sum_{k=1}^{n} E_s(F_{j,k})\, E(M((t_j, t_{j+1}], A_k)) = 0.$$

The result now follows by the continuity of E_s in L^2. Indeed, let
(F_n, n ∈ N) be a sequence in S(T, E) converging to F; then we have

$$\|E_s(I_t(F)) - E_s(I_t(F_n))\|_2 \le \|I_t(F) - I_t(F_n)\|_2 = \|F - F_n\|_{T,\rho} \to 0 \text{ as } n \to \infty. \qquad \Box$$


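The martingale property can also be probed numerically through orthogonal increments: since E_s(I_t(F)) = I_s(F), the increment I_t − I_s is uncorrelated with I_s. The sketch below is our own illustration (F(t) = B(t) again, with s and t picked arbitrarily on a grid), not part of the lecture.

```python
import numpy as np

# E(I_s (I_t - I_s)) should vanish for the martingale (I_t(F), t ≥ 0).

rng = np.random.default_rng(5)
s_idx, m, n_paths = 25, 50, 200_000          # s = t_25 = 0.5, t = t_50 = 1.0
dt = 1.0 / m

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, m))
B_left = np.hstack([np.zeros((n_paths, 1)),
                    np.cumsum(dB, axis=1)[:, :-1]])
terms = B_left * dB                          # F_j (B(t_{j+1}) - B(t_j))

I_s = np.sum(terms[:, :s_idx], axis=1)       # integral up to time s
I_t = np.sum(terms, axis=1)                  # integral up to time t
print(np.mean(I_s * (I_t - I_s)))            # ≈ 0
```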
We can extend the stochastic integral I_T(F) to integrands in
P_2(T, E). This is the linear space of all equivalence classes of
mappings F : [0, T] × E × Ω → R which coincide almost everywhere
with respect to ρ × P, and which satisfy the following conditions:

- F is predictable.
- $P\left( \int_0^T \int_E |F(t, x)|^2\,\rho(dt, dx) < \infty \right) = 1.$

If F ∈ P_2(T, E), (I_t(F), t ≥ 0) is always a local martingale, but not
necessarily a martingale.




Poisson Stochastic Integrals

Let A be an arbitrary Borel set in R^d − {0} which is bounded below,
and introduce the compound Poisson process P = (P(t), t ≥ 0), where
each $P(t) = \int_A x\,N(t, dx)$. Let K be a predictable mapping; then,
generalising the earlier Poisson integrals, we define

$$\int_0^T \int_A K(t, x)\,N(dt, dx) = \sum_{0 \le u \le T} K(u, \Delta P(u))\, 1_A(\Delta P(u)), \qquad (0.2)$$

as a random finite sum.




 Dave Applebaum (Sheffield UK)              Lecture 4                December 2011    18 / 58
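Formula (0.2) is directly computable once a path of P is simulated: one simply sums K over the jumps. The following is a minimal sketch, assuming a finite jump intensity λ = ν(A), jump sizes drawn uniformly from A = [1, 2], and a deterministic integrand; all names and numerical choices are illustrative, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_stochastic_integral(K, T, lam, jump_sampler):
    """Evaluate (0.2) pathwise: sum K(u, dP(u)) over the jump times u <= T of a
    compound Poisson process with rate lam = nu(A) and jump law jump_sampler."""
    n_jumps = rng.poisson(lam * T)                    # number of jumps in [0, T]
    times = np.sort(rng.uniform(0.0, T, n_jumps))     # jump times, given the count
    jumps = jump_sampler(n_jumps)                     # jump sizes, all lying in A
    return sum(K(u, x) for u, x in zip(times, jumps))

# Illustrative choices: A = [1, 2], K(t, x) = t * x, nu(A) = 3.
print(poisson_stochastic_integral(lambda t, x: t * x, T=1.0, lam=3.0,
                                  jump_sampler=lambda n: rng.uniform(1.0, 2.0, n)))
```
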
In particular, if H satisfies the square-integrability condition given above, we may then define, for each 1 ≤ i ≤ d,

$$\int_0^T \int_A H^i(t,x)\,\tilde{N}(dt,dx) := \int_0^T \int_A H^i(t,x)\,N(dt,dx) - \int_0^T \int_A H^i(t,x)\,\nu(dx)\,dt.$$

The definition (0.2) can, in principle, be used to define stochastic integrals for a more general class of integrands than we have been considering. For simplicity, let N = (N(t), t ≥ 0) be a Poisson process of intensity 1 and let f : R → R; then we may define

$$\int_0^t f(N(s))\,dN(s) = \sum_{0 \le s \le t} f(N(s-) + \Delta N(s))\,\Delta N(s).$$

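For a unit-jump Poisson process this sum is especially concrete: every jump has size 1, and just after the k-th jump we have N(s) = k, so the integral collapses to Σ_{k=1}^{N(t)} f(k). A minimal sketch (the choice of f and the intensity are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_path_integral(f, t, intensity=1.0):
    """Pathwise value of the integral of f(N(s)) dN(s) over [0, t]: since each jump
    has size 1 and N(s) = k at the k-th jump, this equals sum of f(k), k = 1..N(t)."""
    n_t = rng.poisson(intensity * t)   # N(t), the number of jumps in [0, t]
    return sum(f(k) for k in range(1, n_t + 1))

print(poisson_path_integral(lambda n: n ** 2, t=5.0))
```
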
Lévy-type stochastic integrals

We take E = B̂ − {0} = {x ∈ R^d : 0 < |x| < 1} throughout this subsection. We say that an R^d-valued stochastic process Y = (Y(t), t ≥ 0) is a Lévy-type stochastic integral if it can be written in the following form for each 1 ≤ i ≤ d, t ≥ 0:

$$Y^i(t) = Y^i(0) + \int_0^t G^i(s)\,ds + \int_0^t F^i_j(s)\,dB^j(s) + \int_0^t \int_{|x|<1} H^i(s,x)\,\tilde{N}(ds,dx) + \int_0^t \int_{|x| \ge 1} K^i(s,x)\,N(ds,dx),$$

where, for each 1 ≤ i ≤ d, 1 ≤ j ≤ m, t ≥ 0, |G^i|^{1/2}, F^i_j ∈ P_2(T), H^i ∈ P_2(T, E) and K is predictable. B is an m-dimensional standard Brownian motion and N is an independent Poisson random measure on R^+ × (R^d − {0}) with compensator Ñ and intensity measure ν, which we will assume is a Lévy measure.

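A Lévy-type stochastic integral can be simulated term by term. The sketch below takes d = m = 1 and, purely for tractability, a finite intensity measure, so both small and large jumps arrive at finite Poisson rates and the small-jump compensator ∫_{|x|<1} H(t,x) ν(dx) can be approximated by quadrature; the rates, jump laws and quadrature grid are illustrative assumptions, not part of the general definition.

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_type_integral(G, F, H, K, T=1.0, n=2000, nu_small=2.0, nu_big=0.5):
    """Euler sketch of dY = G dt + F dB + H dN~ (|x|<1) + K dN (|x|>=1), d = m = 1.
    Small jumps: rate nu_small, sizes uniform on (0, 1), compensated explicitly.
    Large jumps: rate nu_big, sizes uniform on [1, 2], uncompensated."""
    dt = T / n
    xs = np.linspace(0.05, 0.95, 10)   # midpoint grid for the compensator integral
    Y = 0.0
    for step in range(n):
        t = step * dt
        Y += G(t) * dt + F(t) * rng.normal(0.0, np.sqrt(dt))   # drift + Brownian part
        for _ in range(rng.poisson(nu_small * dt)):            # small-jump part of N~ ...
            Y += H(t, rng.uniform(0.0, 1.0))
        Y -= nu_small * np.mean([H(t, x) for x in xs]) * dt    # ... minus its compensator
        for _ in range(rng.poisson(nu_big * dt)):              # large jumps against N
            Y += K(t, rng.uniform(1.0, 2.0))
    return Y

print(levy_type_integral(G=lambda t: 0.1, F=lambda t: 0.3,
                         H=lambda t, x: t * x, K=lambda t, x: x))
```
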
We will assume that the random variable Y(0) is F_0-measurable, and then it is clear that Y is an adapted process.
We can often simplify complicated expressions by employing the notation of stochastic differentials to represent Lévy-type stochastic integrals. We then write the last expression as

$$dY(t) = G(t)\,dt + F(t)\,dB(t) + H(t,x)\,\tilde{N}(dt,dx) + K(t,x)\,N(dt,dx).$$

When we want to particularly emphasise the domains of integration with respect to x, we will use the equivalent notation

$$dY(t) = G(t)\,dt + F(t)\,dB(t) + \int_{|x|<1} H(t,x)\,\tilde{N}(dt,dx) + \int_{|x| \ge 1} K(t,x)\,N(dt,dx).$$

Clearly Y is a semimartingale.

Let M = (M(t), t ≥ 0) be an adapted process which is such that MJ ∈ P_2(t, A) whenever J ∈ P_2(t, A) (where A ∈ B(R^d) is arbitrary). For example, it is sufficient to take M to be adapted and left-continuous.
For these processes we can define an adapted process Z = (Z(t), t ≥ 0) by the prescription that it have the stochastic differential

$$dZ(t) = M(t)G(t)\,dt + M(t)F(t)\,dB(t) + M(t)H(t,x)\,\tilde{N}(dt,dx) + M(t)K(t,x)\,N(dt,dx),$$

and we will adopt the natural notation

$$dZ(t) = M(t)\,dY(t).$$

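Pathwise, dZ(t) = M(t) dY(t) amounts to weighting each increment of Y by the value of M at the left endpoint; left-continuity of M is exactly what makes this left-point evaluation the natural one. A grid-based sketch (the function and array names are illustrative):

```python
import numpy as np

def integrate_against(M, Y):
    """Left-point approximation of Z(T) = integral of M(t) dY(t) over [0, T], from
    paths sampled on a common grid t_0 < ... < t_n:
    Z is approximately sum_j M(t_j) * (Y(t_{j+1}) - Y(t_j))."""
    M = np.asarray(M, dtype=float)
    Y = np.asarray(Y, dtype=float)
    return float(np.sum(M[:-1] * np.diff(Y)))

# Toy paths on a 4-point grid.
print(integrate_against([1.0, 2.0, 0.5, 3.0], [0.0, 1.0, 0.5, 2.0]))  # 1*1 + 2*(-0.5) + 0.5*1.5 = 0.75
```
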
Example (Lévy Stochastic Integrals)
Let X be a Lévy process with characteristics (b, a, ν) and Lévy-Itô decomposition

$$X(t) = bt + B_a(t) + \int_{|x|<1} x\,\tilde{N}(t,dx) + \int_{|x| \ge 1} x\,N(t,dx)$$

for each t ≥ 0. Let L ∈ P_2(t) for all t ≥ 0 and choose each F^i_j = a^i_j L, H^i = K^i = x^i L. Then we can construct processes with the stochastic differential

$$dY(t) = L(t)\,dX(t). \qquad (0.3)$$

We call Y a Lévy stochastic integral.
In the case where X has finite variation, the Lévy stochastic integral Y can also be constructed as a Lebesgue-Stieltjes integral, and this coincides (up to a set of measure zero) with the prescription (0.3).

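A Lévy stochastic integral with dY = L dX can be approximated by feeding simulated increments of X into the left-point scheme. A minimal sketch, assuming a toy Lévy process with drift, a Brownian part and finite-activity Gaussian jumps (so small and large jumps can be lumped together and no compensation is needed); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def levy_stochastic_integral(L, T=1.0, n=2000, b=0.1, sigma=0.3, lam=1.0):
    """Left-point Euler sketch of Y(T) = integral of L(s) dX(s) over [0, T], where X
    has drift b, Brownian part sigma*B, and compound Poisson jumps (rate lam,
    N(0, 1)-distributed sizes)."""
    dt = T / n
    Y = 0.0
    for step in range(n):
        t = step * dt
        dX = b * dt + sigma * rng.normal(0.0, np.sqrt(dt))        # continuous part
        dX += rng.normal(0.0, 1.0, rng.poisson(lam * dt)).sum()   # jump part
        Y += L(t) * dX                                            # left-point rule
    return Y

print(levy_stochastic_integral(lambda t: np.exp(-t)))
```
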
Example: The Ornstein-Uhlenbeck Process (OU Process)

$$Y(t) = e^{-\lambda t} y_0 + \int_0^t e^{-\lambda(t-s)}\,dX(s), \qquad (0.4)$$

where y_0 ∈ R^d is fixed, is a Markov process. The condition

$$\int_{|x|>1} \log(1 + |x|)\,\nu(dx) < \infty$$

is necessary and sufficient for it to be stationary. There are important applications to volatility modelling in finance which have been developed by Ole Barndorff-Nielsen and Neil Shephard. Intriguingly, every self-decomposable random variable can be naturally embedded in a stationary Lévy-driven OU process.
See Lecture 5 for more details.

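Equation (0.4) is the solution of the Langevin equation dY(t) = −λY(t) dt + dX(t), so a path can be generated by an Euler recursion driven by simulated Lévy increments. A sketch under illustrative assumptions (X taken as a Brownian part plus compound Poisson jumps with Exp(1) sizes):

```python
import numpy as np

rng = np.random.default_rng(4)

def ou_path(lam=1.0, y0=1.0, T=10.0, n=10_000, sigma=0.2, jump_rate=0.5):
    """Euler scheme for the Lévy-driven OU process: Y_{k+1} = Y_k - lam*Y_k*dt + dX_k,
    with dX_k a Brownian increment plus Exp(1)-sized compound Poisson jumps."""
    dt = T / n
    Y = np.empty(n + 1)
    Y[0] = y0
    for k in range(n):
        dX = sigma * rng.normal(0.0, np.sqrt(dt))
        dX += rng.exponential(1.0, rng.poisson(jump_rate * dt)).sum()
        Y[k + 1] = Y[k] - lam * Y[k] * dt + dX
    return Y

print(ou_path()[-5:])   # for large t the path hovers around its stationary regime
```
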
Stochastic integration has a plethora of applications, including filtering, stochastic control and infinite-dimensional analysis. Here is some motivation from finance. Suppose that X(t) is the value of a stock at time t and F(t) is the number of stocks owned at time t. Assume for now that we buy and sell stocks at discrete times 0 = t_0 < t_1 < · · · < t_n < t_{n+1} = T. Then the total value of the portfolio at time T is

$$V(T) = V(0) + \sum_{j=0}^{n} F(t_j)\,(X(t_{j+1}) - X(t_j)),$$

and in the limit as the times become infinitesimally close together, we have the stochastic integral

$$V(T) = V(0) + \int_0^T F(s)\,dX(s).$$

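The discrete portfolio identity is the left-point sum once more: the holding F(t_j) is fixed before the price move over (t_j, t_{j+1}] is revealed, which is the financial meaning of predictability. A worked numerical instance (all figures invented for illustration):

```python
import numpy as np

def portfolio_value(V0, holdings, prices):
    """V(T) = V(0) + sum_j F(t_j) * (X(t_{j+1}) - X(t_j)); holdings[j] is the
    position chosen at t_j, before the move to t_{j+1} is observed."""
    F = np.asarray(holdings, dtype=float)
    X = np.asarray(prices, dtype=float)
    return float(V0 + np.sum(F * np.diff(X)))

# Hold 2 shares, then 3, then 1, across four observed prices.
print(portfolio_value(100.0, [2, 3, 1], [10.0, 10.5, 10.2, 11.0]))  # 100 + 1.0 - 0.9 + 0.8 = 100.9
```
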
Stochastic integration against Brownian motion was first developed by Wiener for sure (i.e. deterministic) functions. Itô's groundbreaking work in 1944 extended this to random adapted integrands. The generalisation of the integrator to arbitrary martingales was due to Kunita and Watanabe in 1967, and the further step to allow semimartingales was due to P. A. Meyer and the Strasbourg school in the 1970s.

Itô’s Formula



We begin with the easy case - Itô’s formula for Poisson stochastic
integrals of the form
                                                  t
                      W i (t) = W i (0) +                 K i (t, x)N(dt, dx)                   (0.5)
                                              0       A

for 1 ≤ i ≤ d, where t ≥ 0, A is bounded below and each K i is
predictable. Itô’s formula for such processes takes a particularly simple
form.




 Dave Applebaum (Sheffield UK)               Lecture 4                           December 2011    27 / 58
Itô’s Formula



We begin with the easy case - Itô’s formula for Poisson stochastic
integrals of the form
                                                  t
                      W i (t) = W i (0) +                 K i (t, x)N(dt, dx)                   (0.5)
                                              0       A

for 1 ≤ i ≤ d, where t ≥ 0, A is bounded below and each K i is
predictable. Itô’s formula for such processes takes a particularly simple
form.




 Dave Applebaum (Sheffield UK)               Lecture 4                           December 2011    27 / 58
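For a pure-jump process of the form (0.5), f(W) only changes at the jumps, so the formula reduces to the telescoping jump sum f(W(t)) − f(W(0)) = Σ_{0≤s≤t} [f(W(s−) + ΔW(s)) − f(W(s−))]. A quick pathwise sanity check of this identity, with a toy jump law (all numerical choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Pathwise check: f(W(t)) - f(W(0)) equals the sum of the jump increments of f(W).
f = lambda w: w ** 3
W0, t, lam = 1.0, 2.0, 4.0
jumps = rng.uniform(1.0, 2.0, rng.poisson(lam * t))   # the values dW(s), with A = [1, 2]

W, jump_sum = W0, 0.0
for dw in jumps:               # jump times are irrelevant to this telescoping identity
    jump_sum += f(W + dw) - f(W)
    W += dw

print(f(W) - f(W0), jump_sum)  # the two numbers agree up to float rounding
```
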

Weitere ähnliche Inhalte

Was ist angesagt?

On problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-objectOn problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-objectCemal Ardil
 
SLC 2015 talk improved version
SLC 2015 talk improved versionSLC 2015 talk improved version
SLC 2015 talk improved versionZheng Mengdi
 
Topic: Fourier Series ( Periodic Function to change of interval)
Topic: Fourier Series ( Periodic Function to  change of interval)Topic: Fourier Series ( Periodic Function to  change of interval)
Topic: Fourier Series ( Periodic Function to change of interval)Abhishek Choksi
 
Moment closure inference for stochastic kinetic models
Moment closure inference for stochastic kinetic modelsMoment closure inference for stochastic kinetic models
Moment closure inference for stochastic kinetic modelsColin Gillespie
 
On fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spaces
On fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spacesOn fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spaces
On fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spacesAlexander Decker
 
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)IJERA Editor
 
Phases of fuzzy field theories
Phases of fuzzy field theoriesPhases of fuzzy field theories
Phases of fuzzy field theoriesJuraj Tekel
 
Comparison Theorems for SDEs
Comparison Theorems for SDEs Comparison Theorems for SDEs
Comparison Theorems for SDEs Ilya Gikhman
 
Stochastic Schrödinger equations
Stochastic Schrödinger equationsStochastic Schrödinger equations
Stochastic Schrödinger equationsIlya Gikhman
 
Signal Processing Introduction using Fourier Transforms
Signal Processing Introduction using Fourier TransformsSignal Processing Introduction using Fourier Transforms
Signal Processing Introduction using Fourier TransformsArvind Devaraj
 
Polya recurrence
Polya recurrencePolya recurrence
Polya recurrenceBrian Burns
 
Physical Chemistry Assignment Help
Physical Chemistry Assignment HelpPhysical Chemistry Assignment Help
Physical Chemistry Assignment HelpEdu Assignment Help
 
Fourier series of odd functions with period 2 l
Fourier series of odd functions with period 2 lFourier series of odd functions with period 2 l
Fourier series of odd functions with period 2 lPepa Vidosa Serradilla
 
Free Ebooks Download ! Edhole
Free Ebooks Download ! EdholeFree Ebooks Download ! Edhole
Free Ebooks Download ! EdholeEdhole.com
 
Analysis and numerics of partial differential equations
Analysis and numerics of partial differential equationsAnalysis and numerics of partial differential equations
Analysis and numerics of partial differential equationsSpringer
 

Was ist angesagt? (20)

Kt2418201822
Kt2418201822Kt2418201822
Kt2418201822
 
On problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-objectOn problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-object
 
SLC 2015 talk improved version
SLC 2015 talk improved versionSLC 2015 talk improved version
SLC 2015 talk improved version
 
Topic: Fourier Series ( Periodic Function to change of interval)
Topic: Fourier Series ( Periodic Function to  change of interval)Topic: Fourier Series ( Periodic Function to  change of interval)
Topic: Fourier Series ( Periodic Function to change of interval)
 
Moment closure inference for stochastic kinetic models
Moment closure inference for stochastic kinetic modelsMoment closure inference for stochastic kinetic models
Moment closure inference for stochastic kinetic models
 
On fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spaces
On fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spacesOn fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spaces
On fixed point theorems in fuzzy 2 metric spaces and fuzzy 3-metric spaces
 
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
Fixed Point Results In Fuzzy Menger Space With Common Property (E.A.)
 
Phases of fuzzy field theories
Phases of fuzzy field theoriesPhases of fuzzy field theories
Phases of fuzzy field theories
 
Comparison Theorems for SDEs
Comparison Theorems for SDEs Comparison Theorems for SDEs
Comparison Theorems for SDEs
 
Hw4sol
Hw4solHw4sol
Hw4sol
 
Stochastic Schrödinger equations
Stochastic Schrödinger equationsStochastic Schrödinger equations
Stochastic Schrödinger equations
 
Signal Processing Introduction using Fourier Transforms
Signal Processing Introduction using Fourier TransformsSignal Processing Introduction using Fourier Transforms
Signal Processing Introduction using Fourier Transforms
 
Stochastic Assignment Help
Stochastic Assignment Help Stochastic Assignment Help
Stochastic Assignment Help
 
Polya recurrence
Polya recurrencePolya recurrence
Polya recurrence
 
Physical Chemistry Assignment Help
Physical Chemistry Assignment HelpPhysical Chemistry Assignment Help
Physical Chemistry Assignment Help
 
Fourier series of odd functions with period 2 l
Fourier series of odd functions with period 2 lFourier series of odd functions with period 2 l
Fourier series of odd functions with period 2 l
 
Free Ebooks Download ! Edhole
Free Ebooks Download ! EdholeFree Ebooks Download ! Edhole
Free Ebooks Download ! Edhole
 
hone_durham
hone_durhamhone_durham
hone_durham
 
Analysis and numerics of partial differential equations
Analysis and numerics of partial differential equationsAnalysis and numerics of partial differential equations
Analysis and numerics of partial differential equations
 
stochastic processes assignment help
stochastic processes assignment helpstochastic processes assignment help
stochastic processes assignment help
 

Andere mochten auch

Fund performance-management
Fund performance-managementFund performance-management
Fund performance-managementSerhat Yucel
 
Principles of-quantitative-investment-management
Principles of-quantitative-investment-managementPrinciples of-quantitative-investment-management
Principles of-quantitative-investment-managementSerhat Yucel
 
Probability, Random Variables and Stochastic Pocesses - 4th
Probability, Random Variables and Stochastic Pocesses - 4thProbability, Random Variables and Stochastic Pocesses - 4th
Probability, Random Variables and Stochastic Pocesses - 4thCHIH-PEI WEN
 

Andere mochten auch (7)

Fund performance-management
Fund performance-managementFund performance-management
Fund performance-management
 
Principles of-quantitative-investment-management
Principles of-quantitative-investment-managementPrinciples of-quantitative-investment-management
Principles of-quantitative-investment-management
 
Koc5(dba)
Koc5(dba)Koc5(dba)
Koc5(dba)
 
Koc3(dba)
Koc3(dba)Koc3(dba)
Koc3(dba)
 
Koc1(dba)
Koc1(dba)Koc1(dba)
Koc1(dba)
 
Probability, Random Variables and Stochastic Pocesses - 4th
Probability, Random Variables and Stochastic Pocesses - 4thProbability, Random Variables and Stochastic Pocesses - 4th
Probability, Random Variables and Stochastic Pocesses - 4th
 
Probability And Random Variable Lecture 1
Probability And Random Variable Lecture 1Probability And Random Variable Lecture 1
Probability And Random Variable Lecture 1
 

Ähnlich wie Koc4(dba)

11.[95 103]solution of telegraph equation by modified of double sumudu transf...
11.[95 103]solution of telegraph equation by modified of double sumudu transf...11.[95 103]solution of telegraph equation by modified of double sumudu transf...
11.[95 103]solution of telegraph equation by modified of double sumudu transf...Alexander Decker
 
Laplace transform
Laplace transformLaplace transform
Laplace transformjoni joy
 
MATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docx
MATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docxMATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docx
MATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docxandreecapon
 
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESNONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESTahia ZERIZER
 
Las funciones L en teoría de números
Las funciones L en teoría de númerosLas funciones L en teoría de números
Las funciones L en teoría de númerosmmasdeu
 
Levitan Centenary Conference Talk, June 27 2014
Levitan Centenary Conference Talk, June 27 2014Levitan Centenary Conference Talk, June 27 2014
Levitan Centenary Conference Talk, June 27 2014Nikita V. Artamonov
 
Fixed point theorems for four mappings in fuzzy metric space using implicit r...
Fixed point theorems for four mappings in fuzzy metric space using implicit r...Fixed point theorems for four mappings in fuzzy metric space using implicit r...
Fixed point theorems for four mappings in fuzzy metric space using implicit r...Alexander Decker
 
International Journal of Mathematics and Statistics Invention (IJMSI)
International Journal of Mathematics and Statistics Invention (IJMSI) International Journal of Mathematics and Statistics Invention (IJMSI)
International Journal of Mathematics and Statistics Invention (IJMSI) inventionjournals
 
Nonlinear perturbed difference equations
Nonlinear perturbed difference equationsNonlinear perturbed difference equations
Nonlinear perturbed difference equationsTahia ZERIZER
 
Differential equation & laplace transformation with matlab
Differential equation & laplace transformation with matlabDifferential equation & laplace transformation with matlab
Differential equation & laplace transformation with matlabRavi Jindal
 
Existence of Extremal Solutions of Second Order Initial Value Problems
Existence of Extremal Solutions of Second Order Initial Value ProblemsExistence of Extremal Solutions of Second Order Initial Value Problems
Existence of Extremal Solutions of Second Order Initial Value Problemsijtsrd
 
InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2Teng Li
 
Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...Alexander Decker
 
11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...Alexander Decker
 
Doering Savov
Doering SavovDoering Savov
Doering Savovgh
 
Tree distance algorithm
Tree distance algorithmTree distance algorithm
Tree distance algorithmTrector Rancor
 
Final Present Pap1on relibility
Final Present Pap1on relibilityFinal Present Pap1on relibility
Final Present Pap1on relibilityketan gajjar
 

Ähnlich wie Koc4(dba) (20)

11.[95 103]solution of telegraph equation by modified of double sumudu transf...
11.[95 103]solution of telegraph equation by modified of double sumudu transf...11.[95 103]solution of telegraph equation by modified of double sumudu transf...
11.[95 103]solution of telegraph equation by modified of double sumudu transf...
 
Laplace transform
Laplace transformLaplace transform
Laplace transform
 
MATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docx
MATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docxMATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docx
MATH 200-004 Multivariate Calculus Winter 2014Chapter 12.docx
 
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESNONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
 
Las funciones L en teoría de números
Las funciones L en teoría de númerosLas funciones L en teoría de números
Las funciones L en teoría de números
 
Levitan Centenary Conference Talk, June 27 2014
Levitan Centenary Conference Talk, June 27 2014Levitan Centenary Conference Talk, June 27 2014
Levitan Centenary Conference Talk, June 27 2014
 
Fixed point theorems for four mappings in fuzzy metric space using implicit r...
Fixed point theorems for four mappings in fuzzy metric space using implicit r...Fixed point theorems for four mappings in fuzzy metric space using implicit r...
Fixed point theorems for four mappings in fuzzy metric space using implicit r...
 
International Journal of Mathematics and Statistics Invention (IJMSI)
International Journal of Mathematics and Statistics Invention (IJMSI) International Journal of Mathematics and Statistics Invention (IJMSI)
International Journal of Mathematics and Statistics Invention (IJMSI)
 
D021018022
D021018022D021018022
D021018022
 
Nonlinear perturbed difference equations
Nonlinear perturbed difference equationsNonlinear perturbed difference equations
Nonlinear perturbed difference equations
 
Congrès SMAI 2019
Congrès SMAI 2019Congrès SMAI 2019
Congrès SMAI 2019
 
Differential equation & laplace transformation with matlab
Differential equation & laplace transformation with matlabDifferential equation & laplace transformation with matlab
Differential equation & laplace transformation with matlab
 
Existence of Extremal Solutions of Second Order Initial Value Problems
Existence of Extremal Solutions of Second Order Initial Value ProblemsExistence of Extremal Solutions of Second Order Initial Value Problems
Existence of Extremal Solutions of Second Order Initial Value Problems
 
InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2InfEntr_EntrProd_20100618_2
InfEntr_EntrProd_20100618_2
 
Linear response theory
Linear response theoryLinear response theory
Linear response theory
 
Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...Fixed point theorem of discontinuity and weak compatibility in non complete n...
Fixed point theorem of discontinuity and weak compatibility in non complete n...
 
11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...11.fixed point theorem of discontinuity and weak compatibility in non complet...
11.fixed point theorem of discontinuity and weak compatibility in non complet...
 
Doering Savov
Doering SavovDoering Savov
Doering Savov
 
Tree distance algorithm
Tree distance algorithmTree distance algorithm
Tree distance algorithm
 
Final Present Pap1on relibility
Final Present Pap1on relibilityFinal Present Pap1on relibility
Final Present Pap1on relibility
 

• 13. We’re going to unify the usual stochastic integral with the Poisson integral, by defining
$$\int_0^t\int_E F(s,x)\,M(ds,dx) := \int_0^t G(s)\,dB(s) + \int_0^t\int_{E-\{0\}} F(s,x)\,\tilde{N}(ds,dx),$$
where $G(s) = F(s,0)$. Of course, we need some conditions on the class of integrands. Fix $E \in \mathcal{B}(\mathbb{R}^d)$ and $0 < T < \infty$, and let $\mathcal{P}$ denote the smallest $\sigma$-algebra with respect to which all mappings $F : [0,T] \times E \times \Omega \to \mathbb{R}$ satisfying (1) and (2) below are measurable:
(1) for each $0 \le t \le T$, the mapping $(x,\omega) \mapsto F(t,x,\omega)$ is $\mathcal{B}(E) \otimes \mathcal{F}_t$-measurable;
(2) for each $x \in E$ and $\omega \in \Omega$, the mapping $t \mapsto F(t,x,\omega)$ is left-continuous.
• 21. We call $\mathcal{P}$ the predictable $\sigma$-algebra. A $\mathcal{P}$-measurable mapping $G : [0,T] \times E \times \Omega \to \mathbb{R}$ is then said to be predictable. The definition extends naturally to the case where $[0,T]$ is replaced by $\mathbb{R}^+$. Note that by (1), if $G$ is predictable then the process $t \mapsto G(t,x,\cdot)$ is adapted, for each $x \in E$. If $G$ satisfies (1) and is left-continuous, then it is clearly predictable.
• 26. Define $\mathcal{H}_2(T,E)$ to be the linear space of all equivalence classes of mappings $F : [0,T] \times E \times \Omega \to \mathbb{R}$ which coincide almost everywhere with respect to $\rho \times P$ and which satisfy the following conditions: $F$ is predictable, and
$$\int_0^T\int_E E(|F(t,x)|^2)\,\rho(dt,dx) < \infty.$$
It can be shown that $\mathcal{H}_2(T,E)$ is a real Hilbert space with respect to the inner product $\langle F, G\rangle_{T,\rho} = \int_0^T\int_E E\big(F(t,x)G(t,x)\big)\,\rho(dt,dx)$.
• 30. Define $S(T,E)$ to be the linear space of all simple processes in $\mathcal{H}_2(T,E)$, where $F$ is simple if, for some $m,n \in \mathbb{N}$, there exist $0 \le t_1 \le t_2 \le \cdots \le t_{m+1} = T$ and a family of disjoint Borel subsets $A_1, A_2, \ldots, A_n$ of $E$ with each $\nu(A_i) < \infty$ such that
$$F = \sum_{j=1}^m \sum_{k=1}^n F_{j,k}\,1_{(t_j,t_{j+1}]}\,1_{A_k},$$
where each $F_{j,k}$ is a bounded $\mathcal{F}_{t_j}$-measurable random variable. Note that $F$ is left-continuous and $\mathcal{B}(E) \otimes \mathcal{F}_t$-measurable, hence predictable. An important fact is that $S(T,E)$ is dense in $\mathcal{H}_2(T,E)$.
• 35. One of Itô’s greatest achievements was the definition of the stochastic integral $I_T(F)$, for $F$ simple, by separating the “past” from the “future” within the Riemann sum:
$$I_T(F) = \sum_{j=1}^m \sum_{k=1}^n F_{j,k}\,M((t_j, t_{j+1}], A_k), \qquad (0.1)$$
so on each time interval $[t_j, t_{j+1}]$, $F_{j,k}$ encapsulates information obtained by time $t_j$, while $M$ gives the innovation into the future $(t_j, t_{j+1}]$.
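To make (0.1) concrete, here is a minimal Python sketch of ours (not part of the lectures), restricted to the Brownian part so that $M(dt,dx)$ reduces to $dB(t)$, with the illustrative integrand $F_j = B(t_j)$ (we ignore the boundedness requirement on the $F_{j,k}$ for simplicity):

```python
# Minimal sketch (ours, not from the lectures): the Riemann sum (0.1) for the
# Brownian part only, with the simple integrand F_j = B(t_j).
# The sum then approximates  int_0^T B(s) dB(s).
import numpy as np

rng = np.random.default_rng(0)
T, m = 1.0, 1000
dB = rng.normal(0.0, np.sqrt(T / m), size=m)   # increments B(t_{j+1}) - B(t_j)
B = np.concatenate(([0.0], np.cumsum(dB)))     # B at the points t_1 = 0, ..., t_{m+1} = T

# "past" value F_j = B(t_j) times the "future" increment over (t_j, t_{j+1}]:
I_T = np.sum(B[:-1] * dB)

# For this integrand, Ito calculus gives  int_0^T B dB = (B(T)^2 - T) / 2:
print(I_T, (B[-1] ** 2 - T) / 2)
```

Evaluating the integrand at the left endpoint $t_j$ is exactly the past/future separation; evaluating at the right endpoint instead would shift the limit by the quadratic variation $T$, which is why the choice matters here, unlike for Lebesgue–Stieltjes integrals.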
• 38. Lemma. For each $T \ge 0$ and $F \in S(T,E)$,
$$E(I_T(F)) = 0, \qquad E(I_T(F)^2) = \int_0^T\int_E E(|F(t,x)|^2)\,\rho(dt,dx).$$
Proof. $E(I_T(F)) = 0$ is a straightforward application of linearity and independence. The second result is quite messy; we lose nothing important by just looking at the Brownian case, with $d = 1$.
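Before the proof, a quick Monte Carlo sanity check of the lemma may help; this sketch is ours, in the same Brownian setting with $F_j = B(t_j)$, where $E(F_j^2) = t_j$, so the right-hand side is $\sum_j t_j(t_{j+1} - t_j) \approx \int_0^T t\,dt = T^2/2$:

```python
# Sketch (ours): Monte Carlo check of the lemma in the Brownian case with
# F_j = B(t_j).  The lemma predicts E(I_T(F)) = 0 and
#   E(I_T(F)^2) = sum_j t_j (t_{j+1} - t_j)  ~  T^2 / 2.
import numpy as np

rng = np.random.default_rng(1)
T, m, n_paths = 1.0, 100, 50_000
dB = rng.normal(0.0, np.sqrt(T / m), size=(n_paths, m))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

I = np.sum(B[:, :-1] * dB, axis=1)    # I_T(F) on each sample path
print(I.mean())                       # should be close to 0
print((I ** 2).mean(), T ** 2 / 2)    # both should be close to 0.5
```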
• 41. So let $F(t) := \sum_{j=1}^m F_j 1_{(t_j,t_{j+1}]}$; then $I_T(F) = \sum_{j=1}^m F_j\,(B(t_{j+1}) - B(t_j))$, and
$$I_T(F)^2 = \sum_{j=1}^m \sum_{p=1}^m F_j F_p\,(B(t_{j+1}) - B(t_j))(B(t_{p+1}) - B(t_p)).$$
Now fix $j$ and split the second sum into three pieces, corresponding to $p < j$, $p = j$ and $p > j$.
• 45. When $p < j$, $F_j F_p\,(B(t_{p+1}) - B(t_p)) \in \mathcal{F}_{t_j}$, which is independent of $B(t_{j+1}) - B(t_j)$, so
$$E[F_j F_p (B(t_{j+1}) - B(t_j))(B(t_{p+1}) - B(t_p))] = E[F_j F_p (B(t_{p+1}) - B(t_p))]\,E(B(t_{j+1}) - B(t_j)) = 0.$$
Exactly the same argument works when $p > j$.
• 48. What remains is the case $p = j$, and by independence again
$$E(I_T(F)^2) = \sum_{j=1}^m E(F_j^2)\,E\big((B(t_{j+1}) - B(t_j))^2\big) = \sum_{j=1}^m E(F_j^2)(t_{j+1} - t_j). \qquad \Box$$
We deduce from Lemma 1 that $I_T$ is a linear isometry from $S(T,E)$ into $L^2(\Omega,\mathcal{F},P)$, and hence it extends to an isometric embedding of the whole of $\mathcal{H}_2(T,E)$ into $L^2(\Omega,\mathcal{F},P)$: if simple processes $F_n \to F$ in $\mathcal{H}_2(T,E)$, then $(I_T(F_n))$ is Cauchy in $L^2$ and its limit defines $I_T(F)$. We continue to denote this extension by $I_T$, and we call $I_T(F)$ the (Itô) stochastic integral of $F \in \mathcal{H}_2(T,E)$. When convenient, we will use the Leibniz notation $I_T(F) := \int_0^T\int_E F(t,x)\,M(dt,dx)$.
• 55. So for predictable $F$ satisfying $\int_0^T\int_E E(|F(s,x)|^2)\,\rho(ds,dx) < \infty$, we can find a sequence $(F_n, n \in \mathbb{N})$ of simple processes such that
$$\int_0^T\int_E F(t,x)\,M(dt,dx) = \lim_{n\to\infty} \int_0^T\int_E F_n(t,x)\,M(dt,dx).$$
The limit is taken in the $L^2$-sense and is independent of the choice of approximating sequence.
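The density of $S(T,E)$ is what makes this work in practice: on a single Brownian path one can watch the simple-process sums stabilise as the partition refines. A sketch of ours, again with the illustrative integrand $B(t_j)$ from the earlier examples:

```python
# Sketch (ours): the L^2 approximation seen on one path.  Coarse simple-
# process sums  sum_j B(t_j)(B(t_{j+1}) - B(t_j))  computed from a single
# fine Brownian path approach the value (B(T)^2 - T)/2 of the limiting
# integral  int_0^T B dB  as the partition refines.
import numpy as np

rng = np.random.default_rng(2)
T, m_fine = 1.0, 2 ** 14
dB = rng.normal(0.0, np.sqrt(T / m_fine), size=m_fine)
B = np.concatenate(([0.0], np.cumsum(dB)))

for m in (2 ** 4, 2 ** 8, 2 ** 12):
    Bc = B[:: m_fine // m]                  # the same path on a coarser grid
    print(m, np.sum(Bc[:-1] * np.diff(Bc)))
print("limit:", (B[-1] ** 2 - T) / 2)
```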
• 59. The following theorem summarises some useful properties of the stochastic integral.
Theorem. If $F, G \in \mathcal{H}_2(T,E)$ and $\alpha, \beta \in \mathbb{R}$, then:
1. $I_T(\alpha F + \beta G) = \alpha I_T(F) + \beta I_T(G)$.
2. $E(I_T(F)) = 0$ and $E(I_T(F)^2) = \int_0^T\int_E E(|F(t,x)|^2)\,\rho(dt,dx)$.
3. $(I_t(F), t \ge 0)$ is $\mathcal{F}_t$-adapted.
4. $(I_t(F), t \ge 0)$ is a square-integrable martingale.
• 65. Proof. (1) and (2) are easy. For (3), let $(F_n, n \in \mathbb{N})$ be a sequence in $S(T,E)$ converging to $F$; then each process $(I_t(F_n), t \ge 0)$ is clearly adapted. Since $I_t(F_n) \to I_t(F)$ in $L^2$ as $n \to \infty$, we can find a subsequence $(F_{n_k}, k \in \mathbb{N})$ such that $I_t(F_{n_k}) \to I_t(F)$ a.s. as $k \to \infty$, and the required result follows.
• 70. (4) Let $F \in S(T,E)$ and (without loss of generality) choose $0 < s = t_l < t_{l+1} < t$. Then it is easy to see that $I_t(F) = I_s(F) + I_{s,t}(F)$, and hence $E_s(I_t(F)) = I_s(F) + E_s(I_{s,t}(F))$ by (3). However,
$$E_s(I_{s,t}(F)) = E_s\Big(\sum_{j=l+1}^m \sum_{k=1}^n F_{j,k}\,M((t_j, t_{j+1}], A_k)\Big) = \sum_{j=l+1}^m \sum_{k=1}^n E_s(F_{j,k})\,E(M((t_j, t_{j+1}], A_k)) = 0.$$
The result now follows by the continuity of $E_s$ in $L^2$. Indeed, let $(F_n, n \in \mathbb{N})$ be a sequence in $S(T,E)$ converging to $F$; then we have
$$\|E_s(I_t(F)) - E_s(I_t(F_n))\|_2 \le \|I_t(F) - I_t(F_n)\|_2 = \|F - F_n\|_{T,\rho} \to 0 \text{ as } n \to \infty. \qquad \Box$$
• 78. We can extend the stochastic integral $I_T(F)$ to integrands in $\mathcal{P}_2(T,E)$. This is the linear space of all equivalence classes of mappings $F : [0,T] \times E \times \Omega \to \mathbb{R}$ which coincide almost everywhere with respect to $\rho \times P$, and which satisfy the following conditions: $F$ is predictable, and
$$P\left(\int_0^T\int_E |F(t,x)|^2\,\rho(dt,dx) < \infty\right) = 1.$$
If $F \in \mathcal{P}_2(T,E)$, then $(I_t(F), t \ge 0)$ is always a local martingale, but not necessarily a martingale.
• 83. Poisson Stochastic Integrals
Let $A$ be an arbitrary Borel set in $\mathbb{R}^d - \{0\}$ which is bounded below, and introduce the compound Poisson process $P = (P(t), t \ge 0)$, where each $P(t) = \int_A x\,N(t,dx)$. Let $K$ be a predictable mapping; then, generalising the earlier Poisson integrals, we define
$$\int_0^T\int_A K(t,x)\,N(dt,dx) = \sum_{0 \le u \le T} K(u, \Delta P(u))\,1_A(\Delta P(u)) \qquad (0.2)$$
as a random finite sum.
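Because the sum in (0.2) is finite, it can be computed directly from a simulated jump skeleton. A sketch of ours for $d = 1$; the rate, the jump law on $A = [0.5, 2)$ and the integrand $K$ below are all arbitrary choices made for the demonstration:

```python
# Illustrative sketch (ours, d = 1): the Poisson stochastic integral (0.2)
# as a finite sum over the jumps of a compound Poisson process whose jump
# sizes lie in A = [0.5, 2), a set bounded below.
import numpy as np

rng = np.random.default_rng(3)
T, lam = 1.0, 5.0

n_jumps = rng.poisson(lam * T)                      # number of jumps up to T
times = np.sort(rng.uniform(0.0, T, size=n_jumps))  # jump times u
jumps = rng.uniform(0.5, 2.0, size=n_jumps)         # jump sizes Delta P(u) in A

def K(t, x):              # a deterministic, hence predictable, integrand
    return t * x ** 2

integral = float(np.sum(K(times, jumps)))  # sum_u K(u, Delta P(u)) 1_A(Delta P(u))
print(n_jumps, integral)
```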
• 87. In particular, if $H$ satisfies the square-integrability condition given above, we may then define, for each $1 \le i \le d$,
$$\int_0^T\int_A H^i(t,x)\,\tilde{N}(dt,dx) := \int_0^T\int_A H^i(t,x)\,N(dt,dx) - \int_0^T\int_A H^i(t,x)\,\nu(dx)\,dt.$$
The definition (0.2) can, in principle, be used to define stochastic integrals for a more general class of integrands than we have been considering. For simplicity, let $N = (N(t), t \ge 0)$ be a Poisson process of intensity 1 and let $f : \mathbb{R} \to \mathbb{R}$; then we may define
$$\int_0^t f(N(s))\,dN(s) = \sum_{0 \le s \le t} f(N(s-) + \Delta N(s))\,\Delta N(s).$$
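For a unit-rate Poisson process this last integral collapses to an elementary sum, as the following sketch of ours (with an arbitrary choice of $f$) shows:

```python
# Sketch (ours): for a rate-1 Poisson process N, every jump has size 1, so
#   int_0^t f(N(s)) dN(s) = sum over jump times s <= t of f(N(s-) + 1)
#                         = sum_{i=1}^{N(t)} f(i),
# because N(s-) + Delta N(s) = i at the i-th jump.
import numpy as np

rng = np.random.default_rng(4)
t = 10.0
n_t = int(rng.poisson(t))      # N(t) for a unit-intensity Poisson process

def f(n):
    return n ** 2              # an arbitrary illustrative choice of f

integral = sum(f(i) for i in range(1, n_t + 1))
print(n_t, integral)
```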
• 90. Lévy-type stochastic integrals
We take $E = \hat{B} - \{0\} = \{x \in \mathbb{R}^d : 0 < |x| < 1\}$ throughout this subsection. We say that an $\mathbb{R}^d$-valued stochastic process $Y = (Y(t), t \ge 0)$ is a Lévy-type stochastic integral if it can be written in the following form for each $1 \le i \le d$, $t \ge 0$:
$$Y^i(t) = Y^i(0) + \int_0^t G^i(s)\,ds + \int_0^t F^i_j(s)\,dB^j(s) + \int_0^t\int_{|x|<1} H^i(s,x)\,\tilde{N}(ds,dx) + \int_0^t\int_{|x|\ge 1} K^i(s,x)\,N(ds,dx),$$
where, for each $1 \le i \le d$, $1 \le j \le m$, $t \ge 0$, we require $|G^i|^{1/2}, F^i_j \in \mathcal{P}_2(T)$, $H^i \in \mathcal{P}_2(T,E)$ and $K$ predictable (summation over the repeated index $j$ being understood). $B$ is an $m$-dimensional standard Brownian motion and $N$ is an independent Poisson random measure on $\mathbb{R}^+ \times (\mathbb{R}^d - \{0\})$ with compensator $\tilde{N}$ and intensity measure $\nu$, which we will assume is a Lévy measure.
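A minimal Euler-type simulation sketch of ours may help fix ideas. To keep it short we choose, purely for illustration, $d = m = 1$ and $\nu = \lambda\,\delta_{x_0}$ with $|x_0| \ge 1$: then there are no small jumps (the $H$/$\tilde{N}$ term vanishes) and the big-jump term is driven by a compound Poisson process; $G$, $F$, $K$ are deterministic (hence predictable) integrands of our choosing:

```python
# Euler-type sketch (ours) of a one-dimensional Levy-type stochastic
# integral  dY = G dt + F dB + K dN  (no small-jump term, since the chosen
# Levy measure nu = lam * delta_{x0} puts no mass in {|x| < 1}).
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 1000
dt = T / n
lam, x0 = 2.0, 1.5                       # jump intensity and fixed jump size

def G(s): return np.cos(s)               # drift integrand
def F(s): return 0.3                     # Brownian integrand
def K(s, x): return s * x                # big-jump integrand

Y = np.empty(n + 1)
Y[0] = 1.0
for k in range(n):
    s = k * dt                           # left endpoint of (t_k, t_{k+1}]
    dB = rng.normal(0.0, np.sqrt(dt))
    dN = rng.poisson(lam * dt)           # big jumps arriving in this step
    Y[k + 1] = Y[k] + G(s) * dt + F(s) * dB + K(s, x0) * dN
print(Y[-1])
```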
  • 95. We will assume that the random variable Y (0) is F0 -measurable, and then it is clear that Y is an adapted process. We can often simplify complicated expressions by employing the notation of stochastic differentials to represent Lévy-type stochastic integrals. We then write the last expression as ˜ dY (t) = G(t)dt + F (t)dB(t) + H(t, x)N(dt, dx) + K (t, x)N(dt, dx). When we want to particularly emphasise the domains of integration with respect to x, we will use an equivalent notation dY (t) = G(t)dt + F (t)dB(t) + ˜ H(t, x)N(dt, dx) |x|<1 + K (t, x)N(dt, dx). |x|≥1 Clearly Y is a semimartingale. Dave Applebaum (Sheffield UK) Lecture 4 December 2011 21 / 58
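As a concrete illustration of this differential notation, here is a minimal Python simulation sketch. It is not from the lecture: it discretises a one-dimensional Lévy-type stochastic integral by an Euler scheme, omits the compensated small-jump term $H$ for simplicity, and models the big jumps by a compound Poisson process; all names and distributional choices are illustrative assumptions.

```python
# Euler-type discretisation of dY(t) = G(t)dt + F(t)dB(t) + K(t,x)N(dt,dx).
# Illustrative sketch only; the small-jump compensated term is omitted.
import numpy as np

rng = np.random.default_rng(seed=1)

T, n = 1.0, 1000
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Illustrative integrands, evaluated at the left end point of each
# interval to mimic predictability.
G = lambda s, y: -y          # drift integrand
F = lambda s, y: 0.2         # Brownian integrand
K = lambda s, x, y: x        # big-jump integrand

lam = 5.0                    # assumed intensity of big jumps, nu({|x| >= 1})

Y = np.zeros(n + 1)
for k in range(n):
    dB = rng.normal(0.0, np.sqrt(dt))            # Brownian increment
    m = rng.poisson(lam * dt)                    # number of big jumps in the step
    marks = 1.0 + rng.exponential(0.5, size=m)   # jump sizes with |x| >= 1
    Y[k + 1] = (Y[k]
                + G(t[k], Y[k]) * dt
                + F(t[k], Y[k]) * dB
                + sum(K(t[k], x, Y[k]) for x in marks))
```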
Let $M = (M(t), t \geq 0)$ be an adapted process which is such that $MJ \in \mathcal{P}_2(t, A)$ whenever $J \in \mathcal{P}_2(t, A)$ (where $A \in \mathcal{B}(\mathbb{R}^d)$ is arbitrary). For example, it is sufficient to take $M$ to be adapted and left-continuous. For these processes we can define an adapted process $Z = (Z(t), t \geq 0)$ by the prescription that it have the stochastic differential

$$dZ(t) = M(t)G(t)\,dt + M(t)F(t)\,dB(t) + M(t)H(t,x)\,\tilde{N}(dt,dx) + M(t)K(t,x)\,N(dt,dx),$$

and we will adopt the natural notation $dZ(t) = M(t)\,dY(t)$.
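A simple instance (an illustration, not from the slides): for a stopping time $\tau$, the process $M(t) = \mathbf{1}_{[0,\tau]}(t)$ is adapted and left-continuous, and the corresponding $Z$ is the stopped integral

$$Z(t) = \int_0^t \mathbf{1}_{[0,\tau]}(s)\,dY(s) = Y(t \wedge \tau) - Y(0).$$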
Example (Lévy stochastic integrals). Let $X$ be a Lévy process with characteristics $(b, a, \nu)$ and Lévy–Itô decomposition

$$X(t) = bt + B_a(t) + \int_{|x|<1} x\,\tilde{N}(t,dx) + \int_{|x|\geq 1} x\,N(t,dx),$$

for each $t \geq 0$. Let $L \in \mathcal{P}_2(t)$ for all $t \geq 0$ and choose each $F^i_j = a^i_j L$, $H^i = K^i = x^i L$. Then we can construct processes with the stochastic differential

$$dY(t) = L(t)\,dX(t). \qquad (0.3)$$

We call $Y$ a Lévy stochastic integral. In the case where $X$ has finite variation, the Lévy stochastic integral $Y$ can also be constructed as a Lebesgue–Stieltjes integral, and this coincides (up to a set of measure zero) with the prescription (0.3).
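Spelling the choice out componentwise (and taking also $G^i = b^i L$, which is implicit in the example, since $X$ has drift $bt$), (0.3) is just the Lévy-type stochastic integral

$$Y^i(t) = Y^i(0) + \int_0^t b^i L(s)\,ds + \int_0^t L(s)\,dB_a^i(s) + \int_0^t\!\int_{|x|<1} x^i L(s)\,\tilde{N}(ds,dx) + \int_0^t\!\int_{|x|\geq 1} x^i L(s)\,N(ds,dx).$$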
Example: the Ornstein–Uhlenbeck process (OU process). The process

$$Y(t) = e^{-\lambda t} y_0 + \int_0^t e^{-\lambda(t-s)}\,dX(s), \qquad (0.4)$$

where $y_0 \in \mathbb{R}^d$ is fixed, is a Markov process. The condition

$$\int_{|x|>1} \log(1+|x|)\,\nu(dx) < \infty$$

is necessary and sufficient for it to be stationary. There are important applications to volatility modelling in finance which have been developed by Ole Barndorff-Nielsen and Neil Shephard. Intriguingly, every self-decomposable random variable can be naturally embedded in a stationary Lévy-driven OU process. See Lecture 5 for more details.
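To make (0.4) concrete, here is a hedged simulation sketch assuming, purely for illustration, that $X$ is one-dimensional Brownian motion plus compound Poisson jumps with exponential sizes (a driver satisfying the log-moment condition above). The recursion uses the exact decay factor $e^{-\lambda\,dt}$ and an Euler approximation of the stochastic integral over each step; all parameter values are hypothetical.

```python
# Euler scheme for the Levy-driven OU process (0.4).
import numpy as np

rng = np.random.default_rng(seed=2)

lam = 2.0                     # mean-reversion rate lambda in (0.4)
T, n = 10.0, 10_000
dt = T / n
sigma, jump_rate = 0.3, 1.0   # illustrative driver parameters

y = 1.0                       # y0
path = [y]
for _ in range(n):
    dX = sigma * rng.normal(0.0, np.sqrt(dt))   # Gaussian increment of X
    if rng.random() < jump_rate * dt:           # at most one jump per step
        dX += rng.exponential(0.5)
    y = np.exp(-lam * dt) * y + dX              # one step of (0.4)
    path.append(y)
```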
Stochastic integration has a plethora of applications including filtering, stochastic control and infinite-dimensional analysis. Here is some motivation from finance. Suppose that $X(t)$ is the value of a stock at time $t$ and $F(t)$ is the number of stocks owned at time $t$. Assume for now that we buy and sell stocks at discrete times $0 = t_0 < t_1 < \cdots < t_n < t_{n+1} = T$. Then the total value of the portfolio at time $T$ is

$$V(T) = V(0) + \sum_{j=0}^{n} F(t_j)\,(X(t_{j+1}) - X(t_j)),$$

and in the limit as the times become infinitesimally close together, we have the stochastic integral

$$V(T) = V(0) + \int_0^T F(s)\,dX(s).$$
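The discrete sum is easy to compute directly; here is a toy numerical check with made-up prices and holdings (all values hypothetical).

```python
# Discrete portfolio value: V(T) = V(0) + sum_j F(t_j)(X(t_{j+1}) - X(t_j)).
import numpy as np

X = np.array([100.0, 102.5, 101.0, 104.0, 103.5])  # prices X(t_0), ..., X(t_4)
F = np.array([10.0, 12.0, 8.0, 8.0])               # holdings F(t_j) on [t_j, t_{j+1})
V0 = 0.0
V_T = V0 + np.sum(F * np.diff(X))
print(V_T)                                         # 27.0
```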
Stochastic integration against Brownian motion was first developed by Wiener for sure (i.e. non-random) functions. Itô's groundbreaking work in 1944 extended this to random adapted integrands. The generalisation of the integrator to arbitrary martingales was due to Kunita and Watanabe in 1967, and the further step of allowing semimartingales was due to P. A. Meyer and the Strasbourg school in the 1970s.
Itô's Formula

We begin with the easy case: Itô's formula for Poisson stochastic integrals of the form

$$W^i(t) = W^i(0) + \int_0^t\!\int_A K^i(s,x)\,N(ds,dx), \qquad (0.5)$$

for $1 \leq i \leq d$, where $t \geq 0$, $A$ is bounded below and each $K^i$ is predictable. Itô's formula for such processes takes a particularly simple form, as sketched below.
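For orientation, a hedged sketch of that simple form (a standard statement for such processes): for $f \in C(\mathbb{R}^d)$ and each $t \geq 0$, with probability 1,

$$f(W(t)) - f(W(0)) = \int_0^t\!\int_A \left[ f(W(s-) + K(s,x)) - f(W(s-)) \right] N(ds,dx),$$

which is a pathwise finite sum over the jump times of $N$ in $A$.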