
HANOI PEDAGOGICAL UNIVERSITY 2
DEPARTMENT OF MATHEMATICS
--------------------------

NGUYEN THI THU HA

SKOROKHOD EMBEDDING

BACHELOR THESIS

Major: APPLIED MATHEMATICS

Supervisor: Assoc. Prof. NGO HOANG LONG

Hanoi, May 2019


Thesis Acknowledgement

I would like to express my gratitude to the teachers of the Department
of Mathematics, Hanoi Pedagogical University 2, especially the teachers in
the applied mathematics group and the lecturers involved, who have imparted
valuable knowledge and supported me in completing the course and this thesis.
In particular, I would like to express my deep respect and gratitude to
Assoc. Prof. Ngo Hoang Long, who directly guided and helped me complete
this thesis. I also want to thank Dr. Nguyen Duy Tan for his valuable
advice and assistance during my degree.
Since time, capacity and resources were limited, the thesis may still contain
errors, and I look forward to receiving valuable comments from teachers and
friends.

Hanoi, May 06th 2018
Student

Nguyen Thi Thu Ha


Thesis Assurance

I assure that the data and the results of this thesis are truthful and not
identical to those of any other work. I also assure that all the help received
for this thesis has been acknowledged and that the results presented in the
thesis have been clearly identified.

Hanoi, May 06th 2018
Student

Nguyen Thi Thu Ha



Contents

Preface                                                        1

1 Stochastic processes                                         2
  1.1 Conditional expectation                                  2
  1.2 Discrete-time Martingales                                3
      1.2.1 Stopping times                                     3
      1.2.2 Martingales                                        4
      1.2.3 Optional stopping                                  5
      1.2.4 Doob's inequalities                                6
      1.2.5 Martingale convergence theorem                     8
  1.3 Continuous-time Martingales                              11
      1.3.1 Stochastic processes                               11
      1.3.2 Brownian motion                                    12
      1.3.3 Continuous-time Martingales                        15
      1.3.4 Markov properties of Brownian motion               19
      1.3.5 Continuous semimartingales                         21

2 Stochastic analysis                                          25
  2.1 Stochastic integral                                      25
      2.1.1 Stochastic integral in M2([a, b])                  25
      2.1.2 Stochastic integral with stopping times as limits  29
      2.1.3 Stochastic integral in L2([a, b])                  30
      2.1.4 Itô integral in m dimensions                       30
  2.2 Itô's formula                                            31
      2.2.1 The one-dimensional case                           31
      2.2.2 The m-dimensional case (m ≥ 2)                     32
  2.3 Some applications of Itô's formula                       34

3 Skorokhod embedding                                          37
  3.1 Preliminaries                                            37
  3.2 Construction of the embedding                            42
  3.3 Some applications of Skorokhod embedding                 45

Conclusion                                                     47

References                                                     47


Preface

Stochastic analysis is a field of mathematics with many applications, especially in
financial mathematics, banking and insurance. Its basic notions, such as filtration,
martingale and stochastic integral, arise naturally in real-world models.
The Skorokhod embedding is one of the beautiful results of stochastic analysis. Not
only is it a tool for studying many theoretical problems, but it is also a tool for
solving applied problems. The Skorokhod embedding problem was stated and solved by
the Russian mathematician Anatoliy Skorokhod in 1961.
It states that if Y is a random variable with mean zero and finite variance and W is
a Brownian motion, then there exists a stopping time T such that WT has the same
law as Y. With the desire to learn more about this result, I chose the title
"Skorokhod embedding" for my graduation thesis.
Let us describe the content of this thesis.
In Chapter 1, we begin with the definitions and basic properties of some important
concepts of stochastic processes.
In Chapter 2, we consider the stochastic integral in several spaces. Next, we
introduce the notion of the m-dimensional Itô integral and Itô's formula. After that,
we prove some important inequalities for stochastic integrals using Itô's formula.
In Chapter 3, we first review some objects used in the construction of the Skorokhod
embedding, then present the construction itself. Finally, we present some applications
of the Skorokhod embedding.
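The embedding stated in the preface can be illustrated numerically for the simplest target law. The sketch below is my own illustration (all names and parameters are hypothetical, not from the thesis): for a two-point random variable Y with P(Y = b) = -a/(b - a) and P(Y = a) = b/(b - a), where a < 0 < b (so EY = 0), the first time T at which Brownian motion leaves (a, b) satisfies W_T ~ Y. We approximate W by summed Gaussian increments.

```python
import random

def hit_two_point(a, b, dt=1e-3, max_steps=100_000):
    """Simulate W until it first leaves (a, b); return the boundary hit.

    For a < 0 < b, W_T equals b with probability -a/(b - a), which is
    exactly the mean-zero two-point law Y described above.
    """
    w = 0.0
    for _ in range(max_steps):
        w += random.gauss(0.0, dt ** 0.5)
        if w <= a:
            return a
        if w >= b:
            return b
    return w  # extremely unlikely for these parameters

random.seed(1)
a, b = -1.0, 2.0
samples = [hit_two_point(a, b) for _ in range(1500)]
p_b = sum(1 for s in samples if s == b) / len(samples)
mean = sum(samples) / len(samples)
# p_b should be near -a/(b - a) = 1/3 and the mean near EY = 0.
print(round(p_b, 2), round(mean, 2))
```

The general construction for arbitrary laws Y is the subject of Chapter 3; this two-point case only illustrates why a stopping time with the right exit probabilities reproduces the target distribution.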



Chapter 1

Stochastic processes

1.1 Conditional expectation

Definition 1.1.1. Let (Ω, G, P) be a probability space and X a random variable with
E(|X|) < ∞. Let F be a sub-σ-algebra of G. Then there exists a random variable Y
such that
1. Y is F-measurable;
2. E(|Y|) < ∞;
3. for every A ∈ F, we have E[Y; A] = E[X; A], that is,
∫_A X dP = ∫_A Y dP.
Then Y is called the conditional expectation of X given F and is denoted by E[X|F].
Remark 1.1.2. Such a random variable Y exists and is unique almost surely.
Proposition 1.1.3. If X is F measurable, E[X|F] = X.
Proposition 1.1.4. If X is independent of F, E[X|F] = EX.
Proposition 1.1.5. E[E[X|F]] = EX.
Proposition 1.1.6. (1) If X ≥ Y are both integrable, then
E[X|F] ≥ E[Y|F], a.s. (almost surely).
(2) If X and Y are integrable and a ∈ R, then
E[aX + Y|F] = aE[X|F] + E[Y|F].



Proposition 1.1.7. If g is convex and X and g(X) are integrable, then
E[g(X)|F] ≥ g(E[X|F]), a.s.
Proposition 1.1.8. If X and XY are integrable and Y is measurable with respect to
F, then
E[XY |F] = Y E[X|F].
Proposition 1.1.9. If ε ⊂ F ⊂ G are σ-algebras, then
E[E[X|F]|ε] = E[X|ε] = E[E[X|ε]|F].
Proposition 1.1.10. If c is a constant, E[c|F] = c.
Proposition 1.1.11. If F = {∅, Ω}, E[X|F] = EX.
Proposition 1.1.12. If H is a sub-σ-algebra of G and independent of F and X, then
E[X|σ(F, H)] = E[X|F], a.s.
Example 1.1.13. Let (Ω, G, P) be a probability space, and {A1, A2, ..., An} be a finite
collection of pairwise disjoint sets whose union is Ω, with P(Ai) > 0 for all i. Let F
be the σ-algebra generated by the Ai's. Then

E[X|F] = Σ_{i=1}^n (1/P(Ai)) (∫_{Ai} X dP) I_{Ai}.

1.2 Discrete-time Martingales

1.2.1 Stopping times

Definition 1.2.1. Let (Ω, F, P) be a probability space. {Fn : n ≥ 0} is called a
filtration if it is an increasing family of sub-σ-algebras of F, that is, F0 ⊆ F1 ⊆ F2 ⊆
· · · ⊆ F.
We define F∞ := σ(∪n Fn ) ⊂ F.
Definition 1.2.2. Let (Fn ) be a filtration. A random mapping T from Ω to {0, 1, 2, . . .}
is called a stopping time if for each n, {T ≤ n} = {ω : T (ω) ≤ n} ∈ Fn .
Proposition 1.2.3. (1) Fixed times n are stopping times.
(2) If T1 and T2 are stopping times, then so are T1 ∧ T2 and T1 ∨ T2.
(3) If Tn is an increasing sequence of stopping times, then so is T = supn Tn.
(4) If Tn is a decreasing sequence of stopping times, then so is T = infn Tn.
(5) If T is a stopping time, then so is T + n.
We define
FT = {A : A ∩ (T ≤ n) ∈ Fn for all n}.
Example 1.2.4. If T ≡ k ∈ N, then T is clearly a stopping time.

1.2.2 Martingales

Definition 1.2.5. A process M = (Mn : n ≥ 0) is called adapted to the filtration (Fn )
if for each n, Mn is Fn -measurable.
Definition 1.2.6. A process M is called a martingale (relative to ((Fn), P)) if
i) M is adapted to (Fn);
ii) E[|Mn|] < ∞, ∀n;
iii) E[Mn|Fn−1] = Mn−1, a.s. (∀n ≥ 1).
A supermartingale (relative to ((Fn), P)) is defined similarly, except that (iii) is
replaced by E[Mn|Fn−1] ≤ Mn−1, a.s. (n ≥ 1), and a submartingale is defined with (iii)
replaced by E[Mn|Fn−1] ≥ Mn−1, a.s. (n ≥ 1).
Example 1.2.7. Let (Xi) be a sequence of independent, integrable, mean-zero random
variables, Fi = σ{X1, X2, ..., Xi} and Sn = Σ_{i=1}^n Xi. Then Mn = Sn is a martingale
because
1. ∀n ≥ 0, Mn is clearly Fn-measurable.
2. E|Mn| = E|Σ_{i=1}^n Xi| ≤ Σ_{i=1}^n E|Xi| < ∞.
3. E[Mn|Fn−1] = Mn−1 + E[Mn − Mn−1|Fn−1] = Mn−1 + E[Xn] = Mn−1.
Example 1.2.8. Let (Xi) be a sequence of independent random variables with mean zero
and variance one, Fi = σ{X1, X2, ..., Xi}, Sn as in the previous example, and
Mn = Sn² − n. Then Mn = Sn² − n is a martingale because
1. ∀n ≥ 0, Mn is clearly Fn-measurable.
2. Since |Sn² − n| ≤ Sn² + n, we have
E|Mn| = E|Sn² − n| ≤ E[Sn²] + n = Σ_{i=1}^n EXi² + n = 2n < ∞.
3. E[Sn²|Fn−1] = E[(Sn − Sn−1)²|Fn−1] + 2Sn−1 E[Sn|Fn−1] − Sn−1²
             = 1 + Sn−1².
Then E[Mn|Fn−1] = E[Sn² − n|Fn−1] = 1 + Sn−1² − n = Mn−1.
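Both discrete-time examples can be checked by simulation. The following sketch is my own illustration (not part of the thesis): it uses the symmetric ±1 random walk, for which EXi = 0 and EXi² = 1, and verifies that the sample means of Mn = Sn and Mn = Sn² − n stay near EM0 = 0, as the martingale property requires.

```python
import random

random.seed(0)
n, trials = 50, 20000
mean_s = mean_q = 0.0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))  # S_n for a +/-1 walk
    mean_s += s
    mean_q += s * s - n                                # M_n = S_n^2 - n
mean_s /= trials
mean_q /= trials
# Both averages estimate EM_n = EM_0 = 0 for the two martingales above.
print(round(mean_s, 1), round(mean_q, 1))
```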

1.2.3 Optional stopping

Theorem 1.2.9. If T is a stopping time with respect to (Fn) that is bounded by a
positive integer K and Mn is a martingale, then EMT = EM0.
Proof. We write EMT = Σ_{k=0}^K E[MT; T = k] = Σ_{k=0}^K E[Mk; T = k].
Since {T = k} ∈ Fk and E[Mj+1|Fj] = Mj for each j ≥ k, we have
E[Mk; T = k] = E[Mk+1; T = k] = E[Mk+2; T = k] = ... = E[MK; T = k].
Thus EMT = Σ_{k=0}^K E[MK; T = k] = EMK = EM0.
This completes the proof.
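Theorem 1.2.9 can be illustrated with a bounded stopping time. In this sketch (my own, with made-up parameters), Sn is a symmetric ±1 walk and T is the first time |Sn| reaches 5, capped at K = 100; T is then a bounded stopping time, and the sample mean of S_T should be near ES0 = 0.

```python
import random

random.seed(2)
K, barrier, trials = 100, 5, 20000
total = 0.0
for _ in range(trials):
    s, t = 0, 0
    # T = min(first exit time of (-5, 5), K) is a bounded stopping time.
    while t < K and abs(s) < barrier:
        s += random.choice((-1, 1))
        t += 1
    total += s  # this is S_T
mean_st = total / trials
print(round(mean_st, 2))  # optional stopping: E[S_T] = E[S_0] = 0
```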
Corollary 1.2.10. If T is a stopping time bounded by K and Mn is a submartingale,
then EMT ≤ EMK .
Corollary 1.2.11. If T is a stopping time bounded by K, A ∈ FT , and Mn is a
submartingale, then E[MT ; A] ≤ E[MK ; A].
Proposition 1.2.12. If T1 ≤ T2 are stopping times bounded by K and Mn is a martingale, then E[MT2 |FT1 ] = MT1 , a.s.
Proof. Suppose A ∈ FT1. We want to show E[MT1; A] = E[MT2; A].
Define a new stopping time T3 by
T3(ω) = T1(ω) if ω ∈ A;  T3(ω) = T2(ω) if ω ∉ A.
It is easy to check that T3 is a stopping time, so EMT3 = EMK = EMT2, which implies
E[MT1; A] + E[MT2; Ac] = E[MT2].
Subtracting E[MT2; Ac] from each side gives E[MT1; A] = E[MT2; A], completing
the proof.
Proposition 1.2.13. Suppose Xk is a submartingale with respect to an increasing
sequence of σ-algebras Fk. Then we can write Xk = Mk + Ak, where Mk is a
martingale adapted to the Fk and Ak is a sequence of random variables with Ak
being Fk−1-measurable and A0 ≤ A1 ≤ ...
Proof. Let ak = E[Xk|Fk−1] − Xk−1 for k = 1, 2, ...
Since Xk is a submartingale, each ak ≥ 0.
Let Ak = Σ_{i=1}^k ai. The fact that the Ak are increasing and measurable with
respect to Fk−1 is clear.
Set Mk = Xk − Ak. Then
E[Mk+1 − Mk|Fk] = E[Xk+1 − Xk|Fk] − ak+1 = 0,
so Mk is a martingale.
Corollary 1.2.14. Suppose Mk is a submartingale, and N1 ≤ N2 are bounded stopping
times. Then
E[MN2 |FN1 ] ≥ MN1 .
1.2.4 Doob's inequalities

Theorem 1.2.15. Let X = (Xn, Fn)n≥0 be a submartingale. Then for all λ > 0, we
have

λP(max_{k≤n} Xk ≥ λ) ≤ E[Xn I_{[max_{k≤n} Xk ≥ λ]}] ≤ EXn⁺,          (1.1)

λP(min_{k≤n} Xk ≤ −λ) ≤ E[Xn I_{[min_{k≤n} Xk > −λ]}] − EX0 ≤ EXn⁺ − EX0,   (1.2)

λP(max_{k≤n} |Xk| ≥ λ) ≤ 3 max_{k≤n} E|Xk|.                        (1.3)

Proof. We put
T = min{k ≤ n : Xk ≥ λ} on {max_{k≤n} Xk ≥ λ};   T = n on {max_{k≤n} Xk < λ}.
Then T is a stopping time bounded by n. Thus, applying Corollary 1.2.10, we have
EXn ≥ EXT = E(XT I_{[max_{k≤n} Xk ≥ λ]}) + E(XT I_{[max_{k≤n} Xk < λ]})
        ≥ λP(max_{k≤n} Xk ≥ λ) + E(Xn I_{[max_{k≤n} Xk < λ]}).
Hence
λP(max_{k≤n} Xk ≥ λ) ≤ EXn − E(Xn I_{[max_{k≤n} Xk < λ]}) = E(Xn I_{[max_{k≤n} Xk ≥ λ]}).
Moreover, Xn I_{[max_{k≤n} Xk ≥ λ]} ≤ Xn⁺. Therefore, (1.1) is proved.
Now we define a stopping time σ as follows:
σ = min{k ≤ n : Xk ≤ −λ} on {min_{k≤n} Xk ≤ −λ};   σ = n on {min_{k≤n} Xk > −λ}.
Similarly, we have
EX0 ≤ EXσ = E(Xσ I_{[min_{k≤n} Xk ≤ −λ]}) + E(Xσ I_{[min_{k≤n} Xk > −λ]})
        ≤ −λP(min_{k≤n} Xk ≤ −λ) + E(Xn I_{[min_{k≤n} Xk > −λ]}).
Moreover, Xn I_{[min_{k≤n} Xk > −λ]} ≤ Xn⁺. These imply that (1.2) is proved. Finally, since
P(max_{k≤n} |Xk| ≥ λ) ≤ P(max_{k≤n} Xk ≥ λ) + P(min_{k≤n} Xk ≤ −λ),
applying (1.1) and (1.2) we have
λP(max_{k≤n} |Xk| ≥ λ) ≤ 2EXn⁺ − EX0 ≤ 3 max_{k≤n} E|Xk|.
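Inequality (1.1) can be checked empirically. The sketch below (my own, with hypothetical parameters) treats the ±1 walk Sk, a martingale and hence a submartingale, and compares λP(max_{k≤n} Sk ≥ λ) with EXn⁺.

```python
import random

random.seed(3)
n, trials, lam = 100, 20000, 10.0
hit = 0
pos = 0.0
for _ in range(trials):
    s = mx = 0
    for _ in range(n):
        s += random.choice((-1, 1))
        mx = max(mx, s)
    hit += (mx >= lam)
    pos += max(s, 0)               # X_n^+
lhs = lam * hit / trials           # lambda * P(max_k X_k >= lambda)
rhs = pos / trials                 # E[X_n^+]
# The first value should not exceed the second, by (1.1).
print(round(lhs, 2), round(rhs, 2))
```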

Theorem 1.2.16. Assume that (Xn, Fn)n≥0 is a non-negative submartingale, p > 1 and
E|Xi|^p < ∞ for all i ≤ n. Put Xn* = max_{0≤j≤n} Xj. Then
E[(Xn*)^p] ≤ (p/(p − 1))^p E[Xn^p].
Proof. Since Xn* ≤ Σ_{j=0}^n Xj, we have Xn* ∈ L^p. We have
E[(Xn*)^p] = ∫_0^∞ p a^{p−1} P(Xn* > a) da ≤ ∫_0^∞ p a^{p−1} E[Xn I_{Xn* ≥ a}]/a da
          = p ∫_0^∞ a^{p−2} ∫ Xn I_{Xn* ≥ a} dP da,
where the inequality follows from (1.1). Applying Fubini's theorem, we get
E[(Xn*)^p] ≤ p ∫ Xn ∫_0^{Xn*} a^{p−2} da dP = (p/(p − 1)) E[Xn (Xn*)^{p−1}]
          ≤ (p/(p − 1)) (E[(Xn*)^p])^{(p−1)/p} (E[Xn^p])^{1/p}
(applying Hölder's inequality in the last step).
Dividing both sides of this inequality by (E[(Xn*)^p])^{(p−1)/p}, we get
(E[(Xn*)^p])^{1/p} ≤ (p/(p − 1)) (E[Xn^p])^{1/p},
which implies
E[(Xn*)^p] ≤ (p/(p − 1))^p E[Xn^p].
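For p = 2, the inequality of Theorem 1.2.16 says E[(Xn*)²] ≤ 4E[Xn²]. The quick check below is my own illustration (arbitrary parameters) with the non-negative submartingale Xk = |Sk| built from a ±1 walk, for which E[Xn²] = E[Sn²] = n.

```python
import random

random.seed(4)
n, trials = 100, 10000
lhs = rhs = 0.0
for _ in range(trials):
    s = mx = 0
    for _ in range(n):
        s += random.choice((-1, 1))
        mx = max(mx, abs(s))       # X_n^* for X_k = |S_k|
    lhs += mx * mx
    rhs += s * s                   # X_n^2 = S_n^2, with E[S_n^2] = n
lhs /= trials
rhs /= trials
# The first value should be smaller than four times the second.
print(round(lhs, 1), round(4 * rhs, 1))
```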


1.2.5 Martingale convergence theorem

Assume that (Xn, Fn)n≥1 is an adapted sequence. For arbitrary a < b, we define the
sequence of stopping times
T0 = 0,
T1 = min {n > 0 : Xn ≤ a},
T2 = min {n > T1 : Xn ≥ b},
...
T2m−1 = min {n > T2m−2 : Xn ≤ a},
T2m = min {n > T2m−1 : Xn ≥ b},
and we put Tk = ∞ if the corresponding set is empty. Moreover, for each n ≥ 1, put
βn(a, b) = 0 if T2 > n;   βn(a, b) = max {m : T2m ≤ n} if T2 ≤ n.
Then βn(a, b) is the number of upcrossings of [a, b] by the sequence X1, X2, ..., Xn.
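The upcrossing count βn(a, b) can be implemented directly from its definition. The helper below is my own illustration: it scans the sequence, waiting alternately for a value ≤ a (the times T1, T3, ...) and then a value ≥ b (the times T2, T4, ...), counting each completed upcrossing.

```python
def upcrossings(xs, a, b):
    """Count beta_n(a, b): completed upcrossings of [a, b] by x_1, ..., x_n."""
    count = 0
    below = False          # True once the path has dropped to <= a
    for x in xs:
        if not below and x <= a:
            below = True   # a time T_{2m-1} has occurred
        elif below and x >= b:
            count += 1     # a time T_{2m} completes an upcrossing
            below = False
    return count

# Path drops to 0, rises to 3, drops to -1, rises to 4: two upcrossings of [0, 2].
print(upcrossings([1, 0, 3, -1, 4], a=0, b=2))  # -> 2
```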
Theorem 1.2.17. Assume that (Xn, Fn)n≥1 is a submartingale. Then for all n ≥ 1,
Eβn(a, b) ≤ E[(Xn − a)⁺]/(b − a).

Theorem 1.2.18. If (Xn, Fn)n≥1 is a submartingale such that sup_n E|Xn| < ∞, then
Xn converges to X∞ almost surely and E|X∞| < ∞.
Proof. Assume that P[lim sup Xn > lim inf Xn] > 0.
Since
{lim sup Xn > lim inf Xn} = ∪_{a,b∈Q, a<b} {lim sup Xn > b > a > lim inf Xn},
there exist a, b ∈ Q such that
P[lim sup Xn > b > a > lim inf Xn] > 0.          (1.4)
Let βn(a, b) be the number of upcrossings of [a, b] by X1, X2, ..., Xn and put
β∞(a, b) = lim_n βn(a, b).
From (1.4), P[β∞(a, b) = ∞] > 0, so
Eβ∞(a, b) = ∞.          (1.5)
On the other hand, by Theorem 1.2.17 we have
Eβn(a, b) ≤ E[(Xn − a)⁺]/(b − a) ≤ (E[Xn⁺] + |a|)/(b − a).
Now, applying the monotone convergence theorem, we get
Eβ∞(a, b) = lim_n Eβn(a, b) ≤ (sup_n E[Xn⁺] + |a|)/(b − a) < ∞,
which contradicts (1.5). Hence lim Xn = X∞ exists almost surely.
By Fatou's lemma, we have E|X∞| ≤ lim inf_n E|Xn| ≤ sup_n E|Xn| < ∞.

Remark 1.2.19. If (Xn) is a submartingale, then
E[Xn⁺] ≤ E|Xn| = 2E[Xn⁺] − E[Xn] ≤ 2E[Xn⁺] − E[X1],
hence sup_n E|Xn| < ∞ ⇔ sup_n E[Xn⁺] < ∞.

Corollary 1.2.20. If (Xn, Fn) is a non-positive submartingale, then lim_n Xn exists
and is finite almost surely.
Corollary 1.2.21. If (Xn, Fn) is a non-negative martingale, then lim_n Xn exists
almost surely.
Corollary 1.2.22. If (Xn, Fn) is a martingale and sup_n E|Xn|^p < ∞ for some p > 1,
then Xn converges almost surely and in L^p.
Proof. Since sup_n E|Xn|^p < ∞ and p > 1, we have sup_n E|Xn| < ∞.
Applying Theorem 1.2.18, Xn converges almost surely.
On the other hand, applying Theorem 1.2.16, we have
E[max_{0≤n≤N} |Xn|^p] ≤ (p/(p − 1))^p E|XN|^p ≤ (p/(p − 1))^p sup_n E|Xn|^p.
Applying the monotone convergence theorem, we get
E[sup_n |Xn|^p] < ∞.
In addition, we have
P[|Xn| > λ] ≤ λ^{−p} E|Xn|^p ≤ λ^{−p} E[sup_n |Xn|^p] → 0 as λ → ∞.
Then
sup_n E[|Xn|^p I_{|Xn|>λ}] ≤ sup_n E[sup_m |Xm|^p I_{|Xn|>λ}] → 0 as λ → ∞.
Therefore (|Xn|^p) is uniformly integrable. Since (Xn) also converges almost surely,
Xn converges in L^p.
Corollary 1.2.23. If X = (Xn, Fn) is a uniformly integrable submartingale, then
there exists an integrable random variable X∞ such that Xn converges to X∞ almost
surely and in L1 as n → ∞. Moreover, X = (Xn, Fn)1≤n≤∞ with F∞ = σ(∪n Fn) is a
submartingale.
Proof. Since (Xn) is uniformly integrable, sup_n E|Xn| < ∞.
From Theorem 1.2.18, Xn converges to X∞ almost surely.
Since (Xn) is uniformly integrable, Xn also converges to X∞ in mean.
For all A ∈ Fn and all m ≥ n, we have
E(IA |Xm − X∞|) → 0, as m → ∞.
Thus lim_{m→∞} ∫_A Xm dP = ∫_A X∞ dP.
Since (∫_A Xm dP)_{m≥n} is non-decreasing,
∫_A Xn dP ≤ ∫_A Xm dP ≤ ∫_A X∞ dP.
Therefore Xn ≤ E(X∞|Fn) almost surely for all n ≥ 1.
Corollary 1.2.24. Let (Mn, Fn) be a martingale with sup_n E|Mn|^p < ∞ for some
p > 1. If Mn → M∞ in L1, then Mn = E[M∞|Fn] for all n.
Proof. If j < n, we have Mj = E[Mn|Fj], so for A ∈ Fj we have
E[Mj; A] = E[Mn; A] → E[M∞; A] as n → ∞.
Since this holds for all A ∈ Fj, we get Mj = E[M∞|Fj].


1.3 Continuous-time Martingales

1.3.1 Stochastic processes

Let (Ω, F, P) be a probability space.
Definition 1.3.1.
• A collection of σ-algebras (Ft)t≥0 such that Ft ⊂ F for each t
and Ft ⊂ Fs if s ≥ t ≥ 0 is called a filtration.
• A filtration (Ft)t≥0 is called right continuous if Ft = ∩_{s>t} Fs for all t ≥ 0.
• A filtration (Ft)t≥0 is called complete if F0 contains all A ⊂ Ω such that A ⊂ B ∈ F
and P(B) = 0.
• A filtration (Ft)t≥0 is said to satisfy the usual conditions if it is right continuous
and complete.

Definition 1.3.2. A collection of random variables (Xt)t∈I taking values in Rd is called
a stochastic process with index set I and state space Rd.
The index set I can be R+, an interval [a, b], or the set of positive integers.
• If I is a subset of the positive integers, (Xt)t∈I is called a discrete-time stochastic
process; if I is a subset of R+, (Xt)t∈I is called a continuous-time stochastic process.
• When t ∈ I is fixed, the function Ω ∋ ω → Xt(ω) ∈ Rd is a random variable;
when we fix ω ∈ Ω, the function I ∋ t → Xt(ω) ∈ Rd is called a path of the stochastic
process X with respect to ω.
Definition 1.3.3. A stochastic process X is called
• continuous (right continuous, left continuous) if for almost every ω ∈ Ω, the function
t → Xt(ω) is continuous (right continuous, left continuous) on [0, ∞);
• cadlag if it is right continuous and, for almost every ω ∈ Ω, lim_{s↑t} Xs(ω) exists
and is finite for all t > 0;
• integrable if Xt is integrable for all t ≥ 0;
• adapted to the filtration (Ft) if Xt is Ft-measurable for each t ≥ 0;
• measurable if the function R+ × Ω ∋ (t, ω) → Xt(ω) ∈ Rd is B(R+) × F/B(Rd)-
measurable.
Definition 1.3.4. The minimal augmented filtration generated by X is the smallest
filtration that is right continuous and complete and with respect to which the process
X is adapted.
Definition 1.3.5. Assume that (Xt)t≥0 and (Yt)t≥0 are stochastic processes.
• Y is called a version of X if P[Xt = Yt] = 1 for all t ≥ 0;
• X and Y are said to be indistinguishable if P[Xt = Yt for all t ≥ 0] = 1.
1.3.2 Brownian motion

Let (Ω, F, P) be a probability space and (Ft)t≥0 be a filtration, not necessarily
satisfying the usual conditions.
Definition 1.3.6. A stochastic process (B(t))t≥0 is called a Brownian motion with
respect to (Ft )t≥0 and probability measure P if
12


1.3. CONTINUOUS-TIME MARTINGALE

1. B0 = 0;
2. B(.) has continuous paths;
3. B(t) − B(s) is independent of Fs for all 0 ≤ s < t;
4. B(t) − B(s) has distribution N (0, t − s).

Figure 1.1: Simulation of a typical Brownian motion path

Remark 1.3.7. A one-dimensional Brownian motion started at 0 is called a standard
Brownian motion.
Definition 1.3.8. Suppose B¹(t), ..., Bⁿ(t) are n independent Brownian motions.
Then B = ((B¹(t), ..., Bⁿ(t))T, t ≥ 0) is an n-dimensional Brownian motion.
Proposition 1.3.9. Suppose B is a Brownian motion and a > 0. Then Yt = aB(t/a²)
is also a Brownian motion.
Proof. We have Y0 = aB(0) = 0, and Yt clearly has continuous paths.
Put Gt = Ft/a². Then Yt is Gt-measurable.
If s < t,
Yt − Ys = a(B(t/a²) − B(s/a²))
is independent of Fs/a², so it is independent of Gs.
Moreover, Yt − Ys has a normal distribution with mean zero and
Var(Yt − Ys) = a² Var(B(t/a²) − B(s/a²)) = a²(t/a² − s/a²) = t − s.
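The scaling property can be checked numerically: simulate B up to time t/a² with Gaussian increments, scale by a, and verify that the sample variance of Yt is close to t. This sketch is my own illustration with arbitrary parameters.

```python
import random

random.seed(5)
a, t, dt, trials = 2.0, 1.0, 1e-3, 10000
steps = int(t / a**2 / dt)           # simulate B up to time t / a^2
acc = acc2 = 0.0
for _ in range(trials):
    b = sum(random.gauss(0.0, dt ** 0.5) for _ in range(steps))
    y = a * b                        # Y_t = a * B(t / a^2)
    acc += y
    acc2 += y * y
mean = acc / trials
var = acc2 / trials - mean * mean
print(round(var, 2))                 # should be close to t = 1.0
```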


Remark 1.3.10. A stochastic process X is Gaussian or jointly normal if all its
finite-dimensional distributions are jointly normal, i.e., if for each n ≥ 1 and
t1 < t2 < ... < tn, the collection of random variables Xt1, ..., Xtn is jointly normal.
Proposition 1.3.11. If B is a Brownian motion, then B is a Gaussian process.
Proof. Suppose B is a Brownian motion and let 0 = t0 < t1 < ... < tn.
We define
Xi = (Bti − Bti−1)/√(ti − ti−1), i = 1, 2, ..., n.
Then Xi is independent of Fti−1, so independent of X1, ..., Xi−1.
Moreover, Xi is a mean-zero random variable with variance one.
We can write
Btj = Σ_{i=1}^j (ti − ti−1)^{1/2} Xi, j = 1, ..., n,
and so (Bt1, ..., Btn) is jointly normal. This implies that Brownian motion is a
Gaussian process.

Remark 1.3.12. The law of a finite collection of jointly normal random variables is
determined by their means and covariances. Let B be a Brownian motion. Then if
s ≤ t, the covariance of Bs and Bt satisfies
t − s = Var(Bt − Bs) = Var Bt + Var Bs − 2Cov(Bs, Bt) = t + s − 2Cov(Bs, Bt),
which implies Cov(Bs, Bt) = s.
We can rewrite this as Cov(Bs, Bt) = s ∧ t.
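The covariance formula Cov(Bs, Bt) = s ∧ t can also be estimated by Monte Carlo: build Bs from Gaussian increments, extend the same path to Bt with fresh increments, and average the product. The parameters below are my own illustrative choices.

```python
import random

random.seed(6)
s, t, dt, trials = 0.5, 1.0, 1e-2, 20000
acc = 0.0
for _ in range(trials):
    # Build B_s, then extend the same path to B_t with fresh increments.
    bs = sum(random.gauss(0.0, dt ** 0.5) for _ in range(int(s / dt)))
    bt = bs + sum(random.gauss(0.0, dt ** 0.5) for _ in range(int((t - s) / dt)))
    acc += bs * bt
cov = acc / trials
print(round(cov, 2))  # should be close to s ∧ t = 0.5
```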
Theorem 1.3.13. If B is a process such that all the finite-dimensional distributions
are jointly normal, EBs = 0 for all s, Cov(Bs, Bt) = s when s ≤ t, and the paths of Bt
are continuous, then B is a Brownian motion.
Proof. For Ft we take the filtration generated by B.
If s = t, then Var Bt = Cov(Bt, Bt) = t.
In particular, Var B0 = 0, and since EB0 = 0, we get B0 = 0, a.s. We have
Var(Bt − Bs) = Var Bt + Var Bs − 2Cov(Bs, Bt) = t + s − 2s = t − s.
If r ≤ s < t, we have
Cov(Bt − Bs, Br) = Cov(Bt, Br) − Cov(Bs, Br) = r − r = 0.
Since the variables are jointly normal, zero covariance implies independence, so
Bt − Bs is independent of Br. This implies that Bt − Bs is independent of Fs.
Remark 1.3.14. If B is a Brownian motion with respect to the filtration generated by
B, then it is also a Brownian motion with respect to the minimal augmented filtration.
1.3.3 Continuous-time Martingales


Definition 1.3.15. A random variable T : Ω → [0, ∞] is called a stopping time if
{T ≤ t} ∈ Ft for all t.
T is called a finite stopping time if T < ∞ almost surely.
T is called a bounded stopping time if there exists K ∈ [0, ∞) such that T ≤ K
almost surely.
For each stochastic process X and stopping time T, we denote
XT(ω) = XT(ω)(ω).
Proposition 1.3.16. Assume that the filtration (Ft) satisfies the usual conditions.
1. T is a stopping time if and only if {T < t} ∈ Ft for all t.
2. If T = t almost surely then T is a stopping time.
3. If S and T are stopping times, then so are S ∧ T, S ∨ T .
4. If (Tn)n≥1 is a sequence of stopping times, then so are supn Tn and infn Tn.

5. If s ≥ 0 and S is a stopping time, then so is T = S + s.
Proposition 1.3.17. Assume that T is a finite stopping time. Put
Tn(ω) = (k + 1)/2^n if k/2^n ≤ T(ω) < (k + 1)/2^n.
Then (Tn)n≥1 is a sequence of stopping times converging to T almost surely. (Tn) is
called the discrete approximation sequence of T.
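The discrete approximation is just rounding T up to the next dyadic grid point. A small helper (my own illustration) makes the monotone convergence Tn ↓ T visible:

```python
import math

def discrete_approx(T, n):
    """T_n = (k + 1) / 2**n when k / 2**n <= T < (k + 1) / 2**n."""
    k = math.floor(T * 2**n)
    return (k + 1) / 2**n

T = 0.731
approx = [discrete_approx(T, n) for n in (1, 4, 8, 12)]
print(approx)
# Each T_n is strictly larger than T, and the sequence decreases to T.
```

Rounding up (rather than down) is what makes each Tn a stopping time: {Tn ≤ (k + 1)/2^n} = {T < (k + 1)/2^n} is determined by time (k + 1)/2^n.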
For each Borel set A, put TA = inf {t > 0 : Xt ∈ A} .

Proposition 1.3.18. Suppose (Ft) satisfies the usual conditions and the stochastic
process (Xt) is adapted to (Ft) and has continuous paths.
1. If A is open, then TA is a stopping time.
2. If A is closed, then TA is a stopping time.
Proof. (1) If A is an open set, we have
{TA < t} = ∪_{q∈Q+, q<t} {Xq ∈ A} ∈ Ft,
where Q+ is the set of non-negative rational numbers.
(2) If A is a closed set, put
An = {x : d(x, A) = inf_{y∈A} d(x, y) < 1/n}.
Each An is open, so TAn is a stopping time by (1).
In addition, (An) decreases, while (TAn) increases and is bounded by TA.
Put T = supn TAn; then T is a stopping time and T ≤ TA.
On the other hand, since X has continuous paths, on {T < ∞},
XT = limn XTAn.
Moreover, XTAn ∈ An ⊂ Am for n ≥ m.
Letting n → ∞, we get XT ∈ Am for all m ≥ 1. This implies XT ∈ ∩_{m≥1} Am = A,
so T ≥ TA on {T < ∞}. Therefore T = TA, a.s., and TA is a stopping time.
Let T be a stopping time. We put
FT = {A ∈ F : A ∩ {T ≤ t} ∈ Ft for all t > 0}.
FT is the σ-algebra of events that are known by time T.
Proposition 1.3.19. Suppose (Ft ) is a filtration satisfying the usual conditions.
1. FT is a σ-algebra.
2. If S ≤ T , FS ⊂ FT .
3. If FT + = ∩ε>0 FT +ε , then FT + = FT .
4. If Xt has right continuous paths, then XT is FT measurable.
Let (Ft )t∈I be a filtration, not necessarily satisfying the usual conditions.
Definition 1.3.20. Stochastic process (Mt )t∈I is called a martingale with respect to
the filtration (Ft ) and the probability measure P if

1. E[|Mt |] < ∞ for all t;
2. Mt is Ft measurable for all t;

3. E[Mt |Fs ] = Ms ,a.s., for all t > s.
Remark 1.3.21.
• Part (2) of the definition can be rephrased as: Mt is adapted to (Ft).
• In part (3), if "=" is replaced by "≥", Mt is a submartingale; if "=" is replaced
by "≤", Mt is a supermartingale.
Example 1.3.22. Let Mt = Bt, where Bt is a Brownian motion with respect to (Ft).
To show Mt is a martingale, we note:
• E[|Mt|] < ∞ and Mt is clearly Ft-measurable for all t.
• E[Mt|Fs] = Ms + E[Bt − Bs|Fs] = Ms + E[Bt − Bs] = Ms.
Example 1.3.23. Let Mt = Bt² − t, where Bt is a Brownian motion with respect to
(Ft). To show Mt is a martingale, we note:
• E[|Mt|] < ∞ and Mt is clearly Ft-measurable for all t.
• E[Mt|Fs] = E[(Bt − Bs + Bs)²|Fs] − t
= Bs² + E[(Bt − Bs)²|Fs] + 2E[Bs(Bt − Bs)|Fs] − t
= Bs² + E[(Bt − Bs)²] + 2Bs E[Bt − Bs] − t
= Bs² + (t − s) − t = Ms.
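Both continuous-time examples can be checked by Monte Carlo: for fixed t, the sample means of Bt and Bt² − t should be near EM0 = 0. This is my own sketch with arbitrary parameters, approximating Bt by summed Gaussian increments.

```python
import random

random.seed(7)
t, dt, trials = 1.0, 1e-2, 20000
m1 = m2 = 0.0
for _ in range(trials):
    b = sum(random.gauss(0.0, dt ** 0.5) for _ in range(int(t / dt)))
    m1 += b            # sample of M_t = B_t
    m2 += b * b - t    # sample of M_t = B_t^2 - t
m1 /= trials
m2 /= trials
print(round(m1, 2), round(m2, 2))  # both should be near 0
```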
Theorem 1.3.24. (Doob's inequalities) Assume that Mt is a martingale or non-negative
submartingale with paths that are right continuous with left limits. Then
1. For a > 0,
P(sup_{s≤t} |Ms| ≥ a) ≤ E|Mt|/a.
2. If 1 < p < ∞, then
E[sup_{s≤t} |Ms|^p] ≤ (p/(p − 1))^p E|Mt|^p.
Proof. • Case 1: Mt is a martingale.
Put Dn = {kt/2^n : 0 ≤ k ≤ 2^n}, Nk(n) = Mkt/2^n and Gk(n) = Fkt/2^n.
Then (Nk(n), Gk(n))k≥0 is a discrete-time martingale.
Put An = {sup_{s≤t, s∈Dn} |Ms| > λ}. Applying Doob's inequality for discrete-time
martingales, we have
P(An) = P(max_{k≤2^n} |Nk(n)| > λ) ≤ E|N2^n(n)|/λ = E|Mt|/λ.
Since (An) is an increasing sequence and Mt is right continuous,
∪n An = {sup_{s≤t} |Ms| > λ},
so
P(sup_{s≤t} |Ms| > λ) = P(∪n An) = lim_{n→∞} P(An) ≤ E[|Mt|]/λ.
Applying this inequality with λ = a − ε and letting ε → 0, we obtain (1).
To prove (2), we apply Doob's inequality for discrete-time martingales and obtain
E[max_{k≤2^n} |Nk(n)|^p] ≤ (p/(p − 1))^p E[|N2^n(n)|^p] = (p/(p − 1))^p E[|Mt|^p].
Since M is right continuous, max_{k≤2^n} |Nk(n)|^p increases to sup_{s≤t} |Ms|^p.
Applying Fatou's lemma, we obtain (2).
• Case 2: the case where Mt is a non-negative submartingale is nearly identical to
Case 1.

Theorem 1.3.25. Let (Ft) be a filtration satisfying the usual conditions. If Mt is a
martingale or non-negative submartingale whose paths are right continuous,
sup_{t≥0} E[Mt²] < ∞, and T is a finite stopping time, then EMT ≥ EM0.
Proof. We do the case where Mt is a submartingale, the martingale case being similar.
By Doob's inequality, E[sup_{s≤t} Ms²] ≤ 4EMt².
Letting t → ∞, by Fatou's lemma, E[sup_{t≥0} Mt²] < ∞.
Suppose first that T < K, a.s., for some real number K. Define Tn as in Proposition
1.3.17, and let Nk(n) = Mk/2^n, Gk(n) = Fk/2^n and Sn = 2^n Tn.
Applying Doob's optional stopping theorem for the submartingale Nk(n), we have
EM0 = EN0(n) ≤ ENSn(n) = EMTn.
Since M is right continuous, MTn → MT, a.s.
The random variables |MTn| are bounded by 1 + sup_{t≥0} Mt², so EMTn → EMT (by
dominated convergence). Hence EMT ≥ EM0 for bounded T.
For general finite T, we apply the above to the stopping time T ∧ K to get
EMT∧K ≥ EM0. The random variables |MT∧K| are bounded by 1 + sup_{t≥0} Mt², so by
dominated convergence, letting K → ∞ we get EMT ≥ EM0.
Theorem 1.3.26. Suppose (Xt, Ft) is a submartingale and the filtration (Ft)t≥0
satisfies the usual conditions. Then the process X has a right continuous version
if and only if the function t → E[Xt] is right continuous. Moreover, if such a right
continuous version exists, then it can be chosen to have left limits and to be adapted
to (Ft)t≥0.
Theorem 1.3.27. Suppose (Xt, Ft) is a submartingale with right continuous paths and
C = sup_{t≥0} E[Xt⁺] < ∞. Then X∞(ω) = lim_{t→∞} Xt(ω) exists almost surely and
E[|X∞|] < ∞.
1.3.4 Markov properties of Brownian motion

Theorem 1.3.28. (Markov property)
Let (Ft) be a filtration, not necessarily satisfying the usual conditions, and B be a
Brownian motion with respect to (Ft). If u is a fixed time, then Xt = Bt+u − Bu is a
Brownian motion independent of Fu.
Proof. We put Gt = Ft+u.
Then X clearly has continuous paths, is zero at time 0 and is adapted to (Gt).
Moreover, Xt − Xs = Bt+u − Bs+u, so Xt − Xs is a mean-zero normal random
variable with variance (t + u) − (s + u) = t − s that is independent of Fs+u = Gs.
When the fixed time u is replaced by a finite stopping time, we obtain the strong
Markov property.
Theorem 1.3.29. Let (Ft) be a filtration, not necessarily satisfying the usual
conditions, and B be a Brownian motion with respect to (Ft). If T is a finite stopping
time, then Xt = Bt+T − BT is a Brownian motion independent of FT.
Proof. Firstly, for m ≥ 1, t1 < ... < tm, and f a bounded continuous function on Rm,