Wiener-Hopf Monte-Carlo simulation techniques with applications in mathematical finance

Geoffrey Boutard
University of Rennes 1, ENS Cachan Antenne de Bretagne

September 9, 2011

Abstract. This report deals with the Wiener-Hopf factorisation of Lévy processes and some associated numerical work with relevance to a number of applications in mathematical finance. This placement was carried out at Prob-L@B, the Probability Laboratory of the University of Bath, under the direction of Andreas E. Kyprianou (Professor of Probability).

Contents

1 Introduction

2 Stochastic processes and Brownian motion
2.1 Kolmogorov extension Theorem
2.2 Continuity of the pre-Brownian motion
2.3 Martingales and Brownian motion
2.4 Girsanov's Theorem

3 Poisson processes
3.1 Example
3.2 Poisson process
3.3 Existence of a Poisson process
3.4 One-dimensional Poisson process
3.4.1 Another definition of a Poisson process in this case
3.4.2 Construction of a one-dimensional Poisson process
3.4.3 Martingales

4 Lévy processes
4.1 Compound Poisson process
4.2 Infinitely divisible distribution
4.2.1 Poisson process
4.2.2 Compound Poisson process
4.2.3 Linear Brownian motion

5 The Lévy-Itô decomposition and path variation
5.1 Preliminaries
5.2 Proof of the Itô-Lévy decomposition
5.3 Path variation

6 Simulations
6.1 The Wiener-Hopf factorisation
6.2 Wiener-Hopf Monte-Carlo simulation technique
6.3 β-class of Lévy processes
6.4 Results

7 Conclusion

A Programs in Matlab


1 Introduction

Lévy processes were introduced by the French mathematician Paul Lévy and underpin a wide variety of applied and theoretical stochastic models. Here, we use the Wiener-Hopf factorisation to simulate the first exit problem, which has many applications in mathematical finance. In Sections 2 and 3, we give two examples of very important Lévy processes: Brownian motion and the Poisson process. In Sections 4 and 5, we consider Lévy processes and study their path variation. Finally, in Section 6, we give the Wiener-Hopf factorisation and show how we use it to simulate the first exit problem.


2 Stochastic processes and Brownian motion

A stochastic process on a probability space (Ω, F, P) is a family of random variables (X_t)_{t∈T}, where the index set T can be any set. In what follows we take T = R+. For each ω in Ω, we call the function t ↦ X_t(ω) the path of ω. We say that the paths are continuous, or that the process is continuous (respectively right-continuous), if the functions t ↦ X_t(ω) are continuous (respectively right-continuous) for each ω.

Definition 2.1 (Brownian motion). A Brownian motion is a stochastic process (B_t)_{t∈R+} with the following properties.

(i) (Independent increments) The increments on disjoint intervals are independent: if 0 ≤ t_1 < t_2 < ··· < t_n, then the random variables B_{t_2} − B_{t_1}, ..., B_{t_n} − B_{t_{n−1}} are independent.

(ii) (Stationary increments) If s < t, then the increment B_t − B_s of the process on the interval (s, t] is normally distributed with mean 0 and variance t − s.

(iii) The process starts at 0 a.s.: B_0 = 0 with probability one.

(iv) The paths of the process (B_t) are all continuous.

We can already give some properties of the Brownian motion (B_t)_{t≥0}.

Remark 2.1. Let (B_t)_{t≥0} be a Brownian motion.

• If s < t, then B_t − B_s is independent of σ(B_u, u ≤ s).
• For t ≥ 0, B_t is normally distributed with mean 0 and variance t.

Indeed, let (B_t)_{t≥0} be a Brownian motion.

• Let s < t and u < s. As (B_t)_{t≥0} satisfies (i) and (iii) of Definition 2.1, we can apply (i) with t_1 = 0, t_2 = u, t_3 = s and t_4 = t. So B_t − B_s, B_s − B_u and B_u − B_0 = B_u are independent. Then B_t − B_s is independent of σ(B_u, u ≤ s).
• We apply (ii) with s = 0 and t > 0, so B_t − B_0 = B_t is normally distributed with mean 0 and variance t.

Next, we need the Kolmogorov extension Theorem to prove that such a stochastic process exists.
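Properties (i)-(iii) translate directly into a sampling scheme for a Brownian path on a finite grid: cumulate independent N(0, dt) increments. The report's own programs are written in Matlab (Appendix A); the following is an illustrative Python sketch assuming NumPy, with function name and parameters of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def brownian_path(T=1.0, n_steps=1000, rng=rng):
    """Sample a Brownian path at n_steps equally spaced times on [0, T]:
    B_0 = 0 and B_{t_{k+1}} - B_{t_k} ~ N(0, dt), independent of the past."""
    dt = T / n_steps
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(increments)))

path = brownian_path()
# Many sampled values of B_1 should follow the N(0, 1) law of Remark 2.1.
samples = np.array([brownian_path()[-1] for _ in range(2000)])
```

On a fine grid, linear interpolation of the sampled points approximates the continuous path of property (iv).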

2.1 Kolmogorov extension Theorem

Let T be an index set and, for t ∈ T, let (Ω_t, F_t) be a measurable space. If v = {t_1, ..., t_n} ⊂ T with t_1 < ··· < t_n, the space (∏_{i=1}^n Ω_{t_i}, ∏_{i=1}^n F_{t_i}) is denoted by (Ω_v, F_v). If u = {t_{i_1}, ..., t_{i_k}} is a nonempty subset of v and y = (y(t_1), ..., y(t_n)) ∈ Ω_v, the k-tuple (y(t_{i_1}), ..., y(t_{i_k})) is denoted by y_u. If P_v is a probability measure on F_v, the projection of P_v on F_u is the probability measure π_u(P_v) on F_u defined by

∀B ∈ F_u, [π_u(P_v)](B) = P_v(y ∈ Ω_v : y_u ∈ B).

Similarly, if Q is a probability measure on ∏_{t∈T} F_t, the projection of Q on F_v is defined by

∀B ∈ F_v, [π_v(Q)](B) = Q(y ∈ ∏_{t∈T} Ω_t : y_v ∈ B).

Theorem 2.1.1 (Kolmogorov extension Theorem). For each t in an arbitrary index set T, let Ω_t = R and F_t the Borel sets of R. Assume that for each finite nonempty subset v of T we are given a probability measure P_v on F_v, and that the P_v are consistent, that is, π_u(P_v) = P_u for each nonempty u ⊂ v. Then there is a unique probability measure P on F = ∏_{t∈T} F_t such that π_v(P) = P_v for all v.

With this Theorem, we can prove the existence of a stochastic process which satisfies points (i), (ii) and (iii) of Definition 2.1. This stochastic process is called a pre-Brownian motion.

Theorem 2.1.2. There exists a probability P on the measurable space (R^{R+}, B(R)^{R+}) such that, under this probability, the coordinate functions X_t satisfy points (i), (ii) and (iii) of Definition 2.1.

Proof. Let 0 = t_1 < t_2 < ··· < t_n, denote by S the set {t_1, ..., t_n} and by F_S the σ-field of Ω = R^{R+} generated by the coordinates X_{t_1}, ..., X_{t_n}. Let π_S be the probability on (Ω, F_S) which makes X_0 = 0 a.s. and the X_{t_k} − X_{t_{k−1}} independent and normally distributed with mean 0 and variance t_k − t_{k−1}, for k = 2, ..., n. This probability exists, as the random variables X_{t_k} − X_{t_{k−1}} can be expressed as a linear transformation of the random variables X_{t_k}. Next, we check that this family of probabilities is consistent. Let S_i be the subset of S obtained by removing the element t_i, for some i > 1.

We know that under π_S the random variables X_{t_k} − X_{t_{k−1}} are independent and normally distributed with mean 0 and variance t_k − t_{k−1}, for k = 2, ..., n. As the cylinder sets generate the product σ-field, we have to prove that under π_S the random variables X_{t_n} − X_{t_{n−1}}, ..., X_{t_{i+1}} − X_{t_{i−1}}, ..., X_{t_2} − X_{t_1} are independent and normally distributed with mean 0 and variances t_n − t_{n−1}, ..., t_{i+1} − t_{i−1}, ..., t_2 − t_1. This is true because X_{t_{i+1}} − X_{t_{i−1}} is the sum of the two independent, normally distributed random variables X_{t_{i+1}} − X_{t_i} and X_{t_i} − X_{t_{i−1}}. Indeed, the characteristic function of a random variable X which is normally distributed with mean 0 and variance λ is given by

Φ_X(s) = e^{−λs²/2} for s ∈ R.

Here, X_{t_{i+1}} − X_{t_i} and X_{t_i} − X_{t_{i−1}} are independent, so

Φ_{X_{t_{i+1}} − X_{t_{i−1}}}(s) = Φ_{X_{t_{i+1}} − X_{t_i}}(s) Φ_{X_{t_i} − X_{t_{i−1}}}(s) = e^{−(t_{i+1} − t_i)s²/2} e^{−(t_i − t_{i−1})s²/2} = e^{−(t_{i+1} − t_{i−1})s²/2}.

As the characteristic function of a random variable determines its law, X_{t_{i+1}} − X_{t_{i−1}} is normally distributed with mean 0 and variance t_{i+1} − t_{i−1}.

Remark 2.1.1. If a stochastic process (X_t)_{t≥0} is a pre-Brownian motion, then so is (−X_t)_{t≥0}.

2.2 Continuity of the pre-Brownian motion

We have constructed a stochastic process which satisfies properties (i), (ii) and (iii) of Definition 2.1, but we have no information about the continuity of its paths. We will prove that there exists a continuous version (B_t)_{t≥0} of the process (X_t)_{t≥0}, that is, a process with continuous paths such that, for each t ≥ 0, P(X_t = B_t) = 1.

Lemma 2.2.1. Let 0 = t_0 < ··· < t_n. If the process (X_t)_{t≥0} satisfies properties (i), (ii) and (iii) of Definition 2.1, then, for a > 0,

P(max_{k=0,...,n} X_{t_k} > a) ≤ 2 P(X_{t_n} > a),

and

P(max_{k=0,...,n} |X_{t_k}| > a) ≤ 2 P(|X_{t_n}| > a).

Proof. Let T be defined on {max_{k=0,...,n} X_{t_k} > a} as the first time t_i such that X_{t_i} > a; on the set {max_{k=0,...,n} X_{t_k} ≤ a} we take T = +∞. By the symmetry of zero-mean Gaussian distributions, P(X_{t_n} − X_{t_i} > 0) = 1/2 for i < n. Hence,

P(max_{k=0,...,n} X_{t_k} > a) = Σ_{i=0}^n P(T = t_i) = 2 Σ_{i=0}^{n−1} P(T = t_i) P(X_{t_n} − X_{t_i} > 0) + P(T = t_n).

Since {T = t_i} ∈ σ(X_{t_0}, ..., X_{t_i}) and X_{t_n} − X_{t_i} is independent of σ(X_{t_0}, ..., X_{t_i}), we have

P(max_{k=0,...,n} X_{t_k} > a) = 2 Σ_{i=0}^{n−1} P(T = t_i, X_{t_n} − X_{t_i} > 0) + P(T = t_n) ≤ 2 Σ_{i=0}^{n−1} P(T = t_i, X_{t_n} > a) + P(T = t_n, X_{t_n} > a) ≤ 2 P(X_{t_n} > a),

where we used that X_{t_i} > a on {T = t_i}, so that X_{t_n} − X_{t_i} > 0 implies X_{t_n} > a. Applying this inequality to (−X_t)_{t≥0} gives

P(min_{k=0,...,n} X_{t_k} < −a) ≤ 2 P(X_{t_n} < −a).

Thus,

P(max_{k=0,...,n} |X_{t_k}| > a) = P({max_{k=0,...,n} X_{t_k} > a} ∪ {min_{k=0,...,n} X_{t_k} < −a}) ≤ 2(P(X_{t_n} > a) + P(X_{t_n} < −a)) = 2 P(|X_{t_n}| > a).
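The first inequality of Lemma 2.2.1 can be checked by Monte Carlo on a discrete Gaussian skeleton of the pre-Brownian motion (a Python sketch assuming NumPy; the grid size, barrier and sample count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Empirical check of P(max_k X_{t_k} > a) <= 2 P(X_{t_n} > a)
# for the discrete skeleton X_{t_k} = sum of k i.i.d. N(0, dt) increments.
n_paths, n_steps, a, T = 20000, 200, 1.0, 1.0
dt = T / n_steps
incs = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(incs, axis=1)            # X_{t_1}, ..., X_{t_n}
p_max = np.mean(paths.max(axis=1) > a)     # estimates P(max_k X_{t_k} > a)
p_end = np.mean(paths[:, -1] > a)          # estimates P(X_{t_n} > a)
```

For Brownian motion the reflection principle makes the bound essentially tight: as the grid is refined, p_max approaches 2·p_end from below.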

Corollary 2.2.1. If the process (X_t)_{t≥0} satisfies properties (i), (ii) and (iii) of Definition 2.1 and J is a countable subset of [0, N], then, for a > 0,

P(sup_{t∈J} |X_t| > a) ≤ 4 P(|X_N| > a).

Proof. As J is countable, we can write it as a sequence {t_n : n ∈ N}. For n ≥ 1, define s_n = max{t_k : k = 1, ..., n}. Let n ≥ 1; applying the previous lemma twice (first to the points t_1, ..., t_n, then to the pair {s_n, N}), we have

P(max_{k=1,...,n} |X_{t_k}| > a) ≤ 2 P(|X_{s_n}| > a) ≤ 2 P(max{|X_{s_n}|, |X_N|} > a) ≤ 4 P(|X_N| > a).

Hence, using Fatou's lemma,

P(sup_{t∈J} |X_t| > a) = P(lim inf_{n→∞} {max_{k=1,...,n} |X_{t_k}| > a}) ≤ lim inf_{n→∞} P(max_{k=1,...,n} |X_{t_k}| > a) ≤ 4 P(|X_N| > a).

Lemma 2.2.2. Let (Y_n)_{n≥1} be a sequence of random variables. Then Y_n → Y a.s. iff for every δ > 0,

P(∪_{k=n}^{+∞} {ω : |Y_k(ω) − Y(ω)| ≥ δ}) → 0 as n → +∞.

Proof. Let B_{nδ} := {ω : |Y_n(ω) − Y(ω)| ≥ δ} and B_δ := lim sup_n B_{nδ} = ∩_{n=1}^{+∞} ∪_{k=n}^{+∞} B_{kδ}. As P is a finite measure and ∪_{k=n}^{+∞} B_{kδ} ↓ B_δ, we have

lim_{n→+∞} P(∪_{k=n}^{+∞} B_{kδ}) = P(B_δ).

We also have

{Y_n ↛ Y} = ∪_{p=1}^{+∞} B_{1/p},

and B_{δ_1} ⊂ B_{δ_2} for δ_1 > δ_2. Thus,

Y_n → Y a.s. iff P(B_δ) = 0 for all δ > 0, iff P(∪_{k=n}^{+∞} B_{kδ}) → 0 as n → +∞, for all δ > 0.

Theorem 2.2.1. Let (X_t)_{t≥0} be the process constructed in Theorem 2.1.2. For every N > 0, the restriction of the process (X_t)_{t≥0} to the dyadic rationals is a.s. uniformly continuous on [0, N].

Proof. By rescaling the real line we can assume that N = 1. Let S be the set of all dyadic rationals in [0, 1]. We want to prove that, almost surely,

∀ε > 0, ∃δ > 0, ∀(s, t) ∈ S², |s − t| < δ ⇒ |X_t − X_s| < ε,

which is equivalent to

lim_{n→+∞} sup_{t,s∈S, |t−s|<1/2^n} |X_t − X_s| = 0 a.s.

Let Y_n = sup_{t,s∈S, |t−s|<1/2^n} |X_t − X_s|. As (Y_n)_{n∈N} is nonincreasing and nonnegative, it is sufficient (by the previous lemma) to show that

∀a > 0, lim_{n→+∞} P(Y_n > a) = 0.

For k = 0, ..., 2^n − 1, let

Z_{n,k} = sup_{t ∈ S ∩ [k/2^n, (k+1)/2^n]} |X_t − X_{k/2^n}|.

Let s and t be such that |t − s| < 1/2^n. Let k_1 be an integer such that t ∈ [k_1/2^n, (k_1+1)/2^n] and k_2 such that s ∈ [k_2/2^n, (k_2+1)/2^n] (we have |k_2 − k_1| ∈ {0, 1}). So

|X_s − X_t| ≤ |X_t − X_{k_1/2^n}| + |X_{k_1/2^n} − X_{k_2/2^n}| + |X_{k_2/2^n} − X_s| ≤ Z_{n,k_1} + Z_{n,min(k_1,k_2)} + Z_{n,k_2} ≤ 3 max_{k=0,...,2^n−1} Z_{n,k}.

Thus,

Y_n ≤ 3 max_{k=0,...,2^n−1} Z_{n,k}.

The process (X_t)_{t≥0} satisfies points (i), (ii) and (iii) of Definition 2.1, so the random variables Z_{n,k}, k = 0, ..., 2^n − 1, are identically distributed and

P(max_k Z_{n,k} > a) = P(∪_k {Z_{n,k} > a}) ≤ Σ_{k=0}^{2^n−1} P(Z_{n,k} > a) = 2^n P(Z_{n,0} > a).

Using the previous corollary,

P(Z_{n,0} > a) = P(sup_{t ∈ S ∩ [0, 1/2^n]} |X_t| > a) ≤ 4 P(|X_{1/2^n}| > a).

The next lemma shows that 2^n P(Z_{n,0} > a) ≤ 2^{n+2} P(|X_{1/2^n}| > a) converges to 0 as n → +∞, since X_{1/2^n} is normally distributed with mean 0 and variance 1/2^n.

Lemma 2.2.3. Let a > 0 and, for each β > 0, let X_β be a normally distributed random variable with mean 0 and variance β. Then

lim_{β→0} (1/β) P(|X_β| > a) = 0.

Proof. We have

(1/β) P(|X_β| > a) = (1/β) ∫_{|x|>a} (1/√(2πβ)) e^{−x²/(2β)} dx = (2/(β√(2πβ))) ∫_a^{+∞} e^{−x²/(2β)} dx = (2/(β√π)) ∫_{a/√(2β)}^{+∞} e^{−y²} dy ≤ (2/(β√π)) ∫_{a/√(2β)}^{+∞} e^{−y} dy = (2/(β√π)) e^{−a/√(2β)},

where we used the change of variables y = x/√(2β) and, for the inequality, that e^{−y²} ≤ e^{−y} on the integration range as soon as β is small enough that a/√(2β) > 1. The last quantity tends to 0 as β → 0, which proves the claim.
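Since P(|X_β| > a) = erfc(a/√(2β)) for X_β ~ N(0, β), the limit in Lemma 2.2.3 is easy to illustrate numerically (a Python sketch; the helper name and the values of β are ours):

```python
import math

def tail_ratio(beta, a=1.0):
    """(1/beta) * P(|X_beta| > a) for X_beta ~ N(0, beta),
    using P(|X_beta| > a) = erfc(a / sqrt(2*beta))."""
    return math.erfc(a / math.sqrt(2.0 * beta)) / beta

# The ratio collapses super-polynomially fast as beta -> 0.
ratios = [tail_ratio(b) for b in (0.5, 0.1, 0.05, 0.01)]
```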

Theorem 2.2.2. There exists a process satisfying the conditions of Definition 2.1.

Proof. Let Y_n be the random variables defined in the proof of the previous Theorem and A = {Y_n → 0}. A is a measurable set and P(A) = 1. On A we define, for all t ≥ 0, B_t(ω) as the limit of the random variables X_s(ω) as s approaches t along the dyadic rationals; on the complement of A we take B_t(ω) = 0 for all t. If ω ∉ A, the function t ↦ B_t(ω) is clearly continuous. If ω ∈ A, the process is continuous by the previous Theorem. Indeed, let ε > 0 and s, t ≥ 0. By the definition of B_t(ω), there exists δ > 0 such that, for dyadic rationals u and v with |t − u| < δ and |s − v| < δ,

|B_t(ω) − B_s(ω)| ≤ |B_t(ω) − X_u(ω)| + |X_u(ω) − X_v(ω)| + |X_v(ω) − B_s(ω)| ≤ 2ε + |X_u(ω) − X_v(ω)|.

By the previous Theorem, there exists δ_1 > 0 such that |X_u(ω) − X_v(ω)| < ε whenever |u − v| < δ_1. So, taking |t − s| < δ_1 − 2δ (we can suppose that δ < δ_1/2), we have |B_t(ω) − B_s(ω)| ≤ 3ε. So the process (B_t)_{t≥0} is continuous.

We now prove that the process (B_t)_{t≥0} is a version of the process (X_t)_{t≥0}. Let t ≥ 0; there is a sequence (t_n)_{n≥0} of dyadic rationals which converges to t. The random variables X_{t_n} converge a.s. to B_t, as P(X_{t_n} → B_t) ≥ P(A) = 1; thus X_{t_n} converge in probability to B_t. By Chebyshev's inequality, we have, for all a > 0,

P(|X_{t_n} − X_t| > a) ≤ |t_n − t|/a²,

hence X_{t_n} converge in probability to X_t. By uniqueness of limits in probability, B_t = X_t a.s.

Theorem 2.2.3. The paths of a Brownian motion are almost surely nowhere differentiable.

This Theorem will be useful to study the path variation of processes. The proof of this Theorem is given in [1] (page 408).

2.3 Martingales and Brownian motion

In this section, (B_t)_{t≥0} will be a Brownian motion.

Definition 2.3.1 (Martingale). Let (Ω, F, P) be a probability space and (F_t)_{t≥0} be a nondecreasing family of sub-σ-fields of F. A family of random variables (X_t)_{t≥0} is a martingale with respect to (F_t)_{t≥0} iff

(i) for all t ≥ 0, X_t is F_t-measurable and absolutely integrable, and
(ii) if 0 ≤ s < t, then E[X_t|F_s] = X_s a.s.

If F_t = σ(X_s, s ≤ t), we will say that (X_t)_{t≥0} is a martingale without specifying the family of σ-fields.

Theorem 2.3.1. The processes (B_t)_{t≥0}, (B_t² − t)_{t≥0}, (B_t³ − 3tB_t)_{t≥0} and (exp(αB_t − α²t/2))_{t≥0}, where α is a complex number, are martingales.

Proof. Let B_t = σ(B_s, s ≤ t). For each t ≥ 0, B_t is B_t-measurable and integrable:

E(|B_t|) = ∫_R (|x|/√(2πt)) e^{−x²/(2t)} dx = 2 ∫_{R+} (x/√(2πt)) e^{−x²/(2t)} dx = [−(2t/√(2πt)) e^{−x²/(2t)}]_0^{+∞} = √(2t/π) < +∞.

If s < t, then the random variable B_t − B_s is independent of B_s and is normally distributed with mean 0 and variance t − s, so

E[B_t − B_s|B_s] = E[B_t − B_s] = E[B_{t−s}] = 0 a.s.

So, as B_s is B_s-measurable, E[B_t|B_s] = B_s a.s. and (B_t)_{t≥0} is a martingale.

For each t ≥ 0, B_t² − t is B_t-measurable and integrable: E(|B_t|²) = E(B_t²) = t < +∞. If s < t, then the random variable B_t − B_s is independent of B_s and has mean 0 and variance t − s. As x ↦ x² is measurable, (B_t − B_s)² is independent of B_s too, so we have

E[(B_t − B_s)²|B_s] = E[(B_t − B_s)²] = E[B_{t−s}²] = t − s a.s.


Moreover, B_s is B_s-measurable, so E[B_tB_s|B_s] = B_s E[B_t|B_s] a.s. and E[B_s²|B_s] = B_s² a.s. As (B_t)_{t≥0} is a martingale, we also have E[B_t|B_s] = B_s. Thus,

t − s = E[(B_t − B_s)²|B_s] = E[B_t² − 2B_tB_s + B_s²|B_s] = E[B_t² + B_s²|B_s] − 2B_s E[B_t|B_s] = E[B_t² + B_s²|B_s] − 2B_s² = E[B_t² − B_s²|B_s].

Hence,

E[B_t² − t|B_s] = B_s² − s,

and (B_t² − t)_{t≥0} is a martingale.

For each t ≥ 0, B_t³ − 3tB_t is B_t-measurable and integrable:

E(|B_t³ − 3tB_t|) = ∫_R (|x³ − 3tx|/√(2πt)) e^{−x²/(2t)} dx < +∞,

as polynomials are integrable against a Gaussian density. If s < t, then the random variable (B_t − B_s)³ is independent of B_s, so

E[(B_t − B_s)³|B_s] = E[(B_t − B_s)³] = E[(B_{t−s})³] = ∫_R (x³/√(2π(t−s))) e^{−x²/(2(t−s))} dx = 0,

as the integrand is an odd function. Hence, using that (B_t)_{t≥0} and (B_t² − t)_{t≥0} are martingales,

0 = E[(B_t − B_s)³|B_s] = E[B_t³|B_s] − 3B_s E[B_t²|B_s] + 3B_s² E[B_t|B_s] − B_s³ = E[B_t³|B_s] − 3B_s(B_s² + t − s) + 3B_s³ − B_s³ = E[B_t³ − B_s³|B_s] − 3(t − s)B_s.

Thus,

E[B_t³ − 3tB_t|B_s] = B_s³ − 3sB_s,

and (B_t³ − 3tB_t)_{t≥0} is a martingale.

For each t ≥ 0, let X_t = e^{αB_t − α²t/2} and F_t = σ(X_s, s ≤ t), so X_t is F_t-measurable. As (B_t)_{t≥0} is a Brownian motion, B_t is normally distributed with mean 0 and variance t. It follows that (for real α, completing the square in the exponent)

E(|X_t|) = (e^{−α²t/2}/√(2πt)) ∫_R e^{αx − x²/(2t)} dx = (1/√(2πt)) ∫_R e^{−(x−αt)²/(2t)} dx = 1 < +∞.

So X_t is integrable for each t ≥ 0, with E(X_t) = 1. Let 0 ≤ s < t; B_t − B_s is independent of F_s, so

E[X_t − X_s|F_s] = E[e^{αB_t − α²t/2} − e^{αB_s − α²s/2}|F_s] = E[X_s(e^{α(B_t−B_s) − α²(t−s)/2} − 1)|F_s] = X_s(E[e^{α(B_t−B_s) − α²(t−s)/2}] − 1) = X_s(E[e^{αB_{t−s} − α²(t−s)/2}] − 1).

The random variable B_{t−s} is normally distributed with mean 0 and variance t − s, and we saw that E[e^{αB_{t−s} − α²(t−s)/2}] = 1, so E[X_t|F_s] = X_s.

In conclusion, (e^{αB_t − α²t/2})_{t≥0} is a martingale.
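The identity E[e^{αB_t − α²t/2}] = 1, used repeatedly above, is easy to check by Monte Carlo for real α (a Python sketch; the values of α, t and the sample size are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Monte Carlo check that E[exp(alpha*B_t - alpha^2 t / 2)] = 1 (real alpha).
alpha, t, n = 0.8, 2.0, 200_000
b_t = rng.normal(0.0, np.sqrt(t), size=n)        # B_t ~ N(0, t)
mart = np.exp(alpha * b_t - 0.5 * alpha**2 * t)  # exponential martingale at t
mean_est = mart.mean()
```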


2.4 Girsanov's Theorem

Define, for every set A in B_t = σ(B_s, s ≤ t) and real α,

P^α(A) = E[e^{αB_t − α²t/2} 1_A].

This is a probability measure: indeed,

P^α(∅) = 0 ≤ P^α(A) = E[e^{αB_t − α²t/2} 1_A] ≤ P^α(Ω) = E[e^{αB_t − α²t/2}] = 1,

and for (A_n)_{n≥0} a sequence of pairwise disjoint sets we have 1_{∪_n A_n} = Σ_n 1_{A_n}, so, using the fact that P is a measure and Fubini's Theorem,

P^α(∪_n A_n) = E[e^{αB_t − α²t/2} 1_{∪_n A_n}] = E[Σ_{n=1}^∞ e^{αB_t − α²t/2} 1_{A_n}] = Σ_{n=1}^∞ E[e^{αB_t − α²t/2} 1_{A_n}] = Σ_n P^α(A_n).

Theorem 2.4.1 (Girsanov's Theorem). Under P^α, the stochastic process (B_t − αt)_{t≥0} is a Brownian motion.

Proof. Let A ∈ B_s, for s < t. Then

P^α(A) = E[e^{αB_t − α²t/2} 1_A] = E[E[e^{αB_t − α²t/2} 1_A | B_s]] = E[1_A E[e^{αB_t − α²t/2} | B_s]] = E[e^{αB_s − α²s/2} 1_A],   (1)

as (e^{αB_t − α²t/2})_{t≥0} is a martingale.

Let A = {B_s ∈ (a, b)}; then

P^α(A) = E[e^{αB_s − α²s/2} 1_A] = ∫_a^b e^{αx − α²s/2} (1/√(2πs)) e^{−x²/(2s)} dx = ∫_a^b (1/√(2πs)) e^{−(x−αs)²/(2s)} dx.

Thus P^α(B_s ∈ (a, b)) = P(B_s + αs ∈ (a, b)). We can prove by induction that

P^α(B_{t_1} − B_0 ∈ (a_1, b_1), B_{t_2} − B_{t_1} ∈ (a_2, b_2), ..., B_{t_n} − B_{t_{n−1}} ∈ (a_n, b_n)) = ∫_{a_1}^{b_1} (1/√(2πt_1)) e^{−(x−αt_1)²/(2t_1)} dx ··· ∫_{a_n}^{b_n} (1/√(2π(t_n−t_{n−1}))) e^{−(x−α(t_n−t_{n−1}))²/(2(t_n−t_{n−1}))} dx.

Indeed, for n = 2, as B_t − B_s is independent of B_s and normally distributed with mean 0 and variance t − s,

P^α(B_s − B_0 ∈ (a, b), B_t − B_s ∈ (c, d)) = E[1_{B_s∈(a,b)} 1_{B_t−B_s∈(c,d)} e^{αB_t − α²t/2} (e^{αB_s − α²s/2}/e^{αB_s − α²s/2})] = E[1_{B_s∈(a,b)} e^{αB_s − α²s/2} E[1_{B_t−B_s∈(c,d)} e^{α(B_t−B_s) − α²(t−s)/2} | B_s]] = E[1_{B_s∈(a,b)} e^{αB_s − α²s/2}] E[1_{B_{t−s}∈(c,d)} e^{αB_{t−s} − α²(t−s)/2}] = P^α(B_s ∈ (a, b)) P^α(B_{t−s} ∈ (c, d)) = (∫_a^b (1/√(2πs)) e^{−(x−αs)²/(2s)} dx)(∫_c^d (1/√(2π(t−s))) e^{−(x−α(t−s))²/(2(t−s))} dx).

Thus, in the space (Ω, B_t, P^α), the increments of (B_t) are those of a Brownian motion with drift α, and

dP^α = e^{αB_t − α²t/2} dP on B_t.   (2)

Equality (1) proves that the family of probabilities defined by (2) is consistent, so by the Kolmogorov extension Theorem there exists a probability space (Ω, B, P) on which P extends the measures (2); we still write P^α for this extension. Under P^α, the stochastic process (B_t − αt)_{t≥0} is a Brownian motion; we say that P^α is the law of a Brownian motion with drift α, as P^α(B_s ∈ (a, b)) = P(B_s + αs ∈ (a, b)).
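Girsanov's Theorem is the basis of importance sampling: weighting samples of B_t by the density e^{αB_t − α²t/2} reproduces probabilities for the drifted motion. A Python sketch of this check, with interval and parameters of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Under dP^alpha = exp(alpha*B_t - alpha^2 t/2) dP, B_t gains a drift:
# E_P[exp(alpha*B_t - alpha^2 t/2) 1_{B_t in (a,b)}] = P(B_t + alpha*t in (a,b)).
alpha, t, a, b, n = 0.5, 1.0, 0.0, 1.0, 400_000
b_t = rng.normal(0.0, np.sqrt(t), size=n)
weights = np.exp(alpha * b_t - 0.5 * alpha**2 * t)   # Radon-Nikodym density
lhs = np.mean(weights * ((b_t > a) & (b_t < b)))     # P^alpha(B_t in (a,b))
rhs = np.mean(((b_t + alpha * t) > a) & ((b_t + alpha * t) < b))
```

The same device underlies variance reduction in the simulations of Section 6, where sampling under a tilted measure concentrates paths near the region of interest.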


3 Poisson processes

3.1 Example

Looking at the trees of a natural forest from above, we have a two-dimensional picture of the forest. Each tree can be described by a cross, and the positions of the crosses may seem haphazardly distributed, without obvious pattern. Poisson processes are models of such phenomena. Let A_i be sets which do not overlap when we put them on our two-dimensional forest and, for each i, denote by N(A_i) the number of crosses in A_i. The N(A_i) are then integer-valued random variables, independent when the A_i do not overlap. We can also assume that they have a Poisson distribution.

Definition 3.1.1. Let µ ∈ R+ \ {0}. A random variable X has the Poisson distribution P(µ) if its possible values are the nonnegative integers and if, for each integer n,

P(X = n) = (µ^n/n!) e^{−µ}.

We define P(0) (resp. P(+∞)) as the distribution concentrated at 0 (resp. +∞).

We give here a heuristic justification of the choice of the Poisson distribution for N(A_i). For each nonnegative t, let A_t be a circle of radius t. Define

p_n(t) = P(N(A_t) = n) and q_n(t) = P(N(A_t) ≤ n).

As N(A_t) increases with t, q_n(t) decreases. Thanks to monotonicity, q_n(t) has only jump discontinuities and is differentiable almost everywhere. As p_n(t) = q_n(t) − q_{n−1}(t), p_n(t) has the same properties. Let R_{r_1,r_2} (r_1 < r_2) be the open annulus of which r_1 is the interior radius and r_2 is the exterior radius. Define

µ(t) = E[N(A_t)].

Then µ(t+h) − µ(t) = E[N(R_{t,t+h})]. For small h, we can suppose that there are 0 or 1 points in the ring R_{t,t+h}, so N(R_{t,t+h}) = 1_{N(R_{t,t+h})=1} and E[N(R_{t,t+h})] is equal to the probability

that there is a point in R_{t,t+h}. Thus, the probability that N(A_t) jumps from n to n+1 during (t, t+h] is approximately equal to µ(t+h) − µ(t). Moreover, if we suppose that the number of points in R_{t,t+h} is independent of the number of points in A_t, then

p_n(t)(µ(t+h) − µ(t)) = P(N(A_t) = n, N(R_{t,t+h}) = 1) = P(N(A_t) ≤ n, N(A_{t+h}) ≥ n+1) = P(N(A_t) ≤ n) − P(N(A_t) ≤ n, N(A_{t+h}) ≤ n) = P(N(A_t) ≤ n) − P(N(A_{t+h}) ≤ n) = q_n(t) − q_n(t+h).

Letting h → 0,

−dq_n/dt = p_n dµ/dt.

As p_0 = q_0 and p_n(t) = q_n(t) − q_{n−1}(t), we have

−dp_0/dt = p_0 dµ/dt and dp_n/dt = (p_{n−1} − p_n) dµ/dt.

Thus,

d/dt (ln(p_0) + µ) = 0.

Since p_0(0) = 1 (A_0 = ∅) and µ(0) = 0,

p_0(t) = e^{−µ(t)}.

Then, for n ≥ 1, we have

d/dt (p_n e^µ) = p_{n−1} e^µ dµ/dt,

thus, as p_n(0) = 0 for n ≥ 1,

p_n(t) = e^{−µ(t)} ∫_0^t p_{n−1}(s) e^{µ(s)} (dµ/ds)(s) ds.

Next, we show by induction on n that

p_n(t) = e^{−µ(t)} µ(t)^n/n!.

For n = 1,

p_1(t) = e^{−µ(t)} ∫_0^t p_0(s) e^{µ(s)} (dµ/ds)(s) ds = e^{−µ(t)} ∫_0^t (dµ/ds)(s) ds = e^{−µ(t)} µ(t)^1/1!.

Let n ≥ 1 and suppose that p_n(t) = e^{−µ(t)} µ(t)^n/n!. Then

p_{n+1}(t) = e^{−µ(t)} ∫_0^t p_n(s) e^{µ(s)} (dµ/ds)(s) ds = e^{−µ(t)} ∫_0^t (µ(s)^n/n!) (dµ/ds)(s) ds = e^{−µ(t)} µ(t)^{n+1}/(n+1)!.

In conclusion, N(A_t) has a Poisson distribution P(µ(t)). This "justifies" the choice of the Poisson distribution for N(A), where A is a subset of the state space. In the example of the forest, the state space is R².
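The forest example can be simulated directly: on a bounded window, draw a Poisson number of trees with mean rate × area, then place them uniformly (a Python sketch; the function name and parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(seed=6)

def poisson_forest(rate, width, height, rng=rng):
    """Sample a homogeneous spatial Poisson process ("trees") on a
    width x height window: N ~ Poisson(rate * area), then N i.i.d.
    uniform positions."""
    n = rng.poisson(rate * width * height)
    xs = rng.uniform(0.0, width, size=n)
    ys = rng.uniform(0.0, height, size=n)
    return np.column_stack([xs, ys])

trees = poisson_forest(rate=5.0, width=4.0, height=3.0)
# Sanity check of the count distribution: mean and variance should both
# be close to rate * area = 60 (a Poisson law has equal mean and variance).
counts = [poisson_forest(5.0, 4.0, 3.0).shape[0] for _ in range(3000)]
```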

3.2 Poisson process

We give here a rigorous definition of a Poisson process on a state space S. We take S = R^d, equipped with its Borel σ-field S, and let µ be a measure on S.

Definition 3.2.1. A Poisson process on S is a random countable subset Π of S such that

(i) for any disjoint measurable subsets A_1, A_2, ..., A_n of S, the random variables N(A_1) := Card{Π ∩ A_1}, ..., N(A_n) := Card{Π ∩ A_n} are independent, and

(ii) N(A) has the Poisson distribution P(µ), where µ = µ(A) lies in 0 ≤ µ ≤ +∞.

The measure µ : S → R+ ∪ {+∞} is called the intensity measure (or mean measure) of the Poisson process Π.


Remark 3.2.1. If µ(A) is finite, Π ∩ A is a finite set a.s.; if µ(A) = 0, Π ∩ A is empty a.s.; and if µ(A) = ∞, Π ∩ A is a countably infinite set a.s. Indeed, if µ(A) = 0 (respectively µ(A) = +∞), then N(A) has the distribution P(0) (respectively P(+∞)), hence

P(Π ∩ A = ∅) = P(Card{Π ∩ A} = 0) = P(N(A) = 0) = 1,

respectively,

P(Card{Π ∩ A} = ∞) = P(N(A) = ∞) = 1.

If 0 < µ(A) < +∞, then N(A) has a Poisson distribution, hence

P(Card{Π ∩ A} < ∞) = P(N(A) ∈ N) = 1.

Next, we give some properties of the Poisson distribution and the Poisson process.

Theorem 3.2.1 (Superposition Theorem). Let (X_j)_{j∈N} be a sequence of independent random variables and assume that X_j has a Poisson distribution with parameter µ_j ≥ 0. If

σ = Σ_{j=1}^∞ µ_j < ∞,

then

S = Σ_{j=1}^∞ X_j < ∞ a.s.

and S has a Poisson distribution with parameter σ. On the other hand, if Σ_{j=1}^∞ µ_j = ∞, then S diverges a.s.

Proof. Let S_n be the random variable defined by

S_n = Σ_{j=1}^n X_j.

As the random variables X_j are independent, S_n has a Poisson distribution with parameter σ_n := Σ_{j=1}^n µ_j.

Next, let r ≥ 0; then

P(S_n ≤ r) = Σ_{i=0}^r e^{−σ_n} σ_n^i/i!.

As P is a probability measure and the sequence of events {S_n ≤ r} decreases as n increases,

P(S ≤ r) = lim_{n→∞} P(S_n ≤ r) = lim_{n→∞} Σ_{i=0}^r e^{−σ_n} σ_n^i/i!.

If σ < ∞, then

P(S ≤ r) = Σ_{i=0}^r e^{−σ} σ^i/i!, and P(S = r) = e^{−σ} σ^r/r!,

hence S is finite a.s. and S has a Poisson distribution with parameter σ. If σ = ∞, then

Σ_{i=0}^r e^{−σ_n} σ_n^i/i! → 0 as n → ∞,

so that, for all r, P(S > r) = 1. Hence S diverges a.s.
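The superposition Theorem can be checked empirically: summing independent Poisson samples with parameters µ_j matches direct sampling from P(σ), σ = Σ_j µ_j (a Python sketch assuming NumPy; the parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Superposition: X_j ~ Poisson(mu_j) independent  =>  sum_j X_j ~ Poisson(sum_j mu_j).
mus = np.array([0.3, 1.2, 0.5])                  # sigma = 2.0
n = 100_000
samples = rng.poisson(lam=mus, size=(n, mus.size)).sum(axis=1)
direct = rng.poisson(lam=mus.sum(), size=n)      # sampled from P(sigma) directly
mean_sum, var_sum = samples.mean(), samples.var()
mean_direct, var_direct = direct.mean(), direct.var()
```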

3.3 Existence of a Poisson process

Let (S, S, µ) be a σ-finite measure space.

Theorem 3.3.1. There exists a Poisson process on S having µ as its intensity measure.

Proof. First, suppose that µ(S) is finite. For each r ≥ 1, let N and X_r be random variables. Thanks to the Kolmogorov extension Theorem, there exists a probability space (Ω, F, P) on which the random variable N has a Poisson distribution P(µ(S)), the distribution of each X_r is

p = µ/µ(S),

and all the random variables are independent. Write Π = {X_1, ..., X_N} and N(A) = Card(Π ∩ A).

The random variable Π is a random finite subset of S. Let A_1, ..., A_k be pairwise disjoint measurable subsets of S and write A_0 for the complement of their union. Using the balls-in-boxes (multinomial) result, for integers m_1, ..., m_k we have

P(N(A_1) = m_1, ..., N(A_k) = m_k) = Σ_{m=m_1+···+m_k}^{+∞} P(N(A_1) = m_1, ..., N(A_k) = m_k | N = m) P(N = m) = Σ_{m=m_1+···+m_k}^{+∞} (m!/(m_0! m_1! ··· m_k!)) p(A_0)^{m_0} p(A_1)^{m_1} ··· p(A_k)^{m_k} e^{−µ(S)} µ(S)^m/m!,

where m_0 = m − m_1 − ··· − m_k. Then, as p(A_i) = µ(A_i)/µ(S) and p(A_0) = µ(S \ (∪A_i))/µ(S) = (µ(S) − µ(A_1) − ··· − µ(A_k))/µ(S) = µ(A_0)/µ(S),

P(N(A_1) = m_1, ..., N(A_k) = m_k) = Σ_{m_0=0}^{+∞} (µ(A_0)^{m_0}/m_0!) e^{−µ(S)} ∏_{i=1}^k (µ(A_i)^{m_i}/m_i!) = (∏_{i=1}^k (µ(A_i)^{m_i}/m_i!) e^{−µ(A_i)}) Σ_{m_0=0}^{+∞} (µ(A_0)^{m_0}/m_0!) e^{−µ(A_0)} = ∏_{i=1}^k e^{−µ(A_i)} µ(A_i)^{m_i}/m_i!,

where the last sum equals 1, since e^{−µ(S)} = e^{−µ(A_0)} ∏_{i=1}^k e^{−µ(A_i)}.

In conclusion, for any disjoint measurable subsets A_1, A_2, ..., A_k of S, the random variables N(A_1), ..., N(A_k) are independent, and N(A_i) has the Poisson distribution P(µ(A_i)). Hence Π is a Poisson process with intensity measure µ.

Suppose now that µ is σ-finite; then there exist measurable subsets B_n of S such that

∀i ≠ j, B_i ∩ B_j = ∅, ∪_n B_n = S and ∀n ≥ 1, µ(B_n) < ∞.

For each n ≥ 1, we apply the finite case to (B_n, S ∩ B_n, µ_n), where µ_n(·) := µ(· ∩ B_n): there exist a probability space (Ω_n, F_n, P_n) and a Poisson process Π_n on B_n. Thanks to the Kolmogorov extension Theorem, there exists a common probability space (Ω, F, P) on which the random variables Π_n are independent. Define

Π = ∪_n Π_n;

we prove that Π is a Poisson process with intensity measure µ. As the sets B_n are disjoint, the random countable subsets Π_n are disjoint; hence, for every measurable set A,

N(A) := Card(Π ∩ A) = Σ_{n=1}^∞ Card(Π ∩ B_n ∩ A) = Σ_{n=1}^∞ Card(Π_n ∩ A) = Σ_{n=1}^∞ N_n(A),

where N_n(A) := Card(Π_n ∩ A). As the Π_n are independent Poisson processes, by the superposition Theorem N(A) has a Poisson distribution with parameter Σ_{n=1}^∞ µ_n(A) = µ(A). Moreover, for any integer n ≥ 1 and any disjoint measurable subsets A_1, A_2, ..., A_k of S, the random variables N_n(A_1), ..., N_n(A_k) are independent; hence the random variables N(A_1), ..., N(A_k) are independent. In conclusion, Π is a Poisson process with mean measure µ.

The previous Theorem can be expressed as follows.

Theorem 3.3.2. Let (S, S, µ) be a measurable space with µ σ-finite. Then there exists a function N : S × Ω → Z+ ∪ {+∞} such that

• for each ω ∈ Ω, N(·)(ω) is a measure on S,
• for A and B disjoint in S, N(A) and N(B) are independent, and
• for A in S, N(A) has a Poisson distribution P(µ(A)).

This process is called a Poisson random measure.
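For µ(S) < ∞, the proof of Theorem 3.3.1 is itself an algorithm: draw N ~ P(µ(S)), then N i.i.d. points from the normalised measure p = µ/µ(S). A Python sketch, where the `sampler` argument (assumed to produce i.i.d. points from p) is our own device:

```python
import numpy as np

rng = np.random.default_rng(seed=8)

def poisson_process_finite(mu_total, sampler, rng=rng):
    """Construct a Poisson process with finite total intensity mu_total =
    mu(S), as in the proof of Theorem 3.3.1: N ~ Poisson(mu(S)), then
    N i.i.d. points drawn by `sampler` from p = mu/mu(S)."""
    n = rng.poisson(mu_total)
    return sampler(n)

# Example: intensity measure mu(dx) = 3 e^{-x} dx on S = [0, +inf),
# so mu(S) = 3 and p is the Exp(1) distribution.
points = poisson_process_finite(3.0, lambda k: rng.exponential(1.0, size=k))
counts = [len(poisson_process_finite(3.0, lambda k: rng.exponential(1.0, size=k)))
          for _ in range(5000)]
```

The total count should then be Poisson with parameter µ(S) = 3, so its empirical mean and variance should agree.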

3.4 One-dimensional Poisson process

3.4.1 Another definition of a Poisson process in this case

We study the one-dimensional Poisson processes on R+. As B(R+) = σ((a, b], a, b ∈ Q+), each measurable set of R+ can be written as a countable union of disjoint intervals (a, b]. Assume that for each A ∈ B(R+), N(A) has a Poisson distribution P(λ|A|), where λ > 0 is fixed and |A| = ∫_A dx. Here the intensity measure µ is defined by µ(A) = λ|A|. The real λ is called the rate of the Poisson process. Moreover, (a, b] = (0, b] \ (0, a], so we can study N_t := N(A_t) with A_t = (0, t]. Let N_0 := 0.

Theorem 3.4.1. The stochastic process (N_t)_{t∈R+} verifies

(i) N_0 = 0 a.s.,
(ii) if 0 ≤ t_1 < t_2 < ··· < t_n, then the random variables N_{t_2} − N_{t_1}, ..., N_{t_n} − N_{t_{n−1}} are independent,
(iii) for s < t, N_t − N_s has a Poisson distribution with parameter λ(t − s), with λ > 0 fixed,
(iv) the paths of the process (N_t) are all right-continuous.

Proof. Property (i) is clear. Let 0 ≤ t_1 < t_2 < ··· < t_n; then N_{t_2} − N_{t_1} = N((t_1, t_2]), ..., N_{t_n} − N_{t_{n−1}} = N((t_{n−1}, t_n]) are independent, as the sets (t_1, t_2], ..., (t_{n−1}, t_n] are disjoint. N_t − N_s = N((s, t]) has a Poisson distribution P(λ(t−s)). The paths of the process are right-continuous as N_t increases by jumps of 1.

Conversely, if a stochastic process (N_t)_{t∈R+} verifies the points of the previous Theorem, then it defines a Poisson process. Let Π = (Π_i)_{i∈N} = (inf{t ≥ 0 : N_t = i})_{i∈N} ⊆ R+. Then Π is a random countable subset of R+. Let A be a Borel set of R+ and N(A) := Card{Π ∩ A}. Thus N_t = N((0, t]) and, as (N_t)_{t∈R+} verifies the points of the previous Theorem, N((a, b]) = N_b − N_a has a Poisson distribution P(λ(b − a)) (with a < b). As A is a Borel set of R+, we can write A as a countable union of disjoint intervals (a_n, b_n]. As the intervals are disjoint, the random variables N((a_1, b_1]), N((a_2, b_2]), ... are independent; thus N(A) has a Poisson distribution P(λ Σ_n (b_n − a_n)). Let A_1, ..., A_n be disjoint measurable subsets of R+. We do the proof for n = 2 with A_1 = (a_1, b_1] ∪ (a_3, b_3] and A_2 = (a_2, b_2], the three intervals being pairwise disjoint (the same argument holds for any n and any Borel sets). As (N_t)_{t∈R+} verifies the points of the previous Theorem, the random variables N((a_1, b_1]), N((a_2, b_2]) and N((a_3, b_3]) are independent; thus N(A_2) and N(A_1) = N((a_1, b_1]) + N((a_3, b_3]) are independent. Thus Π is a Poisson process. We will use this equivalence in the next section to construct a Poisson process.

3.4.2 Construction of a one-dimensional Poisson process

For each i in N let Xi be a random variable. Thanks to the Kolmogorov extension Theorem, there exists a probability space (Ω, F, Pλ) on which the random variables Xi are independent and have an exponential distribution with mean 1/λ (λ ∈ R+ \ {0}). Define, for each positive t,

\[
N_t = \max\Big\{ n \in \mathbb{N} : \sum_{i=1}^{n} X_i \le t \Big\},
\]

and N0 = 0. Thus Nt is a random variable and Nt ∈ N a.s.

Theorem 3.4.2. For each t ≥ 0, Nt has a Poisson distribution with parameter λt.

Proof. For each n ∈ N,

\[
\begin{aligned}
\mathbb{P}^{\lambda}(N_t = n) &= \mathbb{P}^{\lambda}\Big( \sum_{i=1}^{n} X_i \le t,\ \sum_{i=1}^{n+1} X_i > t \Big) \\
&= \mathbb{P}^{\lambda}\Big( \sum_{i=1}^{n} X_i \le t \Big) - \mathbb{P}^{\lambda}\Big( \sum_{i=1}^{n} X_i \le t,\ \sum_{i=1}^{n+1} X_i \le t \Big) \\
&= \mathbb{P}^{\lambda}\Big( \sum_{i=1}^{n} X_i \le t \Big) - \mathbb{P}^{\lambda}\Big( \sum_{i=1}^{n+1} X_i \le t \Big).
\end{aligned}
\]

As the random variables Xi are independent and identically distributed, Σ_{i=1}^{n} Xi has a gamma distribution Γ(n, λ). Thus,

\[
\begin{aligned}
\mathbb{P}^{\lambda}(N_t = n) &= \int_0^t \frac{\lambda^n x^{n-1}}{(n-1)!}\, e^{-\lambda x}\,dx - \int_0^t \frac{\lambda^{n+1} x^{n}}{n!}\, e^{-\lambda x}\,dx \\
&= \int_0^t \frac{d}{dx}\Big( \frac{\lambda^n x^n}{n!}\, e^{-\lambda x} \Big)\,dx = \frac{(\lambda t)^n}{n!}\, e^{-\lambda t}.
\end{aligned}
\]

Thus (Nt)t≥0 has a Poisson distribution. We can remark that

\[
N_t = \sum_{n \ge 1} \mathbf{1}_{\{\sum_{i=1}^{n} X_i \le t\}}.
\]
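This construction translates directly into a simulation recipe: draw independent exponential gaps of mean 1/λ and count how many partial sums fall below t. A minimal sketch in Python (the parameter values are illustrative):

```python
import random

def poisson_count(lam, t, rng):
    """N_t = max{n : X_1 + ... + X_n <= t} for i.i.d. Exp(lam) gaps X_i."""
    n, total = 0, 0.0
    while True:
        total += rng.expovariate(lam)  # exponential with mean 1/lam
        if total > t:
            return n
        n += 1

rng = random.Random(42)
lam, t = 2.0, 3.0
samples = [poisson_count(lam, t, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# By Theorem 3.4.2, N_t ~ Poisson(lam * t), so the sample mean should be close to 6.
```

The loop stops at the first partial sum exceeding t, which is exactly the definition of N_t above.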

Theorem 3.4.3. (Nt)t≥0 is a Poisson process.

Proof. N0 = 0 a.s., and the previous form of Nt shows that the paths of the process are right-continuous. Define, for each n,

\[
S_n := \sum_{k=1}^{n} X_k.
\]

Let s < t; then

\[
N_t - N_s = \max\{ i \in \mathbb{N} : S_{N_s+1} - s + X_{N_s+2} + \dots + X_{N_s+i} \le t - s \}.
\]

For every integer n ≥ 2, the random variables S_{Ns+1} − s, X_{Ns+2}, . . . , X_{Ns+n} are independent and identically distributed with an exponential distribution
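The claim that the overshoot S_{Ns+1} − s is again exponential rests on the lack-of-memory property of the exponential distribution: conditionally on exceeding s, an exponential variable exceeds s + z with the unconditional probability e^{−λz}. A quick numerical check (values are illustrative):

```python
import math
import random

rng = random.Random(7)
lam, s, z = 1.0, 0.8, 0.5
draws = [rng.expovariate(lam) for _ in range(200000)]

beyond_s = [x for x in draws if x > s]
# empirical P(X > s + z | X > s)
cond = sum(1 for x in beyond_s if x > s + z) / len(beyond_s)
# theoretical P(X > z) = e^{-lam z}
uncond = math.exp(-lam * z)
```

The two quantities agree up to Monte-Carlo error, which is what makes the post-s evolution of the process independent of its past.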

E(λ). Indeed, for (zi)i=1,...,n ⊂ R,

\[
\begin{aligned}
&\mathbb{P}^{\lambda}\big( S_{N_s+1}-s \le z_1,\ X_{N_s+2} \le z_2, \dots, X_{N_s+n} \le z_n \big) \\
&\quad= \sum_{k \ge 0} \mathbb{P}^{\lambda}\big( S_{k+1}-s \le z_1,\ X_{k+2} \le z_2, \dots, X_{k+n} \le z_n,\ N_s = k \big) \\
&\quad= \sum_{k \ge 0} \mathbb{P}^{\lambda}\big( S_k \le s,\ s < S_{k+1} \le s+z_1,\ X_{k+2} \le z_2, \dots, X_{k+n} \le z_n \big).
\end{aligned}
\]

As the Xi are independent and identically distributed, we have

\[
\begin{aligned}
&\mathbb{P}^{\lambda}\big( S_{N_s+1}-s \le z_1,\ X_{N_s+2} \le z_2, \dots, X_{N_s+n} \le z_n \big) \\
&\quad= \sum_{k \ge 0} \mathbb{P}^{\lambda}\big( S_k \le s,\ s < S_{k+1} \le s+z_1 \big)\, \mathbb{P}^{\lambda}(X_{k+2} \le z_2) \cdots \mathbb{P}^{\lambda}(X_{k+n} \le z_n) \\
&\quad= \sum_{k \ge 0} \mathbb{P}^{\lambda}\big( S_k \le s,\ s < S_{k+1} \le s+z_1 \big) \prod_{i=2}^{n}\big( 1 - e^{-\lambda z_i} \big).
\end{aligned}
\]

But, as Sk and Xk+1 are independent,

\[
\begin{aligned}
\mathbb{P}^{\lambda}\big( S_k \le s,\ s < S_{k+1} \le s+z_1 \big) &= \mathbb{P}^{\lambda}\big( S_k \le s,\ s < S_k + X_{k+1} \le s+z_1 \big) \\
&= \int_0^s \frac{\lambda^k}{(k-1)!}\, x^{k-1} e^{-\lambda x} \Big( \int_{s-x}^{s+z_1-x} \lambda e^{-\lambda y}\,dy \Big)\,dx \\
&= \int_0^s \frac{\lambda^k}{(k-1)!}\, x^{k-1} e^{-\lambda x} \big( e^{-\lambda(s-x)} - e^{-\lambda(s+z_1-x)} \big)\,dx \\
&= \frac{(\lambda s)^k}{k!}\, e^{-\lambda s} \big( 1 - e^{-\lambda z_1} \big).
\end{aligned}
\]

Thus,

\[
\begin{aligned}
\mathbb{P}^{\lambda}\big( S_{N_s+1}-s \le z_1,\ X_{N_s+2} \le z_2, \dots, X_{N_s+n} \le z_n \big) &= \sum_{k \ge 0} \frac{(\lambda s)^k}{k!}\, e^{-\lambda s} \big( 1 - e^{-\lambda z_1} \big) \prod_{i=2}^{n}\big( 1 - e^{-\lambda z_i} \big) \\
&= \prod_{i=1}^{n}\big( 1 - e^{-\lambda z_i} \big) \sum_{k \ge 0} \mathbb{P}^{\lambda}(N_s = k) \\
&= \prod_{i=1}^{n}\big( 1 - e^{-\lambda z_i} \big).
\end{aligned}
\]

As Nt − Ns = max{i ∈ N : S_{Ns+1} − s + X_{Ns+2} + · · · + X_{Ns+i} ≤ t − s} and as S_{Ns+1} − s, X_{Ns+2}, . . . , X_{Ns+n} are independent and identically distributed with an exponential distribution, by the previous Theorem Nt − Ns has a Poisson distribution P(λ(t − s)). Moreover, we saw that for every integer k,

\[
\mathbb{P}^{\lambda}\big( S_{k+1}-s \le z_1,\ X_{k+2} \le z_2, \dots, X_{k+n} \le z_n,\ N_s = k \big) = \prod_{i=1}^{n}\big( 1 - e^{-\lambda z_i} \big)\, \mathbb{P}^{\lambda}(N_s = k).
\]

Thus, for every integer n ≥ 2, the random variables Ns, S_{Ns+1} − s, X_{Ns+2}, . . . , X_{Ns+n} are independent. So Nt − Ns is independent of Ns.

Let 0 ≤ t1 < t2 < · · · < tn < tn+1; we prove that the random variables Nt1, Nt2 − Nt1, . . . , Ntn+1 − Ntn are independent. The previous calculations show that the result is true for n = 2. Let n ≥ 2; for (zi)i=1,...,n+1 ⊂ R+,

\[
\begin{aligned}
&\mathbb{P}^{\lambda}\big( N_{t_1}=z_1,\ N_{t_2}-N_{t_1}=z_2, \dots, N_{t_{n+1}}-N_{t_n}=z_{n+1} \big) \\
&= \mathbb{P}^{\lambda}\big( N_{t_1}=z_1,\ N_{t_2}-N_{t_1}=z_2,\ N_{t_3}-N_{t_1}=z_2+z_3, \dots, N_{t_{n+1}}-N_{t_1}=z_2+\dots+z_{n+1} \big) \\
&= \mathbb{P}^{\lambda}(N_{t_1}=z_1)\, \mathbb{P}^{\lambda}\big( N_{t_2}-N_{t_1}=z_2,\ N_{t_3}-N_{t_2}=z_3, \dots, N_{t_{n+1}}-N_{t_n}=z_{n+1} \big) \\
&= \mathbb{P}^{\lambda}(N_{t_1}=z_1)\, \mathbb{P}^{\lambda}\big( N_{t_2-t_1}=z_2,\ N_{t_3-t_1}-N_{t_2-t_1}=z_3, \dots, N_{t_{n+1}-t_1}-N_{t_n-t_1}=z_{n+1} \big).
\end{aligned}
\]

We can repeat these steps with t̃i = ti − t1, i ≥ 2. Thus,

\[
\begin{aligned}
&\mathbb{P}^{\lambda}\big( N_{t_1}=z_1,\ N_{t_2}-N_{t_1}=z_2, \dots, N_{t_{n+1}}-N_{t_n}=z_{n+1} \big) \\
&= \mathbb{P}^{\lambda}(N_{t_1}=z_1)\, \mathbb{P}^{\lambda}(N_{t_2}-N_{t_1}=z_2)\, \mathbb{P}^{\lambda}\big( N_{t_3}-N_{t_2}=z_3, \dots, N_{t_{n+1}}-N_{t_n}=z_{n+1} \big) \\
&= \dots = \mathbb{P}^{\lambda}(N_{t_1}=z_1)\, \mathbb{P}^{\lambda}(N_{t_2}-N_{t_1}=z_2) \cdots \mathbb{P}^{\lambda}(N_{t_{n+1}}-N_{t_n}=z_{n+1}).
\end{aligned}
\]

3.4.3 Martingales

Let (Nt)t≥0 be a Poisson process with law Pλ.

Theorem 3.4.4. Let Ft = σ(Ns, s ≤ t) and λ > 0. Then (Nt − λt)t≥0 and (x^{Nt} e^{−λt(x−1)})t≥0, where x is a positive real, are martingales.

Proof. We begin with the integrability (the Ft-measurability is obvious). For each t ≥ 0,

\[
\mathbb{E}^{\lambda}\big[ x^{N_t} \big] = \sum_{k=0}^{+\infty} x^k\, \frac{(\lambda t)^k}{k!}\, e^{-\lambda t} = e^{-\lambda t} \sum_{k=0}^{+\infty} \frac{(x \lambda t)^k}{k!} = e^{\lambda t (x-1)} < \infty.
\]

So the integrability is verified. Let s < t; Nt − Ns is independent of Fs, so

\[
\mathbb{E}^{\lambda}[N_t - N_s \,|\, \mathcal{F}_s] = \mathbb{E}^{\lambda}[N_t - N_s] = \mathbb{E}^{\lambda}[N_{t-s}] = \lambda (t-s).
\]

Thus (Nt − λt)t≥0 is a martingale. Then,

\[
\begin{aligned}
\mathbb{E}^{\lambda}\big[ x^{N_t} e^{-\lambda t (x-1)} - x^{N_s} e^{-\lambda s (x-1)} \,\big|\, \mathcal{F}_s \big]
&= x^{N_s} e^{-\lambda s (x-1)}\, \mathbb{E}^{\lambda}\big[ x^{N_t - N_s} e^{-\lambda (t-s)(x-1)} - 1 \,\big|\, \mathcal{F}_s \big] \\
&= x^{N_s} e^{-\lambda s (x-1)} \big( \mathbb{E}^{\lambda}\big[ x^{N_{t-s}} \big]\, e^{-\lambda (t-s)(x-1)} - 1 \big) = 0,
\end{aligned}
\]

as Nt − Ns has a Poisson distribution P(λ(t − s)), so that E^λ[x^{N_{t−s}}] = e^{λ(t−s)(x−1)}. Thus (x^{Nt} e^{−λt(x−1)})t≥0 is a martingale.

We will study the martingale (x^{Nt} e^{−λt(x−1)})t≥0. Define, for every set A in Ft,

\[
Q(A) = \mathbb{E}^{\lambda}\big[ x^{N_t} e^{-\lambda t (x-1)} \mathbf{1}_A \big].
\]

This is a probability measure, as

\[
Q(\emptyset) = 0 \le Q(A) \le Q(\Omega) = \mathbb{E}^{\lambda}\big[ x^{N_t} e^{-\lambda t (x-1)} \big] = 1,
\]

and, for (An)n≥0 a sequence of pairwise disjoint sets, 1_{∪n An} = Σn 1_{An}, so, in the same way as for the Brownian motion, Q(∪n An) = Σn Q(An).

Theorem 3.4.5 (Girsanov's Theorem). Under Q, the stochastic process (Nt)t≥0 is a Poisson process with rate λx.

Proof. Let A ∈ Fs, for s < t,

\[
Q(A) = \mathbb{E}^{\lambda}\big[ x^{N_t} e^{-\lambda t (x-1)} \mathbf{1}_A \big]
= \mathbb{E}^{\lambda}\big[ \mathbf{1}_A\, \mathbb{E}^{\lambda}\big[ x^{N_t} e^{-\lambda t (x-1)} \,\big|\, \mathcal{F}_s \big] \big]
= \mathbb{E}^{\lambda}\big[ x^{N_s} e^{-\lambda s (x-1)} \mathbf{1}_A \big], \tag{3}
\]

as (x^{Nt} e^{−λt(x−1)})t≥0 is a martingale. Let A = {Ns = n}; then

\[
Q(A) = \mathbb{E}^{\lambda}\big[ x^{N_s} e^{-\lambda s (x-1)} \mathbf{1}_A \big]
= x^n e^{-\lambda s (x-1)}\, \frac{(\lambda s)^n}{n!}\, e^{-\lambda s}
= \frac{(\lambda x s)^n}{n!}\, e^{-\lambda x s}
= \mathbb{P}^{\lambda x}(N_s = n).
\]

It seems that Q is the law of a Poisson process with rate λx. Let s < t; then

\[
\begin{aligned}
Q(N_s - N_0 = m,\ N_t - N_s = p)
&= \mathbb{E}^{\lambda}\Big[ x^{N_s} e^{-\lambda s (x-1)} \mathbf{1}_{\{N_s = m,\, N_t - N_s = p\}}\, \frac{x^{N_t} e^{-\lambda t (x-1)}}{x^{N_s} e^{-\lambda s (x-1)}} \Big] \\
&= \mathbb{E}^{\lambda}\big[ \mathbf{1}_{\{N_s=m\}} x^{N_s} e^{-\lambda s (x-1)}\, \mathbb{E}^{\lambda}\big[ \mathbf{1}_{\{N_t-N_s=p\}} x^{N_t-N_s} e^{-\lambda (t-s)(x-1)} \,\big|\, \mathcal{F}_s \big] \big] \\
&= \mathbb{E}^{\lambda}\big[ \mathbf{1}_{\{N_s=m\}} x^{N_s} e^{-\lambda s (x-1)} \big]\, \mathbb{E}^{\lambda}\big[ \mathbf{1}_{\{N_{t-s}=p\}} x^{N_{t-s}} e^{-\lambda (t-s)(x-1)} \big] \\
&= Q(N_s = m)\, Q(N_{t-s} = p) = \mathbb{P}^{\lambda x}(N_s = m)\, \mathbb{P}^{\lambda x}(N_{t-s} = p) \\
&= \frac{(\lambda x s)^m}{m!}\, e^{-\lambda x s}\, \frac{(\lambda (t-s) x)^p}{p!}\, e^{-\lambda (t-s) x}.
\end{aligned}
\]

Thus in the space (Ω, Ft, Q) the random variables Nt verify all the points of Theorem 3.4.1, and

\[
\frac{dQ}{d\mathbb{P}^{\lambda}}\bigg|_{\mathcal{F}_s} = x^{N_s} e^{-\lambda s (x-1)}. \tag{4}
\]

Hence, in the same way as for the Brownian motion, there exists an extension of the probability measure Q such that the stochastic process (Nt)t≥0 is a Poisson process with rate λx.
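The change of measure (4) can be illustrated by reweighting simulated Pλ-samples of N_t by x^{N_t} e^{−λt(x−1)}: under the new measure, N_t should have mean λxt. A sketch with illustrative parameters:

```python
import math
import random

rng = random.Random(1)
lam, t, x = 1.0, 2.0, 1.5

def poisson_sample(mu):
    """Poisson(mu) sample by counting unit-rate exponential gaps up to mu."""
    n, tot = 0, rng.expovariate(1.0)
    while tot <= mu:
        n += 1
        tot += rng.expovariate(1.0)
    return n

ns = [poisson_sample(lam * t) for _ in range(50000)]
weights = [x**n * math.exp(-lam * t * (x - 1)) for n in ns]
# E_Q[N_t] = E^lam[N_t * x^{N_t} e^{-lam t (x-1)}] should equal lam * x * t = 3
mean_q = sum(n * w for n, w in zip(ns, weights)) / len(ns)
```

The weights average to 1 (the martingale has unit expectation), and the reweighted mean matches the rate-λx Poisson process predicted by Theorem 3.4.5.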

4 Lévy processes

We saw two important examples of processes X with independent and stationary increments. For each t ≥ 0, the random variable Xt had a particular law. We will now define a class of processes where the law of Xt is not specified; in this case, X is called a Lévy process.

Definition 4.1 (Lévy process). A process X = (Xt)t≥0 defined on a probability space (Ω, F, P) is said to be a Lévy process if it possesses the following properties:

(i) Stationary increments: for 0 ≤ s ≤ t, Xt − Xs is equal in distribution to Xt−s.

(ii) Independent increments: for 0 ≤ s ≤ t, Xt − Xs is independent of {Xu : u ≤ s}.

(iii) P(X0 = 0) = 1.

(iv) The paths of X are P-almost surely right-continuous with left limits.

From a Poisson process, we can define a class of Lévy processes called compound Poisson processes.

4.1 Compound Poisson process

Definition 4.1.1 (Compound Poisson process). Let N = (Nt)t≥0 be a Poisson process with rate λ > 0 and (ξi)i≥1 be a sequence of independent and identically distributed random variables, independent of the process N, with common law F having no atom at zero. The process X defined by

\[
X_t = \sum_{i=1}^{N_t} \xi_i, \quad t \ge 0,
\]

with the convention that Σ_{i=1}^{0} = 0, is called a compound Poisson process.

Property 4.1.1. The above process is a Lévy process.

Proof. As N is a Lévy process, the paths of the process X are right-continuous with left limits, and

\[
\mathbb{P}(X_0 = 0) = \mathbb{P}\Big( \sum_{i=1}^{0} \xi_i = 0 \Big) = \mathbb{P}(0 = 0) = 1.
\]

Let 0 ≤ s < t < ∞,

\[
X_t = X_s + \sum_{i=N_s+1}^{N_t} \xi_i = X_s + \sum_{i=1}^{N_t - N_s} \xi_{N_s+i}.
\]

As N is a Lévy process and the random variables ξi are independent and identically distributed,

\[
\sum_{i=1}^{N_t - N_s} \xi_{N_s+i} \overset{d}{=} \sum_{i=1}^{N_{t-s}} \xi_i \overset{d}{=} X_{t-s}.
\]

Hence Xt − Xs is equal to an independent copy of Xt−s, and X has stationary independent increments.

Next, we introduce the notion of an infinitely divisible distribution.
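A compound Poisson increment is simple to simulate: draw the number of jumps on [0, t] and sum that many independent copies of ξ. The sketch below takes F = N(0, 1) purely for illustration (any law without an atom at zero works), and checks E(X_t) = λt E(ξ1) = 0 and E(X_t²) = λt E(ξ1²) = λt:

```python
import random

def compound_poisson(lam, t, rng):
    """One sample of X_t = sum_{i=1}^{N_t} xi_i, with N_t ~ Poisson(lam*t), xi_i ~ N(0,1)."""
    n = 0
    total_time = rng.expovariate(lam)
    while total_time <= t:          # count Poisson arrivals up to time t
        n += 1
        total_time += rng.expovariate(lam)
    return sum(rng.gauss(0.0, 1.0) for _ in range(n))

rng = random.Random(0)
lam, t = 3.0, 1.0
xs = [compound_poisson(lam, t, rng) for _ in range(20000)]
mean = sum(xs) / len(xs)                 # should be near 0
second_moment = sum(x * x for x in xs) / len(xs)  # near lam * t = 3 (mean is 0)
```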

4.2 Infinitely divisible distribution

Definition 4.2 (Infinitely divisible distribution). We say that a real-valued random variable Θ has an infinitely divisible distribution if for each integer n there exists a sequence of independent and identically distributed random variables Θ1,n, Θ2,n, . . . , Θn,n such that

\[
\Theta \overset{d}{=} \Theta_{1,n} + \Theta_{2,n} + \dots + \Theta_{n,n},
\]

where \overset{d}{=} is equality in distribution. Alternatively, we could have expressed this relation in terms of probability laws. That is to say, the law µ of a real-valued random variable is infinitely divisible if for each integer n there exists another law µn of a real-valued random variable such that µ = µn^{∗n}, where µn^{∗n} denotes the n-fold convolution of µn.

Remark 4.1. If X is a Lévy process, then for each t ≥ 0, Xt has an infinitely divisible distribution. Indeed, for each t ≥ 0 and n ∈ N,

\[
X_t = \big( X_{nt/n} - X_{(n-1)t/n} \big) + \big( X_{(n-1)t/n} - X_{(n-2)t/n} \big) + \dots + \big( X_{2t/n} - X_{t/n} \big) + X_{t/n}. \tag{5}
\]

As the process X has stationary and independent increments, each part of the previous sum is an independent copy of Xt/n. Hence Xt has an infinitely divisible distribution.
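For the Poisson case, Remark 4.1 can be checked numerically: the Poisson(µ) law coincides with the n-fold convolution of Poisson(µ/n) laws. A sketch with truncated probability vectors (the values µ = 4 and n = 8 are illustrative):

```python
import math

def poisson_pmf(mu, size):
    return [math.exp(-mu) * mu**k / math.factorial(k) for k in range(size)]

def convolve(p, q):
    """Convolution of two pmf vectors, truncated to len(p)."""
    out = [0.0] * len(p)
    for i, pi in enumerate(p):
        for j in range(len(p) - i):
            out[i + j] += pi * q[j]
    return out

mu, n, size = 4.0, 8, 40
piece = poisson_pmf(mu / n, size)    # law of each summand Theta_{i,n}
conv = piece
for _ in range(n - 1):
    conv = convolve(conv, piece)     # n-fold convolution
target = poisson_pmf(mu, size)
err = max(abs(a - b) for a, b in zip(conv, target))
```

The truncation at 40 states loses only negligible mass, so the two vectors agree to numerical precision.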

In view of the Lévy-Khintchine formula, which will give a characterisation of the infinitely divisible distributions, we talk about the characteristic exponent of a random variable. Suppose that Θ has characteristic exponent

\[
\Psi(u) := -\log \mathbb{E}\big( e^{iu\Theta} \big).
\]

We can notice that Θ has an infinitely divisible distribution if for all n ≥ 1 there exists a characteristic exponent of a probability distribution, say Ψn, such that Ψ(u) = nΨn(u) for all u ≥ 0.

Theorem 4.1 (Lévy-Khintchine formula). A probability law µ of a real-valued random variable is infinitely divisible with characteristic exponent Ψ,

\[
\int_{\mathbb{R}} e^{i\theta x} \mu(dx) = e^{-\Psi(\theta)} \quad \text{for } \theta \in \mathbb{R},
\]

if and only if there exists a triple (a, σ, Π), where a ∈ R, σ ≥ 0, and Π is a measure concentrated on R \ {0} satisfying ∫_R (1 ∧ x²) Π(dx) < ∞, such that

\[
\Psi(\theta) = ia\theta + \frac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}} \big( 1 - e^{i\theta x} + i\theta x \mathbf{1}_{|x|<1} \big) \Pi(dx).
\]

for each T > 0,

\[
\lim_{k \to \infty} \mathbb{E}\Big( \sup_{t \le T} \Big( X_t - \sum_{n=1}^{k} M_t^{(n)} \Big)^2 \Big) = 0.
\]

Proof. By linearity of conditional expectation, we can note that for each integer k ≥ 1 the process Σ_{i=1}^{k} M^{(i)} is a martingale. Moreover, it is a square integrable martingale. Indeed, by independence of the zero mean martingales M^{(i)}, for i ≠ j we have E(M_t^{(i)} M_t^{(j)}) = E(M_t^{(i)}) E(M_t^{(j)}) = 0. Hence

\[
\mathbb{E}\Big( \Big( \sum_{i=1}^{k} M_t^{(i)} \Big)^2 \Big) = \sum_{i=1}^{k} \mathbb{E}\big( \big( M_t^{(i)} \big)^2 \big) = \sum_{i=1}^{k} t\lambda_i \int_{\mathbb{R}} x^2 F_i(dx) < \infty,
\]

by assumption (11). The process Σ_{i=1}^{k} M^{(i)} is a finite sum of right-continuous-with-left-limits processes, so it is also right-continuous with left limits.

Let T > 0. We will show that the sequence (X^{(k)})_{k≥1} is a Cauchy sequence with respect to ||·||, where the process X^{(k)} is defined for 0 ≤ t ≤ T by

\[
X_t^{(k)} = \sum_{i=1}^{k} M_t^{(i)}.
\]

Indeed, for each k, the process X^{(k)} is a zero mean, square integrable, almost surely right-continuous-with-left-limits martingale, so this process is in the space M²_T. For k ≥ l, with the same computation as above,

\[
\| X^{(k)} - X^{(l)} \|^2 = \mathbb{E}\Big( \big( X_T^{(k)} - X_T^{(l)} \big)^2 \Big) = T \sum_{n=l+1}^{k} \lambda_n \int_{\mathbb{R}} x^2 F_n(dx).
\]

Thanks to (11), the last expression tends to 0 as k, l → ∞. As the space M²_T is a Hilbert space, there exists a process X = (Xt)_{0≤t≤T} in M²_T such that lim_{n→∞} ||X − X^{(n)}|| = 0.

Next, we prove that the process X is a Lévy process. Thanks to Doob's maximal inequality, we have

\[
0 \le \lim_{k \to \infty} \mathbb{E}\Big( \sup_{0 \le t \le T} \big( X_t - X_t^{(k)} \big)^2 \Big) \le 4 \lim_{k \to \infty} \mathbb{E}\Big( \big( X_T - X_T^{(k)} \big)^2 \Big) = 0.
\]

Hence the finite-dimensional distributions of X^{(k)} converge to those of X. Let 0 ≤ s < t ≤ T; as X^{(k)} is a Lévy process,

\[
\mathbb{E}\big( e^{i\theta (X_t - X_s)} \big)
= \lim_{k \to \infty} \mathbb{E}\big( e^{i\theta (X_t^{(k)} - X_s^{(k)})} \big)
= \lim_{k \to \infty} \mathbb{E}\big( e^{i\theta X_{t-s}^{(k)}} \big)
= \mathbb{E}\big( e^{i\theta X_{t-s}} \big).
\]

Thus the process X has stationary increments. Let 0 ≤ t1 < t2 < · · · < tn ≤ T; then

\[
\begin{aligned}
\mathbb{E}\big( e^{i\theta((X_{t_n}-X_{t_{n-1}}) + \dots + (X_{t_2}-X_{t_1}))} \big)
&= \lim_{k \to \infty} \mathbb{E}\big( e^{i\theta((X_{t_n}^{(k)}-X_{t_{n-1}}^{(k)}) + \dots + (X_{t_2}^{(k)}-X_{t_1}^{(k)}))} \big) \\
&= \lim_{k \to \infty} \prod_{i=2}^{n} \mathbb{E}\big( e^{i\theta(X_{t_i}^{(k)}-X_{t_{i-1}}^{(k)})} \big)
= \prod_{i=2}^{n} \mathbb{E}\big( e^{i\theta(X_{t_i}-X_{t_{i-1}})} \big),
\end{aligned}
\]

hence the random variables Xt2 − Xt1, . . . , Xtn − Xtn−1 are independent. Thus the process X has stationary and independent increments. As it is the limit of a sequence of right-continuous-with-left-limits processes, X is right-continuous with left limits. It is clear that X0 = 0 a.s. Thus X is a Lévy process. Moreover, as we know the expression of the characteristic exponent of a compound Poisson process with drift, for all t ≥ 0,

\[
\mathbb{E}\big( e^{i\theta X_t^{(k)}} \big) = \prod_{i=1}^{k} \mathbb{E}\big( e^{i\theta M_t^{(i)}} \big) = \exp\Big( -t \sum_{i=1}^{k} \int_{\mathbb{R}} \big( 1 - e^{i\theta x} + i\theta x \big)\, \lambda_i F_i(dx) \Big).
\]

Hence, E(e^{iθXt}) = e^{−tΨ(θ)}.

In this proof we constructed a process X defined on [0, T]; we will now prove that this process is independent of T. If X is defined on [0, T], we write it X^T. Let T1 < T2,

\[
0 \le \mathbb{E}\Big( \sup_{t \le T_1} \big( X_t^{T_1} - X_t^{T_2} \big)^2 \Big)^{1/2}
\le \mathbb{E}\Big( \sup_{t \le T_1} \big( X_t^{T_1} - X_t^{(k)} \big)^2 \Big)^{1/2}
+ \mathbb{E}\Big( \sup_{t \le T_1} \big( X_t^{(k)} - X_t^{T_2} \big)^2 \Big)^{1/2};
\]

hence, taking limits as k → ∞, sup_{t≤T1} (X_t^{T1} − X_t^{T2}) = 0. Thus X^{T2} = X^{T1} almost surely on [0, T1], and we can say that X does not depend on the time horizon T.

Thanks to this Theorem, we can prove the Lévy-Itô decomposition.

Proof of the Lévy-Itô decomposition. We can apply the previous Theorem with λn = Π({x : 2^{−(n+1)} < |x| < 2^{−n}}) and Fn = λn^{−1} Π(dx)|_{{x : 2^{−(n+1)} < |x| < 2^{−n}}}.

hence, taking limits as k → ∞, supt≤T1 (XtT1 − XtT2 ) = 0. Thus X T2 = X T1 almost surely on [0, T1 ] and we can say that X does not depend of the time horizon T . Thanks to this Theorem, we can proof the L´evy-Itˆ o decomposition. Proof of the L´evy-Itˆ o decomposition. We can apply the previous Theorem with λn = Π({x : 2−(n+1) < |x| < 2−n }) and Fn = λ−1 n Π(dx)|{x:2−(n+1) 0} − λn t i

(n)

Nt

(n,−) Mt

=

X i=1

(n)

(n) |ξi |1{ξ(n) 0} , i

n=1 i=1

n=1

|x|

k N t X X

k X

(n)

λn Fn (dx) =

k N t X X

n=1 i=1

n=1 (k,+)

(n)

|ξi |1{ξ(n) 0, d

(Xg(n,λ) , X g(n,λ) ) = (V (n, λ), J(n, λ)), where V (n, λ) and J(n, λ) are defined iteratively for n ≥ 1 as (n)

(n)

V (n, λ) = V (n −  1, λ) + Sλ + Iλ ,

(n)

J(n, λ) = max J(n − 1, λ), V (n − 1, λ) + Sλ (0)

(0)

(j)



,

and V (0, λ) = J(0, λ) = 0. Here, Sλ = Iλ = 0, {Sλ : j ≥ 1} are independent and identically distributed sequence of random variables with (j) common distribution equal to that of X e1 /λ and {Iλ : j ≥ 1} are another independent and identically distributed sequence of random variables with common distribution equal to that of X e1 /λ Thanks to (12), the pair (V (n, n/t), J(n, n/t)) converges in distribution to (Xt , X t ). In conclusion to simulate a good approximation of (Xt , X t ), we need only to simulate independent independent and identically distributed (1) (1) copies of the distribution of Sn/t := Sn/t and In/t := In/t . Then given a suitably nice function F , using the standard Monte-Carlo methods one estimates for large k E(F (Xt , X t )) ∼ =

k 1 X F (V (m) (n, n/t), J (m) (n, n/t)), k

(13)

m=1

(m) (n, n/t), J (m) (n, n/t))

where (V copies of (V (n, n/t), J(n, n/t)).

are independent and identically distributed

56
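The recursion for (V(n, λ), J(n, λ)) is a few lines of code once samplers for S_λ and I_λ are available. As a sanity check, outside the report's β-class setting, one can take X a standard Brownian motion, for which the supremum over an independent exponential time of rate λ is exponentially distributed with parameter √(2λ), and the infimum is the negative of such a variable; the estimator (13) should then reproduce E(sup_{u≤t} X_u) = √(2t/π):

```python
import math
import random

def whmc_pair(n, lam, rng):
    """One draw of (V(n, lam), J(n, lam)) for a standard Brownian motion."""
    rate = math.sqrt(2.0 * lam)
    v = j = 0.0
    for _ in range(n):
        s = rng.expovariate(rate)    # S ~ sup of BM over an Exp(lam) time
        i = -rng.expovariate(rate)   # I ~ inf of BM over an Exp(lam) time
        j = max(j, v + s)            # J(n) uses V(n-1) + S^{(n)}
        v = v + s + i                # V(n) = V(n-1) + S^{(n)} + I^{(n)}
    return v, j

rng = random.Random(3)
t, n, k = 1.0, 100, 20000
est = sum(whmc_pair(n, n / t, rng)[1] for _ in range(k)) / k
exact = math.sqrt(2.0 * t / math.pi)  # E(sup) = E|B_t| by the reflection principle
```

With n = 100 stages the gamma-time bias is already small compared with the Monte-Carlo error.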

6.3 β-class of Lévy processes

Recall that the Lévy-Khintchine representation tells us that the characteristic exponent of a Lévy process X is given by

\[
\Psi(\theta) = ia\theta + \frac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}} \big( 1 - e^{i\theta x} + i\theta x \mathbf{1}_{|x|<1} \big) \Pi(dx).
\]

The β-class consists of the Lévy processes whose Lévy measure takes the form

\[
\Pi(dx) = \Big( c_1 \frac{e^{-\alpha_1 \beta_1 x}}{(1 - e^{-\beta_1 x})^{\lambda_1}}\, \mathbf{1}_{\{x > 0\}} + c_2 \frac{e^{\alpha_2 \beta_2 x}}{(1 - e^{\beta_2 x})^{\lambda_2}}\, \mathbf{1}_{\{x < 0\}} \Big)\, dx,
\]

where αi > 0, βi > 0, ci ≥ 0 and λi ∈ (0, 3) (i ∈ {1, 2}). One of the interests of this class is that the paths of its processes can be of unbounded variation, when at least one of the conditions σ ≠ 0, λ1 ∈ (2, 3) or λ2 ∈ (2, 3) holds, or of bounded variation, when the conditions σ = 0, λ1 ∈ (0, 2) and λ2 ∈ (0, 2) all hold. The processes of this class can also have infinite activity in the jump component, if λ1 or λ2 ∈ (1, 3), and finite activity otherwise. In what follows, B is the Beta function and ψ is the digamma function, defined by

\[
B(x, y) = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)},
\qquad
\psi(x) = \frac{d}{dx} \ln \Gamma(x).
\]

By changing the constants, the Lévy-Khintchine formula can be expressed as

\[
\Psi(z) = -i\mu z + \frac{1}{2}\sigma^2 z^2 - \int_{\mathbb{R}} \big( e^{izx} - 1 - izx \big)\, \Pi(dx).
\]

We can compute the characteristic exponent of this class of Lévy processes.

Property 6.1. If λi ∈ (0, 3) \ {1, 2}, then

\[
\Psi(z) = i\rho z + \frac{1}{2}\sigma^2 z^2 - \frac{c_1}{\beta_1} B\Big( \alpha_1 - \frac{iz}{\beta_1},\ 1 - \lambda_1 \Big) - \frac{c_2}{\beta_2} B\Big( \alpha_2 + \frac{iz}{\beta_2},\ 1 - \lambda_2 \Big) + \gamma, \tag{14}
\]

where

\[
\gamma = \frac{c_1}{\beta_1} B(\alpha_1, 1-\lambda_1) + \frac{c_2}{\beta_2} B(\alpha_2, 1-\lambda_2),
\]

\[
\rho = \frac{c_1}{\beta_1} B(\alpha_1, 1-\lambda_1) \big( \psi(1+\alpha_1-\lambda_1) - \psi(\alpha_1) \big) - \frac{c_2}{\beta_2} B(\alpha_2, 1-\lambda_2) \big( \psi(1+\alpha_2-\lambda_2) - \psi(\alpha_2) \big).
\]

If λ1 or λ2 ∈ {1, 2}, the characteristic exponent can be computed using the following two integrals:

\[
\int_{(0,\infty)} \big( e^{ixz} - 1 - ixz \big)\, \frac{e^{-\alpha\beta x}}{1 - e^{-\beta x}}\,dx
= -\frac{1}{\beta} \Big( \psi\Big( \alpha - \frac{iz}{\beta} \Big) - \psi(\alpha) \Big) - \frac{iz}{\beta^2}\,\psi'(\alpha)
\]

and

\[
\int_{(0,\infty)} \big( e^{ixz} - 1 - ixz \big)\, \frac{e^{-\alpha\beta x}}{(1 - e^{-\beta x})^2}\,dx
= -\frac{1}{\beta} \Big( 1 - \alpha + \frac{iz}{\beta} \Big) \Big( \psi\Big( \alpha - \frac{iz}{\beta} \Big) - \psi(\alpha) \Big) - \frac{iz(1-\alpha)}{\beta^2}\,\psi'(\alpha).
\]

The proof of this property is given in [7]. Next, we want to apply the Wiener-Hopf Monte-Carlo simulation technique, and so we must find the laws of the supremum $\overline{X}_{e_1/\lambda}$ and the infimum $\underline{X}_{e_1/\lambda}$, where e1 is an independent random variable exponentially distributed with unit mean. We will study the equation

\[
\Psi(i\zeta) + q = 0.
\]

Theorem 6.2. The equation Ψ(iζ) + q = 0 has infinitely many solutions, all of which are real and simple. They are enumerated by {ξn+}n≥0, where ξn+ ≥ 0 for every integer n, and {ξn−}n≥0, where ξn− ≤ 0 for every integer n. Moreover, they are located as follows:

\[
\xi_0^+ \in (0, \beta_2\alpha_2), \qquad \xi_0^- \in (-\beta_1\alpha_1, 0),
\]
\[
\xi_n^+ \in \big( \beta_2(\alpha_2+n-1),\ \beta_2(\alpha_2+n) \big) \quad \text{for } n \ge 1,
\]
\[
\xi_n^- \in \big( \beta_1(-\alpha_1-n),\ \beta_1(-\alpha_1-n+1) \big) \quad \text{for } n \ge 1.
\]
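Theorem 6.2 localises each root in an explicit interval, so simple bisection recovers them. The sketch below is an illustration, not the report's calibration: it takes symmetric parameters α1 = α2 = α, β1 = β2 = β, c1 = c2 = c, λ1 = λ2 = 1/2 and σ = 0, so that ρ = 0 in (14) and Ψ(iζ) is real and elementary to evaluate; it then bisects for ξ0+ in (0, β2α2):

```python
import math

# Symmetric, illustrative beta-class parameters (sigma = 0, lambda_i = 1/2,
# so the rho term in (14) cancels and Psi(i*zeta) is real).
alpha, beta, c, l = 1.0, 1.5, 1.5, 0.5
q = 1.0

def B(x, y):
    """Real Beta function via the Gamma function."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

gamma_const = 2.0 * (c / beta) * B(alpha, 1.0 - l)   # the constant gamma in (14)

def f(zeta):
    """Psi(i*zeta) + q for the symmetric beta-class (rho = 0, sigma = 0)."""
    return (gamma_const + q
            - (c / beta) * B(alpha + zeta / beta, 1.0 - l)
            - (c / beta) * B(alpha - zeta / beta, 1.0 - l))

# Theorem 6.2: the first positive root xi_0^+ lies in (0, beta*alpha); bisect there.
lo, hi = 1e-9, beta * alpha - 1e-9   # f(0+) = q > 0, f -> -inf at the right end
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
```

By the symmetry of these parameters, −root is the corresponding negative root ξ0−.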

Moreover, for x > 0,

\[
\mathbb{P}\big( \overline{X}_{e_1/\lambda} \in dx \big) = -\Big( \sum_{k \ge 0} c_k^- \xi_k^- e^{\xi_k^- x} \Big)\, dx, \tag{15}
\]

where

\[
c_0^- = \prod_{n \ge 1} \frac{1 + \xi_0^- / \big( \beta_1 (n-1+\alpha_1) \big)}{1 - \xi_0^- / \xi_n^-}
\quad\text{and}\quad
c_k^- = \frac{1 + \xi_k^- / \big( \beta_1 (k-1+\alpha_1) \big)}{1 - \xi_k^- / \xi_0^-} \prod_{n \ge 1,\, n \ne k} \frac{1 + \xi_k^- / \big( \beta_1 (n-1+\alpha_1) \big)}{1 - \xi_k^- / \xi_n^-}.
\]

The law of the infimum $\underline{X}_{e_1/\lambda}$ is given by

\[
\mathbb{P}\big( -\underline{X}_{e_1/\lambda} \in dx \big) = \Big( \sum_{k \ge 0} c_k^+ \xi_k^+ e^{-\xi_k^+ x} \Big)\, dx,
\]

where

\[
c_0^+ = \prod_{n \ge 1} \frac{1 - \xi_0^+ / \big( \beta_2 (n-1+\alpha_2) \big)}{1 - \xi_0^+ / \xi_n^+}
\quad\text{and}\quad
c_k^+ = \frac{1 - \xi_k^+ / \big( \beta_2 (k-1+\alpha_2) \big)}{1 - \xi_k^+ / \xi_0^+} \prod_{n \ge 1,\, n \ne k} \frac{1 - \xi_k^+ / \big( \beta_2 (n-1+\alpha_2) \big)}{1 - \xi_k^+ / \xi_n^+}.
\]

The proof of this Theorem is given in [7].
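Numerically, one truncates the series in (15) at finitely many roots and samples from the resulting finite mixture of exponential densities. The generic step is sketched below with made-up weights and rates; in practice the true c_k and ξ_k come from solving Ψ(iζ) + q = 0 as in Theorem 6.2:

```python
import random

# Hypothetical truncated mixture: density f(x) = sum_k w_k * r_k * exp(-r_k * x), x > 0.
weights = [0.7, 0.2, 0.1]   # renormalised so they sum to 1 after truncation
rates = [1.0, 3.0, 8.0]

def sample_mixture(rng):
    """Pick a component with probability w_k, then draw an Exp(r_k) variable."""
    u = rng.random()
    acc = 0.0
    for w, r in zip(weights, rates):
        acc += w
        if u <= acc:
            return rng.expovariate(r)
    return rng.expovariate(rates[-1])   # guard against rounding

rng = random.Random(11)
xs = [sample_mixture(rng) for _ in range(100000)]
mean = sum(xs) / len(xs)
exact_mean = sum(w / r for w, r in zip(weights, rates))  # mixture mean
```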

6.4 Results

In this section, X is a Lévy process in the β-class with parameters (a, σ, α1, β1, c1, λ1, α2, β2, c2, λ2) = (a, σ, 1, 1.5, 1.5, 1, 1, 1.5, 1.5, 1), such that X has jumps of infinite activity. The linear drift a is chosen arbitrarily equal to 1, but we can also choose it such that Ψ(−i) = −r with r = 0.05, so that the process (exp(Xt − rt))t≥0 is a martingale. If σ > 0, the process X is of unbounded variation; if σ = 0, it is of bounded variation. We consider the problem of pricing an up-and-out barrier call option with barrier b ≥ 0, strike K and maturity t ≥ 0, which is equivalent to computing the following expectation:

\[
\pi^{uo}(s, t) = e^{-rt}\, \mathbb{E}\Big[ \big( s e^{X_t} - K \big)^+ \mathbf{1}_{\{ s e^{\overline{X}_t} < b \}} \Big].
\]
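Combined with the estimator (13), this price is a discounted empirical mean of the knocked-out payoff over simulated pairs. A sketch with hypothetical (v, j) samples standing in for genuine Wiener-Hopf Monte-Carlo output:

```python
import math

def barrier_price_up_and_out(pairs, s, K, b, r, t):
    """Monte-Carlo estimate of e^{-rt} E[(s e^{X_t} - K)^+ 1{s e^{sup X} < b}]."""
    total = 0.0
    for v, j in pairs:                 # v ~ X_t,  j ~ sup_{u<=t} X_u
        if s * math.exp(j) < b:        # the option survives the barrier
            total += max(s * math.exp(v) - K, 0.0)
    return math.exp(-r * t) * total / len(pairs)

# Hypothetical samples (v, j) with v <= j, as produced by the recursion for (V, J).
pairs = [(0.05, 0.10), (-0.20, 0.02), (0.30, 0.45), (0.10, 0.60)]
price = barrier_price_up_and_out(pairs, s=100.0, K=100.0, b=150.0, r=0.05, t=1.0)
```

Here only the first sample contributes: the last two paths are knocked out by the barrier and the second finishes out of the money.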