Stochastic Calculus - Problem set 3 - Fall 2002

Exercise 1 - a

The simplest way to do this is just to check that the formula is correct. If we set $B = \sum_{j=1}^n \lambda_j r_j l_j$, all we have to do is check that $Ax = Bx$ for each $x \in \mathbb{R}^n$. Since $(r_1, \ldots, r_n)$ is a basis of $\mathbb{R}^n$, every $x \in \mathbb{R}^n$ can be written as a linear combination of the $\{r_i\}_{i=1}^n$, and therefore we only need to check the formula for each $r_i$. But this is straightforward: since $l_j r_i = \delta_{i,j}$ for any $i, j$, the only remaining term in the sum defining $B$ is the $i$th one, and we have $Br_i = \lambda_i r_i = Ar_i$.

Exercise 1 - b

You can either see by induction what happens when you multiply the expression found in part a) several times by itself, or guess that the result should be $A^t = \sum_{j=1}^n \lambda_j^t r_j l_j$ and check that it works, doing exactly what we did for question a).
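
This decomposition is easy to check numerically. Below is a minimal sketch assuming numpy; the matrix $A$ is the $2 \times 2$ matrix that will appear in Exercise 2, but any diagonalizable matrix works. Note that `np.linalg.eig` returns the right eigenvectors as the columns of a matrix $R$, and the rows of $R^{-1}$ are then left eigenvectors normalized so that $l_j r_i = \delta_{i,j}$.

```python
import numpy as np

# Any diagonalizable matrix works; this one reappears in Exercise 2.
A = np.array([[1/2, 1/2],
              [1/4, 1/2]])

lam, R = np.linalg.eig(A)   # columns of R are right eigenvectors r_j
L = np.linalg.inv(R)        # rows of L are left eigenvectors l_j, with l_j r_i = delta_{ij}

t = 7
# A^t reconstructed from the spectral decomposition of Exercise 1 - b
At = sum(lam[j]**t * np.outer(R[:, j], L[j, :]) for j in range(len(lam)))
print(np.allclose(At, np.linalg.matrix_power(A, t)))  # True
```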

Exercise 2 - a

Since $u_t(j) = P(\tau > t, X_t = j)$, being in state 3 at time $t$ is not compatible with $\tau > t$, and therefore for any $t$ we have $u_t(3) = 0$. Then for $t \geq 2$ we can condition this probability on the value of $X_{t-1}$, to end up with:
$$u_t(j) = \sum_{k=1}^{3} P(\tau > t, X_t = j \mid \tau > t-1, X_{t-1} = k)\, P(\tau > t-1, X_{t-1} = k)$$

According to the preceding remark, this sum actually contains only two terms, the ones corresponding to $k = 1$ and $k = 2$. For each of these terms, knowing that we were never in state 3 before time $t-1$ and that at time $t-1$ we are in state $k$, the probability of never being in state 3 before time $t$ and ending up in state $j$ at time $t$ is just $P_{k,j}$. Therefore the relationship we find is $u_t(j) = P_{1,j} u_{t-1}(1) + P_{2,j} u_{t-1}(2)$, which we can write in matrix form as $U_t = A U_{t-1}$, where
$$A = \begin{pmatrix} \frac12 & \frac12 \\ \frac14 & \frac12 \end{pmatrix}$$
All we need now to get $U_t$ is $U_1$, which is readily found to be $U_1 = \begin{pmatrix} \frac13 & \frac13 \end{pmatrix}^*$, and therefore the expression for $U_t$ is:
$$U_t = A^{t-1} U_1$$
and we now have to calculate $A^t$ for each $t$. There are several ways to do this. One would be to find all the eigenvalues, the right eigenvectors, then the left eigenvectors, and to apply the result of Exercise 1. Another way is to diagonalize the matrix $A$ by finding the eigenvalues and right eigenvectors. But if one wants to avoid calculating eigenvectors altogether, there is a third way. Let us call $P_A(X) = \det(XI - A) = (X - \lambda_1)(X - \lambda_2)$ the characteristic polynomial of $A$, where $\lambda_1 = \frac14(2 - \sqrt2)$ and $\lambda_2 = \frac14(2 + \sqrt2)$. Now we can divide the polynomial $X^t$ by $P_A(X)$, which has degree 2, to get:
$$X^t = Q(X) P_A(X) + a_t X + b_t$$
If we plug $\lambda_1$ and $\lambda_2$ into this equation, we end up with the system:
$$\begin{cases} \lambda_1^t = a_t \lambda_1 + b_t \\ \lambda_2^t = a_t \lambda_2 + b_t \end{cases}$$


and we find the solutions:
$$\begin{cases} a_t = -\dfrac{2^{1-2t}(1+\sqrt2)\left[(2-\sqrt2)^t - (2+\sqrt2)^t\right]}{2+\sqrt2} \\[2mm] b_t = 2^{-\frac32-2t}\left[(2-\sqrt2)^t(2+\sqrt2) - (2-\sqrt2)(2+\sqrt2)^t\right] \end{cases}$$
But the Cayley-Hamilton theorem states that $P_A(A) = 0$, so if we plug $A$ into the above equation, we get:
$$A^t = a_t A + b_t I = \begin{pmatrix} 2^{-1-2t}\left[(2-\sqrt2)^t + (2+\sqrt2)^t\right] & 2^{-\frac12-2t}\left[-(2-\sqrt2)^t + (2+\sqrt2)^t\right] \\ 2^{-\frac32-2t}\left[-(2-\sqrt2)^t + (2+\sqrt2)^t\right] & 2^{-1-2t}\left[(2-\sqrt2)^t + (2+\sqrt2)^t\right] \end{pmatrix}$$
Now we have $U_t = A^{t-1} U_1$, and by looking at $U_1$'s definition we see that $U_1 = \begin{pmatrix} \frac13 & \frac13 \end{pmatrix}^*$. It follows that:
$$U_t = \frac13\, 4^{-t} \begin{pmatrix} \sqrt2\left[-(2-\sqrt2)^t + (2+\sqrt2)^t\right] \\ (2-\sqrt2)^t + (2+\sqrt2)^t \end{pmatrix}$$
The law of total probability says that $P(\tau > t) = u_t(1) + u_t(2)$, and therefore:
$$P(\tau > t) = \frac13\, 4^{-t} \left[-(2-\sqrt2)^t(\sqrt2 - 1) + (1+\sqrt2)(2+\sqrt2)^t\right]$$
This formula is valid for $t \geq 1$. Plugging in $t = 1$ gives $\frac23$, which is consistent with the fact that $\tau = 1$ exactly when the chain starts in state 3, which happens with probability $\frac13$. Now we can find an explicit formula for $P(\tau = t) = P(\tau > t-1) - P(\tau > t)$.

Exercise 2 - c

I will prove the formula you may want to use to find $E(\tau)$, but I will then use a much simpler method to calculate it. We have to prove that:
$$\sum_{t=1}^{\infty} t P(\tau = t) = \sum_{t=1}^{\infty} P(\tau \geq t)$$

It is not very difficult to prove this; basically it comes down to a discrete integration by parts. We start by writing $P(\tau = t) = P(\tau \geq t) - P(\tau \geq t+1)$ and, after a change of variable in the second sum, we end up with:
$$\sum_{t=1}^{\infty} t P(\tau = t) = \sum_{t=1}^{\infty} t P(\tau \geq t) - \sum_{t=2}^{\infty} (t-1) P(\tau \geq t)$$
The first term of the first sum is $P(\tau \geq 1)$, and we find:
$$\sum_{t=1}^{\infty} t P(\tau = t) = P(\tau \geq 1) + \sum_{t=2}^{\infty} \left[t - (t-1)\right] P(\tau \geq t) = \sum_{t=1}^{\infty} P(\tau \geq t)$$

One can use this identity and the result of the previous question to sum the above series and find the expectation of $\tau$. Another way to do this is to notice that $U_t = A^{t-1} U_1$, and therefore $\sum_{t \geq 1} U_t = \left(\sum_{t \geq 0} A^t\right) U_1$. But the sum of the geometric series $\sum_{t=0}^{\infty} A^t$ is $(I - A)^{-1}$ (note that this series converges, both eigenvalues of $A$ having absolute value less than 1). Since $P(\tau \geq 1) = 1$ and $P(\tau \geq t) = P(\tau > t-1) = u_{t-1}(1) + u_{t-1}(2)$ for $t \geq 2$, it follows that $E(\tau)$ is $1$ plus the sum of the two components of $(I-A)^{-1} U_1 = \begin{pmatrix} \frac83 & 2 \end{pmatrix}^*$. We find $E(\tau) = 1 + \frac{14}{3} = \frac{17}{3}$.
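
Here is a quick numerical check of the last two questions, a minimal sketch assuming numpy: it compares the closed-form expression for $P(\tau > t)$ with the one obtained from matrix powers, and recomputes $E(\tau)$.

```python
import numpy as np

A = np.array([[1/2, 1/2],
              [1/4, 1/2]])
U1 = np.array([1/3, 1/3])

def survival(t):
    """Closed form for P(tau > t), valid for t >= 1."""
    p, q = 2 - np.sqrt(2), 2 + np.sqrt(2)
    return 4.0**-t * (-(np.sqrt(2) - 1) * p**t + (1 + np.sqrt(2)) * q**t) / 3

# Closed form vs. U_t = A^(t-1) U_1
for t in range(1, 10):
    Ut = np.linalg.matrix_power(A, t - 1) @ U1
    assert np.isclose(Ut.sum(), survival(t))

# E(tau) = 1 + sum of the two components of (I - A)^(-1) U_1
E_tau = 1 + np.linalg.solve(np.eye(2) - A, U1).sum()
print(E_tau, 17/3)  # both 5.666...
```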

Exercise 2 - d and e

We want to find a matrix recurrence for $f_t(j) = P(\tau \geq t \mid X_1 = j)$. The first thing to notice is that we have $f_1 = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix}^*$ and $f_2 = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix}^*$, and $f_t(3) = 0$ for any $t \geq 2$. Let us now assume that $t \geq 3$, and apply the law of total probability with respect to the different values of $X_2$:
$$P(\tau \geq t \mid X_1 = j) = \sum_{k=1}^{3} P(\tau \geq t \mid X_2 = k, X_1 = j)\, P(X_2 = k \mid X_1 = j)$$
But for $t \geq 3$ the $k = 3$ term vanishes. It is easy to see that $P(\tau \geq t \mid X_2 = k, X_1 = j) = P(\tau \geq t-1 \mid X_1 = k)$, and therefore $f_t(j) = P_{j,1} f_{t-1}(1) + P_{j,2} f_{t-1}(2)$. We can write these two relationships as a matrix recurrence:
$$F_t = \begin{pmatrix} \frac12 & \frac14 \\ \frac12 & \frac12 \end{pmatrix} F_{t-1} = B F_{t-1}$$
where $F_t = \begin{pmatrix} f_t(1) & f_t(2) \end{pmatrix}^*$. Therefore we have $F_t = B^{t-2} F_2$, and applying exactly the same method as before ($B$ has the same characteristic polynomial as $A$, so $B^s = a_s B + b_s I$) we find:
$$F_t = 4^{-(t-1)} \begin{pmatrix} (2-\sqrt2)^{t-1} + (2+\sqrt2)^{t-1} \\ \sqrt2\left[(2+\sqrt2)^{t-1} - (2-\sqrt2)^{t-1}\right] \end{pmatrix}$$
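
Since the closed form above follows from the same Cayley-Hamilton argument, a short check against direct matrix powers is worthwhile; this sketch again assumes numpy.

```python
import numpy as np

B = np.array([[1/2, 1/4],
              [1/2, 1/2]])
F2 = np.array([1.0, 1.0])
p, q = 2 - np.sqrt(2), 2 + np.sqrt(2)

for t in range(2, 12):
    Ft_power = np.linalg.matrix_power(B, t - 2) @ F2
    Ft_closed = 4.0**-(t - 1) * np.array([p**(t-1) + q**(t-1),
                                          np.sqrt(2) * (q**(t-1) - p**(t-1))])
    assert np.allclose(Ft_power, Ft_closed)
```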

Exercise 3

We have to find the value of $f(k) = E\left[\sum_{t=0}^{\infty} z^t v(X_t) \mid X_0 = k\right]$. First we isolate the first term of the sum and factor out a $z$, which is not random, to get:
$$E\left[\sum_{t=0}^{\infty} z^t v(X_t) \,\Big|\, X_0 = k\right] = v(k) + z\, E\left[\sum_{t=1}^{\infty} z^{t-1} v(X_t) \,\Big|\, X_0 = k\right]$$
Now we condition with respect to the values of $X_1$, using the law of total probability:
$$f(k) = v(k) + z \sum_{j \in S} E\left[\sum_{t=1}^{\infty} z^{t-1} v(X_t) \,\Big|\, X_1 = j, X_0 = k\right] P(X_1 = j \mid X_0 = k)$$
Once you notice that $E\left[\sum_{t=1}^{\infty} z^{t-1} v(X_t) \mid X_1 = j, X_0 = k\right] = f(j)$ (by the Markov property and time-homogeneity), you end up with:
$$f(k) = v(k) + z \sum_{j \in S} f(j) P_{k,j}$$
which can be written in matrix form as $f = v + zPf$, with the solution $f = (I - zP)^{-1} v$.
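
In matrix form this is a single linear solve. The sketch below, assuming numpy and a made-up transition matrix `P`, reward vector `v` and discount `z` chosen purely for illustration (they are not from the problem set), compares $(I - zP)^{-1}v$ with a truncated version of the defining series.

```python
import numpy as np

# Hypothetical 3-state example: transition matrix P, reward v, discount z.
P = np.array([[0.5, 0.25, 0.25],
              [0.5, 0.5,  0.0 ],
              [0.2, 0.3,  0.5 ]])
v = np.array([1.0, 2.0, 0.0])
z = 0.9

f = np.linalg.solve(np.eye(3) - z * P, v)

# The truncated series sum_{t>=0} z^t P^t v should approach f.
approx = sum(z**t * np.linalg.matrix_power(P, t) @ v for t in range(500))
print(np.allclose(f, approx))  # True
```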

Exercise 4 - a

First, everything is easier if we find a good representation for the random walk $X_t$. If $\{U_i\}_{i=0}^{\infty}$ are independent identically distributed random variables with distribution $P_U = \frac13(\delta_{-1} + \delta_0 + \delta_1)$, then $X_t$ has the same distribution as $U_0 + \ldots + U_t$. This point of view is more appropriate for martingale methods than the Markov chain definition. First let's check that $F_t = X_t$ is a martingale (here $\mathcal{F}_t$ denotes the filtration). We have $E[F_{t+1} \mid \mathcal{F}_t] = E[X_t + U_{t+1} \mid \mathcal{F}_t]$. But $X_t$ is $\mathcal{F}_t$-measurable, so we can take it out of the conditional expectation, and $U_{t+1}$ is independent of $\mathcal{F}_t$, so its conditional expectation reduces to the ordinary expectation, and we have:
$$E[X_t + U_{t+1} \mid \mathcal{F}_t] = X_t + E(U_{t+1}) = X_t$$
To prove that $G_t$ is a martingale is not much more difficult. Expanding the square, we get:
$$E[G_{t+1} \mid \mathcal{F}_t] = E\left[(X_t + U_{t+1})^2 - \tfrac23(t+1) \,\Big|\, \mathcal{F}_t\right] = E\left[X_t^2 + 2U_{t+1}X_t + U_{t+1}^2 - \tfrac23(t+1) \,\Big|\, \mathcal{F}_t\right] = G_t + E\left[2U_{t+1}X_t + U_{t+1}^2 - \tfrac23 \,\Big|\, \mathcal{F}_t\right]$$

For the same reasons as above, $X_t$ is $\mathcal{F}_t$-measurable and $U_{t+1}$ is independent of $\mathcal{F}_t$, therefore:
$$E[G_{t+1} \mid \mathcal{F}_t] = G_t + 2X_t E(U_{t+1}) + E(U_{t+1}^2) - \frac23$$
But $E(U_{t+1}) = 0$ and $E(U_{t+1}^2) = \frac23$, and thus $E[G_{t+1} \mid \mathcal{F}_t] = G_t$.
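
As a sanity check, the two martingale identities imply $E(X_t) = 0$ and $E(X_t^2) = \frac23 t$ for a walk started at 0; a quick Monte Carlo sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, t = 200_000, 50

# Steps uniform on {-1, 0, 1}; X_t is their cumulative sum, with X_0 = 0.
steps = rng.integers(-1, 2, size=(n_paths, t))
X = steps.sum(axis=1)

print(X.mean())              # close to 0    (F_t = X_t is a martingale)
print(X.var(), 2 * t / 3)    # close to 2t/3 (G_t = X_t^2 - 2t/3 is a martingale)
```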

Exercise 4 - b

First, let's prove that $\tau_a$ is finite almost surely. The blocks $(U_0, \ldots, U_{2a})$, $(U_{2a+1}, \ldots, U_{4a+1})$, \ldots are independent and identically distributed, and since $P(U_0 = 1, \ldots, U_{2a} = 1) = \left(\frac13\right)^{2a+1}$, the events $B_0 = \{U_0 = \cdots = U_{2a} = 1\}$, $B_1 = \{U_{2a+1} = \cdots = U_{4a+1} = 1\}, \ldots$ are independent and have the same nonzero probability. By the second Borel-Cantelli lemma, one of them (and actually infinitely many of them) has to happen almost surely. But as soon as one of them happens, the random walk has taken $2a+1$ consecutive steps of $+1$ and must have reached $a$. Therefore $\tau_a$ is finite almost surely. Now we know that $G_{t \wedge \tau_a}$ is a martingale, and since $\tau_a$ is finite almost surely, we know that almost surely $\lim_{t \to +\infty} G_{t \wedge \tau_a} = G_{\tau_a}$. Under the assumption that $X_0 = 0$, we have a martingale starting at $G_0 = 0$, and therefore $E(G_{t \wedge \tau_a}) = E(G_0) = 0$. But this can be written as:
$$E(X_{t \wedge \tau_a}^2) - \frac23 E(t \wedge \tau_a) = 0$$
Now we want to let $t$ go to infinity to find $E(\tau_a)$. The first term is easy to deal with: since $X_{t \wedge \tau_a}^2 \leq a^2$, Lebesgue's dominated convergence theorem gives:
$$\lim_{t \to +\infty} E(X_{t \wedge \tau_a}^2) = E(X_{\tau_a}^2)$$
But $X_{\tau_a}^2 = a^2$. The second term of the above equation, $E(t \wedge \tau_a)$, is a non-decreasing non-negative function of $t$, so Lebesgue's monotone convergence theorem gives:
$$\lim_{t \to +\infty} E(t \wedge \tau_a) = E(\tau_a)$$

Therefore we have $E(\tau_a) = \frac32 a^2$.
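
A quick simulation of the stopped walk agrees with this value; a sketch assuming numpy, with $a = 5$ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n_paths = 5, 20_000
taus = np.empty(n_paths)

for i in range(n_paths):
    x, t = 0, 0
    while abs(x) < a:               # tau_a = first time |X_t| = a
        x += rng.integers(-1, 2)    # step uniform on {-1, 0, 1}
        t += 1
    taus[i] = t

print(taus.mean(), 1.5 * a**2)      # both close to 37.5
```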

Exercise 4 - c

We define $f(k) = E[\tau_a \mid X_0 = k]$. We can use the law of total probability to condition this expectation with respect to the values of $X_1$:
$$f(k) = E[\tau_a \mid X_1 = k-1, X_0 = k]\, P(X_1 = k-1 \mid X_0 = k) + E[\tau_a \mid X_1 = k, X_0 = k]\, P(X_1 = k \mid X_0 = k) + E[\tau_a \mid X_1 = k+1, X_0 = k]\, P(X_1 = k+1 \mid X_0 = k)$$
All of the probabilities on the right-hand side are equal to $\frac13$, and each of the conditional expectations is equal to:
$$E[\tau_a \mid X_1 = j, X_0 = k] = E[\tau_a \mid X_0 = j] + 1$$
Therefore we end up with the equation:
$$f(k) = \frac13\left[(f(k-1) + 1) + (f(k) + 1) + (f(k+1) + 1)\right]$$
which we can simplify to:
$$-\frac13 f(k-1) + \frac23 f(k) - \frac13 f(k+1) = 1$$
On the other hand, we know that $f(a) = f(-a) = 0$. Let's try to find a quadratic solution, as suggested, that is, a function $f$ of the form $f(k) = \alpha + \beta k + \gamma k^2$. We have 3 unknowns, $\alpha$, $\beta$ and $\gamma$, and 3 equations, so chances are we can find a solution. It is intuitively clear from the definition of $f$ that $\beta = 0$, and then, plugging this form into our set of equations, we find after some easy calculations:
$$f(k) = \frac32(a^2 - k^2)$$
which fortunately (take $k = 0$) is consistent with the result of question b).

Exercise 4 - d

This is exactly the same as question b), except that now $G_0 = k^2$, and therefore the martingale property implies:
$$E(X_{t \wedge \tau_a}^2) - \frac23 E(t \wedge \tau_a) = k^2$$
Taking the limit as $t \to +\infty$ the same way we did in b) proves that, again:
$$f(k) = \frac32(a^2 - k^2)$$
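
To close the loop, one can also solve the linear system of question c) directly and compare with the quadratic formula; a sketch assuming numpy, again with $a = 5$:

```python
import numpy as np

a = 5
ks = np.arange(-a, a + 1)   # states -a..a; boundaries at k = -a and k = a
n = len(ks)

# Build the system: the difference equation on the interior, f(-a) = f(a) = 0.
M = np.zeros((n, n))
rhs = np.zeros(n)
for i, k in enumerate(ks):
    if abs(k) == a:
        M[i, i] = 1.0       # boundary condition f(+-a) = 0
    else:
        M[i, i - 1], M[i, i], M[i, i + 1] = -1/3, 2/3, -1/3
        rhs[i] = 1.0        # -f(k-1)/3 + 2f(k)/3 - f(k+1)/3 = 1

f = np.linalg.solve(M, rhs)
print(np.allclose(f, 1.5 * (a**2 - ks**2)))  # True
```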
