Stochastic Calculus - Problem set 6 - Fall 2002
Exercise 1-a  To simplify the notation, we will write $X_i$ for $X_{t_i}$. The simplest way of finding the conditional law of $X_2$ given $X_1$ and $X_3$ is to perform the linear regression
$$X_2 - X_1 = \alpha (X_3 - X_1) + \beta + \xi,$$
where $\xi$ is a mean-zero Gaussian independent of $X_3 - X_1$. First, taking expectations on both sides forces $\beta$ to be zero. To find $\alpha$, we multiply by $X_3 - X_1$ and take the expectation to end up with $\alpha = \frac{t_2 - t_1}{t_3 - t_1}$. Now we write the above relationship as
$$X_2 = X_1 + \frac{t_2 - t_1}{t_3 - t_1}\,(X_3 - X_1) + \xi.$$
Therefore we get
$$E[X_2 \mid X_1, X_3] = X_1 + \frac{t_2 - t_1}{t_3 - t_1}\,(X_3 - X_1)$$
and $\operatorname{Var}[X_2 \mid X_1, X_3] = \operatorname{Var}(\xi)$. But we have
$$\operatorname{Var}(\xi) = (t_2 - t_1) - (t_3 - t_1)\left(\frac{t_2 - t_1}{t_3 - t_1}\right)^2.$$
We can simplify those two expressions to find that the conditional distribution is
$$\mathcal{N}\!\left(\frac{t_3 - t_2}{t_3 - t_1}\,X_{t_1} + \frac{t_2 - t_1}{t_3 - t_1}\,X_{t_3}\;;\;\frac{(t_3 - t_2)(t_2 - t_1)}{t_3 - t_1}\right)$$

Exercise 1-b  If we take $t \in [t_{k,L}, t_{k+1,L}]$, then we can express $X^{L+1}$ as a function of $X^L$ by writing
$$X^{L+1}(t) = X^L(t) + \left(X(t_{2k+1,L+1}) - \frac{X(t_{k,L}) + X(t_{k+1,L})}{2}\right) h_{k,L}(t).$$
The $Y_{k,L} = X(t_{2k+1,L+1}) - \frac{X(t_{k,L}) + X(t_{k+1,L})}{2}$ have the required property, and since the supports of the $h_{k,L}$ don't overlap, what's true for $t \in [t_{k,L}, t_{k+1,L}]$ is actually true for every $t$, and we have
$$X^{L+1}(t) = X^L(t) + \sum_{k=0}^{n-1} Y_{k,L}\, h_{k,L}(t).$$
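The midpoint refinement above is easy to simulate. Here is a minimal sketch in Python, assuming horizon $T = 1$, level $L = 6$, and the variance $\sigma_L^2 = T/2^{L+2}$ for the $Y_{k,L}$ that is used in Exercise 1-e:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 1.0, 6                      # assumed horizon and refinement level
n = 2 ** L                         # number of dyadic intervals at level L

# Sample X^L exactly on its grid via independent Gaussian increments.
X_L = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))])

# At each midpoint t_{2k+1,L+1}, displace the linear interpolation by Y_{k,L}.
# By Exercise 1-a (with t_3 - t_1 = T/2^L), Var(Y_{k,L}) = T/2^{L+2} = sigma_L^2.
sigma_L = np.sqrt(T / 2 ** (L + 2))
Y = rng.normal(0.0, sigma_L, n)    # the Y_{k,L}, independent of X^L
midpoints = (X_L[:-1] + X_L[1:]) / 2 + Y

# Interleave old grid values and new midpoint values to obtain X^{L+1}.
X_L1 = np.empty(2 * n + 1)
X_L1[0::2] = X_L                   # X^{L+1} agrees with X^L on the coarse grid
X_L1[1::2] = midpoints
```

On the coarse grid $X^{L+1}$ agrees with $X^L$, which is exactly the statement that the $h_{k,L}$ vanish at the points $t_{k,L}$.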
Exercise 1-c  If $X$ is a standard Gaussian, then for any $\epsilon > 0$ we have
$$P(|X| \geq \epsilon) = 2\,\frac{1}{\sqrt{2\pi}}\int_\epsilon^\infty e^{-\frac{x^2}{2}}\,dx \leq 2\,\frac{1}{\sqrt{2\pi}}\int_\epsilon^\infty \frac{x}{\epsilon}\,e^{-\frac{x^2}{2}}\,dx = \frac{2}{\epsilon\sqrt{2\pi}}\,e^{-\frac{\epsilon^2}{2}}.$$
Now if $Y$ is a Gaussian $\mathcal{N}(\mu, \sigma^2)$, then $X = \frac{Y - \mu}{\sigma}$ is a standard Gaussian, and applying the previous result we get
$$P(|Y - \mu| \geq \alpha) \leq \sqrt{\frac{2}{\pi}}\,\frac{\sigma}{\alpha}\,e^{-\frac{\alpha^2}{2\sigma^2}}.$$
We obtain the required bound when $\sigma \leq \alpha$ by noticing that $\sqrt{\frac{2}{\pi}} \leq 1$.
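The tail bound can be checked numerically against the exact Gaussian tail. A small sketch (the test points are arbitrary choices):

```python
import math

def tail_exact(eps):
    """P(|X| >= eps) for a standard Gaussian X, via the complementary error function."""
    return math.erfc(eps / math.sqrt(2.0))

def tail_bound(eps):
    """The bound sqrt(2/pi) * (1/eps) * exp(-eps^2/2) derived above (sigma = 1)."""
    return math.sqrt(2.0 / math.pi) / eps * math.exp(-eps ** 2 / 2.0)

for eps in (1.0, 2.0, 3.0):
    assert tail_exact(eps) <= tail_bound(eps)
    # Since sqrt(2/pi) <= 1, for eps >= 1 the bound is itself <= exp(-eps^2/2).
    assert tail_bound(eps) <= math.exp(-eps ** 2 / 2.0)
```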
Exercise 1-d  There is nothing to it but the simple union bound
$$P\!\left(\bigcup_{k=0}^{n-1}\{|Y_k| \geq \alpha\}\right) \leq \sum_{k=0}^{n-1} P(|Y_k| \geq \alpha) \leq n\,e^{-\frac{\alpha^2}{2\sigma^2}}.$$
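A Monte Carlo sketch of this union bound (the values of $n$, $\sigma$, $\alpha$ are arbitrary illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, alpha = 16, 1.0, 2.5                   # assumed illustrative values
Y = rng.normal(0.0, sigma, size=(200_000, n))    # each row: one draw of (Y_0, ..., Y_{n-1})

# Empirical P(union_k {|Y_k| >= alpha}) against the bound n * exp(-alpha^2 / (2 sigma^2)).
p_union = np.mean(np.abs(Y).max(axis=1) >= alpha)
bound = n * np.exp(-alpha ** 2 / (2 * sigma ** 2))
assert p_union <= bound
```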
Exercise 1-e  The first thing to notice is that
$$P\!\left(|X^{L+1}_t - X^L_t| > \Delta t^{1/4} \text{ for some } t \in [0,T]\right) = P\!\left(\max_{t \in [0,T]} |X^{L+1}_t - X^L_t| > \Delta t^{1/4}\right)$$
and by construction the maximum can only occur at one of the midpoints, where the difference equals $Y_{k,L}$. The above probability can therefore be written as
$$P\!\left(|Y_{k,L}| > \Delta t^{1/4} \text{ for some } k\right) < n\,e^{-\frac{\Delta t^{1/2}}{2\sigma_L^2}}.$$
By plugging in the values $\Delta t = \frac{T}{2^L}$ and $\sigma_L^2 = \frac{T}{2^{L+2}}$, we see the argument of the exponential is exponentially decreasing with respect to $L$. This is far better than the linear decrease we are asked to prove, and therefore there is a $\beta > 0$ such that
$$P\!\left(|X^{L+1}_t - X^L_t| > \Delta t^{1/4} \text{ for some } t \in [0,T]\right) < e^{-\beta L}.$$
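To see how fast this kicks in, one can tabulate the magnitude $\Delta t^{1/2}/(2\sigma_L^2)$ of the exponential's argument for a few levels (taking $T = 1$ as an assumed horizon):

```python
import math

T = 1.0                                     # assumed horizon
for L in range(1, 9):
    dt = T / 2 ** L
    sigma_L2 = T / 2 ** (L + 2)
    exponent = math.sqrt(dt) / (2 * sigma_L2)
    # Simplifies to 2^{L/2 + 1} / sqrt(T): it grows geometrically in L,
    # so n * exp(-exponent) decays much faster than any e^{-beta L}.
    assert math.isclose(exponent, 2 ** (L / 2 + 1) / math.sqrt(T))
```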
Exercise 2-a  This is straightforward: since the components of the multidimensional Brownian motion are independent, its density is just the product of the marginal densities, and we find
$$u(x,t) = \frac{1}{(2\pi t)^{n/2}}\,e^{-\frac{x_1^2 + \dots + x_n^2}{2t}} = \frac{1}{(2\pi t)^{n/2}}\,e^{-\frac{\|x\|^2}{2t}}.$$

Exercise 2-b  If we write $g(x,t) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{x^2}{2t}}$ for the 1-dimensional density of the Brownian motion, then as we have seen before
$$u(x,t) = \prod_{i=1}^n g(x_i, t).$$
If we take the derivative with respect to $t$, we get
$$\partial_t u(x,t) = \sum_{i=1}^n \partial_t g(x_i, t)\prod_{k \neq i} g(x_k, t).$$
Now we take the second derivative with respect to $x_i$, which is easier since $x_i$ only appears in one factor:
$$\partial_{x_i x_i} u(x,t) = \partial_{x_i x_i} g(x_i, t)\prod_{k \neq i} g(x_k, t),$$
and it follows that
$$\Delta u(x,t) = \sum_{i=1}^n \partial_{x_i x_i} g(x_i, t)\prod_{k \neq i} g(x_k, t).$$
Therefore, by factoring the difference,
$$\left(\partial_t - \tfrac{1}{2}\Delta\right) u(x,t) = \sum_{i=1}^n \left(\partial_t - \tfrac{1}{2}\partial_{x_i x_i}\right) g(x_i, t)\prod_{k \neq i} g(x_k, t) = 0$$
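This computation can be sanity-checked by finite differences. A minimal sketch for $n = 2$ at an arbitrary point (the point and step size are assumptions):

```python
import numpy as np

def u(x, t):
    """Density of n-dimensional Brownian motion at time t, for an n-vector x."""
    n = len(x)
    return np.exp(-np.dot(x, x) / (2 * t)) / (2 * np.pi * t) ** (n / 2)

# Check (d/dt - 1/2 Laplacian) u = 0 by central finite differences.
x0, t0, h = np.array([0.3, -0.7]), 1.5, 1e-4
du_dt = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
lap = 0.0
for i in range(len(x0)):
    e = np.zeros_like(x0)
    e[i] = h
    lap += (u(x0 + e, t0) - 2 * u(x0, t0) + u(x0 - e, t0)) / h ** 2
assert abs(du_dt - 0.5 * lap) < 1e-6
```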
and $u$ satisfies the heat equation.

Exercise 2-c  As usual, $f(x,t) = E_{x,t}[f(x + \Delta x, t + \Delta t)]$ is a direct consequence of the tower property. In the Taylor expansion of $f$, let's notice that by independence the expectation of each cross-term satisfies $E_{x,t}[\Delta x_i\,\Delta x_j] = 0$ if $i \neq j$, and therefore only the diagonal terms remain:
$$f(x + \Delta x, t + \Delta t) = f(x,t) + \Delta t\,\partial_t f(x,t) + \sum_{i=1}^n \Delta x_i\,\partial_{x_i} f(x,t) + \frac{1}{2}\sum_{i=1}^n (\Delta x_i)^2\,\partial_{x_i x_i} f(x,t) + o(\Delta t).$$
As we take the conditional expectation, the terms $E_{x,t}[\Delta x_i]$ vanish and $E_{x,t}[(\Delta x_i)^2] = \Delta t$, so we end up with
$$\partial_t f + \frac{1}{2}\Delta f = 0,$$
which is the backward $n$-dimensional heat equation, as expected from the duality argument.

Exercise 2-d  A simple change of variables in the Brownian motion density shows that
$$u(x,t) = |\det A|\,v(Ax, t).$$
Since we know that $\partial_t u = \frac{1}{2}\Delta u$, by simply differentiating the above formula we get a PDE for $v$. First we have
$$\partial_{x_i} v(Ax, t) = \sum_{j=1}^n a_{j,i}\,\partial_{y_j} v(Ax, t),$$
and taking the second derivative and summing over $i$ yields
$$\partial_t v = \frac{1}{2}\sum_{j,k=1}^n b_{j,k}\,\partial_{y_j}\partial_{y_k} v, \qquad \text{where } b_{j,k} = \sum_{i=1}^n a_{j,i} a_{k,i},$$
i.e. $(b_{j,k}) = AA^T$.
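A finite-difference sanity check of the PDE for $v$, taking $v$ to be the density of $A W_t$ (the matrix, point, and step size are assumed illustrative values):

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.2, 2.0]])   # an arbitrary invertible matrix (assumption)
B = A @ A.T                              # b_{j,k} = sum_i a_{j,i} a_{k,i}

def v(y, t):
    """Density of A W_t: a centered Gaussian with covariance t * A A^T."""
    C = t * B
    return np.exp(-y @ np.linalg.solve(C, y) / 2) / np.sqrt((2 * np.pi) ** len(y) * np.linalg.det(C))

# Check d/dt v = 1/2 * sum_{j,k} b_{j,k} d_{y_j} d_{y_k} v by central differences.
y0, t0, h = np.array([0.4, -0.3]), 1.2, 1e-3
dv_dt = (v(y0, t0 + h) - v(y0, t0 - h)) / (2 * h)
m = len(y0)
hess = np.zeros((m, m))
for j in range(m):
    for k in range(m):
        ej = np.zeros(m); ej[j] = h
        ek = np.zeros(m); ek[k] = h
        hess[j, k] = (v(y0 + ej + ek, t0) - v(y0 + ej - ek, t0)
                      - v(y0 - ej + ek, t0) + v(y0 - ej - ek, t0)) / (4 * h * h)
assert abs(dv_dt - 0.5 * np.sum(B * hess)) < 1e-5
```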
Exercise 3  Using the tower property, we see that
$$f(x,y,t) = E_{x,y,t}[f(x + \Delta x, y + \Delta y, t + \Delta t)].$$
Then we do a Taylor expansion of the function inside the expectation, and we notice that $\Delta y = x\,\Delta t + O(\Delta t^2)$. Therefore every cross-derivative gives a term of order greater than or equal to $O(\Delta t^2)$, and we have as usual $E_{x,y,t}[\Delta x] = 0$ and $E_{x,y,t}[\Delta y] = x\,\Delta t$. Therefore we end up with the PDE
$$\partial_t f + \frac{1}{2}\partial_{xx} f + x\,\partial_y f = 0.$$
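As a check, one explicit solution of this PDE is $f(x,y,t) = E[Y_T^2 \mid X_t = x, Y_t = y] = (y + x(T-t))^2 + (T-t)^3/3$; this closed form is a hypothetical illustration, not part of the problem set, and finite differences confirm it satisfies the equation:

```python
def f(x, y, t, T=2.0):
    """Closed-form E[Y_T^2 | X_t = x, Y_t = y] for dY = X dt with Brownian X
    (an illustrative solution assumed here, not taken from the problem set)."""
    tau = T - t
    return (y + x * tau) ** 2 + tau ** 3 / 3.0

# Check d/dt f + 1/2 d_xx f + x d_y f = 0 by central differences.
x0, y0, t0, h = 0.7, -0.2, 0.5, 1e-4
df_dt = (f(x0, y0, t0 + h) - f(x0, y0, t0 - h)) / (2 * h)
df_xx = (f(x0 + h, y0, t0) - 2 * f(x0, y0, t0) + f(x0 - h, y0, t0)) / h ** 2
df_dy = (f(x0, y0 + h, t0) - f(x0, y0 - h, t0)) / (2 * h)
assert abs(df_dt + 0.5 * df_xx + x0 * df_dy) < 1e-6
```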