Binomial approximation of Brownian motion and its maximum

ARTICLE IN PRESS

Statistics & Probability Letters 69 (2004) 271–285

Binomial approximation of Brownian motion and its maximum

Raffaella Carbone*

Dipartimento di Matematica, Università degli Studi di Pavia, via Ferrata 1, 27100 Pavia, Italy

Received 2 April 2004; available online 6 July 2004

Abstract

Motivated by some typical option pricing problems, we study how to estimate quantities of the form E[f(B_t, sup_{s≤t} B_s)] by replacing the Brownian motion (B_t)_{t≥0} with a binomial random walk. The approximating term can be explicitly computed, without using any simulation. We investigate the rate of convergence of this approximation method and we study some applications, in particular the case of barrier options.
© 2004 Elsevier B.V. All rights reserved.

MSC: primary 60G50; secondary 91B28

Keywords: Brownian motion; Binomial random walk; Maximum and barrier options

1. Introduction

We introduce a continuous Brownian motion (B_t)_{t≥0} and define the related maximum process B*_t = sup_{0≤s≤t} B_s, for any time t. We consider the problem of evaluating functions u of the form

u: [0,T] × {(x,y) ∈ R²: x ≤ y, y ≥ 0} → R,

u(t,x,y) = E[f(B_T, B*_T) | B_t = x, B*_t = y],   (1)

where T is a constant and f is a proper two-variable function. The interest in this kind of problem comes, for instance, from some financial problems connected with the evaluation of the price of lookback options. We denote by (x_t)_t the real-valued process representing the price of the underlying asset. A lookback option c is an option whose

*Tel.: +39-382-505627; fax: +39-382-505602. E-mail address: [email protected] (R. Carbone).

0167-7152/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2004.06.020


payoff depends on the final price of the underlying asset and on the maximum (or the minimum) price of the same asset in the time interval [0,T] between the contract and the expiration dates. Essentially, such an option c can be expressed as F(x_T, max_{t∈[0,T]} x_t), with F a measurable function; some classical examples are F(x,y) = (y − x) or F(x,y) = (K − y)_+, K constant. A remarkable subclass of lookback options is the set of barrier options, i.e. options which cannot be exercised if the underlying asset's price achieves some prescribed level in the time interval [0,T]. We have a barrier option when c = F₁(x_T) 1_J(max_{t∈[0,T]} x_t), with J a real interval and F₁ a measurable function representing a European option. For some examples and more details about these options, one can see Hull (2000). We consider the Black–Scholes model, so the process (x_t)_t is supposed to be a geometric Brownian motion. Then (see, for instance, Lamberton and Lapeyre, 1996), the value at time t ∈ [0,T] of a square integrable lookback option c = F(x_T, max_{t∈[0,T]} x_t) is given by E[c | x_r, r ∈ [0,t]] (we can forget the discounting term). By the Markov property of the process (x_t, max_{r∈[0,t]} x_r)_t and by the Girsanov theorem, this conditional expectation can be written in the same form as (1) (the transformation will obviously be determined by the drift and the diffusion coefficient of the geometric Brownian motion). This is why, in the sequel, we will always use only formulation (1) of the problem.
Moreover, by using the Markov and scaling properties of the Brownian motion, one can easily see that the problem can be reduced to the evaluation of quantities of the form E[f(B_1, B*_1)]. Always in view of the applications we have described, the function f in (1) should be either a Lipschitz function or a function of the form f(x,y) = f₁(x) 1_J(y), with f₁: R → R Lipschitz and 1_J the indicator function of the real interval J. Lipschitz functions appear, for instance, in problems related to pricing call and put options on the maximum, while functions of the second form usually represent barrier options. So, since many options are expressed through functions which are not more regular than Lipschitz, we choose to avoid differentiability hypotheses on f. Obviously, since the joint density of the random vector (B_T, B*_T) is known, it is possible to solve this problem by numerical integration methods or by simulation. Here we analyze another approach, based on the approximation by binomial (and trinomial) trees; this method uses probabilistic arguments, has an approximating term which can be easily computed, and guarantees a non-random bound for the error. We recall that binomial and trinomial trees are quite popular in financial applications (think of the Cox–Ross–Rubinstein model), but see also some more recent results cited in Section 3. We will see how to estimate the function u by replacing the Brownian motion with a binomial random walk (S_n)_{n≥0} and its associated maximum process S*_n = sup_{m≤n} S_m; more precisely, we will approximate E[f(B_1, B*_1)] by E[f((S_n, S*_n)/√n)] (with a time discretization step 1/n). The convergence of the approximating sequence is trivial, but the rate depends on the regularity of the function f. We will investigate this rate, which is important for the implementation of the approximating algorithm.
In the next section, we study some properties of the discrete random walk (S_n, S*_n)_{n≥0} and we explain how to compute exactly E[f((S_n, S*_n)/√n)] by means of a backwards procedure which exploits the Markov property of the discrete process. We also compute the "cost" of the algorithm we describe: the number of operations turns out to be asymptotic to n³/12. Then, in Section 3, we compute the "speed of convergence" of the approximating sequence and we find it is of order n^{−1/2}. Essentially, we prove that, for the functions f described above, there


exists a positive constant C₀ such that

|E[f(B_1, B*_1)] − E[f(S_n/√n, S*_n/√n)]| ≤ C₀/√n.

We also underline that this rate of convergence cannot be improved for a generic Lipschitz function, but we suggest a variation of the method, which can be applied in some cases and has a better complexity and a convergence of order n^{−1}. In Section 4, we show some examples of the improvement of the rate of convergence of the algorithm. Then, we explain how all the previous results can be generalized to the case of a trinomial random walk. Finally, in Section 5, we consider some links with partial differential equation problems. We show that the function u in (1) is the solution of a partial differential equation problem (P) and that the method based on the approximation of the Brownian motion by a binomial random walk is in some way equivalent to the numerical solution of the problem (P) by a finite difference method. This allows us to deduce the convergence of the latter from the probabilistic results in the previous sections.

2. Brownian motion and binomial random walk

Firstly, we want to convince ourselves that all the values assumed by the function u can really be written in the form E[f̃(B_1, B*_1)]. We have noticed that, since the Brownian motion is a homogeneous Markov process,

u(t,x,y) = E[f(x + B_{T−t}, y ∨ (x + B*_{T−t}))] = E[f₀(B_{T−t}, B*_{T−t})],

where f₀(x′, y′) = f(x + x′, y ∨ (x + y′)). Then, we can use the scaling property (i.e. (B_t, B*_t) has the same law as √t (B_1, B*_1)) and claim that u(t,x,y) = E[f̃(B_1, B*_1)], where f̃(x,y) = f₀(√(T−t) (x,y)). Since, when f is Lipschitz, then also f̃ is (and similarly when f is of the form f(x,y) = f₁(x) 1_{(M₀,M₁)}(y)), it is clear that the problem of estimating u(t,x,y) consists in approximating quantities of the form E[f(B_1, B*_1)] (possibly replacing f with f̃). So, from now on, without loss of generality, we can consider t = 0 and T = 1.

We now introduce the binomial random walk which will approximate the Brownian motion. Let us consider a sequence (X_n)_{n≥1} of independent identically distributed random variables such that P{X_n = 1} = P{X_n = −1} = 1/2. For all n, let S_n be the sum of the first n terms:

S_0 = 0 and S_n = X_1 + ... + X_n for n ≥ 1.

We fix the step of discretization for the time variable equal to 1/n, and it is quite natural to define the approximating process B^(n) = (B^(n)_s)_{s≥0} by

B^(n)_s = (S_{[sn]} + (sn − [sn]) X_{[sn]+1}) / √n.
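As an aside, the interpolation above is easy to reproduce in code; the following Python sketch (the function name and the NumPy-based sampling are our illustrative choices, not taken from the paper) samples B^(n) at given times:

```python
import numpy as np

def brownian_approx_path(n, times, rng):
    """Sample the interpolated walk B^(n) of the text at the given times:
    B^(n)_s = (S_[sn] + (sn - [sn]) X_{[sn]+1}) / sqrt(n)."""
    horizon = int(np.ceil(max(times) * n)) + 1
    steps = rng.choice([-1.0, 1.0], size=horizon)        # X_1, X_2, ...
    partial = np.concatenate([[0.0], np.cumsum(steps)])  # S_0, S_1, ...
    out = []
    for s in times:
        k = int(np.floor(s * n))     # [sn]
        frac = s * n - k             # sn - [sn]
        out.append((partial[k] + frac * steps[k]) / np.sqrt(n))
    return np.array(out)
```

Note that `steps[k]` is X_{k+1}, since `steps[0]` holds X_1.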

The sequence of processes (B^(n))_{n≥1} converges weakly to a standard one-dimensional Brownian motion by Donsker's invariance principle. In particular, we will concentrate on the convergence


of the pair (S_n, S*_n)/√n to (B_1, B*_1), where the multiplication term 1/√n can be interpreted as a step of discretization for the space (or price) variables.

The discrete process (S_n, S*_n)_{n≥0}. The homogeneous Markov chain (S_n, S*_n)_{n≥0} has state space E = {(l,m) ∈ Z²: l ≤ m, m ≥ 0} and transition probabilities

P{(S_{n+1}, S*_{n+1}) = (l′, m′) | (S_n, S*_n) = (l,m)}
  = 1/2 if l < m, m′ = m, l′ ∈ {l − 1, l + 1};
  = 1/2 if l = m, (l′, m′) ∈ {(l − 1, l), (l + 1, l + 1)};
  = 0 otherwise.

We want to determine the law of the pair (S_n, S*_n). For this computation, we follow some classical arguments used for finding the law of (B_1, B*_1) (see, for instance, Lamberton and Lapeyre, 1996, Chapter 3). Let τ_l = inf{n ≥ 0: S_n ≥ l} for l ∈ N; then, for any Borel function h, by the symmetry of S_n, we have

E[h(S_n) 1_{τ_l ≤ n}] = E[1_{τ_l ≤ n} h(S_{n−τ_l} + l)] = E[1_{τ_l ≤ n} h(l − S_{n−τ_l})] = E[1_{τ_l ≤ n} h(2l − S_n)].

Choosing h = 1_{(−∞,l]}, we have P{S_n ≤ l, τ_l ≤ n} = P{S_n ≤ l, S*_n ≥ l} = P{S_n ≥ l, S*_n ≥ l} = P{S_n ≥ l}; consequently,

P{S*_n ≥ l} = P{S_n ≤ l, S*_n ≥ l} + P{S_n > l, S*_n ≥ l} = P{S_n ≥ l} + P{S_n > l}

and, in particular, P{S*_n = l} = P{S_n = l} + P{S_n = l + 1}. For m ≤ l, taking h = 1_{(−∞,m]}, we have P{S_n ≤ m, S*_n ≥ l} = P{S_n ≥ 2l − m, S*_n ≥ l} = P{S_n ≥ 2l − m}, while, for m ≥ l, we can write P{S_n ≤ m, S*_n ≥ l} = P{S_n ≤ m} − P{S*_n < l} = P{S_n ≥ l} + P{l < S_n ≤ m}. From these relations, we deduce the density of the pair (S_n, S*_n), that is, for m ≤ l,

P{S_n = m, S*_n = l} = P{S_n = 2l − m} − P{S_n = 2l − m + 2}.   (2)
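Formula (2) can be sanity-checked by brute force: for small n one can enumerate all 2^n sign paths, tabulate the pair (S_n, S*_n), and compare with the reflection identity. A short Python sketch (the function names are ours):

```python
from itertools import product
from math import comb

def joint_law_exact(n):
    """Joint law of (S_n, S_n*) by enumerating all 2^n sign paths."""
    law = {}
    for signs in product((-1, 1), repeat=n):
        s = m = 0                      # running sum and running maximum
        for x in signs:
            s += x
            m = max(m, s)
        law[(s, m)] = law.get((s, m), 0) + 1
    return {k: c / 2 ** n for k, c in law.items()}

def walk_pmf(n, m):
    """P{S_n = m} for the symmetric walk (0 on parity mismatch)."""
    if abs(m) > n or (n + m) % 2:
        return 0.0
    return comb(n, (n + m) // 2) / 2 ** n
```

Comparing `joint_law_exact(n)` entry by entry with `walk_pmf(n, 2*l − m) − walk_pmf(n, 2*l − m + 2)` reproduces (2) exactly.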

Description of the algorithm and computation of its complexity. The Markov property of the process (S_n, S*_n)_{n≥0} allows us to describe an easy algorithm which computes the approximating term E[f((S_n, S*_n)/√n)]. We denote by E_m, m ≥ 0, the subset of the states which can be attained by the Markov chain (S_n, S*_n)_{n≥0} in m steps, starting from (0,0). Let us also define, for m ≥ 0 and (j,l) in E_m (i.e. l ≥ 0, 2l − m ≤ j ≤ l),

v(m, j, l) = E[f((S_n, S*_n)/√n) | S_m = j, S*_m = l].

Since, from any state (j,l), the chain (S_n, S*_n)_{n≥0} can move, in one step, to only two states, which we will call (j′, l′) and (j″, l″), each with transition probability 1/2, we have, by the homogeneous Markov property of the chain,

v(m, j, l) = E[f((j + S_{n−m}, l ∨ (j + S*_{n−m}))/√n)] = ½ (v(m + 1, j′, l′) + v(m + 1, j″, l″)).   (3)

If we notice that the terms v(n, j, l) = f((j,l)/√n) are known, the previous considerations allow us to determine E[f((S_n, S*_n)/√n)] = v(0, 0, 0) with a backwards procedure. The algorithm can be


written as

for (j,l) in E_n: v(n, j, l) := f((j,l)/√n)
for m = n − 1 down to 0:
    for (j,l) in E_m: v(m, j, l) := ½ (v(m + 1, j′, l′) + v(m + 1, j″, l″)).

The algorithm consists of an initialization, where we need to evaluate f at different points, followed by n steps, where, at each step m, m = n − 1, ..., 0, we need to evaluate v(m, j, l) for all the pairs (j,l) which represent some state that can be attained with exactly m steps starting from (0,0). One can easily notice that

a state (j,l) is in E_m, m odd ⇔ −m ≤ j ≤ m, j odd, 0 ∨ j ≤ l ≤ (j + m)/2;
a state (j,l) is in E_m, m even ⇔ −m ≤ j ≤ m, j even, 0 ∨ j ≤ l ≤ (j + m)/2.

So the cardinality of E_m is equal to

Σ_{|j|≤m, j+m even} ((m + j)/2 − (j ∨ 0) + 1) = (m + 1)(m + 3)/4 for m odd;  (m/2 + 1)² for m even.
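A direct implementation of this backwards procedure takes only a few lines. The Python sketch below (the dictionary-based state handling is our choice; the paper only prescribes recursion (3)) returns v(0,0,0) = E[f((S_n, S*_n)/√n)]:

```python
from math import sqrt

def expect_f_binomial(f, n):
    """Backwards procedure of Section 2: returns
    v(0,0,0) = E[f(S_n / sqrt(n), S_n* / sqrt(n))].

    States after m steps are pairs (j, l): position j (same parity as m,
    |j| <= m) and running maximum l, with max(j, 0) <= l <= (j + m) / 2.
    The cost is O(n^3) elementary operations, as computed in the text."""
    r = sqrt(n)
    v = {(j, l): f(j / r, l / r)                     # initialization on E_n
         for j in range(-n, n + 1, 2)
         for l in range(max(j, 0), (j + n) // 2 + 1)}
    for m in range(n - 1, -1, -1):                   # recursion (3)
        v = {(j, l): 0.5 * (v[(j - 1, l)] + v[(j + 1, max(l, j + 1))])
             for j in range(-m, m + 1, 2)
             for l in range(max(j, 0), (j + m) // 2 + 1)}
    return v[(0, 0)]
```

For instance, `expect_f_binomial(lambda x, y: y, n)` returns the exact value of E[S*_n]/√n.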

Summing up, the algorithm needs the following operations:

* about n²/4 evaluations of the function f, for the computation of the terms v(n, j, l);
* about n³/12 operations (= 1 sum + 1 division), since, for m < n, v(m, j, l) is determined by means of (3), and so we have to use this relation a number of times equal to

Σ_{k=0}^{[n/2]} (k + 1)² + Σ_{k=0}^{[(n−1)/2]} (k + 1)(k + 2) ~ n³/12

(we denote by [x] the integer part of any real number x and we recall that Σ_{k=0}^{n} k² = (2n³ + 3n² + n)/6).

3. The rate of convergence

In this section we give the optimal bound for the speed of convergence of the approximating algorithm described in the previous section. The proof is written in detail for Lipschitz functions and then extended to the case of barrier options. In the case f is a Lipschitz function depending only on the first variable, some results are known, which, under different conditions, guarantee a rate of convergence of order n^{−1/2} or n^{−1} (see Lamberton, 1999; Talay and Tubaro, 1990; Walsh, 2003). Here, we prove a preliminary convergence result for one-variable functions in Lemma 3.1, which guarantees a rate of convergence of order n^{−1/2} for a suitable class of functions. Then, we exploit this fact in order to prove Theorem 3.1 below, which gives a bound on the error for functions depending also on the maximum process. Our kind of problem is very similar, in particular, to the one in Walsh (2003) (when we do not consider the dependence on the maximum). In Walsh (2003), the author works with exponential binomial trees and, by embedding techniques, he obtains an expansion of the error; this is a really


stronger result than ours, but it holds for functions which are at least piecewise C². For similar convergence problems involving the maximum process, one can also see Baldi (1995), Baldi and Caramellino (2002), Gobet (2000) (all connected with the barrier case), and Tonou (1997), where there is only a partial proof, but for a more general context.

Theorem 3.1. Let f be either a function of the form f(x,y) = f₁(x) 1_{(M₀,M₁)}(y), with f₁ Lipschitz and 0 ≤ M₀ < M₁ ≤ +∞, or a two-variable Lipschitz function. Then there exists a constant C₀ such that, for all n,

|E[f(B_1, B*_1)] − E[f((S_n, S*_n)/√n)]| ≤ C₀/√n.

We start by observing that, once we have proved the previous result, we can state that the rate of convergence cannot be better than n^{−1/2} for a generic Lipschitz function (for functions of the first form, this is well known). Take f(x,y) = e^{ity} and h_t(x) = cos(tx) + 2i sin(tx) 1_{(0,+∞)}(x), for some real t, t ≠ 0. Then, since we know the laws of B*_1 and S*_n (see the previous section), we have

E[e^{itB*_1}] − E[e^{itS*_n/√n}] = (2/√(2π)) ∫_0^{+∞} e^{−x²/2} e^{itx} dx − E[e^{itS_n/√n} 1_{S_n≥0}] − E[e^{it(S_n−1)/√n} 1_{S_n≥1}]

and, taking n odd, in order to have P{S_n = 0} = 0,

= E[h_t(B_1)] − E[h_t(S_n/√n)] + ½ (1 − e^{−it/√n}) E[h_t(S_n/√n)].

This relation shows that at least one of the two functions f and h_t (which are Lipschitz) has rate of convergence n^{−1/2}. Indeed, if it is not h_t, then

* e(n) := E[h_t(B_1)] − E[h_t(S_n/√n)] goes to zero faster than 1/√n;
* E[h_t(S_n/√n)] converges to E[h_t(B_1)], which is different from 0 at least for a suitable t ≠ 0 (since t ↦ E[h_t(B_1)] is a continuous function and E[h_0(B_1)] = 1);

so the second term of the sum in the previous relation is asymptotic to (it/(2√n)) E[h_t(B_1)].

Now we prove Theorem 3.1. We start with the case when f is Lipschitz, and we call L its Lipschitz constant. The idea of the proof (divided into 3 steps) is based on a non-uniform Berry–Esseen inequality and on the fact that we can write the densities of the random vectors (S_n, S*_n) and (B_1, B*_1) by using only the densities of S_n and B_1, respectively. Before starting the proof, we define condition (LL) for real functions. We say that a function φ: R → R verifies condition (LL) if there exist

(LL) some non-negative constants r and C such that, for w in [0,1], |φ(x) − φ(x + w)| ≤ C w (1 + |x|)^r.

We remark that a function φ verifies this hypothesis, for instance, when it is Lipschitz (this corresponds to the choice r = 0) or when it is a polynomial. It is immediate that such a function φ increases not faster than polynomially, since, for x > 0,

|φ(x)| = |φ(0) + Σ_{j=0}^{[x]−1} (φ(j + 1) − φ(j)) + φ(x) − φ([x])| ≤ |φ(0)| + C(1 + x)^{r+1}

and similarly for x < 0. Moreover, if φ verifies (LL), it is at least locally Lipschitz and also its "discrete derivative" is polynomially bounded.

Step 1: We establish suitable expressions of E[f(B_1, B*_1)] and E[f((S_n, S*_n)/√n)], where we use only the densities of B_1 and S_n, respectively. For the discrete case, by means of (2), we can write

E[f(S_n/√n, S*_n/√n)] = Σ_{l≥0, j≤l} f(j/√n, l/√n) (P{S_n = 2l − j} − P{S_n = 2l − j + 2})
= Σ_{l≥0, m≥l} f((2l − m)/√n, l/√n) (P{S_n = m} − P{S_n = m + 2})
= Σ_{m≥0} ((m + 1)/(n + 1)) (P{S_n = m} + P{S_n = m + 2}) Σ_{l=0}^{m} f((2l − m)/√n, l/√n),   (4)

where we underline that we use the expression of the binomial density only in the last equality. Now, for the Brownian motion, since the joint density of the vector (B_t, B*_t) with respect to the Lebesgue measure is known to be

1_{y≥0} 1_{x≤y} 2(2y − x)(2πt³)^{−1/2} exp(−(2y − x)²/(2t)) dx dy

(see again Lamberton and Lapeyre, 1996, Chapter 3), we get, by a proper change of variable,

E[f(B_1, B*_1)] = ∫_0^{+∞} ∫_y^{+∞} (2z/√(2π)) e^{−z²/2} f(2y − z, y) dz dy = 2E[g(B_1)],

where

g(x) = x ∫_0^x f(2y − x, y) dy 1_{(0,+∞)}(x).   (5)
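Identity (5) lends itself to a numerical cross-check. In the sketch below (the Simpson-rule helper and the truncation of the Gaussian integral at 8 are our own, arbitrary choices) we evaluate 2E[g(B_1)] for a given f by quadrature:

```python
from math import exp, pi, sqrt

def simpson(h, a, b, k=200):
    """Composite Simpson rule for h on [a, b] with 2k panels."""
    step = (b - a) / (2 * k)
    s = h(a) + h(b)
    s += 4 * sum(h(a + (2 * i - 1) * step) for i in range(1, k + 1))
    s += 2 * sum(h(a + 2 * i * step) for i in range(1, k))
    return s * step / 3

def g_of(f):
    """g(x) = x * integral_0^x f(2y - x, y) dy for x > 0, else 0 (eq. (5))."""
    def g(x):
        if x <= 0:
            return 0.0
        return x * simpson(lambda y: f(2 * y - x, y), 0.0, x)
    return g

def two_E_g_B1(f, cutoff=8.0):
    """2 E[g(B_1)] = E[f(B_1, B_1*)], integrating g against the standard
    normal density, truncated at `cutoff` (our arbitrary choice)."""
    g = g_of(f)
    return 2 * simpson(lambda x: g(x) * exp(-x * x / 2) / sqrt(2 * pi),
                       0.0, cutoff)
```

For f = 1 this returns 1, and for f(x,y) = y it returns E[B*_1] = √(2/π), consistent with the reflection-principle density above.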

We point out that, even if f is Lipschitz, the corresponding function g is not always Lipschitz, but it satisfies hypothesis (LL). Indeed, for x > 0 and w in [0,1],

|g(x) − g(x + w)| ≤ x ∫_0^x |f(2y − x, y) − f(2y − x − w, y)| dy + w ∫_0^{x+w} |f(2y − x − w, y)| dy + x ∫_x^{x+w} |f(2y − x − w, y)| dy
≤ wLx² + w(2x + w)[L√2 (x + w) + |f(0,0)|] ≤ w(|f(0,0)| + 4L)(1 + x)².

Step 2: Once we have written E[f(B_1, B*_1)] as the expectation 2E[g(B_1)], we concentrate on the discrete approximation of the latter. In Petrov (1975, Theorem 12, page 124), the following non-uniform version of the Berry–Esseen inequality can be found (see also Osipov, 1969).


Theorem 3.2. Let X₁, ..., X_n be independent identically distributed random variables with E[X₁] = 0, E[X₁²] = 1 and E[|X₁|^b] < ∞ for some integer b ≥ 3, and define S_n = X₁ + ... + X_n. Let F_n and Φ be the cumulative distribution functions of S_n/√n and of the standard normal law, respectively. Then there exists a suitable constant c(b) such that, for any x in R and any n ∈ N,

|F_n(x) − Φ(x)| ≤ c(b) / (√n (1 + |x|)^b).

This result easily implies the following convergence result for the one-variable case.

Lemma 3.1. Let φ be a real function verifying condition (LL). Then, under the hypotheses of the previous theorem with b = r + 2, φ(B_1) and φ(S_n/√n) have finite expectation for any n, and

|E[φ(B_1)] − E[φ(S_n/√n)]| ≤ 2C c(r + 2) n^{−1/2},

where c(r + 2) is the constant that appears in Theorem 3.2.

Proof. Since φ verifies (LL), it does not increase more than polynomially fast. This implies that the expectations we consider are finite. Then we can write

|E[φ(B_1)] − E[φ(S_n/√n)]| = |∫_R φ(x) d(Φ − F_n)(x)|
= lim_{M→+∞} lim_{w→0+} |Σ_{j=−[M/w]+1}^{[M/w]} φ(jw)(Φ(jw) − F_n(jw) − Φ(jw − w) + F_n(jw − w))|
≤ lim_{M→+∞} lim_{w→0+} [ |Σ_j (Φ(jw) − F_n(jw))(φ(jw) − φ(jw + w))|
    + |(Φ − F_n)([M/w]w) φ([M/w]w) − (Φ − F_n)(−[M/w]w) φ((1 − [M/w])w)| ]
≤ lim_{M→+∞} lim_{w→0+} [ Σ_j C w (1 + |j|w)^r c(r + 2)/(√n (1 + |jw|)^{r+2})
    + 2 c(r + 2)(|φ(0)| + C(1 + M)^{r+1})/(√n (1 + w[M/w])^{r+2}) ]
= (c(r + 2) C/√n) ∫_R (1 + |x|)^{−2} dx = 2C c(r + 2)/√n.

This concludes the proof. □

Now, by applying Lemma 3.1, with r = 2, to the function g we obtained in Step 1, we get

|E[g(S_n/√n)] − E[g(B_1)]| ≤ (2c(4)/√n)(|f(0,0)| + 4L),

where c(4) is still the constant introduced in Theorem 3.2.

Step 3: Since, by Step 1, E[f(B_1, B*_1)] = 2E[g(B_1)], the triangle inequality implies

|E[f(B_1, B*_1)] − E[f((S_n, S*_n)/√n)]| ≤ 2|E[g(B_1)] − E[g(S_n/√n)]| + |E[2g(S_n/√n) − f((S_n, S*_n)/√n)]|.
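Lemma 3.1 can be illustrated numerically. For the Lipschitz choice φ(x) = |x| (so r = 0), E[φ(S_n/√n)] can be summed exactly from the binomial law and compared with E[|B_1|] = √(2/π); the helper below and the chosen values of n are ours:

```python
from math import comb, sqrt, pi

def expect_phi_walk(phi, n):
    """Exact E[phi(S_n / sqrt(n))], summing over the binomial law of the
    walk (S_n = 2k - n with probability C(n, k) / 2^n)."""
    return sum(comb(n, k) / 2 ** n * phi((2 * k - n) / sqrt(n))
               for k in range(n + 1))

target = sqrt(2 / pi)  # E[|B_1|]
errs = [abs(expect_phi_walk(abs, n) - target) for n in (25, 100, 400)]
```

The errors decrease with n and stay well below a bound of the form C/√n; for this particular φ the observed decay is in fact faster.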


The first term on the right-hand side has already been treated in Step 2, so, in order to complete the proof of Theorem 3.1, we just need to prove the following result.

Proposition 3.1. There exists a constant C such that, for all n,

|E[f(S_n/√n, S*_n/√n)] − 2E[g(S_n/√n)]| ≤ C/√n.

Proof. The proof of this result consists of quite long but very simple computations. Here, we will denote p_n(m) = P{S_n = m}. By (4), we have

|E[f(S_n/√n, S*_n/√n)] − 2E[g(S_n/√n)]|
= |2 Σ_{m≥1} p_n(m) (m/√n) ∫_0^{m/√n} f(2y − m/√n, y) dy − E[f(S_n/√n, S*_n/√n)]|
≤ 2 |Σ_{m≥1} p_n(m) (m/√n) Σ_{l=0}^{m−1} ( ∫_{l/√n}^{(l+1)/√n} f(2y − m/√n, y) dy − (1/√n) f((2l − m)/√n, l/√n) )|
+ |Σ_{m≥1} p_n(m) (m/n) Σ_{l=0}^{m−1} f((2l − m)/√n, l/√n) − Σ_{m≥0} p_n(m) ((m + 1)/(n + 1)) Σ_{l=0}^{m} f((2l − m)/√n, l/√n)|
+ |Σ_{m≥1} p_n(m) (m/n) Σ_{l=0}^{m−1} f((2l − m)/√n, l/√n) − Σ_{m≥2} p_n(m) ((m − 1)/(n + 1)) Σ_{l=0}^{m−2} f((2l − m + 2)/√n, l/√n)|.

In the last expression, we call A_n, B_n and C_n the first, second and third absolute values, respectively, so that the quantity to be estimated is bounded by 2A_n + B_n + C_n. We look for an upper bound for each of these terms separately. In these computations, we basically use the Schwarz inequality and the Lipschitz property of the function f. Since, on each cell, both arguments of f move by at most a fixed multiple of y − l/√n,

A_n ≤ Σ_{m≥1} p_n(m) (m/√n) Σ_{l=0}^{m−1} ∫_{l/√n}^{(l+1)/√n} |f(2y − m/√n, y) − f((2l − m)/√n, l/√n)| dy
≤ Σ_{m≥1} p_n(m) (m/√n) Σ_{l=0}^{m−1} 3L/(2n) = (3L/(2√n)) E[(S_n²/n) 1_{S_n>0}] = 3L/(4√n).

For the third term, we have

C_n ≤ Σ_{m≥2} p_n(m) ((m − 1)/(n + 1)) Σ_{l=0}^{m−2} |f((2l − m)/√n, l/√n) − f((2l − m + 2)/√n, l/√n)|
+ Σ_{m≥1} p_n(m) (m/n) |f((m − 2)/√n, (m − 1)/√n)|
+ Σ_{m≥2} p_n(m) (m/n − (m − 1)/(n + 1)) Σ_{l=0}^{m−2} |f((2l − m)/√n, l/√n)|,

where m/n − (m − 1)/(n + 1) = (m + n)/(n(n + 1)). Bounding the increments of f through its Lipschitz constant and |f| through |f(0,0)| plus a linear term, each of these three sums is smaller than a constant (depending only on L and |f(0,0)|) times 1/√n, where we have simply used that E[|S_n|]/√n ≤ √(E[S_n²]/n) = 1. For the term B_n the computation is almost the same. □

The case of barrier options: The proof of Theorem 3.1 for Lipschitz functions can be easily adapted to the case when f is of the form f(x,y) = f₁(x) 1_{(M₀,M₁)}(y), with M₀ < M₁ ≤ +∞ and f₁ Lipschitz. For Steps 1 and 2, we just need to remark that the corresponding function g can be written as

g(x) = x ∫_{M₀}^{x∧M₁} f₁(2y − x) dy 1_{(M₀,+∞)}(x) = (x/2) ∫_{2M₀−x}^{x∧(2M₁−x)} f₁(z) dz 1_{(M₀,+∞)}(x)

and it verifies hypothesis (LL). For Step 3, with the same ideas as before, one can prove the result in Proposition 3.1. In this context, some possible choices of f₁ are f₁(x) = (K − x)_+, f₁(x) = (x − K)_+, f₁(x) = (K − e^{sx})_+, or also, when M₁ < ∞, f₁(x) = (e^{sx} − K)_+ (functions that can be used when considering European options with barriers). In all these cases, we can explicitly compute the integral which defines the function g.

Remark 3.1 (How to improve the numerical algorithm). The proof of Theorem 3.1 suggests a possible improvement of the numerical method. In the case when the integral in (5), defining the function g, can be explicitly computed, it can be numerically convenient to approximate E[f(B_1, B*_1)] directly with 2E[g(S_n/√n)]. This can offer different advantages.

(i) The complexity of the algorithm can be reduced: the computation of E[g(S_n/√n)] needs n evaluations of g and n²/2 operations (the arguments are the same used for computing E[f((S_n, S*_n)/√n)]). Obviously, this is convenient only if the evaluations of g are not much heavier than those of f.

(ii) Remark that, when g verifies some additional conditions, the rate of convergence can be improved, possibly to 1/n (see Lamberton, 1999; Talay and Tubaro, 1990). We cite here only a result, proved in Lamberton (1999), that we will use in the examples in the next section: Let g be a continuous real function such that its second derivative g″, in the sense of distributions, is a bounded measure. Then

|E[g(B_1)] − E[g(S_n/√n)]| ≤ C n^{−1} ||g″||_Mb,

where C is a constant independent of n, and ||g″||_Mb is the total variation of the measure g″. (When g″ is a function, ||g″||_Mb coincides with the L¹ norm of g″.)

4. Examples and extension to the trinomial random walk

Example 1. Here we consider an example where the function g can be easily computed, and we see how to exploit this fact in order to improve the numerical algorithm according to the considerations in Remark 3.1. We take K and M real constants, M ≥ 0, K ≤ M, and define the function f by

f(x,y) = (x − K)_+ 1_{(0,M)}(y).

By Theorem 3.1, E[(B_1 − K)_+ 1_{(0,M)}(B*_1)] is approximated by E[(S_n/√n − K)_+ 1_{(0,M)}(S*_n/√n)] with precision of order 1/√n. But, if we want to find a more satisfying numerical solution, we can try to use the approximating term 2E[g(S_n/√n)], as suggested in Remark 3.1. Firstly, we compute the corresponding function g; for instance, for 0 ≤ K ≤ M, it can be written compactly as

g(x) = x ((x ∧ M) − (x + K)/2)² 1_{(K, 2M−K)}(x).

According to the possible values of the parameters K and M, we obtain different expressions for g, but we notice that g is always continuous and it is always possible to write its domain as the disjoint union of intervals in such a way that, on any interval, g coincides with the null function or with a polynomial. With the same ideas as in Remark 3.1, we conclude that, from a numerical viewpoint, it is more convenient to use the approximating term 2E[g(S_n/√n)]. Indeed, the recursive procedure has a number of operations of order n², and the explicit expression of g we have found makes its evaluation very simple (one evaluation of f corresponds to 1 sum, 1 multiplication and 3 comparisons; one evaluation of g is not really heavier, since it corresponds at most to 2 sums, 3 multiplications and 4 comparisons). Moreover, the speed of convergence is certainly 1/n, since the function g is continuous and has a second derivative g″ which is a bounded measure, so we can conclude by using Lamberton (1999).

Example 2. The same results as in the previous example can also be obtained considering f(x,y) = (e^{sx} − 1)_+ 1_{(0,M₁)}(y), with M₁ and s positive constants. In this case, we obtain

g(x) = (x/(2s)) [ (e^{sx} − 1 − sx) 1_{(0,M₁)}(x) + (e^{s(2M₁−x)} − 1 − s(2M₁ − x)) 1_{(M₁, 2M₁)}(x) ],

so g is a continuous function, with a simple explicit expression, and whose second derivative is a bounded measure.
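For Example 1 the improved approximation 2E[g(S_n/√n)] of Remark 3.1 is a plain one-dimensional sum over the binomial law. A Python sketch (the compact rearrangement of g and all names are ours; we restrict to the natural case 0 ≤ K ≤ M):

```python
from math import comb, sqrt

def pmf_walk(n, m):
    """P{S_n = m} for the symmetric +/-1 walk (0 on parity mismatch)."""
    if abs(m) > n or (n + m) % 2:
        return 0.0
    return comb(n, (n + m) // 2) / 2 ** n

def g_example1(x, K, M):
    """Closed form of g for f(x, y) = (x - K)_+ 1_{(0,M)}(y), rearranged
    (our rearrangement, valid for 0 <= K <= M) as
    g(x) = x * (min(x, M) - (x + K)/2)**2 on (K, 2M - K), 0 elsewhere."""
    if not (K < x < 2 * M - K):
        return 0.0
    return x * (min(x, M) - (x + K) / 2) ** 2

def improved_estimator(n, K, M):
    """Approximation 2 E[g(S_n / sqrt(n))] suggested in Remark 3.1."""
    return 2 * sum(pmf_walk(n, m) * g_example1(m / sqrt(n), K, M)
                   for m in range(1, n + 1))
```

Only n probability evaluations are needed per call, in line with the n²/2 operation count of Remark 3.1(i) once the binomial weights are produced recursively.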


The trinomial approximating random walk. We now want to consider the case when (X_k)_{k≥1} is a sequence of independent identically distributed random variables with trinomial law. Take d ≥ 1; we set

P{X₁ = d} = P{X₁ = −d} = (2d²)^{−1},  P{X₁ = 0} = 1 − d^{−2};

then E[X₁^k] = 0 for all k odd and E[X₁^k] = d^{k−2} for all k even. We define again S₀ = 0, S_n = X₁ + ... + X_n, and B^(n)_s = (S_{[sn]} + (sn − [sn]) X_{[sn]+1})/√n, for n ≥ 1. The binomial case corresponds to the choice d = 1.

The law of the process (S_n, S*_n)_{n≥0} can be determined by using the same ideas as in the binomial case. Take l ∈ dN and define τ_l = inf{n ≥ 0: S_n ≥ l}; we obtain, for m ∈ dZ and l ∈ dN, m ≤ l,

P{S_n = m, S*_n = l} = P{S_n = 2l − m} − P{S_n = 2l − m + 2d}.

Essentially, the only change with respect to the binomial case is the variable step d, but it is no wonder that the result holds true, since we did not use explicitly the binomial density, but only the fact that it is symmetric and that the walk cannot "make jumps", so that S_{τ_l} = l. On the state space E = {(ld, md): l, m ∈ Z, l ≤ m, m ≥ 0}, the process (S_n, S*_n)_{n≥0} is a homogeneous Markov chain, with transition probabilities

P{S_{n+1} = l′, S*_{n+1} = m′ | S_n = l, S*_n = m}
  = (2d²)^{−1} if l < m, m′ = m, l′ ∈ {l − d, l + d}, or l = m, (l′, m′) ∈ {(l − d, l), (l + d, l + d)};
  = 1 − d^{−2} if (l′, m′) = (l, m);
  = 0 otherwise.

In order to describe the algorithm, we define

w(n, l, m) = E[f((S_N, S*_N)/√N) | S_n = l, S*_n = m],

satisfying the recursive relation

w(n, l, m) = (2d²)^{−1} (w(n + 1, l′, m′) + w(n + 1, l″, m″)) + (1 − d^{−2}) w(n + 1, l, m).   (6)
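Recursion (6) extends the binomial algorithm by a "stay" branch with weight 1 − d^{−2}; a dictionary-based Python sketch (indexing states by the number of d-jumps is our choice):

```python
from math import sqrt

def expect_f_trinomial(f, N, d=2):
    """Backward recursion (6) for the trinomial walk with jumps {-d, 0, +d},
    P{X = +/-d} = 1/(2 d^2) and P{X = 0} = 1 - 1/d^2 (so that E[X^2] = 1).
    Returns E[f(S_N / sqrt(N), S_N* / sqrt(N))]; a state (j, l) means
    position j*d and running maximum l*d."""
    r = sqrt(N)
    p = 1.0 / (2 * d * d)
    q = 1.0 - 1.0 / (d * d)
    w = {(j, l): f(j * d / r, l * d / r)
         for j in range(-N, N + 1) for l in range(max(j, 0), N + 1)}
    for m in range(N - 1, -1, -1):
        w = {(j, l): p * (w[(j - 1, l)] + w[(j + 1, max(l, j + 1))])
                     + q * w[(j, l)]
             for j in range(-m, m + 1) for l in range(max(j, 0), m + 1)}
    return w[(0, 0)]
```

For d = 1 the stay weight vanishes and the procedure reduces to the binomial recursion (3), as the text observes.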

In this case, starting from (0,0), the random walk, in n steps, can reach (n + 1)(n/2 + 1) states. So, in order to compute E[f((S_N, S*_N)/√N)], one needs first (N + 1)(N/2 + 1) evaluations of f, and then Σ_{j=0}^{N−1} (j + 1)(j/2 + 1) ~ N³/6 evaluations of w, according to relation (6).

The estimate of the error in the approximation given in Theorem 3.1 still holds, as the proof can be extended to the trinomial case with little changes. We only have to adapt the few passages where the binomial properties were used. In Step 1, we can always write

E[f(S_n/√n, S*_n/√n)] = Σ_{m≥0} Σ_{l=0}^{m} f((2l − m)d/√n, ld/√n) (P{S_n = md} − P{S_n = (m + 2)d});   (7)


now we observe that X₁ has characteristic function v(t) = (cos(td) + d² − 1)/d² and that

P{S_n = md} − P{S_n = (m + 2)d} = (d/(2π)) ∫_0^{2π/d} (e^{−itmd} − e^{−it(m+2)d}) (v(t))^n dt
= (id/π) ∫_0^{2π/d} sin(td) (v(t))^n e^{−it(m+1)d} dt;

by the integration by parts formula, remembering that v′(t) = −d^{−1} sin(td), this equals

(d³/π) ((m + 1)/(n + 1)) ∫_0^{2π/d} (v(t))^{n+1} e^{−it(m+1)d} dt = 2d² ((m + 1)/(n + 1)) P{S_{n+1} = (m + 1)d}
= ((m + 1)/(n + 1)) (P{S_n = md} + P{S_n = (m + 2)d} + 2(d² − 1) P{S_n = (m + 1)d}),

and we use this expression in (7). The remaining considerations in Step 1 and all of Step 2 do not need to be modified. In Step 3, the changes in the proof of Proposition 3.1 easily follow from the use of (7), with the same ideas as in the binomial case.

5. The link with a partial differential equation problem

In this section we prove that, when f is Lipschitz, firstly, the function u mentioned in the introduction is the solution of a partial differential equation problem (P), and, secondly, that the probabilistic approach described before for estimating u implies the convergence of a finite difference method applied to the problem (P) (the Euler method).

Proposition 5.1. Let us consider u: [0,T] × {(x,y) ∈ R²: y ≥ 0, y ≥ x} → R, u ∈ C¹_t ∩ C¹_y ∩ C²_x, with ∂u/∂x bounded, u a solution of the problem

(P)
  ∂u/∂t (t,x,y) + ½ ∂²u/∂x² (t,x,y) = 0  for t ∈ [0,T), x ≤ y,
  ∂u/∂y (t,x,x) = 0  for t ∈ [0,T], x ≥ 0,
  u(T,x,y) = f(x,y)  for x ≤ y.   (8)

Then $u(t,x,y)=E[f(B_T,\bar B_T)\mid B_t=x,\ \bar B_t=y]$.

Proof. By Itô's formula,
\[
du(t,B_t,\bar B_t)=\frac{\partial u}{\partial t}(t,B_t,\bar B_t)\,dt+\frac{\partial u}{\partial x}(t,B_t,\bar B_t)\,dB_t+\frac12\frac{\partial^2 u}{\partial x^2}(t,B_t,\bar B_t)\,dt+\frac{\partial u}{\partial y}(t,B_t,\bar B_t)\,d\bar B_t,
\]
where, since $(\bar B_t)_{t\ge 0}$ is a process with continuous increasing trajectories, $d\bar B$ is the associated

measure in the sense of Stieltjes. Since $u$ satisfies the equation above, we get
\[
u(t,B_t,\bar B_t)=u(0,B_0,\bar B_0)+\int_0^t\frac{\partial u}{\partial x}(s,B_s,\bar B_s)\,dB_s+\int_0^t\frac{\partial u}{\partial y}(s,B_s,\bar B_s)\,d\bar B_s;
\]


the first term is a martingale, since we have supposed that the partial derivative of $u$ with respect to $x$ is bounded. This allows us to conclude, provided we prove that the second term is null. So, let us fix $\omega$ in $\Omega$ and define the open set $G_\omega=\{s\in[0,t]\mid B_s(\omega)<\bar B_s(\omega)\}$; then
\[
\int_0^t\frac{\partial u}{\partial y}(s,B_s,\bar B_s)\,d\bar B_s
=\int_{[0,t]\setminus G_\omega}\frac{\partial u}{\partial y}(s,B_s,\bar B_s)\,d\bar B_s
+\int_{G_\omega}\frac{\partial u}{\partial y}(s,B_s,\bar B_s)\,d\bar B_s:
\]
* the first term is zero, since $\frac{\partial u}{\partial y}$ vanishes when its second and third arguments coincide, as happens on $[0,t]\setminus G_\omega$;
* the second term is zero, since the set $G_\omega$ has zero measure with respect to $d\bar B_s$: writing $G_\omega=\bigcup_{n\ge 0}I_{n,\omega}$ with $I_{n,\omega}=]a_{n,\omega},b_{n,\omega}[\subset G_\omega$ with rational extremes, and observing that $\bar B(\omega)$ is constant on any interval where $B(\omega)<\bar B(\omega)$, we get $d\bar B(\omega)(G_\omega)\le\sum_n d\bar B(\omega)(I_{n,\omega})=0$. &
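The measure-theoretic point above has a simple discrete analogue, illustrated below (my own illustration, not from the paper): the running maximum $M_k$ of a symmetric random walk increases only at steps taken from the diagonal $S_k=M_k$, so any "integral" against the increments of $M$ only sees the set where the path touches its maximum.

```python
import random

random.seed(1)
S, M = 0, 0                       # walk and its running maximum
for _ in range(10_000):
    was_at_max = (S == M)
    S += random.choice((-1, 1))   # symmetric +-1 step
    new_M = max(M, S)
    if new_M > M:
        # the maximum can move only if the walk was already sitting on it
        assert was_at_max
    M = new_M
```

In the continuous limit this is exactly the statement that $d\bar B$ is carried by $\{s:B_s=\bar B_s\}$.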

Proposition 5.2. Let us take a Lipschitz function $f$ and $u:[0,T]\times\{(x,y)\in\mathbb R^2\mid y\ge 0,\ y\ge x\}\to\mathbb R$ defined by $u(t,x,y)=E[f(B_T,\bar B_T)\mid B_t=x,\ \bar B_t=y]$. Then $u$ verifies (P).

Proof. It is sufficient to notice that
\[
u(t,x,y)=E\big[f\big(x+B_{T-t},\ y\vee(x+\bar B_{T-t})\big)\big]
\]
and then remember that the joint density of the vector $(B_t,\bar B_t)$ is known. In this way one can compute the derivatives of $u$ by using the Lebesgue dominated convergence theorem and the Lipschitz property of the function $f$. &

Convergence of the numerical scheme solving the PDE problem. We apply a finite difference method in order to solve problem (P) numerically. First of all, we fix $k=T/N$ as the discretization step for the time variable, and $h$ as the discretization step for the space (or price) variables. Also in this case, we can consider, without loss of generality, $T=1$. We solve the first equation in (8) by the "explicit Euler method" and we denote by $u^n(\cdot,l)$ the vector $(u^n(j,l))_j$, where $u^n(j,l)$ can be seen as the approximation of $u(nk,jh,lh)$; then
\[
\frac{u^n(\cdot,l)-u^{n+1}(\cdot,l)}{k}=\frac{1}{2h^2}\,A\,u^{n+1}(\cdot,l),
\qquad\text{where } A=
\begin{pmatrix}
\star & \star & & & \\
1 & -2 & 1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & 1 & -2 & 1\\
 & & & \star & \star
\end{pmatrix}
\]
and the values in $\star$ can be specified only if we choose to introduce some conditions on the border of the discretization domain. In the probabilistic context, the central limit theorem forced the choice $k=h^2$; here, the same choice guarantees the stability of the finite difference method. Consequently, the relation above becomes
\[
u^n(\cdot,l)=\Big(1+\frac12 A\Big)u^{n+1}(\cdot,l).
\]
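With $k=h^2$, the interior rows of $1+\frac12 A$ replace each grid value by the average of its two neighbours, which is exactly the one-step backward recursion of the binomial method. A minimal sketch of this interior update (my own notation; the boundary rows are simply frozen and the $y$ variable of the full scheme is omitted), using the terminal condition $f(x)=x^2$, for which $u(0,0)=E[B_T^2]=T$ exactly:

```python
h = 0.05
N = 200                      # time steps; k = h*h, so T = N*h*h = 0.5
J = 400                      # half-width of the spatial grid (j = -J..J)

def terminal(x):             # terminal condition u(T, x) = f(x)
    return x * x

u = [terminal(j * h) for j in range(-J, J + 1)]
for _ in range(N):
    # interior rows of (1 + A/2): u_new[j] = (u[j-1] + u[j+1]) / 2,
    # the backward binomial averaging; endpoints are left frozen
    u = [u[0]] + [(u[j - 1] + u[j + 1]) / 2.0 for j in range(1, len(u) - 1)] + [u[-1]]

T = N * h * h
mid = u[J]                   # approximation of u(0, 0)
print(mid, T)
```

Since the grid is wide enough that the frozen endpoints never influence the centre within $N$ steps, `mid` reproduces $E[(hS_N)^2]=Nh^2=T$ up to rounding, confirming that the Euler step and the binomial expectation coincide.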


So, if we solve the equation numerically on the set $\{(jh,lh)\mid j,l\in\mathbb Z,\ l\ge 0,\ j\le l+1\}$ and we discretize the condition on the border line by imposing $u^n(l,l)=u^n(l,l-1)$ for all $n$ and $l\ge 1$, the equations we have to solve are exactly the same as in the probabilistic method (see Eq. (3)). This allows us to claim the convergence of the Euler scheme for this kind of PDE by using the convergence result for the probabilistic method (Theorem 3.1).

Acknowledgements

I am deeply grateful to Bernard Lapeyre for his really useful and constant help in this work and for his hospitality at CERMICS. I also thank Paolo Baldi and Damien Lamberton for useful discussions and comments.

References

Baldi, P., 1995. Exact asymptotics for the probability of exit from a domain and applications to simulation. Ann. Probab. 23 (4), 1644–1670.
Baldi, P., Caramellino, L., 2002. Asymptotics of hitting probabilities for general one-dimensional pinned diffusion. Ann. Appl. Probab. 12 (3), 1071–1095.
Gobet, E., 2000. Weak approximation of killed diffusion using Euler schemes. Stochastic Process. Appl. 87 (2), 167–197.
Hull, J.C., 2000. Options, Futures, and Other Derivatives. Prentice-Hall, Englewood Cliffs, NJ.
Lamberton, D., 1999. Vitesse de convergence pour des approximations de type binomiale. Ecole CEA-EDF-INRIA "Mathématiques Financières: modèles économiques et mathématiques des produits dérivés", juin 1999, INRIA Rocquencourt.
Lamberton, D., Lapeyre, B., 1996. Introduction to Stochastic Calculus Applied to Finance. Chapman & Hall, London.
Osipov, L.V., 1969. Asymptotic expansions of the distribution function of a sum of independent lattice random variables. Teor. Verojatnost. i Primenen. 14, 468–475 (in Russian) (translated in Theory Probab. Appl.).
Petrov, V.V., 1975. Sums of Independent Random Variables. Springer, Berlin.
Talay, D., Tubaro, L., 1990. Expansion of the global error for numerical schemes solving stochastic differential equations. Stochastic Anal. Appl. 8 (4), 483–509.
Tonou, P.S., 1997. Méthodes numériques probabilistes pour la résolution d'équations du transport et pour l'évaluation d'options exotiques. Thèse de Doctorat.
Walsh, J.B., 2003. The rate of convergence of the binomial tree scheme. Finance Stochastics 7 (3), 337–361.