Some Mathematical Preliminaries


1  Lag Operators

This section introduces one of the basic tools we will need in this course: lag operators. Lag operators are a very powerful tool for studying linear (stochastic) difference equations and for characterizing the structure of these equations. An introduction to these operators will prove useful not only for this course but also for the time series econometrics course.

Definition. The lag operator, or backshift operator, L, is defined by L X_t = X_{t−1}.

This is therefore an operator that lags a variable, which makes it useful in any dynamic model. For instance, consider the following AR(1) process

    x_t = ρ x_{t−1} + ε_t

This process can then be rewritten as

    x_t = ρ L x_t + ε_t  ⇐⇒  (1 − ρL) x_t = ε_t

The usefulness of this operator will become more evident after we define polynomials in the lag operator. Before that, it is useful to state some properties of the operator.
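Lag-operator algebra is easy to check numerically. The following sketch (illustrative, not from the text; `lag` is a hypothetical helper) simulates an AR(1) process and verifies that applying (1 − ρL) recovers the innovations:

```python
import numpy as np

def lag(x, n=1):
    """Apply L^n to a series: shift it back n periods (first n entries undefined)."""
    out = np.full_like(x, np.nan)
    out[n:] = x[:-n]
    return out

# Simulate x_t = rho * x_{t-1} + eps_t
rng = np.random.default_rng(0)
rho, T = 0.9, 200
eps = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

# (1 - rho L) x_t = eps_t for t >= 1
recovered = x - rho * lag(x)
assert np.allclose(recovered[1:], eps[1:])
```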


Property. Basic rules of exponentiation apply to the lag operator. In particular, we have

1. L^0 = 1
2. L^p L^q = L^{p+q} for p, q ∈ N
3. L^p L^{−p} = 1 for p ∈ N

Property. We have X_{t−n} = L^n X_t.

Proof: The proof is straightforward. Consider X_{t−n}. By definition of the backshift operator, we have X_{t−n} = L X_{t−n+1}. But again, X_{t−n+1} = L X_{t−n+2}, such that X_{t−n} = L L X_{t−n+2} = L^2 X_{t−n+2}. Proceeding recursively, we get the result. q.e.d.


Property. We have X_{t+n} = L^{−n} X_t.

Proof: First of all, note that X_t = L^n X_{t+n}. Then, applying the third exponentiation rule above, we get L^{−n} X_t = L^{−n} L^n X_{t+n} = X_{t+n}. q.e.d.

Note that we sometimes denote F = L^{−1} (the forward operator), such that the last property can be written as F^n X_t = X_{t+n}.


Definition. A(L) is a polynomial in the lag operator if

    A(L) = a_0 + a_1 L + a_2 L^2 + a_3 L^3 + ... = Σ_{j=0}^∞ a_j L^j

where the a_j are constants.

Definition. A(L) is a rational polynomial in the lag operator if

    A(L) = B(L)/C(L) = (Σ_{j=0}^∞ b_j L^j) / (Σ_{j=0}^∞ c_j L^j)

where the b_j and c_j are constants.

Some rational polynomials are important to know because they actually give rise to infinite series.

1. A(L) = 1/(1 − aL) with |a| < 1 is equivalent to the infinite series

       Σ_{j=0}^∞ a^j L^j

2. A(L) = 1/(1 − aL)^2 with |a| < 1 is equivalent to the infinite series

       Σ_{j=1}^∞ j a^{j−1} L^{j−1}
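Both expansions can be checked by truncation. In the sketch below (illustrative, not from the text), the truncated series Σ_{j=0}^{J} a^j L^j is applied to an arbitrary input via convolution; applying (1 − aL) to the result recovers the input up to an error of order a^{J+1}. The second expansion, which can be reindexed as Σ_{k=0}^∞ (k+1) a^k L^k, is recovered by squaring the coefficients of the first:

```python
import numpy as np

a, J = 0.5, 80                      # |a| < 1, truncation order
x = np.sin(np.arange(100) / 5.0)    # an arbitrary input series

# y = 1/(1 - aL) applied to x, truncated: y_t = sum_{j=0}^{J} a^j x_{t-j}
w = a ** np.arange(J + 1)
y = np.convolve(x, w)[: len(x)]

# Applying (1 - aL) should recover x, up to a^{J+1} truncation error
assert np.allclose(y[1:] - a * y[:-1], x[1:])

# 1/(1 - aL)^2: squaring the first series gives coefficients (k+1) a^k
w2 = np.convolve(w, w)[: J + 1]
assert np.allclose(w2, (np.arange(J + 1) + 1) * a ** np.arange(J + 1))
```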


2  Dynamic Optimization

This section is not meant to remind you how to solve any dynamic optimization problem. It is just a quick refresher of what you should know and does not offer a full mathematical treatment of the problem. The problem we will consider takes the following form

    max_{{y_τ, x_{τ+1}}_{τ=t}^∞}  E_t [ Σ_{s=0}^∞ β^s u(y_{t+s}, x_{t+s}) ]

    s.t.  x_{t+1} = h(y_t, x_t)                                            (P)
          x_t given

where u(·,·) is a continuous, class C² function. We will assume that both u(·,·) and h(·,·) are concave. Note that h(·,·) can be a multidimensional function. Most of the problems we will deal with in the course actually take this particular form. We will cover two methods to approach this type of problem:

1. The Lagrangian, which just extends what you know about static optimization. In case you are not familiar with optimization, you may want to have a look at Chiang (1984). Sargent (1987) and Ljunqvist and Sargent (2004) also cover the dynamic Lagrangian approach more systematically.

2. The dynamic programming approach, as introduced by Bellman (1957). In case you need more food for thought about this method, you may want to have a look at Lucas et al. (1989) or Ljunqvist and Sargent (2004).

2.1  The Lagrangian

This approach is probably the simplest, as it is the straightforward extension of the standard static Lagrangian method you should be familiar with. The Lagrangian associated with problem (P) is

    L_t = E_t [ Σ_{s=0}^∞ β^s ( u(y_{t+s}, x_{t+s}) + λ_{t+s} (h(y_{t+s}, x_{t+s}) − x_{t+s+1}) ) ]

Recall that our problem is to find two optimal sequences {y_τ}_{τ=t}^∞ and {x_{τ+1}}_{τ=t}^∞ that solve this problem. Also note that only two time periods are ever involved in the optimality conditions: the current period, say t, and the next period, t+1. In other words, for the choices made in period t we can just consider the reduced Lagrangian, L̃_t, which keeps the s = 0 and s = 1 terms:

    L̃_t = E_t [ u(y_t, x_t) + λ_t (h(y_t, x_t) − x_{t+1}) + β ( u(y_{t+1}, x_{t+1}) + λ_{t+1} (h(y_{t+1}, x_{t+1}) − x_{t+2}) ) ]

Also note that any variable dated t is perfectly known to the agent, such that this further reduces to

    L̃_t = u(y_t, x_t) + λ_t (h(y_t, x_t) − x_{t+1}) + β E_t [ u(y_{t+1}, x_{t+1}) + λ_{t+1} (h(y_{t+1}, x_{t+1}) − x_{t+2}) ]

First order conditions with respect to y_t and x_{t+1} are then given by[1]

    u_y(y_t, x_t) + λ_t h_y(y_t, x_t) = 0
    −λ_t + β E_t [ u_x(y_{t+1}, x_{t+1}) + λ_{t+1} h_x(y_{t+1}, x_{t+1}) ] = 0

which rewrite as

    u_y(y_t, x_t) = −λ_t h_y(y_t, x_t)                                      (1)
    λ_t = β E_t [ u_x(y_{t+1}, x_{t+1}) + λ_{t+1} h_x(y_{t+1}, x_{t+1}) ]   (2)

These two conditions do not really call for more comments, as we did not attach any economic content to the problem. On top of these two conditions, there is a limit condition called the transversality condition. This condition is obtained by first considering a finite version of the problem, where the economy stops in period T, such that the Lagrangian takes the form

    L_t = E_t [ Σ_{s=0}^{T−t} β^s ( u(y_{t+s}, x_{t+s}) + λ_{t+s} (h(y_{t+s}, x_{t+s}) − x_{t+s+1}) ) ]

Since the economy ends in period T, it has to be the case that E_t [β^{T−t} λ_T x_{T+1}] = 0. Letting T tend toward ∞, we get

    lim_{s→∞} E_t [ β^s λ_{t+s} x_{t+s+1} ] = 0  ⇐⇒  lim_{s→∞} E_t [ −β^s (u_y(y_{t+s}, x_{t+s}) / h_y(y_{t+s}, x_{t+s})) x_{t+s+1} ] = 0

where the second form uses equation (1). This condition actually selects one particular trajectory among an infinity of them (the first order conditions alone define a whole family of trajectories). The trajectory so selected places a lot of structure on the solution of the problems we will consider. We will discuss this issue in context.

2.2  Dynamic Programming

This approach obviously leads to the same characterization of agents' optimal behavior. It is, however, obtained in a different way, as it rests on the recursive properties of the problem[2] and our ability to write it as a functional equation.[3] Let us define the value of the problem as

    V(x_t) = max_{{y_τ, x_{τ+1}}_{τ=t}^∞}  E_t [ Σ_{s=0}^∞ β^s u(y_{t+s}, x_{t+s}) ]

[1] From now on, we denote u_{x_i}(x_1, ..., x_i, ..., x_n) = ∂u(x_1, ..., x_i, ..., x_n)/∂x_i.
[2] Note that this implies that the approach is only valid for time-consistent problems.
[3] From a technical point of view, this implies that we will rely on functional analysis rather than real analysis.

The interpretation of the function V(·) is clear: it gives us the value attained by the problem when the agent adopts its optimal behavior. It is easy to see that this actually rewrites[4]

    V(x_t) = max_{y_t, x_{t+1}}  max_{{y_τ, x_{τ+1}}_{τ=t+1}^∞}  E_t [ u(y_t, x_t) + Σ_{s=1}^∞ β^s u(y_{t+s}, x_{t+s}) ]

Since y_t and x_t are known in period t, we have

    V(x_t) = max_{y_t, x_{t+1}} { u(y_t, x_t) + max_{{y_τ, x_{τ+1}}_{τ=t+1}^∞} E_t [ Σ_{s=1}^∞ β^s u(y_{t+s}, x_{t+s}) ] }

Denoting ℓ = s − 1,

    V(x_t) = max_{y_t, x_{t+1}} { u(y_t, x_t) + max_{{y_τ, x_{τ+1}}_{τ=t+1}^∞} E_t [ Σ_{ℓ=0}^∞ β^{ℓ+1} u(y_{t+1+ℓ}, x_{t+1+ℓ}) ] }

By definition, V(x_{t+1}) = max_{{y_τ, x_{τ+1}}_{τ=t+1}^∞} E_{t+1} [ Σ_{ℓ=0}^∞ β^ℓ u(y_{t+1+ℓ}, x_{t+1+ℓ}) ], so, using the law of iterated projections, we have

    V(x_t) = max_{y_t, x_{t+1}} { u(y_t, x_t) + β E_t [V(x_{t+1})] }

This last equation is known as the Bellman equation. It is a functional equation, since we are looking for a function V(·) that satisfies this relationship. Solving the optimization problem then amounts to finding the solution of the Bellman equation subject to the constraint imposed by the law of motion of x:

    V(x_t) = max_{y_t, x_{t+1}} { u(y_t, x_t) + β E_t [V(x_{t+1})] }
    s.t.  x_{t+1} = h(y_t, x_t),  x_t given                                (P′)

which rewrites as

    V(x_t) = max_{y_t} { u(y_t, x_t) + β E_t [V′(h(y_t, x_t)) ... ] }

wait: V(x_t) = max_{y_t} { u(y_t, x_t) + β E_t [V(h(y_t, x_t))] }

The optimal choice of y_t then satisfies

    u_y(y_t, x_t) + β E_t [V′(x_{t+1}) h_y(y_t, x_t)] = 0

which rewrites as

    u_y(y_t, x_t) + β E_t [V′(x_{t+1})] h_y(y_t, x_t) = 0

since both x_t and y_t are known in period t. Note that if we denote λ_t = β E_t [V′(x_{t+1})], then this equation rewrites as

    u_y(y_t, x_t) + λ_t h_y(y_t, x_t) = 0

which is exactly first order condition (1) from the Lagrangian approach.

[4] We just take the first term (s = 0) out of the sum. Also note that this derivation skips an important step, as the max of an expectation is not the expectation of the max.
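As a concrete illustration (this specific example is mine, not from the text), take u(y_t, x_t) = log y_t and h(y_t, x_t) = x_t^α − y_t (the deterministic growth model with full depreciation), whose optimal policy is known in closed form: x_{t+1} = αβ x_t^α. Iterating the Bellman operator on a grid converges to V and reproduces this policy:

```python
import numpy as np

alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)      # grid for the state x

# Bellman operator: V(x) = max_{x'} log(x^alpha - x') + beta * V(x'),
# where consumption is y = x^alpha - x'
V = np.zeros_like(grid)
for _ in range(2000):
    y = grid[:, None] ** alpha - grid[None, :]          # y for each (x, x') pair
    value = np.where(y > 0, np.log(np.maximum(y, 1e-12)) + beta * V[None, :], -np.inf)
    V_new = value.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:               # sup-norm convergence
        V = V_new
        break
    V = V_new

policy = grid[value.argmax(axis=1)]                     # optimal x' at each grid point
# Closed form for this special case: x' = alpha * beta * x^alpha
assert np.max(np.abs(policy - alpha * beta * grid ** alpha)) < 0.02
```

The iteration converges because the Bellman operator is a contraction of modulus β in the sup norm; the grid only introduces a small discretization error in the policy.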


This actually provides us with a simple interpretation of the Lagrange multiplier involved in the approach we covered in the previous section: the Lagrange multiplier is the discounted expected marginal value of x_{t+1}. One complication with the preceding equation is that we do not know yet how to determine V′(·). V′(·) tells us the marginal effect on the value of the problem of an increase in x_t. It can be computed from the Bellman equation itself. From the envelope theorem, we have

    V′(x_t) = u_x(y_t, x_t) + β E_t [V′(x_{t+1})] h_x(y_t, x_t)

This equation is usually dubbed the Euler equation. To show how it maps into the equivalent Euler equation in the Lagrangian version of the problem, let us lead this equation by one period to get

    V′(x_{t+1}) = u_x(y_{t+1}, x_{t+1}) + β E_{t+1} [V′(x_{t+2})] h_x(y_{t+1}, x_{t+1})

Then apply the expectation operator E_t and multiply by β:

    β E_t [V′(x_{t+1})] = β E_t [ u_x(y_{t+1}, x_{t+1}) + β E_{t+1} [V′(x_{t+2})] h_x(y_{t+1}, x_{t+1}) ]

Then use the fact that λ_t = β E_t [V′(x_{t+1})] (and hence λ_{t+1} = β E_{t+1} [V′(x_{t+2})]) to get

    λ_t = β E_t [ u_x(y_{t+1}, x_{t+1}) + λ_{t+1} h_x(y_{t+1}, x_{t+1}) ]

which is (hopefully!) exactly the same equation as the one we obtained in the Lagrangian case, equation (2). The transversality condition is then given by[5]

    lim_{s→∞} E_t [ β^{s+1} V′(x_{t+s+1}) x_{t+s+1} ] = 0

Then, using the law of iterated projections and the fact that λ_{t+s} = β E_{t+s} [V′(x_{t+s+1})], we have

    lim_{s→∞} β^s E_t [ λ_{t+s} x_{t+s+1} ] = 0

which is the transversality condition we obtained in the Lagrangian problem.

[5] The proof of this statement goes beyond the scope of this course but can be found in Ekeland and Scheinkman (1986).


3  Second Order Polynomials

Polynomials of second order play a particular role in the study of the dynamics of linear rational expectations models of the form

    a E_t [y_{t+1}] + b y_t + c y_{t−1} + d x_t = 0

The stability of the model is related to the position of the roots of the "characteristic" polynomial with respect to −1 and 1. In this note, we study these polynomials of order 2. The typical polynomial of order 2 has the form

    P(x) = a x^2 + b x + c

As aforementioned, we are interested in the roots of these polynomials, that is, we want to find μ_1 and μ_2 such that P(μ_1) = P(μ_2) = 0. The solution can be obtained by the following simple algorithm:

1. Compute the discriminant ∆ = b^2 − 4ac.

2. Depending on the sign of the discriminant, we are faced with the following cases:

   • ∆ > 0: the two roots are real (μ_i ∈ R, i = 1, 2) and are given by

         μ_1 = (−b − √∆) / (2a)   and   μ_2 = (−b + √∆) / (2a)

   • ∆ = 0: there is one single (double) real root, given by

         μ = −b / (2a)

   • ∆ < 0: the two roots are complex conjugates (μ_i ∈ C, i = 1, 2) and are given by

         μ_1 = (−b − i√(−∆)) / (2a)   and   μ_2 = (−b + i√(−∆)) / (2a)
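The three cases collapse into a single formula if we compute in the complex plane: `cmath.sqrt` returns i√(−∆) automatically when ∆ < 0. A minimal sketch (the function name is mine, not from the text):

```python
import cmath

def roots2(a, b, c):
    """Roots of a x^2 + b x + c, following the discriminant algorithm above."""
    assert a != 0
    delta = b * b - 4 * a * c
    s = cmath.sqrt(delta)           # real if delta >= 0, purely imaginary otherwise
    return (-b - s) / (2 * a), (-b + s) / (2 * a)

# delta > 0: two real roots
assert roots2(1, -3, 2) == (1, 2)
# delta = 0: a double real root -b/(2a)
assert roots2(1, -2, 1) == (1, 1)
# delta < 0: a complex conjugate pair
assert roots2(1, 0, 1) == (-1j, 1j)
```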

Note that if the two roots of the polynomial are μ_1 and μ_2, then the polynomial can be factorized as

    P(x) = a (x − μ_1)(x − μ_2)

Expanding this product with a = 1, we get

    P(x) = x^2 − (μ_1 + μ_2) x + μ_1 μ_2

In other words, the polynomial can always be written as a function of the sum, S ≡ μ_1 + μ_2, and the product, P ≡ μ_1 μ_2, of its roots. This can be clearly seen from our initial polynomial: provided a ≠ 0, it can be rewritten as

    P(x)/a = x^2 − (−b/a) x + (c/a)

In other words, −b/a is the sum of the two roots and c/a is their product. We will now work with polynomials of the form

    P(x) = x^2 − S x + P

where S and P denote, respectively, the sum and the product of the two roots. The discriminant is then given by

    ∆ = S^2 − 4P

Depending on the sign of the discriminant, we are faced with the following cases:

• ∆ > 0: the two roots are real and are given by

      μ_1 = (S − √∆) / 2   and   μ_2 = (S + √∆) / 2

• ∆ = 0: there is one single (double) real root, μ = S/2.

• ∆ < 0: the two roots are complex conjugates, given by

      μ_1 = (S − i√(−∆)) / 2   and   μ_2 = (S + i√(−∆)) / 2

As explained at the very beginning of this note, what we are really interested in, for the analysis of bi-dimensional dynamic systems, is the position of the two roots with respect to −1 and 1. We will now give conditions for both roots to lie outside the interval (−1, 1), for both to lie inside it, or for one to lie inside and one outside. We will mainly resort to graphical intuition to give these conditions. The study of P(1) = 1 − S + P and P(−1) = 1 + S + P gives us information about the location of the roots with respect to 1 and −1. More precisely:

• If P(1)P(−1) = (1 + P)^2 − S^2 < 0, i.e. |1 + P| < |S|, then one of P(1), P(−1) is negative while the other is positive. Since P(x) is a second order polynomial, it is continuous, and it therefore crosses the zero line exactly once inside the interval (−1, 1): exactly one root has modulus lower than one. The steady state is a saddle point. This situation is depicted in Figure 1. Note that in this case we necessarily have ∆ > 0, implying that the two roots are necessarily real: if the roots were complex, they would be conjugates and would share the same modulus, which is impossible when one root lies inside the unit circle and the other outside. Indeed, writing P(x) = x^2 − (μ_1 + μ_2) x + μ_1 μ_2 with |μ_1| < 1 and |μ_2| > 1, we have ∆ = (μ_1 + μ_2)^2 − 4 μ_1 μ_2 = (μ_1 − μ_2)^2 > 0.


[Figure 1: The saddle path. P(x) changes sign between −1 and 1, so exactly one root lies inside the interval.]

• If P(1)P(−1) = (1 + P)^2 − S^2 > 0, i.e. |1 + P| > |S|, then P(1) and P(−1) are on the same side of the zero line. We then have two possibilities:

  – If |P| < 1, P(1) > 0 and P(−1) > 0, then both roots have modulus less than unity. The steady state is locally stable and the system is a sink. This situation is depicted in Figure 2.

[Figure 2: The sink. P(x) is positive at both −1 and 1, and both roots lie inside the interval (−1, 1).]

  – If |P| > 1, P(1) < 0 and P(−1) < 0, then both roots have modulus greater than unity. The steady state is locally unstable and the system is a source. This situation is depicted in Figure 3.

All this information is gathered in the graphical summary of Figure 4.


[Figure 3: The source. P(x) is negative at both −1 and 1, and both roots lie outside the interval (−1, 1).]

[Figure 4: Local Stability Analysis: A Graphical Summary. In the (S, P) plane, the lines 1 + P = S, 1 + P = −S, and P = 1 partition the plane: SINK between the two lines below P = 1, SADDLE in the two side regions where |1 + P| < |S|, and SOURCE elsewhere.]
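The regions of the graphical summary can be reproduced programmatically from the signs of P(1) and P(−1) derived above. A sketch (the function is mine, not from the text) classifying the steady state given the sum S and product P of the roots of x² − Sx + P:

```python
def classify(S, P):
    """Classify the steady state from the signs of P(1) and P(-1)."""
    p1 = 1 - S + P        # P(1)
    pm1 = 1 + S + P       # P(-1)
    if p1 * pm1 < 0:                        # |1 + P| < |S|: one root inside, one outside
        return "saddle"
    if p1 > 0 and pm1 > 0 and abs(P) < 1:   # both roots of modulus less than one
        return "sink"
    return "source"                         # both roots of modulus greater than one

# Roots 0.5 and 2 (S = 2.5, P = 1): saddle
assert classify(2.5, 1.0) == "saddle"
# Roots 0.5 and 0.8 (S = 1.3, P = 0.4): sink
assert classify(1.3, 0.4) == "sink"
# Roots 1.5 and 2 (S = 3.5, P = 3.0): source
assert classify(3.5, 3.0) == "source"
```

Borderline cases (roots exactly on the unit circle) are not handled; this sketch only covers the three generic regions of Figure 4.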

References

Bellman, R., Dynamic Programming, Princeton (NJ): Princeton University Press, 1957.

Chiang, A.C., Fundamental Methods of Mathematical Economics, London: McGraw-Hill, 1984.

Ekeland, I. and J.A. Scheinkman, Transversality Conditions for Some Infinite Horizon Discrete Time Optimization Problems, Mathematics of Operations Research, 1986, 12, 216–229.

Ljunqvist, L. and T.J. Sargent, Recursive Macroeconomic Theory, Cambridge (MA): MIT Press, 2004.

Lucas, R., N. Stokey, and E. Prescott, Recursive Methods in Economic Dynamics, Cambridge (MA): Harvard University Press, 1989.

Sargent, T.J., Dynamic Macroeconomic Theory, Cambridge (MA): Harvard University Press, 1987.
