Autocorrelation and the AR(1) Process

© Jeeshim and KUCC625 (7/18/2006)


Autocorrelation and the AR(1) Process

Hun Myoung Park

This document discusses autocorrelation (or serial correlation) in linear regression models, with a focus on the first-order autoregressive process, AR(1). It is largely based on Greene (2003).

1. Defining Autocorrelation

Autocorrelation occurs in time-series data more often than in cross-sectional data. Autocorrelation (also called autoregression or serial correlation) results from a violation of the non-autocorrelation assumption that each disturbance is uncorrelated with every other disturbance.

1.1 Stationarity and Autocorrelation

In the presence of autocorrelation, the disturbances still have zero mean and constant variance, but they are correlated across time periods:

E(ε_t | X) = E(ε_s | X) = 0,  Var(ε_t | X) = Var(ε_s | X) = σ²,  but Cov(ε_t, ε_s | X) ≠ 0 for t ≠ s.

The distribution of the disturbances is said to be covariance stationary or weakly stationary.¹ Thus E(εε') ≠ σ²I; instead E(εε') = σ²Ω, a full, positive definite matrix with a constant σ² on the diagonal. Since Ω_{ts} is a function of |t−s|, but not of t or s alone (the stationarity assumption), the covariance between observations t and s is a finite function of |t−s|, the distance apart in time of the observations. The autocovariances are defined as

Cov(ε_t, ε_{t−s} | X) = Cov(ε_{t+s}, ε_t | X) = γ_s = σ²Ω_{t,t−s} = σ²Ω_{t+s,t}   and   σ²Ω_{t,t} = γ_0 = σ².

Autocorrelation is the correlation between ε_t and ε_{t−s}:

Corr(ε_t, ε_{t−s} | X) = Cov(ε_t, ε_{t−s} | X) / sqrt( Var(ε_t | X) · Var(ε_{t−s} | X) ) = γ_s / γ_0 = ρ_s.
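As a concrete illustration of what σ²Ω looks like (this anticipates the AR(1) results derived in section 1.2 and is the standard textbook form): for an AR(1) disturbance with parameter ρ,

σ²Ω = [σ_u² / (1 − ρ²)] ×
  | 1         ρ         ρ²        …   ρ^(T−1) |
  | ρ         1         ρ         …   ρ^(T−2) |
  | ρ²        ρ         1         …   ρ^(T−3) |
  | …                                         |
  | ρ^(T−1)   ρ^(T−2)   ρ^(T−3)   …   1       |

so that Ω_{ts} = ρ^{|t−s|} and the autocorrelations are ρ_s = ρ^s.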

1.2 Autoregression and AR(p)

A typical autoregression model AR(p) is

y_t = μ + φ_1 y_{t−1} + φ_2 y_{t−2} + … + φ_p y_{t−p} + ε_t, or

¹ Strong stationarity requires that the whole joint distribution is the same over the time periods.


(1 − φ_1 B − φ_2 B² − … − φ_p B^p) y_t = μ + ε_t, or simply φ(B) y_t = μ + ε_t, where B denotes the backward shift operator.

The first-order autoregression, the AR(1) process, is structured so that the influence of a given disturbance fades as it recedes into the more distant past but vanishes only asymptotically:

y_t = μ + φ_1 y_{t−1} + ε_t
y_t = μ + φ_1(μ + φ_1 y_{t−2} + ε_{t−1}) + ε_t = μ + φ_1 μ + φ_1² y_{t−2} + φ_1 ε_{t−1} + ε_t, and so on.

Alternatively, the disturbance itself may be written as ε_t = ρ ε_{t−1} + u_t. In contrast, the first-order moving-average process MA(1) has a short memory, ε_t = u_t − λ u_{t−1}.

Interestingly, AR(1) can be written in MA(∞) form. Let ε_t = ρ ε_{t−1} + u_t, where E(u_t) = 0, E(u_t²) = σ_u², and Cov(u_t, u_s) = 0 if t ≠ s. Repeated substitution ends up with

ε_t = u_t + ρ u_{t−1} + ρ² u_{t−2} + …

Each disturbance embodies the entire past history of the u's, with the most recent observations receiving greater weight than those in the distant past. The variance and covariance of the disturbances are

Var(ε_t) = σ_u² + ρ²σ_u² + ρ⁴σ_u² + … = σ_u² / (1 − ρ²) = σ_ε²
Cov(ε_t, ε_{t−1}) = E(ε_t ε_{t−1}) = E[ε_{t−1}(ρ ε_{t−1} + u_t)] = ρ Var(ε_{t−1}) = ρ σ_u² / (1 − ρ²)
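As a quick numerical illustration of these formulas (not taken from the gasoline example used later): with ρ = 0.9 and σ_u² = 1,

Var(ε_t) = 1 / (1 − 0.9²) = 1 / 0.19 ≈ 5.26,   Cov(ε_t, ε_{t−1}) = 0.9 / 0.19 ≈ 4.74,   Corr(ε_t, ε_{t−s}) = ρ^s = 0.9^s,

so the disturbances are still correlated at about 0.35 ten periods apart — the slowly fading memory described above.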

2. Causes and Consequences of Autocorrelation

Autocorrelation may result from a misspecified (linear) functional form, omitted relevant explanatory variables (often lagged dependent variables), or measurement errors that are themselves autocorrelated. In practice, specification errors (ignoring relevant variables) appear to be the most critical cause. Like heteroscedasticity, autocorrelation makes OLS (ordinary least squares) inefficient and renders the estimated variances of the parameter estimates unreliable; technically speaking, the estimate of σ² is biased (typically underestimated). However, the OLS parameter estimates themselves remain unbiased and consistent. In short, OLS is no longer BLUE.

3. Detecting Autocorrelation

This section considers several test statistics, including the Breusch-Godfrey LM, Box-Pierce Q, Ljung-Box Q', Durbin-Watson d, and Durbin h.

3.1 Lagrange Multiplier Test for AR(p)


Breusch (1978) and Godfrey (1978) develop a Lagrange multiplier test that can be applied to pth-order autoregression models; it is therefore more general than the D-W d and Durbin h. The null hypothesis is a model without the lagged dependent variables, ρ_1 = ρ_2 = … = ρ_p = 0.²

The LM test consists of several steps. First, regress Y on the Xs to get the residuals e_t. Compute lagged residuals up to the pth order and replace missing values of the lagged residuals with zeros. Then regress e_t on the Xs and e_{t−1}, e_{t−2}, …, e_{t−p} to get R². Finally, compute the LM statistic from this R² and the number of observations T used in the model:³

LM = T·R² ~ χ²(p)

This statistic follows the chi-squared distribution with p degrees of freedom. The Breusch-Godfrey LM test is preferred to the other test statistics.

3.2 Q and Q' Test for AR(p)
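As a sketch of these steps in Stata for p = 1 (the variable names y, x1–x3, year, e, and e1 are placeholders rather than names from this document; the data must be tsset so that the lag operator works; the built-in estat bgodfrey, lags(1) command run after the original regression reports the Breusch-Godfrey statistic directly):

. tsset year                          // declare the time dimension
. regress y x1 x2 x3                  // original model
. estat bgodfrey, lags(1)             // built-in Breusch-Godfrey LM test
. predict e, residuals                // OLS residuals
. gen e1 = L.e                        // first-order lagged residuals
. replace e1 = 0 if missing(e1)       // fill missing lagged residuals with zeros
. quietly regress e x1 x2 x3 e1       // auxiliary regression
. display "LM = " e(N)*e(r2)          // LM = T * R-squared, compared with chi2(1)
. display "p-value = " chi2tail(1, e(N)*e(r2))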

Box and Pierce (1970) develop the Q test, which is asymptotically equivalent to the Breusch-Godfrey LM test. The Box-Pierce Q has a chi-squared distribution with p degrees of freedom. The Q statistic is

Q = T Σ_{j=1}^{p} r_j² ~ χ²(p),   where   r_j = Σ_{t=j+1}^{T} e_t e_{t−j} / Σ_{t=1}^{T} e_t².

First, regress Y on the Xs to get residuals and compute lagged residuals up to the pth order. Compute the individual r_j's using e_t² and e_t e_{t−j}. Finally, plug the r_j's into the formula to compute the Box-Pierce Q.

Ljung and Box (1978) refine the Box-Pierce Q test to get Q'. You may use the information obtained above. The Ljung-Box Q' also follows the chi-squared distribution with p degrees of freedom:

Q' = T(T + 2) Σ_{j=1}^{p} r_j² / (T − j) ~ χ²(p).
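A do-file sketch of this computation for general p is given below; it is an illustration under stated assumptions rather than code from this document (it presumes the OLS residuals are already stored in a variable e, the data are tsset, and p = 4 is chosen arbitrarily):

local p = 4
quietly gen double e2 = e^2
quietly summarize e2
local denom = r(sum)                       // sum of e_t^2
quietly count if !missing(e)
local T = r(N)                             // number of observations
local Q = 0
local Qprime = 0
forvalues j = 1/`p' {
    quietly gen double cross`j' = e * L`j'.e
    quietly summarize cross`j'
    local r = r(sum)/`denom'               // r_j, the jth residual autocorrelation
    local Q      = `Q'      + `T'*`r'^2
    local Qprime = `Qprime' + `T'*(`T'+2)*`r'^2/(`T'-`j')
}
display "Box-Pierce Q  = " `Q'      "   p-value = " chi2tail(`p', `Q')
display "Ljung-Box Q'  = " `Qprime' "   p-value = " chi2tail(`p', `Qprime')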

3.3 Durbin-Watson d for AR(1)

The Durbin-Watson (D-W) test is based on the principle that if the true disturbances are autocorrelated, this fact will be revealed through the autocorrelations of the least squares residuals (Durbin and Watson 1950, 1951, 1971). The null hypothesis is that the disturbances are not autocorrelated, ρ = 0. The test statistic is

d = Σ_{t=2}^{T} (e_t − e_{t−1})² / Σ_{t=1}^{T} e_t²

Since d ≈ 2(1 − ρ̂), values of d near 2 indicate little or no first-order autocorrelation.

_________________
² This model is viewed as a restricted model, whereas the full or unrestricted model has p lagged dependent variables.
³ Since missing values in lagged residuals are filled with zeros, the number of observations used in the model is the same as that in the original model.

From the Durbin-Watson statistic table (for given T and k), we get the following decision criteria, where dU* = 4 − dU and dL* = 4 − dL:

0 ≤ d < dL           Reject H0 (ρ > 0)
dL ≤ d ≤ dU          Inconclusive (uncertain)
dU < d < dU*         Do not reject H0 (H0: ρ = 0)
dU* ≤ d ≤ dL*        Inconclusive (uncertain)
dL* < d ≤ 4          Reject H0 (ρ < 0)
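For example, for the U.S. gasoline consumption model examined in the STATA and SAS sections below, d = .6047 with T = 36 and k = 4, while dL = 1.24 and dU = 1.73 at the .05 level (these values are reported in the conclusion); since .6047 < dL, H0 is rejected in favor of positive first-order autocorrelation (ρ > 0).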

Number of obs   =        36
F               =     88.47
Prob > F        =    0.0000
R-squared       =    0.9195
Adj R-squared   =    0.9091
Root MSE        =    .18221

-----------------------------------------------------------------------------lnPg | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnI | .5279324 .4061863 1.30 0.203 -.3004899 1.356355 lnPnc | .9524263 .6809687 1.40 0.172 -.4364186 2.341271 lnPuc | .1862664 .4491529 0.41 0.681 -.729787 1.10232 e1 | .3949087 1.001913 0.39 0.696 -1.648507 2.438324 _cons | -4.681809 3.626033 -1.29 0.206 -12.07715 2.713534 -----------------------------------------------------------------------------. gen ee=e^2 . gen ee1=e*e1 . tabstat ee ee1, stat(n sum mean) save // for AR(1) stats | ee ee1 ---------+-------------------N | 36 36 sum | .0338369 .0228194 mean | .0009399 .0006339 -----------------------------. matrix sum=r(StatTotal) . local r1=sum[2,2]/sum[2,1] . local Q = $T*(`r1'^2)

// for AR(1)

. disp `Q'
16.373108

. disp chi2tail(1,`Q')                // for AR(1)
.00005202

. local Q1 = $T*($T+2)*(`r1'^2/($T-1))      // for AR(1)

. disp `Q1'
17.776517

. disp chi2tail(1,`Q1')               // for AR(1)
.00002484

SAS and STATA do not have an option or command to compute Q or Q'.

5.1.4 Durbin-Watson d Test


First, let us compute D-W d manually to make sure it is identical to the statistic provided by STATA. Note that .dwstat and .estat dwatson are equivalent.

. local dw= `s_e_e1_2'/`s_e_2'         // .60469933

. dwstat                               // equivalent to .estat dwatson

Durbin-Watson d-statistic(  5,    36) =  .6046993

. local rho=1-`dw'/2                   // DW based rho: rhotype(dw) .69765033
. local rho=`s_ee1'/`s_e_2'            // Autocorrelation rho: rhotype(tscorr) .67439496

5.1.5 Durbin h Test

Finally, let us compute the Durbin h for a model with a lagged dependent variable. Note that Durbin h is not a two-tailed test, but a one-tailed test. The statistic computed below is h = ρ̂·sqrt(T / (1 − T·Var(b_lag))), where Var(b_lag) is the estimated variance of the coefficient on the lagged dependent variable; under the null hypothesis of no autocorrelation it is asymptotically standard normal.

. regress $OLS2

      Source |       SS       df       MS
-------------+------------------------------
       Model |  .680487618     5  .136097524
    Residual |  .014272173    29  .000492144
-------------+------------------------------
       Total |  .694759791    34  .020434112

Number of obs   =        35
F(5, 29)        =    276.54
Prob > F        =    0.0000
R-squared       =    0.9795
Adj R-squared   =    0.9759
Root MSE        =    .02218

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnG | L1. | .6877655 .1139042 6.04 0.000 .4548052 .9207257 lnPg | -.0939412 .0225678 -4.16 0.000 -.1400975 -.047785 lnI | .4312204 .1692678 2.55 0.016 .0850289 .7774119 lnPnc | -.2909653 .0927684 -3.14 0.004 -.4806981 -.1012325 lnPuc | .1737385 .0717074 2.42 0.022 .0270803 .3203967 _cons | -3.844963 1.524568 -2.52 0.017 -6.963055 -.7268714 -----------------------------------------------------------------------------. matrix list e(V) symmetric e(V)[6,6] L. lnG lnPg L.lnG .01297417 lnPg -.00065925 .0005093 lnI -.01830652 .00067796 lnPnc -.00199992 -.00036491 lnPuc .00496849 -.00033304 _cons .16504766 -.00614887

                   lnI       lnPnc       lnPuc       _cons
  lnI        .02865159
lnPnc         .0033396   .00860598
lnPuc       -.00791344  -.00550537   .00514196
_cons       -.25805664  -.03035274   .07142527   2.3243077

. matrix V = e(V)
. local v_lag = V[1,1]                // variance of coefficient of the lagged DV
. predict e, residuals
. gen e_2=e^2
. gen ee1=e*l.e
. list e l.e e_2 ee1 in 1/5

     +----------------------------------------------+
     |         e         L.e        e_2         ee1 |
     |----------------------------------------------|
  1. |         .           .          .           . |
  2. |  .0065173           .   .0000425           . |
  3. |  .0107834    .0065173   .0001163    .0000703 |
  4. | -.0018847    .0107834   3.55e-06   -.0000203 |
  5. |  -.010278   -.0018847   .0001056    .0000194 |
     +----------------------------------------------+

. tabstat e e_2 ee1, stat(n sum mean) save

   stats |         e       e_2       ee1
---------+------------------------------
       N |        35        35        34
     sum | -1.86e-09  .0142722  .0017067
    mean | -5.32e-11  .0004078  .0000502
----------------------------------------

. matrix sum=r(StatTotal)
. local s_e_2 = sum[2,2]              //.0142722
. local s_ee1 = sum[2,3]              //.0017067
. global T = sum[1,1]                 // 35

. local rho=`s_ee1'/`s_e_2'           // Autocorrelation rho: rhotype(tscorr)

. disp `rho'
.11958521

. disp $T*`v_lag'                     // to check if Ts < 1
.4540958

. local h = `rho'*sqrt($T/(1-$T*`v_lag'))

. disp `h'
.95753197

. disp 1-norm(`h')
.16914941

The SAS AUTOREG procedure returns the same Durbin h of .9575 (see 5.2.2). Using the alternative estimator ρ̂ ≈ 1 − d/2 gives a quite different statistic, largely because this sample is not sufficiently large.

. dwstat

// equivalent to estat dwatson

Durbin-Watson d-statistic(  6,    35) =  1.743835

. matrix dw = r(dw)
. local dw = dw[1,1]                  // dw

. local h2 = (1-`dw'/2)*sqrt($T/(1-$T*`v_lag'))

. disp `h2'
1.0255711

. disp 1-norm(`h2')
.15254689

Let us run either .durbina or .estat durbinalt to conduct Durbin's alternative test, which produces a chi-squared statistic whose p-value differs from that of h2 above.

. durbina

// equivalent to .estat durbinalt

Durbin's alternative test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       1     |          0.660              1                   0.4164
---------------------------------------------------------------------------
H0: no serial correlation

5.2 SAS REG and AUTOREG Procedure


In SAS, you may use the REG procedure of SAS/STAT and the AUTOREG procedure of SAS/ETS. REG computes the D-W d statistic, while AUTOREG produces both the D-W d and Durbin h statistics.

5.2.1 SAS REG Procedure

The /DW option in the REG procedure computes the D-W d statistic.

PROC REG DATA=masil.gasoline;
   MODEL lnG = lnPg lnI lnPnc lnPuc /DW;
RUN;

The REG Procedure
Model: MODEL1
Dependent Variable: lnG

Number of Observations Read          36
Number of Observations Used          36

                        Analysis of Variance

                                Sum of         Mean
Source               DF        Squares       Square     F Value    Pr > F
Model                 4        0.77152      0.19288      176.71    <.0001
Error                31        0.03384      0.00109
Corrected Total      35        0.80535

Root MSE            0.03304    R-Square    0.9580
Dependent Mean     -0.00371    Adj R-Sq    0.9526
Coeff Var        -890.84804

                        Parameter Estimates

                  Parameter     Standard
Variable    DF     Estimate        Error    t Value    Pr > |t|
Intercept    1    -12.34184      0.67489     -18.29      <.0001
lnPg         1     -0.05910      0.03248      -1.82      0.0786
lnI          1      1.37340      0.07563      18.16      <.0001
lnPnc        1     -0.12680      0.12699      -1.00      0.3258
lnPuc        1     -0.11871      0.08134      -1.46      0.1545

Number of obs   =        36
F               =     54.03
Prob > F        =    0.0000
R-squared       =    0.8971
Adj R-squared   =    0.8805
Root MSE        =     .0213

-----------------------------------------------------------------------------TlnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------TlnPg | -.1523066 .0370525 -4.11 0.000 -.2278756 -.0767376 TlnI | 1.266635 .107309 11.80 0.000 1.047777 1.485493 TlnPnc | -.0308443 .1271973 -0.24 0.810 -.2902649 .2285764 TlnPuc | -.0638014 .0758518 -0.84 0.407 -.2185021 .0908993 Intercept | -11.3873 .955492 -11.92 0.000 -13.33604 -9.438561 ------------------------------------------------------------------------------
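For reference, here is a sketch of how the transformed variables used in the regression above (TlnG, TlnPg, TlnI, TlnPnc, TlnPuc, and Intercept) can be constructed. This construction code is an illustration of the standard Prais-Winsten transformation rather than this document's own commands, and it assumes the D-W-based ρ̂ = 1 − d/2 ≈ .6977 that the estimates above appear to correspond to:

. local rho = .69765033                        // D-W based estimate, 1 - d/2
. gen double TlnG   = lnG   - `rho'*L.lnG
. gen double TlnPg  = lnPg  - `rho'*L.lnPg
. gen double TlnI   = lnI   - `rho'*L.lnI
. gen double TlnPnc = lnPnc - `rho'*L.lnPnc
. gen double TlnPuc = lnPuc - `rho'*L.lnPuc
. gen double Intercept = 1 - `rho'
. replace TlnG   = sqrt(1-`rho'^2)*lnG   in 1  // Prais-Winsten keeps the first observation
. replace TlnPg  = sqrt(1-`rho'^2)*lnPg  in 1
. replace TlnI   = sqrt(1-`rho'^2)*lnI   in 1
. replace TlnPnc = sqrt(1-`rho'^2)*lnPnc in 1
. replace TlnPuc = sqrt(1-`rho'^2)*lnPuc in 1
. replace Intercept = sqrt(1-`rho'^2)    in 1

The Cochrane-Orcutt variant in section 6.1.4 simply drops the first observation instead of rescaling it (the if _n > 1 restriction in the regressions there).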

The rhotype(dw) option uses the D-W d-based ρ estimator when estimating autoregressive error models.

. prais $OLS, rhotype(dw) twostep

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6977


Prais-Winsten AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .122007581 4 .030501895 Residual | .014062161 31 .000453618 -------------+-----------------------------Total | .136069743 35 .003887707

Number of obs   =        36
F(4, 31)        =     67.24
Prob > F        =    0.0000
R-squared       =    0.8967
Adj R-squared   =    0.8833
Root MSE        =     .0213

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1523067 .0370525 -4.11 0.000 -.2278757 -.0767377 lnI | 1.266636 .1073091 11.80 0.000 1.047778 1.485494 lnPnc | -.0308446 .1271973 -0.24 0.810 -.2902653 .2285761 lnPuc | -.0638011 .0758518 -0.84 0.407 -.2185019 .0908996 _cons | -11.38731 .9554926 -11.92 0.000 -13.33605 -9.438566 -------------+---------------------------------------------------------------rho | .6976503 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.137768

The following example uses Theil's ρ estimator, which adjusts the autocorrelation coefficient.

. local rho=`s_ee1'/`s_e_2'*($T-$K)/($T)      //Theil rho: rhotype(theil) .58072899

. prais $OLS, rhotype(theil) twostep          // Theil rho

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.5781

Prais-Winsten AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .187934725 4 .046983681 Residual | .016189637 31 .000522246 -------------+-----------------------------Total | .204124362 35 .005832125

Number of obs   =        36
F(4, 31)        =     89.96
Prob > F        =    0.0000
R-squared       =    0.9207
Adj R-squared   =    0.9105
Root MSE        =    .02285

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1235462 .0367836 -3.36 0.002 -.1985669 -.0485254 lnI | 1.31413 .0982142 13.38 0.000 1.113821 1.514439 lnPnc | -.0700476 .1289092 -0.54 0.591 -.3329597 .1928645 lnPuc | -.0792208 .0791098 -1.00 0.324 -.2405664 .0821247 _cons | -11.81037 .8751567 -13.50 0.000 -13.59526 -10.02547 -------------+---------------------------------------------------------------rho | .5780528 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.010610

The following uses the Nagar adjustment of the D-W d-based ρ estimator.

. local rho = ((1-`dw'/2)*$T^2+$K^2)/($T^2-$K^2)      // Nagar .73104236

. prais $OLS, rhotype(nagar) twostep

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.7462

Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS
-------------+------------------------------
       Model |  .100642676     4  .025160669
    Residual |  .013280162    31  .000428392
-------------+------------------------------
       Total |  .113922839    35  .003254938

Number of obs   =        36
F(4, 31)        =     58.73
Prob > F        =    0.0000
R-squared       =    0.8834
Adj R-squared   =    0.8684
Root MSE        =     .0207

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1649401 .0368649 -4.47 0.000 -.2401266 -.0897536 lnI | 1.237828 .1112792 11.12 0.000 1.010872 1.464783 lnPnc | -.0096368 .1261748 -0.08 0.940 -.266972 .2476984 lnPuc | -.0571432 .0740091 -0.77 0.446 -.2080859 .0937994 _cons | -11.13132 .9905196 -11.24 0.000 -13.1515 -9.111143 -------------+---------------------------------------------------------------rho | .7461546 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.199445

The following uses the default type of ρ estimator, which is obtained by regressing e_t on e_{t-1} without the intercept.

. prais $OLS, rhotype(regress) twostep        // default

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6831

Prais-Winsten AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .129028315 4 .032257079 Residual | .014306248 31 .000461492 -------------+-----------------------------Total | .143334562 35 .004095273

Number of obs   =        36
F(4, 31)        =     69.90
Prob > F        =    0.0000
R-squared       =    0.9002
Adj R-squared   =    0.8873
Root MSE        =    .02148

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1485802 .0370721 -4.01 0.000 -.2241892 -.0729712 lnI | 1.274075 .1061383 12.00 0.000 1.057605 1.490546 lnPnc | -.0365926 .127477 -0.29 0.776 -.2965837 .2233986 lnPuc | -.0657675 .0763471 -0.86 0.396 -.2214784 .0899434 _cons | -11.45348 .9451596 -12.12 0.000 -13.38115 -9.525816 -------------+---------------------------------------------------------------rho | .6830819 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.120645

The following uses the ρ estimator obtained by regressing e_t on e_{t+1} without the intercept.

. prais $OLS, rhotype(freg) twostep

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6980

Prais-Winsten AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .121850923 4 .030462731 Residual | .014056649 31 .00045344 -------------+-----------------------------Total | .135907573 35 .003883074

Number of obs   =        36
F(4, 31)        =     67.18
Prob > F        =    0.0000
R-squared       =    0.8966
Adj R-squared   =    0.8832
Root MSE        =    .02129

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval]


-------------+---------------------------------------------------------------lnPg | -.1523921 .0370518 -4.11 0.000 -.2279598 -.0768244 lnI | 1.26646 .1073359 11.80 0.000 1.047547 1.485373 lnPnc | -.0307104 .1271908 -0.24 0.811 -.2901177 .2286969 lnPuc | -.0637561 .0758402 -0.84 0.407 -.2184332 .0909209 _cons | -11.38574 .9557293 -11.91 0.000 -13.33497 -9.436521 -------------+---------------------------------------------------------------rho | .6979822 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.138165

6.1.4 Cochrane-Orcutt FGLS

Like Prais-Winsten FGLS, Cochrane-Orcutt FGLS runs OLS on the transformed data. Unlike Prais-Winsten, however, Cochrane-Orcutt ignores the first observation. Let us begin with Cochrane-Orcutt FGLS using the autocorrelation coefficient.

. regress TlnG TlnPg TlnI TlnPnc TlnPuc Intercept if _n > 1, noconst

      Source |       SS       df       MS
-------------+------------------------------
       Model |  .073993944     5  .014798789
    Residual |  .014299108    30  .000476637
-------------+------------------------------
       Total |  .088293052    35  .002522659

Number of obs   =        35
F(5, 30)        =     31.05
Prob > F        =    0.0000
R-squared       =    0.8380
Adj R-squared   =    0.8111
Root MSE        =    .02183

-----------------------------------------------------------------------------TlnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------TlnPg | -.142636 .0380588 -3.75 0.001 -.2203625 -.0649094 TlnI | 1.329594 .1396031 9.52 0.000 1.044487 1.614702 TlnPnc | -.0793608 .1464852 -0.54 0.592 -.3785234 .2198018 TlnPuc | -.0561649 .0797507 -0.70 0.487 -.2190375 .1067078 Intercept | -11.95372 1.249882 -9.56 0.000 -14.50632 -9.401116 ------------------------------------------------------------------------------

The .prais command has the corc option to estimate Cochrane-Orcutt FGLS.

. prais $OLS, rhotype(tscorr) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6744

Cochrane-Orcutt AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .0700521 4 .017513025 Residual | .014299112 30 .000476637 -------------+-----------------------------Total | .084351212 34 .002480918

Number of obs   =        35
F(4, 30)        =     36.74
Prob > F        =    0.0000
R-squared       =    0.8305
Adj R-squared   =    0.8079
Root MSE        =    .02183

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1426362 .0380589 -3.75 0.001 -.2203627 -.0649096 lnI | 1.329593 .139603 9.52 0.000 1.044486 1.6147 lnPnc | -.079361 .1464852 -0.54 0.592 -.3785237 .2198016 lnPuc | -.0561643 .0797507 -0.70 0.487 -.219037 .1067084 _cons | -11.9537 1.249881 -9.56 0.000 -14.5063 -9.401107 -------------+---------------------------------------------------------------rho | .674395 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699


Durbin-Watson statistic (transformed) 1.125520

The following two outputs use the D-W d-based ρ estimator. . regress TlnG TlnPg TlnI TlnPnc TlnPuc Intercept if _n > 1, noconst Source | SS df MS -------------+-----------------------------Model | .066189092 5 .013237818 Residual | .013979013 30 .000465967 -------------+-----------------------------Total | .080168105 35 .002290517

Number of obs   =        35
F(5, 30)        =     28.41
Prob > F        =    0.0000
R-squared       =    0.8256
Adj R-squared   =    0.7966
Root MSE        =    .02159

-----------------------------------------------------------------------------TlnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------TlnPg | -.1492824 .0382297 -3.90 0.000 -.227358 -.0712069 TlnI | 1.307018 .1448034 9.03 0.000 1.01129 1.602746 TlnPnc | -.0599178 .1461395 -0.41 0.685 -.3583746 .2385389 TlnPuc | -.0563603 .0788697 -0.71 0.480 -.2174338 .1047132 Intercept | -11.75192 1.29727 -9.06 0.000 -14.4013 -9.102544 -----------------------------------------------------------------------------. prais $OLS, rhotype(dw) twostep corc Iteration 0: Iteration 1:

rho = 0.0000 rho = 0.6977

Cochrane-Orcutt AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .062119363 4 .015529841 Residual | .013979017 30 .000465967 -------------+-----------------------------Total | .07609838 34 .002238188

Number of obs   =        35
F(4, 30)        =     33.33
Prob > F        =    0.0000
R-squared       =    0.8163
Adj R-squared   =    0.7918
Root MSE        =    .02159

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1492826 .0382297 -3.90 0.000 -.2273581 -.071207 lnI | 1.307019 .1448035 9.03 0.000 1.01129 1.602747 lnPnc | -.059918 .1461396 -0.41 0.685 -.3583748 .2385388 lnPuc | -.05636 .0788697 -0.71 0.480 -.2174336 .1047135 _cons | -11.75193 1.297271 -9.06 0.000 -14.40131 -9.102547 -------------+---------------------------------------------------------------rho | .6976503 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.140131

The following estimates other autoregressive error models using other ρ estimators, such as Theil's estimator. Pay attention to the rhotype() option.

. prais $OLS, rhotype(theil) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.5781

Cochrane-Orcutt AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .111987045 4 .027996761 Residual | .01564161 30 .000521387 -------------+-----------------------------Total | .127628655 34 .003753784

Number of obs   =        35
F(4, 30)        =     53.70
Prob > F        =    0.0000
R-squared       =    0.8774
Adj R-squared   =    0.8611
Root MSE        =    .02283

------------------------------------------------------------------------------


lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.118986 .0370215 -3.21 0.003 -.194594 -.0433779 lnI | 1.385947 .1205699 11.49 0.000 1.13971 1.632183 lnPnc | -.140429 .1459554 -0.96 0.344 -.4385097 .1576518 lnPuc | -.0554635 .0823714 -0.67 0.506 -.2236882 .1127613 _cons | -12.45608 1.077645 -11.56 0.000 -14.65693 -10.25524 -------------+---------------------------------------------------------------rho | .5780528 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.059270 . prais $OLS, rhotype(nagar) twostep corc Iteration 0: Iteration 1:

rho = 0.0000 rho = 0.7462

Cochrane-Orcutt AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .04835238 4 .012088095 Residual | .013278094 30 .000442603 -------------+-----------------------------Total | .061630474 34 .001812661

Number of obs   =        35
F(4, 30)        =     27.31
Prob > F        =    0.0000
R-squared       =    0.7846
Adj R-squared   =    0.7558
Root MSE        =    .02104

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1643651 .0384036 -4.28 0.000 -.2427958 -.0859344 lnI | 1.245164 .1559178 7.99 0.000 .9267376 1.563591 lnPnc | -.0141688 .1443708 -0.10 0.922 -.3090133 .2806758 lnPuc | -.0561394 .0766464 -0.73 0.470 -.2126722 .1003934 _cons | -11.19776 1.399397 -8.00 0.000 -14.05572 -8.339815 -------------+---------------------------------------------------------------rho | .7461546 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.174443 . prais $OLS, rhotype(regress) twostep corc Iteration 0: Iteration 1:

// default

rho = 0.0000 rho = 0.6831

Cochrane-Orcutt AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .066988349 4 .016747087 Residual | .014180226 30 .000472674 -------------+-----------------------------Total | .081168575 34 .002387311

Number of obs   =        35
F(4, 30)        =     35.43
Prob > F        =    0.0000
R-squared       =    0.8253
Adj R-squared   =    0.8020
Root MSE        =    .02174

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1450741 .0381281 -3.80 0.001 -.222942 -.0672062 lnI | 1.321657 .1415266 9.34 0.000 1.032621 1.610692 lnPnc | -.07231 .1463868 -0.49 0.625 -.3712718 .2266517 lnPuc | -.0562499 .0794347 -0.71 0.484 -.2184772 .1059774 _cons | -11.8828 1.267388 -9.38 0.000 -14.47115 -9.294444 -------------+---------------------------------------------------------------rho | .6830819 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.130955 . prais $OLS, rhotype(freg) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6980

Cochrane-Orcutt AR(1) regression -- twostep estimates Source | SS df MS -------------+-----------------------------Model | .062012395 4 .015503099 Residual | .0139744 30 .000465813 -------------+-----------------------------Total | .075986795 34 .002234906

Number of obs   =        35
F(4, 30)        =     33.28
Prob > F        =    0.0000
R-squared       =    0.8161
Adj R-squared   =    0.7916
Root MSE        =    .02158

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.1493802 .0382318 -3.91 0.000 -.22746 -.0713004 lnI | 1.306665 .1448787 9.02 0.000 1.010783 1.602547 lnPnc | -.0596277 .1461326 -0.41 0.686 -.3580704 .2388149 lnPuc | -.0563619 .0788564 -0.71 0.480 -.2174081 .1046842 _cons | -11.74877 1.297958 -9.05 0.000 -14.39955 -9.097981 -------------+---------------------------------------------------------------rho | .6979822 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.140342

6.1.5 Iterative Prais-Winsten and Cochrane-Orcutt FGLS

STATA provides the iterative two-step estimation method for the Prais-Winsten and Cochrane-Orcutt FGLS.

. prais $OLS, rhotype(tscorr)          // Iterative Prais-Winsten FGLS

Iteration 0:   rho = 0.0000
Iteration 1:   rho = 0.6744
Iteration 2:   rho = 0.8361
Iteration 3:   rho = 0.9030
Iteration 4:   rho = 0.9273
Iteration 5:   rho = 0.9366
Iteration 6:   rho = 0.9403
Iteration 7:   rho = 0.9419
Iteration 8:   rho = 0.9426
Iteration 9:   rho = 0.9428
Iteration 10:  rho = 0.9430
Iteration 11:  rho = 0.9430
Iteration 12:  rho = 0.9430
Iteration 13:  rho = 0.9431
Iteration 14:  rho = 0.9431
Iteration 15:  rho = 0.9431
Iteration 16:  rho = 0.9431

Prais-Winsten AR(1) regression -- iterated estimates Source | SS df MS -------------+-----------------------------Model | .044594423 4 .011148606 Residual | .011097443 31 .000357982 -------------+-----------------------------Total | .055691865 35 .001591196

Number of obs   =        36
F(4, 31)        =     31.14
Prob > F        =    0.0000
R-squared       =    0.8007
Adj R-squared   =    0.7750
Root MSE        =    .01892

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.2101637 .0347875 -6.04 0.000 -.2811133 -.139214 lnI | 1.071587 .1288525 8.32 0.000 .8087905 1.334383 lnPnc | .0939725 .1252189 0.75 0.459 -.161413 .3493581 lnPuc | -.0341095 .0653817 -0.52 0.606 -.1674564 .0992375 _cons | -9.666983 1.148614 -8.42 0.000 -12.0096 -7.324369


-------------+---------------------------------------------------------------rho | .9430583 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.531091

The following example is the iterative Cochrane-Orcutt FGLS.

. prais $OLS, rhotype(tscorr) corc      // Iterative Cochrane-Orcutt FGLS

Iteration 0:   rho = 0.0000
Iteration 1:   rho = 0.6744
Iteration 2:   rho = 0.8080
Iteration 3:   rho = 0.9037
Iteration 4:   rho = 0.9235
Iteration 5:   rho = 0.9279
Iteration 6:   rho = 0.9294
Iteration 7:   rho = 0.9300
Iteration 8:   rho = 0.9301
Iteration 9:   rho = 0.9302
Iteration 10:  rho = 0.9303
Iteration 11:  rho = 0.9303
Iteration 12:  rho = 0.9303
Iteration 13:  rho = 0.9303
Iteration 14:  rho = 0.9303

Cochrane-Orcutt AR(1) regression -- iterated estimates Source | SS df MS -------------+-----------------------------Model | .029891322 4 .007472831 Residual | .01060038 30 .000353346 -------------+-----------------------------Total | .040491702 34 .001190932

Number of obs   =        35
F(4, 30)        =     21.15
Prob > F        =    0.0000
R-squared       =    0.7382
Adj R-squared   =    0.7033
Root MSE        =     .0188

-----------------------------------------------------------------------------lnG | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------lnPg | -.2223182 .036462 -6.10 0.000 -.2967836 -.1478527 lnI | .8847412 .2033351 4.35 0.000 .4694755 1.300007 lnPnc | .091974 .1237493 0.74 0.463 -.1607557 .3447038 lnPuc | -.0422291 .0655293 -0.64 0.524 -.1760577 .0915996 _cons | -7.865689 1.897604 -4.15 0.000 -11.74111 -3.990264 -------------+---------------------------------------------------------------rho | .9302665 -----------------------------------------------------------------------------Durbin-Watson statistic (original) 0.604699 Durbin-Watson statistic (transformed) 1.515506

6.2 FGLS in SAS

SAS supports both the (iterative) two-step Prais-Winsten and maximum likelihood algorithms.

6.2.1 Two-step Prais-Winsten Estimation

Once variables are transformed, run OLS with the intercept suppressed in the REG procedure. SAS by default uses the autocorrelation coefficient as the ρ estimator.

PROC REG DATA=masil.gasoline;
   MODEL TlnG = Intercept TlnPg TlnI TlnPnc TlnPuc /NOINT;
RUN;

The REG Procedure
Model: MODEL1
Dependent Variable: TlnG

Number of Observations Read          36
Number of Observations Used          36

NOTE: No intercept in model. R-Square is redefined.

                        Analysis of Variance

                                Sum of         Mean
Source                  DF     Squares       Square     F Value    Pr > F
Model                    5     0.13379      0.02676       57.39
Error                   31     0.01445   0.00046625
Uncorrected Total       36     0.14825

Root MSE            0.02159    R-Square
Dependent Mean      0.00352    Adj R-Sq
Coeff Var         614.10313

                        Parameter Estimates

                  Parameter     Standard
Variable    DF     Estimate        Error    t Value    Pr > |t|
Intercept    1    -11.49075      0.93906     -12.24
TlnPg        1     -0.14638      0.03708      -3.95
TlnI         1      1.27826      0.10545      12.12
TlnPnc       1     -0.03988      0.12764      -0.31
TlnPuc       1     -0.06693      0.07663      -0.87

REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Alg=Corc$


+---------------------------------------------+ | AR(1) Model: e(t) = rho * e(t-1) + u(t) | | Initial value of rho = .69765 | | Maximum iterations = 100 | | Method = Cochrane - Orcutt | | Iter= 1, SS= .014, Log-L= 89.951767 | | Iter= 2, SS= .012, Log-L= 93.021836 | | Iter= 3, SS= .011, Log-L= 94.167826 | | Iter= 4, SS= .011, Log-L= 92.432530 | | Final value of Rho = .945360 | | Iter= 4, SS= .011, Log-L= 92.432530 | | Durbin-Watson: e(t) = .005738 | | Std. Deviation: e(t) = .057619 | | Std. Deviation: u(t) = .018785 | | Durbin-Watson: u(t) = 1.448556 | | Autocorrelation: u(t) = .275722 | | N[0,1] used for significance levels | +---------------------------------------------+ +---------+--------------+----------------+--------+---------+----------+ |Variable | Coefficient | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X| +---------+--------------+----------------+--------+---------+----------+ Constant -7.657883189 2.0205150 -3.790 .0002 LNPG -.2242234551 .36380584E-01 -6.163 .0000 .67409429 LNI .8659170265 .21323776 4.061 .0000 9.1109277 LNPNC .7957307155E-01 .12510703 .636 .5248 .44319819 LNPUC -.4109737743E-01 .65129565E-01 -.631 .5280 .66361220 RHO .9453603504 .55108674E-01 17.154 .0000 (Note: E+nn or E-nn means multiply by 10 to + or -nn power.)

6.3.4 Maximum Likelihood Estimation

Let us fits the autocorrelation error model using the maximum likelihood algorithm. Compared to SAS AUTOREG procedure, LIMDEP converges quickly and produces slightly different estimates and standard errors. --> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Alg=MLE$ +-----------------------------------------------------------------------+ | Ordinary least squares regression Weighting variable = none | | Dep. var. = LNG Mean= -.3708600000E-02, S.D.= .1516908393 | | Model size: Observations = 36, Parameters = 5, Deg.Fr.= 31 | | Residuals: Sum of squares= .3383693649E-01, Std.Dev.= .03304 | | Fit: R-squared= .957985, Adjusted R-squared = .95256 | | Model test: F[ 4, 31] = 176.71, Prob value = .00000 | | Diagnostic: Log-L = 74.3732, Restricted(b=0) Log-L = 17.3181 | | LogAmemiyaPrCrt.= -6.690, Akaike Info. Crt.= -3.854 | | Autocorrel: Durbin-Watson Statistic = .60470, Rho = .69765 | +-----------------------------------------------------------------------+ +---------+--------------+----------------+--------+---------+----------+ |Variable | Coefficient | Standard Error |t-ratio |P[|T|>t] | Mean of X| +---------+--------------+----------------+--------+---------+----------+ Constant -12.34185146 .67489522 -18.287 .0000 LNPG -.5909591880E-01 .32484970E-01 -1.819 .0786 .67409429 LNI 1.373400354 .75627733E-01 18.160 .0000 9.1109277 LNPNC -.1267972409 .12699351 -.998 .3258 .44319819 LNPUC -.1187078514 .81337098E-01 -1.459 .1545 .66361220 (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) +---------------------------------------------+ | AR(1) Model: e(t) = rho * e(t-1) + u(t) | | Initial value of rho = .69765 | | Maximum iterations = 100 | | Method = Maximum likelihood | | Iter= 1, SS= .014, Log-L= 89.845024 |


| Iter= 2, SS= .012, Log-L= 92.675210 | | Iter= 3, SS= .011, Log-L= 93.303814 | | Iter= 4, SS= .011, Log-L= 93.363749 | | Iter= 5, SS= .011, Log-L= 93.367649 | | Final value of Rho = .930078 | | Iter= 5, SS= .011, Log-L= 93.367649 | | Durbin-Watson: e(t) = .110756 | | Std. Deviation: e(t) = .051615 | | Std. Deviation: u(t) = .018961 | | Durbin-Watson: u(t) = 1.526119 | | Autocorrelation: u(t) = .236940 | | N[0,1] used for significance levels | +---------------------------------------------+ +---------+--------------+----------------+--------+---------+----------+ |Variable | Coefficient | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X| +---------+--------------+----------------+--------+---------+----------+ Constant -9.764190340 1.1324911 -8.622 .0000 LNPG -.2079613173 .34919931E-01 -5.955 .0000 .67409429 LNI 1.082832640 .12712938 8.518 .0000 9.1109277 LNPNC .8779474079E-01 .12473030 .704 .4815 .44319819 LNPUC -.3504838029E-01 .65860554E-01 -.532 .5946 .66361220 RHO .9300777597 .62095632E-01 14.978 .0000 (Note: E+nn or E-nn means multiply by 10 to + or -nn power.)
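As a cross-package aside not taken from the original document: in Stata the same AR(1) error model can also be fit by maximum likelihood with the arima command, which estimates a linear regression with ARMA disturbances (the variable names are the document's gasoline-model variables):

. arima lnG lnPg lnI lnPnc lnPuc, ar(1)      // regression with AR(1) errors, estimated by ML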

6.3.5 Hatanaka’s Two-stage Estimation

Also LIMDEP supports the Hatanaka’s (1974)’s autocorrelation error model with a lagged dependent variable. The SAMPLE;2-36$ command specifies the observations to be used in analysis. --> CREATE; LNG1=LNG[-1]$ --> SAMPLE;2-36$ --> 2SLS;Lhs=LNG; Rhs=ONE,LNG2,LNPG,LNI,LNPNC,LNPUC; Inst=ONE,LNPG,LNI,LNPNC,PPT,PD; Ar1;Hatanaka$ +-----------------------------------------------------------------------+ | Two stage least squares regression Weighting variable = none | | Dep. var. = LNG Mean= .5660128571E-02, S.D.= .1429479338 | | Model size: Observations = 35, Parameters = 6, Deg.Fr.= 29 | | Residuals: Sum of squares= .2065570283E-01, Std.Dev.= .02669 | | Fit: R-squared= .964118, Adjusted R-squared = .95793 | | (Note: Not using OLS. R-squared is not bounded in [0,1] | | Model test: F[ 5, 29] = 155.84, Prob value = .00000 | | Diagnostic: Log-L = 80.4516, Restricted(b=0) Log-L = 18.9291 | | LogAmemiyaPrCrt.= -7.089, Akaike Info. Crt.= -4.254 | | Autocorrel: Durbin-Watson Statistic = 1.83451, Rho = .08275 | +-----------------------------------------------------------------------+ +---------+--------------+----------------+--------+---------+----------+ |Variable | Coefficient | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X| +---------+--------------+----------------+--------+---------+----------+ Constant 1.735962744 3.2268496 .538 .5906 LNG1 1.092342390 .23876226 4.575 .0000 -.73433543E-02 LNPG -.1187551218 .29553512E-01 -4.018 .0001 .69558160 LNI -.1873315634 .35787177 -.523 .6007 9.1225115 LNPNC -.6036842439 .20357769 -2.965 .0030 .45460337 LNPUC .4997800149 .18164139 2.751 .0059 .68769045 (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) +---------------------------------------------+ | AR(1) Model: e(t) = rho * e(t-1) + u(t) | | Initial value of rho = .08275 | | Maximum iterations = 100 | | Method = Prais - Winsten |


| Hatanaka 2 step estimator | | Iter= 1, SS= .012, Log-L= 90.680776 | | Final value of Rho = .631295 | | Iter= 1, SS= .012, Log-L= 90.680776 | | Durbin-Watson: e(t) = 1.674507 | | Std. Deviation: e(t) = .025158 | | Std. Deviation: u(t) = .019511 | | Durbin-Watson: u(t) = 1.473400 | | Autocorrelation: u(t) = .263300 | | N[0,1] used for significance levels | +---------------------------------------------+ +---------+--------------+----------------+--------+---------+----------+ |Variable | Coefficient | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X| +---------+--------------+----------------+--------+---------+----------+ Constant 1.677836432 4.2654065 .393 .6941 LNG1 1.002874884 .30497653 3.288 .0010 -.73433543E-02 LNPG -.1356071426 .33570524E-01 -4.039 .0001 .69558160 LNI -.1817894244 .47327353 -.384 .7009 9.1225115 LNPNC -.7065437630 .21245185 -3.326 .0009 .45460337 LNPUC .5938185531 .18513096 3.208 .0013 .68769045 RHO .6312948808 .15398917 4.100 .0000 (Note: E+nn or E-nn means multiply by 10 to + or -nn power.)


7. Conclusion

The autocorrelation coefficient of the model of U.S. gasoline consumption is .6744, which is slightly different from the D-W d-based coefficient and Theil's adjusted coefficient (see Table 7.1). There are various ways to test for first-order autocorrelation, and Table 7.1 summarizes the test statistics. The D-W d, Breusch-Godfrey LM, Box-Pierce Q, and Ljung-Box Q' all indicate an autoregressive error in the model; the D-W d of .6047 (T = 36, k = 4) falls below the lower bound dL = 1.24 (dU = 1.73) at the .05 significance level. The Durbin h test does not reject the null hypothesis of no autocorrelation in the model with a lagged dependent variable. All tests lead to the same conclusion. The LM test has become a standard in econometrics.

Table 7.1 Comparison of Test Statistics for the First-Order Autocorrelation

        R1    R1 (DW d)   Theil's R1       D-W d     Durbin h*      B-G LM       B-P Q      L-B Q'
 .67439496    .69765033    .58072899   .60469933   .95753197 (