Bahadur local asymptotic optimality for generalizations of the Cramér-von Mises and the Anderson-Darling statistics

J.-R. Pycke*

March 31, 2004

* Jean-Renaud Pycke, Université Paris VI, L.S.T.A., case 158, 175 rue du Chevaleret, 75013 Paris, e-mail: [email protected]

Abstract. We introduce a family of statistics generalizing those of Cramér-von Mises and Anderson-Darling. The latter are known to be locally Bahadur optimal in the case of location alternatives for the hyperbolic cosine and the logistic densities respectively. We generalize this property to the whole family of statistics and a corresponding family of densities.

Keywords and phrases: goodness of fit test, Cramér-von Mises statistic, Anderson-Darling statistic, Bahadur efficiency, Bahadur local optimality.

2000 Mathematics Subject Classification: 62H15, 62G20, 62G30.

1 Introduction

In this paper we extend to a new family of statistics some of the well-known properties of the celebrated Cramér-von Mises and Anderson-Darling statistics (discussed for example in [2]). The latter are commonly used in the following goodness of fit problem. Let X_1, ..., X_n be independent observations on a random variable X with continuous cumulative distribution function (d.f.) F(x) = P(X ≤ x). Suppose that we wish to test the hypothesis H_0 : F = F_0, where F_0 is a completely specified continuous d.f., against the general alternative F ≠ F_0. For this purpose many tests of goodness of fit have been introduced, often based on measures of discrepancy between the hypothesized d.f., F_0, and the sample d.f.

\[ F_n(x) := \frac{\text{number of observations} \le x}{n}, \qquad x \in \mathbb{R}. \]

A wide class of tests springs from the discrepancy measure given by the Cramér-von Mises family of quadratic statistics

(1.1)   \[ \omega^2_{q,n} := n \int_{-\infty}^{\infty} \bigl( F_n(x) - F_0(x) \bigr)^2 q(F_0(x)) \, dF_0(x), \qquad n \ge 1, \]

where q : (0, 1) → [0, ∞) is a suitable weight-function. If we introduce the uniform empirical process defined on [0, 1] by

\[ U_n(t) := n^{-1/2} \sum_{i=1}^{n} \bigl( \mathbf{1}_{\{F_0(X_i) \le t\}} - t \bigr), \]

then (1.1) may be rewritten as

(1.2)   \[ \omega^2_{q,n} = \int_0^1 q(t) \, U_n^2(t) \, dt. \]
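In practice these statistics are computed from the ordered values u_{(i)} := F_0(X_{(i)}). As a concrete illustration, the following Python sketch (NumPy and SciPy assumed available; all function names are ours, not the paper's) evaluates (1.2) for the two classical weights recalled in the next paragraph, using the standard computing formulas for the Cramér-von Mises and Anderson-Darling statistics.

```python
import numpy as np
from scipy import stats

def cramer_von_mises(x, cdf):
    """(1.2) with q(t) = 1, via the classical computing formula
    W^2 = 1/(12n) + sum_i (u_(i) - (2i-1)/(2n))^2."""
    u = np.sort(cdf(np.asarray(x)))
    n = len(u)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

def anderson_darling(x, cdf):
    """(1.2) with q(t) = 1/(t(1-t)), via the classical formula
    A^2 = -n - (1/n) sum_i (2i-1)(log u_(i) + log(1 - u_(n+1-i)))."""
    u = np.sort(cdf(np.asarray(x)))
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

rng = np.random.default_rng(0)
sample = rng.normal(size=200)          # testing H_0: F = standard normal
print(cramer_von_mises(sample, stats.norm.cdf))
print(anderson_darling(sample, stats.norm.cdf))
```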

When q(t) = 1 the statistic is the Cramér-von Mises statistic; when q(t) = [t(1 − t)]^{−1} it is the Anderson-Darling statistic. In Section 2 we introduce a family (q_ν) of weight-functions indexed by ν ∈ (0, ∞) with q_1(t) = 1, q_2(t) = [t(1 − t)]^{−1}, and such that q_ν is summable for ν ∈ (0, 2). We define the corresponding Cramér-von Mises type statistics ω²_{ν,n}. Proposition 2.4 gives their asymptotic distribution under the null hypothesis in the case ν ∈ [1, ∞).

In Section 3 we generalize the following property. It is well known (see [9], Chapter 6, Corollary 1, p. 225 and Theorem 6.3.5, p. 227) that the Cramér-von Mises and the Anderson-Darling statistics are locally asymptotically optimal (LAO) in the sense of Bahadur in the case of location alternatives for the hyperbolic cosine and the logistic distributions respectively. Theorem 3.1 extends this result to the statistics ω²_{ν,n} in the case ν ∈ [1, 2], stating that ω²_{ν,n} is LAO in the case of location alternatives for the Gumbel generalized logistic distribution f_ν given by (3.21). For results and references about this density, the reader is referred to [8], Chapter 22, Section 11. We give in Section 4 some useful technical results.

Most results of the present paper are based upon those expounded in [10], in which a new family of explicit Karhunen-Loève expansions is given, including those of the Brownian bridge and the Anderson-Darling processes as particular cases. These results enabled us to state Proposition 2.3, which is the key property for the proofs of Proposition 2.4 and Theorem 3.1. Recent examples of interesting statistical applications arising from the explicit knowledge of Karhunen-Loève expansions can be found in [6] and [7].

Let us recall the following basic facts about Karhunen-Loève expansions. For details, the reader is referred to [12], Chapter 5. If X = {X(t) : 0 < t < 1} is a centered Gaussian process with continuous covariance function K(s, t) = EX(s)X(t) such that

\[ \int_0^1 K(t, t) \, dt < \infty, \]

one has, almost surely,

(1.3)   \[ X(t) = \sum_{k=1}^{\infty} \sqrt{\lambda_k} \, \xi_k f_k(t) \quad \text{in } L^2(0, 1), \]

where {ξ_k : k ≥ 1} denotes a sequence of independent N(0, 1) variables and the set {(λ_k, f_k) : k ≥ 1} has the following properties:

P1: for all k ≥ 1, (λ_k, f_k) ∈ [0, ∞) × L²(0, 1);

P2: the sequence (λ_k) is decreasing;

P3: for all k ≥ 1, \( \int_0^1 K(s, \cdot) f_k(s) \, ds = \lambda_k f_k(\cdot) \);

P4: \( \int_0^1 f_k(s) f_\ell(s) \, ds = 1 \) if k = ℓ, and = 0 if k ≠ ℓ.

Consequently one has the equality

(1.4)   \[ \int_0^1 X^2(t) \, dt = \sum_{k=1}^{\infty} \lambda_k \xi_k^2, \]

and the characteristic function of the random variable on the left-hand side of (1.4) is given by

(1.5)   \[ E \exp\Bigl\{ iu \int_0^1 X^2(t) \, dt \Bigr\} = \prod_{k=1}^{\infty} \bigl( 1 - 2iu\lambda_k \bigr)^{-1/2}, \qquad u \in \mathbb{R}. \]

In the sequel (1.3) will be referred to as the Karhunen-Loève (K-L) expansion of X.
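As a numerical illustration of (1.4), consider the Brownian bridge, whose classical K-L expansion has λ_k = 1/(kπ)² and f_k(t) = √2 sin(kπt). The following Python sketch (the series is truncated, so the check carries a small bias) compares the empirical mean and variance of ∫₀¹ B²(t) dt with the theoretical values Σλ_k = 1/6 and 2Σλ_k² = 1/45.

```python
import numpy as np

# K-L spectrum of the Brownian bridge: lambda_k = 1/(k pi)^2.
rng = np.random.default_rng(0)
k = np.arange(1, 1_001)
lam = 1.0 / (k * np.pi) ** 2                            # truncated spectrum
draws = rng.chisquare(1, size=(10_000, k.size)) @ lam   # cf. (1.4)
print(draws.mean(), lam.sum())                          # both close to 1/6
print(draws.var(), 2 * np.sum(lam ** 2))                # both close to 1/45
```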

2 A family of statistics

To begin with we recall that the normalized incomplete beta function I is defined (see e.g. [1], formulas 6.2.2 and 26.5.1), for α, β > 0, by

(2.6)   \[ I(\alpha, \beta, x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^x t^{\alpha-1} (1-t)^{\beta-1} \, dt \qquad \text{for } 0 \le x \le 1. \]

For each α, β > 0 the function I(α, β, ·) is increasing on [0, 1] and we have

(2.7)   \[ I(\alpha, \beta, 0) = 0, \qquad I(\alpha, \beta, 1-x) = 1 - I(\beta, \alpha, x), \quad \text{hence} \quad I(\alpha, \beta, 1) = 1. \]

Particular cases are

(2.8)   \[ I\Bigl( \tfrac12, \tfrac12, x \Bigr) = \frac{\arccos(1-2x)}{\pi} \quad \text{and} \quad I(1, 1, x) = x \qquad \text{for } x \in [0, 1]. \]

Definition 2.1. For each ν ∈ (0, ∞) we set

(2.9)   \[ \omega^2_{\nu,n} := \pi^{2\nu-3} \, \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \int_0^1 \frac{U_n^2\bigl( I(\frac{\nu}{2}, \frac{\nu}{2}, \frac{1-\cos(\pi r)}{2}) \bigr)}{\sin^{\nu-1}(\pi r)} \, dr. \]

Hence ω²_{1,n} = ∫₀¹ U_n²(r) dr is the Cramér-von Mises statistic and

\[ \omega^2_{2,n} = 2\pi \int_0^1 \frac{U_n^2\bigl( \frac{1-\cos(\pi r)}{2} \bigr)}{\sin(\pi r)} \, dr = \int_0^1 \frac{U_n^2(t)}{t(1-t)} \, dt \]

is the Anderson-Darling statistic.

In order to express the statistic ω²_{ν,n} in the form (1.2), we introduce the following notation. For each ν > 0, J_ν will denote the function uniquely defined on [0, 1] by

\[ t = I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1-\cos(\pi r)}{2} \Bigr) \iff J_\nu(t) = r \qquad \text{for } r, t \in [0, 1]. \]

In view of (2.8) it is readily checked that

(2.10)   \[ J_1(t) = t, \qquad J_2(t) = \frac{\arccos(1-2t)}{\pi}. \]

For each ν ∈ (0, ∞), we define the weight-function q_ν by setting, for t ∈ (0, 1),

(2.11)   \[ q_\nu(t) := \pi^{2\nu-4} \Bigl( \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \Bigr)^2 \sin^{2(1-\nu)}\{\pi J_\nu(t)\}. \]

From (2.10) we see that, as claimed in the introduction,

\[ q_1(t) = 1, \qquad q_2(t) = \frac{1}{t(1-t)}. \]
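Since J_ν is the inverse of r ↦ I(ν/2, ν/2, (1−cos(πr))/2), the weight-function (2.11) can be evaluated numerically with an inverse incomplete beta routine. A minimal sketch, assuming SciPy's betaincinv (the function name q_nu is ours), checking the two special cases just displayed:

```python
import numpy as np
from scipy.special import betaincinv, gamma

def q_nu(nu, t):
    """Weight-function (2.11).  J_nu(t) is obtained by inverting
    t = I(nu/2, nu/2, (1 - cos(pi r))/2) with betaincinv."""
    x = betaincinv(nu / 2, nu / 2, t)     # x = (1 - cos(pi r))/2
    r = np.arccos(1 - 2 * x) / np.pi      # r = J_nu(t)
    c = 2 ** (nu - 1) * gamma(nu / 2) ** 2 / gamma(nu)
    return np.pi ** (2 * nu - 4) * c ** 2 * np.sin(np.pi * r) ** (2 * (1 - nu))

t = np.linspace(0.05, 0.95, 7)
print(np.allclose(q_nu(1.0, t), 1.0))                # q_1(t) = 1
print(np.allclose(q_nu(2.0, t), 1 / (t * (1 - t))))  # q_2(t) = 1/(t(1-t))
```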

Furthermore, from Lemma 4.3 (applied with f constant), and from the combination of Lemma 4.2 and Lemma 4.3 (applied with f(t) = t(1−t)), we obtain respectively

(2.12)   \[ \int_0^1 q_\nu(t) \, dt < \infty \quad \text{for all } \nu \in (0, 2), \qquad \text{and} \qquad \int_0^1 t(1-t) \, q_\nu(t) \, dt < \infty \quad \text{for all } \nu > 0. \]


Proposition 2.1. The statistic ω²_{ν,n} can be written as

(2.13)   \[ \omega^2_{\nu,n} = \int_0^1 q_\nu(t) \, U_n^2(t) \, dt. \]

Proof. If we use Lemma 4.3 with f = U_n² and multiply both sides of (4.29) by

\[ \pi^{2\nu-3} \, \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)}, \]

we obtain

\[ \pi^{2\nu-3} \, \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \cdot \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\pi\Gamma(\nu)} \int_0^1 \frac{U_n^2(t)}{\sin^{2(\nu-1)}\{\pi J_\nu(t)\}} \, dt = \pi^{2\nu-3} \, \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \int_0^1 \frac{U_n^2\bigl( I(\frac{\nu}{2}, \frac{\nu}{2}, \frac{1-\cos(\pi r)}{2}) \bigr)}{\sin^{\nu-1}(\pi r)} \, dr, \]

which is the desired equality.

Recall that a Brownian bridge is a centered Gaussian process on [0, 1] with covariance function EB(s)B(t) = min(s, t) − st for s, t ∈ [0, 1].

Proposition 2.2. For each ν > 0 we have, under the null hypothesis, the convergence in law

(2.14)   \[ \omega^2_{\nu,n} \xrightarrow[n \to \infty]{\mathcal{L}} \int_0^1 q_\nu(t) \, B^2(t) \, dt, \]

where B is a Brownian bridge.

Proof. This result is a consequence of the first assertion in (2.12) in combination with Theorem 3.3, p. 325 in [5].

The next proposition will enable us to compute the characteristic function of the random variable on the right-hand side of (2.14) in the case ν ≥ 1. For each ν ∈ [1, ∞) and k ∈ N*, let f_{ν,k} denote the function defined on (0, 1) by

\[ f_{\nu,k}(t) := \alpha_{\nu,k} \sin^{1-\frac{\nu}{2}}\{\pi J_\nu(t)\} \, P^{-\frac{\nu}{2}}_{\frac{\nu}{2}+k-1}\bigl( \cos\{\pi J_\nu(t)\} \bigr), \]

where α_{ν,k} is the positive real number such that ∫₀¹ f²_{ν,k}(t) dt = 1, and P^{−ν/2}_{ν/2+k−1} denotes a Legendre function of the first kind, hence satisfies

\[ P^{-\frac{\nu}{2}}_{\frac{\nu}{2}+k-1}(x) = \frac{(-1)^{k-1} (1-x^2)^{-\frac{\nu}{4}}}{2^{\frac{\nu}{2}+k-1} \, \Gamma(\frac{\nu}{2}+k)} \, \frac{d^{k-1}}{dx^{k-1}} (1-x^2)^{\frac{\nu}{2}+k-1} \qquad \text{for } -1 < x < 1 \]

(see [11], formula (359), p. 184).

Proposition 2.3. Suppose ν ∈ [1, ∞). The process {√(q_ν(t)) B(t) : 0 < t < 1} admits a K-L expansion given by

(2.15)   \[ \pi^{\nu-2} \, \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \, \frac{B(t)}{\sin^{\nu-1}\{\pi J_\nu(t)\}} = \sum_{k=1}^{\infty} \Bigl( \frac{\pi^{2\nu-4}}{k(k+\nu-1)} \Bigr)^{1/2} \xi_k f_{\nu,k}(t). \]

Proof. It is proved in [10] that for each ν ∈ [1, ∞) a K-L expansion of the form

\[ \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\pi\Gamma(\nu)} \, \frac{B(t)}{\sin^{\nu-1}\{\pi J_\nu(t)\}} = \sum_{k=1}^{\infty} \Bigl( \frac{1}{\pi^2 \, k(k+\nu-1)} \Bigr)^{1/2} \xi_k f_{\nu,k}(t) \]

is valid on (0, 1). On multiplying the latter by π^{ν−1} we obtain the K-L expansion (2.15).

The next corollary follows readily from the preceding proposition.

Corollary 2.1. For each ν ∈ [1, ∞), one has the equality in law

(2.16)   \[ \int_0^1 q_\nu(t) \, B^2(t) \, dt = \sum_{k=1}^{\infty} \frac{\pi^{2\nu-4}}{k(k+\nu-1)} \, \xi_k^2. \]

In view of (1.4), (1.5) and (2.16) we obtain the following result.

Proposition 2.4. For each ν ∈ [1, ∞), under the null hypothesis, one has

\[ \lim_{n \to \infty} E \exp\{ iu \, \omega^2_{\nu,n} \} = \prod_{k=1}^{\infty} \Bigl( 1 - \frac{2iu \, \pi^{2\nu-4}}{k(k+\nu-1)} \Bigr)^{-1/2}, \qquad u \in \mathbb{R}. \]
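Equivalently, by Corollary 2.1 the limiting null law is the weighted χ² series Σ_k λ_k ξ_k² with λ_k = π^{2ν−4}/(k(k+ν−1)), which is straightforward to simulate. A hedged Monte Carlo sketch (truncating the series biases the quantiles slightly downwards; the function name is ours):

```python
import numpy as np

def limiting_null_sample(nu, n_rep=100_000, n_terms=1_000, seed=0):
    """Draws from sum_k lambda_k xi_k^2 (Corollary 2.1), with
    lambda_k = pi^(2 nu - 4) / (k (k + nu - 1)), truncated at n_terms."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_rep)
    for k in range(1, n_terms + 1):
        lam = np.pi ** (2 * nu - 4) / (k * (k + nu - 1))
        total += lam * rng.chisquare(1, size=n_rep)
    return total

for nu in (1.0, 2.0):
    print(nu, np.quantile(limiting_null_sample(nu), 0.95))
# Expected: about 0.461 for nu = 1 and about 2.49 for nu = 2, the classical
# asymptotic 5% points of the Cramer-von Mises and Anderson-Darling tests.
```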

3 Bahadur local efficiency in the case of location alternatives

One way of measuring the efficiency of tests of fit is that introduced by Bahadur [3]. For a recent exposition of the concept of Bahadur efficiency and a comparison with other types of efficiency we refer to [9]. We briefly recall the following basic facts. Suppose that the true d.f. corresponds to a probability measure that belongs to a parametric set {P_θ : θ ∈ R}. The d.f. and the density function corresponding to P_θ will be denoted respectively by F(θ, ·) and f(θ, ·). We wish to test H_0 : θ = θ_0 against the alternative H_1 : θ ≠ θ_0. Let s denote the sequence of observations {X_k : k ≥ 1} and {T_n} a sequence of statistics based on s such that T_n depends on s only through X_1, ..., X_n, i.e. T_n(s) = T_n(X_1, ..., X_n). Without loss of generality, it is assumed that the rejection region of H_0 is given by {s : T_n(s) ≥ c} where c ∈ R. Let φ_n denote the null distribution of T_n, that is

\[ \varphi_n(t) := P_{\theta_0}(T_n < t) \qquad \text{for } t \in \mathbb{R}. \]

The level attained by T_n is defined to be L_n(s) := 1 − φ_n(T_n(s)). If for each θ ≠ θ_0 and a certain nonrandom positive function c_T(θ) the convergence in P_θ-probability

\[ \lim_{n \to \infty} n^{-1} \log L_n(s) = -\tfrac12 c_T(\theta) \]

takes place, then one says that the sequence of statistics T = {T_n} has exact slope c_T(θ). A fundamental result in the Bahadur theory is that the exact slope of any sequence of statistics {T_n} satisfies the inequality

(3.17)   \[ c_T(\theta) \le 2K(\theta, \theta_0), \]

where K(θ, θ_0) is the Kullback-Leibler information for two elements f(θ, ·) and f(θ_0, ·) of the family, defined by

\[ K(\theta, \theta_0) := \int_{-\infty}^{\infty} \log \frac{f(\theta, x)}{f(\theta_0, x)} \, f(\theta, x) \, dx. \]

Statistics for which equality holds in (3.17) for each θ ≠ θ_0 are said to be asymptotically optimal. A sequence of statistics T = {T_n} is said to be locally asymptotically optimal (LAO) in the Bahadur sense if it satisfies the weaker condition

(3.18)   \[ c_T(\theta) \sim 2K(\theta, \theta_0) \quad \text{as } \theta \to \theta_0. \]

Suppose moreover that for each fixed x ∈ R the function θ ↦ √(f(θ, x)) is differentiable with respect to θ and that the Fisher information defined by

(3.19)   \[ I(\theta) := \int_{-\infty}^{\infty} \frac{ \bigl\{ \frac{\partial f}{\partial\theta}(\theta, x) \bigr\}^2 }{ f(\theta, x) } \, dx \]

is continuous. Then the Kullback-Leibler information can be shown to satisfy (see e.g. [4], Chapter 2, § 21, Theorem 2 and Remark 1, p. 205)

\[ K(\theta, \theta_0) \sim \frac{I(\theta_0)}{2} (\theta - \theta_0)^2 \quad \text{as } \theta \to \theta_0. \]

Consider the particular case of a location family, i.e. when

\[ F(\theta, \cdot) = F(0, \theta + \cdot) \qquad \text{for each } \theta \in \mathbb{R}. \]

In this case the Fisher information is easily seen to be a constant I, and it is readily checked that K(θ, 0) = K(0, −θ). Thus if we fix θ_0 = 0, the LAO condition (3.18) reduces to

(3.20)   \[ c_T(\theta) \sim I(0) \, \theta^2 \quad \text{as } \theta \to 0. \]

For each specified ν ∈ (0, ∞) we consider the location family such that for each θ ∈ R,

\[ F_\nu(\theta, x) = I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1 + \tanh(\frac{x+\theta}{\nu})}{2} \Bigr), \qquad x \in \mathbb{R}. \]

It follows from (4.30) that the corresponding density function is given by

(3.21)   \[ f_\nu(\theta, x) := \frac{\Gamma(\nu)}{\nu \, 2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \, \frac{1}{\cosh^\nu(\frac{x+\theta}{\nu})}. \]

The value ν = 1 gives the hyperbolic cosine distribution with

\[ f_1(0, x) = \frac{1}{\pi \cosh x} \quad \text{and} \quad F_1(0, x) = \frac{2}{\pi} \arctan e^x. \]

For ν = 2 we obtain the logistic distribution characterized by

\[ f_2(0, x) = \frac{1}{4 \cosh^2(\frac{x}{2})} = \frac{e^x}{(1+e^x)^2} \quad \text{and} \quad F_2(0, x) = \frac{1 + \tanh(\frac{x}{2})}{2} = \frac{1}{1 + e^{-x}}. \]
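As a quick numerical sanity check of (3.21) and of the two special cases above, one may verify (sketch below, SciPy assumed; the names f_nu and F_nu are ours) that f_ν integrates to one for any ν > 0 and that F_ν agrees with the stated closed forms for ν = 1, 2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betainc, gamma

def f_nu(nu, x, theta=0.0):
    """Density (3.21)."""
    c = gamma(nu) / (nu * 2 ** (nu - 1) * gamma(nu / 2) ** 2)
    return c / np.cosh((x + theta) / nu) ** nu

def F_nu(nu, x, theta=0.0):
    """D.f. F_nu(theta, x) = I(nu/2, nu/2, (1 + tanh((x + theta)/nu))/2)."""
    return betainc(nu / 2, nu / 2, (1 + np.tanh((x + theta) / nu)) / 2)

for nu in (0.5, 1.0, 2.0, 3.0):
    print(nu, quad(lambda x: f_nu(nu, x), -np.inf, np.inf)[0])  # total mass 1
print(F_nu(1.0, 0.7), 2 / np.pi * np.arctan(np.exp(0.7)))       # nu = 1
print(F_nu(2.0, 0.7), 1 / (1 + np.exp(-0.7)))                   # nu = 2
```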

For each ν, the quantities I, K(θ, 0) and c_{ω²_{ν,n}}(θ) will be denoted by I_ν, K_ν(θ) and c_ν(θ).

Proposition 3.1. For each ν > 0, the Fisher information satisfies

(3.22)   \[ I_\nu = \frac{1}{\nu+1}. \]

Proof. From (3.21) and (3.19) we see that

\[ I_\nu = I_\nu(0) = \frac{\Gamma(\nu)}{\nu \, 2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \int_{-\infty}^{\infty} \Bigl\{ -\nu \cosh^{-\nu-1}\Bigl(\frac{x}{\nu}\Bigr) \cdot \frac{1}{\nu} \sinh\Bigl(\frac{x}{\nu}\Bigr) \Bigr\}^2 \cosh^{\nu}\Bigl(\frac{x}{\nu}\Bigr) \, dx = \frac{\Gamma(\nu)}{2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \int_{-\infty}^{\infty} \frac{\sinh^2 x}{\cosh^{\nu+2} x} \, dx, \]

and the result is a consequence of (4.31).
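Numerically, since the score of the location parameter at θ = 0 is −tanh(x/ν), (3.22) can be checked by quadrature. A minimal sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def fisher_info(nu):
    """I_nu = E[tanh^2(X/nu)] under f_nu(0, .), since the score at
    theta = 0 equals -tanh(x/nu)."""
    c = gamma(nu) / (nu * 2 ** (nu - 1) * gamma(nu / 2) ** 2)
    integrand = lambda x: np.tanh(x / nu) ** 2 * c / np.cosh(x / nu) ** nu
    return quad(integrand, -np.inf, np.inf)[0]

for nu in (0.5, 1.0, 2.0, 5.0):
    print(nu, fisher_info(nu), 1 / (nu + 1))   # matches (3.22)
```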

In order to compute c_ν(θ) in the case ν ∈ (0, 2), we use the following result (see [9], Section 2.6, Table 2, p. 76 and the remarks following relation (2.6.2), pp. 77-78). When T = {T_{q,n}} is a weighted Cramér-von Mises type statistic, that is to say of the form

\[ T_{q,n} := \int_0^1 q(t) \, U_n^2(t) \, dt, \]

where q : (0, 1) → R₊ is a measurable function, then under the summability condition

(3.23)   \[ \int_0^1 q(t) \, dt < \infty, \]

the local exact slope of {T_{q,n}} is given by

(3.24)   \[ c_q(\theta) = a(q) \int_{-\infty}^{\infty} \bigl( F(\theta, y) - F(\theta_0, y) \bigr)^2 \, q\bigl( F(\theta_0, y) \bigr) \, dF(\theta_0, y), \]

with

(3.25)   \[ a(q) := -\lim_{x \to \infty} \frac{2}{x^2} \log P\Bigl( \int_0^1 q(t) \, B^2(t) \, dt \ge x^2 \Bigr). \]

Proposition 3.2. Suppose ν ∈ (0, 2). As θ → 0 the local exact slope c_ν(θ) satisfies

(3.26)   \[ c_\nu(\theta) \sim \frac{\theta^2}{\lambda_1(\nu)} \int_{-\infty}^{\infty} \Bigl( \frac{\partial F_\nu}{\partial\theta}(\theta, y)\Big|_{\theta=0} \Bigr)^2 q_\nu\{F_\nu(0, y)\} \, f_\nu(0, y) \, dy \]

(3.27)   \[ \phantom{c_\nu(\theta)} = \frac{\theta^2}{\nu+1} \cdot \frac{\pi^{2\nu-4}}{\lambda_1(\nu) \, \nu}, \]

where λ_1(ν) is the first eigenvalue in the K-L expansion of the process {√(q_ν(t)) B(t) : 0 < t < 1}.

Proof. Since (3.23) is satisfied for ν ∈ (0, 2), it is legitimate to use (3.24). When q = q_ν we write a(q) = a_ν. Firstly we infer from Lemma 4.1 that

(3.28)   \[ q_\nu\{F_\nu(0, y)\} = \pi^{2\nu-4} \Bigl( \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \Bigr)^2 \cosh^{-2(1-\nu)}\Bigl( \frac{y}{\nu} \Bigr). \]

Secondly from (4.32) we see that

\[ \frac{\partial F_\nu}{\partial\theta}(\theta, y) = \frac{2^{1-\nu}\Gamma(\nu)}{\nu \, \Gamma^2(\frac{\nu}{2})} \cosh^{-\nu}\Bigl( \frac{y+\theta}{\nu} \Bigr). \]

Consequently for each ν > 0 we have

\[ \Bigl\{ \frac{F_\nu(\theta, y) - F_\nu(0, y)}{\theta} \Bigr\}^2 f_\nu(0, y) \le \sup_{y \in \mathbb{R}} \sup_{\theta \in \mathbb{R}} \Bigl\{ \frac{\partial F_\nu}{\partial\theta}(\theta, y) \Bigr\}^2 f_\nu(0, y) = \frac{2^{2-2\nu}}{\nu^2} \Bigl( \frac{\Gamma(\nu)}{\Gamma^2(\frac{\nu}{2})} \Bigr)^2 f_\nu(0, y). \]

From this inequality and (3.28) we deduce the existence of M > 0 such that for each θ ∈ R,

\[ \Bigl\{ \frac{F_\nu(\theta, y) - F_\nu(0, y)}{\theta} \Bigr\}^2 q_\nu\{F_\nu(0, y)\} \, f_\nu(0, y) \le M \cosh^{-2(1-\nu)-\nu}\Bigl( \frac{y}{\nu} \Bigr) = M \cosh^{-2+\nu}\Bigl( \frac{y}{\nu} \Bigr). \]

Since ν < 2, the bound on the right-hand side is integrable, so we may apply Lebesgue's dominated convergence theorem in (3.24), as θ → 0, to obtain

\[ c_\nu(\theta) \sim a_\nu \, \theta^2 \int_{-\infty}^{\infty} \Bigl( \frac{\partial F_\nu}{\partial\theta}(\theta, y)\Big|_{\theta=0} \Bigr)^2 q_\nu\{F_\nu(0, y)\} \, f_\nu(0, y) \, dy. \]

Next we have

\[ \int_{-\infty}^{\infty} \Bigl( \frac{\partial F_\nu}{\partial\theta}(\theta, y)\Big|_{\theta=0} \Bigr)^2 q_\nu\{F_\nu(0, y)\} \, f_\nu(0, y) \, dy = \frac{\pi^{2\nu-4}}{\nu^2} \cdot \frac{\Gamma(\nu)}{\nu \, 2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \int_{-\infty}^{\infty} \frac{dx}{\cosh^{2\nu+2(1-\nu)+\nu}(\frac{x}{\nu})} = \frac{\pi^{2\nu-4}}{\nu^2} \cdot \frac{\Gamma(\nu)}{2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \int_{-\infty}^{\infty} \frac{dx}{\cosh^{\nu+2} x} = \frac{\pi^{2\nu-4}}{\nu(\nu+1)}, \]

where the last equality follows from Lemma 4.4. Thus in order to establish (3.26) and (3.27) there remains only to calculate a_ν thanks to (3.25) in the case q = q_ν. But it is shown in [13] that a centered Gaussian process {X(t) : 0 < t < 1} for which the representation (1.4) holds satisfies

\[ \lim_{x \to \infty} \frac{2}{x^2} \log P\Bigl( \int_0^1 X^2(t) \, dt \ge x^2 \Bigr) = -\frac{1}{\lambda_1}, \]

where λ_1 is nothing else but the first eigenvalue of the K-L expansion (1.3) of X; hence a_ν = 1/λ_1(ν). This completes the proof.

Theorem 3.1. For each ν ∈ [1, 2] the sequence of statistics {ω²_{ν,n}} is locally asymptotically optimal in the sense of Bahadur for testing

H_0(ν) : F = F_ν(0, ·) against the location alternative H_1(ν) : F = F_ν(θ, ·) with θ ≠ 0.

Proof. The result is already known to be true in the case ν = 2. For 1 ≤ ν < 2, we first use Proposition 2.3, which implies λ_1(ν) = π^{2ν−4}/ν. Then in view of (3.22) and (3.27), we see that the LAO condition (3.20) is satisfied.
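The LAO property can be illustrated numerically: by (3.20) and (3.22), 2K_ν(θ, 0)/θ² should approach I_ν = 1/(ν+1) as θ → 0, the same limit attained by c_ν(θ)/θ². A sketch (SciPy assumed; the function names are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def f_nu(nu, x, theta=0.0):
    """Density (3.21)."""
    c = gamma(nu) / (nu * 2 ** (nu - 1) * gamma(nu / 2) ** 2)
    return c / np.cosh((x + theta) / nu) ** nu

def kullback(nu, theta):
    """K_nu(theta, 0) = int log(f(theta,x)/f(0,x)) f(theta,x) dx."""
    g = lambda x: np.log(f_nu(nu, x, theta) / f_nu(nu, x)) * f_nu(nu, x, theta)
    return quad(g, -np.inf, np.inf)[0]

nu = 1.5
for theta in (0.5, 0.1, 0.02):
    print(theta, 2 * kullback(nu, theta) / theta ** 2, 1 / (nu + 1))
# 2 K_nu(theta, 0) / theta^2 approaches 1/(nu + 1) as theta -> 0.
```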


4 Useful technical results

Lemma 4.1. For each x ∈ R,

\[ \sin^2\Bigl( \pi J_\nu\Bigl( I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1+\tanh x}{2} \Bigr) \Bigr) \Bigr) = \cosh^{-2} x. \]

Proof. Let r ∈ [0, 1] be such that tanh x = −cos(πr), hence cosh^{−2} x = sin²(πr). Then by definition of J_ν one has

\[ J_\nu\Bigl( I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1+\tanh x}{2} \Bigr) \Bigr) = r. \]

Combining these two equalities, we obtain the claimed result.

Lemma 4.2. Let f(t) = t(1−t). For each ν > 0,

\[ f\Bigl( I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1-\cos(\pi r)}{2} \Bigr) \Bigr) \sim \frac{\Gamma(\nu)}{\nu \, 2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \sin^\nu(\pi r) \quad \text{as } r \to 0^+ \text{ or as } r \to 1^-. \]

Proof. Formula 26.5.4 in [1] implies that for α, β > 0 one has

\[ I(\alpha, \beta, x) \sim \frac{\Gamma(\alpha+\beta)}{\alpha \, \Gamma(\alpha)\Gamma(\beta)} \, x^\alpha \quad \text{as } x \to 0^+, \]

which in view of the second equality of (2.7) implies in turn

\[ I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, x \Bigr) \sim \frac{\Gamma(\nu)}{\frac{\nu}{2} \, \Gamma^2(\frac{\nu}{2})} \, x^{\nu/2} \quad \text{as } x \to 0^+ \qquad \text{and} \qquad 1 - I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, x \Bigr) \sim \frac{\Gamma(\nu)}{\frac{\nu}{2} \, \Gamma^2(\frac{\nu}{2})} \, (1-x)^{\nu/2} \quad \text{as } x \to 1^-. \]

Thus

\[ f\Bigl( I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, x \Bigr) \Bigr) \sim \frac{\Gamma(\nu)}{\frac{\nu}{2} \, \Gamma^2(\frac{\nu}{2})} \bigl( x(1-x) \bigr)^{\nu/2} \quad \text{as } x \to 0^+ \text{ or as } x \to 1^-, \]

and the result follows since

\[ \Bigl( \frac{1-\cos(\pi r)}{2} \cdot \frac{1+\cos(\pi r)}{2} \Bigr)^{\nu/2} = \Bigl( \frac{\sin(\pi r)}{2} \Bigr)^{\nu} \qquad \text{for } r \in [0, 1]. \]

Lemma 4.3. If f is a nonnegative measurable function on (0, 1) and ν ∈ (0, ∞), then

(4.29)   \[ \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\pi\Gamma(\nu)} \int_0^1 \frac{f(t)}{\sin^{2(\nu-1)}\{\pi J_\nu(t)\}} \, dt = \int_0^1 \frac{f\bigl( I(\frac{\nu}{2}, \frac{\nu}{2}, \frac{1-\cos(\pi r)}{2}) \bigr)}{\sin^{\nu-1}(\pi r)} \, dr. \]

Proof. The change of variable t = I(ν/2, ν/2, (1−cos(πr))/2) gives J_ν(t) = r and, in view of (2.6),

\[ dt = \frac{\Gamma(\nu)}{\Gamma^2(\frac{\nu}{2})} \Bigl( \frac{1-\cos(\pi r)}{2} \Bigr)^{\frac{\nu}{2}-1} \Bigl( \frac{1+\cos(\pi r)}{2} \Bigr)^{\frac{\nu}{2}-1} \frac{\pi \sin(\pi r)}{2} \, dr = \frac{\pi\Gamma(\nu)}{2^{\nu-1}\Gamma^2(\frac{\nu}{2})} \sin^{\nu-1}(\pi r) \, dr, \]

hence

\[ \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\pi\Gamma(\nu)} \cdot \frac{dt}{\sin^{2(\nu-1)}\{\pi J_\nu(t)\}} = \frac{dr}{\sin^{\nu-1}(\pi r)}, \]

and the result follows immediately.

Lemma 4.4. For each ν > 0 one has

(4.30)   \[ \int_{-\infty}^{x} \frac{dy}{\cosh^\nu y} = \frac{2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{\Gamma(\nu)} \, I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1+\tanh x}{2} \Bigr) \]

and

(4.31)   \[ \int_{-\infty}^{\infty} \frac{dy}{\cosh^{\nu+2} y} = \nu \int_{-\infty}^{\infty} \frac{\sinh^2 y}{\cosh^{\nu+2} y} \, dy = \frac{\nu \, 2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{(\nu+1)\Gamma(\nu)}. \]

Furthermore

(4.32)   \[ \frac{\partial}{\partial\theta} I\Bigl( \frac{\nu}{2}, \frac{\nu}{2}, \frac{1+\tanh(\frac{x+\theta}{\nu})}{2} \Bigr) = \frac{2^{1-\nu}\Gamma(\nu)}{\nu \, \Gamma^2(\frac{\nu}{2})} \cosh^{-\nu}\Bigl( \frac{x+\theta}{\nu} \Bigr). \]

Proof. The change of variables

\[ t = \frac{1+\tanh y}{2}, \quad \text{hence} \quad 4t(1-t) = \frac{1}{\cosh^2 y} \quad \text{and} \quad 2 \, dt = \frac{dy}{\cosh^2 y}, \]

gives

\[ \int_{-\infty}^{x} \frac{dy}{\cosh^\nu y} = \int_0^{\frac{1+\tanh x}{2}} \bigl( 4t(1-t) \bigr)^{\frac{\nu-2}{2}} \cdot 2 \, dt, \]

which, on using (2.6), leads to (4.30). The latter, when ν is changed into ν + 2 and as x → ∞, yields

\[ \int_{-\infty}^{\infty} \frac{dy}{\cosh^{\nu+2} y} = \frac{2^{\nu+1}\Gamma^2(\frac{\nu}{2}+1)}{\Gamma(\nu+2)} = \frac{2^{\nu+1}(\frac{\nu}{2})^2\Gamma^2(\frac{\nu}{2})}{(\nu+1) \, \nu \, \Gamma(\nu)} = \frac{\nu \, 2^{\nu-1}\Gamma^2(\frac{\nu}{2})}{(\nu+1)\Gamma(\nu)}, \]

which proves the equality between the left-hand and the right-hand terms of (4.31). Next, formula 4.5.86 in [1] implies

\[ -\frac{1}{\nu} \, \frac{\sinh y}{\cosh^{\nu+1} y} + \frac{1}{\nu} \int \frac{dy}{\cosh^{\nu+2} y} = \int \frac{\sinh^2 y}{\cosh^{\nu+2} y} \, dy; \]

evaluating between −∞ and ∞, the integrated term vanishes, and we deduce the equality between the first and the second terms of (4.31). Finally, on differentiating both sides of (4.30) with respect to x, one easily obtains (4.32).
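The identities (4.30) and (4.31) are easy to confirm by quadrature; a short sketch assuming SciPy's betainc:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import betainc, gamma

nu, x = 1.7, 0.4
lhs = quad(lambda y: np.cosh(y) ** -nu, -np.inf, x)[0]
rhs = (2 ** (nu - 1) * gamma(nu / 2) ** 2 / gamma(nu)
       * betainc(nu / 2, nu / 2, (1 + np.tanh(x)) / 2))
print(lhs, rhs)                                        # (4.30)

i1 = quad(lambda y: np.cosh(y) ** -(nu + 2), -np.inf, np.inf)[0]
i2 = quad(lambda y: np.sinh(y) ** 2 / np.cosh(y) ** (nu + 2), -np.inf, np.inf)[0]
closed = nu * 2 ** (nu - 1) * gamma(nu / 2) ** 2 / ((nu + 1) * gamma(nu))
print(i1, nu * i2, closed)                             # (4.31)
```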

References

[1] Milton Abramowitz and Irene A. Stegun, editors. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications Inc., New York, 1992. Reprint of the 1972 edition.

[2] T. W. Anderson and D. A. Darling. Asymptotic theory of certain "goodness of fit" criteria based on stochastic processes. Ann. Math. Statistics, 23:193-212, 1952.

[3] R. R. Bahadur. Some Limit Theorems in Statistics. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1971. Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 4.

[4] A. Borovkov. Statistique mathématique. Éditions Mir, Moscou, 1987.

[5] Miklós Csörgő and Lajos Horváth. Weighted Approximations in Probability and Statistics. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Ltd., Chichester, 1993. With a foreword by David Kendall.

[6] Paul Deheuvels and Guennady V. Martynov. A Karhunen-Loève decomposition of a Gaussian process generated by independent pairs of exponential random variables. Preprint, 2003.

[7] N. Henze and Ya. Yu. Nikitin. A new approach to goodness-of-fit testing based on the integrated empirical process. J. Nonparametr. Statist., 12(3):391-416, 2000.

[8] Norman L. Johnson, Samuel Kotz, and N. Balakrishnan. Continuous Univariate Distributions, Vol. 2. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Inc., New York, 1995.

[9] Yakov Nikitin. Asymptotic Efficiency of Nonparametric Tests. Cambridge University Press, Cambridge, 1995.

[10] J.-R. Pycke. A family of Karhunen-Loève expansions related to the n-dimensional Euclidean sphere and real projective space. Preprint, 2003.

[11] Louis Robin. Fonctions sphériques de Legendre et fonctions sphéroïdales, Tome II. Gauthier-Villars, Paris, 1958.

[12] Galen R. Shorack and Jon A. Wellner. Empirical Processes with Applications to Statistics. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Inc., New York, 1986.

[13] V. M. Zolotarev. Concerning a certain probability problem. Teor. Verojatnost. i Primenen., 6:219-222, 1961.
