PCM QUANTIZATION ERRORS AND THE WHITE NOISE HYPOTHESIS

DAVID JIMENEZ, LONG WANG, AND YANG WANG

Abstract. The White Noise Hypothesis (WNH), introduced by Bennett half a century ago, assumes that in the pulse code modulation (PCM) quantization scheme the errors in individual channels behave like white noise, i.e. they are independent and identically distributed random variables. The WNH is key to estimating the mean square quantization error (MSE). But is the WNH valid? In this paper we take a close look at the WNH. We show that in a redundant system the errors from individual channels can never be independent. Thus to an extent the WNH is invalid. Our numerical experiments also indicate that with coarse quantization the WNH is far from valid. However, as the main result of this paper we show that with fine quantization the WNH is essentially valid: the errors from individual channels become asymptotically pairwise independent, each uniformly distributed in [−∆/2, ∆/2), where ∆ denotes the stepsize of the quantization.

1. Introduction

In processing, analyzing and storing analog signals it is often necessary to make atomic decompositions of the signal using a given set of atoms, or basis {v_j}. With the basis, a signal x is represented as a linear combination of {v_j},

x = Σ_j c_j v_j.

In practice {v_j} is a finite set. Furthermore, for the purpose of error correction, recovery from data erasures or robustness, redundancy is built into {v_j}, i.e. more elements than needed are in {v_j}. Instead of a true basis, {v_j} is chosen to be a frame. Since {v_j} is a finite set, we may without loss of generality assume {v_j}_{j=1}^N are vectors in R^d with N ≥ d.

Let F = [v_1, v_2, ..., v_N] be the d × N matrix whose columns are v_1, ..., v_N. We say {v_j}_{j=1}^N is a frame if F has rank d. Let λ_max ≥ λ_min > 0 be the maximal and minimal eigenvalues of FF^T, respectively. It is easily checked that

λ_min ‖x‖² ≤ Σ_{j=1}^N |x · v_j|² ≤ λ_max ‖x‖².   (1.1)

The third author is supported in part by the National Science Foundation, grant DMS-0139261.

λ_max and λ_min are called the upper and lower frame bounds for the frame, respectively. If λ_max = λ_min = λ, in which case FF^T = λI_d, we call {v_j}_{j=1}^N a tight frame with frame bound λ. Note that any signal x ∈ R^d can be easily reconstructed using the data {x · v_j}_{j=1}^N. Set y = [x · v_1, x · v_2, ..., x · v_N]^T. Then y = F^T x and (FF^T)^{-1} F y = (FF^T)^{-1} FF^T x = x. Let G = (FF^T)^{-1} F = [u_1, u_2, ..., u_N]. The set of columns {u_j}_{j=1}^N of G is called the canonical dual frame of the frame {v_j}_{j=1}^N. We have the reconstruction

x = Σ_{j=1}^N (x · v_j) u_j.   (1.2)

If {v_j}_{j=1}^N is a tight frame with frame bound λ, then G = λ^{-1} F, and we have the reconstruction

x = (1/λ) Σ_{j=1}^N (x · v_j) v_j.   (1.3)

In digital applications, quantizations will have to be performed. The simplest scheme is the Pulse Code Modulation (PCM) quantization scheme, in which the coefficients {x · v_j}_{j=1}^N are quantized. In this paper we consider exclusively linear quantizations. Let A = ∆Z where ∆ > 0 is the quantization step. With linear quantization a real value t is replaced with the value in A that is the closest to t. So, in our setting, t is replaced with Q_∆(t) given by

Q_∆(t) := ⌊t/∆ + 1/2⌋ ∆.

Thus, given a frame {v_j}_{j=1}^N and its canonical dual frame {u_j}_{j=1}^N, instead of using the data {x · v_j}_{j=1}^N and (1.2) to obtain a perfect reconstruction, we use the data {Q_∆(x · v_j)}_{j=1}^N and obtain an imperfect reconstruction

x̂ = Σ_{j=1}^N Q_∆(x · v_j) u_j.   (1.4)

This raises the following question: How good is the reconstruction? This question has been studied in terms of both the worst case error and the mean square error (MSE), see e.g. [12].
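The encode–quantize–reconstruct pipeline described above is easy to simulate. The following minimal sketch (Python with NumPy; the random frame, the Gaussian signal and all names are our own illustrative choices) builds a redundant frame, applies Q_∆ to the coefficients, and reconstructs with the canonical dual frame:

```python
import numpy as np

def quantize(t, delta):
    # Q_Delta(t) = floor(t/Delta + 1/2) * Delta: nearest point of Delta*Z
    return np.floor(t / delta + 0.5) * delta

rng = np.random.default_rng(0)
d, N = 3, 7                        # redundant frame: N > d
F = rng.standard_normal((d, N))    # columns are the frame vectors v_j
G = np.linalg.inv(F @ F.T) @ F     # columns are the dual frame vectors u_j

x = rng.standard_normal(d)
delta = 1e-3
x_hat = G @ quantize(F.T @ x, delta)   # imperfect reconstruction (1.4)
err = np.linalg.norm(x - x_hat)
```

Without quantization, `G @ (F.T @ x)` recovers x exactly; with quantization the error `err` is on the order of ∆.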


Note that the error from the reconstruction is

x − x̂ = Σ_{j=1}^N τ_∆(x · v_j) u_j,   (1.5)

where τ_∆(t) := t − Q_∆(t) = ({t/∆ + 1/2} − 1/2)∆, with {·} denoting the fractional part. While the a priori error bound is relatively straightforward to obtain, the mean square error MSE := E(‖x − x̂‖²), assuming a certain probability distribution for x, is much harder. To

simplify the problem, the so-called White Noise Hypothesis (WNH), originally introduced in [4], is employed by engineers and mathematicians in this area. The WNH asserts the

following:
• Each τ_∆(x · v_j) is uniformly distributed in [−∆/2, ∆/2); hence it has mean 0 and variance ∆²/12.
• {τ_∆(x · v_j)}_{j=1}^N are independent random variables.
With the WNH the MSE is easily shown to be

E(‖x − x̂‖²) = (∆²/12) Σ_{j=1}^d λ_j^{-1} = (∆²/12) Σ_{j=1}^N ‖u_j‖²,   (1.6)

where {λ_j} are the eigenvalues of FF^T.

But surprisingly there has not been any study on the legitimacy of the WNH, especially considering the fact that it is made under very general settings, where both the frame {v_j}_{j=1}^N and the probability distribution of x ∈ R^d can take on numerous possibilities.

Thus, the WNH deserves closer scrutiny, which is what this paper intends to do. We prove in this paper that under the assumption that the distribution of x has a density (i.e. is absolutely continuous), the components of the quantization errors {τ_∆(x · v_j)}_{j=1}^N can never be independent if N > d, and thus the WNH can never hold. However, our main result is a vindication of the WNH. We show that as ∆ → 0+, {τ_∆(x · v_j)}_{j=1}^N becomes asymptotically pairwise independent, and thus pairwise uncorrelated, as long as v_i is not parallel to v_j for any i ≠ j. Additionally, each τ_∆(x · v_j) indeed becomes asymptotically uniformly distributed on [−∆/2, ∆/2]. These slightly weaker properties are sufficient to yield the MSE given by (1.6) asymptotically. We also characterize the asymptotic behavior of the MSE if some vectors are parallel. These and other results are stated and proved in subsequent sections.
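The fine-quantization regime can already be watched numerically. The following Monte Carlo sketch (our own illustration; the Gaussian source and random frame are assumptions, not taken from the paper) compares the empirical MSE against the WNH prediction (1.6):

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, delta = 3, 6, 1e-2
F = rng.standard_normal((d, N))      # random frame: columns are pairwise
G = np.linalg.inv(F @ F.T) @ F       # non-parallel almost surely

# WNH prediction (1.6): Delta^2/12 * sum_j 1/lambda_j
mse_wnh = delta**2 / 12 * np.sum(1 / np.linalg.eigvalsh(F @ F.T))

X = rng.standard_normal((20000, d))                  # Gaussian signals
Y_q = np.floor(X @ F / delta + 0.5) * delta          # quantized coefficients
mse_emp = np.mean(np.sum((X - Y_q @ G.T) ** 2, axis=1))
ratio = mse_emp / mse_wnh            # close to 1 for fine quantization
```

For this fine ∆ the ratio is close to 1, consistent with the asymptotic validity claimed above; rerunning with a coarse ∆ (say ∆ = 2) shows substantial deviations.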


2. A Priori Error Bound and MSE under the WNH

In this section we derive a priori error bounds and the formula for the MSE under the WNH. These results are not new; we include them for self-containment. We use the following settings throughout this section: Let {v_j}_{j=1}^N be a frame in R^d with corresponding frame matrix F = [v_1, v_2, ..., v_N]. The eigenvalues of FF^T are λ_max = λ_1 ≥ λ_2 ≥ ··· ≥ λ_d = λ_min > 0. Let {u_j}_{j=1}^N be the canonical dual frame with corresponding matrix G = (FF^T)^{-1} F. For any x = Σ_{j=1}^N (x · v_j) u_j, using the quantization alphabet A = ∆Z we have the PCM quantized reconstruction

x̂ = Σ_{j=1}^N Q_∆(x · v_j) u_j.

Proposition 2.1. For any x ∈ R^d we have

‖x − x̂‖ ≤ (1/2) √(N/λ_min) ∆.   (2.1)

If in addition {v_j}_{j=1}^N is a tight frame with frame bound λ, then

‖x − x̂‖ ≤ (1/2) √(N/λ) ∆.   (2.2)

Proof. We have x − x̂ = Σ_{j=1}^N τ_∆(x · v_j) u_j = Gy, where y = [τ_∆(x · v_1), ..., τ_∆(x · v_N)]^T. Thus ‖x − x̂‖² = y^T G^T G y ≤ ρ(G^T G) ‖y‖², where ρ(·) denotes the spectral radius. Now ρ(G^T G) = ρ(GG^T) = ρ((FF^T)^{-1}) = λ_min^{-1}. Observe that |τ_∆(x · v_j)| ≤ ∆/2, so ‖y‖² ≤ N(∆/2)². This yields the a priori error bound (2.1). The bound (2.2) is an immediate corollary. □
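The proof's key identity ρ(G^T G) = 1/λ_min, and the bound (2.1) itself, can be verified numerically; a sketch under the same random-frame assumptions as before (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, delta = 4, 9, 0.05
F = rng.standard_normal((d, N))
G = np.linalg.inv(F @ F.T) @ F
lam_min = np.linalg.eigvalsh(F @ F.T).min()

# rho(G^T G) = rho(G G^T) = rho((F F^T)^{-1}) = 1/lambda_min
rho = np.linalg.eigvalsh(G.T @ G).max()

# worst observed reconstruction error vs. the a priori bound (2.1)
bound = 0.5 * np.sqrt(N / lam_min) * delta
worst = 0.0
for _ in range(200):
    x = rng.standard_normal(d)
    y_q = np.floor(F.T @ x / delta + 0.5) * delta   # Q_Delta entrywise
    worst = max(worst, np.linalg.norm(x - G @ y_q))
```

The bound is a worst-case guarantee, so `worst` stays below `bound` in every trial, typically with plenty of slack.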

Proposition 2.2. Under the WNH, the MSE is

E(‖x − x̂‖²) = (∆²/12) Σ_{j=1}^d λ_j^{-1} = (∆²/12) Σ_{j=1}^N ‖u_j‖².   (2.3)

In particular, if {v_j}_{j=1}^N is a tight frame with frame bound λ, then

E(‖x − x̂‖²) = d∆²/(12λ).   (2.4)

Proof. Denote G^T G = [b_ij]_{i,j=1}^N and again let y = [τ_∆(x · v_1), ..., τ_∆(x · v_N)]^T. Note x − x̂ = Gy and that with the WNH, E(y_i y_j) = E(τ_∆(x · v_i) τ_∆(x · v_j)) = (∆²/12) δ_ij. Hence

E(‖x − x̂‖²) = E(y^T G^T G y) = Σ_{i,j=1}^N b_ij E(y_i y_j) = (∆²/12) Σ_{i=1}^N b_ii = (∆²/12) tr(G^T G).

Finally, tr(G^T G) = Σ_{j=1}^N ‖u_j‖², and tr(G^T G) = tr(GG^T) = tr((FF^T)^{-1}) = Σ_{j=1}^d λ_j^{-1}. □

Remark: The MSE formulae (2.3)-(2.4) still hold if the independence of {τ_∆(x · v_j)}_{j=1}^N in the WNH is replaced with the weaker condition that {τ_∆(x · v_j)}_{j=1}^N are uncorrelated.

3. A Closer Look at the WNH

The WNH asserts that the error components {τ_∆(x · v_j)}_{j=1}^N are independent and identically distributed random variables. We show that in general this is not true.

Theorem 3.1. Let X ∈ R^d be an absolutely continuous random vector. Let {v_j}_{j=1}^N be a frame in R^d with N > d and v_j ≠ 0 for all j. Then the random variables {τ_∆(X · v_j)}_{j=1}^N are not independent.

Proof.

Let F be the frame matrix for the frame {v_j}. Then dim(range(F^T)) = d, and therefore L(range(F^T)) = 0, where L is the Lebesgue measure on R^N. Let Y = [Y_1, ..., Y_N]^T := F^T X, and let Ŷ = [Q_∆(Y_1), ..., Q_∆(Y_N)]^T be the quantized Y. Denote Z = Y − Ŷ = [Z_1, ..., Z_N]^T. Note that Y_j = v_j · X, so each Y_j is absolutely continuous, and therefore so is each Z_j. Assume Z_j has density ψ_j(·). If {Z_j} are independent, then Z has density

ψ(z_1, ..., z_N) = Π_{j=1}^N ψ_j(z_j).

Now, for y ∈ R^N denote ŷ = [Q_∆(y_1), ..., Q_∆(y_N)]^T. Set Ω := {y − ŷ : y ∈ range(F^T)}, and for w ∈ R^N let K_w := {y − w : y ∈ range(F^T), ŷ = w} and L_w := {y : y ∈ range(F^T), ŷ = w}. Note that K_w and L_w are just translations of one another, so L(K_w) = L(L_w). If Λ = {ŷ : y ∈ range(F^T)}, then Λ is countable, Ω = ∪_{w∈Λ} K_w, and range(F^T) = ∪_{w∈Λ} L_w. Therefore L(K_w) = L(L_w) ≤ L(range(F^T)) = 0, and hence L(Ω) = 0. Nevertheless, Z ∈ Ω almost surely, so

1 = P(Z ∈ Ω) = ∫_Ω ψ(z) dz = 0

since L(Ω) = 0, which is a contradiction. □


Theorem 3.2. Let X = [X_1, ..., X_m]^T be a random vector in R^m whose distribution has density function g(x_1, ..., x_m).
(1) The error components {τ_∆(X_j)}_{j=1}^m are independent if and only if there exist complex numbers {β_j(n) : 1 ≤ j ≤ m, n ∈ Z} such that

ĝ(a_1/∆, ..., a_m/∆) = β_1(a_1) ··· β_m(a_m)   (3.1)

for all [a_1, ..., a_m]^T ∈ Z^m.
(2) Let h_j(t) be the marginal density of X_j. Then {τ_∆(X_j)}_{j=1}^m are identically distributed if and only if

Σ_{n∈Z} h_j(t − n∆) = H(t)   a.e.

for some H(t) independent of j. They are uniformly distributed on [−∆/2, ∆/2] if and only if H(t) = 1/∆ a.e.

Proof. To prove (1) we first prove that Y = [τ_∆(X_1), ..., τ_∆(X_m)]^T has a density function. Let I_∆ = [−∆/2, ∆/2]. Set

h(y) := Σ_{a∈Z^m} g(y − ∆a)   (3.2)

for y ∈ I_∆^m. For any Ω ⊆ I_∆^m we have

P(Y ∈ Ω) = P(X ∈ Ω + ∆Z^m) = Σ_{a∈Z^m} ∫_Ω g(y − ∆a) dy = ∫_Ω Σ_{a∈Z^m} g(y − ∆a) dy.


Thus, the density of Y is given by (3.2) for y ∈ I_∆^m. Now, on I_∆^m the Fourier series of h(y) is h(y) = Σ_{a∈Z^m} c_a e^{2πi(a/∆)·y}, where

c_a = ⟨h(y), e^{2πi(a/∆)·y}⟩_{L²(I_∆^m)} = Σ_{b∈Z^m} ∫_{I_∆^m} g(y − ∆b) e^{−2πi(a/∆)·y} dy.

Substituting u = y − ∆b and noting that e^{−2πi(a/∆)·(u+∆b)} = e^{−2πi(a/∆)·u} since a · b ∈ Z, this becomes

c_a = Σ_{b∈Z^m} ∫_{I_∆^m−∆b} g(u) e^{−2πi(a/∆)·u} du = ∫_{R^m} g(u) e^{−2πi(a/∆)·u} du = ĝ(a/∆).

But {Y_j}_{j=1}^m are independent if and only if on I_∆^m the density factorizes, h(y_1, ..., y_m) = φ_1(y_1) ··· φ_m(y_m) for some one-dimensional densities φ_j. It is easily checked that this happens if and only if the Fourier coefficients factorize,

ĝ(a_1/∆, a_2/∆, ..., a_m/∆) = β_1(a_1) β_2(a_2) ··· β_m(a_m)

for all a = [a_1, ..., a_m]^T ∈ Z^m, where one may take β_j(n) = φ̂_j(n/∆). This part of the theorem is proved.

To prove (2), we only have to observe that the density of τ_∆(X_j) is precisely Σ_{n∈Z} h_j(t − ∆n) for t ∈ I_∆. This immediately yields Σ_{n∈Z} h_j(t − ∆n) = H(t) for some H(t) on I_∆, and each Σ_{n∈Z} h_j(t − ∆n) is ∆-periodic. Furthermore, if τ_∆(X_j) is uniformly distributed on I_∆ then H(t) = 1/∆. This completes the proof of the theorem. □

Theorem 3.2 puts strong constraints on the distribution of x for the WNH to hold. Let X ∈ R^d be a random vector with joint density f(x). Let {v_j}_{j=1}^d be linearly independent, and let Y = [X · v_1, X · v_2, ..., X · v_d]^T. Then the joint density of Y is g(y) = |det(F)|^{-1} f((F^T)^{-1} y), where F = [v_1, v_2, ..., v_d]. Thus, both the independence and the identical distribution assumptions in the WNH, even for N = d, will not hold unless very exact conditions are met.


Corollary 3.3. Let X ∈ R^d be a random vector with joint density f(x) and let {v_j}_{j=1}^d be linearly independent vectors in R^d. Let Y = F^T X = [X · v_1, ..., X · v_d]^T and g(y) = |det(F)|^{-1} f((F^T)^{-1} y), where F = [v_1, ..., v_d].
(1) {τ_∆(Y_j)}_{j=1}^d are independent random variables if and only if there exist complex numbers {β_j(n) : 1 ≤ j ≤ d, n ∈ Z} such that

ĝ(a_1/∆, ..., a_d/∆) = β_1(a_1) ··· β_d(a_d)   (3.3)

for all [a_1, ..., a_d]^T ∈ Z^d.
(2) Let h_j(t) = ∫_{R^{d−1}} g(x_1, ..., x_{j−1}, t, x_{j+1}, ..., x_d) dx_1 ··· dx_{j−1} dx_{j+1} ··· dx_d. Then {τ_∆(Y_j)}_{j=1}^d are identically distributed if and only if Σ_{n∈Z} h_j(t − n∆) = H(t) a.e. for some H(t) independent of j. They are uniformly distributed on [−∆/2, ∆/2] if and only if H(t) = 1/∆ a.e.

Proof. We only have to observe that g(y) is the density of Y and that h_j is the marginal density of Y_j. The corollary now follows directly from the theorem. □

4. Asymptotic Behavior of Errors: Linear Independence Case

In many practical applications, such as music CDs, fine quantizations with 16 bits or more have been adopted. Although the WNH is not valid in general, with fine quantizations we prove here that a weaker version of the WNH is close to being valid, which yields an asymptotic formula for the PCM quantized MSE.

We again consider the same setup as before. Let {v_j}_{j=1}^N be a frame in R^d with corresponding frame matrix F = [v_1, v_2, ..., v_N]. The eigenvalues of FF^T are λ_max = λ_1 ≥ λ_2 ≥ ··· ≥ λ_d = λ_min > 0. Let {u_j}_{j=1}^N be the canonical dual frame with corresponding matrix G = (FF^T)^{-1} F. For any x ∈ R^d we have x = Σ_{j=1}^N (x · v_j) u_j. Using the quantization alphabet A = ∆Z we have the PCM reconstruction (1.4). Note that x̂ = x̂(∆), as it depends on ∆. With the WNH we obtain the MSE

MSE = E(‖x − x̂‖²) = (∆²/12) Σ_{j=1}^d λ_j^{-1}.


To study the asymptotic behavior of the error components, we study as ∆ → 0+ the normalized quantization error

(1/∆)(x − x̂) = (1/∆) Σ_{j=1}^N τ_∆(x · v_j) u_j.   (4.1)

Theorem 4.1. Let X ∈ R^d be an absolutely continuous random vector. Let w_1, ..., w_m be linearly independent vectors in R^d. Then

[(1/∆) τ_∆(X · w_1), ..., (1/∆) τ_∆(X · w_m)]^T

converges in distribution as ∆ → 0+ to a random vector uniformly distributed in [−1/2, 1/2]^m.

Proof. Denote Y_j = X · w_j. Since {w_j} are linearly independent, Y = [Y_1, ..., Y_m]^T is absolutely continuous with some joint density f(x), x ∈ R^m. As a consequence of (3.2), the density of Z = [Z_1, ..., Z_m]^T, where Z_j = (1/∆) τ_∆(Y_j) = {Y_j/∆ + 1/2} − 1/2, is

f_∆(x) := ∆^m Σ_{a∈Z^m} f(∆x − ∆a)   (4.2)

for x ∈ [−1/2, 1/2]^m. Again denote I_1 := [−1/2, 1/2]. Observe that

‖f_∆‖_{L¹(I_1^m)} = ∫_{I_1^m} |f_∆(x)| dx = ∫_{I_1^m} |Σ_{a∈Z^m} ∆^m f(∆x − ∆a)| dx
≤ Σ_{a∈Z^m} ∆^m ∫_{I_1^m} |f(∆x − ∆a)| dx
= Σ_{a∈Z^m} ∫_{I_∆^m+∆a} |f(y)| dy
= ∫_{∪_{a∈Z^m}(I_∆^m+∆a)} |f(y)| dy
= ∫_{R^m} |f(y)| dy = ‖f‖_{L¹(R^m)}.

Now, if Ω = [a_1, b_1] × ··· × [a_m, b_m] and f(x) = 1_Ω(x), then for x ∈ I_1^m observe that f_∆(x) = ∆^m K_∆(x), where K_∆(x) = #{a ∈ Z^m : ∆x + ∆a ∈ Ω}. Obviously K_∆(x) = s/∆^m + O(∆^{−m+1}), where s = L(Ω) is the Lebesgue measure of Ω. Then f_∆ → s·1_{I_1^m} in L¹(I_1^m) as ∆ → 0+.


Coming back to the case when f(x) is the density of Y: for any ε > 0 it is possible to choose a g(x) ∈ L¹(R^m) such that ‖f − g‖_{L¹} < ε/3 and, furthermore, g(x) = Σ_{j=1}^N c_j 1_{E_j}(x) is a simple function, where c_j ∈ R and each E_j is a product of finite intervals. Observe that ∫_{R^m} g = Σ_{j=1}^N c_j L(E_j). Since (1_{E_j})_∆ → L(E_j) 1_{I_1^m} in L¹, we have g_∆ → (∫_{R^m} g) 1_{I_1^m} as ∆ → 0. Hence there exists a δ > 0 such that ‖g_∆ − (∫_{R^m} g) 1_{I_1^m}‖_{L¹} < ε/3 whenever ∆ < δ. Now, for ∆ < δ,

‖f_∆ − 1_{I_1^m}‖_{L¹(I_1^m)} ≤ ‖f_∆ − g_∆‖_{L¹(I_1^m)} + ‖g_∆ − (∫_{R^m} g) 1_{I_1^m}‖_{L¹(I_1^m)} + |1 − ∫_{R^m} g| ‖1_{I_1^m}‖_{L¹(I_1^m)}
< ε/3 + ε/3 + |∫_{R^m} f − ∫_{R^m} g|
< ε.

Thus f_∆ → 1_{I_1^m} in L¹(I_1^m), which proves the theorem. □

Applying the above theorem to the MSE: if {v_j}_{j=1}^N are pairwise linearly independent, then the error components {τ_∆(X · v_j)}_{j=1}^N become asymptotically pairwise independent, each uniformly distributed in [−∆/2, ∆/2]. Therefore we have the following:

Corollary 4.2. Let X ∈ R^d be an absolutely continuous random vector. If {v_j}_{j=1}^N are pairwise linearly independent, then as ∆ → 0+ we have

E(‖X − X̂‖²) = (∆²/12) Σ_{j=1}^d λ_j^{-1} + o(∆²) = (∆²/12) Σ_{j=1}^N ‖u_j‖² + o(∆²).   (4.3)

Proof. Denote by F the frame matrix associated to {v_j}_{j=1}^N, and set H = G^T G = [h_ij]_{i,j=1}^N, Y_j = X · v_j, Z_j = {Y_j/∆ + 1/2} − 1/2, and Z = [Z_1, ..., Z_N]^T. By Theorem 4.1, E(Z_i) → 0 and E(Z_i Z_j) → (1/12) δ_ij as ∆ → 0+. It follows from the proof of Proposition 2.2 that

(1/∆²) E(‖X − X̂‖²) = E(Z^T H Z) = Σ_{i,j=1}^N h_ij E(Z_i Z_j) = (1/12) Σ_{i=1}^N h_ii + o(1) = (1/12) Σ_{j=1}^d λ_j^{-1} + o(1),

and hence

E(‖X − X̂‖²) = (∆²/12) Σ_{j=1}^d λ_j^{-1} + o(∆²) = (∆²/12) Σ_{j=1}^N ‖u_j‖² + o(∆²). □
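The mechanism behind Corollary 4.2, pairwise decorrelation plus uniformity of the normalized errors, can be observed directly in simulation. A sketch (the two non-parallel directions and the Gaussian source are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
v1, v2 = np.array([1.0, 0.3]), np.array([0.2, 1.0])   # not parallel
X = rng.standard_normal((200000, 2))

def norm_err(v, delta):
    # tau_Delta(X.v)/Delta = {X.v/Delta + 1/2} - 1/2
    y = X @ v
    return (y / delta + 0.5) % 1.0 - 0.5

corrs, variances = [], []
for delta in (0.5, 0.1, 0.02):
    z1, z2 = norm_err(v1, delta), norm_err(v2, delta)
    corrs.append(abs(np.corrcoef(z1, z2)[0, 1]))
    variances.append(z1.var())
```

For fine ∆ the correlation is statistically indistinguishable from 0 and the variance of the normalized error approaches 1/12, the variance of the uniform distribution on [−1/2, 1/2].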

5. Asymptotic Behavior of Errors: Linear Dependence Case

Mathematically it is very interesting to understand what happens if {v_j}_{j=1}^N are not pairwise linearly independent, and how the MSE behaves as ∆ → 0+. We return to the previous calculations and note that

E(‖X − X̂‖²) = Σ_{i,j=1}^N h_ij E(τ_∆(X · v_i) τ_∆(X · v_j)).

Our main result in this section is:

Theorem 5.1. Let X be an absolutely continuous real random variable and let α ∈ R \ {0}. Then

lim_{∆→0+} (1/∆²) E(τ_∆(X) τ_∆(αX)) =
    0,           if α ∉ Q;
    1/(12pq),    if α = p/q and p + q is even;   (5.1)
    −1/(24pq),   if α = p/q and p + q is odd,

where p, q are coprime integers.


Proof. Denote g(x) := {x + 1/2} − 1/2, and let g_n(x) be a small perturbation of g(x) such that
(a) |g_n(x)| ≤ 1/2;
(b) supp(g(x) − g_n(x)) ⊆ [1/2 − 1/n, 1/2 + 1/n] + Z;
(c) g_n(x) ∈ C^∞, and is Z-periodic.
Now, set

E(∆) := (1/∆²) E(τ_∆(X) τ_∆(αX)) = E(g(X/∆) g(αX/∆)) = ∫_R g(x/∆) g(αx/∆) f(x) dx,

where f is the density of X, and

E_n(∆) := ∫_R g_n(x/∆) g_n(αx/∆) f(x) dx.

Claim: E_n(∆) → E(∆) as n → ∞, uniformly for all ∆ > 0.

Proof of the Claim. For any ε > 0,

|E_n(∆) − E(∆)| = |∫_R [g_n(x/∆) g_n(αx/∆) − g(x/∆) g(αx/∆)] f(x) dx|
≤ (1/2) ∫_R |g_n(x/∆) − g(x/∆)| f(x) dx + (1/2) ∫_R |g_n(αx/∆) − g(αx/∆)| f(x) dx.

Now there exists an M > 0 such that ∫_{[−M,M]^c} f(x) dx < ε/2. So

∫_R |g_n(x/∆) − g(x/∆)| f(x) dx ≤ ∫_{−M}^{M} |g_n(x/∆) − g(x/∆)| f(x) dx + ε/2.

Furthermore, let A_n(∆, M) := supp(g_n(x/∆) − g(x/∆)) ∩ [−M, M]. Then we have

A_n(∆, M) ⊆ (∆[1/2 − 1/n, 1/2 + 1/n] + ∆Z) ∩ [−M, M].

Hence L(A_n(∆, M)) ≤ (2M/∆) · (2∆/n) = 4M/n, and thus

∫_{−M}^{M} |g_n(x/∆) − g(x/∆)| f(x) dx ≤ ∫_{A_n(∆,M)} f(x) dx,

which, by the absolute continuity of the integral and since L(A_n(∆, M)) ≤ 4M/n → 0 uniformly in ∆, can be made smaller than ε/2 for all sufficiently large n. The same estimate applies to the term involving g_n(αx/∆) − g(αx/∆). This proves the claim.

Since each g_n is C^∞ and Z-periodic, it has a Fourier series g_n(x) = Σ_{k∈Z} c_k^{(n)} e^{2πikx} whose coefficients decay faster than any polynomial, giving absolute convergence of the Fourier series. Thus

E_n(∆) = lim_{K→∞} ∫_R (Σ_{|k|≤K} c_k^{(n)} e^{2πikt∆^{-1}}) (Σ_{|ℓ|≤K} c_ℓ^{(n)} e^{2πiℓαt∆^{-1}}) f(t) dt = lim_{K→∞} Σ_{|k|,|ℓ|≤K} c_k^{(n)} c_ℓ^{(n)} f̂(−(k + αℓ)/∆).

Observe that |f̂(ξ)| ≤ ‖f‖_{L¹} = 1, and |c_k^{(n)}| = o((|k| + 1)^{−L}) for any L > 0. So the series converges absolutely and uniformly in ∆. Thus

E_n(∆) = Σ_{k,ℓ∈Z} c_k^{(n)} c_ℓ^{(n)} f̂(−(k + αℓ)/∆).   (5.2)

For any n > 0 we have

lim_{∆→0+} E_n(∆) = Σ_{k,ℓ∈Z} c_k^{(n)} c_ℓ^{(n)} lim_{∆→0+} f̂(−(k + αℓ)/∆)

because the series converges absolutely and uniformly. Suppose α ∉ Q. Then k + αℓ ≠ 0 if either k ≠ 0 or ℓ ≠ 0. Thus −(k + αℓ)/∆ → ∞, and hence lim_{∆→0+} f̂(−(k + αℓ)/∆) = 0 as f ∈ L¹(R), by the Riemann–Lebesgue lemma. It follows that

lim_{∆→0+} E_n(∆) = 0.

But E_n(∆) → E(∆) as n → ∞ uniformly in ∆, which yields E(∆) → 0 as ∆ → 0+.

Next, suppose α = p/q where p, q ∈ Z, (p, q) = 1. We observe that k + αℓ = 0 if and only if k = qm and ℓ = −pm for some m ∈ Z. In such a case

f̂(−(k + αℓ)/∆) = f̂(0) = ∫_R f = 1.

It follows that

lim_{∆→0+} E_n(∆) = Σ_{m∈Z} c_qm^{(n)} c_{−pm}^{(n)} f̂(0) = Σ_{m∈Z} c_qm^{(n)} c_{−pm}^{(n)} = Σ_{m∈Z} c_qm^{(n)} c̄_pm^{(n)},

since g_n is real-valued, so that c_{−k}^{(n)} = c̄_k^{(n)}.


For r ∈ Z, r ≠ 0, set

G_r^{(n)}(x) := Σ_{m∈Z} c_rm^{(n)} e^{2πimx}   and   G_r(x) := Σ_{m∈Z} c_rm e^{2πimx}.

By Parseval we have

lim_{∆→0+} E_n(∆) = ⟨G_q^{(n)}, G_p^{(n)}⟩_{L²([0,1])}.

It is easy to check that

G_r^{(n)}(x) = (1/|r|) Σ_{j=0}^{|r|−1} g_n((x + j)/r).

Hence G_r^{(n)} converges in L²([0, 1]) to G_r(x) = (1/|r|) Σ_{j=0}^{|r|−1} g((x + j)/r), which has Fourier series G_r(x) = Σ_{m∈Z} c_rm e^{2πimx}, where c_k denotes the k-th Fourier coefficient of g, namely c_0 = 0 and c_k = (−1)^{k−1}/(2πik) for k ≠ 0. This yields

lim_{n→∞} lim_{∆→0+} E_n(∆) = lim_{n→∞} ⟨G_q^{(n)}, G_p^{(n)}⟩ = ⟨G_q, G_p⟩ = Σ_{m∈Z} c_qm c̄_pm.

Finally,

Σ_{m∈Z} c_qm c̄_pm = Σ_{m∈Z\{0}} ((−1)^{qm−1}/(2πimq)) · ((−1)^{pm−1}/(−2πimp)) = (1/(2pqπ²)) Σ_{m=1}^∞ (−1)^{(p+q)m}/m².

Note that if p + q is even then Σ_{m=1}^∞ (−1)^{(p+q)m}/m² = Σ_{m=1}^∞ 1/m² = π²/6. On the other hand, if p + q is odd then Σ_{m=1}^∞ (−1)^{(p+q)m}/m² = Σ_{m=1}^∞ (−1)^m/m² = −π²/12. The theorem follows. □
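Theorem 5.1 is easy to check by simulation. The sketch below (our own construction; the Gaussian source is an illustrative assumption) estimates the normalized correlation for α = 1/2, so that p + q = 3 is odd and the limit should be −1/(24pq) = −1/48:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal(500000)
p, q = 1, 2
alpha = p / q
delta = 0.01

def tau(t, d):
    # tau_Delta(t) = ({t/Delta + 1/2} - 1/2) * Delta
    return ((t / d + 0.5) % 1.0 - 0.5) * d

corr = np.mean(tau(X, delta) * tau(alpha * X, delta)) / delta**2
target = -1.0 / (24 * p * q)        # the p + q odd case of (5.1)
```

For a Gaussian density the Fourier transform decays so fast that already at ∆ = 0.01 the estimate sits essentially on the limiting value.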

Corollary 5.2. Let X be an absolutely continuous random vector in R^d, let w ∈ R^d with w ≠ 0, and let α ∈ R \ {0}. Then

lim_{∆→0+} (1/∆²) E(τ_∆(w · X) τ_∆(αw · X)) =
    0,           if α ∉ Q;
    1/(12pq),    if α = p/q and p + q is even;   (5.3)
    −1/(24pq),   if α = p/q and p + q is odd,

where p, q are coprime integers.

Proof. We only need to note that w · X is an absolutely continuous random variable. The corollary follows immediately from Theorem 5.1. □


We can now characterize completely the asymptotic behavior of the MSE in all cases. For any two vectors w_1, w_2 ∈ R^d define r(w_1, w_2) by

r(w_1, w_2) =
    (1/pq) w_1 · w_2,      if w_1 = (p/q) w_2 and p + q is even;
    −(1/2pq) w_1 · w_2,    if w_1 = (p/q) w_2 and p + q is odd;
    0,                     otherwise,

where p, q are coprime integers.

Corollary 5.3. Let X ∈ R^d be an absolutely continuous random vector. Then as ∆ → 0+ the MSE satisfies

E(‖X − X̂‖²) = (∆²/12) Σ_{j=1}^d λ_j^{-1} + (∆²/6) Σ_{1≤i<j≤N} r(u_i, u_j) + o(∆²).   (5.4)

For ∆ > 0 we shall denote

MSE_ideal = (∆²/12) Σ_{j=1}^d λ_j^{-1} + (∆²/6) Σ_{1≤i<j≤N} r(u_i, u_j).   (5.5)
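The correction term can be seen in action on the smallest interesting example, a frame with a repeated vector (our own construction): here the WNH value (2.3) predicts ∆²/8, while MSE_ideal of (5.5) predicts ∆²/6, and simulation agrees with the latter.

```python
import numpy as np

rng = np.random.default_rng(5)
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])       # v_3 = v_1: ratio p/q = 1/1, p+q even
G = np.linalg.inv(F @ F.T) @ F        # dual frame: u_1 = u_3 = (1/2, 0)
delta = 0.05

mse_wnh = delta**2 / 12 * np.sum(1 / np.linalg.eigvalsh(F @ F.T))  # Delta^2/8
r13 = G[:, 0] @ G[:, 2]               # r(u_1, u_3) = (1/pq) u_1.u_3 = 1/4
mse_ideal = mse_wnh + delta**2 / 6 * r13                           # Delta^2/6

X = rng.standard_normal((100000, 2))
Q = np.floor(X @ F / delta + 0.5) * delta
mse_emp = np.mean(np.sum((X - Q @ G.T) ** 2, axis=1))
```

For this frame the reconstruction reduces to x̂ = (Q_∆(x_1), Q_∆(x_2)), so the exact MSE is 2 · ∆²/12 = ∆²/6, which is exactly what MSE_ideal gives while the WNH formula underestimates it.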