Digital Signal Processing 17 (2007) 836–847 www.elsevier.com/locate/dsp
Data block adaptive filtering algorithms for α-stable random processes

Zhijin Zhao a,b, Kehai Dong b,∗, Chunyun Xu b

a National Key Lab of Integrated Service Network, Xidian University, Xi’an 710071, People’s Republic of China
b School of Telecommunication, Hangzhou Dianzi University, Hangzhou 310018, People’s Republic of China
Available online 30 March 2007
Abstract

The least mean p-norm (LMP) algorithm is an effective algorithm for processing signals with α-stable distributions. This paper proposes data block adaptive filtering algorithms for α-stable random processes based on fractional lower order statistics (FLOS). The data block algorithms change the direction of the coefficient increment vector by introducing a matrix which includes the information of more past input signal vectors than are used in the LMP algorithm during the iteration process, thus taking full advantage of the past values of the gradient vector during the adaptation. Simulation studies indicate that, compared to the existing FLOS-based algorithms summarized in this paper, the proposed algorithms increase the convergence rate in non-Gaussian stable distribution noise environments.
© 2007 Elsevier Inc. All rights reserved.

Keywords: α-Stable distribution; Fractional lower order statistics; Data block; Least mean p-norm algorithm; Convergence rate
1. Introduction

In many signal processing applications, the noise is modeled as a Gaussian process because this assumption significantly simplifies the required processing. The assumption is widely accepted because the central limit theorem (CLT) is usually invoked. However, many types of noise exhibit non-Gaussian behavior, such as low frequency atmospheric noise, underwater acoustic noise, and man-made noise [1–5]. Typical realizations of such random signals produce more outliers than expected under the Gaussian assumption, degrading the performance of filtering. For this reason, the Gaussian noise model for these signals cannot be justified. Several studies [6–13] have shown that an important class of distributions, known as α-stable distributions, can be applied to model this type of non-Gaussian noise. These distributions, which have heavier tails than the Gaussian distribution and exhibit sharp spikes or occasional bursts in their realizations, are the limiting distributions of a more general CLT [6]. The α-stable distributions do not have closed-form expressions for their probability density functions except in the cases α = 1 and α = 2, which correspond to the Cauchy and Gaussian distributions, respectively. However, an α-stable distribution can be conveniently described by the following characteristic function [6,8]
φ(t) = exp{ iat − γ|t|^α [1 + iβ sign(t) ω(t, α)] },   (1)

where

ω(t, α) = { tan(απ/2),     if α ≠ 1,
            (2/π) log|t|,  if α = 1,   (2)
and 0 < α ≤ 2, −1 ≤ β ≤ 1, γ > 0, −∞ < a < ∞, and sign(·) denotes the signum function. Thus the α-stable distribution is completely determined by these four parameters:
(1) β is the index of skewness, a symmetry parameter. When β = 0, the distribution is symmetric about the location parameter a and is called symmetric α-stable (SαS).
(2) γ is the dispersion parameter; it plays a role similar to the variance of a Gaussian process.
(3) a is the location parameter.
(4) α is the characteristic exponent; it controls the tails of the distribution. The Gaussian process is a special case of the stable processes with α = 2. For 0 < α < 2, the distributions have algebraic tails which are significantly heavier than the exponential tail of the Gaussian distribution, and the smaller the value of α, the heavier the tails. This property indicates that the α-stable distributions are an appealing model for impulsive noise environments.

Because of the heavy tails, the finite second- and higher-order moments of stable distributions do not exist, except in the limiting case α = 2. Precisely, let X be an α-stable random variable. If 0 < α < 2, then

E[|X|^p] = ∞,   if p ≥ α,   (3)

and

E[|X|^p] < ∞,   if 0 ≤ p < α.   (4)

If α = 2, then

E[|X|^p] < ∞   for all p ≥ 0.   (5)
Although the second-order moment of an SαS random variable with 0 < α < 2 does not exist, all moments of order less than α do exist and are called the fractional lower order moments (FLOMs). The FLOMs of an SαS random variable with zero location parameter and dispersion γ are given by

E[|X|^p] = C(p, α) γ^{p/α},   for 0 < p < α,   (6)

where

C(p, α) = 2^{p+1} Γ((p + 1)/2) Γ(−p/α) / ( α √π Γ(−p/2) )   (7)

depends only on α and p, not on X, and Γ(·) denotes the gamma function.
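As an illustration, the FLOM relation (6)–(7) can be checked numerically. The sketch below is our own code, not part of the original paper: it draws SαS samples with the Chambers–Mallows–Stuck generator (assuming β = 0 and α ≠ 1) and compares the empirical p-th absolute moment with C(p, α)γ^{p/α}; all function names are assumptions.

```python
import math
import numpy as np

def sas_samples(alpha, gamma, size, rng):
    """Standard SaS samples (beta = 0, zero location) with dispersion gamma,
    generated by the Chambers-Mallows-Stuck method (valid for alpha != 1)."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)        # V ~ U(-pi/2, pi/2)
    w = rng.exponential(1.0, size)                      # W ~ Exp(1)
    x = (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
         * (np.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))
    return gamma ** (1 / alpha) * x                     # scale to dispersion gamma

def flom_constant(p, alpha):
    """C(p, alpha) of Eq. (7)."""
    return (2 ** (p + 1) * math.gamma((p + 1) / 2) * math.gamma(-p / alpha)
            / (alpha * math.sqrt(math.pi) * math.gamma(-p / 2)))

rng = np.random.default_rng(0)
alpha, gamma, p = 1.5, 2.0, 0.6                         # any 0 < p < alpha
x = sas_samples(alpha, gamma, 2_000_000, rng)
print(np.mean(np.abs(x) ** p))                          # empirical E|X|^p
print(flom_constant(p, alpha) * gamma ** (p / alpha))   # Eq. (6), approximately equal
```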
For the linear theory of second-order processes, the most commonly used criterion for the best estimation is the minimum mean square error (MMSE) criterion in a Hilbert space, where the existence of the L2 norm is required. For α-stable processes, however, the MMSE criterion is no longer appropriate due to the lack of finite variance. Instead, the minimum dispersion (MD) criterion can be applied to α-stable processes [14]. It is equivalent to minimizing the FLOMs of the estimation errors in a Banach space, where only the existence of the Lp norm for p < α is required for the geometrical treatment of the α-stable process [6,9,15].

In this paper, we propose data block algorithms based on FLOS for adaptive filtering under additive α-stable noise corresponding to the case 1 < α < 2. The purpose is to increase the convergence rate under the precondition of comparable stability performance. In Section 2, some related adaptive filtering algorithms for the α-stable distribution are summarized. In Section 3, we first propose the data block least mean p-norm (DBLMP) algorithm and the signed DBLMP algorithm, called the data block least mean absolute deviation (DBLMAD) algorithm, both based on FLOS. We then introduce two normalized adaptation algorithms, the data block normalized least mean p-norm (DBNLMP) algorithm and the data block normalized least mean absolute deviation (DBNLMAD) algorithm, followed by a generalization of the data block normalized least mean p-norm algorithm (GDBNLMP) together with a convergence proof. In Section 4, the performance of the proposed algorithms is compared with that of the related algorithms summarized in Section 2. Finally, we draw conclusions in Section 5.
2. Some related adaptive filtering algorithms

For the α-stable distribution, the gradient descent algorithm can be applied to solve the adaptation problem by minimizing the following p-norm cost function (hence the name least mean p-norm (LMP) algorithm) [6]

J = E[ |e(n)|^p ],   (8)

and the update equation is

W(n + 1) = W(n) + μ p |e(n)|^{p−1} sign(e(n)) x_L(n)   (9)

for 1 ≤ p < α, where W(n) = [w_0(n), w_1(n), . . . , w_{L−1}(n)]^T are the tap weights of the adaptive filter at time n, x_L(n) = [x(n), x(n − 1), . . . , x(n − L + 1)]^T is the input signal vector of the adaptive filter, containing the L samples of the input data in the filter memory at time n, e(n) = d(n) − W^T(n) x_L(n) is the error between the adaptive filter output and the desired signal d(n), and μ is the step size, which should be appropriately chosen. When p is chosen as 1, the LMP algorithm is called the least mean absolute deviation (LMAD) algorithm [6], and it has the update equation

W(n + 1) = W(n) + μ sign(e(n)) x_L(n).   (10)

In order to improve the convergence rate and stability performance, normalized versions of the LMP and LMAD algorithms, known as the NLMP and NLMAD algorithms, have the following update equations, respectively [9]:

W(n + 1) = W(n) + μ p [ |e(n)|^{p−1} sign(e(n)) / ( ||x_L(n)||_p^p + λ ) ] x_L(n),   (11)

W(n + 1) = W(n) + μ [ sign(e(n)) / ( ||x_L(n)||_1 + λ ) ] x_L(n).   (12)
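For concreteness, a minimal sketch of one NLMP iteration, Eq. (11), is given below; this is our own code rather than code from the references, and the function name and argument list are assumptions.

```python
import numpy as np

def nlmp_update(w, x_vec, d, mu, p, lam):
    """One NLMP step, Eq. (11); the NLMAD step of Eq. (12) is the case p = 1."""
    e = d - w @ x_vec                                   # a priori error e(n)
    norm_p = np.sum(np.abs(x_vec) ** p) + lam           # ||x_L(n)||_p^p + lambda
    w_new = w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x_vec / norm_p
    return w_new, e
```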
A more general form of the NLMP algorithm (denoted here as GNLMP), which has better performance than the NLMP algorithm, has been proposed with the following update equation [10]:

W(n + 1) = W(n) + μ [ e^{a}(n) / ( ||x_L(n)||_{qa}^{qa} + λ ) ] x_L^{(q−1)a}(n),   (13)

where e^{a}(n) = |e(n)|^{a} sign(e(n)) and x_L^{(q−1)a}(n) is obtained by applying |·|^{(q−1)a} sign(·) to each element of x_L(n).
Although the normalization and the generalization improve the convergence rate and stability performance to some extent, the convergence rate is still restricted, because these algorithms are all based on the current instantaneous values of a single input signal vector and error signal during the adaptation. Hence, they do not take full advantage of the useful information contained in more past input signal vectors and error signals, and it still takes more time to update the filter coefficients, limiting the rate of convergence. A previously proposed FLOS-based algorithm, the “momentum”-type generalized NLMP (denoted here as Mom-GNLMP) algorithm, increases the convergence rate by using more than one term to estimate the gradient, with the following update equation [10]:

W(n + 1) = W(n) + μ Σ_{j=n−k}^{n} [ e^{a}(j) / ( ||x_L(j)||_{qa}^{qa} + λ ) ] x_L^{(q−1)a}(j).   (14)
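A sketch of the Mom-GNLMP accumulation of Eq. (14) is given below; this is our own code, the helper spow and the history arguments are assumptions, and x_hist and e_hist are assumed to hold x_L(j) and e(j) for j = n − k, . . . , n.

```python
import numpy as np

def spow(z, c):
    """Element-wise signed power |z|^c sign(z)."""
    return np.abs(z) ** c * np.sign(z)

def mom_gnlmp_update(w, x_hist, e_hist, mu, a, q, lam):
    """Mom-GNLMP step, Eq. (14): sum of k + 1 normalized gradient terms."""
    grad = np.zeros_like(w)
    for x_j, e_j in zip(x_hist, e_hist):
        denom = np.sum(np.abs(x_j) ** (q * a)) + lam    # ||x_L(j)||_{qa}^{qa} + lambda
        grad += spow(e_j, a) * spow(x_j, (q - 1) * a) / denom
    return w + mu * grad
```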
In Section 3, we will propose FLOS-based data block adaptive filtering algorithms which use more than one input signal vector and the corresponding error information during each iteration, achieving a higher convergence rate than the algorithms summarized above.

3. Proposed algorithms

Consider designing an FIR filter whose input is a stationary SαS process {x(0), x(1), x(2), . . .}. We rewrite the input signal vector x_L(n) at time n as

x_L(n) = [x(n), x(n − 1), . . . , x(n − L + 1)]^T.   (15)
Then, a matrix composed of the input signal vector at the current time n and its past m − 1 input vectors can be constructed as

X_{L,m}(n) = [x_L(n), x_L(n − 1), . . . , x_L(n − m + 1)].   (16)

Let D(n) and E(n) be the desired signal vector and its corresponding error vector; both vectors include the information at the current time n and the past information of m − 1 vectors. They are given, respectively, by

D(n) = [d(n), d(n − 1), . . . , d(n − m + 1)]^T,   (17)
E(n) = [e(n), e(n − 1), . . . , e(n − m + 1)]^T.   (18)

Hence, the error vector E(n) can be written as

E(n) = D(n) − X^T_{L,m}(n) W(n).   (19)
The problem is to find the tap weights [w_0(n), w_1(n), . . . , w_{L−1}(n)]^T such that the output of the filter is as close as possible to the given desired response D(n). Here we assume that D(n) and X_{L,m}(n) are jointly SαS. Specifically, we would like to find [w_0(n), w_1(n), . . . , w_{L−1}(n)]^T such that the dispersion of the error E(n) is minimized. Based on the theory of FLOS, the p-norm cost function can be written as

J(n) = E[ ||E(n)||_p^p ] = E[ ||D(n) − X^T_{L,m}(n) W(n)||_p^p ].   (20)

This adaptation problem can be solved asymptotically by using the stochastic gradient method, with the motivation of the LMP algorithm [6]. The gradient estimate is

∇̂_W J(n) = ∂||E(n)||_p^p / ∂W(n) = p [ ∂E(n)/∂W(n) ] [ |E(n)|^{p−1} ⊙ sign(E(n)) ]
          = −p X_{L,m}(n) [ |E(n)|^{p−1} ⊙ sign(E(n)) ] = −p X_{L,m}(n) E^{p−1}(n),   (21)

where

|E(n)|^{p−1} = [ |e(n)|^{p−1}, |e(n − 1)|^{p−1}, . . . , |e(n − m + 1)|^{p−1} ]^T,
sign(E(n)) = [ sign(e(n)), sign(e(n − 1)), . . . , sign(e(n − m + 1)) ]^T,
E^{p−1}(n) = |E(n)|^{p−1} ⊙ sign(E(n)) = [ e^{p−1}(n), e^{p−1}(n − 1), . . . , e^{p−1}(n − m + 1) ]^T,

(·)^{p−1} = |·|^{p−1} sign(·), and ⊙ denotes the Hadamard product. Hence, according to the instantaneous gradient descent algorithm, we propose the data block least mean p-norm (DBLMP) algorithm, given by

W(n + 1) = W(n) + μ p X_{L,m}(n) E^{p−1}(n).   (22)
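A minimal sketch of one DBLMP iteration, Eq. (22), is given below; this is our own code, not from the paper. It builds X_{L,m}(n), D(n), and E(n) directly from the scalar input and desired sequences; the function name and arguments are assumptions, and n is assumed large enough (n ≥ L + m − 2) that all required past samples exist.

```python
import numpy as np

def dblmp_update(w, x_sig, d_sig, n, L, m, mu, p):
    """One DBLMP step at time n; x_sig and d_sig are 1-D numpy arrays."""
    # Column k (k = 0, ..., m-1) is x_L(n-k) = [x(n-k), ..., x(n-k-L+1)]^T, Eq. (16)
    X_block = np.array([x_sig[n - k - np.arange(L)] for k in range(m)]).T
    d_vec = d_sig[n - np.arange(m)]                     # D(n), Eq. (17)
    e_vec = d_vec - X_block.T @ w                       # E(n), Eq. (19)
    e_pow = np.abs(e_vec) ** (p - 1) * np.sign(e_vec)   # E^{p-1}(n)
    return w + mu * p * X_block @ e_pow, e_vec
```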
Also, in order to improve the convergence rate and the stability performance, we introduce the following normalized adaptation algorithms, with the motivation of the normalized LMP algorithm. The first algorithm, the data block normalized least mean p-norm (DBNLMP) algorithm, uses the update equation

W(n + 1) = W(n) + μ p X_{L,m}(n) E^{p−1}(n) / ( λ + (1/m) Σ_{k=1}^{m} ||x_L^k(n)||_p^p ),   (23)

where μ and λ are appropriately chosen update parameters, x_L^k(n) = x_L(n − k + 1) denotes the kth column of X_{L,m}(n), and λ is used to avoid excessively large updates in the case of occasionally small inputs. In (23), normalization is obtained by dividing the update term by the average of ||x_L^k(n)||_p^p over all m input signal vectors. Hence, the normalization term is

(1/m) Σ_{k=1}^{m} ||x_L^k(n)||_p^p = (1/m) Σ_{k=1}^{m} Σ_{i=0}^{L−1} |x(n − k + 1 − i)|^p.   (24)
For m = 1, the DBNLMP algorithm in (23) reduces to the NLMP algorithm in (11).
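Continuing the sketch above, the DBNLMP update of Eqs. (23)–(24) only adds the normalization term; the code below is our own hypothetical helper, which reuses a pre-built X_{L,m}(n) block and desired vector.

```python
import numpy as np

def dbnlmp_update(w, X_block, d_vec, mu, p, lam):
    """One DBNLMP step, Eq. (23); X_block is L x m, d_vec has length m."""
    e_vec = d_vec - X_block.T @ w                       # E(n)
    e_pow = np.abs(e_vec) ** (p - 1) * np.sign(e_vec)   # E^{p-1}(n)
    # Eq. (24): average of ||x_L^k(n)||_p^p over the m columns of X_block
    norm_term = lam + np.mean(np.sum(np.abs(X_block) ** p, axis=0))
    return w + mu * p * X_block @ e_pow / norm_term
```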
The second algorithm, the data block normalized least mean absolute deviation (DBNLMAD) algorithm, corresponds to the case p = 1 in (23). It has the update equation

W(n + 1) = W(n) + μ X_{L,m}(n) sign(E(n)) / ( λ + (1/m) Σ_{k=1}^{m} ||x_L^k(n)||_1 ).   (25)

Similarly, by selecting m = 1, the update equation of the DBNLMAD algorithm in (25) reduces to the NLMAD algorithm in (12).

As a generalization of the DBNLMP algorithm, we propose the GDBNLMP algorithm with the motivation of the GNLMP algorithm [10]. It has the update equation

W(n + 1) = W(n) + μ p X^{ξ}_{L,m}(n) E^{η}(n) / ( λ + (1/m) Σ_{k=1}^{m} ||x_L^k(n)||_{ξ+η}^{ξ+η} ),   (26)

where

X^{ξ}_{L,m}(n) = [ x^{ξ}_L(n), x^{ξ}_L(n − 1), . . . , x^{ξ}_L(n − m + 1) ]

is the L × m matrix whose (i, j) entry is x^{ξ}(n − j − i) = |x(n − j − i)|^{ξ} sign(x(n − j − i)), i = 0, . . . , L − 1, j = 0, . . . , m − 1,

E^{η}(n) = [ e^{η}(n), e^{η}(n − 1), . . . , e^{η}(n − m + 1) ]^T,

and 0 < η ≤ α − 1, ξ > 0.
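Before turning to the convergence analysis, a sketch of the GDBNLMP update of Eq. (26) is shown below; this is our own code, with spow an assumed helper for the element-wise signed power.

```python
import numpy as np

def spow(z, c):
    """Element-wise signed power |z|^c sign(z)."""
    return np.abs(z) ** c * np.sign(z)

def gdbnlmp_update(w, X_block, d_vec, mu, p, xi, eta, lam):
    """One GDBNLMP step, Eq. (26); reduces to DBNLMP for eta = p - 1, xi = 1."""
    e_vec = d_vec - X_block.T @ w                       # E(n)
    # average of ||x_L^k(n)||_{xi+eta}^{xi+eta} over the m columns of X_block
    norm_term = lam + np.mean(np.sum(np.abs(X_block) ** (xi + eta), axis=0))
    return w + mu * p * spow(X_block, xi) @ spow(e_vec, eta) / norm_term
```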
If η and ξ are chosen as p − 1 and 1, respectively, the GDBNLMP algorithm reduces to the DBNLMP algorithm.

For the weight update Eq. (26), the squared norm of the weight increment satisfies

E||W(n + 1) − W(n)||_2^2 = E|| μ p X^{ξ}_{L,m}(n) E^{η}(n) / ( λ + (1/m) Σ_{k=1}^{m} ||x_L^k(n)||_{ξ+η}^{ξ+η} ) ||_2^2 ≤ E|| b X^{ξ}_{L,m}(n) E^{η}(n) ||_2^2,   (27)

where

b = max { μ p / ( λ + (1/m) Σ_{k=1}^{m} ||x_L^k(n)||_{ξ+η}^{ξ+η} ) }.

Using the matrix norm inequality [16] ||Ax||_2 ≤ ||A||_2 · ||x||_2, we have

E|| b X^{ξ}_{L,m}(n) E^{η}(n) ||_2^2 ≤ E[ b^2 ||X^{ξ}_{L,m}(n)||_2^2 ||E^{η}(n)||_2^2 ]
= b^2 E[ ||X^{ξ}_{L,m}(n)||_2^2 || |E(n)|^{η} ⊙ sign(E(n)) ||_2^2 ]
= b^2 E[ ||X^{ξ}_{L,m}(n)||_2^2 Σ_{i=0}^{m−1} |e(n − i)|^{2η} ].   (28)

Thus

E||W(n + 1) − W(n)||_2^2 ≤ b^2 E[ ||X^{ξ}_{L,m}(n)||_2^2 Σ_{i=0}^{m−1} |e(n − i)|^{2η} ].   (29)

Since 0 < η ≤ α − 1 and 1 < α < 2, we obtain 0 < 2η ≤ 2α − 2 < 2. Thus E[|e(n − i)|^{2η}] < ∞, and

E[ Σ_{i=0}^{m−1} |e(n − i)|^{2η} ] = Σ_{i=0}^{m−1} E[ |e(n − i)|^{2η} ] < ∞.

Under the usual assumption that the input signal x_L(n) has limited power, it can easily be shown that

E[ ||X^{ξ}_{L,m}(n)||_2^2 Σ_{i=0}^{m−1} |e(n − i)|^{2η} ] < ∞.   (30)

Thus

E||W(n + 1) − W(n)||_2^2 < ∞.   (31)
Hence, we conclude that the proposed GDBNLMP algorithm can be expected to converge in SαS distribution environments.

4. Simulation studies

In the simulation studies, we compare the performances of the adaptation algorithms in a prediction problem where the input sequence is an AR(N) α-stable process defined as

x(n) = Σ_{i=1}^{N} a_i x(n − i) + u(n),   (32)
where a_i, i = 1, 2, . . . , N, are the deterministic coefficients and u(n) is an α-stable sequence of i.i.d. random variables. Without loss of generality, u(n) is chosen to follow a symmetric α-stable (SαS) distribution. It has been shown that the input random variable x(n) is then also SαS with the same characteristic exponent if {a_i} is an absolutely summable sequence [6,17]. We also assume that the exact AR model order N of the input sequence is known.

Here, six sets of simulation studies are performed to estimate the parameters of an AR(2) model. For all simulation studies, the input sequence is selected as an AR(2) sequence with coefficients a_1 = 1.6 and a_2 = −0.8. To obtain a fair comparison, in the first four sets of experiments the step sizes of the adaptive algorithms are selected such that they all have comparable stability performance (i.e., steady-state error), so that their performances can be compared through the convergence rate. In the fifth and sixth sets of experiments, we use the system mismatch ||W(n) − W_opt(n)||_2^2 to investigate the performance. In order to obtain more reliable results, all of the algorithms are run over 100 independent trials, and a different realization of the process {u(n)} is used for each trial.
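A sketch of this simulation setup is given below; it is our own code, not from the paper. It generates the AR(2) sequence of Eq. (32) driven by i.i.d. SαS innovations with a_1 = 1.6 and a_2 = −0.8 and computes the system mismatch; u may be drawn with the SαS generator sketched in Section 1, and the assumption W_opt = [a_1, a_2]^T reflects the one-step prediction setting.

```python
import numpy as np

def ar2_process(u, a1=1.6, a2=-0.8):
    """Eq. (32) with N = 2; u is a 1-D array of i.i.d. SaS innovations u(n)."""
    x = np.zeros_like(u)
    for n in range(2, len(u)):
        x[n] = a1 * x[n - 1] + a2 * x[n - 2] + u[n]
    return x

def system_mismatch(w, w_opt=np.array([1.6, -0.8])):
    """||W(n) - W_opt||_2^2 for the prediction problem, where d(n) = x(n)
    and the filter input vector is assumed to be x_L(n) = [x(n-1), x(n-2)]^T."""
    return float(np.sum((w - w_opt) ** 2))
```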
Fig. 1. Transient behavior of tap weights for the proposed DBNLMP algorithm (dotted line) of (23), and the NLMP algorithm (solid line) of (11) for α = 1.2, 1.3, and 1.6.
Fig. 2. Transient behavior of tap weights for the proposed DBNLMAD algorithm (dotted line) of (25), and the NLMAD algorithm (solid line) of (12) for α = 1.1, 1.2, and 1.5.
4.1. Comparing the DBNLMP to the NLMP algorithm

In the first set of experiments, we present a comparison study between the DBNLMP and NLMP algorithms. In order to investigate the dependence on the characteristic exponent of the α-stable process, we provide results obtained with three different exponents: 1.2, 1.3, and 1.6. The plots of the tap weights for the DBNLMP and NLMP algorithms are given in Fig. 1. In Fig. 1, for α = 1.2, the value of the parameter p in both algorithms is taken as 7/6; for α = 1.3 and 1.6, p is taken as 5/4, and we take m = 5 in (23) in this simulation. As can be seen from the figure, the proposed DBNLMP algorithm converges to the optimum value in around 2900 time steps, while the NLMP algorithm converges in around 3900 time steps.

4.2. Comparing the DBNLMAD to the NLMAD algorithm

In the second set of experiments, we compare the performance of the DBNLMAD and NLMAD algorithms, as the DBNLMAD algorithm is a special case of the DBNLMP algorithm with p = 1. We provide results obtained with three different exponents: 1.1, 1.2, and 1.5, and we take m = 5 in the DBNLMAD algorithm in this simulation. The plots of the tap weights for the DBNLMAD and NLMAD algorithms are given in Fig. 2. It is clear that the proposed DBNLMAD algorithm outperforms the NLMAD algorithm, converging to the optimum value in around 2700 time steps, while the NLMAD algorithm converges in around 3700 time steps.

4.3. Comparing the GDBNLMP to the GNLMP algorithm

In the third set of experiments, we present a comparison study between the GDBNLMP and GNLMP algorithms. The plots of the tap weights for the GDBNLMP and GNLMP algorithms are given in Fig. 3. In Fig. 3, the value of the parameter p in the proposed algorithm is taken to be the same as in Fig. 1 for each corresponding value of α. The parameters η and ξ of the GDBNLMP algorithm are set to the same values as a and (q − 1)a of the GNLMP algorithm, respectively. The value of m is chosen as 5. As can be seen from the figure, the proposed GDBNLMP algorithm converges to the optimum value in around 1500 time steps, while the GNLMP algorithm converges in around 2200 time steps.

In the above experiments, we observed that the DBNLMP and DBNLMAD algorithms outperformed the NLMP and NLMAD algorithms, respectively, achieving a much faster convergence rate, and that the GDBNLMP algorithm performed better than the GNLMP algorithm.
Fig. 3. Transient behavior of tap weights for the proposed GDBNLMP algorithm (dotted line) of (26), and the GNLMP algorithm (solid line) of (13) for α = 1.2, 1.3, and 1.6.
Fig. 4. Transient behavior of tap weights for the proposed GDBNLMP algorithm (dotted line) of (26), and the DBNLMP algorithm (solid line) of (23) for α = 1.2, 1.3, and 1.6.
4.4. Comparing the GDBNLMP to the DBNLMP algorithm

As the DBNLMAD algorithm is a special case of the DBNLMP algorithm, in the fourth set of experiments we only compare the performance of the GDBNLMP and DBNLMP algorithms. The plots of the tap weights for the GDBNLMP and DBNLMP algorithms are given in Fig. 4.
Fig. 5. The system mismatch for various m and k values of the GDBNLMP and Mom-GNLMP algorithms for α = 1.1.
Fig. 6. The system mismatch for various m and k values of the GDBNLMP and Mom-GNLMP algorithms for α = 1.3.
In Fig. 4, the value of the parameter p in both algorithms is taken to be the same as in the first set of experiments for the corresponding value of α. For both algorithms, m is chosen as 5. As can be seen from the figure, the GDBNLMP algorithm performs better, converging to the optimum value in around 1600 time steps, while the DBNLMP algorithm converges in around 2200 time steps.
Fig. 7. The system mismatch of the GDBNLMP algorithm for various m values: (a) α = 1.1; (b) α = 1.6.
4.5. Comparing the GDBNLMP to the Mom-GNLMP algorithm

In the fifth set of experiments, we compare the performance of the GDBNLMP and Mom-GNLMP algorithms, where both algorithms use the same number of past input signal vectors in the simulation. We provide results obtained with four different m values, 10, 12, 14, and 16, while the parameter k in (14) is chosen as 9, 11, 13, and 15, correspondingly; therefore the two algorithms are studied with the same number of past input signal vectors. In Figs. 5 and 6, we plot the system mismatch ||W(n) − W_opt(n)||_2^2 for the GDBNLMP algorithm of (26) and the Mom-GNLMP algorithm of (14) with the AR(2) process defined above, for α = 1.1 and 1.3, respectively. The parameters η and ξ of the GDBNLMP algorithm are set to the same values as a and (q − 1)a of the Mom-GNLMP algorithm, respectively. The value of the parameter p in the proposed algorithm is taken as 12/11 for α = 1.1 in Fig. 5, while p is taken as 5/4 for α = 1.3 in Fig. 6. As can be seen from the figures, for each fixed m and its corresponding k shown in each subfigure, where both algorithms use the same number of input signal vectors, the proposed GDBNLMP algorithm performs better than the Mom-GNLMP algorithm. For m = 10 and k = 9 in the first subfigures of Figs. 5 and 6, the proposed GDBNLMP algorithm converges to the optimum value in around 400 and 300 time steps, while the Mom-GNLMP algorithm converges in around 700 and 500 time steps, for α = 1.1 and 1.3, respectively.
In order to further reveal the performance of the proposed data block algorithms, in the sixth set of simulations we investigate the relation between the convergence rate and various m values of the GDBNLMP algorithm. In Fig. 7, we plot the system mismatch ||W(n) − W_opt(n)||_2^2 for the GDBNLMP algorithm of (26) with the AR(2) process defined above, for α = 1.1 and 1.6, respectively. As can be seen clearly from the figure, the proposed generalized data block algorithm converges faster as m becomes larger. Concretely, in subfigure (a), for m = 2 the GDBNLMP algorithm converges in around 3700 time steps, while for m = 10 it converges in around 1100 time steps. Furthermore, from the plots we observe a great increase in the convergence rate when m is increased from 2 to 4. However, the rate of convergence is not much improved when m is increased from 8 to 10. This means that the use of too many past values of the input vectors does not improve the convergence further.

5. Conclusions

In this paper, data block adaptive filtering algorithms for non-Gaussian stable random processes are proposed with the motivation of FLOS. These algorithms change the direction of the coefficient increment vector by introducing a matrix which includes the past information of the input signal vectors, making use of most of the past values of the gradient vector during the iteration process. Thus the convergence speed can be largely increased. In our simulation studies, the DBNLMP, DBNLMAD, and GDBNLMP algorithms have better performance, exhibiting a faster convergence rate than the NLMP, NLMAD, and GNLMP algorithms in the tap weight adaptations, respectively. In addition, the GDBNLMP algorithm outperforms the DBNLMP and Mom-GNLMP algorithms, achieving the best convergence performance among all of the algorithms considered in this paper.

References

[1] B. Mandelbrot, J.W. Van Ness, Fractional Brownian motions, fractional noises, and applications, SIAM Rev. 10 (1968) 422–437.
[2] S.S. Pillai, M. Harisankar, Simulated performance of a DS spread spectrum system in impulsive atmospheric noise, IEEE Trans. Electromagn. Comp. EMC-29 (1987) 80–82.
[3] M. Bouvet, S.C. Schwartz, Comparison of adaptive and robust receivers for signal detection in ambient underwater noise, IEEE Trans. Acoust. Speech Signal Process. 37 (1989) 621–626.
[4] D. Middleton, Statistical physical models of electromagnetic interference, IEEE Trans. Electromagn. Comp. EMC-19 (1977) 106–127.
[5] D. Zha, T. Qiu, Underwater sources location in non-Gaussian impulsive noise environments, Digital Signal Process. 16 (2) (2006) 149–163.
[6] M. Shao, C.L. Nikias, Signal processing with fractional lower order moments: Stable processes and their applications, Proc. IEEE 81 (7) (1993) 986–1010.
[7] G.A. Tsihrintzis, C.L. Nikias, Detection and classification of signals in impulsive noise modeled as an alpha-stable process, in: Conference Record of the Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, vol. 1, 1993, pp. 707–710.
[8] C.L. Nikias, M. Shao, Signal Processing with Alpha-Stable Distributions and Applications, Wiley, New York, 1995.
[9] O. Arikan, M. Belge, A.E. Cetin, E. Erzin, Adaptive filtering approaches for non-Gaussian stable processes, in: International Conference on Acoustics, Speech, and Signal Processing, ICASSP-95, vol. 2, 1995, pp. 1400–1403.
[10] G. Aydin, O. Arikan, A.E. Cetin, Robust adaptive filtering algorithm for α-stable random processes, IEEE Trans. Circuits Syst. II Analog Digital Signal Process. 46 (2) (1999) 198–202.
[11] J.S. Bodenschatz, C.L. Nikias, Symmetric alpha-stable filter theory, IEEE Trans. Signal Process. 45 (1997) 2301–2306.
[12] D. Zha, T. Qiu, Direction finding in non-Gaussian impulsive noise environments, Digital Signal Process. 17 (2) (2007) 451–465.
[13] D. Zha, T. Qiu, Adaptive mixed-norm filtering algorithm based on SαSG noise model, Digital Signal Process. 17 (2) (2007) 475–484.
[14] D. Zha, T. Qiu, New blind estimation method of evoked potentials based on minimum dispersion criterion, GESTS Int. Trans. Comp. Sci. Eng. 9 (1) (2005).
[15] M. Rupi, P. Tsakalides, E. Del Re, C.L. Nikias, Constant modulus blind equalization based on fractional lower-order statistics, Signal Process. 84 (5) (2004) 881–894.
[16] B. Fang, J. Zhou, Y. Li, Matrix Theory, Tsinghua Univ. Press, Beijing, 2004.
[17] Y. Hosoya, Discrete-time stable processes and their certain properties, Ann. Probab. 6 (1) (1978) 94–105.
Zhijin Zhao was born in Ningbo, China, in 1959. She received the M.S. degree in electronic engineering from Xidian University, Xi’an, China, in 1984. Since 1984, she has been teaching adaptive signal processing and discrete signal processing at Hangzhou Dianzi University, where she is currently a Professor. She is also with the National Key Lab of Integrated Service Network at Xidian University, Xi’an, China. Her research interests include adaptive digital signal processing, communication signal processing, and speech signal processing.
Kehai Dong was born in Zhejiang, China, in 1981. He received the B.S. degree in communication engineering from Hangzhou Dianzi University, China, in 2004. He is currently a graduate student at Hangzhou Dianzi University. His research interests include adaptive signal processing and wireless sensor networks.

Chunyun Xu was born in Zhejiang, China, in 1956. He received the B.S. degree in electronic engineering from Xidian University, Xi’an, China, in 1982. He served at the No. 14 Research Institute of the Department of Electronic Industry, China, from 1982 to 1986. He is currently an associate researcher at Hangzhou Dianzi University. His research interests include electronic measurement techniques and signal processing.