Journal of Applied Mathematics
Volume 2014, Article ID 704231, 7 pages
http://dx.doi.org/10.1155/2014/704231
Research Article

A New Subband Adaptive Filtering Algorithm for Sparse System Identification with Impulsive Noise

Department of Electronic Engineering, Gangneung-Wonju National University, Gangneung 210-702, Republic of Korea

Received 13 March 2014; Revised 21 April 2014; Accepted 5 May 2014; Published 20 May 2014

Academic Editor: Guiming Luo

Copyright © 2014 Young-Seok Choi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a novel subband adaptive filter (SAF) for system identification where the impulse response is sparse and disturbed by impulsive noise. Benefiting from the use of l1-norm optimization and an l0-norm penalty on the weight vector in the cost function, the proposed l0-norm sign SAF (l0-SSAF) achieves both robustness against impulsive noise and remarkably improved convergence behavior compared with the classical adaptive filters. Simulation results in the system identification scenario confirm that the proposed l0-SSAF is not only more robust but also faster and more accurate than its counterparts for sparse system identification in the presence of impulsive noise.

1. Introduction

Adaptive filtering algorithms have gained popularity and proven to be efficient in various applications such as system identification, channel equalization, and echo cancellation [1–4]. The normalized least mean square (NLMS) algorithm has become one of the most popular and widely used adaptive filtering algorithms because of its simplicity and robustness. Despite these advantages, the use of the NLMS algorithm has been limited because it converges poorly for correlated input signals [2]. To address this problem, various approaches have been presented, such as the recursive least squares algorithm [2], the affine projection algorithm [2], and subband adaptive filtering (SAF) [5–9]. Among these, the SAF approach allocates the input signal and desired response into almost mutually exclusive subbands. This prewhitening characteristic of SAF allows each subband to converge almost independently, so that subband algorithms attain faster convergence. On the basis of these characteristics, Lee and Gan proposed the normalized SAF (NSAF) algorithm in [8, 9], which improves the convergence speed while requiring almost the same computational complexity as the NLMS algorithm. However, the NSAF still suffers from degraded convergence performance when the underlying system to be identified is sparse, as is the case for network echo paths [10], underwater channels [11], and digital TV transmission channels [12]. Motivated by proportionate step-size adaptive filtering [13, 14], the proportionate NSAF (PNSAF) has been presented to combat poor convergence in sparse system identification [15]. However, it does not exploit the sparsity condition itself. Moreover, the NSAF and PNSAF algorithms are highly sensitive to impulsive interference, leading to deteriorated convergence behavior. Impulsive interference exists in various applications such as acoustic echo cancellation [16], network echo cancellation [17], and subspace tracking [18].

To address the robustness issue, the sign SAF (SSAF) [19] has been developed based on l1-norm optimization, making it robust against impulsive interference. However, its usefulness is limited in the case of sparse system identification. Moreover, the SSAF converges slowly and fails to accelerate its convergence rate as the number of subbands grows.

In recent years, motivated by the compressive sensing framework [20, 21] and the least absolute shrinkage and selection operator (LASSO) [22], a variety of adaptive filtering algorithms which incorporate the sparsity of a system have been developed, in contrast to the proportionate adaptive filtering approach [23–27]. Along this line, the SAF with an l1-norm penalty has recently been presented as an alternative way to incorporate the sparsity of a system [28]. In particular, the l0-norm of a system is able to represent its actual sparsity [24–26]. In this paper, an l0-norm constraint SSAF (l0-SSAF) is presented, aiming at a sparsity-aware SSAF. By integrating the l0-norm penalty of the current weight vector into the l1-norm optimization criterion, the l0-SSAF attains both superior convergence for sparse system identification and robustness against impulsive noise. In addition, the l0-SSAF is derived from an l1-norm optimization of the a priori error instead of the a posteriori error used in the SSAF; thus, there is no need to approximate the a posteriori error by the a priori error to derive the update recursion of the l0-SSAF. Simulation results show that the l0-SSAF is superior to the conventional SAFs in identifying a sparse system in the presence of severe impulsive noise.

The remainder of the paper is organized as follows. Section 2 introduces the classical SAFs, followed by the derivation of the proposed -SSAF algorithm in Section 3. Section 4 illustrates the computer simulation results and Section 5 concludes this study.

2. Conventional SAFs

Consider a desired signal d(n) that arises from the system identification model d(n) = w_o^T u(n) + v(n), where w_o is an M x 1 column vector for the impulse response of an unknown system that we wish to estimate, v(n) accounts for measurement noise with zero mean and variance σ_v², and u(n) = [u(n), u(n−1), ..., u(n−M+1)]^T is an M x 1 input vector.
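As a minimal sketch of this data model in Python (the filter length M, sparsity S, and noise level below are illustrative placeholders, not values taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    M, S = 256, 16      # illustrative filter length and number of nonzero taps
    sigma_v = 0.1       # illustrative measurement-noise standard deviation

    # Sparse unknown impulse response w_o: S randomly placed Gaussian taps.
    w_o = np.zeros(M)
    w_o[rng.choice(M, size=S, replace=False)] = rng.standard_normal(S)

    def desired_signal(u_n):
        """One sample of the model d(n) = w_o^T u(n) + v(n)."""
        return w_o @ u_n + sigma_v * rng.standard_normal()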

Figure 1 shows the structure of the NSAF, where the desired signal d(n) and the output signal y(n) are partitioned into N subbands by the analysis filters H_0(z), H_1(z), ..., H_{N−1}(z). The resultant subband signals, d_i(n) and y_i(n) for i = 0, 1, ..., N−1, are critically decimated to a lower sampling rate commensurate with their bandwidth. Here, the variable n indexes the original sequences and k indexes the decimated sequences for all signals. The decimated desired signal and the decimated filter output signal at each subband are then defined as d_{i,D}(k) = d_i(kN) and y_{i,D}(k) = u_i^T(k) w(k), where u_i(k) = [u_i(kN), u_i(kN−1), ..., u_i(kN−M+1)]^T is the input data vector for the i-th subband and w(k) denotes an estimate of w_o. The i-th decimated subband error signal is given by e_{i,D}(k) = d_{i,D}(k) − u_i^T(k) w(k).
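To make the indexing concrete, a small sketch of the analysis-and-decimation step (the analysis filter impulse responses are assumed to be given, e.g., from a cosine-modulated design; this is not the paper's filter bank code):

    import numpy as np

    def analyze_and_decimate(x, analysis_filters):
        """Split x into N critically decimated subband signals:
        x_i(n) = (h_i * x)(n), then x_{i,D}(k) = x_i(kN)."""
        N = len(analysis_filters)
        subbands = []
        for h_i in analysis_filters:
            x_i = np.convolve(x, h_i)[:len(x)]  # full-rate subband signal
            subbands.append(x_i[::N])           # keep every N-th sample
        return subbands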

Figure 1: Subband structure used in the proposed SAF.

In [8], the authors presented the update recursion of the NSAF algorithm:

w(k+1) = w(k) + μ Σ_{i=0}^{N−1} u_i(k) e_{i,D}(k) / ||u_i(k)||²,

where μ is a step-size parameter. The estimation errors in all the subbands, that is, e_D(k) = [e_{0,D}(k), e_{1,D}(k), ..., e_{N−1,D}(k)]^T, can be written in the compact form e_D(k) = d_D(k) − U^T(k) w(k), where the subband data matrix and the desired response vector are given by U(k) = [u_0(k), u_1(k), ..., u_{N−1}(k)] and d_D(k) = [d_{0,D}(k), d_{1,D}(k), ..., d_{N−1,D}(k)]^T.
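A compact sketch of this recursion, assuming the filter bank has already produced the matrix U(k) of decimated subband regressors and the subband error vector e_D(k) (the values of mu and eps are illustrative):

    import numpy as np

    def nsaf_update(w, U, e_D, mu=0.5, eps=1e-8):
        """One NSAF iteration: w <- w + mu * sum_i u_i e_{i,D} / ||u_i||^2.
        U has one column u_i(k) per subband; e_D[i] is e_{i,D}(k)."""
        energies = np.sum(U * U, axis=0) + eps  # ||u_i(k)||^2 per subband, regularized
        return w + mu * (U @ (e_D / energies))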

More recently, the SSAF [19] has been obtained from the following optimization criterion:

min over w(k+1) of || d_D(k) − U^T(k) w(k+1) ||_1 subject to || w(k+1) − w(k) ||² ≤ δ²,

where ||·||_1 denotes the l1-norm and δ is a parameter which prevents the weight coefficient vector from changing abruptly. Using Lagrange multipliers to solve the constrained optimization problem and substituting the accessible a priori error e_D(k) for the unavailable a posteriori error, that is, d_D(k) − U^T(k) w(k+1), the update recursion of the SSAF is formulated as

w(k+1) = w(k) + μ U(k) sgn(e_D(k)) / sqrt( Σ_{i=0}^{N−1} ||u_i(k)||² + α ),

where α is a regularization parameter and sgn(·) denotes the elementwise sign function, with sgn(x) = 1 for x ≥ 0 and sgn(x) = −1 for x < 0.
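In the same vein, a minimal sketch of the SSAF recursion as written above (parameter values are illustrative):

    import numpy as np

    def ssaf_update(w, U, e_D, mu=0.05, alpha=1e-2):
        """One SSAF iteration: the signed subband errors make the update
        insensitive to large (impulsive) error samples."""
        denom = np.sqrt(np.sum(U * U) + alpha)  # total subband input energy, regularized
        return w + mu * (U @ np.sign(e_D)) / denom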

3. Proposed l0-Norm Constraint SSAF (l0-SSAF)

Our objective is to cope with the sparsity of an underlying system while inheriting robustness from the l1-norm optimization criterion. Our approach is to find a new weight vector that minimizes the l1-norm of the a priori error vector together with an l0-norm penalty on the current weight vector, as follows:

J(k) = || d_D(k) − U^T(k) w(k) ||_1 + γ || w(k) ||_0,

where ||·||_0 denotes the l0-norm and γ is a regularization parameter which governs the compromise between the effect of the l0-norm penalty term and the error-related term. Note that, unlike the SSAF, the a priori error is used, so no approximation of the a posteriori error by the a priori error is needed.

Taking the derivative of J(k) with respect to w(k) leads to

∇J(k) = −U(k) sgn(e_D(k)) + γ ∂||w(k)||_0 / ∂w(k),

where sgn(·) is applied elementwise. To avoid the NP-hard problem posed by exact l0-norm minimization, the l0-norm penalty is often approximated as follows [29]:

||w(k)||_0 ≈ Σ_{j=0}^{M−1} ( 1 − e^{−β |w_j(k)|} ),

where the parameter β adjusts the degree of zero attraction. The j-th component of the gradient of this approximation is γ β sgn(w_j(k)) e^{−β |w_j(k)|}. To reduce its computational cost, the first-order Taylor series expansion of the exponential function is employed: e^{−β |w_j(k)|} ≈ 1 − β |w_j(k)| for |w_j(k)| ≤ 1/β, and 0 elsewhere. The gradient component is then computed as

g_j(w_j(k)) = β sgn(w_j(k)) ( 1 − β |w_j(k)| ) for |w_j(k)| ≤ 1/β, and 0 elsewhere.

Finally, the update recursion of the l0-SSAF is given by

w(k+1) = w(k) + μ U(k) sgn(e_D(k)) / sqrt( Σ_{i=0}^{N−1} ||u_i(k)||² + α ) − ρ g(w(k)),

where μ is the step-size parameter and ρ = μγ.
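Putting the pieces together, a minimal sketch of the resulting recursion (the default values of mu, alpha, rho, and beta below are illustrative assumptions, not the paper's tuned settings):

    import numpy as np

    def zero_attractor(w, beta=10.0):
        """Taylor-approximated gradient of the l0-norm surrogate:
        beta * sgn(w_j) * (1 - beta*|w_j|) for |w_j| <= 1/beta, else 0."""
        g = beta * np.sign(w) * (1.0 - beta * np.abs(w))
        g[np.abs(w) > 1.0 / beta] = 0.0
        return g

    def l0_ssaf_update(w, U, e_D, mu=0.05, alpha=1e-2, rho=5e-4, beta=10.0):
        """One l0-SSAF iteration: robust sign update plus zero attraction."""
        denom = np.sqrt(np.sum(U * U) + alpha)
        return w + mu * (U @ np.sign(e_D)) / denom - rho * zero_attractor(w, beta)

The zero-attraction term nudges taps inside the interval [−1/β, 1/β] toward zero while leaving larger taps untouched, which is what speeds up convergence on the (near-zero) majority of a sparse impulse response.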

4. Simulation Results

To validate the performance of the proposed l0-SSAF, computer simulations are carried out in a system identification scenario in which the unknown system is randomly generated. The unknown system has M taps, S of which are nonzero. The nonzero filter weights are positioned randomly and their values are drawn from a Gaussian distribution. Sparse systems with several values of the sparsity S are considered. The adaptive filter and the unknown system are assumed to have the same number of taps. The input signals are obtained by filtering a white, zero-mean Gaussian random sequence through either a first-order autoregressive system with a pole at 0.9, G_1(z) = 1 / (1 − 0.9 z^{−1}), or a second-order system, denoted AR(2,2) in the figures.

A white Gaussian measurement noise is added to the system output such that the signal-to-noise ratio (SNR) is 20 dB, where the SNR is defined as SNR = 10 log10( E[y²(n)] / E[v²(n)] ) with y(n) = w_o^T u(n). An impulsive noise is also added to the system output, with a signal-to-interference ratio (SIR) of −30 or −10 dB, respectively. The impulsive noise is modeled by a Bernoulli-Gaussian (BG) distribution [16], obtained as the product of a Bernoulli process and a Gaussian one; that is, ω(n) = b(n) g(n), where b(n) is a Bernoulli process with probability mass function P[b(n) = 1] = p and P[b(n) = 0] = 1 − p, and g(n) is an additive white Gaussian noise with zero mean and variance σ_g². To compare convergence performance, the normalized mean square deviation (NMSD), E[ ||w_o − w(k)||² / ||w_o||² ], is computed and averaged over 50 independent trials. The cosine-modulated filter banks of [30], with a fixed-length prototype filter, are used in the simulations. The step sizes and regularization parameters of the NSAF, SSAF, PNSAF, and l0-SSAF are fixed per experiment; the regularization parameter of the l0-SSAF is obtained by repeated trials so as to minimize the steady-state NMSD. The input signals generated by G_1(z) are used for Figures 2–7 and those generated by the second-order system for Figures 8 and 9.
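A sketch of this experimental setup (the AR(1) pole matches the figure captions; the impulse probability p, impulse variance, and signal length are illustrative placeholders):

    import numpy as np

    rng = np.random.default_rng(1)

    def ar1_input(n_samples, pole=0.9):
        """Colored input: white Gaussian noise through G_1(z) = 1/(1 - pole*z^-1)."""
        u = np.empty(n_samples)
        prev = 0.0
        for i in range(n_samples):
            prev = pole * prev + rng.standard_normal()
            u[i] = prev
        return u

    def bg_impulsive_noise(n_samples, p=0.01, sigma_g=10.0):
        """Bernoulli-Gaussian impulses: omega(n) = b(n) * g(n)."""
        b = rng.random(n_samples) < p
        return b * (sigma_g * rng.standard_normal(n_samples))

    def nmsd_db(w, w_o):
        """Normalized mean square deviation ||w_o - w||^2 / ||w_o||^2 in dB."""
        return 10.0 * np.log10(np.sum((w_o - w) ** 2) / np.sum(w_o ** 2))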

Figure 2: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms (input: Gaussian AR(1) with pole at 0.9).
Figure 3: NMSD learning curves of the l0-SSAF algorithm with various values of the regularization parameter (input: Gaussian AR(1) with pole at 0.9).
Figure 4: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms at a different SIR (input: Gaussian AR(1) with pole at 0.9).
Figure 5: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms for different sparsity levels (S = 8, 16, 32) (input: Gaussian AR(1) with pole at 0.9).
Figure 6: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms with different values of the zero-attraction parameter (1, 20, 50, 100) (input: Gaussian AR(1) with pole at 0.9).
Figure 7: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms in the case of a time-varying unknown system: the system is right-shifted by 20 taps at the 20000th iteration (input: Gaussian AR(1) with pole at 0.9).
Figure 8: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms (input: Gaussian AR(2,2)).
Figure 9: NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms at a different SIR (input: Gaussian AR(2,2)).

Figure 2 shows the NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms in the presence of impulsive noise; the regularization parameter of the l0-SSAF is chosen as described above. Compared to the conventional SAF algorithms, the proposed l0-SSAF yields remarkably improved convergence performance in terms of both the convergence rate and the steady-state misalignment.

In Figure 3, to verify the effect of the regularization parameter on convergence performance, the NMSD curves of the l0-SSAF are illustrated for several of its values. Over this range, the l0-SSAF is not excessively sensitive to the choice of the parameter. The analysis of an optimal value remains future work.

Figure 4 illustrates the NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms at a different SIR. The same parameter values as in Figure 2 are chosen, and results similar to those of Figure 2 are observed.

Figure 5 depicts the NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms for different sparsity levels; here, S = 8, 16, and 32 were chosen, with the same parameters as in Figure 2. As can be seen, the sparser the system, the better the convergence performance of the l0-SSAF.

Figure 6 shows the NMSD learning curves of the NSAF, PNSAF, SSAF, and l0-SSAF algorithms for different values of the zero-attraction parameter (1, 20, 50, and 100), with the same step-size parameter throughout. In the figure, it is apparent that the larger the value of this parameter, the higher the steady-state NMSD. The optimal value, however, remains a future issue.

Next, the tracking capabilities of the algorithms in response to a sudden change in the system are tested. Figure 7 shows the results when the unknown system is right-shifted by 20 taps; the same parameter values as in Figure 2 are used. The figure shows that the l0-SSAF keeps track of the weight change while achieving a faster convergence rate and a lower steady-state misalignment than the conventional SAF algorithms.

Finally, Figures 8 and 9 show the simulation results for the input signal generated by the second-order system at the two SIR values, respectively. The same parameters of all SAF algorithms as in Figure 2 are chosen for Figures 8 and 9. Results similar to those of the previous figures are observed, confirming the advantage of the l0-SSAF over the classical SAF algorithms for a different input signal.

5. Conclusion

This paper has proposed a robust and sparsity-aware SSAF algorithm which incorporates the sparsity condition of a system into the l1-norm optimization criterion of the a priori error vector. By utilizing the l0-norm penalty of the current weight vector and approximating it to avoid an NP-hard problem, the update recursion of the proposed l0-SSAF is obtained, with its computational cost reduced by means of a first-order Taylor series expansion. The simulation results indicate that the proposed l0-SSAF achieves greatly improved convergence performance over the conventional SAF algorithms when a system is not only sparse but also disturbed by impulsive noise.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

1. S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, USA, 4th edition, 2002.
2. A. H. Sayed, Fundamentals of Adaptive Filtering, John Wiley & Sons, New York, NY, USA, 2003.
3. P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementations, Kluwer Academic, Boston, Mass, USA, 3rd edition, 2008.
4. M. M. Sondhi, "The history of echo cancellation," IEEE Signal Processing Magazine, vol. 23, no. 5, pp. 95–102, 2006.
5. A. Gilloire and M. Vetterli, "Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation," IEEE Transactions on Signal Processing, vol. 40, no. 8, pp. 1862–1875, 1992.
6. M. de Courville and P. Duhamel, "Adaptive filtering in subbands using a weighted criterion," IEEE Transactions on Signal Processing, vol. 46, no. 9, pp. 2359–2371, 1998.
7. S. S. Pradhan and V. U. Reddy, "A new approach to subband adaptive filtering," IEEE Transactions on Signal Processing, vol. 47, no. 3, pp. 655–664, 1999.
8. K. A. Lee and W. S. Gan, "Improving convergence of the NLMS algorithm using constrained subband updates," IEEE Signal Processing Letters, vol. 11, no. 9, pp. 736–739, 2004.
9. K. A. Lee and W. S. Gan, "Inherent decorrelating and least perturbation properties of the normalized subband adaptive filter," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4475–4480, 2006.
10. D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Transactions on Speech and Audio Processing, vol. 8, no. 5, pp. 508–518, 2000.
11. W. Li and J. C. Preisig, "Estimation of rapidly time-varying sparse channels," IEEE Journal of Oceanic Engineering, vol. 32, no. 4, pp. 927–939, 2007.
12. W. F. Schreiber, "Advanced television systems for terrestrial broadcasting: some problems and some proposed solutions," Proceedings of the IEEE, vol. 83, no. 6, pp. 958–981, 1995.
13. S. L. Gay, "Efficient, fast converging adaptive filter for network echo cancellation," in Proceedings of the 32nd Asilomar Conference on Signals, Systems & Computers, vol. 1, pp. 394–398, Pacific Grove, Calif, USA, November 1998.
14. H. Deng and M. Doroslovački, "Improving convergence of the PNLMS algorithm for sparse impulse response identification," IEEE Signal Processing Letters, vol. 12, no. 3, pp. 181–184, 2005.
15. M. S. E. Abadi, "Proportionate normalized subband adaptive filter algorithms for sparse system identification," Signal Processing, vol. 89, no. 7, pp. 1467–1474, 2009.
16. L. R. Vega, H. Rey, J. Benesty, and S. Tressens, "A new robust variable step-size NLMS algorithm," IEEE Transactions on Signal Processing, vol. 56, no. 5, pp. 1878–1893, 2008.
17. Z. Yang, Y. R. Zheng, and S. L. Grant, "Proportionate affine projection sign algorithms for network echo cancellation," IEEE Transactions on Audio, Speech and Language Processing, vol. 19, no. 8, pp. 2273–2284, 2011.
18. B. Liao, Z. G. Zhang, and S. C. Chan, "A new robust Kalman filter-based subspace tracking algorithm in an impulsive noise environment," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 57, no. 9, pp. 740–744, 2010.
19. J. Ni and F. Li, "Variable regularisation parameter sign subband adaptive filter," Electronics Letters, vol. 46, no. 24, pp. 1605–1607, 2010.
20. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
21. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
22. R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B: Methodological, vol. 58, no. 1, pp. 267–288, 1996.
23. Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 3125–3128, Taipei, Taiwan, April 2009.
24. Y. Gu, J. Jin, and S. Mei, "l0 norm constraint LMS algorithm for sparse system identification," IEEE Signal Processing Letters, vol. 16, no. 9, pp. 774–777, 2009.
25. J. Jin, Y. Gu, and S. Mei, "A stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 409–420, 2010.
26. E. M. Eksioglu and A. K. Tanc, "RLS algorithm with convex regularization," IEEE Signal Processing Letters, vol. 18, no. 8, pp. 470–473, 2011.
27. N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, "Adaptive algorithms for sparse system identification," Signal Processing, vol. 91, no. 8, pp. 1910–1919, 2011.
28. Y.-S. Choi, "Subband adaptive filtering with l1-norm constraint for sparse system identification," Mathematical Problems in Engineering, vol. 2013, Article ID 601623, 7 pages, 2013.
29. P. S. Bradley and O. L. Mangasarian, "Feature selection via concave minimization and support vector machines," in Proceedings of the International Conference on Machine Learning (ICML '98), pp. 82–90, 1998.
30. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.