Abstract

This paper presents a novel subband adaptive filter (SAF) for system identification in which the impulse response of the unknown system is sparse and the measurements are disturbed by impulsive noise. Benefiting from the use of the $l_{1}$-norm optimization of the error and the $l_{0}$-norm penalty of the weight vector in the cost function, the proposed $l_{0}$-norm sign SAF ($l_{0}$-SSAF) achieves both robustness against impulsive noise and remarkably improved convergence behavior compared with the classical adaptive filters. Simulation results in the system identification scenario confirm that the proposed $l_{0}$-SSAF is not only more robust but also faster and more accurate than its counterparts in sparse system identification in the presence of impulsive noise.

1. Introduction

Adaptive filtering algorithms have gained popularity and proven to be efficient in various applications such as system identification, channel equalization, and echo cancellation [1–4]. The normalized least mean square (NLMS) algorithm has become one of the most popular and widely used adaptive filtering algorithms because of its simplicity and robustness. Despite these advantages, the use of the NLMS algorithm has been limited because it converges poorly for correlated input signals [2]. To address this problem, various approaches have been presented, such as the recursive least squares algorithm [2], the affine projection algorithm [2], and subband adaptive filtering (SAF) [5–9]. Among these, the SAF approach partitions the input signal and the desired response into almost mutually exclusive subbands. This prewhitening characteristic of the SAF allows each subband to converge almost independently, so that subband algorithms achieve faster convergence. On the basis of these characteristics, Lee and Gan proposed the normalized SAF (NSAF) algorithm in [8, 9], which improves the convergence speed while requiring almost the same computational complexity as the NLMS algorithm. However, the NSAF still suffers from degraded convergence performance when the underlying system to be identified is sparse, such as a network echo path [10], an underwater channel [11], or a digital TV transmission channel [12]. Motivated by proportionate step-size adaptive filtering [13, 14], the proportionate NSAF (PNSAF) has been presented to combat poor convergence in sparse system identification [15]. However, it does not exploit the sparsity condition itself. Moreover, the NSAF and PNSAF algorithms are highly sensitive to impulsive interference, which deteriorates their convergence behavior. Impulsive interference exists in various applications such as acoustic echo cancellation [16], network echo cancellation [17], and subspace tracking [18].

To address the robustness issue, the sign SAF (SSAF) [19] has been developed based on the $l_{1}$-norm optimization criterion, making it robust against impulsive interference. However, its use is limited in the case of sparse system identification. Moreover, the SSAF converges slowly and fails to accelerate its convergence rate as the number of subbands increases.

In recent years, motivated by the compressive sensing framework [20, 21] and the least absolute shrinkage and selection operator (LASSO) [22], a variety of adaptive filtering algorithms which incorporate the sparsity of a system have been developed, unlike the proportionate adaptive filtering approach [23–27]. Along this line, the SAF with the $l_{1}$-norm penalty has recently been presented as an alternative for incorporating the sparsity of a system [28]. In particular, the $l_{0}$-norm of a system is able to represent its actual sparsity [24–26]. In this paper, an $l_{0}$-norm constraint SSAF ($l_{0}$-SSAF) is presented, aiming at developing a sparsity-aware SSAF. With this in mind, by integrating the $l_{0}$-norm penalty of the current weight vector into the $l_{1}$-norm optimization criterion, the $l_{0}$-SSAF benefits from both superior convergence for sparse system identification and robustness against impulsive noise. In addition, the $l_{0}$-SSAF is derived from an $l_{1}$-norm optimization of the a priori error instead of the a posteriori error used in the SSAF. Thus, there is no need to approximate the a posteriori error by the a priori error to derive the update recursion of the $l_{0}$-SSAF. Simulation results show that the $l_{0}$-SSAF is superior to the conventional SAFs in identifying a sparse system in the presence of severe impulsive noise.

The remainder of the paper is organized as follows. Section 2 introduces the classical SAFs, followed by the derivation of the proposed $l_{0}$-SSAF algorithm in Section 3. Section 4 illustrates the computer simulation results, and Section 5 concludes this study.

2. Conventional SAFs

Consider a desired signal $d(n)$ that arises from the system identification model
$$d(n) = \mathbf{u}^{T}(n)\,\mathbf{w}_{o} + v(n),$$
where $\mathbf{w}_{o}$ is an $M \times 1$ column vector for the impulse response of an unknown system that we wish to estimate, $v(n)$ accounts for measurement noise with zero mean and variance $\sigma_{v}^{2}$, and $\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^{T}$ is an $M \times 1$ input vector.
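
As a minimal sketch of this data model, the following Python/NumPy snippet generates a sparse impulse response and the corresponding noisy desired signal; the dimensions, the sparsity level, and the noise scale are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 64             # number of taps (illustrative value, not the paper's setting)
S = 4              # number of nonzero taps (illustrative value)

# Sparse unknown system w_o: S nonzero weights at random positions.
w_o = np.zeros(M)
nz = rng.choice(M, size=S, replace=False)
w_o[nz] = rng.standard_normal(S)

# Input sequence and measurement noise v(n).
n_samples = 1000
u = rng.standard_normal(n_samples)
v = 0.01 * rng.standard_normal(n_samples)

# Desired signal d(n) = u(n)^T w_o + v(n), with u(n) = [u(n), ..., u(n-M+1)]^T.
d = np.zeros(n_samples)
for n in range(M - 1, n_samples):
    u_vec = u[n - M + 1:n + 1][::-1]
    d[n] = u_vec @ w_o + v[n]
```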

Figure 1 shows the structure of the NSAF, where the desired signal $d(n)$ and the output signal $y(n)$ are partitioned into $N$ subbands by the analysis filters $H_{i}(z)$, $i = 0, 1, \ldots, N-1$. The resultant subband signals, $d_{i}(n)$ and $y_{i}(n)$ for $i = 0, 1, \ldots, N-1$, are critically decimated to a lower sampling rate commensurate with their bandwidth. Here, the variable $n$ is used to index the original sequences and $k$ to index the decimated sequences for all signals. Then, the decimated desired signal and the decimated filter output signal at each subband are defined as $d_{i,D}(k) = d_{i}(kN)$ and $y_{i,D}(k) = \mathbf{u}_{i}^{T}(k)\,\hat{\mathbf{w}}(k)$, where $\mathbf{u}_{i}(k) = [u_{i}(kN), u_{i}(kN-1), \ldots, u_{i}(kN-M+1)]^{T}$ is the input data vector for the $i$th subband and $\hat{\mathbf{w}}(k)$ denotes an estimate for $\mathbf{w}_{o}$. Then, the decimated subband error is given by
$$e_{i,D}(k) = d_{i,D}(k) - \mathbf{u}_{i}^{T}(k)\,\hat{\mathbf{w}}(k), \qquad i = 0, 1, \ldots, N-1.$$
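
The decimated subband error computation described above can be sketched as follows; the analysis filters are passed in as a plain coefficient array (a real implementation would use the cosine-modulated filter bank of [30]), and the function name and indexing convention are assumptions for illustration.

```python
import numpy as np

def decimated_subband_errors(d, u, w_hat, H):
    """Decimated subband errors e_{i,D}(k) = d_{i,D}(k) - u_i(k)^T w_hat (sketch).

    H is an (N, L) array of analysis filter coefficients; a real implementation
    would use the cosine-modulated filter bank of [30].
    """
    N = H.shape[0]                       # number of subbands = decimation factor
    M = w_hat.size
    d_sub = [np.convolve(d, h)[:len(d)] for h in H]   # subband desired signals d_i(n)
    u_sub = [np.convolve(u, h)[:len(u)] for h in H]   # subband input signals u_i(n)
    K = (len(d) - M) // N                # number of decimated iterations k
    e = np.zeros((N, K))
    for k in range(K):
        n = k * N + M - 1                # original-rate index corresponding to decimated time k
        for i in range(N):
            u_vec = u_sub[i][n - M + 1:n + 1][::-1]   # [u_i(kN), ..., u_i(kN-M+1)]
            e[i, k] = d_sub[i][n] - u_vec @ w_hat     # e_{i,D}(k)
    return e
```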

In [8], the authors have shown that the update recursion of the NSAF algorithm is given by
$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_{i}(k)}{\left\| \mathbf{u}_{i}(k) \right\|^{2}}\, e_{i,D}(k),$$
where $\mu$ is a step-size parameter. Then, the estimation errors in all the subbands, that is, $e_{i,D}(k)$ for $i = 0, 1, \ldots, N-1$, can be written in a compact form as
$$\mathbf{e}_{D}(k) = \mathbf{d}_{D}(k) - \mathbf{U}^{T}(k)\,\hat{\mathbf{w}}(k),$$
where the subband data matrix $\mathbf{U}(k)$ and the desired response vector $\mathbf{d}_{D}(k)$ are given by
$$\mathbf{U}(k) = \left[ \mathbf{u}_{0}(k), \mathbf{u}_{1}(k), \ldots, \mathbf{u}_{N-1}(k) \right], \qquad \mathbf{d}_{D}(k) = \left[ d_{0,D}(k), d_{1,D}(k), \ldots, d_{N-1,D}(k) \right]^{T}.$$
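
A minimal sketch of one NSAF iteration, assuming the subband regressors and decimated desired samples for the current decimated index $k$ are already available; the default step size and the small regularization constant in the denominator are illustrative choices.

```python
import numpy as np

def nsaf_update(w_hat, u_vecs, d_dec, mu=0.5, eps=1e-6):
    """One NSAF iteration (sketch; notation and default parameters are assumed).

    u_vecs : (N, M) array whose i-th row is the subband regressor u_i(k)
    d_dec  : length-N array of decimated desired samples d_{i,D}(k)
    """
    w_new = w_hat.copy()
    for i in range(u_vecs.shape[0]):
        u_i = u_vecs[i]
        e_i = d_dec[i] - u_i @ w_hat                  # decimated subband error e_{i,D}(k)
        w_new += mu * e_i * u_i / (u_i @ u_i + eps)   # normalized per-subband update
    return w_new
```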

More recently, the SSAF [19] has been obtained from the following optimization criterion:
$$\min_{\hat{\mathbf{w}}(k+1)} \left\| \mathbf{d}_{D}(k) - \mathbf{U}^{T}(k)\,\hat{\mathbf{w}}(k+1) \right\|_{1} \quad \text{subject to} \quad \left\| \hat{\mathbf{w}}(k+1) - \hat{\mathbf{w}}(k) \right\|^{2} \le \delta^{2},$$
where $\|\cdot\|_{1}$ denotes the $l_{1}$-norm and $\delta^{2}$ is a parameter which prevents the weight coefficient vector from abrupt changes. Using Lagrange multipliers to solve the constrained optimization problem and utilizing the accessible a priori error vector $\mathbf{e}_{D}(k)$ instead of the unavailable a posteriori error vector, that is, $\mathbf{e}_{D,p}(k) = \mathbf{d}_{D}(k) - \mathbf{U}^{T}(k)\,\hat{\mathbf{w}}(k+1)$, the update recursion of the SSAF is formulated as
$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu\, \frac{\sum_{i=0}^{N-1} \mathbf{u}_{i}(k)\, \mathrm{sgn}\!\left( e_{i,D}(k) \right)}{\sqrt{\sum_{i=0}^{N-1} \left\| \mathbf{u}_{i}(k) \right\|^{2} + \epsilon}},$$
where $\epsilon$ is a regularization parameter and $\mathrm{sgn}(\cdot)$ denotes the sign function, where $\mathrm{sgn}(x) = 1$ for $x > 0$, $\mathrm{sgn}(x) = 0$ for $x = 0$, and $\mathrm{sgn}(x) = -1$ for $x < 0$.
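
A corresponding sketch of one SSAF iteration under the same assumed data layout; note the single normalization shared by all subbands, in contrast to the per-subband normalization of the NSAF.

```python
import numpy as np

def ssaf_update(w_hat, u_vecs, d_dec, mu=0.005, eps=1e-3):
    """One SSAF iteration (sketch; default parameter values are illustrative)."""
    e_dec = d_dec - u_vecs @ w_hat                 # e_{i,D}(k) for all subbands
    numer = u_vecs.T @ np.sign(e_dec)              # sum_i u_i(k) sgn(e_{i,D}(k))
    denom = np.sqrt(np.sum(u_vecs ** 2) + eps)     # sqrt(sum_i ||u_i(k)||^2 + eps)
    return w_hat + mu * numer / denom
```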

3. Proposed $l_{0}$-Norm Constraint SSAF ($l_{0}$-SSAF)

Our objective is to cope with the sparsity of an underlying system while inheriting robustness from the $l_{1}$-norm optimization criterion. Our approach is to find a new weight vector, $\hat{\mathbf{w}}(k+1)$, that minimizes the $l_{1}$-norm of the a priori error vector together with the $l_{0}$-norm penalty of the current weight vector as follows:
$$J(k) = \left\| \mathbf{d}_{D}(k) - \mathbf{U}^{T}(k)\,\hat{\mathbf{w}}(k) \right\|_{1} + \gamma \left\| \hat{\mathbf{w}}(k) \right\|_{0},$$
where $\|\cdot\|_{0}$ denotes the $l_{0}$-norm and $\gamma$ is a regularization parameter which governs the compromise between the effect of the $l_{0}$-norm penalty term and the error-vector-related term. Note that the a priori error is used, unlike in the SSAF, so that no approximation of the a posteriori error by the a priori error is needed.

Taking the derivative of $J(k)$ with respect to $\hat{\mathbf{w}}(k)$ leads to
$$\frac{\partial J(k)}{\partial \hat{\mathbf{w}}(k)} = -\mathbf{U}(k)\,\mathrm{sgn}\!\left( \mathbf{e}_{D}(k) \right) + \gamma\, \frac{\partial \left\| \hat{\mathbf{w}}(k) \right\|_{0}}{\partial \hat{\mathbf{w}}(k)},$$
where $\mathrm{sgn}(\cdot)$ operates elementwise on $\mathbf{e}_{D}(k) = \mathbf{d}_{D}(k) - \mathbf{U}^{T}(k)\,\hat{\mathbf{w}}(k)$. To avoid the nonpolynomial (NP-) hard problem arising from the $l_{0}$-norm minimization, the $l_{0}$-norm penalty is often approximated as follows [29]:
$$\left\| \hat{\mathbf{w}}(k) \right\|_{0} \approx \sum_{l=1}^{M} \left( 1 - e^{-\beta \left| \hat{w}_{l}(k) \right|} \right),$$
where the parameter $\beta$ plays a role in adjusting the degree of zero attraction. The $l$th component of the gradient of (11) is given by
$$\frac{\partial \left\| \hat{\mathbf{w}}(k) \right\|_{0}}{\partial \hat{w}_{l}(k)} \approx \beta\, \mathrm{sgn}\!\left( \hat{w}_{l}(k) \right) e^{-\beta \left| \hat{w}_{l}(k) \right|}.$$
To reduce the computational cost of (12), the first-order Taylor series expansion of the exponential function is employed:
$$e^{-\beta \left| \hat{w}_{l}(k) \right|} \approx \begin{cases} 1 - \beta \left| \hat{w}_{l}(k) \right|, & \left| \hat{w}_{l}(k) \right| \le \dfrac{1}{\beta}, \\ 0, & \text{otherwise}. \end{cases}$$
Then, the gradient (12) is computed as
$$g_{\beta}\!\left( \hat{w}_{l}(k) \right) = \begin{cases} \beta\, \mathrm{sgn}\!\left( \hat{w}_{l}(k) \right) - \beta^{2}\, \hat{w}_{l}(k), & \left| \hat{w}_{l}(k) \right| \le \dfrac{1}{\beta}, \\ 0, & \text{otherwise}. \end{cases}$$
Finally, the update recursion of the $l_{0}$-SSAF is given by
$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu\, \frac{\sum_{i=0}^{N-1} \mathbf{u}_{i}(k)\, \mathrm{sgn}\!\left( e_{i,D}(k) \right)}{\sqrt{\sum_{i=0}^{N-1} \left\| \mathbf{u}_{i}(k) \right\|^{2} + \epsilon}} - \rho\, \mathbf{g}_{\beta}\!\left( \hat{\mathbf{w}}(k) \right),$$
where $\mu$ is the step-size parameter, $\mathbf{g}_{\beta}(\cdot)$ applies $g_{\beta}(\cdot)$ elementwise, and $\rho$ is a zero-attraction control parameter determined by $\mu$ and $\gamma$.
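
A minimal sketch of one $l_{0}$-SSAF iteration, assuming the reconstructed update above: the SSAF-style sign update plus an elementwise zero-attraction term obtained from the Taylor-approximated $l_{0}$-norm gradient. The default values of $\rho$ and $\beta$ are illustrative only.

```python
import numpy as np

def l0_ssaf_update(w_hat, u_vecs, d_dec, mu=0.005, rho=5e-4, beta=10.0, eps=1e-3):
    """One l0-SSAF iteration (sketch; rho and beta values are illustrative)."""
    # Robust l1-norm (sign) part, identical in form to the SSAF update.
    e_dec = d_dec - u_vecs @ w_hat
    numer = u_vecs.T @ np.sign(e_dec)
    denom = np.sqrt(np.sum(u_vecs ** 2) + eps)

    # Zero-attraction term: first-order Taylor approximation of the gradient of
    # the approximated l0-norm penalty, active only where |w_l(k)| <= 1/beta.
    g = np.where(np.abs(w_hat) <= 1.0 / beta,
                 beta * np.sign(w_hat) - beta ** 2 * w_hat,
                 0.0)

    return w_hat + mu * numer / denom - rho * g
```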

4. Simulation Results

To validate the performance of the proposed $l_{0}$-SSAF, computer simulations are carried out in a system identification scenario in which the unknown system is randomly generated. The length of the unknown system is $M$, of which $S$ taps are nonzero. The nonzero filter weights are positioned randomly and their values are taken from a Gaussian distribution. Sparse systems with several levels of sparsity $S$ are considered. The adaptive filter and the unknown system are assumed to have the same number of taps. The input signals are obtained by filtering a white, zero-mean, Gaussian random sequence through either a first-order system, $G_{1}(z)$, or a second-order system, $G_{2}(z)$.
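
Since the exact transfer functions $G_{1}(z)$ and $G_{2}(z)$ are not reproduced here, the sketch below colors white Gaussian noise with an assumed first-order autoregressive system $1/(1 - a z^{-1})$; the pole value is purely illustrative.

```python
import numpy as np

def colored_input(n_samples, a=0.9, seed=0):
    """White Gaussian noise colored by an assumed first-order system 1/(1 - a z^{-1}).

    The pole a = 0.9 is an illustrative placeholder; the paper's G_1(z) and
    G_2(z) are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    u = np.zeros(n_samples)
    for n in range(n_samples):
        u[n] = white[n] + (a * u[n - 1] if n > 0 else 0.0)
    return u
```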

A measurement noise with a white Gaussian distribution is added to the system output such that the signal-to-noise ratio (SNR) is 20 dB, where the SNR is defined as
$$\mathrm{SNR} = 10 \log_{10}\!\left( \frac{E\!\left[ y^{2}(n) \right]}{E\!\left[ v^{2}(n) \right]} \right),$$
where $y(n) = \mathbf{u}^{T}(n)\,\mathbf{w}_{o}$. An impulsive noise is added to the system output with a signal-to-interference ratio (SIR) of −30 or −10 dB, correspondingly. The impulsive noise $\eta(n)$ is modeled by a Bernoulli-Gaussian (BG) distribution [16], which is obtained as the product of a Bernoulli process and a Gaussian one; that is, $\eta(n) = \omega(n) A(n)$, where $\omega(n)$ is a Bernoulli process with a probability mass function given by $P(\omega(n) = 1) = \Pr$ and $P(\omega(n) = 0) = 1 - \Pr$. In addition, $A(n)$ is an additive white Gaussian noise with zero mean and variance $\sigma_{A}^{2}$; a fixed value of $\Pr$ is used throughout. In order to compare the convergence performance, the normalized mean square deviation (NMSD),
$$\mathrm{NMSD} = 10 \log_{10}\!\left( \frac{\left\| \mathbf{w}_{o} - \hat{\mathbf{w}}(k) \right\|^{2}}{\left\| \mathbf{w}_{o} \right\|^{2}} \right),$$
is taken and averaged over 50 independent trials. Cosine-modulated filter banks [30] with a fixed number of subbands and a fixed prototype filter length are used in the simulations. The step-size and regularization parameters of the NSAF, SSAF, PNSAF, and $l_{0}$-SSAF are chosen individually for each algorithm, and the regularization parameter of the $l_{0}$-SSAF is obtained by repeated trials so as to minimize the steady-state NMSD. We use the input signals generated by $G_{1}(z)$ for Figures 2–7 and by $G_{2}(z)$ for Figures 8 and 9.
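
A sketch of the Bernoulli-Gaussian impulsive noise generator and the NMSD measure described above; the occurrence probability and the way the variance of $A(n)$ is tied to the target SIR are assumptions for illustration.

```python
import numpy as np

def bg_impulsive_noise(y_clean, pr=0.01, sir_db=-30.0, seed=0):
    """Bernoulli-Gaussian impulsive noise eta(n) = omega(n) * A(n) (sketch).

    pr = 0.01 is an assumed occurrence probability; the variance of A(n) is set
    so that the resulting SIR (in dB) matches sir_db.
    """
    rng = np.random.default_rng(seed)
    n = len(y_clean)
    omega = (rng.random(n) < pr).astype(float)        # Bernoulli process
    sigma_a2 = np.mean(y_clean ** 2) / (pr * 10 ** (sir_db / 10.0))
    A = np.sqrt(sigma_a2) * rng.standard_normal(n)    # zero-mean white Gaussian amplitude
    return omega * A

def nmsd_db(w_o, w_hat):
    """Normalized mean square deviation, 10*log10(||w_o - w_hat||^2 / ||w_o||^2)."""
    return 10.0 * np.log10(np.sum((w_o - w_hat) ** 2) / np.sum(w_o ** 2))
```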

Figure 2 shows the NMSD learning curves of the NSAF, PNSAF, SSAF, and $l_{0}$-SSAF algorithms for the first SIR setting. For the $l_{0}$-SSAF, a fixed regularization parameter is chosen. Compared to the conventional SAF algorithms, the proposed $l_{0}$-SSAF yields remarkably improved convergence performance in terms of both the convergence rate and the steady-state misalignment.

In Figure 3, to verify the effect of the regularization parameter on convergence performance, the NMSD curves of the $l_{0}$-SSAF for different values of this parameter are illustrated. Over the range of values tested, the $l_{0}$-SSAF is not excessively sensitive to its choice. The analysis of an optimal value remains future work.

Figure 4 illustrates the NMSD learning curves of the NSAF, PNSAF, SSAF, and $l_{0}$-SSAF algorithms under the other SIR setting. The same regularization parameter value as in Figure 2 is chosen. Results similar to those in Figure 2 are observed.

Figure 5 depicts the NMSD learning curves of the NSAF, PNSAF, SSAF, and $l_{0}$-SSAF algorithms for different sparsity levels. Here, several values of $S$, including 16 and 32, were chosen. The same parameters as in Figure 2 are used. As can be seen, the sparser the system, the better the convergence performance of the $l_{0}$-SSAF.

Figure 6 shows the NMSD learning curves of the NSAF, PNSAF, SSAF, and $l_{0}$-SSAF algorithms with different values of the penalty-related parameter of the $l_{0}$-SSAF. The values 1, 20, 50, and 100 were used, and the same step-size parameter is chosen. It is apparent from the figure that the larger this value, the higher the steady-state NMSD. However, the choice of its optimal value remains a future issue.

Next, the tracking capabilities of the algorithms in response to a sudden change in the system are tested under impulsive noise. Figure 7 shows the results for the case in which the unknown system is right-shifted by 20 taps. The same regularization parameter value as in Figure 2 is used. The figure shows that the $l_{0}$-SSAF keeps track of the weight change while achieving a faster convergence rate and a lower steady-state misalignment than the conventional SAF algorithms.

Finally, Figures 8 and 9 show the simulation results for the input signal generated by $G_{2}(z)$ under the two SIR settings, respectively. The same parameters of all SAF algorithms as in Figure 2 are chosen in Figures 8 and 9. Results similar to those in the previous figures are observed, implying the superiority of the $l_{0}$-SSAF over the classical SAF algorithms for a different input signal.

5. Conclusion

This paper has proposed a robust and sparsity-aware SSAF algorithm which incorporates the sparsity condition of a system into the $l_{1}$-norm optimization criterion of the a priori error vector. By utilizing the $l_{0}$-norm penalty of the current weight vector and approximating it to avoid the NP-hard problem, the update recursion of the proposed $l_{0}$-SSAF is obtained, with the computational cost reduced by means of a first-order Taylor series expansion. The simulation results indicate that the proposed $l_{0}$-SSAF achieves greatly improved convergence performance over the conventional SAF algorithms when the system is not only sparse but also disturbed by impulsive noise.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.