Abstract

This paper presents a new approach to the normalized subband adaptive filter (NSAF) which directly exploits the sparsity condition of an underlying system for sparse system identification. The proposed NSAF integrates a weighted $l_1$-norm constraint into the cost function of the NSAF algorithm. To obtain the optimum solution of the weighted $l_1$-norm regularized cost function, subgradient calculus is employed, resulting in a stochastic gradient based update recursion of the weighted $l_1$-norm regularized NSAF. Two distinct choices of the weighted $l_1$-norm regularization lead to two versions of the $l_1$-norm regularized NSAF. Numerical results clearly indicate the superior convergence of the $l_1$-norm regularized NSAFs over the classical NSAF, especially when identifying a sparse system.

1. Introduction

Over the past few decades, the relative simplicity and good performance of the normalized least mean square (NLMS) algorithm have made it a popular tool for adaptive filtering applications. However, its convergence performance deteriorates significantly for correlated input signals [1, 2]. As a popular solution, subband adaptive filtering has recently been developed, referred to as the subband adaptive filter (SAF) [3–7]. Its distinct feature rests on the property that LMS-type adaptive filters converge faster for white input signals than for colored ones [1, 2]. Thus, by carrying out a prewhitening of colored input signals, the SAF achieves accelerated convergence compared to LMS-type adaptive filters. Recently, the use of a multiple-constraint optimization criterion in the formulation of the cost function has resulted in the normalized SAF (NSAF), whose computational complexity is close to that of the NLMS algorithm [6, 7].

In the context of system identification, the unknown system to be identified is sparse in common scenarios, such as echo paths [8] and digital TV transmission channels [9]. Namely, the unknown system consists of many near-zero coefficients and a small number of large ones. However, adaptive filtering algorithms suffer from poor convergence performance when identifying such a sparse system [8]. Indeed, the capability of the NSAF fades in a sparse system identification scenario. To deal with this issue, a variety of proportionate adaptive algorithms have been presented for the NSAF, which assign proportionate step sizes to individual filter taps [10–12]. However, these algorithms do not exploit the sparsity condition of the underlying system.

Recently, motivated by the compressive sensing framework [13, 14] and the least absolute shrinkage and selection operator (LASSO) [15], a number of adaptive filtering algorithms which make use of the sparsity condition of an underlying system have been developed [16–20]. The core idea behind this approach is to incorporate the sparsity condition of the underlying system by imposing a sparsity-inducing constraint term. Adding a sparsity constraint, using an $l_0$- or $l_1$-norm penalty, to the cost function makes the least relevant weights of the filter shrink to zero. However, to the best of the author's knowledge, subband adaptive filtering that exploits the sparsity condition has not been studied yet.

In this regard, this paper presents a novel approach, the sparsity-regularized NSAF, which incorporates the sparsity condition of the system directly into the cost function via a sparsity-inducing constraint term. This is carried out by adding a weighted $l_1$-norm of the filter weight estimate to the cost function as a regularization term. Considering two choices of the weighted $l_1$-norm regularization, two stochastic gradient-based $l_1$-norm regularized NSAF algorithms are derived. First, the $l_1$-norm NSAF ($l_1$-NSAF) is obtained by using the identity matrix as the weighting matrix. Second, the reweighted $l_1$-norm NSAF ($l_1$-RNSAF), which uses the current estimate of the filter weights to form the weighting matrix, is developed. Through numerical simulations, the resulting sparsity-regularized NSAFs prove their superiority over the classical NSAF, especially when the sparsity of the underlying system becomes severe.

The remainder of the paper is organized as follows. Section 2 introduces the classical NSAF, followed by the derivation of the proposed sparsity-regularized NSAFs in Section 3. Section 4 illustrates the computer simulation results and Section 5 concludes this study.

2. Conventional NSAF

Consider a desired signal $d(n)$ that arises from the system identification model

$$d(n) = \mathbf{u}^T(n)\,\mathbf{w}_o + v(n), \tag{1}$$

where $\mathbf{w}_o$ is an $M \times 1$ column vector for the impulse response of an unknown system that we wish to estimate, $v(n)$ accounts for measurement noise with zero mean and variance $\sigma_v^2$, and $\mathbf{u}(n)$ denotes the input vector,

$$\mathbf{u}(n) = \left[u(n), u(n-1), \ldots, u(n-M+1)\right]^T. \tag{2}$$
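As an illustration, the following minimal NumPy sketch generates a desired signal according to the model (1); the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def desired_signal(u, w_o, noise_var, rng):
    """Generate d(n) = u^T(n) w_o + v(n), eq. (1), for a whole input sequence.

    u         : 1-D array holding the input signal u(n)
    w_o       : impulse response of the unknown system (length M)
    noise_var : variance of the zero-mean measurement noise v(n)
    """
    # y(n) = sum_m w_o[m] u(n - m): the input convolved with the true system
    y = np.convolve(u, w_o)[: len(u)]
    v = rng.normal(0.0, np.sqrt(noise_var), size=len(u))
    return y + v
```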

Figure 1 shows the structure of the NSAF, where the desired signal $d(n)$ and the input signal $u(n)$ are partitioned into $N$ subbands by the analysis filters $H_i(z)$, $i = 0, 1, \ldots, N-1$. The resulting subband signals, $d_i(n)$ and $y_i(n)$ for $i = 0, 1, \ldots, N-1$, are critically decimated to a lower sampling rate commensurate with their bandwidth. Here, the variable $n$ is used to index the original sequences and $k$ to index the decimated sequences for all signals. Then, the decimated filter output signal at each subband is defined as

$$y_{i,D}(k) = \mathbf{u}_i^T(k)\,\hat{\mathbf{w}}(k), \tag{3}$$

where $\mathbf{u}_i^T(k)$ is a $1 \times M$ row vector such that $\mathbf{u}_i(k) = [u_i(kN), u_i(kN-1), \ldots, u_i(kN-M+1)]^T$, and $\hat{\mathbf{w}}(k)$ denotes an estimate for $\mathbf{w}_o$ with length $M$. Thus the decimated subband error signal is given by

$$e_{i,D}(k) = d_{i,D}(k) - y_{i,D}(k), \tag{4}$$

where $d_{i,D}(k) = d_i(kN)$ is the decimated desired signal at each subband.
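The analysis/decimation stage can be sketched as follows; this is a schematic rendering under the assumption of FIR analysis filters, not the paper's implementation.

```python
def subband_decompose(x, analysis_filters):
    """Split a fullband signal into N critically decimated subband signals.

    analysis_filters : list of N FIR analysis filter impulse responses h_i
    Returns a list of decimated sequences x_{i,D}(k) = x_i(kN).
    """
    N = len(analysis_filters)
    # Filter with each H_i(z), then keep every N-th sample (critical decimation)
    return [np.convolve(x, h)[: len(x)][::N] for h in analysis_filters]
```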

In [6], the authors formulated a Lagrangian-based multiple-constraint optimization problem, whose cost function is

$$J(k) = \left\|\hat{\mathbf{w}}(k+1) - \hat{\mathbf{w}}(k)\right\|^2 + \sum_{i=0}^{N-1} \lambda_i \left(d_{i,D}(k) - \mathbf{u}_i^T(k)\,\hat{\mathbf{w}}(k+1)\right), \tag{5}$$

where $\lambda_i$ for $i = 0, 1, \ldots, N-1$ denote the Lagrange multipliers. Solving the cost function (5), the update recursion of the NSAF algorithm is given by [6, 7]

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i(k)\,e_{i,D}(k)}{\|\mathbf{u}_i(k)\|^2}, \tag{6}$$

where $\mu$ is the step-size parameter.
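In code, one NSAF iteration of (6) amounts to the following sketch; the small regularizer delta added to the subband norms is a common practical safeguard and an assumption here.

```python
def nsaf_update(w_hat, U, e_D, mu, delta=1e-6):
    """One NSAF iteration, eq. (6).

    w_hat : current weight estimate, shape (M,)
    U     : subband regressors u_i(k) stacked as columns, shape (M, N)
    e_D   : decimated subband errors e_{i,D}(k), shape (N,)
    mu    : step-size parameter
    """
    norms = np.sum(U * U, axis=0) + delta    # ||u_i(k)||^2 for each subband
    return w_hat + mu * (U @ (e_D / norms))  # sum_i u_i e_{i,D} / ||u_i||^2
```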

3. Weighted $l_1$-Norm Regularized NSAF

3.1. Derivation of the Proposed Algorithm

To reflect the sparsity condition of the true system, that is, $\|\mathbf{w}_o\|_0 \ll M$, a weighted $l_1$-norm of the filter weight estimate is added as a regularization term to the cost function of the NSAF, which gives

$$J(k) = \left\|\hat{\mathbf{w}}(k+1) - \hat{\mathbf{w}}(k)\right\|^2 + \sum_{i=0}^{N-1} \lambda_i \left(d_{i,D}(k) - \mathbf{u}_i^T(k)\,\hat{\mathbf{w}}(k+1)\right) + \gamma\,\left\|\hat{\mathbf{w}}(k+1)\right\|_{1,\mathbf{G}}, \tag{7}$$

where $\|\hat{\mathbf{w}}(k+1)\|_{1,\mathbf{G}}$ accounts for the weighted $l_1$-norm of the filter weight vector and is written as

$$\left\|\hat{\mathbf{w}}(k+1)\right\|_{1,\mathbf{G}} = \left\|\mathbf{G}\,\hat{\mathbf{w}}(k+1)\right\|_1 = \sum_{j=1}^{M} g_j\,\left|\hat{w}_j(k+1)\right|, \tag{8}$$

where $\mathbf{G}$ is an $M \times M$ weighting matrix whose diagonal elements are $g_j > 0$ and whose other elements are equal to zero, and $\hat{w}_j(k+1)$ denotes the $j$th tap weight of $\hat{\mathbf{w}}(k+1)$, for $j = 1, \ldots, M$. In addition, $\gamma$ is a positive parameter which compromises between the error-related term and the weighted $l_1$-norm regularization on the right-hand side of (7).

To find the optimal weight vector which minimizes the cost function (7), the derivative of (7) with respect to $\hat{\mathbf{w}}(k+1)$ is taken and set to zero. Note that the weighted $l_1$-norm regularization term, that is, $\|\hat{\mathbf{w}}(k+1)\|_{1,\mathbf{G}}$, is not differentiable at points where $\hat{w}_j(k+1) = 0$. To address this issue, subgradient calculus [21] is carried out.

Thus, taking the derivative of (7) with respect to the weight vector $\hat{\mathbf{w}}(k+1)$ and setting the derivative equal to zero leads to

$$2\left(\hat{\mathbf{w}}(k+1) - \hat{\mathbf{w}}(k)\right) - \sum_{i=0}^{N-1} \lambda_i\,\mathbf{u}_i(k) + \gamma\,\partial_{\hat{\mathbf{w}}(k+1)} \left\|\hat{\mathbf{w}}(k+1)\right\|_{1,\mathbf{G}} = \mathbf{0}, \tag{9}$$

where $\partial_{\mathbf{x}} f(\mathbf{x})$ denotes a subgradient vector of a function $f$ with respect to $\mathbf{x}$. An available subgradient vector is obtained as [21]

$$\partial_{\hat{\mathbf{w}}(k+1)} \left\|\mathbf{G}\,\hat{\mathbf{w}}(k+1)\right\|_1 = \mathbf{G}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k+1)\right), \tag{10}$$

since $\mathbf{G}$ is assumed to be a diagonal matrix with positive-valued elements, where $\mathrm{sgn}(\cdot)$ is a componentwise sign function defined by

$$\mathrm{sgn}(x) = \begin{cases} x/|x|, & x \neq 0, \\ 0, & x = 0. \end{cases} \tag{11}$$
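Equations (8), (10), and (11) translate directly into NumPy, as in the sketch below; note that np.sign already follows the sgn(0) = 0 convention of (11).

```python
def weighted_l1(w, g):
    """Weighted l1-norm ||G w||_1 with G = diag(g), g > 0, eq. (8)."""
    return np.sum(g * np.abs(w))

def weighted_l1_subgradient(w, g):
    """A subgradient of ||G w||_1 at w: G sgn(w), eq. (10)."""
    return g * np.sign(w)  # np.sign(0) == 0, matching eq. (11)
```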

Substituting (10) into (9) and assuming $\mathrm{sgn}(\hat{\mathbf{w}}(k+1)) \approx \mathrm{sgn}(\hat{\mathbf{w}}(k))$, the weight update becomes

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \frac{1}{2}\sum_{i=0}^{N-1} \lambda_i\,\mathbf{u}_i(k) - \frac{\gamma}{2}\,\mathbf{G}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right). \tag{12}$$

Substituting (12) into the multiple constraints of the NSAF, that is, $d_{i,D}(k) = \mathbf{u}_i^T(k)\,\hat{\mathbf{w}}(k+1)$ for $i = 0, \ldots, N-1$, and rewriting in matrix form leads to

$$\boldsymbol{\lambda} = 2\left[\mathbf{U}^T(k)\,\mathbf{U}(k)\right]^{-1}\left(\mathbf{e}_D(k) + \frac{\gamma}{2}\,\mathbf{U}^T(k)\,\mathbf{G}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)\right), \tag{13}$$

where $\boldsymbol{\lambda} = [\lambda_0, \lambda_1, \ldots, \lambda_{N-1}]^T$ is the Lagrange vector,

$$\mathbf{U}(k) = \left[\mathbf{u}_0(k), \mathbf{u}_1(k), \ldots, \mathbf{u}_{N-1}(k)\right], \qquad \mathbf{e}_D(k) = \left[e_{0,D}(k), e_{1,D}(k), \ldots, e_{N-1,D}(k)\right]^T. \tag{14}$$

By neglecting the off-diagonal elements of $\mathbf{U}^T(k)\,\mathbf{U}(k)$ [6], the components of $\boldsymbol{\lambda}$ in (13) can be simplified to

$$\lambda_i = \frac{2}{\|\mathbf{u}_i(k)\|^2}\left(e_{i,D}(k) + \frac{\gamma}{2}\,\mathbf{u}_i^T(k)\,\mathbf{G}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)\right) \tag{15}$$

for $i = 0, 1, \ldots, N-1$.

Consequently, combining (12) and (15), the update recursion of the sparsity-regularized NSAF is given by

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu\left[\sum_{i=0}^{N-1} \frac{\mathbf{u}_i(k)}{\|\mathbf{u}_i(k)\|^2}\left(e_{i,D}(k) + \frac{\gamma}{2}\,\mathbf{u}_i^T(k)\,\mathbf{G}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)\right) - \frac{\gamma}{2}\,\mathbf{G}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)\right], \tag{16}$$

where $\mu$ is the step-size parameter.
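A minimal sketch of one iteration of (16) follows, assuming the weighting matrix is supplied through its diagonal g; as in the NSAF sketch above, delta is a small safeguard added to the subband norms and is an assumption here.

```python
def sr_nsaf_update(w_hat, U, e_D, mu, gamma, g, delta=1e-6):
    """One sparsity-regularized NSAF iteration, eq. (16).

    g : diagonal of the weighting matrix G (positive entries)
    """
    s = g * np.sign(w_hat)                 # G sgn(w_hat(k))
    norms = np.sum(U * U, axis=0) + delta  # ||u_i(k)||^2 for each subband
    # e_{i,D}(k) + (gamma/2) u_i^T(k) G sgn(w_hat(k)), one entry per subband
    corr = e_D + 0.5 * gamma * (U.T @ s)
    return w_hat + mu * (U @ (corr / norms) - 0.5 * gamma * s)
```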

3.2. Determination of the Weighted $l_1$-Norm Regularization

Here, by choosing the weighting matrix $\mathbf{G}$, two versions of the sparsity-regularized NSAF are developed. First, the use of the identity matrix as the weighting matrix, that is, $\mathbf{G} = \mathbf{I}$, results in the following update recursion:

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu\left[\sum_{i=0}^{N-1} \frac{\mathbf{u}_i(k)}{\|\mathbf{u}_i(k)\|^2}\left(e_{i,D}(k) + \frac{\gamma}{2}\,\mathbf{u}_i^T(k)\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)\right) - \frac{\gamma}{2}\,\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)\right], \tag{17}$$

which is referred to as the $l_1$-norm NSAF ($l_1$-NSAF), the unweighted case. The $l_1$-NSAF uniformly attracts the tap coefficients of $\hat{\mathbf{w}}(k)$ toward zero. This zero-attraction process improves the convergence of the $l_1$-NSAF when the majority of the entries of a system are zero, that is, when the system is sparse.
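With the routine above, the $l_1$-NSAF of (17) is simply the $\mathbf{G} = \mathbf{I}$ case; the numeric values of mu and gamma below are illustrative placeholders, not the paper's settings.

```python
# l1-NSAF step: unweighted case G = I, eq. (17); mu, gamma are illustrative
w_hat = sr_nsaf_update(w_hat, U, e_D, mu=0.5, gamma=5e-4,
                       g=np.ones_like(w_hat))
```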

Second, to approximate the actual sparsity condition of an underlying system, that is, the $l_0$-norm of the system, the weights of $\mathbf{G}$ are chosen inversely proportional to the magnitudes of the actual coefficients of the system, as given by

$$g_j = \frac{1}{|w_{o,j}|}, \quad j = 1, 2, \ldots, M, \tag{18}$$

where $w_{o,j}$ denotes the $j$th coefficient of the system $\mathbf{w}_o$. However, since the actual coefficients of the system are unavailable, the current estimate of the filter weights is used instead of the actual weights, which is referred to as the reweighting scheme [22], as follows:

$$g_j(k) = \frac{1}{\varepsilon + |\hat{w}_j(k)|}, \quad j = 1, 2, \ldots, M, \tag{19}$$

where $\hat{w}_j(k)$ denotes the $j$th tap weight of $\hat{\mathbf{w}}(k)$ and $\varepsilon$ is a small positive value to avoid singularity when $\hat{w}_j(k) = 0$. The weighting matrix $\mathbf{G}(k)$ then has the values $g_j(k)$ as its $j$th diagonal elements and is time-varying. Finally, the update recursion is given by

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu\left[\sum_{i=0}^{N-1} \frac{\mathbf{u}_i(k)}{\|\mathbf{u}_i(k)\|^2}\left(e_{i,D}(k) + \frac{\gamma}{2}\,\mathbf{u}_i^T(k)\,\frac{\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)}{\varepsilon + |\hat{\mathbf{w}}(k)|}\right) - \frac{\gamma}{2}\,\frac{\mathrm{sgn}\!\left(\hat{\mathbf{w}}(k)\right)}{\varepsilon + |\hat{\mathbf{w}}(k)|}\right], \tag{20}$$

where $|\hat{\mathbf{w}}(k)|$ is taken componentwise and the vector division in the last terms accounts for a componentwise division. This recursion is called the reweighted $l_1$-norm NSAF ($l_1$-RNSAF).
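Likewise, the $l_1$-RNSAF of (20) reuses the same routine with the time-varying diagonal of (19) recomputed at every iteration; eps and the tuning values are again illustrative assumptions, not the paper's settings.

```python
def reweighting_diag(w_hat, eps=1e-2):
    """Diagonal of G(k), eq. (19): g_j(k) = 1 / (eps + |w_hat_j(k)|)."""
    return 1.0 / (eps + np.abs(w_hat))

# l1-RNSAF step: recompute G(k) from the current estimate, then update
w_hat = sr_nsaf_update(w_hat, U, e_D, mu=0.5, gamma=5e-4,
                       g=reweighting_diag(w_hat))
```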

Table 1 lists the number of multiplications and divisions per iteration of the NSAF [6], the $l_1$-NSAF, and the $l_1$-RNSAF. As shown in Table 1, the use of the $l_1$-norm constraint leads to only an acceptable increase in computation.

4. Numerical Results

The performance of the proposed sparsity-regularized NSAFs is validated through computer simulations in a system identification scenario in which the unknown system is randomly generated. Two lengths of the unknown system, a shorter and a longer one, are used in the experiments, with only a small number of the taps being nonzero. The nonzero filter weights are positioned randomly, and their values are drawn from a Gaussian distribution. The same sparsity level is used in all simulations except for Figure 5, in which various sparsity conditions are considered. The adaptive filter and the unknown system are assumed to have the same number of taps. The input signals are obtained by filtering a white, zero-mean Gaussian random sequence through a first-order autoregressive system. The signal-to-noise ratio (SNR) is calculated by

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{E\left[y^2(n)\right]}{E\left[v^2(n)\right]}\right),$$

where $y(n) = \mathbf{u}^T(n)\,\mathbf{w}_o$. The measurement noise $v(n)$ is added to $y(n)$ to set the SNR (10 and 20 dB, among others, are used below). In order to compare the convergence performance, the normalized mean square deviation (MSD),

$$\mathrm{MSD} = E\left[\frac{\left\|\mathbf{w}_o - \hat{\mathbf{w}}(k)\right\|^2}{\left\|\mathbf{w}_o\right\|^2}\right],$$

is computed and averaged over 50 independent trials. Cosine-modulated filter banks [23], with a fixed number of subbands and a fixed prototype filter length, are used in the simulations. For comparison purposes, the proportionate NSAF (PNSAF) [12], which was developed for sparse system identification, is also considered. The same step size is used for all SAF algorithms except the PNSAF, whose step sizes in Figures 2 and 6 are chosen to achieve a steady-state MSD similar to that of the $l_1$-RNSAF for fair comparison. For the $l_1$-RNSAF, a small $\varepsilon$ is chosen; in addition, a proportionate parameter is used for the PNSAF. These parameter values are obtained by repeated trials to minimize the steady-state MSD.
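The scaffolding of such an experiment can be sketched as follows; the AR(1) pole value and the helper names are assumptions for illustration, since the paper's exact settings are not reproduced here.

```python
from scipy.signal import lfilter

rng = np.random.default_rng(0)

def sparse_system(M, n_nonzero, rng):
    """Random sparse impulse response: n_nonzero Gaussian taps at random positions."""
    w_o = np.zeros(M)
    idx = rng.choice(M, size=n_nonzero, replace=False)
    w_o[idx] = rng.normal(size=n_nonzero)
    return w_o

def colored_input(n_samples, a, rng):
    """White Gaussian noise filtered by a first-order system 1 / (1 - a z^-1)."""
    return lfilter([1.0], [1.0, -a], rng.normal(size=n_samples))

def normalized_msd_db(w_o, w_hat):
    """Normalized MSD in dB: 10 log10(||w_o - w_hat||^2 / ||w_o||^2)."""
    return 10.0 * np.log10(np.sum((w_o - w_hat) ** 2) / np.sum(w_o ** 2))
```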

Figure 2 shows the normalized MSD curves of the NLMS, NSAF, PNSAF, $l_1$-NSAF, and $l_1$-RNSAF for the shorter system. For the $l_1$-NSAF and $l_1$-RNSAF, the same value of $\gamma$ is used. As shown in Figure 2, not only does the $l_1$-RNSAF outperform the conventional NLMS, NSAF, PNSAF, and $l_1$-NSAF, but the $l_1$-NSAF also performs better than the other conventional algorithms, in terms of both convergence rate and steady-state misalignment.

In Figure 3, to verify the effect of the regularization parameter $\gamma$ on convergence performance, the normalized MSD curves of the $l_1$-RNSAF for different $\gamma$ values are illustrated in the same setting as Figure 2. Across the tested range of $\gamma$ values, the $l_1$-RNSAF is not excessively sensitive to $\gamma$. The analysis of an optimal $\gamma$ value remains future work.

Next, the performance of the proposed $l_1$-norm regularized NSAFs is compared to that of the original NSAF under different SNR conditions. Figure 4 depicts the normalized MSD curves of the NSAF, $l_1$-NSAF, and $l_1$-RNSAF under SNR = 10 and 20 dB, respectively. The same $\gamma$ value is used for the $l_1$-NSAF and $l_1$-RNSAF. It is clear that both the $l_1$-NSAF and the $l_1$-RNSAF are superior to the NSAF in all SNR cases. Furthermore, the $l_1$-RNSAF outperforms the $l_1$-NSAF.

In Figure 5, the convergence properties of the NSAF and the $l_1$-RNSAF are compared under various sparsity conditions of the underlying system. With the length of the system fixed, four different numbers of nonzero taps are considered. The same $\gamma$ value is used for the $l_1$-RNSAF. Figure 5 shows that the NSAF is insensitive to the sparsity condition. In contrast, the results indicate that the sparser the underlying system, the better the $l_1$-RNSAF performs.

Figure 6 presents the performance comparison of the NSAF, $l_1$-NSAF, and $l_1$-RNSAF for the longer system. For the $l_1$-NSAF and $l_1$-RNSAF, the value of $\gamma$ is again chosen by repeated trials. Results similar to those of Figure 2 are observed in Figure 6.

Finally, the tracking capabilities of the algorithms in the face of a sudden change in the system are tested. Figure 7 shows the results when the impulse response of the unknown system is right-shifted by a number of taps. The same value of $\gamma$ as in Figure 2 is used. As can be seen, the $l_1$-NSAF and $l_1$-RNSAF track the weight change without losing either convergence rate or steady-state misalignment compared to the conventional NLMS, NSAF, and PNSAF. Specifically, the $l_1$-RNSAF achieves better performance than the $l_1$-NSAF in terms of both convergence rate and steady-state misalignment.

5. Conclusion

A new family of NSAFs which takes into account the sparsity condition of an underlying system has been presented by incorporating a weighted $l_1$-norm constraint on the filter weights into the cost function. The update recursion is obtained by employing subgradient calculus on the weighted $l_1$-norm constraint term. Subsequently, two sparsity-regularized NSAFs, namely, the unweighted $l_1$-NSAF and the reweighted $l_1$-RNSAF, have been developed. The numerical results indicate that the proposed $l_1$-NSAF and $l_1$-RNSAF achieve highly improved convergence performance over the conventional algorithms for sparse system identification.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was supported by the new faculty research program funded by Gangneung-Wonju National University (2013100162).