Abstract

This paper proposes a new approach to identifying time varying sparse systems. The proposed approach uses the Zero-Attracting Least Mean Square (ZA-LMS) algorithm with an adaptive optimal zero attractor controller that can adapt dynamically to the sparseness level and provide appreciable performance in all environments, ranging from sparse to nonsparse conditions. The optimal zero attractor controller is derived based on the criterion of the largest decrease in mean square deviation (MSD) from one iteration to the next. A simple update rule is also proposed to change the zero attractor controller according to the level of sparsity. It is found that, for a nonsparse system, the proposed approach converges to LMS (as ZA-LMS cannot outperform LMS when the system is nonsparse) and, for a highly sparse system, since the proposed approach is based on the optimal zero attractor controller, it converges similarly to ZA-LMS or even faster (depending on the value of the zero attractor controller chosen for the ZA-LMS algorithm). The performance of the proposed algorithm is better than that of ZA-LMS and LMS when the system is semisparse. Simulations were performed to show that the proposed algorithm is robust against variable sparsity levels.

1. Introduction

A sparse system is characterized by an impulse response with a large number of zero or near-zero magnitude coefficients and only a few coefficients of large magnitude. In other words, it is an impulse response in which a large fraction of the energy is concentrated in a small region. For example, in underwater acoustics, the channel impulse response has a sparse multipath structure in which most of the energy is concentrated in only a few small regions [1]. Another example of a sparse system is the acoustic echo path measured in a loudspeaker-enclosure-microphone (LEM) system. Here, the echo path has only 9–12% active coefficients owing to the large propagation delay, and it varies with the movement of objects, speakers, temperature, and so forth [2]. Other prominent examples of sparse systems are the network echo channel [3], where only 90–100 filter coefficients have large magnitude in an impulse response of length 1024, and wireless multipath channels, which consist of only a few active paths [4]. The impulse response is not only sparse but also time varying [14]. If the algorithms used for identifying such systems can be made to exploit this sparseness, improved performance can be obtained.

Traditional adaptive filters such as Least Mean Square (LMS), Normalized LMS (NLMS), and Affine Projection Algorithms (APA) fail to exploit the sparsity level to improve their performance [5]. The literature reveals that several variants have been developed to exploit sparsity. Some of the well-known ones are the proportionate type algorithms and their variants [6–8], partial update algorithms [9], $l_1$ norm [10–12] and $l_0$ norm [13, 14] based algorithms, and exponentiated gradient algorithms [15]. Among these variants, the $l_1$ norm based algorithms are very popular due to their convexity and because they provide uniform attraction to all filter taps [16]. They work by including an extra term, called the zero attractor term, in the original cost function, and thus they achieve faster convergence and lower steady state mean square error (MSE) than their conventional counterparts when the system is sparse [10, 11]. The chief advantage of ZA-LMS is that its computational complexity is lower than that of the proportionate type adaptive filters [17] and of the ZA-APA and ZA-NLMS algorithms and their variants [16], which is a major consideration especially when the system is long, as in echo cancellation applications. However, the major drawback of ZA-LMS is that it works well only when the system is highly sparse; the performance deteriorates as the sparsity level decreases and becomes worse than LMS under the nonsparse condition [18, 19]. Another difficulty of ZA-LMS is that the convergence and steady state error depend on the value of the zero attractor controller [18], which motivates a proper selection rule.

Several attempts have been made to improve the performance of ZA-LMS under the nonsparse condition. One such approach is the reweighted ZA-LMS (RZA-LMS) [10], in which the zero attraction is reweighted so that it acts mainly on the taps that are close to zero. The RZA-LMS suffers from the difficulty of selecting an appropriate shrinkage factor, especially for a time varying sparse system [16, 20]. A combinational approach is another alternative. In the convex combination of ZA-LMS and LMS proposed in [21], the mixing parameter is updated so that the combination always follows the component filter that provides faster convergence and lower steady state MSE. Computational complexity is the major drawback of this approach. Moreover, the performance of the algorithm depends on the zero attractor controller, which again necessitates an optimal zero attractor controller. Several selection rules for the zero attractor controller have been proposed for ZA-LMS, but they are not practically feasible [18, 19].

Thus, this paper proposes an alternative approach to deal with time varying sparse systems. Here, the optimal zero attractor controller is first found by choosing the criterion that provides the largest decrease in the MSD from one iteration to the next. In order to adapt to time varying sparsity, a simple rule for updating the zero attractor controller is proposed. It is found from [10] that the difference between the $l_1$ norm of the estimated filter weights and the correlation of the optimal weights with the sign of the filter weights is positive only if the system is highly sparse and becomes negative for a nonsparse system. Therefore, this difference is used as a metric to update the zero attractor controller. Thus, robustness in the context of variable sparsity is achieved, and this is further verified by simulations.

The rest of the paper is organized as follows. Section 2 reviews the ZA-LMS algorithm. Section 3 proposes the adaptive zero attractor controller based ZA-LMS: an optimal zero attractor controller based on the MSD is obtained, a practical optimal zero attractor controller is derived, and an update rule is proposed. Section 4 deals with simulations, and conclusions are provided in Section 5.

2. Review of ZA-LMS Algorithm

Consider an unknown system with input vector $\mathbf{x}(n)$ of length $N$. The desired response is modeled as a multiple linear regression model given by
$$d(n) = \mathbf{w}_o^T \mathbf{x}(n) + v(n), \quad (1)$$
where $\mathbf{w}_o$ is the optimal weight vector of length $N$ that needs to be estimated and $v(n)$ is the noise source. Let $y(n) = \mathbf{w}^T(n)\mathbf{x}(n)$ be the estimated output for the given input $\mathbf{x}(n)$ and estimated weights $\mathbf{w}(n)$. If $e(n) = d(n) - y(n)$ denotes the error signal, obtained as the difference between the desired and estimated responses, the ZA-LMS algorithm updates the weights by the recursion [10]
$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu e(n)\mathbf{x}(n) - \rho\,\mathrm{sgn}(\mathbf{w}(n)), \quad (2)$$
where $\mu$ is the step size and $\mathrm{sgn}(\cdot)$ is the component-wise sign function given by
$$\mathrm{sgn}(x) = \begin{cases} x/|x|, & x \neq 0, \\ 0, & x = 0. \end{cases} \quad (3)$$
From (2), it is found that the update equation consists of three terms. The first two terms are the same as in the conventional LMS, and the third term is the zero attractor term, which is responsible for attracting the coefficients toward zero, thereby accelerating convergence; $\rho$ is the zero attractor controller, which decides the strength of the attraction.
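To make the recursion concrete, the following minimal Python sketch implements one ZA-LMS iteration as in (2); the function and variable names are illustrative and are not taken from the paper.

```python
import numpy as np

def za_lms_update(w, x, d, mu, rho):
    """One ZA-LMS iteration: an LMS step plus a zero-attractor term, cf. (2)."""
    e = d - np.dot(w, x)                         # error between desired and estimated output
    w_next = w + mu * e * x - rho * np.sign(w)   # LMS gradient step + zero attraction
    return w_next, e
```

Note that `np.sign` returns 0 for zero-valued coefficients, matching the component-wise sign function in (3).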

Convergence analysis of ZA-LMS [18] indicates that the zero attractor controller $\rho$ plays a major role in the tradeoff between convergence speed and steady state error. For a sparse system, a small value of $\rho$ lowers the steady state error at the cost of slower convergence; if faster convergence is required, $\rho$ is increased, but the steady state error also increases. This calls for an optimal $\rho$. Also, it is evident from [18] that ZA-LMS cannot outperform standard LMS when the system is nonsparse. Besides, $\rho$ should follow the sparsity level when the system changes from sparse to semisparse or nonsparse. Thus, a constant value of $\rho$ is not suitable, especially for a time varying sparse system, and a robust algorithm can only be achieved by changing the value of the zero attractor controller according to the level of sparsity. Therefore, an adaptive zero attractor controller based ZA-LMS is proposed in order to improve robustness against variable sparsity levels.

3. Proposed Algorithm

This section proposes the adaptive ZA-LMS algorithm. First, a theoretical optimal zero attractor controller is deduced based on the largest decrease in MSD. A practical optimal zero attractor controller is then obtained, and a simple update rule for the proposed algorithm is given.

The proposed algorithm is based on a time varying zero attractor controller. Thus, replacing the constant $\rho$ in (2) by a time varying function $\rho(n)$, the update recursion of the adaptive ZA-LMS becomes
$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu e(n)\mathbf{x}(n) - \rho(n)\,\mathrm{sgn}(\mathbf{w}(n)). \quad (4)$$
Here, $\mu$ is assumed to be constant in order to have stable operation [18].

3.1. Assumptions

The following assumptions are used in this work:
(A.1) The input $\mathbf{x}(n)$ is independent and identically distributed (i.i.d.) and white with zero mean and variance $\sigma_x^2$.
(A.2) The noise $v(n)$ is also i.i.d. and white with zero mean and variance $\sigma_v^2$.
(A.3) The weight error vector is independent of the input.
These assumptions are commonly used in analyzing adaptive filters [22]. Using them, the optimal zero attractor controller is derived.

3.2. Optimal Zero Attractor Controller

The optimal value of $\rho(n)$ is based on the objective of the largest decrease in MSD from one iteration to the next,
$$\rho_{\mathrm{opt}}(n) = \arg\max_{\rho(n)} \left\{ E\left[\|\tilde{\mathbf{w}}(n)\|^2\right] - E\left[\|\tilde{\mathbf{w}}(n+1)\|^2\right] \right\},$$
where $\tilde{\mathbf{w}}(n)$ is the weight error vector given by $\tilde{\mathbf{w}}(n) = \mathbf{w}(n) - \mathbf{w}_o$. The update recursion (4) in terms of the weight error vector can be written as
$$\tilde{\mathbf{w}}(n+1) = \tilde{\mathbf{w}}(n) + \mu e(n)\mathbf{x}(n) - \rho(n)\,\mathrm{sgn}(\mathbf{w}(n)). \quad (5)$$
Squaring both sides of (5), taking the expectation, and substituting $e(n) = -\tilde{\mathbf{w}}^T(n)\mathbf{x}(n) + v(n)$ together with assumptions (A.1)–(A.3), we obtain
$$E\left[\|\tilde{\mathbf{w}}(n+1)\|^2\right] = \left(1 - 2\mu\sigma_x^2\right) E\left[\|\tilde{\mathbf{w}}(n)\|^2\right] + \mu^2 E\left[e^2(n)\|\mathbf{x}(n)\|^2\right] - 2\rho(n)\left(1 - \mu\sigma_x^2\right) E\left[\tilde{\mathbf{w}}^T(n)\,\mathrm{sgn}(\mathbf{w}(n))\right] + \rho^2(n)\, E\left[\|\mathrm{sgn}(\mathbf{w}(n))\|^2\right]. \quad (6)$$
As the terms that do not involve $\rho(n)$ are constant with respect to it [18, 22], the optimal zero attractor controller is obtained by differentiating (6) with respect to $\rho(n)$ and equating the result to zero. Thus, the optimal zero attractor controller is given by
$$\rho_{\mathrm{opt}}(n) = \frac{\left(1 - \mu\sigma_x^2\right) E\left[\tilde{\mathbf{w}}^T(n)\,\mathrm{sgn}(\mathbf{w}(n))\right]}{E\left[\|\mathrm{sgn}(\mathbf{w}(n))\|^2\right]}. \quad (7)$$
The optimal value obtained consists of nonlinear terms. In order to find a feasible solution, let
$$\beta(n) = E\left[\tilde{\mathbf{w}}^T(n)\,\mathrm{sgn}(\mathbf{w}(n))\right], \qquad \alpha = E\left[\|\mathrm{sgn}(\mathbf{w}(n))\|^2\right]. \quad (8)$$
The value of $\alpha$ is always positive and is equal to $N$, since the adaptive weights are practically never exactly zero [10, 21]. The step size is chosen such that $0 < \mu < 1/\sigma_x^2$ [18] in order to have stability. Thus, $\left(1 - \mu\sigma_x^2\right) > 0$. In order to find $\beta(n)$, the filter weights are divided into nonzero ($i \in NZ$) and zero ($i \in Z$) filter coefficients such that $w_{o,i} \neq 0$ for $i \in NZ$ and $w_{o,i} = 0$ for $i \in Z$ [10, 18, 21]. If $\tilde{\mathbf{w}}(n) = \mathbf{w}(n) - \mathbf{w}_o$ is substituted in $\beta(n)$ and if the weights are assumed to follow a Gaussian distribution, then
$$\beta(n) = \sum_{i \in Z} E\left[w_i(n)\,\mathrm{sgn}(w_i(n))\right] + \sum_{i \in NZ} E\left[\left(w_i(n) - w_{o,i}\right)\mathrm{sgn}(w_i(n))\right]. \quad (9)$$
For $i \in Z$, $w_{o,i} = 0$; if Price's theorem is used, $E\left[w_i(n)\,\mathrm{sgn}(w_i(n))\right] = \sqrt{2\sigma_{w_i}^2(n)/\pi}$, and then
$$\beta(n) = \sum_{i \in Z} \sqrt{\frac{2\sigma_{w_i}^2(n)}{\pi}} + \sum_{i \in NZ} E\left[\left(w_i(n) - w_{o,i}\right)\mathrm{sgn}(w_i(n))\right], \quad (10)$$
where $\sigma_{w_i}^2(n)$ is the variance of the $i$th weight. Hence, the first term varies with the zero filter coefficients and the second term varies with the nonzero filter coefficients. If the number of nonzero coefficients is high, then the value of $\beta(n)$ will be negative, as more bias is obtained on the nonzero taps in the ZA-LMS algorithm. On the other hand, if the number of zero coefficients is high, then a positive value of $\beta(n)$ is obtained, as the first term in (10) dominates the second term [10]. Thus, the rule for updating the zero attractor controller of ZA-LMS is given by
$$\rho(n) = \begin{cases} \dfrac{\left(1 - \mu\sigma_x^2\right)\beta(n)}{N}, & \beta(n) > 0, \\ 0, & \text{otherwise}. \end{cases} \quad (11)$$
Since the update equation for $\rho(n)$ includes $\beta(n)$, which is a nonlinear term and which involves $\mathbf{w}_o$, which is not known in advance, a time average method is adopted to find the value of $\beta(n)$. Noting from assumptions (A.1)–(A.3) that $E\left[e(n)\mathbf{x}^T(n)\,\mathrm{sgn}(\mathbf{w}(n))\right] = -\sigma_x^2\,\beta(n)$, the time average is formed as
$$\hat{\beta}(n) = \lambda\,\hat{\beta}(n-1) - (1 - \lambda)\,\frac{e(n)\,\mathbf{x}^T(n)\,\mathrm{sgn}(\mathbf{w}(n))}{\sigma_x^2}, \quad (12)$$
where $\lambda$ is the smoothing factor, which varies as $0 < \lambda < 1$, and $\hat{\beta}(n)$ replaces $\beta(n)$ in (11).
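A compact Python sketch of the resulting adaptive ZA-LMS, combining the recursion (4) with the controller rule (11) and the time average (12), is given below. It follows the reconstruction above; the names (`adaptive_za_lms`, `beta_hat`, etc.) are illustrative, and the estimator is built from assumptions (A.1)–(A.3) rather than being the authors' exact code.

```python
import numpy as np

def adaptive_za_lms(x_seq, d_seq, N, mu, lam, sigma_x2):
    """Adaptive ZA-LMS sketch: rho(n) follows (11)-(12).

    x_seq, d_seq : input and desired-response samples (same length)
    N            : filter length
    mu           : step size
    lam          : smoothing factor (0 < lam < 1)
    sigma_x2     : input variance (white input assumed known, per (A.1))
    """
    w = np.zeros(N)
    x = np.zeros(N)          # regressor (tapped delay line)
    beta_hat = 0.0           # time-averaged estimate of E[w_tilde^T sgn(w)], cf. (12)
    for n in range(len(d_seq)):
        x = np.concatenate(([x_seq[n]], x[:-1]))   # shift in the new input sample
        e = d_seq[n] - np.dot(w, x)
        s = np.sign(w)
        # (12): recursive time average of the instantaneous estimate of beta(n)
        beta_hat = lam * beta_hat - (1 - lam) * e * np.dot(x, s) / sigma_x2
        # (11): optimal controller when beta_hat > 0 (sparse/semisparse), zero otherwise
        rho = (1 - mu * sigma_x2) * beta_hat / N if beta_hat > 0 else 0.0
        # (4): adaptive ZA-LMS weight update
        w = w + mu * e * x - rho * s
    return w
```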

3.3. Justification of the Adaptive Rule

The update rule for the zero attractor controller must be such that the controller operates at the optimal value for sparse and semisparse systems and goes to zero for a nonsparse system. Equation (11) gives this rule. It can be seen that, in the case of highly sparse and semisparse systems, the value of $\beta(n)$ is positive, as the first term in (10) is larger than the second. The optimal zero attractor controller is then obtained as a function of the variance of the weights. A large value of $\rho(n)$ is obtained if the system is highly sparse, owing to the large number of zero coefficients and small number of nonzero coefficients, and a smaller value is obtained for a semisparse system, which has roughly equal numbers of zero and nonzero coefficients. On the other hand, if the nonzero taps dominate, $\beta(n)$ becomes negative, as the second term exceeds the first. The zero attractor controller then becomes zero, which is the required behavior for a nonsparse system. Thus, the proposed algorithm is robust under variable sparsity conditions.
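As a quick numerical illustration of this sign behavior (with arbitrary toy values, not the paper's data), the instantaneous version of $\beta(n)$ can be evaluated for a sparse and a nonsparse optimum:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32

def beta_metric(w_o, w):
    """Instantaneous version of beta(n): (w - w_o)^T sgn(w) = ||w||_1 - w_o^T sgn(w)."""
    return float(np.dot(w - w_o, np.sign(w)))

# Sparse optimum: 1 nonzero tap; estimate = optimum + small gradient noise.
w_o_sparse = np.zeros(N)
w_o_sparse[3] = 1.0
w_sparse = w_o_sparse + 0.01 * rng.standard_normal(N)

# Nonsparse optimum: all taps active; estimate slightly shrunk toward zero by the attractor.
w_o_dense = rng.standard_normal(N)
w_dense = 0.95 * w_o_dense + 0.01 * rng.standard_normal(N)

print(beta_metric(w_o_sparse, w_sparse))  # positive: many zero taps each contribute |w_i| > 0
print(beta_metric(w_o_dense, w_dense))    # negative: shrinkage bias on the nonzero taps dominates
```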

4. Simulations

The proposed algorithm is further tested through simulations carried out for identification of unknown systems. The adaptive filter and the unknown system are assumed to have the same length. The input and noise are both white Gaussian sources with zero mean and variances $\sigma_x^2 = 1$ and $\sigma_v^2$, respectively, with $\sigma_v^2$ chosen such that the SNR is 30 dB. It is also assumed that the variance of the noise source is known. The results are averaged over 100 independent runs. The normalized MSD, defined as $E\left[\|\mathbf{w}_o - \mathbf{w}(n)\|^2\right] / \|\mathbf{w}_o\|^2$, is used to evaluate the performance of the algorithms.
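As an illustration, a small helper of the kind used to produce such curves, evaluating the normalized MSD (here converted to dB, a common way of plotting it), could look as follows; this is a sketch, not the authors' code.

```python
import numpy as np

def normalized_msd_db(w_opt, w_est):
    """Normalized MSD in dB: ||w_o - w(n)||^2 / ||w_o||^2 (run-averaging done by the caller)."""
    nmsd = np.sum((w_opt - w_est) ** 2) / np.sum(w_opt ** 2)
    return 10.0 * np.log10(nmsd)
```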

In the first experiment, an unknown system with 32 coefficients is taken. In order to evaluate the proposed algorithm, LMS [5] and ZA-LMS [10] are also simulated along with the proposed adaptive ZA-LMS algorithm. All three conditions, namely, sparse, semisparse, and nonsparse, are considered. The sparse system has one nonzero coefficient whose position is chosen randomly, the semisparse system has equal numbers of zero and nonzero filter coefficients, and the nonsparse system has 32 nonzero filter coefficients. The same step size $\mu$ is used for all the algorithms; for the ZA-LMS algorithm, $\rho$ is set as per (42) of [18], and for the adaptive ZA-LMS algorithm the smoothing factor $\lambda$ is fixed. The simulation results are shown in Figure 1.

For the first 3000 iterations, the system is sparse; for the next 3000 iterations, it changes to semisparse; and for the last 3000 iterations, the nonsparse condition is applied. When the system is highly sparse during the first 3000 iterations, adaptive ZA-LMS performs better than standard LMS and ZA-LMS with constant $\rho$, whereas when the system is semisparse, adaptive ZA-LMS maintains the best performance, with the lowest steady state error among the three filters. After 6000 iterations, when the system is nonsparse, the performance of ZA-LMS degrades, while the proposed adaptive ZA-LMS remains comparable with standard LMS. The same experiment is repeated with a different step size for all the algorithms and a different $\rho$ for ZA-LMS, and the corresponding performance curves are plotted in Figure 2.
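The way the time varying unknown system of this experiment can be constructed and the data generated is sketched below; the random placement of nonzero taps, their values, and the SNR scaling convention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, total = 32, 9000
sigma_x2 = 1.0

# Three unknown systems: sparse (1 nonzero tap), semisparse (16 nonzero), nonsparse (32 nonzero).
w_sparse = np.zeros(N)
w_sparse[rng.integers(N)] = 1.0
w_semi = np.zeros(N)
w_semi[rng.choice(N, 16, replace=False)] = rng.standard_normal(16)
w_nonsparse = rng.standard_normal(N)

x_seq = np.sqrt(sigma_x2) * rng.standard_normal(total)
d_seq = np.zeros(total)
x = np.zeros(N)
for n in range(total):
    # switch the unknown system every 3000 samples: sparse -> semisparse -> nonsparse
    w_o = w_sparse if n < 3000 else (w_semi if n < 6000 else w_nonsparse)
    x = np.concatenate(([x_seq[n]], x[:-1]))
    noise_std = np.linalg.norm(w_o) * np.sqrt(sigma_x2) * 10 ** (-30 / 20)  # ~30 dB SNR (assumed scaling)
    d_seq[n] = np.dot(w_o, x) + noise_std * rng.standard_normal()
# x_seq and d_seq can then be fed to the adaptive ZA-LMS sketch given earlier
# and to LMS / ZA-LMS for comparison, with the normalized MSD recorded per run.
```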

Several interesting findings can be observed from Figures 1 and 2. First, under all the environmental conditions, the proposed algorithm gives the lowest steady state error with fast convergence, which confirms its robustness against variable sparsity conditions. This is expected since, as per (11), $\rho(n)$ is changed based on $\beta(n)$. If the present weights result in a positive value of $\beta(n)$, then the optimal value of $\rho(n)$ is obtained from the variance of the weights. On the other hand, if the system changes to nonsparse, then $\beta(n)$ is negative; thus (11) makes the algorithm work with $\rho(n) = 0$ so as to converge like LMS. Thus, the proposed algorithm gives lower steady state error than LMS and ZA-LMS in both sparse and semisparse conditions, as seen in Figures 1 and 2. Secondly, as per (4), the proposed algorithm should behave like the LMS algorithm under the nonsparse condition, and this is confirmed in Figures 1 and 2.

In order to show that the proposed algorithm works with an optimal value of the zero attractor controller, a second experiment is conducted and the results are plotted in Figure 3. Here, the system and the parameters are chosen to match those used for the ZA-LMS algorithm in the previous literature [10, 18]. The unknown system has 16 filter coefficients, with one nonzero coefficient for the sparse case and 10 for the semisparse case; the nonsparse system has all nonzero filter coefficients. The step size $\mu$ and the zero attractor controller $\rho$ are chosen as in [10, 18], and all other parameters are the same as in the first experiment.

Thus, from Figure 3, it is evident that under the sparse condition the proposed adaptive ZA-LMS adapts itself and converges like ZA-LMS operated under its optimal setting, and under the semisparse and nonsparse conditions it converges like LMS. This demonstrates the effectiveness of (11) in selecting an optimal zero attractor controller under all conditions of sparsity, ranging from sparse to nonsparse.

Another interesting way to analyze the performance of the proposed algorithm is to plot the time evolution of $\rho(n)$ for the above three environmental conditions with different sparsity levels. For this, $\rho(n)$ versus samples is plotted in Figures 4 and 5 for the first experiment. From Figures 4 and 5 it can be observed that $\rho(n)$ converges to 0 when the system is nonsparse, as predicted. This is because ZA-LMS cannot outperform LMS under the nonsparse condition, so any nonzero value of $\rho$ yields higher steady state error. In the case of the highly sparse and semisparse systems, $\rho(n)$ converges to an optimal value, with a higher value for the highly sparse system and an intermediate value for the semisparse system.

Next, the proposed approach is evaluated for different values of SNR. For this, SNRs of 10 dB and 20 dB are chosen and the MSD analysis is carried out as shown in Figures 6 and 7. As expected, the proposed algorithm is robust against different SNRs as well.

The next experiment evaluates the performance of the proposed approach in an echo cancellation application with 512 coefficients [3]. The sparse system consists of 40 nonzero filter coefficients, reflecting a realistic situation in echo cancellation [3], and the semisparse system is made of equal numbers of nonzero and zero filter taps. The nonsparse system has 512 nonzero filter coefficients. The values of $\rho$, the step size, and the SNR are chosen to be the value given by criterion 1 of [18], 0.001, and 30 dB, respectively. As seen from Figure 8, when the system is highly sparse, the performance of the proposed algorithm is similar to that of ZA-LMS, since ZA-LMS is operated based on criterion 1 of [18]. However, criterion 1 specifies only an upper limit; there is no procedure to select the optimal value, which can be found only by trial and error. Moreover, for the semisparse and nonsparse systems, the performance of ZA-LMS deteriorates because a constant zero attractor controller is used. These disadvantages are eliminated in the proposed algorithm, which adapts itself to the optimal value at all levels of sparsity, making it suitable for echo cancellation, where time varying sparsity is common.

Figure 9 evaluates the tracking capability of the proposed algorithm. For the first 3000 samples, the highly sparse system of the first experiment is taken; the system is then suddenly changed from 1 nonzero filter tap to 3 nonzero taps after 3000 samples and to 5 nonzero taps after 6000 samples. It is found that the algorithm also has good tracking capability.

5. Conclusions

An adaptive ZA-LMS algorithm is proposed in this paper. The proposed algorithm has an adaptive zero attractor controller that is changed based on the characteristics of the filter coefficients. A simple update rule is also proposed, which makes the algorithm work with the optimal zero attractor controller depending on the numbers of zero and nonzero filter coefficients. Thus, the algorithm provides better performance than LMS for highly sparse systems by exploiting the sparse nature and behaves like LMS under the nonsparse condition, thereby providing robustness against variable sparsity, which is demonstrated through simulations in the context of identification of an unknown system.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.