The Scientific World Journal
Volume 2014, Article ID 937252, 15 pages
http://dx.doi.org/10.1155/2014/937252
Research Article

Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

Yingsong Li and Masanori Hamamura

Graduate School of Engineering, Kochi University of Technology, Kami-shi 782-8502, Japan

Received 10 December 2013; Accepted 30 January 2014; Published 26 March 2014

Academic Editors: G. Jovanovic Dolecek, C. Saravanan, and D. Tay

Copyright © 2014 Yingsong Li and Masanori Hamamura. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm achieves this improvement by incorporating a smooth approximation of the l0-norm (SL0) penalty on the coefficients into the standard APA cost function. This penalty gives rise to a zero attractor that promotes the sparsity of the channel taps during channel estimation and hence accelerates the convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both the convergence speed and the steady-state behavior over a designated sparse channel. Furthermore, the SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases.

1. Introduction

With the development of wireless communication, there have been increasing demands for higher transmission rates in modern communication systems. This has led to the development of new standards for various wireless devices, such as smartphones, laptops, and iPads [1–5]. Given these requirements, broadband signal transmission is an essential technique for next-generation wireless communication systems [6]. In broadband wireless communications, a "hilly terrain" (HT) delay profile gives rise to a sparsely distributed multipath channel in which most of the taps are zero or close to zero, while only a few taps are dominant [4]. In this paper, we consider communication problems that involve the estimation and equalization of channels with a large delay spread but a small nonzero support, which is known as sparse channel estimation.

Recently, a promising approach to sparse channel estimation based on compressed sensing (CS) has been proposed and extensively investigated to improve the performance over such sparse wireless communication channels [7–9]. However, these CS-based channel estimation algorithms are sensitive to channel interference. Another effective class of methods that has been widely studied for channel estimation is adaptive filtering algorithms [10–13], such as the least mean square (LMS), recursive least squares (RLS), and Kalman filter algorithms. However, these standard adaptive filtering algorithms cannot exploit the sparsity of the wireless communication channel, and hence they perform poorly on sparse signals. To utilize the sparse characteristic of such channels, improved adaptive filtering algorithms based on partial-updating techniques have been proposed and investigated for wireless communications [14–16]. However, partial updating degrades the estimation performance in comparison with the standard LMS and RLS algorithms.

Motivated by the widely developed CS techniques [17, 18], efforts have been made to incorporate CS into adaptive filtering algorithms in order to improve their performance for sparse signal recovery. For example, a Kalman filtered compressed sensing (KF-CS) algorithm has been proposed and applied to magnetic resonance imaging (MRI) by combining CS with the standard Kalman filter [19]. In this algorithm, the Kalman filter estimates the support set, which has a significant effect on the estimation error. Furthermore, another algorithm, known as least squares compressed sensing (LS-CS), has been developed and investigated using the CS and RLS techniques [20, 21]. Unfortunately, these algorithms are highly complex because of the computational complexity of the Kalman filter and RLS algorithms. The LMS algorithm has attracted considerable attention in recent years owing to its low computational complexity and reliable recovery capability. Inspired by CS theory [17, 18] and the KF-CS and LS-CS algorithms, several sparsity-aware LMS algorithms have been proposed that add norm-constraint terms to the cost function of the standard LMS algorithm [6, 22–27]. These studies found that such norm-constrained sparsity-aware LMS algorithms can achieve faster convergence and better steady-state performance than the standard LMS algorithm. However, these sparsity-aware LMS algorithms are sensitive to the noise and to the sparsity characteristics of the channel, which results in high steady-state misadjustment due to the estimation error that occurs during adaptation. The affine projection algorithm (APA) is another popular method in adaptive filtering applications [28–31], with complexity and estimation performance intermediate between those of the LMS and RLS algorithms. The APA reuses old data, resulting in fast convergence; it can be viewed as a generalized normalized LMS (NLMS) algorithm and converges faster than the standard LMS algorithm. Subsequently, l1-norm-penalized APAs have been proposed to render the standard APA suitable for sparse signal estimation [32]. However, these penalized APAs require that the number of nonzero (dominant) taps be very small compared with the length of the associated parameter vector in channel estimation.

In this paper, we propose a smooth approximation l0-norm constrained affine projection (SL0-APA) algorithm for sparse channel estimation. The proposed SL0-APA is similar to the algorithms proposed in [32], known as the zero-attracting affine projection algorithm (ZA-APA) and the reweighted zero-attracting affine projection algorithm (RZA-APA). It differs in its regularization term, which is a smooth approximation of the l0-norm obtained from a continuous function that accurately approximates the l0-norm. By exploiting the sparsity of the channel and using the concept of the smooth approximation of the l0-norm, we improve on the previous sparsity-aware APAs with respect to both the convergence speed and the steady-state performance. We also provide a convergence analysis and a mean-square-error analysis of our proposed SL0-APA. Furthermore, we experimentally investigate the effect of adding a smooth approximation l0-norm penalty term to the cost function on the convergence behavior and the steady-state error of the SL0-APA. Accordingly, we illustrate experimentally that the SL0-APA is superior to the ZA-APA and RZA-APA in terms of both the steady-state error and the convergence speed. In addition, the theoretical analysis is presented and compared with the computer simulation results. Finally, the computational complexity of the proposed SL0-APA is derived and experimentally evaluated.

The remainder of the paper is structured as follows. Section 2 briefly reviews the standard APA, ZA-APA, and RZA-APA in the context of a sparse multipath communication system. In Section 3, we first propose the SL0-APA by applying a smooth approximation l0-norm penalty to the cost function of the standard APA. Next, we provide theoretical expressions for the convergence analysis and the mean-square-error (MSE) analysis of our proposed SL0-APA based on the energy-conservation approach. In Section 4, the proposed SL0-APA is experimentally investigated over a sparse channel to demonstrate its estimation performance, including the convergence speed, the steady-state error, and the computational complexity. Finally, Section 5 concludes the paper.

2. Conventional Channel Estimation Algorithms

In this section, we consider the sparse multipath communication system shown in Figure 1 to discuss traditional channel estimation algorithms. The input signal vector x(n), containing the N most recent samples, is transmitted over a finite impulse response (FIR) channel with channel impulse response (CIR) vector h, where (·)^T denotes transposition. The input signal is also fed to an adaptive filter with coefficient vector w(n) to produce the estimation output y(n), and the received signal d(n) is obtained at the receiver.
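In standard notation (a sketch; the symbols x(n), h, w(n), y(n), d(n), and v(n) are our reconstruction of the dropped inline math), the signals in Figure 1 are related by

```latex
% Signals of the system in Figure 1 (a sketch in standard notation):
\mathbf{x}(n) = [x(n),\, x(n-1),\, \ldots,\, x(n-N+1)]^T, \qquad
\mathbf{h} = [h_0,\, h_1,\, \ldots,\, h_{N-1}]^T,
\\
y(n) = \mathbf{x}^T(n)\,\mathbf{w}(n), \qquad
d(n) = \mathbf{x}^T(n)\,\mathbf{h} + v(n),
```

where v(n) denotes additive noise at the receiver.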

Figure 1: A typical sparse multipath communication system.
2.1. Affine Projection Algorithm (APA)

The standard APA estimates the unknown sparse channel using the input signal x(n) and the received signal d(n). In the standard APA, let us assume that we keep the last K input signal vectors to form the matrix X(n) in (1) [28], where K denotes the projection order of the APA. Furthermore, we also define vectors that collect the K most recent samples at a given instant n, namely, the output of the channel, the output of the filter, the received signal, and the additive white Gaussian noise; these vectors are expressed in (2)–(5).

From (1)–(5), the instantaneous error vector can be written as e(n) = d(n) − X^T(n)w(n) (6).

For channel estimation, the purpose of the APA is to minimize the squared distance ||w(n+1) − w(n)||^2 subject to the constraint that the a posteriori error be zero, d(n) − X^T(n)w(n+1) = 0 (7).

The APA maintains the next coefficient vector w(n+1) as close as possible to the current coefficient vector w(n) while forcing the a posteriori error to zero. Here, a vector λ(n) of K Lagrange multipliers is used to find the solution that minimizes the cost function of the APA in (8), which can be rewritten as in (9).
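For reference, the constrained cost function in (8) presumably takes the standard form from [28]; the following is a sketch in our notation rather than a verbatim copy:

```latex
% A sketch of the APA cost function (8), combining the distance term
% with the a posteriori error constraint via the multiplier vector:
J(n) = \|\mathbf{w}(n+1) - \mathbf{w}(n)\|^2
     + \boldsymbol{\lambda}^T(n)\left[\mathbf{d}(n) - \mathbf{X}^T(n)\,\mathbf{w}(n+1)\right].
```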

Then, the gradient of the cost function with respect to w(n+1) is given by (10).

After setting this gradient equal to zero, we get (11).

Multiplying both sides of (11) by X^T(n), we have (12).

By taking the constraint condition of (7) into consideration, we have

Taking (3), (6), and (12) into account, we can get

Then

The update equation is now given by (11) with λ(n) being the solution of (14), and it is expressed in (16). This update equation corresponds to the conventional APA with unity convergence factor [28]. In practical engineering applications, a convergence factor μ, also known as the step-size, is adopted to trade off the mean-square misadjustment against the convergence speed; thus, the update equation (16) can be rewritten as (17).

In general, the step-size μ should be chosen in the range 0 < μ < 2 to control the convergence speed and the steady-state behavior of the APA. It is worth noting that the APA reduces to the familiar normalized least mean square (NLMS) algorithm when the projection order K = 1.
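To make the recursion concrete, a minimal sketch of the APA update in (17) is given below; the diagonal loading term reg is our addition (a common practical safeguard that the text does not mention):

```python
import numpy as np

def apa_update(w, X, d, mu, reg=1e-6):
    """One iteration of the standard APA update (17) (a sketch).

    w  : current coefficient vector, shape (N,)
    X  : matrix of the K most recent input vectors, shape (N, K)
    d  : the K most recent received samples, shape (K,)
    mu : step-size, chosen in (0, 2)
    """
    e = d - X.T @ w                          # a priori error vector, cf. (6)
    A = X.T @ X + reg * np.eye(X.shape[1])   # K x K correlation matrix (loaded)
    return w + mu * X @ np.linalg.solve(A, e)
```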

2.2. Zero-Attracting Affine Projection Algorithm (ZA-APA)

To improve the performance of the standard APA and to utilize the sparsity of the multipath communication channel, an l1-norm penalty term is incorporated into the cost function of (8); the resulting method is known as the zero-attracting affine projection algorithm (ZA-APA) [32]. In the ZA-APA, the cost function is defined in (18) by combining the cost function of the standard APA with an l1-norm penalty on the channel estimate, where λ(n) is the vector of K Lagrange multipliers and γ is a regularization parameter that balances the estimation error against the sparse l1-norm penalty of w(n+1). To minimize this cost function, we calculate its gradient, as expressed in (19), where sgn(·) is a component-wise sign function defined in (20): sgn(x) = x/|x| for x ≠ 0 and sgn(x) = 0 for x = 0.
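For reference, the ZA-APA cost function in (18) presumably takes the standard form used in [32]; the symbols below are our notation:

```latex
% A sketch of the ZA-APA cost function (18): the APA cost plus an
% l1-norm penalty weighted by the regularization parameter gamma:
J_{\mathrm{ZA}}(n) = \|\mathbf{w}(n+1) - \mathbf{w}(n)\|^2
  + \boldsymbol{\lambda}^T(n)\left[\mathbf{d}(n) - \mathbf{X}^T(n)\,\mathbf{w}(n+1)\right]
  + \gamma\,\|\mathbf{w}(n+1)\|_1 .
```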

The minimum is obtained by setting the gradient in (19) equal to zero. Thus, we can get (21).

Multiplying both sides by X^T(n), we can obtain (22).

Considering the constraint condition of (7), we can get the following expression:

From the above discussion and the error definition in (6), the Lagrange multiplier vector is obtained:

Substituting (24) into (21), we can obtain the update function of the ZA-APA:

To balance the convergence speed and the steady-state error, a step-size μ is introduced into (25). Then, (25) can be rewritten as

Comparing the update equation (26) of the ZA-APA with the update (17) of the standard APA, we find that there are two additional terms in (26) that attract the tap coefficients to zero when the tap magnitudes of the sparse channel are close to zero. These two additional terms are zero attractors whose attracting strength is controlled by the regularization parameter. Intuitively, the zero attractor speeds up the convergence of the ZA-APA when the majority of the channel taps are zero or close to zero, as in a sparse channel.
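Under the same conventions as the APA sketch above, the ZA-APA update in (26) might read as follows; rho is our symbol for the zero-attractor strength:

```python
def za_apa_update(w, X, d, mu, rho, reg=1e-6):
    """One iteration of the ZA-APA update (26) (a sketch).

    The last two terms are the zero attractors discussed above: a sign
    shrinkage minus its projection onto the subspace spanned by X.
    """
    K = X.shape[1]
    A = X.T @ X + reg * np.eye(K)
    e = d - X.T @ w
    s = np.sign(w)                            # component-wise sign, cf. (20)
    return (w + mu * X @ np.linalg.solve(A, e)
            - rho * s + rho * X @ np.linalg.solve(A, X.T @ s))
```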

2.3. Reweighted Zero-Attracting Affine Projection Algorithm (RZA-APA)

Unfortunately, the ZA-APA cannot distinguish between the zero taps and the nonzero taps of the sparse channel; it exerts the same penalty on all the channel taps, which forces all the taps toward zero uniformly [22, 32]. Therefore, the performance of the ZA-APA degrades when the channel is less sparse. To solve this problem and improve the performance of the ZA-APA, a heuristic approach first reported in [33] and employed in [22, 32] to reinforce the zero attractor was proposed, denoted the reweighted zero-attracting affine projection algorithm (RZA-APA). In the RZA-APA, a log-sum penalty is adopted instead of the l1-norm penalty. Thus, the cost function of the RZA-APA can be written as in (27), where γ is a regularization parameter, ε is a positive threshold, and λ(n) is the vector of K Lagrange multipliers. The method of Lagrange multipliers is again used for the minimization, and the gradient of the cost function can be expressed as

Setting the gradient in (28) equal to zero, we can get

By multiplying both sides of (29) by X^T(n), the following equation can be obtained:

Taking (7) and (30) into consideration, we can get

Thus, the Lagrange multiplier vector is obtained in (32). Substituting (32) into (29), we can get the update equation of the RZA-APA:

Similarly, a step-size μ is introduced into (33) to balance the convergence speed and the steady-state error of the RZA-APA. Then, (33) can be rewritten as

From this analysis and a priori knowledge of the sparse channel, we know that the RZA-APA is more sensitive to taps with small magnitudes. Note that the reweighted zero attractor mainly affects taps whose magnitudes are comparable to the threshold ε, while it exerts much less shrinkage on the dominant taps. Thus, the RZA-APA improves the steady-state performance compared with the ZA-APA.
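A corresponding sketch of the RZA-APA update in (34) follows; the parameterization of the reweighted attractor, sgn(w)/(1 + |w|/eps), is our assumption based on the log-sum penalty of [33]:

```python
def rza_apa_update(w, X, d, mu, rho, eps, reg=1e-6):
    """One iteration of the RZA-APA update (34) (a sketch).

    Taps with |w_i| comparable to eps feel a strong pull toward zero,
    while dominant taps are left almost untouched.
    """
    K = X.shape[1]
    A = X.T @ X + reg * np.eye(K)
    e = d - X.T @ w
    r = np.sign(w) / (1.0 + np.abs(w) / eps)  # reweighted zero attractor
    return (w + mu * X @ np.linalg.solve(A, e)
            - rho * r + rho * X @ np.linalg.solve(A, X.T @ r))
```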

3. Proposed Smooth Approximation l0-Norm Constrained Affine Projection Algorithm (SL0-APA)

On the basis of the discussion of the ZA-APA and RZA-APA, we find that the RZA-APA improves on the ZA-APA for sparse channel estimation because the log-sum penalty is more similar to the l0-norm [22, 32, 33]. On the other hand, exact l0-norm minimization is an NP-hard problem [18]. Fortunately, the smoothed l0-norm (SL0), which has low complexity, has been proposed as an accurate approximation of the l0-norm for reconstructing sparse signals in CS theory [34, 35]. Inspired by the SL0 algorithm, and in order to exploit the sparse characteristic of the multipath channel more accurately, a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) is proposed by imposing the SL0 penalty on the cost function of the standard APA to further improve the performance of the RZA-APA.

3.1. Proposed SL0-APA

Similar to the ZA-APA and RZA-APA discussed above, the cost function of the SL0-APA is written as in (35), where λ(n) is the vector of K Lagrange multipliers and γ is a regularization parameter that trades off the estimation error against the sparse l0-norm penalty of w(n+1). Here, the smooth approximation of the l0-norm is a continuous function, defined in (36), where δ is a small positive constant used to avoid division by zero; the gradient of this continuous function for the SL0 penalty is given in (37).
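For concreteness, one common rational function consistent with the division-by-zero remark is sketched below; this particular choice and the symbol δ are our assumptions and may differ from the exact function in (36):

```latex
% A sketch of a smoothed l0-norm approximation and its gradient
% (as delta -> 0, F_delta(w) -> ||w||_0):
F_\delta(\mathbf{w}) = \sum_{i=0}^{N-1} \frac{w_i^2}{w_i^2 + \delta},
\qquad
\frac{\partial F_\delta(\mathbf{w})}{\partial w_i}
  = \frac{2\,\delta\, w_i}{\left(w_i^2 + \delta\right)^2}.
```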

To obtain the minimum of the cost function, we again use the method of Lagrange multipliers and calculate its gradient. The gradient of the cost function of the SL0-APA is written as

Setting the left-hand side of (38) equal to zero, we can get the following equation:

Multiplying both sides of (39) by X^T(n), we can get

By taking (7) into consideration, (40) can be rewritten as

From the discussion of the ZA-APA and RZA-APA, we can get the Lagrange multiplier vector from (41) by taking the error vector e(n) into account:

Substituting (42) into (39), the update function of the SL0-APA is obtained as (43). Similar to the ZA-APA and RZA-APA, a step-size μ is introduced into (43) to create a balance between the convergence speed and the steady-state error of the SL0-APA:

It is important to mention that our proposed SL0-APA is superior to the APA, ZA-APA, and RZA-APA for sparse channel estimation because we utilize a smooth approximation of the l0-norm that is more accurate than the log-sum function used in the RZA-APA. Moreover, the gradient is easy to calculate, since a continuous gradient is readily available for this smoothed l0-norm function.
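A matching sketch of the SL0-APA update in (44), reusing the rational SL0 gradient assumed above (rho and delta are our notation):

```python
def sl0_apa_update(w, X, d, mu, rho, delta, reg=1e-6):
    """One iteration of the SL0-APA update (44) (a sketch).

    g is the gradient of the assumed smoothed-l0 penalty; like the
    attractors above, it appears once directly and once projected.
    """
    K = X.shape[1]
    A = X.T @ X + reg * np.eye(K)
    e = d - X.T @ w
    g = 2.0 * delta * w / (w**2 + delta) ** 2  # gradient of F_delta(w)
    return (w + mu * X @ np.linalg.solve(A, e)
            - rho * g + rho * X @ np.linalg.solve(A, X.T @ g))
```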

3.2. Analysis of the Proposed SL0-APA

In this section, we analyze the mean-square-error (MSE) behavior of the SL0-APA. Here, the energy-conservation approach [36–38] is employed to obtain theoretical expressions for the MSE of the SL0-APA. Let us consider a received signal y(n) derived from the linear model in (45), where h is the sparse channel vector of the multipath communication system that we wish to estimate and v(n) is the additive Gaussian noise at instant n. Our objective is to evaluate the steady-state MSE performance of the proposed SL0-APA. The steady-state MSE is defined in (46), where E[·] denotes expectation and e(n) is the estimation error at time n, defined in (47). Taking (45) and (47) into account, we obtain
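In standard notation, the quantities referenced above can be sketched as follows (our reconstruction, not a verbatim copy of (45)–(48)):

```latex
% Linear model, estimation error, and steady-state MSE (a sketch):
y(n) = \mathbf{x}^T(n)\,\mathbf{h} + v(n), \qquad
e(n) = y(n) - \mathbf{x}^T(n)\,\mathbf{w}(n),
\\
\mathrm{MSE} = \lim_{n \to \infty} E\left[e^2(n)\right]
            = \lim_{n \to \infty} E\left[\bigl(\mathbf{x}^T(n)\,\tilde{\mathbf{w}}(n) + v(n)\bigr)^2\right],
\qquad \tilde{\mathbf{w}}(n) = \mathbf{h} - \mathbf{w}(n).
```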

Subtracting the true channel vector h from both sides of the SL0-APA update function (44), we get the recursion for the misalignment vector:

Substituting (48) into (49), we can get

Taking expectations on both sides of (50), we get

We assume that the additive noise v(n) is statistically independent of the input signal x(n), and hence the corresponding cross term has zero mean. Therefore, (51) can be simplified as

From previous studies on sparse LMS algorithms [22, 39], in the steady state, we have

Thus, the expectation in (52) can be written as

In addition, when the channel length N is far larger than 1, the expectation can be written as [37, 40, 41]

For sparse channel estimation, the inner expectation reduces to

Here, we introduce the definition in (57), where σ_x² is the power of the input signal. Thus we obtain (58), where Tr(·) denotes the trace of a matrix and I is the identity matrix. Moreover, we can obtain

Then we can approximate this quantity by

Therefore, (52) can be rewritten as

It is found that the matrix in this recursion is approximately bounded, and convergence is guaranteed only if its spectral radius is less than 1 [28]; this yields the step-size condition in (62), where λ_max denotes the maximum eigenvalue of the autocorrelation matrix of the input signal. We can observe that the stability condition of the SL0-APA is independent of the regularization parameter. Assuming that the estimated vector converges as n → ∞, (61) can be rewritten as

From (63), we can obtain (64), which can be regarded as

Note that (65) implies that the optimum solution of the SL0-APA is biased, as was also shown for the zero-attracting least mean square (ZA-LMS) algorithm [22]. We now proceed to derive the steady-state MSE of our proposed SL0-APA. First, multiplying both sides of (44) by X^T(n) from the left, we can get

Furthermore,

Additionally, we define the a posteriori error vector and the a priori error vector as

Combining (67) and (68), we have

In addition,

By substituting (70) into (69), we have

From (69), we can also obtain the following expression:

Substituting (72) into (44), we have

On the basis of the discussion mentioned above, we can invoke the steady-state energy-conservation relation. By considering the power of both sides of (73), using the steady-state condition as n → ∞, and assuming that the a priori error, the noise, and the zero-attractor term are statistically independent of the input in the steady state, we get

Substituting (71) into the left-hand side (LHS) of (74), we have

Moreover, substituting (70) into the right-hand side (RHS) of (74), we have

By combining (75) and (76), we get

We also assume that the additive Gaussian noise is statistically independent of the input signal . Thus (77) can be simplified as

Here, we also assume that the a priori error is statistically independent of the input matrix X(n) in the steady state. Moreover, we use the vector definition from [36] given in (79), whose top entry corresponds to the scalar estimation error [36]. Then, the LHS of (78) can be rewritten as

Similar to the calculation of (80), the first term in the RHS of (78) can be written as

In addition, the second term of RHS of (78) can be rewritten as

Then the last term of the right-hand side of (78) can be expressed as

When the step-size μ is small, we can get

Therefore, the MSE of the proposed SL0-APA with small step-size can be written as

When the step-size μ is large, the large-step-size approximation of [36] applies. In this case,

Thus, the MSE of the proposed SL0-APA with large step-size can be written as

4. Results and Discussions

In this section, we present computer simulation results to illustrate the performance of the proposed SL0-APA over a sparse multipath communication channel. Moreover, simulation results for predicting the mean-square error of the proposed SL0-APA are also provided to verify the theoretical expressions obtained in Section 3.2. In addition, the computational complexity of the SL0-APA is presented and compared with that of previous sparsity-aware algorithms, namely, the ZA-APA, RZA-APA, standard APA, and NLMS algorithms.

4.1. Performance of the Proposed SL0-APA

Firstly, we set up a simulation example to discuss the convergence speed of the proposed SL0-APA in comparison with previously proposed sparse channel estimation algorithms, including the APA, ZA-APA, RZA-APA, and NLMS algorithms. In this experiment, we consider a sparse multipath communication channel whose length is 16 and whose number of dominant taps is set to two different sparsity levels, similarly to [6, 22, 25, 26]. The dominant channel taps are drawn from a Gaussian distribution, and their positions are random within the length of the channel. The input signal of the channel is a Gaussian random signal, while the output of the channel is corrupted by independent white Gaussian noise v(n). An example of a typical sparse multipath channel with a channel length of 16 is shown in Figure 2. In the simulations, the received-signal power and the noise power are held fixed. In all the experiments, the difference between the actual and estimated channels is evaluated by the MSE defined as follows:
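A sketch of the MSE metric plotted in the figures (the exact normalization used in the paper's definition is an assumption on our part):

```latex
% Channel-estimation MSE in decibels (a sketch):
\mathrm{MSE}(n)\;[\mathrm{dB}] = 10 \log_{10} E\left[\|\mathbf{h} - \mathbf{w}(n)\|_2^2\right].
```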

Figure 2: A typical sparse multipath channel.

In this subsection, we aim to investigate the convergence speed and the steady-state performance of the SL0-APA. The simulation parameters, namely, the projection order, the step-sizes, and the regularization parameters of each algorithm, including the step-size of the NLMS algorithm, were chosen so that all algorithms reach the same MSE, allowing a fair comparison of convergence speed. It can be seen from Figure 3 that our proposed SL0-APA possesses the fastest convergence speed among the compared channel estimation algorithms at the same steady-state error floor. In addition, all the affine projection algorithms, namely, the APA, ZA-APA, RZA-APA, and SL0-APA, converge much more quickly than the NLMS algorithm, because the affine projection algorithms reuse old data through the projection order K. We therefore discuss the effect of the affine projection order on the SL0-APA and compare it with the APA and NLMS algorithms. The computer simulation results for different values of K are shown in Figure 4. They reveal that the convergence speed improves as the affine projection order K increases. However, the steady-state performance deteriorates as K grows. Thus, in our proposed SL0-APA, the affine projection order, the step-size, and the regularization parameters should be taken into account to balance the convergence speed and the steady-state behavior.
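A hypothetical driver in the spirit of this experiment, reusing the update sketches from the previous sections (channel length 16 as above; all numeric parameter values are placeholders rather than the paper's settings):

```python
rng = np.random.default_rng(0)
N, K, n_iter, n_dominant = 16, 2, 1000, 2   # K and n_dominant are placeholders

# sparse channel: a few Gaussian dominant taps at random positions
h = np.zeros(N)
h[rng.choice(N, n_dominant, replace=False)] = rng.standard_normal(n_dominant)

w = np.zeros(N)
x_buf = np.zeros(N + K - 1)                 # newest sample kept at index 0
mse = np.zeros(n_iter)
for n in range(n_iter):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()        # white Gaussian input
    X = np.column_stack([x_buf[k:k + N] for k in range(K)])  # (N, K) matrix
    d = X.T @ h + 0.01 * rng.standard_normal(K)              # noisy channel output
    w = sl0_apa_update(w, X, d, mu=0.5, rho=5e-4, delta=0.01)
    mse[n] = np.sum((h - w) ** 2)           # squared deviation, cf. the MSE above
```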

Figure 3: Convergence of the proposed SL0-APA.
Figure 4: Affine projection order effects on the SL0-APA.

Next, we show the effects of the sparsity level on the steady-state performance of the proposed SL0-APA for two values of the affine projection order. To obtain the same convergence speed, the simulation parameters of each algorithm were adjusted accordingly. We can see from Figure 5 that our proposed SL0-APA has the best steady-state performance compared with the ZA-APA, RZA-APA, APA, and NLMS algorithms. The SL0-APA achieves 10 dB smaller MSE than the RZA-APA for the sparser channel shown in Figure 5(a). When the sparsity level increases to 4, Figure 5(b) shows that our proposed SL0-APA still outperforms the other algorithms, although its steady-state error increases in comparison with the sparser case. When the affine projection order increases, we can see from Figure 6 that the convergence speed is significantly improved compared with that in Figure 5. However, the steady-state error also increases slightly with the projection order. Even so, our proposed SL0-APA still has the fastest convergence speed and the lowest steady-state error.

Figure 5: Performance of the SL0-APA with different sparsity levels.
Figure 6: Performance of the SL0-APA with different sparsity levels at a larger projection order.

Finally, we use the theoretical expressions obtained in Section 3.2 to predict the mean-square error (MSE) of the proposed SL0-APA for different step-sizes and compare the theoretical results with the simulated ones. The MSE of the SL0-APA as a function of the step-size μ for the designated sparse multipath communication channel is shown in Figure 7. The theoretical results are obtained from (85) and (87) for small and large values of μ, respectively, while the simulation results are obtained by averaging 50 independent trials. We can see that the simulation results exhibit good agreement with the theoretical expressions for different step-sizes. In addition, the gap between the computer simulation and the theoretical prediction grows as μ decreases in the small-step-size regime shown in Figure 7(a), although the steady-state error itself becomes lower. For large μ, both the steady-state error and the convergence speed deteriorate as the step-size increases. Generally speaking, as μ increases, the MSE increases. Although a strong zero attractor can help the SL0-APA converge faster, it leads to higher misadjustment. Thus, in most cases, the step-size should be chosen carefully to balance the convergence speed and the steady-state performance.

Figure 7: Steady-state MSE performance of the SL0-APA with different step-sizes.
4.2. Computational Complexity

In this subsection, we present the computational complexity of the proposed SL0-APA and compare it with that of the conventional sparsity-aware channel estimation algorithms, including the APA, ZA-APA, and RZA-APA. It is worth noting that when the affine projection order is equal to 1, these affine projection algorithms reduce to the familiar NLMS, ZA-NLMS, and RZA-NLMS algorithms, respectively. Here, the computational complexity is the arithmetic complexity, which includes additions, multiplications, and divisions. We model the sparse channel as an FIR filter with N coefficients, a small number of which are nonzero, and denote the projection order of these affine projection algorithms by K. The computational complexities of the proposed SL0-APA and the relevant sparsity-aware algorithms are shown in Table 1.

Table 1: Computational complexity.

From Table 1, we see that our proposed SL0-APA, which has the best steady-state performance and the fastest convergence speed, requires more calculations than the RZA-APA. The additional computational complexity comes from the continuous function used for the SL0 approximation, and it can be reduced by a proper choice of this continuous function. Furthermore, the complexity of all the APAs is higher than that of the NLMS algorithms. In addition, the sparsity of the channel can also help to reduce the computational complexity of the proposed SL0-APA.

5. Conclusion

In this paper, we proposed the SL0-APA to exploit the sparsity of the channel and to improve both the convergence speed and the steady-state error relative to the APA, ZA-APA, and RZA-APA. The algorithm is developed by introducing a smooth approximation of the l0-norm, which promotes sparsity through the incorporation of the SL0 penalty into the cost function of the standard APA as an additional constraint. This additional regularization term acts on the zero taps of the sparse channel and thereby accelerates the convergence. We then provided a mathematical analysis for predicting the mean-square error of our proposed SL0-APA. We also examined its convergence behavior and steady-state performance in comparison with the standard APA and relevant sparsity-aware channel estimation algorithms. In summary, the simulation results demonstrated that the proposed SL0-APA, with moderate computational complexity, accelerates the convergence and improves the steady-state performance in a designated sparse channel.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors derived the SL0-APA in Section 3.1 by considering the l0-norm-based LMS and NLMS algorithms [6, 26] and the smooth approximation of the l0-norm in [21]. In [42], an l0-norm constrained APA was also derived, around the same time as the present paper was submitted for publication. The authors thank the editors for their constructive comments, which helped to clarify this paper.

References

  1. L. Korowajczuk, LTE, WiMAX and WLAN Network Design, Optimization and Performance Analysis, John Wiley & Sons, New York, NY, USA, 2011.
  2. J. G. Proakis, Digital Communications, McGraw-Hill, 4th edition, 2001.
  3. F. Adachi, D. Garg, S. Takaoka, and K. Takeda, "New direction of broadband wireless technology," Wireless Communications and Mobile Computing, vol. 7, no. 8, pp. 969–983, 2007.
  4. S. F. Cotter and B. D. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Transactions on Communications, vol. 50, no. 3, pp. 374–377, 2002.
  5. P. Maechler, P. Greisen, B. Sporrer, S. Steiner, N. Felber, and A. Burg, "Implementation of greedy algorithms for LTE sparse channel estimation," in Proceedings of the 44th Asilomar Conference on Signals, Systems and Computers (ASILOMAR '10), pp. 400–405, Pacific Grove, Calif, USA, November 2010.
  6. G. Gui and F. Adachi, "Improved least mean square algorithm with application to adaptive sparse channel estimation," EURASIP Journal on Wireless Communications and Networking, vol. 2013, no. 1, article 204, 2013.
  7. J. Meng, W. Yin, Y. Li, N. T. Nguyen, and H. Zhu, "Compressive sensing based high-resolution channel estimation for OFDM system," IEEE Journal on Selected Topics in Signal Processing, vol. 6, no. 1, pp. 15–25, 2012.
  8. W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: a new approach to estimating sparse multipath channels," Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, 2010.
  9. G. Tauböck and F. Hlawatsch, "A compressed sensing technique for OFDM channel estimation in mobile environments: exploiting channel sparsity for reducing pilots," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 2885–2888, Las Vegas, Nev, USA, April 2008.
  10. S. Haykin, Adaptive Filter Theory, Prentice Hall, 4th edition, 2001.
  11. Y. Huang and J. Wen, "An analysis on partial PIC multi-user detection with LMS algorithms for CDMA," in Proceedings of the 14th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '03), vol. 1, pp. 17–21, Beijing, China, September 2003.
  12. M. Huang, X. Chen, L. Xiao, S. Zhou, and J. Wang, "Kalman-filter-based channel estimation for orthogonal frequency-division multiplexing systems in time-varying channels," IET Communications, vol. 1, no. 4, pp. 795–801, 2007.
  13. T. K. Akino, "Optimum-weighted RLS channel estimation for rapid fading MIMO channels," IEEE Transactions on Wireless Communications, vol. 7, no. 11, pp. 4248–4260, 2008.
  14. M. Godavarti and A. O. Hero, "Partial update LMS algorithms," IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2382–2399, 2005.
  15. S. Werner, M. L. R. de Campos, and P. S. R. Diniz, "Partial-update NLMS algorithms with data-selective updating," IEEE Transactions on Signal Processing, vol. 52, no. 4, pp. 938–949, 2004.
  16. P. A. Naylor and A. W. H. Khong, "Affine projection and recursive least squares adaptive filters employing partial updates," in Proceedings of the 38th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 950–954, Pacific Grove, Calif, USA, November 2004.
  17. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  18. S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
  19. N. Vaswani, "Kalman filtered compressed sensing," in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 893–896, San Diego, Calif, USA, October 2008.
  20. E. M. Eksioglu, "Sparsity regularised recursive least squares adaptive filtering," IET Signal Processing, vol. 5, no. 5, pp. 480–487, 2011.
  21. R. E. Carrillo and K. E. Barner, "Iteratively re-weighted least squares for sparse signal reconstruction from noisy measurements," in Proceedings of the 43rd Annual Conference on Information Sciences and Systems (CISS '09), pp. 448–453, Baltimore, Md, USA, March 2009.
  22. Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 3125–3128, Taipei, Taiwan, April 2009.
  23. K. Shi and P. Shi, "Convergence analysis of sparse LMS algorithms with l1-norm penalty based on white input signal," Signal Processing, vol. 90, no. 12, pp. 3289–3293, 2010.
  24. F. Y. Wu and F. Tong, "Gradient optimization p-norm-like constraint LMS algorithm for sparse system estimation," Signal Processing, vol. 93, no. 4, pp. 967–971, 2013.
  25. O. Taheri and S. A. Vorobyov, "Sparse channel estimation with Lp-norm and reweighted L1-norm penalized least mean squares," in Proceedings of the 36th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '11), pp. 2864–2867, Prague, Czech Republic, May 2011.
  26. Y. Gu, J. Jin, and S. Mei, "l0 norm constraint LMS algorithm for sparse system identification," IEEE Signal Processing Letters, vol. 16, no. 9, pp. 774–777, 2009.
  27. R. Niazadeh, S. H. Ghalehjegh, M. Babaie-Zadeh, and C. Jutten, "ISI sparse channel estimation based on SL0 and its application in ML sequence-by-sequence equalization," Signal Processing, vol. 92, no. 8, pp. 1875–1885, 2012.
  28. P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, Springer, 4th edition, 2013.
  29. H. Shin, A. H. Sayed, and W. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Signal Processing Letters, vol. 11, no. 2, pp. 132–135, 2004.
  30. R. Arablouei and K. Dogançay, "Affine projection algorithm with selective projections," Signal Processing, vol. 92, no. 9, pp. 2253–2263, 2012.
  31. S. Werner, J. A. Apolinário Jr., M. L. R. de Campos, and P. S. R. Diniz, "Low-complexity constrained affine-projection algorithms," IEEE Transactions on Signal Processing, vol. 53, no. 12, pp. 4545–4555, 2005.
  32. R. Meng, R. C. de Lamare, and V. H. Nascimento, "Sparsity-aware affine projection adaptive algorithms for system identification," in Proceedings of the Sensor Signal Processing for Defence Conference (SSPD '11), pp. 1–5, London, UK, September 2011.
  33. E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted l1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
  34. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed l0 norm," IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
  35. M. Hyder and K. Mahata, "An approximate l0 norm minimization algorithm for compressed sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 3365–3368, Taipei, Taiwan, April 2009.
  36. H. Shin and A. H. Sayed, "Mean-square performance of a family of affine projection algorithms," IEEE Transactions on Signal Processing, vol. 52, no. 1, pp. 90–102, 2004.
  37. R. Meng, Sparsity-Aware Adaptive Filtering Algorithms and Application to System Identification, University of York, 2011.
  38. H. Shin, W. Song, and A. H. Sayed, "Mean-square performance of data-reusing adaptive algorithms," IEEE Signal Processing Letters, vol. 12, no. 12, pp. 851–854, 2005.
  39. G. Su, J. Jin, Y. Gu, and J. Wang, "Performance analysis of l0 norm constraint least mean square algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2223–2235, 2012.
  40. G. Barrault, M. H. Costa, J. C. M. Bermudez, and A. Lenzi, "A new analytical model for the NLMS algorithm," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), vol. 4, pp. IV-41–IV-44, Philadelphia, Pa, USA, March 2005.
  41. M. H. Costa and J. C. M. Bermudez, "An improved model for the normalized LMS algorithm with Gaussian inputs and large number of coefficients," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '02), vol. 2, pp. 1385–1388, Orlando, Fla, USA, May 2002.
  42. M. V. S. Lima, W. A. Martins, and P. S. R. Diniz, "Affine projection algorithms for sparse system identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '13), pp. 5666–5670, Vancouver, Canada, May 2013.