Abstract

A norm-combination penalized set-membership NLMS algorithm with independently constrained $\ell_0$- and $\ell_1$-norms, denoted as the $\ell_0$- and $\ell_1$-norm independently constrained set-membership (SM) NLMS (L0L1SM-NLMS) algorithm, is presented for sparse adaptive multipath channel estimation. The L0L1SM-NLMS algorithm achieves fast convergence and a small estimation error by independently penalizing the channel coefficients: the large and small groups of channel coefficients are controlled by $\ell_0$- and $\ell_1$-norm constraints, respectively. Additionally, a further improved algorithm, denoted as the reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm, is presented by integrating a reweighting factor into the L0L1SM-NLMS algorithm to properly adjust its zero-attracting capability. The RL0L1SM-NLMS algorithm provides better estimation behavior than the L0L1SM-NLMS algorithm when estimating sparse channels. The estimation performance of the L0L1SM-NLMS and RL0L1SM-NLMS algorithms is evaluated over sparse channels. The simulation results show that our L0L1SM- and RL0L1SM-NLMS algorithms are superior to the traditional LMS, NLMS, SM-NLMS, ZA-LMS, RZA-LMS, and ZA-, RZA-, ZASM-, and RZASM-NLMS algorithms in terms of convergence speed and steady-state performance.

1. Introduction

Broadband communication technology has attracted much attention for modern wireless communications, and signal transmission over such broadband wireless channels usually encounters frequency-selective fading [1–5]. Moreover, this frequency-selective behavior often gives rise to sparse properties in such broadband channels [1, 6]. In broadband wireless multipath communication channels, most of the unknown channels are sparse [1–3], meaning that only a few coefficients are large in magnitude. In a sparse channel, only a few channel taps are dominant, while a large number of channel taps are zero or close to zero. On the other hand, signals passing through such frequency-selective sparse channels in noisy environments may suffer reduced communication quality. Therefore, precise channel estimation methods are desired to recover the unknown channel, which is often modeled as a finite impulse response (FIR) filter [1–3]. Adaptive filtering techniques are regarded as useful channel estimation methods owing to their fast convergence speed and good estimation behavior [7–9]. Accordingly, various adaptive channel estimation algorithms, such as the LMS and SM-NLMS algorithms [10–13], have been presented in order to guarantee stable propagation and effective signal transmission [8–13]. Although these adaptive filtering algorithms can achieve robust estimation performance, they cannot deal well with sparse channel estimation.

In order to make use of the prior sparseness characteristics of multipath broadband channels, several sparse LMS algorithms, inspired by compressive sensing (CS) [19, 20], were reported by modifying the cost function of the conventional LMS algorithm [14–18]. In [14], a sparse LMS algorithm was reported by modifying the cost function with an $\ell_1$-norm penalty, resulting in the zero-attracting (ZA) LMS (ZA-LMS) algorithm. The ZA-LMS can provide good behavior for sparse adaptive channel estimation by exerting a zero attraction on all channel taps. However, the estimation behavior of the ZA-LMS may degrade because this penalty is uniform across all taps. Then, a reweighted ZA-LMS (RZA-LMS) algorithm was presented by adopting a log-sum function as the constraint on the channel coefficient vector [14]. Consequently, the RZA-LMS achieved an improved channel estimation behavior in terms of convergence and estimation misalignment. After that, the zero-attracting technique was realized by using an $\ell_p$-norm [21], a smooth approximation of the $\ell_0$-norm [15], and nonuniform norm constraints [18] to further exploit the sparse properties of broadband channels. However, the estimation behavior of these algorithms may be degraded since the LMS is sensitive to the scaling of the input signal. The zero-attracting methods were therefore extended to the NLMS [22], LMF [23], LMS/F [24–27], leaky LMS [28], and affine projection algorithms [29–33]. However, some of these have high complexity, and others need an extra balance parameter to adjust the combination of the LMF and LMS algorithms.

Recently, zero-attraction methods and set-membership filtering theory have been combined to exploit the sparseness of the channel and to reduce the computational complexity [10, 34, 35]. In [35], the ZA and RZA methods were further applied to the SM-NLMS to develop the ZASM- and RZASM-NLMS algorithms. As a result, the ZASM- and RZASM-NLMS algorithms can provide better channel estimation behavior than the NLMS and its variants. However, these two sparse SM-NLMS adaptive filtering algorithms cannot effectively adjust the zero attraction according to the channel coefficients in real time.

We propose a norm-combination penalized SM-NLMS algorithm with the $\ell_0$- and $\ell_1$-norms independently constrained, which is realized by employing a constraint on the channel taps according to the values of the channel coefficients; it is named the $\ell_0$- and $\ell_1$-norm independently constrained set-membership NLMS (L0L1SM-NLMS) algorithm. Our L0L1SM-NLMS is devised by integrating the $\ell_0$-norm and $\ell_1$-norm into the SM-NLMS's cost function to construct a desired zero-attraction term which separates the channel taps into small and large groups. Then, the two groups are attracted toward zero by the $\ell_1$-norm and $\ell_0$-norm penalties, respectively. The proposed L0L1SM-NLMS algorithm is derived in detail. Also, a reweighting method is introduced into the zero-attraction term to enhance the robustness of the L0L1SM-NLMS algorithm, resulting in a reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm. We evaluate our developed L0L1SM-NLMS and RL0L1SM-NLMS algorithms on designated sparse channels. The results obtained by estimating sparse channels illustrate that our L0L1SM- and RL0L1SM-NLMS algorithms outperform the LMS, NLMS, ZA-LMS, RZA-LMS, and ZA-, RZA-, SM-, ZASM-, and RZASM-NLMS algorithms in terms of convergence and estimation misalignment.

The rest of the paper is organized as follows. Section 2 reviews SM filtering theory and the conventional SM-NLMS algorithm, and the previously proposed ZASM-NLMS algorithm is also discussed. In Section 3, our developed L0L1SM- and RL0L1SM-NLMS algorithms are derived thoroughly. In Section 4, the channel estimation behavior of the L0L1SM-NLMS and RL0L1SM-NLMS algorithms is investigated and discussed in detail. The last section concludes the paper.

2. Conventional SM-NLMS Algorithm

2.1. SM Filtering Theory

A training signal $\mathbf{x}(n)$ is considered in the discussion of the SM-NLMS. An AWGN signal $v(n)$ and an expected signal $d(n)$ are used to discuss the SM filtering (SMF) theory and the typical adaptive channel estimation (ACE) system. The input $\mathbf{x}(n)$ is conveyed through an unknown FIR wireless communication channel $\mathbf{w}^o$, and the multipath fading channel's output is $y(n)$, given by $y(n) = (\mathbf{w}^o)^T\mathbf{x}(n)$. At the receiver side, the expected signal is contaminated by $v(n)$, so that $d(n) = (\mathbf{w}^o)^T\mathbf{x}(n) + v(n)$. The purpose of ACE is to minimize the estimation error $e(n)$, which denotes the difference between $d(n)$ and the output of the adaptive filter $\mathbf{w}(n)$. Thus, we can get $e(n) = d(n) - \mathbf{w}^T(n)\mathbf{x}(n)$.

Typical adaptive filter (AF) algorithms are employed to estimate the unknown FIR channel by minimizing an error function related to the estimation error $e(n)$. For instance, the LMS algorithm uses the second-order estimation error $e^2(n)$, while the NLMS algorithm utilizes the normalized power of the input to improve the estimation behavior of the LMS. As for SMF theory, a predefined bound $\gamma$ is exerted on the magnitude of $e(n)$ [10–13]. The SMF methods employ a subspace of interest to create the model. Assume that a model space $S$ is comprised of all possible input-vector/desired-output pairs (IVDOPs) $(\mathbf{x}, d)$. In SMF theory, the error criterion $|e(n)| \le \gamma$ is used for bounding the estimation error, and the parameter estimates are bounded based on the parameter $\gamma$ for all the data in $S$. Thereby, an SMF algorithm should properly choose a special set in the parameter space, and hence it differs from point estimation. According to the SMF principle, the feasibility set is [10–13]

$$\Theta = \bigcap_{(\mathbf{x},d)\in S}\left\{\mathbf{w}\in\mathbb{R}^N : |d-\mathbf{w}^T\mathbf{x}|\le\gamma\right\}, \tag{1}$$

where $(\mathbf{x}, d)$ denotes an IVDOP. For an arbitrary pair $(\mathbf{x}(n), d(n))$, we have the set of possible vectors given by the constraint set [10–13]

$$H(n) = \left\{\mathbf{w}\in\mathbb{R}^N : |d(n)-\mathbf{w}^T\mathbf{x}(n)|\le\gamma\right\}, \tag{2}$$

where $\mathbb{R}^N$ is a vector space with dimension $N$. If $n$ IVDOPs are used for training the filter, the membership set of the SMF algorithms can be written as [10–13]

$$\psi(n) = \bigcap_{i=1}^{n} H(i). \tag{3}$$

The SMF algorithm finds solutions within this exact set built from the observed IVDOPs, which can also be computed recursively as

$$\psi(n) = \psi(n-1)\cap H(n). \tag{4}$$

In fact, $\Theta$ is a subset of $\psi(n)$ at each iteration.
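To make these set definitions concrete, the short Python sketch below (our own illustrative code, not part of the original paper; the function names are hypothetical) checks whether a candidate vector lies in the constraint set $H(n)$ of (2) and in the membership set $\psi(n)$ of (3).

```python
import numpy as np

def in_constraint_set(w, x, d, gamma):
    # H(n): vectors w whose output error for the pair (x(n), d(n))
    # is bounded in magnitude by gamma
    return abs(d - w @ x) <= gamma

def in_membership_set(w, pairs, gamma):
    # psi(n): the intersection of H(1), ..., H(n) over all observed pairs
    return all(in_constraint_set(w, x, d, gamma) for x, d in pairs)
```

Any vector that passes `in_membership_set` is an acceptable SMF solution, which illustrates why SMF yields a set of estimates rather than a point estimate.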

2.2. SM-NLMS Algorithm

Based on SMF theory and the NLMS algorithm, the SM-NLMS has been developed by minimizing $\|\mathbf{w}(n+1)-\mathbf{w}(n)\|^2$ within the constraint set $H(n)$ [10]. Whenever $|e(n)| > \gamma$, the SM-NLMS algorithm finds the solution of the following optimization:

$$\min\|\mathbf{w}(n+1)-\mathbf{w}(n)\|^2 \quad \text{subject to} \quad d(n)-\mathbf{x}^T(n)\mathbf{w}(n+1) = \gamma. \tag{5}$$

We use the Lagrange multiplier method to find the minimum of (5). Therefore, the SM-NLMS's updating equation is

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}, \tag{6}$$

where

$$\alpha(n) = \begin{cases}1-\dfrac{\gamma}{|e(n)|}, & |e(n)|>\gamma,\\[4pt] 0, & \text{otherwise},\end{cases} \tag{7}$$

and $\delta$ is a small positive constant to avoid dividing by zero. From (6), we can see that the SM-NLMS's updating formula is similar to that of the basic NLMS algorithm. Herein, $\alpha(n)$ acts as a data-dependent step factor.
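The following Python fragment (a minimal sketch of ours, not code from the paper; `gamma` and `delta` stand for $\gamma$ and $\delta$) performs one SM-NLMS iteration according to (6) and (7); note that no computation beyond the error check is spent when the current estimate already lies in $H(n)$, which is the source of the algorithm's low average complexity.

```python
import numpy as np

def sm_nlms_step(w, x, d, gamma, delta=1e-6):
    """One SM-NLMS update: adapt only when |e(n)| exceeds the bound gamma."""
    e = d - w @ x                       # a priori estimation error e(n)
    if abs(e) > gamma:                  # the pair (x(n), d(n)) is informative
        alpha = 1.0 - gamma / abs(e)    # variable step factor alpha(n) of (7)
        w = w + alpha * e * x / (x @ x + delta)
    # otherwise w is already inside H(n) and is kept unchanged
    return w
```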

Inspired by the CS and ZA techniques, a sparse SM-NLMS algorithm denoted as the ZASM-NLMS was presented by exerting an $\ell_1$-norm constraint on the SM-NLMS's cost function; hence, the ZASM-NLMS solves the following problem [35]:

$$\min\left\{\|\mathbf{w}(n+1)-\mathbf{w}(n)\|^2 + \gamma_{\mathrm{ZA}}\|\mathbf{w}(n+1)\|_1\right\} \quad \text{subject to} \quad d(n)-\mathbf{x}^T(n)\mathbf{w}(n+1)=\gamma, \tag{8}$$

where $\gamma_{\mathrm{ZA}}$ denotes a ZA parameter. Similarly, we employ the Lagrange multiplier method to acquire the updating equation of the reported ZASM-NLMS, which is

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta} + \frac{\gamma_{\mathrm{ZA}}}{2}\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\mathbf{I}\right)\operatorname{sgn}\left(\mathbf{w}(n)\right), \tag{9}$$

where $\alpha(n)$ is given in (7). Since this updating equation of the ZASM-NLMS is complex, a simple method can be employed to solve (8), which brings about an optimization problem without a constraint. For the sake of compatibility with the traditional SM-NLMS, we can use the following equation to get the solution of (8) [35]:

$$\mathbf{w}(n+1) = \arg\min_{\mathbf{w}}\left\{\|\mathbf{w}-\mathbf{w}_{\mathrm{SM}}(n+1)\|^2 + \gamma_{\mathrm{ZA}}\|\mathbf{w}\|_1\right\}, \tag{10}$$

where $\mathbf{w}_{\mathrm{SM}}(n+1)$ is the SM-NLMS solution in (6); letting $\rho_{\mathrm{ZA}} = \gamma_{\mathrm{ZA}}/2$, we get the ZASM-NLMS's updating equation

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta} - \rho_{\mathrm{ZA}}\operatorname{sgn}\left(\mathbf{w}(n)\right), \tag{11}$$

where $\rho_{\mathrm{ZA}}$ denotes a ZA ability factor that balances the estimation misalignment against the sparse constraint on $\mathbf{w}(n)$, and $\operatorname{sgn}(\cdot)$ represents a sign function with a component-wise implementation, which is given by [14]

$$\operatorname{sgn}(w) = \begin{cases} w/|w|, & w\neq 0,\\ 0, & w=0. \end{cases} \tag{12}$$
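A one-iteration sketch of (11) in Python is given below (again our own illustrative code; `rho_za` stands for $\rho_{\mathrm{ZA}}$, and `np.sign` implements the component-wise sign function of (12)). The only difference from `sm_nlms_step` above is the final uniform zero-attraction line.

```python
import numpy as np

def zasm_nlms_step(w, x, d, gamma, rho_za, delta=1e-6):
    """One ZASM-NLMS update: an SM-NLMS step plus a uniform l1 zero attractor."""
    e = d - w @ x
    alpha = 1.0 - gamma / abs(e) if abs(e) > gamma else 0.0
    w = w + alpha * e * x / (x @ x + delta)
    return w - rho_za * np.sign(w)      # same attraction strength on every tap
```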

In contrast to the update equation in (6), the ZASM-NLMS has an extra term in its update equation, which provides a fast attraction that forces the coefficients that are small in magnitude toward zero rapidly. Moreover, the ZA strength is controlled by the parameter $\rho_{\mathrm{ZA}}$, and the ZASM-NLMS algorithm exerts the same zero attraction on all the channel coefficients. As a result, the ZASM-NLMS algorithm cannot effectively distinguish between the zero and nonzero channel coefficients, which may degrade its performance for less sparse systems. To improve its performance, an improved sparse SM-NLMS was presented by using a sum-log function to replace the $\ell_1$-norm in the ZASM-NLMS algorithm, which was named the RZASM-NLMS algorithm. Although the RZASM-NLMS algorithm improves the performance of the ZASM-NLMS, it also increases the computational complexity.

3. Proposed L0L1SM-NLMS Algorithms

As is known to us, the previously presented ZASM- and RZASM-NLMS algorithms estimate the sparse channel by using an $\ell_1$-norm penalty and a sum-log function penalty, respectively, to achieve good channel estimation performance. However, they may neglect the in-nature sparse structure of the unknown channel in practical applications. Inspired by the CS and ZA techniques [14–20], we propose an adaptive-norm penalized SM-NLMS algorithm by exerting $\ell_0$-norm and $\ell_1$-norm penalties on the general cost function of the basic SM-NLMS to construct a desired independent norm penalty. The proposed algorithm produces a desired zero-attraction term which divides the filter coefficients into large and small groups. By dynamically assigning the $\ell_0$-norm and $\ell_1$-norm penalties to the different channel taps, the proposed algorithm attracts the large and small groups of channel taps toward zero via the $\ell_0$-norm and $\ell_1$-norm penalties, respectively. Here, the mixture of the $\ell_0$-norm and $\ell_1$-norm penalties is implemented by using an $F$-norm penalty. The $F$-norm is described as

$$\|\mathbf{w}(n)\|_F = \sum_{k=0}^{N-1}|w_k(n)|^{p_k}, \tag{13}$$

where we have the definition $0 \le p_k \le 1$. Then, the $\ell_0$-norm and $\ell_1$-norm are

$$\|\mathbf{w}(n)\|_0 = \lim_{p\to 0}\sum_{k=0}^{N-1}|w_k(n)|^p, \tag{14}$$

$$\|\mathbf{w}(n)\|_1 = \sum_{k=0}^{N-1}|w_k(n)|. \tag{15}$$

We note that $\|\mathbf{w}(n)\|_0$ counts the nonzero channel coefficients in the sparse channel. To utilize the advantages of the $\ell_0$-norm and $\ell_1$-norm, the proposed adaptive-norm penalized SM-NLMS algorithm is given by

$$\min\left\{\|\mathbf{w}(n+1)-\mathbf{w}(n)\|^2+\gamma_F\|\mathbf{w}(n+1)\|_F\right\}\quad\text{subject to}\quad d(n)-\mathbf{x}^T(n)\mathbf{w}(n+1)=\gamma, \tag{16}$$

where $\gamma_F$ is a ZA parameter used for controlling the sparsity and convergence rate. As a result, the modified cost function of our developed algorithm is

$$L(n)=\|\mathbf{w}(n+1)-\mathbf{w}(n)\|^2+\gamma_F\|\mathbf{w}(n+1)\|_F+\lambda\left(d(n)-\mathbf{x}^T(n)\mathbf{w}(n+1)-\gamma\right). \tag{17}$$

The Lagrange multiplier method is also adopted to find the minimum of the cost function in (17). Then, we have

$$\frac{\partial L(n)}{\partial\mathbf{w}(n+1)}=2\left(\mathbf{w}(n+1)-\mathbf{w}(n)\right)+\gamma_F\,\mathbf{f}\left(\mathbf{w}(n+1)\right)-\lambda\mathbf{x}(n)=\mathbf{0}, \tag{18}$$

where the $k$th element of $\mathbf{f}(\mathbf{w})=\partial\|\mathbf{w}\|_F/\partial\mathbf{w}$ is $p_k\operatorname{sgn}(w_k)|w_k|^{p_k-1}$. From (18), we get

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\frac{\lambda}{2}\mathbf{x}(n)-\frac{\gamma_F}{2}\mathbf{f}\left(\mathbf{w}(n+1)\right), \tag{19}$$

together with the constraint

$$d(n)-\mathbf{x}^T(n)\mathbf{w}(n+1)=\gamma. \tag{20}$$

By left-multiplying $\mathbf{x}^T(n)$ on both sides of (19) and combining it with equation (20), we have

$$\frac{\lambda}{2}=\frac{\alpha(n)e(n)+(\gamma_F/2)\,\mathbf{x}^T(n)\mathbf{f}\left(\mathbf{w}(n+1)\right)}{\mathbf{x}^T(n)\mathbf{x}(n)}. \tag{21}$$

Then, substituting (21) into (19), we can get

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)}+\frac{\gamma_F}{2}\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)}-\mathbf{I}\right)\mathbf{f}\left(\mathbf{w}(n+1)\right). \tag{22}$$

In order to prevent dividing by zero, we add a very small positive constant $\delta$ to the denominators in (22). Then, the updating equation of our developed algorithm is

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}+\frac{\gamma_F}{2}\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\mathbf{I}\right)\mathbf{f}\left(\mathbf{w}(n+1)\right). \tag{23}$$

However, we note that a fixed exponent in the ZA term of the update equation will inevitably result in an error when an accurate sparse channel estimation is sought. Fortunately, the parameter $p_k$ is a variable value according to our previous definition. Thus, we can redefine the $F$-norm penalty further, which is illustrated as

$$\|\mathbf{w}(n+1)\|_F=\sum_{k=0}^{N-1}|w_k(n+1)|^{p_k(n)},\quad p_k(n)\in[0,1]. \tag{24}$$

By considering the definition in (24), (23) is updated to be

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}+\frac{\gamma_F}{2}\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\mathbf{I}\right)\mathbf{f}_n\left(\mathbf{w}(n+1)\right), \tag{25}$$

where the $k$th element of $\mathbf{f}_n(\mathbf{w})$ is $p_k(n)\operatorname{sgn}(w_k)|w_k|^{p_k(n)-1}$. From (25), we can find that the zero-attractor term suggests that the magnitudes $|w_k(n)|$ can be divided into small and large groups. Herein, we define the expectation value

$$E_w(n)=E\left[|w_k(n)|\right]=\frac{1}{N}\sum_{k=0}^{N-1}|w_k(n)|. \tag{26}$$

For the large group, we aim to minimize $\|\mathbf{w}(n+1)\|_0$, which is subject to the constraint in (20). As for the small group, we use the $\ell_1$-norm to balance the solution [18]. Thereby, $p_k(n)$ is assigned to 0 or 1 for $|w_k(n)|>E_w(n)$ or $|w_k(n)|\le E_w(n)$, respectively. Note that for $p_k(n)=0$ the corresponding element of $\mathbf{f}_n(\cdot)$ vanishes in the limit, whereas for $p_k(n)=1$ it reduces to the sign function. Till now, the proposed $F$-norm is separated into the $\ell_0$-norm and $\ell_1$-norm by considering the mean of the channel taps. Therefore, the proposed algorithm mentioned above can be regarded as an $\ell_0$-norm and $\ell_1$-norm penalized SM-NLMS for the large and small groups, respectively. Then, the updating equation in (25) is changed to be

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}+\rho\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\mathbf{I}\right)\mathbf{g}(n+1), \tag{27}$$

where $\rho=\gamma_F/2$ is a ZA strength controlling factor used to provide a balance between the convergence and the sparseness, the $k$th element of $\mathbf{g}(n+1)$ is $f_k(n)\operatorname{sgn}\left(w_k(n+1)\right)$, and $f_k(n)$ is

$$f_k(n)=\begin{cases}0,&|w_k(n)|>E_w(n),\\1,&|w_k(n)|\le E_w(n).\end{cases} \tag{28}$$

When the proposed algorithm is close to stability, we have $\operatorname{sgn}\left(\mathbf{w}(n+1)\right)\approx\operatorname{sgn}\left(\mathbf{w}(n)\right)$. Then, (27) is changed to be

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}+\rho\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\mathbf{I}\right)\mathbf{g}(n), \tag{29}$$

with $g_k(n)=f_k(n)\operatorname{sgn}\left(w_k(n)\right)$. In fact, we can use a matrix $\mathbf{G}(n)=\operatorname{diag}\left(f_0(n),f_1(n),\ldots,f_{N-1}(n)\right)$ to collect all the $f_k(n)$ in each iteration of (29). Then, the proposed algorithm can also be written as

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}+\rho\left(\frac{\mathbf{x}(n)\mathbf{x}^T(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\mathbf{I}\right)\mathbf{G}(n)\operatorname{sgn}\left(\mathbf{w}(n)\right). \tag{30}$$

As we know, the proposed algorithm in (30) is complex owing to the calculation of its last matrix term. We can use a simple method to find the solution of (16) by considering (24) and the separation of the $\ell_0$-norm and $\ell_1$-norm. Then, the L0L1SM-NLMS algorithm solves the unconstrained optimization problem

$$\mathbf{w}(n+1)=\arg\min_{\mathbf{w}}\left\{\|\mathbf{w}-\mathbf{w}_{\mathrm{SM}}(n+1)\|^2+\gamma_F\|\mathbf{w}\|_F\right\}. \tag{31}$$

Here, $\mathbf{w}_{\mathrm{SM}}(n+1)$ is the SM-NLMS solution given by (6), and we have

$$\mathbf{w}(n+1)=\mathbf{w}_{\mathrm{SM}}(n+1)-\rho\,\mathbf{G}(n)\operatorname{sgn}\left(\mathbf{w}_{\mathrm{SM}}(n+1)\right). \tag{32}$$

Then, by substituting (6) into (32) and using $\operatorname{sgn}\left(\mathbf{w}_{\mathrm{SM}}(n+1)\right)\approx\operatorname{sgn}\left(\mathbf{w}(n)\right)$, the updating equation of our L0L1SM-NLMS is obtained:

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\rho\,\mathbf{G}(n)\operatorname{sgn}\left(\mathbf{w}(n)\right). \tag{33}$$

The developed L0L1SM-NLMS has a last zero-attractor term, which exerts the $\ell_0$-norm or $\ell_1$-norm penalty on the channel taps according to the sparse channel property in practical applications. The $\ell_0$-norm or $\ell_1$-norm penalty is assigned via the matrix $\mathbf{G}(n)$ to the separated large and small groups. Thus, our L0L1SM-NLMS algorithm can enhance the convergence rate and reduce the estimation bias when estimating sparse channels. The developed L0L1SM-NLMS algorithm synthesizes the advantages of the $\ell_0$-norm and $\ell_1$-norm penalties and serves as a dynamic adaptive norm constraint which gives an $\ell_0$-norm penalty to the large channel taps and exerts an $\ell_1$-norm penalty on the small channel taps. Similar to the RZASM-NLMS, the proposed L0L1SM-NLMS algorithm can be further enhanced by introducing a reweighting factor into (33), resulting in a reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm. As a result, the related updating equation of the RL0L1SM-NLMS is

$$w_k(n+1)=w_k(n)+\alpha(n)\frac{e(n)x_k(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\rho_{\mathrm{RW}}\frac{f_k(n)\operatorname{sgn}\left(w_k(n)\right)}{1+\varepsilon|w_k(n)|}, \tag{34}$$

where $\rho_{\mathrm{RW}}$ is a ZA parameter and $\varepsilon$ is a positive parameter used to adjust the reweighting strength. By considering all the channel taps and the training signals, (34) is changed to be

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\frac{e(n)\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)+\delta}-\rho_{\mathrm{RW}}\,\mathbf{G}(n)\frac{\operatorname{sgn}\left(\mathbf{w}(n)\right)}{1+\varepsilon|\mathbf{w}(n)|}, \tag{35}$$

where the division is carried out element-wise. Here, the matrix $\mathbf{G}(n)$ is the same as that in (33). We observe that our RL0L1SM-NLMS algorithm has the reweighting factor $1/\left(1+\varepsilon|w_k(n)|\right)$ in its update equation in comparison with the L0L1SM-NLMS algorithm. Thus, we can use the reweighting factor to adjust the zero-attractor ability by properly choosing the parameter $\varepsilon$. A suitable $\varepsilon$ creates an attracting constraint on the small or the large grouped channel coefficients to effectively adjust the $\ell_1$-norm or $\ell_0$-norm constraints, respectively.
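Under our reading of (33) and (35), the sketch below (illustrative Python of ours, not the authors' code; names such as `rho` and `eps` for $\rho$ and $\varepsilon$ are assumptions) applies the zero attractor only to the small-magnitude group selected by $\mathbf{G}(n)$, with an optional reweighting factor for the RL0L1SM-NLMS variant.

```python
import numpy as np

def l0l1sm_nlms_step(w, x, d, gamma, rho, delta=1e-6, eps=None):
    """One L0L1SM-NLMS update; pass eps (> 0) to get the RL0L1SM-NLMS update.

    Taps with |w_k| <= mean(|w|) form the small (l1) group and receive the
    sign attraction; larger taps form the l0 group and are left unattracted.
    """
    e = d - w @ x
    alpha = 1.0 - gamma / abs(e) if abs(e) > gamma else 0.0
    w = w + alpha * e * x / (x @ x + delta)

    small = np.abs(w) <= np.mean(np.abs(w))   # diagonal selector matrix G(n)
    if eps is None:                           # L0L1SM-NLMS attractor, see (33)
        w[small] -= rho * np.sign(w[small])
    else:                                     # RL0L1SM-NLMS attractor, see (35)
        w[small] -= rho * np.sign(w[small]) / (1.0 + eps * np.abs(w[small]))
    return w
```

A design note: because the attractor acts only on the small group, the large dominant taps are not biased toward zero, which is consistent with the bias-reduction claim made for the $\ell_0$-norm penalty above.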

4. Simulation Results

Here, we investigate the behavior of the presented L0L1SM-NLMS and RL0L1SM-NLMS algorithms. Specifically, the convergence properties and steady-state misalignment of the developed L0L1SM- and RL0L1SM-NLMS algorithms are assessed over sparse channels with different sparsity levels $K$. Furthermore, we also examine the effects of the ZA parameters. The performance is obtained by computer simulation, and the behaviors of the L0L1SM- and RL0L1SM-NLMS algorithms are compared with the classic LMS, NLMS, and SM-NLMS algorithms and the popular ZA-LMS, RZA-LMS, and ZA-, RZA-, ZASM-, and RZASM-NLMS algorithms. We adopt the MSE criterion to evaluate the performance of the sparse channel estimators. Herein, the MSE is

$$\mathrm{MSE}(n) = E\left[\left\|\mathbf{w}^o-\mathbf{w}(n)\right\|_2^2\right],$$

where $\mathbf{w}^o$ denotes the actual channel vector and $\mathbf{w}(n)$ is its estimate at iteration $n$.

In all the experiments, the input $\mathbf{x}(n)$ is a Gaussian random signal which is independent of the additive noise $v(n)$. The power of $\mathbf{x}(n)$ is 1, and the white noise power is $\sigma_v^2$. The signal-to-noise ratio (SNR) is set to 20 dB in the experiments. In the first experiment, we consider a channel with only one nonzero tap ($K=1$) in order to find out the effects of $\rho$ and $\rho_{\mathrm{RW}}$ on our developed L0L1SM- and RL0L1SM-NLMS algorithms, respectively; the remaining simulation parameters of the L0L1SM-NLMS algorithm are held fixed. Figure 1 gives the effects of $\rho$ on our L0L1SM-NLMS algorithm. As we can see from Figure 1, the MSE is large for the largest tested $\rho$. When $\rho$ decreases, the MSE of our developed L0L1SM-NLMS algorithm is at first reduced, indicating that our L0L1SM-NLMS algorithm achieves a low estimation misalignment. If $\rho$ continues to decrease, the MSE of our L0L1SM-NLMS algorithm becomes larger again. Next, the effect of $\rho_{\mathrm{RW}}$ is shown in Figure 2. It is clear that the MSE gradually decreases as $\rho_{\mathrm{RW}}$ is reduced toward its best value, after which the MSE becomes larger with a further decrement of $\rho_{\mathrm{RW}}$. Thus, $\rho$ and $\rho_{\mathrm{RW}}$ should be properly selected to improve the behavior of the presented L0L1SM- and RL0L1SM-NLMS algorithms.
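For completeness, a minimal simulation driver is sketched below (our own illustrative Python, with assumed parameter values rather than the paper's; `l0l1sm_nlms_step` refers to the sketch in Section 3, and the bound $\gamma=\sqrt{5}\,\sigma_v$ is a common rule of thumb in the SM literature rather than a value taken from this paper). Averaging the recorded deviation curves over many independent runs yields MSE curves of the kind shown in Figures 1–10.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, n_iters, snr_db = 16, 1, 5000, 20        # assumed sizes, not the paper's

w_true = np.zeros(N)                            # sparse channel: K nonzero taps
w_true[rng.choice(N, size=K, replace=False)] = 1.0
sigma_v = 10 ** (-snr_db / 20)                  # unit input power => noise std

w = np.zeros(N)                                 # adaptive filter estimate
dev = np.empty(n_iters)                         # squared deviation per iteration
x_buf = np.zeros(N)                             # tapped-delay-line input vector
for n in range(n_iters):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()            # new Gaussian input sample
    d = w_true @ x_buf + sigma_v * rng.standard_normal()
    w = l0l1sm_nlms_step(w, x_buf, d,
                         gamma=np.sqrt(5) * sigma_v, rho=5e-4)
    dev[n] = np.sum((w_true - w) ** 2)          # average over runs to get MSE
```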

Then, we verify the estimation behaviors of the L0L1SM- and RL0L1SM-NLMS algorithms over a sparse channel with different sparsity levels, and their estimation behaviors are compared with the LMS, NLMS, SM-NLMS, and their sparse forms. The convergence of our presented L0L1SM- and RL0L1SM-NLMS algorithms compared with the NLMS-type algorithms is shown in Figure 3. We observe that both the L0L1SM- and RL0L1SM-NLMS algorithms converge faster than the previously discussed LMS and NLMS algorithms. It is worth pointing out that our developed RL0L1SM-NLMS algorithm provides the fastest convergence speed. The estimation performance in terms of steady-state misalignment is shown in Figures 4, 5, and 6 for $K=2$, $K=4$, and $K=8$, respectively. In these experiments, $\mu$ denotes the step size used for the NLMS algorithm and its sparse forms, while $\rho_{\mathrm{ZASM}}$, $\rho_{\mathrm{ZA}}$, and $\rho_{\mathrm{RZASM}}$ denote the ZA parameters of the already reported ZASM-, ZA-, and RZASM-NLMS algorithms, respectively. The steady-state misalignment of our L0L1SM- and RL0L1SM-NLMS algorithms exhibits lower estimation error floors than the traditional NLMS and SM-NLMS algorithms and their recently developed sparse variants, because our proposed algorithms can adaptively adjust the zero attraction by dividing the sparse channel taps into small and large groups and exerting $\ell_1$- and $\ell_0$-norm penalties on each group, respectively. The proposed RL0L1SM-NLMS algorithm provides the lowest estimation misalignment compared with the recently reported ZASM-NLMS and RZASM-NLMS algorithms when $K$ equals 2. When $K$ increases from 4 to 8, the steady-state misalignment becomes higher in comparison with that of $K=2$. However, the steady-state error is still lower than those of the SM-NLMS and NLMS algorithms and their related sparse algorithms realized by using ZA techniques.

The steady-state behaviors of our L0L1SM- and RL0L1SM-NLMS algorithms are then fully verified against the existing LMS-type algorithms, including the LMS, ZA-LMS, RZA-LMS, ZA-NLMS, and RZA-NLMS. Here, the parameters of our L0L1SM- and RL0L1SM-NLMS algorithms are the same as those in the last experiment, while the extra simulation parameters are the step size $\mu_{\mathrm{LMS}}$ of the LMS algorithm and the ZA parameters of the ZA- and RZA-LMS algorithms. The estimation behaviors for $K=2$, $K=4$, and $K=8$ are described in Figures 7, 8, and 9, respectively. It is found that our RL0L1SM-NLMS algorithm achieves fast convergence and possesses the lowest estimation misalignment. Additionally, the proposed L0L1SM-NLMS algorithm is also better than the mentioned algorithms in terms of MSE. When the number of dominant coefficients increases to 8, the steady-state misalignment of our developed algorithms increases. However, our developed algorithms still yield excellent steady-state behavior, implying that the presented L0L1SM- and RL0L1SM-NLMS algorithms are more powerful than the popular LMS algorithm and its sparse versions. In addition, the estimation behaviors of our constructed L0L1SM- and RL0L1SM-NLMS algorithms are compared in Figure 10. We can see that the estimation misalignment of the RL0L1SM-NLMS algorithm is lower than that of the L0L1SM-NLMS algorithm for every tested sparsity level $K$, indicating that the proposed RL0L1SM-NLMS algorithm is more robust.

Finally, the estimation behavior of our L0L1SM- and RL0L1SM-NLMS algorithms is studied by estimating an echo channel, which is illustrated in Figure 11 as an example. There are 16 nonzero taps ($K=16$) within the echo channel, and its length is 256 ($N=256$). In this experiment, the sparsity of a channel $\mathbf{w}$ is measured by $\xi(\mathbf{w}) = \frac{N}{N-\sqrt{N}}\left(1-\frac{\|\mathbf{w}\|_1}{\sqrt{N}\,\|\mathbf{w}\|_2}\right)$ [28, 35]. The SNR is 30 dB; the ZA parameters are adjusted for this channel, while the other parameters are the same as in the above experiments. Two echo channel configurations with different sparsity measures are used for the first 10000 and the second 10000 iterations, respectively. Herein, only the NLMS and its sparse forms are used for comparison since the NLMS algorithm outperforms the LMS algorithm [22]. The computer simulation result is given in Figure 12. As described in Figure 12, the steady-state misalignment of our developed L0L1SM- and RL0L1SM-NLMS algorithms is superior to that of the previously developed RZASM-NLMS algorithm. After the first 10000 iterations, the steady-state error of our algorithms becomes larger. However, their estimation behaviors are still better than those of the mentioned channel estimation algorithms. Therefore, we conclude that our developed L0L1SM- and RL0L1SM-NLMS algorithms have excellent performance in terms of convergence and estimation behavior for sparse adaptive channel estimation.

5. Conclusion

An L0L1SM-NLMS algorithm and an RL0L1SM-NLMS algorithm have been proposed, and their derivations have been presented in detail. The proposed L0L1SM-NLMS algorithm was realized by integrating an $F$-norm penalty into the SM-NLMS cost function and separating the $F$-norm into the $\ell_0$- and $\ell_1$-norms via the mean of the channel taps, while the RL0L1SM-NLMS algorithm was implemented by using a reweighting factor in the L0L1SM-NLMS algorithm to enhance the ZA strength. Both proposed algorithms provide a ZA-like attraction on the small channel coefficients and give rise to an $\ell_0$-norm penalty on the large channel taps. The proposed L0L1SM- and RL0L1SM-NLMS algorithms have been evaluated over a sparse channel with different sparsity levels and over a designated echo channel. The computer simulations obtained from these channels lead to the conclusion that our RL0L1SM-NLMS algorithm has the best behavior with respect to convergence and steady-state channel estimation. Furthermore, both the proposed L0L1SM- and RL0L1SM-NLMS algorithms provide better estimation behaviors than the traditional LMS, NLMS, and SM-NLMS algorithms and the previously reported popular sparse algorithms.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this article.

Acknowledgments

This work was partially supported by the National Key Research and Development Program of China-Government Corporation Special Program (2016YFE0111100), the Science and Technology Innovative Talents Foundation of Harbin (2016RAXXJ044), the Fundamental Research Funds for the Central Universities (HEUCFD1433, HEUCF160815), and the Projects for the Selected Returned Overseas Chinese Scholars of MOHRSS and Heilongjiang Province of China.