Abstract

An improved norm-constrained set-membership normalized least mean square (INCSM-NLMS) algorithm is proposed for adaptive sparse channel estimation (ASCE). The proposed INCSM-NLMS algorithm is implemented by incorporating an $\ell_p$-norm penalty into the cost function of the traditional set-membership normalized least mean square (SM-NLMS) algorithm, and it is therefore also denoted the $\ell_p$-norm-penalized SM-NLMS (LPSM-NLMS) algorithm. The derivation of the proposed LPSM-NLMS algorithm is given theoretically, resulting in a zero attractor in its iteration. By using this zero attractor, the convergence speed is effectively accelerated and the steady-state channel estimation error is noticeably reduced in comparison with the existing popular SM-NLMS algorithms for estimating exactly sparse multipath channels. The estimation behaviors are investigated over a typical sparse wireless multipath channel, a typical network echo channel, and an acoustic channel. The computer simulation results show that the proposed LPSM-NLMS algorithm outperforms the corresponding sparse SM-NLMS and traditional SM-NLMS algorithms when the channels are exactly sparse.

1. Introduction

Set-membership filtering (SMF) has been extensively studied for adjusting filter coefficients so that the output error is limited by a prior bound [1–3]. Based on the SMF technique, a normalized least mean square (NLMS) algorithm with SMF, denoted the set-membership NLMS (SM-NLMS) algorithm, has been proposed to reduce the number of data updates and iterations while achieving the same or a lower steady-state error floor [1]. Since the prior bound is important for the SM-NLMS algorithm, error-bound selection methods such as the parameter-dependent technique have been proposed [4–6]. In recent years, the SM-NLMS algorithm has been widely used for channel estimation, adaptive control, echo cancellation, and system identification [3]. Although the SM-NLMS algorithm can provide excellent estimation performance, its behavior may degrade when estimating sparse systems such as sparse multipath wireless communication channels.

As we know, broadband techniques have become an attractive choice for next-generation mobile communications [7–9]. For example, the 5G communication system occupies a wide bandwidth. Additionally, coherent detection needs accurate channel state information (CSI) at the receiver side [7]. Moreover, the CSI is usually obtained by adaptive channel estimation (ACE), which is realized using the least mean square (LMS), NLMS, and SM-NLMS algorithms. On the other hand, the broadband multipath channel measurements given by Vuokko et al. show that the broadband channel has only a few dominant channel coefficients with nonzero magnitudes, while most of the channel taps are zero [10]. Such a broadband channel is called a sparse channel. Recently, sparse channel estimation has been widely studied to exploit this inherent sparse structure, implemented on the basis of compressed sensing (CS) [11–15] and sparse adaptive filtering (SAF) algorithms [16–23]. Both approaches can achieve good channel estimation. However, CS-based broadband channel estimation is more complex than the SAF algorithms because it requires the construction of a measurement matrix that satisfies the restricted isometry property [24].

SAF algorithms are usually implemented on the basis of proportionate [25, 26] and zero-attracting (ZA) techniques [16–23]. Within the framework of the proportionate technique, a typical algorithm is the proportionate NLMS (PNLMS), which assigns an independent step size to each channel coefficient based on the latest estimate [25]; as a result, large step sizes are exerted on large coefficients. Although this improves the channel estimation performance at the initial stage, the convergence slows down quickly afterwards and may even become worse than that of the traditional NLMS algorithm. The zero-attracting technique has been proposed to obtain the sparse least mean square (LMS) algorithm, which is realized by exerting an $\ell_1$-norm penalty on the channel vector and incorporating it into the cost function of the traditional LMS algorithm [16]. The principle of the zero-attracting LMS algorithms is to use various norm penalties in their cost functions to form zero attractors that attract the small channel coefficients to zero quickly. Owing to its excellent performance, the ZA technique has been extended to the NLMS [7], affine projection [27–30], least mean fourth [31, 32], and other adaptive filtering algorithms [33, 34]. Among these ZA adaptive filtering (AF) algorithms, the sparse NLMS algorithm has been widely studied and applied for broadband channel estimation [7]. Recently, the ZA technique has been used to derive sparse SM-NLMS algorithms for low-complexity channel estimation applications, including the ZA SM-NLMS (ZASM-NLMS) and reweighted ZASM-NLMS (RZASM-NLMS) algorithms [23].

In this paper, an improved norm-constrained set-membership normalized least mean square (INCSM-NLMS) algorithm is proposed for adaptive sparse channel estimation (ASCE). It is implemented by modifying the cost function of the traditional SM-NLMS algorithm via an added $\ell_p$-norm penalty term and is therefore also denoted the $\ell_p$-norm-penalized SM-NLMS (LPSM-NLMS) algorithm. The derivation of the proposed LPSM-NLMS algorithm is given in detail. Our simulation results obtained from sparse channel estimation show that the proposed LPSM-NLMS algorithm outperforms the standard SM-NLMS, ZASM-NLMS, and RZASM-NLMS algorithms with respect to the channel estimation behavior.

The rest of this paper is organized as follows. Section 2 reviews the SAF theory and the standard SM-NLMS algorithm for channel estimation. Section 3 presents the proposed LPSM-NLMS algorithm and its derivation in detail. Computer simulation results are given in Section 4, comparing the estimation behaviors with those of the standard SM-NLMS, ZASM-NLMS, and RZASM-NLMS algorithms. Section 5 concludes this paper.

2. SAF Theory and SM-NLMS Algorithm

2.1. SAF Theory

Consider an input training signal $x(n)$, an additive white Gaussian noise (AWGN) signal $v(n)$, and a received signal $d(n)$ to illustrate a classical adaptive channel estimation. The input signal is transmitted over an unknown FIR channel $\mathbf{w}=[w_0,w_1,\ldots,w_{N-1}]^{T}$, and the output of the channel is denoted by $y(n)$, which is obtained by $y(n)=\mathbf{w}^{T}\mathbf{x}(n)$ with the input vector $\mathbf{x}(n)=[x(n),x(n-1),\ldots,x(n-N+1)]^{T}$. At the receiver side, the received signal is given by $d(n)=\mathbf{w}^{T}\mathbf{x}(n)+v(n)$. The adaptive filter mimics the unknown channel by minimizing the instantaneous estimation error, which is defined as the difference between the received signal and the estimated output. Thus, we have $e(n)=d(n)-\hat{y}(n)$, where $\hat{y}(n)=\mathbf{w}^{T}(n)\mathbf{x}(n)$ denotes the estimated output and $\mathbf{w}(n)$ is the estimated channel vector at iteration $n$.
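To make the model concrete, the following Python sketch simulates this setup; the channel length, sparsity level, noise power, and variable names are illustrative assumptions rather than values taken from this paper.

```python
import numpy as np

# Minimal sketch of the signal model above (illustrative parameters).
rng = np.random.default_rng(0)

N = 16                                     # channel length (assumed)
w = np.zeros(N)                            # unknown sparse FIR channel w
w[rng.choice(N, size=2, replace=False)] = rng.standard_normal(2)

n_samples = 1000
x = rng.standard_normal(n_samples)         # white Gaussian training input x(n)
v = 0.01 * rng.standard_normal(n_samples)  # AWGN v(n)
d = np.convolve(x, w)[:n_samples] + v      # received signal d(n) = w^T x(n) + v(n)

# For an estimate w(n) at time n, the instantaneous error is
# e(n) = d(n) - w(n)^T x(n), with x(n) the N most recent input samples.
```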

As we know, the classical adaptive filtering algorithms estimate the unknown channel by minimizing an error function related to the instantaneous error $e(n)$ [3, 35, 36]. For example, the LMS algorithm uses $e^2(n)$. Also, the NLMS algorithm introduces normalization by the power of the input signal to improve the performance of the LMS algorithm. Recently, a special bound has been placed on the magnitude of $e(n)$, as in the SAF scheme [1, 2]. The SAF algorithms use a subspace of interest to model the estimation problem. We define a model space $\mathcal{S}$, which is comprised of the input vector-desired output pairs (IVDOPs) $(\mathbf{x},d)$. In the SAF theory, an error criterion is utilized to bound $e(n)$: the estimation error is required to stay within a specified bound $\gamma$ for all the data obtained from $\mathcal{S}$. Therefore, the SMF algorithm chooses a special set in the parameter space, which is different from a point estimate. The SMF principle is expressed as

$$\Theta=\bigcap_{(\mathbf{x},d)\in\mathcal{S}}\bigl\{\mathbf{w}\in\mathbb{R}^{N}:\ |d-\mathbf{w}^{T}\mathbf{x}|\le\gamma\bigr\},\tag{1}$$

where $(\mathbf{x},d)$ is the IVDOP. For any combination of $(\mathbf{x}(n),d(n))$, we can get the set of possible vectors by using the following formula [1–3]:

$$\mathcal{H}(n)=\bigl\{\mathbf{w}\in\mathbb{R}^{N}:\ |d(n)-\mathbf{w}^{T}\mathbf{x}(n)|\le\gamma\bigr\},\tag{2}$$

where $\mathbb{R}^{N}$ is a vector space whose dimension is $N$. If $n$ IVDOPs are utilized to train the filter, the measurement set for the SAF algorithms is written as [1, 2]

$$\psi(n)=\bigcap_{i=1}^{n}\mathcal{H}(i).\tag{3}$$

The SAF algorithms find the solutions that belong to the exact set containing the $n$ observed IVDOPs:

$$\psi(n)=\bigl\{\mathbf{w}\in\mathbb{R}^{N}:\ |d(i)-\mathbf{w}^{T}\mathbf{x}(i)|\le\gamma,\ i=1,\ldots,n\bigr\}.\tag{4}$$

Actually, the set $\Theta$ is a subset of the exact membership set $\psi(n)$ in each iteration.
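As an illustration of these set definitions, the short Python sketch below (with hypothetical function names) tests whether a candidate coefficient vector belongs to the constraint set $\mathcal{H}(n)$ of (2) and to the exact membership set $\psi(n)$ of (3) and (4).

```python
import numpy as np

def in_constraint_set(w, x_n, d_n, gamma):
    """Membership test for H(n) in (2): |d(n) - w^T x(n)| <= gamma."""
    return abs(d_n - w @ x_n) <= gamma

def in_exact_membership_set(w, X, d, gamma):
    """Membership test for psi(n) in (3)-(4): w must satisfy every observed pair."""
    return all(in_constraint_set(w, x_i, d_i, gamma) for x_i, d_i in zip(X, d))
```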

2.2. SM-NLMS Algorithm

The SAF technique is integrated into the NLMS algorithm to obtain the traditional set-membership NLMS (SM-NLMS) algorithm. From our previous knowledge of SAF algorithms, the update equation of the traditional SM-NLMS algorithm [1–3] is similar to that of the classical NLMS algorithm [3, 35, 36]. However, the principle of the SM-NLMS algorithm is different: it minimizes $\|\mathbf{w}(n+1)-\mathbf{w}(n)\|_2^2$, where $\mathbf{w}(n)$ denotes the estimated channel vector at iteration $n$, under the constraint $\mathbf{w}(n+1)\in\mathcal{H}(n)$. For any $|e(n)|>\gamma$, we use the following optimization problem to get the solution of the SM-NLMS algorithm [1–3]:

$$\min_{\mathbf{w}(n+1)}\ \|\mathbf{w}(n+1)-\mathbf{w}(n)\|_2^2\quad\text{subject to}\quad d(n)-\mathbf{w}^{T}(n+1)\mathbf{x}(n)=\gamma\operatorname{sgn}\bigl(e(n)\bigr).\tag{5}$$

To find the solution of (5), the Lagrange multiplier method is employed to calculate its minimization. As a result, the cost function of the SM-NLMS algorithm can be written as

$$J(n)=\|\mathbf{w}(n+1)-\mathbf{w}(n)\|_2^2+\lambda\bigl(d(n)-\mathbf{w}^{T}(n+1)\mathbf{x}(n)-\gamma\operatorname{sgn}(e(n))\bigr).\tag{6}$$

Setting the gradients of (6) with respect to $\mathbf{w}(n+1)$ and $\lambda$ to zero, we can get the update equation of the SM-NLMS algorithm written as

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\,\frac{e(n)\,\mathbf{x}(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n)},\tag{7}$$

which is also given by

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\,\frac{e(n)\,\mathbf{x}(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n)+\delta},\tag{8}$$

where

$$\alpha(n)=\begin{cases}1-\dfrac{\gamma}{|e(n)|}, & |e(n)|>\gamma,\\[4pt] 0, & \text{otherwise}.\end{cases}\tag{9}$$

Here, $\delta$ is a regularization parameter which is always very small and is used to avoid dividing by zero. From (9), we can see that the SM-NLMS algorithm improves on the classical NLMS algorithm by introducing the adaptive step size $\alpha(n)$ [1–3].
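A minimal Python sketch of one SM-NLMS iteration following (8) and (9) is given below; the function name and the default value of $\delta$ are assumptions for illustration. The data-selective character of the algorithm is visible in the early return: no update is made when $|e(n)|\le\gamma$.

```python
import numpy as np

def sm_nlms_update(w_hat, x_n, d_n, gamma, delta=1e-6):
    """One SM-NLMS iteration, following (8)-(9)."""
    e_n = d_n - w_hat @ x_n                 # instantaneous error e(n)
    if abs(e_n) <= gamma:                   # data-selective behavior:
        return w_hat                        # no update when the bound holds
    alpha = 1.0 - gamma / abs(e_n)          # adaptive step size alpha(n) in (9)
    return w_hat + alpha * e_n * x_n / (x_n @ x_n + delta)
```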

3. Proposed LPSM-NLMS Algorithms

Although the zero-attracting SM-NLMS (ZASM-NLMS) and reweighted ZASM-NLMS (RZASM-NLMS) algorithms have been proposed and can estimate sparse channels well, the sparsity of the multipath channel needs to be exploited further. In this section, we derive our proposed LPSM-NLMS algorithm based on the CS concepts reported in [36, 37]. The proposed LPSM-NLMS algorithm is carried out by incorporating an $\ell_p$-norm [18] constraint on the channel coefficient vector into the cost function of the traditional SM-NLMS algorithm. For $|e(n)|>\gamma$, the new cost function of our proposed LPSM-NLMS algorithm implements the following minimization problem [23]:

$$\min_{\mathbf{w}(n+1)}\ \bigl\{\|\mathbf{w}(n+1)-\mathbf{w}(n)\|_2^2+\gamma_{\mathrm{LP}}\|\mathbf{w}(n+1)\|_p\bigr\}\quad\text{subject to}\quad d(n)-\mathbf{w}^{T}(n+1)\mathbf{x}(n)=\gamma\operatorname{sgn}\bigl(e(n)\bigr),\tag{10}$$

where $\gamma_{\mathrm{LP}}$ is a zero-attracting strength controlling factor and $\|\mathbf{w}\|_p=\bigl(\sum_{i=0}^{N-1}|w_i|^p\bigr)^{1/p}$ with $0<p\le1$. Similar to the ZASM-NLMS and RZASM-NLMS algorithms, the Lagrange multiplier method is employed to find the solution of the above optimization problem. The cost function of the proposed LPSM-NLMS algorithm is then given by

$$J(n)=\|\mathbf{w}(n+1)-\mathbf{w}(n)\|_2^2+\gamma_{\mathrm{LP}}\|\mathbf{w}(n+1)\|_p+\lambda\bigl(d(n)-\mathbf{w}^{T}(n+1)\mathbf{x}(n)-\gamma\operatorname{sgn}(e(n))\bigr).\tag{11}$$

On the basis of the Lagrange multiplier method, we let

$$\frac{\partial J(n)}{\partial\mathbf{w}(n+1)}=\mathbf{0}\tag{12}$$

and

$$\frac{\partial J(n)}{\partial\lambda}=0.\tag{13}$$

Then, from (12), we can get

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\frac{\lambda}{2}\,\mathbf{x}(n)-\frac{\gamma_{\mathrm{LP}}}{2}\,\frac{\partial\|\mathbf{w}(n+1)\|_p}{\partial\mathbf{w}(n+1)}.\tag{14}$$

By taking (13) and (14) into consideration, we can get

$$\frac{\lambda}{2}=\frac{1}{\mathbf{x}^{T}(n)\mathbf{x}(n)}\left(\alpha(n)\,e(n)+\frac{\gamma_{\mathrm{LP}}}{2}\,\mathbf{x}^{T}(n)\,\frac{\partial\|\mathbf{w}(n+1)\|_p}{\partial\mathbf{w}(n+1)}\right).\tag{15}$$

Combining (14) and (15), we obtain

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\,\frac{e(n)\,\mathbf{x}(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n)}-\frac{\gamma_{\mathrm{LP}}}{2}\left(\mathbf{I}-\frac{\mathbf{x}(n)\mathbf{x}^{T}(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n)}\right)\frac{\partial\|\mathbf{w}(n+1)\|_p}{\partial\mathbf{w}(n+1)},\tag{16}$$

where $\alpha(n)=1-\gamma/|e(n)|$. From the cost function of the proposed LPSM-NLMS algorithm, we know that the difference between the SM-NLMS and LPSM-NLMS algorithms is the minimization of the additional term $\gamma_{\mathrm{LP}}\|\mathbf{w}(n+1)\|_p$. Since (16) still involves the unknown $\mathbf{w}(n+1)$ in its gradient term, we approximate the $\ell_p$-norm by its linearization around the latest estimate $\mathbf{w}(n)$, and thus we can simplify the optimization of the LPSM-NLMS algorithm to

$$\min_{\mathbf{w}(n+1)}\ \bigl\{\|\mathbf{w}(n+1)-\mathbf{w}(n)\|_2^2+\gamma_{\mathrm{LP}}\,\mathbf{g}^{T}(n)\,\mathbf{w}(n+1)\bigr\}\quad\text{subject to}\quad d(n)-\mathbf{w}^{T}(n+1)\mathbf{x}(n)=\gamma\operatorname{sgn}\bigl(e(n)\bigr),\tag{17}$$

where $\mathbf{g}(n)$ is defined as $\mathbf{g}(n)=\partial\|\mathbf{w}(n)\|_p/\partial\mathbf{w}(n)$. By using the Lagrange multiplier method on (17) and neglecting the projection of $\mathbf{g}(n)$ onto $\mathbf{x}(n)$, we can get the update equation of the LPSM-NLMS algorithm, which is given by

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\,\frac{e(n)\,\mathbf{x}(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n)}-\rho_{\mathrm{LP}}\,\mathbf{g}(n),\tag{18}$$

where $\alpha(n)$ is the same as in (9). Here, the zero-attracting factor $\rho_{\mathrm{LP}}=\gamma_{\mathrm{LP}}/2$ is introduced to give a balance between the channel estimation behavior and the penalty strength of $\|\mathbf{w}(n)\|_p$. Here, $\mathbf{g}(n)$ is defined elementwise as

$$g_i(n)=\frac{\partial\|\mathbf{w}(n)\|_p}{\partial w_i(n)}=\frac{\|\mathbf{w}(n)\|_p^{1-p}\operatorname{sgn}\bigl(w_i(n)\bigr)}{|w_i(n)|^{1-p}}.\tag{19}$$

In order to avoid dividing by zero, (18) is rewritten as

$$\mathbf{w}(n+1)=\mathbf{w}(n)+\alpha(n)\,\frac{e(n)\,\mathbf{x}(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n)+\delta}-\rho_{\mathrm{LP}}\,\frac{\|\mathbf{w}(n)\|_p^{1-p}\operatorname{sgn}\bigl(\mathbf{w}(n)\bigr)}{\varepsilon_p+|\mathbf{w}(n)|^{1-p}},\tag{20}$$

where $\varepsilon_p>0$ is a small threshold and the sign, absolute value, and division operate elementwise on $\mathbf{w}(n)$.

It is worth noting that the proposed LPSM-NLMS algorithm introduces an additional zero-attracting term that forces small filter coefficients toward zero quickly. In the LPSM-NLMS algorithm, the zero-attracting strength can be controlled by adjusting the parameter $\rho_{\mathrm{LP}}$.
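The following Python sketch implements one LPSM-NLMS iteration following (20); the function name, the default values of $p$, $\delta$, and $\varepsilon_p$, and the choice to apply the zero attractor at every iteration are assumptions for illustration.

```python
import numpy as np

def lpsm_nlms_update(w_hat, x_n, d_n, gamma, rho_lp,
                     p=0.5, delta=1e-6, eps_p=0.05):
    """One LPSM-NLMS iteration, following (20)."""
    e_n = d_n - w_hat @ x_n
    if abs(e_n) > gamma:                    # SM-NLMS part of the update
        alpha = 1.0 - gamma / abs(e_n)
        w_hat = w_hat + alpha * e_n * x_n / (x_n @ x_n + delta)
    # l_p-norm zero attractor: gradient of ||w||_p regularized by eps_p so
    # that |w_i|^(1-p) in the denominator cannot cause division by zero.
    norm_p = np.sum(np.abs(w_hat) ** p) ** (1.0 / p)
    attractor = (norm_p ** (1.0 - p) * np.sign(w_hat)
                 / (eps_p + np.abs(w_hat) ** (1.0 - p)))
    return w_hat - rho_lp * attractor
```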

4. Results and Discussion

In order to verify the performance of the proposed LPSM-NLMS algorithm, we employ a sparse multipath channel, a network echo channel, and an acoustic echo channel. Its performance is compared with that of the NLMS, SM-NLMS, ZA-NLMS, ZASM-NLMS, and RZASM-NLMS algorithms with respect to steady-state behavior. A Monte Carlo simulation with 100 independent runs is used to obtain each simulation point.

In the first investigation, a sparse multipath channel is used to assess the estimation behavior of the proposed LPSM-NLMS algorithm. Here, the sparse channel has 16 taps, $K$ of which are dominant, which is similar to [7, 16–23, 26–34, 38]. The dominant channel taps are drawn from a Gaussian distribution subject to $\mathbb{E}\{\|\mathbf{w}\|_2^2\}=1$, and they are randomly distributed within the length of the sparse channel. The input training signal is a white Gaussian random signal, and the channel output is corrupted by an AWGN that is assumed to be independent of the input signal. The channel estimation performance is investigated under signal-to-noise ratios (SNR) of 10 dB, 20 dB, and 30 dB and is evaluated by the mean square error (MSE), defined as $\mathrm{MSE}(n)=\mathbb{E}\{\|\mathbf{w}-\mathbf{w}(n)\|_2^2\}$. In this simulation, the parameters are optimized to obtain the same convergence rate at the initial stage, where $\mu$ and $\mu_{\mathrm{ZA}}$ denote the step sizes of the NLMS and ZA-NLMS algorithms, respectively, and $\rho_{\mathrm{ZA}}$, $\rho_{\mathrm{ZASM}}$, and $\rho_{\mathrm{RZASM}}$ denote the zero-attraction factors of the ZA-NLMS, ZASM-NLMS, and RZASM-NLMS algorithms, respectively. The channel estimation behaviors with $K=1$ and $K=2$ are shown in Figures 1, 2, and 3 for SNR = 10 dB, SNR = 20 dB, and SNR = 30 dB, respectively. We can see that our proposed LPSM-NLMS algorithm achieves the fastest convergence speed and the smallest MSE. When $K$ increases from 1 to 2, the MSE deteriorates slightly, which is caused by the reduced sparsity. It is worth noting that the proposed LPSM-NLMS algorithm obtains larger gains as the SNR ranges from 10 dB to 30 dB.
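The structure of this experiment can be sketched in Python as below, reusing the update functions given earlier; the SNR, channel length, and the placeholder parameter values in the commented example are assumptions, not the tuned settings used in the paper.

```python
import numpy as np

def run_trial(update, w, n_iter, snr_db, rng):
    """One adaptive-estimation run; returns the squared-error curve."""
    n_taps = len(w)
    x = rng.standard_normal(n_iter + n_taps)
    noise_std = 10.0 ** (-snr_db / 20.0)       # unit-power signal assumed
    w_hat = np.zeros(n_taps)
    sq_err = np.zeros(n_iter)
    for n in range(n_iter):
        x_n = x[n:n + n_taps][::-1]            # regressor x(n)
        d_n = w @ x_n + noise_std * rng.standard_normal()
        w_hat = update(w_hat, x_n, d_n)
        sq_err[n] = np.sum((w - w_hat) ** 2)
    return sq_err

def monte_carlo_mse(update, n_taps=16, n_dominant=1, n_iter=1000,
                    n_runs=100, snr_db=20, seed=0):
    """Average MSE(n) = E{||w - w(n)||^2} over random sparse channels."""
    rng = np.random.default_rng(seed)
    mse = np.zeros(n_iter)
    for _ in range(n_runs):
        w = np.zeros(n_taps)
        idx = rng.choice(n_taps, size=n_dominant, replace=False)
        w[idx] = rng.standard_normal(n_dominant)
        w /= np.linalg.norm(w)                 # enforce E{||w||^2} = 1
        mse += run_trial(update, w, n_iter, snr_db, rng)
    return mse / n_runs

# Example with placeholder parameters (not the paper's tuned values):
# mse_sm = monte_carlo_mse(lambda w, x, d: sm_nlms_update(w, x, d, gamma=0.03))
# mse_lp = monte_carlo_mse(lambda w, x, d: lpsm_nlms_update(w, x, d, gamma=0.03,
#                                                           rho_lp=5e-4))
```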

Next, we verify our proposed LPSM-NLMS algorithm over a typical network echo channel. An example of a typical network echo channel is shown in Figure 4, where the channel has a length of 256 and 16 dominant taps. In this experiment, we use the sparseness measure of [38, 39] to quantify the sparsity of the network echo channel, and two sparseness levels are utilized to assess the estimation performance of the proposed LPSM-NLMS algorithm. To obtain the same initial convergence speed, the simulation parameters are adjusted accordingly. Figure 5 illustrates the tracking behavior and the channel estimation performance of the proposed LPSM-NLMS algorithm driven by a white input signal. It is found that the proposed LPSM-NLMS algorithm outperforms the traditional NLMS, SM-NLMS, ZA-NLMS, ZASM-NLMS, and RZASM-NLMS algorithms with respect to both convergence speed and MSE. Even when the sparseness measure is reduced, our proposed LPSM-NLMS algorithm remains superior to the existing sparse SM-NLMS algorithms.
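The formula of the sparseness measure is not reproduced above; a widely used definition in the sparse echo channel literature, which we assume is the one intended by [38, 39], is sketched below in Python.

```python
import numpy as np

def sparseness_measure(w):
    """Hoyer-type sparseness measure in [0, 1]; larger means sparser:
    xi(w) = N/(N - sqrt(N)) * (1 - ||w||_1 / (sqrt(N) * ||w||_2))."""
    n = w.size
    l1 = np.sum(np.abs(w))
    l2 = np.linalg.norm(w)                  # w is assumed to be nonzero
    return n / (n - np.sqrt(n)) * (1.0 - l1 / (np.sqrt(n) * l2))
```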

Finally, an acoustic impulse response (AIR) obtained by using the image method in [40, 41] is employed to further investigate the performance of the proposed LPSM-NLMS algorithm. Here, an example of the AIR is utilized to verify the proposed LPSM-NLMS algorithm; the channel is generated for a rectangular room at a sampling frequency of 16 kHz. Figure 6(a) gives the normalized AIR channel (the MATLAB code for producing this channel can be found at http://www.commsp.ee.ic.ac.uk/~pl103/research.html). Moreover, the loudspeaker-microphone distance is 2 m in a loudspeaker-room-microphone system (LRMS) whose reflection coefficient is set to 0.3. In this simulation, the parameters are again chosen to give the same initial convergence rate. We can see that our proposed LPSM-NLMS algorithm is still superior to the NLMS, SM-NLMS, ZA-NLMS, ZASM-NLMS, and RZASM-NLMS algorithms. Additionally, the proposed LPSM-NLMS algorithm converges faster than the traditional SM-NLMS algorithm and also achieves some gain in comparison with the previously presented RZASM-NLMS algorithm. Based on the channel estimation performance obtained from the simulation results, we can say that our proposed LPSM-NLMS algorithm is insensitive to the sparseness level of the channel and is robust for sparse channel estimation applications.

5. Conclusion

In this paper, a robust LPSM-NLMS algorithm has been proposed, and its channel estimation behavior has been evaluated over a sparse multipath channel, a typical network echo channel, and a room echo channel. An $\ell_p$-norm constraint on the channel coefficients has been incorporated into the cost function of the traditional SM-NLMS algorithm to construct the proposed LPSM-NLMS algorithm. As a result, a flexible zero attractor is obtained in the iteration. The computer simulation results verify that the proposed LPSM-NLMS algorithm achieves faster convergence and better channel estimation behavior in comparison with the previously proposed sparse SM-NLMS algorithms as well as the traditional SM-NLMS and NLMS algorithms.

Competing Interests

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was partially supported by the Navy Defense Foundation of China (4010403020102 and 4010403020103), the National Natural Science Foundation of China (61571149), the Science and Technology Innovative Talents Foundation of Harbin (2013RFXXJ083 and 2016RAXXJ044), the International Science and Technology Cooperation Program of China (2014DFR10240), the Projects for the Selected Returned Overseas Chinese Scholars of Heilongjiang Province of China, and the Fundamental Research Funds for the Central Universities (HEUCF131602 and HEUCFD1433).