Abstract

The leaky LMS algorithm has been extensively studied because of its control of parameter drift, an unexpected behavior linked to inadequate excitation in the input sequence. Leaky LMS algorithms generally use a fixed step size, which forces a compromise between a fast convergence rate and a small steady-state misalignment. In this paper, a variable step-size (VSS) leaky LMS algorithm is proposed, whose step-size rule combines a time-averaged estimate of the error autocorrelation with a time-averaged estimate of a normalized quantity. Incorporating the proposed variable step-size method into the leaky LMS algorithm effectively suppresses noise interference, so that fast early convergence and a small final misalignment are obtained together. Simulation results demonstrate that the proposed algorithm outperforms existing variable step-size algorithms under inadequate excitation, and that it is comparable in performance to other variable step-size algorithms when the excitation is adequate.

1. Introduction

The least-mean-square (LMS) algorithm is widely used in many fields, such as system identification [1, 2], echo cancellation [3], adaptive channel equalization [4], adaptive antenna arrays [5], and adaptive spectral line enhancement [6], owing to its good robustness, low computational complexity, and simple structure. However, it is well known that the convergence rate and the steady-state error of the conventional LMS algorithm are directly related to its adaptation step size: the convergence rate increases with the step size, while the steady-state error moves in the opposite direction [7]. The choice of step size therefore embodies a trade-off, and variable step-size (VSS) LMS algorithms, which adjust the step size according to the proximity of the current estimate to the optimal one, have been proposed to resolve it. The criteria used to adjust the step size include the squared instantaneous error [8, 9], sign changes of successive samples of the gradient [10–12], the cross-correlation of input and error [13], and so on.

Although VSS-LMS can balance the convergence rate and steady-state error of the LMS algorithm to some extent, it may produce unbounded parameter estimates when the excitation is inadequate. To solve this problem, a leakage factor is introduced into the iterations [14–16]. The leaky LMS algorithm can improve the robustness of channel equalization [17, 18] and noise reduction [19], in both cases because of its handling of filter drift. Furthermore, some studies have shown that the step size has a considerable impact on the performance of the leaky LMS algorithm [20], and this paper therefore seeks to enhance the leaky LMS algorithm with a variable step size.

In this paper, the convergence condition of the leaky LMS algorithm with a variable step size is analyzed, and the admissible range of the step size is given. In Section 2, based on an analysis of the leaky LMS and variable step-size algorithms, a variable step-size leaky LMS algorithm is proposed. Section 3 analyzes the performance of the proposed algorithm, and Section 4 presents simulations that show the validity of the analytical results. The conclusions are given in Section 5.

2. VSS-L-LMS Adaptive Algorithm

In the conventional L-LMS algorithm [14], the coefficients of the adaptive filter are updated as
$$\mathbf{w}(n+1) = \gamma\,\mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n), \qquad (1)$$
where $\gamma$ is the leakage factor with $0 \ll \gamma < 1$, $\mu$ is the adaptive step size, $n$ is the discrete time index, $\mathbf{w}(n)$ is the weight coefficient vector of the adaptive filter at time $n$, and $\mathbf{x}(n)$ is the input signal vector at time $n$. $\mathbf{w}(n)$ and $\mathbf{x}(n)$ can be described as
$$\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{N-1}(n)]^T, \quad \mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^T, \qquad (2)$$
where $N$ is the filter order. The instantaneous estimation error is
$$e(n) = d(n) - \mathbf{w}^T(n)\,\mathbf{x}(n), \qquad (3)$$
where the desired signal is
$$d(n) = \mathbf{w}_{\mathrm{opt}}^T(n)\,\mathbf{x}(n) + \eta(n), \qquad (4)$$
where $\eta(n)$ is an independent noise sequence with zero mean and $\mathbf{w}_{\mathrm{opt}}(n)$ is the time-varying optimal weight coefficient vector.
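To make the recursion concrete, the following minimal Python sketch implements one L-LMS update step of Equations (1)-(3); the function and variable names are illustrative, not from the paper.

    import numpy as np

    def leaky_lms_step(w, x_vec, d, mu, gamma):
        """One L-LMS update: w(n+1) = gamma*w(n) + mu*e(n)*x(n).

        w     -- weight vector w(n), shape (N,)
        x_vec -- input vector x(n) = [x(n), ..., x(n-N+1)], shape (N,)
        d     -- desired sample d(n)
        mu    -- step size; gamma -- leakage factor, close to but below 1
        """
        e = d - np.dot(w, x_vec)                 # instantaneous error, Equation (3)
        w_next = gamma * w + mu * e * x_vec      # leaky update, Equation (1)
        return w_next, e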

Substituting Equation (3) into Equation (1) and rearranging the terms yields
$$\mathbf{w}(n+1) = \left[\gamma\,\mathbf{I} - \mu\,\mathbf{x}(n)\mathbf{x}^T(n)\right]\mathbf{w}(n) + \mu\, d(n)\,\mathbf{x}(n). \qquad (5)$$

The range of the step size can be obtained by solving the state equation (5) [20]:
$$\frac{\gamma - 1}{\lambda_i} < \mu < \frac{\gamma + 1}{\lambda_i}, \qquad (6)$$
where $\lambda_i$, for $1 \le i \le N$, are the eigenvalues of the input autocorrelation matrix $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$; since $\gamma < 1$, the lower bound shows that $\mu$ could even be negative [20]. Like the conventional LMS algorithm, the L-LMS algorithm uses a fixed step size in its coefficient update recursion and thus inherits the limitation of the full-update algorithm of having to compromise between fast convergence and a low steady-state error.
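As a check on Equation (6), note that along the $i$th eigenmode of $\mathbf{R}$ the state equation (5) has mean gain $\gamma - \mu\lambda_i$, which must have magnitude less than one; under the usual independence assumptions the derivation reads
$$\left|\gamma - \mu\lambda_i\right| < 1 \;\Longleftrightarrow\; \frac{\gamma - 1}{\lambda_i} < \mu < \frac{\gamma + 1}{\lambda_i}, \qquad \lambda_i > 0,$$
while an unexcited mode ($\lambda_i = 0$) has gain $\gamma < 1$ and still decays, which is precisely how leakage suppresses parameter drift. Taking the binding case over all modes gives $(\gamma - 1)/\lambda_{\max} < \mu < (\gamma + 1)/\lambda_{\max}$.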

The problem above can be tackled by changing the fixed step size $\mu$ in Equation (1) to a time-varying step size $\mu(n)$, which is adjusted based on a criterion that measures the proximity of the adaptive filter parameters to the optimal ones. Therefore, Equation (1) can be written as
$$\mathbf{w}(n+1) = \gamma\,\mathbf{w}(n) + \mu(n)\, e(n)\,\mathbf{x}(n). \qquad (7)$$

According to Kwong's VSS-LMS algorithm, the recursive expression of the step size is
$$\mu(n+1) = \alpha\,\mu(n) + \rho\, e^2(n), \qquad (8)$$
where the positive constant parameters $\alpha$ ($0 < \alpha < 1$) and $\rho$ control the prediction error. However, Kwong's VSS-LMS algorithm cannot exclude the influence of noise on the instantaneous error. To solve this problem, it is proposed to use $p(n)$ instead of $e(n)$ in Equation (8), so that the step-size recursion becomes
$$\mu(n+1) = \alpha\,\mu(n) + \rho\, p^2(n), \qquad (9)$$
where
$$p(n) = \frac{r_e(n)}{P_x(n)}, \qquad (10)$$
where $r_e(n)$, the smoothed time-averaged estimate of the error autocorrelation $e(n)e(n-1)$, is used to replace the power of the error $e^2(n)$, and $P_x(n)$ is the input signal power. They can be described as
$$r_e(n) = \beta\, r_e(n-1) + (1-\beta)\, e(n)\, e(n-1), \qquad (11)$$
$$P_x(n) = \beta\, P_x(n-1) + (1-\beta)\, x^2(n), \qquad (12)$$
where $\beta$ ($0 < \beta < 1$) is the weight parameter that controls the convergence time. The criteria for the step-size recursion are as follows:
$$\mu(n+1) = \begin{cases} \mu_{\max}, & \mu(n+1) > \mu_{\max}, \\ \mu_{\min}, & \mu(n+1) < \mu_{\min}, \\ \mu(n+1), & \text{otherwise}, \end{cases} \qquad (13)$$
where $0 < \mu_{\min} < \mu_{\max}$. According to Equation (6), $\mu_{\max}$ and $\mu_{\min}$ should satisfy the sufficient conditions that guarantee the stability of the leaky LMS algorithm, i.e., $\mu_{\max} < (\gamma + 1)/\lambda_{\max}$ and $\mu_{\min} > 0$.
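A compact sketch of the complete proposed recursion, Equations (7)-(13), follows; the parameter names (alpha, rho, beta) mirror the symbols above, and the small regularizer eps is an added assumption to avoid division by zero.

    import numpy as np

    def vss_leaky_lms(x, d, N, gamma, alpha, rho, beta,
                      mu0, mu_min, mu_max, eps=1e-8):
        """Proposed VSS leaky LMS; names are illustrative, not the paper's code."""
        w = np.zeros(N)
        mu = mu0
        r_e, p_x = 0.0, eps        # smoothed error autocorrelation / input power
        e_prev = 0.0
        e_hist = np.zeros(len(x))
        for n in range(N, len(x)):
            x_vec = x[n - N + 1:n + 1][::-1]              # [x(n), ..., x(n-N+1)]
            e = d[n] - np.dot(w, x_vec)                   # Equation (3)
            w = gamma * w + mu * e * x_vec                # Equation (7)
            r_e = beta * r_e + (1 - beta) * e * e_prev    # Equation (11)
            p_x = beta * p_x + (1 - beta) * x[n] ** 2     # Equation (12)
            p = r_e / (p_x + eps)                         # Equation (10)
            mu = float(np.clip(alpha * mu + rho * p ** 2, mu_min, mu_max))  # (9), (13)
            e_prev = e
            e_hist[n] = e
        return w, e_hist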

3. Performance Analysis

In this section, the performance of the proposed algorithm in stationary and nonstationary environments is described. In a stationary environment, the optimal weight coefficient vector $\mathbf{w}_{\mathrm{opt}}(n)$ in Equation (4) can be described as a constant:
$$\mathbf{w}_{\mathrm{opt}}(n) = \mathbf{w}_{\mathrm{opt}}. \qquad (14)$$

Furthermore, in a nonstationary environment, $\mathbf{w}_{\mathrm{opt}}(n)$ varies randomly as described by
$$\mathbf{w}_{\mathrm{opt}}(n+1) = a\,\mathbf{w}_{\mathrm{opt}}(n) + \mathbf{q}(n), \qquad (15)$$
where $a < 1$ but $a$ is very close to 1, $\mathbf{q}(n)$ is a zero-mean sequence with covariance $\sigma_q^2\mathbf{I}$, and $\mathbf{I}$ is the identity matrix. According to the above description, the stationary environment can be viewed as the special case of the nonstationary environment in which $a = 1$ and $\sigma_q^2 = 0$.
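For reference, the random-walk model of Equation (15) is straightforward to simulate; the values of $a$ and $\sigma_q$ below are illustrative only.

    import numpy as np

    rng = np.random.default_rng()
    a, sigma_q, N, steps = 0.9999, 1e-3, 4, 10000    # illustrative values
    w_opt = np.zeros((steps, N))
    for n in range(steps - 1):
        w_opt[n + 1] = a * w_opt[n] + rng.normal(0.0, sigma_q, N)  # Equation (15)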

3.1. Convergence of the Mean Weight Vector

In order to discuss the convergence of the mean weight vector, the translated weight vector $\mathbf{v}(n) = \mathbf{w}(n) - \mathbf{w}_{\mathrm{opt}}(n)$, which reflects the weight error, needs to be introduced. In addition, the autocorrelation matrix of the input signal is defined as $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$, which can also be described as $\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$, where $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_0, \lambda_1, \ldots, \lambda_{N-1})$ is the eigenvalue matrix of $\mathbf{R}$ and $\mathbf{Q}$ is a unitary matrix. Then the rotated quantities $\tilde{\mathbf{v}}(n) = \mathbf{Q}^T\mathbf{v}(n)$, $\tilde{\mathbf{x}}(n) = \mathbf{Q}^T\mathbf{x}(n)$, $\tilde{\mathbf{w}}_{\mathrm{opt}}(n) = \mathbf{Q}^T\mathbf{w}_{\mathrm{opt}}(n)$, and $\tilde{\mathbf{q}}(n) = \mathbf{Q}^T\mathbf{q}(n)$ are defined to describe the convergence of the mean weight vector. Substituting Equation (3), Equation (4), and Equation (15) into Equation (7) and rewriting yields
$$\tilde{\mathbf{v}}(n+1) = \left[\gamma\,\mathbf{I} - \mu(n)\,\tilde{\mathbf{x}}(n)\tilde{\mathbf{x}}^T(n)\right]\tilde{\mathbf{v}}(n) + \mu(n)\,\eta(n)\,\tilde{\mathbf{x}}(n) + (\gamma - a)\,\tilde{\mathbf{w}}_{\mathrm{opt}}(n) - \tilde{\mathbf{q}}(n). \qquad (16)$$

Using the assumption that $\mu(n)$ is independent of $\tilde{\mathbf{x}}(n)$ and $\tilde{\mathbf{v}}(n)$, taking expectations on both sides of Equation (16) yields
$$E[\tilde{\mathbf{v}}(n+1)] = \left[\gamma\,\mathbf{I} - E[\mu(n)]\,\boldsymbol{\Lambda}\right]E[\tilde{\mathbf{v}}(n)] + (\gamma - a)\,E[\tilde{\mathbf{w}}_{\mathrm{opt}}(n)]. \qquad (17)$$

According to Equation (17), the following condition can be obtained to ensure the convergence of the mean weight vector:
$$\frac{\gamma - 1}{\lambda_{\max}} < E[\mu(n)] < \frac{\gamma + 1}{\lambda_{\max}}, \qquad (18)$$
where $\lambda_{\max}$ is the maximum eigenvalue of the input signal autocorrelation matrix $\mathbf{R}$. According to the above, $\lambda_{\max}$ can be bounded as
$$\lambda_{\max} \le \sum_{i=0}^{N-1}\lambda_i = \mathrm{tr}(\mathbf{R}). \qquad (19)$$

Therefore, since the sum of the eigenvalues is the trace of the input signal autocorrelation matrix $\mathbf{R}$, the condition of Equation (18) for guaranteeing the convergence of the weight vectors can be rewritten as
$$\frac{\gamma - 1}{\mathrm{tr}(\mathbf{R})} < E[\mu(n)] < \frac{\gamma + 1}{\mathrm{tr}(\mathbf{R})}. \qquad (20)$$

However, convergence of the mean weight vector is not a sufficient condition for convergence of the mean square error.

3.2. Mean Square Error Behavior

In this section, the necessary and sufficient conditions for the mean square error to be stable are discussed. The MSE is given by [21]
$$\xi(n) = \xi_{\min} + \xi_{\mathrm{ex}}(n), \qquad (21)$$
where $\xi_{\min} = \sigma_\eta^2$ is the minimum of the mean square error and $\xi_{\mathrm{ex}}(n)$ is the excess mean square error (EMSE). In addition, it can be seen from Equation (21) that the convergence of the MSE is directly related to the diagonal elements of the second-moment matrix $\mathbf{K}(n) = E[\tilde{\mathbf{v}}(n)\tilde{\mathbf{v}}^T(n)]$. Assuming that $\mu(n)$ is independent of $\tilde{\mathbf{x}}(n)$ and $\tilde{\mathbf{v}}(n)$, the expectation of $\tilde{\mathbf{v}}(n+1)\tilde{\mathbf{v}}^T(n+1)$ can be simplified into a sum of second-order moments [8] by using the Gaussian factoring theorem, which yields the recursion of Equation (22). Here $\tilde{\mathbf{p}} = \mathbf{Q}^T\mathbf{p}$, where $\mathbf{p} = E[d(n)\mathbf{x}(n)]$ is the cross-correlation between the input vector and the desired signal. A second moment involving the step size is also defined, and the state vector collecting these moments can be described as in Equation (23).

According to Equation (17) and Equation (22), the moment recursion can be rewritten in the state-space form of Equation (24), where O is a zero matrix, and the transition matrix of Equation (24) is described by Equation (25). Denoting the steady-state values of $E[\mu(n)]$ and $E[\mu^2(n)]$ by $\mu_\infty$ and $\mu_\infty^{(2)}$, respectively, it is assumed that these limits exist, as stated in Equation (26).

In Equation (24), the driving term has no effect on our analysis, so it can be ignored. To study the recursion further, the definition of Equation (27) is made, in which the defined quantity can be described as in Equation (28). For the MSE to converge, the step-size moments must then satisfy a corresponding bound.

To make sure that the second moment converges, Equation (24) must be exponentially stable, which requires every eigenvalue of its transition matrix to lie inside the unit circle.

If the above conditions are satisfied, then all roots of the characteristic polynomial, which can be obtained from Equation (28) and is given in Equation (37), must lie inside the unit circle.

According to Equation (37), the conditions that guarantee that the roots of Equation (37) lie inside the unit circle can be described by Equation (38).

Following the same argument as in Equation (38), the above conditions are equivalent to Equation (39) and Equation (40).

Convergence can be guaranteed if the conditions of Equation (39) and Equation (40) are satisfied. Equation (21) can then be rewritten as Equation (41), and Equation (11) can be rewritten as Equation (42).

Taking the expectation of both sides of Equation (42) yields Equation (43).

Similarly, taking the expectation of both sides of Equation (12) yields the steady-state estimate of the input power. According to the argument in [8], the sufficient conditions that guarantee MSE convergence can then be stated in terms of the steady-state moments of the step size. If the input signal is a Gaussian white noise signal, then $\mathbf{R} = \sigma_x^2\mathbf{I}$ and hence $\mathrm{tr}(\mathbf{R}) = N\sigma_x^2$, where $\sigma_x^2$ is the input signal variance.
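As a worked instance of the mean-weight condition of Equation (20) (the MSE conditions are tighter), take the white-input setting of Section 4 with $N = 4$ and $\sigma_x^2 = 1$, and a leakage factor of, say, $\gamma = 0.99$ (this numeric value is illustrative, not the paper's): then $\mathrm{tr}(\mathbf{R}) = N\sigma_x^2 = 4$ and
$$\frac{0.99 - 1}{4} < E[\mu(n)] < \frac{0.99 + 1}{4}, \quad \text{i.e.,} \quad -0.0025 < E[\mu(n)] < 0.4975 .$$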

3.3. Steady-State Misadjustment

In this section, the steady-state misadjustment is studied. The misadjustment is defined as
$$M = \frac{\xi_{\mathrm{ex}}(\infty)}{\xi_{\min}},$$
and the steady-state EMSE $\xi_{\mathrm{ex}}(\infty)$ can be represented in terms of the steady-state second moments.

Accordingly, the solution of Equation (24) converges to a steady state, which can be described by the corresponding fixed-point equations.

In the stationary environment ($a = 1$, $\sigma_q^2 = 0$), the steady-state EMSE can be expressed as in Equation (51), whose terms can be computed from the steady-state step-size moments.

Using Equation (17) and Equation (27), these steady-state quantities can be solved, giving Equations (53)–(56).

Substituting Equations (53)–(56) into Equation (51) yields the closed-form expression for the misadjustment.

For a leakage factor approaching 1, the above equation can be approximated as in Equation (58).

From Equation (49), the formula of Equation (59) can be obtained.

Substituting Equation (59) into Equation (58) yields the final expression for the steady-state misadjustment.

4. Simulation Results

This section evaluates the proposed algorithm in a nonstationary environment, focusing on system identification. The performance of the proposed algorithm is compared with MVSS [9], FSS [8], the conventional leaky LMS algorithm [14], and the conventional VSS algorithm. To compare the algorithms fairly, they are all run in the same experimental environment, with the parameters of each selected to suit the experiment. The Monte Carlo method with 200 independent runs is used in the following experiments to ensure the reliability of the results.

4.1. Example 1: White Input, Low SNR

In Example 1, the unknown moving-average system, which is excited by a zero-mean uncorrelated white Gaussian signal of unit variance, has four time-invariant coefficients, and the FIR filter has the same order as the unknown system. The desired signal is corrupted by zero-mean, uncorrelated Gaussian noise that is independent of the input sequence. The VSS algorithm, the proposed algorithm, and MVSS use a value of $\alpha$ that is stable in both stationary and nonstationary environments; the error-weighting parameter of VSS is set according to the literature, the corresponding parameter of the proposed algorithm and MVSS is chosen to match, and the same $\beta$ is used for both. The fixed step-size (FSS) algorithm and the conventional leaky LMS algorithm use the same step size, the variable step-size algorithms share the same boundaries $\mu_{\max}$ and $\mu_{\min}$, and a common leakage factor $\gamma$ is used. Figure 1 shows that both the proposed algorithm and MVSS converge well and reach essentially the same misalignment as the FSS algorithm at low SNR. In addition, the misalignment of the leaky algorithm is essentially the same as that of FSS with white noise input [22], which is the reason for the big gap between the leaky algorithm and the other algorithms. Figure 2 illustrates that the step-size convergence of the proposed algorithm is similar to that of the MVSS algorithm and better than that of the VSS algorithm at low SNR.
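As a rough illustration of this experimental setup, a minimal Monte Carlo harness might look as follows; it reuses the vss_leaky_lms sketch from Section 2, and the system coefficients, noise level, and algorithm parameters below are placeholders, since the paper's exact values are given only in its figures.

    import numpy as np

    N = 4                       # order of the unknown FIR system and the filter
    runs, n_samples = 200, 5000
    w_true = np.array([0.1, 0.3, 0.5, 0.1])  # placeholder time-invariant coefficients
    sigma_v = 0.3                            # placeholder noise level (low SNR)

    mse = np.zeros(n_samples)
    rng = np.random.default_rng(0)
    for _ in range(runs):
        x = rng.normal(0.0, 1.0, n_samples)            # unit-variance white input
        d = np.convolve(x, w_true)[:n_samples]         # unknown system output
        d = d + rng.normal(0.0, sigma_v, n_samples)    # additive measurement noise
        _, e = vss_leaky_lms(x, d, N, gamma=0.999, alpha=0.97, rho=0.01,
                             beta=0.99, mu0=0.05, mu_min=1e-4, mu_max=0.1)
        mse += e ** 2
    mse /= runs    # ensemble-averaged MSE learning curve, as plotted in Figure 1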

4.2. Example 2: Correlated Input, Low SNR

In this example, the unknown system to be identified and the filter are set up in the same way as in Example 1, but the filter input is changed from white noise to a correlated input generated from $w(n)$, an uncorrelated Gaussian white noise with unit variance. Figure 3 depicts this input signal.
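The generating model of the correlated input did not survive extraction; a first-order autoregressive process is a common choice for such experiments and is sketched below purely as an assumption (the coefficient 0.8 is hypothetical; the paper's actual model should be read from its equation and Figure 3).

    import numpy as np

    def correlated_input(n_samples, a=0.8, rng=None):
        """AR(1) input x(n) = a*x(n-1) + w(n), with w(n) ~ N(0, 1)."""
        rng = rng or np.random.default_rng()
        w = rng.normal(0.0, 1.0, n_samples)
        x = np.zeros(n_samples)
        for n in range(1, n_samples):
            x[n] = a * x[n - 1] + w[n]
        return x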

To ensure that the convergence value of the conventional leaky LMS algorithm is close to that of the FSS algorithm, its step size is set larger than the FSS step size. The parameters $\alpha$ and $\rho$ of the VSS algorithm are set as in Example 1, and the proposed algorithm and the MVSS algorithm use the same $\alpha$ and $\rho$ as the VSS algorithm, together with the same $\beta$; all variable step-size algorithms share the same step-size range $[\mu_{\min}, \mu_{\max}]$. Figure 4 shows that the convergence speed of the proposed algorithm is close to that of MVSS under correlated input, and both are slightly faster than the FSS algorithm. Under the action of the leakage factor, the leaky algorithm attains the same MSE convergence value as FSS only with a larger step size. As can be seen from Figure 5, the step size of MVSS converges slightly faster than that of the proposed algorithm, and, under the effect of the leakage factor, the peak step size of the proposed algorithm differs considerably from that of the MVSS algorithm.

4.3. Example 3: High SNR

Example 3 has the same filter and unknown-system parameters as Examples 1 and 2, except that the SNR is higher.

The FSS algorithm adopts the same step size as the leaky LMS algorithm, and the leakage factor of the leaky LMS algorithm is kept as before. All variable step-size algorithms, including MVSS, VSS, and the proposed algorithm, use the same step-size range $[\mu_{\min}, \mu_{\max}]$; their error-weighting parameters are set individually, and MVSS and the proposed algorithm use the same $\beta$. Figure 6 shows that, at high SNR, the gap between the converged MSE values of the FSS and leaky LMS algorithms becomes larger, and the proposed algorithm converges slightly faster than the other variable step-size algorithms. Figure 7 illustrates that the proposed algorithm also has an advantage in step-size convergence at high SNR.

4.4. Example 4: Unexcited Input, High Order

The parameters of the system to be identified and of the filter in Example 4 are the same as in the previous three examples, except for the order, which is higher in this example. The input signal in Example 4 is a weak white signal with zero mean and small variance, and the desired signal is corrupted by zero-mean, uncorrelated Gaussian noise.

The FSS algorithm and the leaky LMS algorithm use the same step size, and the leaky LMS algorithm and the proposed algorithm use the same leakage factor $\gamma$. All variable step-size algorithms, including VSS, MVSS, and the proposed algorithm, use the same step-size range $[\mu_{\min}, \mu_{\max}]$ and the same $\alpha$; their error-weighting parameters are selected individually, and MVSS and the proposed algorithm use the same $\beta$. As can be seen from Figure 8, under unexcited input the MSE to which the leaky algorithms converge is close to that of FSS, and the proposed algorithm converges faster and obtains a better MSE than the other algorithms. The leakage factor copes with the parameter drift caused by underexcitation, while the variable step size speeds up convergence. Figure 9 shows that the step size of the proposed algorithm also converges faster than that of the other variable step-size algorithms under underexcitation.
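As a toy illustration of the drift mechanism discussed here (not an experiment from the paper), the sketch below drives a conventional LMS update and a leaky LMS update with a weak, barely exciting input; the former's weights wander under gradient noise, while the leaky contraction keeps the latter bounded. All numeric values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    N, steps, mu, gamma = 8, 20000, 0.5, 0.995   # illustrative values
    w_lms, w_leaky = np.zeros(N), np.zeros(N)
    for n in range(steps):
        x_vec = rng.normal(0.0, 0.01, N)   # weak, barely exciting input
        d = rng.normal(0.0, 0.1)           # system output ~ 0, only noise remains
        e1 = d - w_lms @ x_vec
        w_lms = w_lms + mu * e1 * x_vec                  # conventional LMS
        e2 = d - w_leaky @ x_vec
        w_leaky = gamma * w_leaky + mu * e2 * x_vec      # leaky LMS, Equation (1)
    print(np.linalg.norm(w_lms), np.linalg.norm(w_leaky))  # leaky norm stays much smaller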

5. Conclusion

Based on the studies in this paper, the following conclusions can be drawn: (1) The proposed algorithm combines a time-averaged estimate of the error autocorrelation with a time-averaged estimate of a normalized quantity; by adjusting its parameters, it can reject uncorrelated noise well. (2) For the same MSE, the convergence speed of the proposed algorithm is close to that of other variable step-size algorithms at both low and high SNR. (3) The leakage factor reduces the parameter drift caused by inadequate excitation, and under underexcitation the proposed algorithm converges faster than the other variable step-size algorithms, with results closer to the Wiener solution. In conclusion, the proposed algorithm performs better in the case of inadequate excitation.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.