International Journal of Antennas and Propagation
Volume 2019, Article ID 2131040, 8 pages
https://doi.org/10.1155/2019/2131040
Research Article

The Precompression Processing of LMS Algorithm in Noise Elimination

1College of Weaponry Engineering, Naval University of Engineering, Wuhan 430033, China
291278 Unit, Lvshunkou, Dalian 116000, China

Correspondence should be addressed to Chunsheng Lin; lcs_and_zh@163.com

Received 2 June 2019; Revised 8 August 2019; Accepted 17 August 2019; Published 19 November 2019

Academic Editor: Francisco Falcone

Copyright © 2019 Pengfei Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this study, the authors propose a novel precompression processing (PCP) of the least mean squares (LMS) algorithm based on a regulator factor. The novelty of the PCP algorithm is that the compressed input signals differ from each other on different components at each iteration. The input signal of the improved LMS algorithm is precompressed based on the regulator factor. The precompressed input signal is related not only to the regulator factor α and the current value of the input signal at each iteration but also to the amplitude of the input signal before this iteration. The improved algorithm can eliminate the influence of input signal mutation on the filter performance. In the numerical simulations, we compare the improved LMS algorithm with the NLMS algorithm for a normal input signal and for an input signal with mutation, and we examine the influence of different regulator factors on noise elimination. Results show that the PCP algorithm has a good noise elimination effect when the input signal changes abruptly and that the regulator factor α = 0.01 can meet the requirements.

1. Introduction

Adaptive filters are widely used in system identification, adaptive line spectrum enhancement, echo cancellation, and other fields [1–3]. In signal processing, researchers have been pursuing adaptive algorithms with fast convergence speed, good stability, and low computational complexity. Because of its robustness, low computational complexity, and good convergence to steady-state signals, the least mean squares (LMS) algorithm has been widely used in adaptive filters.

Since the least mean squares (LMS) algorithm was proposed by Widrow and Hoff in 1960, there has been extensive research on variable step sizes and variable orders for the LMS algorithm. In 1997, Gan proposed a new approach to adjusting the step size of the LMS algorithm using fuzzy logic, extending earlier work with a complete design methodology and guidelines for developing a reliable and robust fuzzy step size LMS (FSS-LMS) algorithm [4]. In 1999, So proved that bias removal can be achieved by proper scaling of the optimal filter coefficients, and a modified LMS algorithm was developed for accurate system identification in noise [5]. Keratiotis and Lind proposed an optimum time-varying step-size sequence for adaptive filters employing the LMS algorithm [6]. In 2013, Kang et al. proposed a new bias-compensated normalized least mean squares (NLMS) algorithm for parameter estimation with a noisy input; the algorithm is obtained from an approximated cost function based on the statistical properties of the input noise, and a condition-checking constraint decides whether the weight coefficient vector must be updated [7]. In 2016, Huang et al. proposed a novel component-wise variable step-size (CVSS) diffusion distributed algorithm for estimating a specific parameter over sensor networks [8]. In 2018, Wu et al. proposed a multistage least mean squares (MLMS) algorithm based on polynomial fitting in time-varying systems [9].

The LMS algorithm has been greatly improved for different applications. In noise elimination, the input signal sequence may mutate; the conventional LMS algorithm is greatly affected in this case, and the impact of the mutation signal on the filter cannot be eliminated, which degrades the filtering effect. The NLMS algorithm has good adaptability to local signal fluctuation, but it cannot handle a large mutation of an individual sample well. The precompression processing (PCP) of the LMS algorithm based on a regulator factor can effectively reduce the effect of signal mutation on noise filtering. The simulation results show that regardless of the degree of signal mutation, the improved LMS algorithm can eliminate the impact of the mutation, while the other algorithms are seriously affected by the mutation signal; the improved LMS algorithm also achieves a good noise filtering effect compared with the other algorithms. The advantage of the improved LMS algorithm in eliminating the mutational signal is obvious.

2. The LMS Algorithm

In 1960, Widrow and Hoff proposed the least mean squares (LMS) algorithm, which uses instantaneous values to estimate gradient vectors [10, 11]. The obvious advantage of the LMS algorithm is its simplicity: it does not need to compute the correlation matrix or its inverse. At the same time, it easily achieves stability and robustness, which is why it is widely used [12, 13].

2.1. The Main Rationale of the LMS Algorithm

The structure of the LMS adaptive filter is shown in Figure 1 [14].

Figure 1: The structure of the LMS adaptive filter.

In Figure 1, x(n) is the input vector; w(n) is the weight vector; and e(n), d(n), and y(n) denote the error signal, desired signal, and output of the adaptive filter, respectively:

y(n) = w^T(n)x(n), (1)

e(n) = d(n) − y(n), (2)

where the notation "T" denotes matrix transposition.

Estimating the gradient vector with instantaneous values of the squared error

J(n) = e^2(n), (3)

the following can be obtained:

∇̂(n) = ∂J(n)/∂w(n) = −2e(n)x(n). (4)

According to the relationship between the gradient vector and the weight vector of the adaptive filter, the following can be obtained:

w(n + 1) = w(n) − μ∇̂(n), (5)

where μ denotes the step size.

Substituting formulas (1) and (2) into formula (5), the weight vector can be rewritten as

w(n + 1) = w(n) + 2μe(n)x(n). (6)

It can be seen from formula (5) that the adaptive LMS algorithm is a model with feedback form. The signal flow diagram is shown in Figure 2.

Figure 2: The signal flow diagram of the LMS algorithm.
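The LMS recursion above can be sketched in a few lines (a minimal sketch; the function name, filter order, and step size are illustrative, not taken from the paper):

```python
import numpy as np

def lms(x, d, order=4, mu=0.01):
    """Conventional LMS: returns the error signal e and final weights w."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        xn = x[n - order + 1:n + 1][::-1]   # input vector x(n) = [x(n), ..., x(n-order+1)]
        y = w @ xn                          # output y(n) = w^T(n) x(n)
        e[n] = d[n] - y                     # error e(n) = d(n) - y(n)
        w = w + 2 * mu * e[n] * xn          # weight update of formula (6)
    return e, w
```

Run on a noise-free system-identification task (filtering white noise through a short FIR response), the weights converge to the true response, which illustrates the feedback structure noted above.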
2.2. The Normalized LMS (NLMS) Algorithm

In the normalized LMS (NLMS) algorithm, the product vector e(n)x(n) is normalized with respect to the squared Euclidean norm of the input vector, so NLMS can be regarded as LMS with a time-varying step parameter. The weight vector of the adaptive filter is obtained as

w(n + 1) = w(n) + (μ/(δ + ||x(n)||^2))e(n)x(n),

where δ > 0 is a small preset correction term and 0 < μ < 2 guarantees the convergence of the NLMS algorithm.

The NLMS algorithm has good stability and is widely used in engineering applications.
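The NLMS update can be sketched under the same conventions as the LMS sketch (the values of μ and δ are illustrative):

```python
import numpy as np

def nlms(x, d, order=4, mu=0.5, delta=1e-6):
    """NLMS: LMS with the step normalized by the squared norm of x(n)."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        xn = x[n - order + 1:n + 1][::-1]            # input vector x(n)
        e[n] = d[n] - w @ xn                          # error signal
        step = mu / (delta + xn @ xn)                 # time-varying step size
        w = w + step * e[n] * xn                      # normalized update
    return e, w
```

The normalization makes the effective step insensitive to the local input power, which is the adaptability to local fluctuation mentioned above.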

2.3. The Improved LMS Algorithm
2.3.1. The Main Rationale of the Improved LMS Algorithm

In noise elimination, the input signal sequence may mutate, so the NLMS algorithm will be greatly affected in this case, thus affecting the filtering effect. In order to eliminate the impact of the mutation signal on the filter, this study proposes the precompression processing (PCP) of the least mean squares (LMS) algorithm based on the regulator factor. The improved algorithm reduces the effect of signal mutation on noise filtering and has good noise filtering effect even when the input signal changes abruptly.

Set

A_n^2 = αx^2(n) + (1 − α)A_{n−1}^2, (7)

where x(n) denotes the input signal and α is the regulator factor, that is,

A_n = sqrt(αx^2(n) + (1 − α)A_{n−1}^2). (8)

Then, the input signal is converted to the precompressed signal of formula (9):

x′(n) = x(n)/A_n. (9)

So, the output of the adaptive filter is as follows:

y(n) = w^T(n)x′(n). (10)

The error signal is obtained as follows:

e(n) = d(n) − y(n). (11)

And the weight vector of the adaptive filter is

w(n + 1) = w(n) + 2μe(n)x′(n). (12)

It can be seen from formulas (7) and (8) that A_n is related not only to the regulator factor α and the amplitude of the nth value of the input sequence, that is, the amplitude of x(n), but also to A_{n−1}, that is, to the amplitude of the input sequence before x(n). By controlling the value of α, we can adjust the influence of the current input signal on the value of A_n. For example, when the input signal increases abruptly, the value of A_n increases only a little, so an isolated mutation adjusts A_n only slightly. Through formula (9), we can see that the input signal x(n) is converted to x′(n) = x(n)/A_n; since A_n grows whenever the input amplitude grows, the amplitude of x′(n) remains limited even at a mutation, so the influence of the mutational signal is controlled and the improved LMS algorithm achieves good adaptability to the input signal.
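Assuming A_n follows the recursive form described above (depending on α, x(n), and A_{n−1}, with one root extraction per sample), the precompression can be sketched as follows; the initialization of A_0 is an assumption:

```python
import numpy as np

def precompress(x, alpha=0.01):
    """Precompression x'(n) = x(n)/A_n with the recursive amplitude
    estimate A_n^2 = alpha*x(n)^2 + (1 - alpha)*A_{n-1}^2, a sketch of
    formulas (7)-(9). The starting value A_0 is an assumption."""
    a_sq = x[0] ** 2 if x[0] != 0 else 1.0   # assumed A_0^2
    xp = np.empty_like(x, dtype=float)
    for n, xn in enumerate(x):
        a_sq = alpha * xn ** 2 + (1 - alpha) * a_sq   # formulas (7)-(8)
        xp[n] = xn / np.sqrt(a_sq)                    # formula (9)
    return xp
```

Note that because A_n^2 ≥ αx^2(n), the compressed sample can never exceed 1/sqrt(α) in magnitude (10 for α = 0.01), which is what bounds the effect of a mutation.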

2.3.2. Steps of Algorithm

The steps of calculating the improved LMS algorithm are as follows:

(1) The input signal of the improved LMS algorithm is precompressed based on the regulator factor α; it is x′(n) = x(n)/A_n.
(2) When n = 0, the starting value of the filter weight vector w(0) is set arbitrarily, and the value of the step size μ is set.
(3) The error signal is obtained through the filter weight vector w(n), the input signal x′(n), and the desired signal d(n): e(n) = d(n) − w^T(n)x′(n).
(4) The updated value of the filter weight vector is calculated using the recursive method: w(n + 1) = w(n) + 2μe(n)x′(n).
(5) When n increases by 1, the updated value of the filter weight vector is substituted into Step (3), and the steps are carried out successively until the objective function of the adaptive filter is minimized, that is, until the steady state is reached.

From the steps of the improved LMS algorithm and formulas (7)–(10), six computations are added in each cycle compared with the traditional LMS algorithm, that is, four multiplications, one addition, and one extraction of a root; and only one computation is added compared with the NLMS algorithm.
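Putting the five steps together, a sketch of the improved (PCP) LMS loop might look as follows (the function name, the starting value A_0, and the parameter values are illustrative):

```python
import numpy as np

def pcp_lms(x, d, order=4, mu=0.01, alpha=0.01):
    """Improved (PCP) LMS sketch: each new sample is compressed by the
    running amplitude estimate A_n before entering the LMS recursion."""
    a_sq = 1.0                                # assumed A_0^2
    xp = np.zeros(len(x))                     # precompressed signal x'(n)
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(len(x)):
        a_sq = alpha * x[n] ** 2 + (1 - alpha) * a_sq   # formulas (7)-(8)
        xp[n] = x[n] / np.sqrt(a_sq)                    # formula (9)
        if n < order - 1:
            continue                                    # wait for a full vector
        xn = xp[n - order + 1:n + 1][::-1]              # precompressed input vector
        e[n] = d[n] - w @ xn                            # Step (3): error signal
        w = w + 2 * mu * e[n] * xn                      # Step (4): weight update
    return e, w
```

Per sample this adds only the A_n recursion and the division to the LMS loop, consistent with the small extra cost counted above.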

3. Simulation Results

Assume that the desired signal is composed of multiple sinusoidal signals and Gaussian white noise with zero mean and unit variance, the input signal is a Gaussian white noise signal with zero mean and unit variance, and the output error signal is the signal from which the noise has been eliminated. In the following simulations, we compare the noise-removal performance of the normalized least mean squares (NLMS) algorithm and the improved LMS algorithm.
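A sketch of this simulation setup (the sinusoid frequencies and amplitudes are illustrative assumptions, as is the use of the same noise realization for the desired-signal corruption and the filter input):

```python
import numpy as np

def make_signals(n=10000, seed=0):
    """Desired signal = sum of sinusoids + unit-variance Gaussian white
    noise; the filter input is the noise itself, so the error signal of
    the adaptive filter recovers the sinusoids."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    s = np.sin(0.01 * np.pi * t) + 0.5 * np.sin(0.03 * np.pi * t)  # sinusoids
    noise = rng.standard_normal(n)    # zero mean, unit variance
    d = s + noise                     # desired signal
    x = noise                         # filter input (noise reference)
    return x, d, s
```

With this construction the ideal error signal d(n) − y(n) equals the primitive sinusoidal signal once the filter has converged.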

3.1. Filtering Simulation of the Input Signal without Mutation

When no signal mutation occurs in the input signal, the regulator factor α = 0.01.

It can be seen from Figure 3 that the NLMS and the improved LMS algorithms differ little in noise-removal effect. When no signal mutation occurs in the input signal, both algorithms can extract the primitive signal from the mixed signal, and Figure 4 shows that the error curve of the improved LMS algorithm is smoother than that of the NLMS algorithm. The filter performance of the improved LMS algorithm is therefore slightly better than that of the NLMS algorithm.

Figure 3: The error signal of the NLMS and the improved LMS algorithms. (a) Signal. (b) e(n) of the NLMS algorithm. (c) e(n) of the improved LMS algorithm.
Figure 4: Local comparison of the NLMS algorithm and the improved LMS algorithm.

It can be seen from Figures 5 and 6 that the residual noise of the improved LMS algorithm is slightly smaller than that of the NLMS algorithm, and the residual noise of the NLMS algorithm is slightly smaller than that of the conventional LMS algorithm, but the advantages are not obvious.

Figure 5: The residual noise of the NLMS algorithm. (a) Signal and e(n). (b) Residual noise.
Figure 6: The residual noise of the improved LMS algorithm. (a) Signal and e(n). (b) Residual noise.

In conclusion, when no signal mutation occurs in the input signal, the improved LMS algorithm and the NLMS algorithm can eliminate the noise and obtain the primitive signal. The filtering effect of the improved LMS algorithm is slightly better than that of the NLMS and conventional LMS algorithms, but its advantages are not obvious.

3.2. Filtering Simulation of the Input Signal with Mutation

When mutation occurs in the input signal, assume the input signal is as shown in formula (13).

It can be seen from Figures 7 and 8 that the improved LMS algorithm has obvious advantages when the input signal changes abruptly: it can recognize the mutation signal and still eliminate noise well. The NLMS algorithm, however, cannot recognize the mutation signal, so its error signal changes abruptly at the 50000th point of the input signal, where it reaches a value of up to 5. Although the NLMS algorithm has good adaptability to local signal fluctuation, it cannot handle a large mutation of an individual sample well.

Figure 7: The error signal of the NLMS and the improved LMS algorithms with signal mutation. (a) Signal. (b) e(n) of the NLMS algorithm. (c) e(n) of the improved LMS algorithm.
Figure 8: Local comparison of the NLMS algorithm and improved LMS algorithm.
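The mutation experiment can be reproduced in miniature (the signal length, mutation position, and mutation amplitude here are illustrative, not the paper's formula (13)): a single large sample is injected into white noise, and the precompression of Section 2.3 bounds the compressed sample by 1/sqrt(α), while the raw sample that the NLMS filter would see is unbounded:

```python
import numpy as np

# Illustrative mutated input: unit-variance white noise with one large jump.
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
x[500] = 50.0                          # abrupt mutation

# Precompression per formulas (7)-(9); A_0 initialization is assumed.
alpha = 0.01
a_sq = 1.0
xp = np.empty_like(x)
for n, xn in enumerate(x):
    a_sq = alpha * xn ** 2 + (1 - alpha) * a_sq
    xp[n] = xn / np.sqrt(a_sq)

print(abs(x[500]), abs(xp[500]))       # raw spike vs compressed spike
```

The raw spike enters the conventional update at full amplitude, whereas the compressed sample stays below 1/sqrt(α) = 10, which is why the error signal of the improved algorithm does not jump at the mutation point.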

It can be seen from Figures 9–12 that the residual noise of the improved LMS algorithm is less than that of the NLMS algorithm when mutation occurs in the input signal, and the weight vectors of the improved LMS algorithm are more stable than those of the NLMS algorithm. The advantages are obvious at the points of signal mutation.

Figure 9: The residual noise of the NLMS algorithm with signal mutation. (a) Signal and e(n). (b) Residual noise.
Figure 10: The residual noise of the improved LMS algorithm with signal mutation. (a) Signal and e(n). (b) Residual noise.
Figure 11: The weight vectors of the NLMS algorithm with signal mutation.
Figure 12: The weight vectors of the improved LMS algorithm with signal mutation.

The simulation results show that regardless of the degree of signal mutation, the improved LMS algorithm can eliminate the impact of mutation, while other algorithms will be seriously affected by the mutation signal, so we get the conclusion that the precompression processing of the input signal can effectively eliminate the impact of signal mutation.

3.3. Effect of Regulator Factor on the Filtering Effect

(1) When α = 0.001, 0.005, and 0.01, respectively, the simulation of the filtering effect is as shown in Figures 13 and 14.

Figure 13: The residual noise of the improved LMS algorithm with signal mutation when (a) α = 0.001, (b) α = 0.005, and (c) α = 0.01.
Figure 14: The error signal of the improved LMS algorithm with signal mutation: (a) signal, (b) α = 0.001, (c) α = 0.005, and (d) α = 0.01.

It can be seen from Figures 13 and 14 that when α = 0.001, 0.005, and 0.01, respectively, the original signal can be obtained well and the differences are tiny.

(2) When α = 0.01, 0.05, and 0.5, respectively, the simulation of the filtering effect is as shown in Figures 15 and 16.

Figure 15: The residual noise of the improved LMS algorithm with signal mutation when (a) α = 0.01, (b) α = 0.05, and (c) α = 0.5.
Figure 16: The error signal of the improved LMS algorithm with signal mutation: (a) signal, (b) α = 0.01, (c) α = 0.05, and (d) α = 0.5.

It can be seen from Figures 15 and 16 that when α = 0.5, the filtering effect becomes worse, and the curve of the error signal is less smooth than when α = 0.05 or 0.01. Estimating the range of the mutation signal and setting the value of α reasonably enables the filter to achieve better filtering performance. In general, α = 0.01 is sufficient.

4. Conclusion

In this study, the input signal of the improved LMS algorithm is precompressed based on the regulator factor. The improved algorithm can eliminate the influence of input signal mutation on the filter performance. In the numerical simulations, we compare the improved LMS algorithm with the NLMS algorithm for a normal input signal and for an input signal with mutation, and we examine the influence of different regulator factors on the filtering effect. Results show that regardless of the degree of signal mutation, the improved LMS algorithm can eliminate the impact of the mutation, while the NLMS algorithm is seriously affected by the mutation signal, and the regulator factor α = 0.01 can meet the requirements.

Data Availability

The data used to support the findings of this study are available from the first author upon request via email (xiaoatai1991@163.com).

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

P.L. and C.L. conceptualized and validated the study and administered the project; P.L. participated in the data curation and formal analysis, prepared the methodology, analyzed the data using software, wrote the original draft, and critically reviewed and edited the manuscript; C.L. obtained research funding and supervised the project; P.L. and X.W. investigated the study; P.L., N.Z., and X.W. were responsible for resources.

References

  1. L. Ning, Convergence Performance Analysis and Applications of the Adaptive Least Mean Square (LMS) Algorithm, Harbin Engineering University, Harbin, China, 2009.
  2. X. Xu, Research on Adaptive Filtering Algorithms and Application, Fudan University, Shanghai, China, 2014.
  3. H. Guo, The Study on Algorithms and Applications of Adaptive Filter, Northwest Normal University, Lanzhou, China, 2007.
  4. W.-S. Gan, “Designing a fuzzy step size LMS algorithm,” IEE Proceedings—Vision, Image and Signal Processing, vol. 144, no. 5, pp. 261–266, 1997.
  5. H. C. So, “Modified LMS algorithm for unbiased impulse response estimation in nonstationary noise,” Electronics Letters, vol. 35, no. 10, pp. 791–792, 1999.
  6. G. Keratiotis and L. Lind, “Optimum variable step-size sequence for LMS adaptive filters,” IEE Proceedings—Vision, Image and Signal Processing, vol. 146, no. 1, pp. 1–6, 1999.
  7. B. Kang, J. Yoo, and P. Park, “Bias-compensated normalised LMS algorithm with noisy input,” Electronics Letters, vol. 49, no. 8, pp. 538–539, 2013.
  8. W. Huang, D. Liu, X. Yang, and D. Liu, “Diffusion LMS with component-wise variable step-size over sensor networks,” IET Signal Processing, vol. 10, no. 1, pp. 37–45, 2016.
  9. G. Wu, J. Liang, and W. Li, “Multi-stage least mean square algorithm based on polynomial fitting in time-varying systems,” Electronics Letters, vol. 54, no. 4, pp. 241–242, 2018.
  10. B. Widrow, “Adaptive filters,” in Aspects of Network and System Theory, pp. 563–586, Holt, Rinehart & Winston, New York, NY, USA, 1971.
  11. B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1985.
  12. Z. He, Adaptive Signal Processing, Science Press, Christchurch, New Zealand, 2002.
  13. Y. Gu, Studies on the Convergence Performance of Least Mean Square (LMS) Algorithm and Its Applications, Ph.D. dissertation, Tsinghua University, Beijing, China, 2003.
  14. S. Haykin and B. Widrow, Least-Mean-Square Adaptive Filters, Wiley-Interscience, Hoboken, NJ, USA, 2003.