Abstract

An efficient, computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm recursively computes an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis shows the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and performs better than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It achieves a minimum mean square deviation, exhibiting better misalignment convergence for unknown system identification under noisy inputs.

1. Introduction

Ordinary least squares methods are extensively used in many signal processing applications to extract system parameters from input/output data [1, 2]. These methods yield an unbiased solution of the adaptive least squares problem when there is no interference in either the inputs or the outputs, or when interference is present only in the outputs of the unknown system and the inputs are clean. However, if interference exists in both the input and the output of the unknown system or adaptive filtering problem, the ordinary least squares solution becomes biased [3].

The total least squares (TLS) method [4] is an efficient technique for achieving an unbiased estimate of the system parameters when both input and output are contaminated by noise. Golub and Van Loan [5] provided an analytical procedure for obtaining an unbiased solution of the TLS problem using the singular value decomposition (SVD) of the data matrices. This technique is extensively used in data processing and control applications [4, 6, 7]. However, the application of TLS methods in signal processing is still limited because computing the SVD has a high complexity, of order $O(N^{3})$ for an $N \times N$ data matrix.

TLS solutions of the adaptive filtering problem gained importance after the pioneering work of Pisarenko [8], who presented an efficient solution of the adaptive TLS problem by adaptively computing the eigenvector corresponding to the smallest eigenvalue of the augmented input/output autocorrelation matrix. Since then, several algorithms have been proposed based on adaptive implementations of Pisarenko's method. The adaptive TLS algorithms proposed in [9–11] are able to achieve an unbiased TLS solution of the adaptive filtering problem with a complexity of $O(N)$; however, they are sensitive to the correlation properties of the input signals and perform poorly under correlated inputs.

In this paper, an iterative algorithm is presented to find an optimal TLS solution of the adaptive FIR filtering problem. A stochastic technique similar to that of the least mean squares (LMS) algorithm of adaptive least squares filtering is employed to develop a total least mean squares (TLMS) algorithm for the adaptive total least squares problem. Instead of being based on the minimum mean square error, as the LMS algorithm is, the proposed TLMS algorithm is based on the total mean square error, obtained by minimizing the weighted cost function for the TLS solution of the adaptive filtering problem. The proposed algorithm retains the $O(N)$ complexity of adaptive TLS algorithms with the additional quality of steady-state convergence under correlated inputs. A convergence analysis is presented to show the global convergence of the proposed algorithm under all kinds of inputs, provided the stepsize parameter is suitably chosen.

This paper is organized as follows. We start with a mathematical formulation of the adaptive total least squares problem in Section 2; the derivation of the TLMS algorithm is given in Section 3, including its convergence analysis in Section 3.1. The efficiency of the proposed algorithm is then tested in Section 4 by applying it to an unknown system identification problem and comparing the results with the conventional LMS and normalized LMS (NLMS) algorithms. Concluding remarks are given in Section 5.

2. Mathematical Formulation of Adaptive Total Least Squares Problem

Consider an unknown system, with impulse response vector $\mathbf{w}_{o}$, to be identified by an adaptive FIR filter of length $N$ and response vector $\mathbf{w}(n)$ at time $n$, with the assumption that both input and output are corrupted by additive white Gaussian noise (AWGN). The noise-free input vector $\mathbf{x}(n)$ is formed from the input signals $x(n)$ such that
$$\mathbf{x}(n) = \left[x(n),\, x(n-1),\, \ldots,\, x(n-N+1)\right]^{T}. \qquad (1)$$
The desired output of the unknown system is then given by
$$d(n) = \mathbf{w}_{o}^{T}\mathbf{x}(n) + v_{o}(n), \qquad (2)$$
where $\mathbf{w}_{o}^{T}\mathbf{x}(n)$ is the system's output and $v_{o}(n)$ is an added white Gaussian noise of zero mean and variance $\sigma_{o}^{2}$.
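For illustration, the signal model of (1) and (2) can be simulated as in the following Python sketch; the filter length, the system response $\mathbf{w}_{o}$, and the noise variance used here are arbitrary illustrative choices rather than values taken from this paper.

import numpy as np

rng = np.random.default_rng(0)

N = 8                                   # filter length (illustrative)
n_samples = 5000
w_o = rng.standard_normal(N)            # hypothetical unknown system response w_o
sigma_o2 = 1e-3                         # variance of the output noise v_o(n)

x = rng.standard_normal(n_samples)      # clean input signal x(n), unit variance

def input_vector(x, n, N):
    """Noise-free input vector [x(n), x(n-1), ..., x(n-N+1)]^T, zero-padded for n < N-1."""
    window = x[max(0, n - N + 1): n + 1][::-1]
    return np.pad(window, (0, N - len(window)))

# desired output d(n) = w_o^T x(n) + v_o(n), as in (2)
d = np.array([w_o @ input_vector(x, n, N) for n in range(n_samples)])
d = d + np.sqrt(sigma_o2) * rng.standard_normal(n_samples)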

The primary assumption of an adaptive least squares (ALS) problem is that perturbations occur in the output signals only and that the input signals are exactly known. This assumption is often unrealistic, because perturbations due to sampling, modeling, or measurement errors affect the input signals as well. A sensible way to handle such situations is to model perturbations in the input signals in addition to the perturbations in the output signals. A schematic diagram of an adaptive filter with perturbed input is depicted in Figure 1.

If $\mathbf{v}_{i}(n)$ denotes the perturbation in the input vector $\mathbf{x}(n)$, where $\mathbf{v}_{i}(n)$ is an additive white Gaussian noise (uncorrelated with the output noise) of zero mean and variance $\sigma_{i}^{2}$, then the noisy input vector is
$$\tilde{\mathbf{x}}(n) = \mathbf{x}(n) + \mathbf{v}_{i}(n). \qquad (3)$$
It is clear from Figure 1 that for every input vector $\tilde{\mathbf{x}}(n)$, the filter produces an estimated output $y(n) = \mathbf{w}^{T}(n)\tilde{\mathbf{x}}(n)$, which is compared with $d(n)$ to produce the least squares error signal $e(n) = d(n) - y(n)$. Define the autocorrelation matrix of the noisy input vector as $\mathbf{R} = E[\tilde{\mathbf{x}}(n)\tilde{\mathbf{x}}^{T}(n)]$ and the cross-correlation vector of the output signal with $\tilde{\mathbf{x}}(n)$ as $\mathbf{p} = E[d(n)\,\tilde{\mathbf{x}}(n)]$.
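Continuing the previous sketch, the noisy input vectors of (3) and sample estimates of $\mathbf{R}$ and $\mathbf{p}$ can be formed as follows; the input-noise variance is again an illustrative assumption.

# continuation of the previous sketch (reuses x, d, N, n_samples, input_vector)
import numpy as np

rng = np.random.default_rng(1)
sigma_i2 = 1e-3                                                    # variance of the input noise (illustrative)

X = np.stack([input_vector(x, n, N) for n in range(n_samples)])    # rows: noise-free vectors x(n)^T
X_noisy = X + np.sqrt(sigma_i2) * rng.standard_normal(X.shape)     # rows: noisy vectors, as in (3)

R_hat = X_noisy.T @ X_noisy / n_samples                            # sample estimate of R = E[x~(n) x~(n)^T]
p_hat = X_noisy.T @ d / n_samples                                  # sample estimate of p = E[d(n) x~(n)]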

At this stage the least squares solution, obtained by minimizing the cost function $J(\mathbf{w}) = E[e^{2}(n)]$, gives a poor estimate of the solution of the adaptive filtering problem because of the presence of noise in the filter input. Casting the adaptive filtering problem as a total least squares problem can, however, repair this poor estimation under noisy input [10, 11]. The following definition is made to adopt a more general signal model for ATLS-based filtering.

Definition 1 (augmented data vector). Define an augmented data vector $\bar{\mathbf{x}}(n)$ as
$$\bar{\mathbf{x}}(n) = \left[\tilde{\mathbf{x}}^{T}(n),\; d(n)\right]^{T}. \qquad (4)$$

An alternate form of $e(n)$, in terms of the augmented data vector of Definition 1, is obtained as follows:
$$e(n) = d(n) - \mathbf{w}^{T}(n)\,\tilde{\mathbf{x}}(n) = -\bar{\mathbf{w}}^{T}(n)\,\bar{\mathbf{x}}(n), \qquad (5)$$
where $\bar{\mathbf{w}}(n) = [\mathbf{w}^{T}(n),\, -1]^{T}$ denotes the extended parameter vector.
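The identity in (5) is easy to verify numerically; the following self-contained Python fragment, with arbitrary data, checks that the two expressions for $e(n)$ coincide.

import numpy as np

rng = np.random.default_rng(2)
N = 8
x_noisy = rng.standard_normal(N)        # a noisy input vector x~(n)
d_n = rng.standard_normal()             # the desired output d(n)
w = rng.standard_normal(N)              # current filter weights w(n)

x_bar = np.append(x_noisy, d_n)         # augmented data vector [x~(n)^T, d(n)]^T, as in (4)
w_bar = np.append(w, -1.0)              # extended parameter vector [w(n)^T, -1]^T

e_direct = d_n - w @ x_noisy            # e(n) = d(n) - w(n)^T x~(n)
e_augmented = -(w_bar @ x_bar)          # e(n) = -w_bar(n)^T x_bar(n), as in (5)
assert np.isclose(e_direct, e_augmented)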

The TLS solution of the adaptive filtering problem is an eigenvector associated with the smallest eigenvalue $\lambda_{\min}$ of the extended autocorrelation matrix $\bar{\mathbf{R}}$:
$$\bar{\mathbf{R}}\,\bar{\mathbf{w}}_{\mathrm{TLS}} = \lambda_{\min}\,\bar{\mathbf{w}}_{\mathrm{TLS}}, \qquad (6)$$
where $\bar{\mathbf{R}} = E[\bar{\mathbf{x}}(n)\,\bar{\mathbf{x}}^{T}(n)]$.

Instead of minimizing the mean square error $E[e^{2}(n)]$, the adaptive total least squares problem is concerned with minimizing the total mean square error, where the total error is given by
$$\epsilon(n) = \frac{\bar{\mathbf{w}}^{T}(n)\,\bar{\mathbf{x}}(n)}{\sqrt{\bar{\mathbf{w}}^{T}(n)\,\bar{\mathbf{w}}(n)}}. \qquad (7)$$

The TLS cost function is then defined in terms of the total error as
$$J_{\mathrm{TLS}}(\bar{\mathbf{w}}) = E\left[\epsilon^{2}(n)\right] = \frac{\bar{\mathbf{w}}^{T}\bar{\mathbf{R}}\,\bar{\mathbf{w}}}{\bar{\mathbf{w}}^{T}\bar{\mathbf{w}}}. \qquad (8)$$
The adaptive total least squares problem is a minimization problem of the form [10, 11]
$$\bar{\mathbf{w}}_{\mathrm{TLS}} = \arg\min_{\bar{\mathbf{w}} \neq \mathbf{0}} \frac{\bar{\mathbf{w}}^{T}\bar{\mathbf{R}}\,\bar{\mathbf{w}}}{\bar{\mathbf{w}}^{T}\bar{\mathbf{w}}}. \qquad (9)$$

Note that an optimal solution of the TLS problem (9) is an eigenvector corresponding to the smallest eigenvalue of $\bar{\mathbf{R}}$. In practice the SVD technique is used to solve TLS problems, since it offers low sensitivity to computational errors; however, it is computationally expensive [5]. An alternative way to estimate the eigenvector corresponding to the smallest eigenvalue is to use an adaptive algorithm [1, 2].
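As a reference point for the adaptive algorithm derived next, a batch TLS estimate can be computed directly from the eigendecomposition of a sample estimate of $\bar{\mathbf{R}}$; the Python sketch below assumes a data matrix whose rows are the noisy input vectors and is intended only as a check, not as part of the proposed method.

import numpy as np

def batch_tls_weights(X_noisy, d):
    """Batch TLS estimate: eigenvector of the sample extended autocorrelation
    matrix R_bar associated with its smallest eigenvalue, rescaled so that the
    last component equals -1 (reference solution, not the adaptive algorithm)."""
    X_bar = np.hstack([X_noisy, d[:, None]])      # rows are augmented vectors x_bar(n)^T
    R_bar = X_bar.T @ X_bar / X_bar.shape[0]      # sample estimate of R_bar
    eigvals, eigvecs = np.linalg.eigh(R_bar)      # eigenvalues in ascending order
    v = eigvecs[:, 0]                             # eigenvector of the smallest eigenvalue
    return -v[:-1] / v[-1]                        # w such that [w^T, -1]^T is parallel to v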

3. Derivation of Total LMS Algorithm for Adaptive Filtering Problem

In the adaptive least squares problem, the conventional LMS algorithm is a steepest descent method which uses an instantaneous cost function for the computation of the gradient vector [1]. Using a similar implementation for the TLS problem, the total LMS (TLMS) algorithm is obtained from an instantaneous estimate of the cost function (8),
$$\hat{J}(n) = \epsilon^{2}(n) = \frac{\left(\bar{\mathbf{w}}^{T}(n)\,\bar{\mathbf{x}}(n)\right)^{2}}{\bar{\mathbf{w}}^{T}(n)\,\bar{\mathbf{w}}(n)}.$$
The recursive update equation of the TLMS algorithm is then given as
$$\bar{\mathbf{w}}(n+1) = \bar{\mathbf{w}}(n) - \mu\,\hat{\nabla}(n), \qquad (10)$$
where $\mu$ is the stepsize (convergence) parameter and $\hat{\nabla}(n)$ is the instantaneous gradient. Note that
$$\hat{\nabla}(n) = \frac{\partial \hat{J}(n)}{\partial \bar{\mathbf{w}}(n)} = \frac{2\left(\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{x}}(n)\right)\bar{\mathbf{x}}(n)}{\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)} - \frac{2\left(\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{x}}(n)\right)^{2}\bar{\mathbf{w}}(n)}{\left(\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)\right)^{2}}. \qquad (11)$$
Using $e(n) = -\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{x}}(n)$ and $\hat{J}(n) = e^{2}(n)/\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)$, the above equation becomes
$$\hat{\nabla}(n) = -\frac{2}{\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)}\left[e(n)\,\bar{\mathbf{x}}(n) + \hat{J}(n)\,\bar{\mathbf{w}}(n)\right]. \qquad (12)$$
Substituting (12) in (10), the update equation of the TLMS algorithm becomes
$$\bar{\mathbf{w}}(n+1) = \bar{\mathbf{w}}(n) + \frac{2\mu}{\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)}\left[e(n)\,\bar{\mathbf{x}}(n) + \hat{J}(n)\,\bar{\mathbf{w}}(n)\right]. \qquad (13)$$
Once $\bar{\mathbf{w}}(n+1)$ is computed using (13), the TLS solution update is obtained by the following formula:
$$\mathbf{w}(n+1) = -\frac{1}{\bar{w}_{N+1}(n+1)}\left[\bar{w}_{1}(n+1),\, \ldots,\, \bar{w}_{N}(n+1)\right]^{T}. \qquad (14)$$
The detailed TLMS algorithm is summarized in Table 1. A complexity measure of the algorithm shows that it is computationally linear, requiring a number of multiplications/divisions per iteration that grows only linearly with the filter length $N$. This computational simplicity of the adaptive TLMS algorithm makes it a better choice than the computationally expensive SVD-based TLS algorithm, which requires $O(N^{3})$ computations per iteration [5].
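A minimal Python sketch of the recursion, following (10)-(14) as reconstructed above, is given below; the zero initialization of the filter part and the default stepsize are illustrative assumptions.

import numpy as np

def tlms(X_noisy, d, mu=1e-3, w_bar0=None):
    """Sketch of the TLMS recursion (10)-(14): stochastic gradient descent on the
    instantaneous cost J_hat(n), with extraction of the filter weights by rescaling."""
    n_samples, N = X_noisy.shape
    w_bar = np.append(np.zeros(N), -1.0) if w_bar0 is None else np.asarray(w_bar0, float).copy()
    W = np.zeros((n_samples, N))                  # trajectory of the extracted weights w(n+1)
    for n in range(n_samples):
        x_bar = np.append(X_noisy[n], d[n])       # augmented data vector
        norm2 = w_bar @ w_bar                     # w_bar(n)^T w_bar(n)
        e = -(w_bar @ x_bar)                      # e(n) = -w_bar(n)^T x_bar(n)
        J_hat = e * e / norm2                     # instantaneous TLS cost
        w_bar = w_bar + (2.0 * mu / norm2) * (e * x_bar + J_hat * w_bar)   # update (13)
        W[n] = -w_bar[:-1] / w_bar[-1]            # extraction (14)
    return W

Note that in this sketch the extended vector is not renormalized between iterations; its norm enters only through the factor $2\mu/\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)$, which is the effective stepsize discussed in the convergence analysis below.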

3.1. Convergence Analysis

Taking the inner product of (13) with $\bar{\mathbf{w}}(n)$ yields
$$\bar{\mathbf{w}}^{T}(n)\,\bar{\mathbf{w}}(n+1) = \bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n) + \frac{2\mu}{\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)}\left[e(n)\,\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{x}}(n) + \hat{J}(n)\,\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)\right] = \bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n), \qquad (15)$$
since $e(n) = -\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{x}}(n)$ and $\hat{J}(n) = e^{2}(n)/\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{w}}(n)$; that is, the correction term in (13) is orthogonal to $\bar{\mathbf{w}}(n)$. Since $e^{2}(n) = \left(\bar{\mathbf{w}}^{T}(n)\bar{\mathbf{x}}(n)\right)^{2}$, the Cauchy-Schwarz inequality [12] gives $e^{2}(n) \leq \|\bar{\mathbf{w}}(n)\|^{2}\,\|\bar{\mathbf{x}}(n)\|^{2}$, that is, $\hat{J}(n) \leq \|\bar{\mathbf{x}}(n)\|^{2}$. Let $\rho(n) = \|\bar{\mathbf{x}}(n)\|^{2}/\|\bar{\mathbf{w}}(n)\|^{2}$; then, taking the squared norm of (13) and using (15), the recursion becomes
$$\|\bar{\mathbf{w}}(n+1)\|^{2} = \|\bar{\mathbf{w}}(n)\|^{2} + \frac{4\mu^{2}e^{2}(n)}{\|\bar{\mathbf{w}}(n)\|^{4}}\left(\|\bar{\mathbf{x}}(n)\|^{2} - \hat{J}(n)\right), \qquad (16)$$
or
$$\|\bar{\mathbf{w}}(n+1)\|^{2} \leq \left(1 + 4\mu^{2}\rho^{2}(n)\right)\|\bar{\mathbf{w}}(n)\|^{2}. \qquad (17)$$
Since $\|\bar{\mathbf{x}}(n)\|^{2} - \hat{J}(n) \geq 0$, the sequence $\|\bar{\mathbf{w}}(n)\|^{2}$ is nondecreasing and is bounded by a geometric progression with ratio $1 + 4\mu^{2}\rho^{2}(n)$. The algorithm would therefore converge to an optimal solution provided that this growth factor remains close to one, that is, provided
$$4\mu^{2}\rho^{2}(n) \ll 1, \quad \text{or} \quad \mu \ll \frac{\|\bar{\mathbf{w}}(n)\|^{2}}{2\,\|\bar{\mathbf{x}}(n)\|^{2}}. \qquad (18)$$
This shows that the proposed algorithm is a variable stepsize algorithm, with effective stepsize $2\mu/\|\bar{\mathbf{w}}(n)\|^{2}$. An appropriate way to choose $\mu$ is to initialize the algorithm such that $\mu$ is less than the bound in (18) at $n = 0$ [13]. According to this result, a larger input power $\|\bar{\mathbf{x}}(n)\|^{2}$ calls for a smaller $\mu$, while a larger $\|\bar{\mathbf{w}}(n)\|^{2}$ permits a larger $\mu$.
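The variable-stepsize behavior can be observed numerically by recording $\|\bar{\mathbf{w}}(n)\|^{2}$ and the effective stepsize $2\mu/\|\bar{\mathbf{w}}(n)\|^{2}$ along the recursion; the following Python fragment repeats the update of the previous sketch and is purely a diagnostic illustration.

import numpy as np

def tlms_diagnostics(X_noisy, d, mu=1e-3):
    """Track ||w_bar(n)||^2 and the effective stepsize 2*mu/||w_bar(n)||^2 along
    the TLMS recursion of the previous sketch (illustrative diagnostic only)."""
    n_samples, N = X_noisy.shape
    w_bar = np.append(np.zeros(N), -1.0)
    norms, eff_steps = np.zeros(n_samples), np.zeros(n_samples)
    for n in range(n_samples):
        x_bar = np.append(X_noisy[n], d[n])
        norm2 = w_bar @ w_bar
        norms[n] = norm2
        eff_steps[n] = 2.0 * mu / norm2           # effective (variable) stepsize
        e = -(w_bar @ x_bar)
        J_hat = e * e / norm2
        w_bar = w_bar + (2.0 * mu / norm2) * (e * x_bar + J_hat * w_bar)
    return norms, eff_steps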

4. Application of TLMS Algorithm in System Identification

To examine the performance of the proposed TLMS algorithm, the unknown system identification model shown in Figure 2 is used.

A white Gaussian input signal of variance $\sigma_{x}^{2}$ is passed through a first-order coloring filter with frequency response [1]
$$H(z) = \frac{1}{1 - a\,z^{-1}}, \qquad 0 \leq a < 1, \qquad (19)$$
where $a$ is a correlation parameter that controls the eigenvalue spread of the input signals. $a = 0$ corresponds to the case when the eigenvalue spread of the input signals is close to 1, and the eigenvalue spread increases with an increase in the value of $a$.
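Such a correlated input can be generated, for example, by the following Python sketch, which drives the first-order recursion corresponding to (19) with white Gaussian noise; the specific filter form is the one written above and should be read as an assumption about the experiment.

import numpy as np

def colored_input(n_samples, a, sigma2=1.0, seed=0):
    """White Gaussian noise of variance sigma2 passed through the first-order
    coloring filter H(z) = 1 / (1 - a z^-1); larger a means larger eigenvalue spread."""
    rng = np.random.default_rng(seed)
    v = np.sqrt(sigma2) * rng.standard_normal(n_samples)
    x = np.empty(n_samples)
    x[0] = v[0]
    for n in range(1, n_samples):
        x[n] = a * x[n - 1] + v[n]        # first-order recursion equivalent to filtering by H(z)
    return x

# e.g. nearly white input for a = 0, strongly correlated input for a = 0.9
x_white = colored_input(2000, a=0.0)
x_corr = colored_input(2000, a=0.9)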

A white Gaussian noise at an SNR of 30 dB is added to the input signal to obtain the noisy input signal, and the output signal is likewise corrupted with an additive white Gaussian noise at an SNR of 30 dB. The proposed TLMS algorithm is compared with the LMS and NLMS algorithms of [1] in estimating the FIR weight vector of a filter of length $N$. The least squares misalignment of the LMS and NLMS algorithms is compared with the total least squares misalignment of the TLMS algorithm, and the simulation results are recorded for 2000 iterations with an ensemble average over 1000 independent runs.
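The noise injection at a prescribed SNR and the misalignment learning curves can be reproduced along the following lines; the helper functions and the misalignment definition (squared weight deviation normalized by $\|\mathbf{w}_{o}\|^{2}$, in dB) are illustrative assumptions rather than the paper's exact definitions.

import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Corrupt a signal with white Gaussian noise at the prescribed SNR in dB."""
    noise_power = np.mean(signal ** 2) / (10.0 ** (snr_db / 10.0))
    return signal + np.sqrt(noise_power) * rng.standard_normal(signal.shape)

def misalignment_db(W, w_o):
    """Misalignment ||w(n) - w_o||^2 / ||w_o||^2 in dB for a weight trajectory W
    whose rows are the estimates at successive iterations."""
    dev = np.sum((W - w_o) ** 2, axis=1) / np.sum(w_o ** 2)
    return 10.0 * np.log10(dev)

# Learning curves are obtained by averaging misalignment_db(...) over many
# independent runs (e.g., 1000 runs of 2000 iterations each).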

4.1. Convergence Behavior Corresponding to Stepsize Parameter

Although the TLMS algorithm converges for all values of $\mu$ satisfying (18), steady-state convergence of the TLMS algorithm is observed when the stepsize parameter is chosen as a power of a fixed base. In Figures 3(b)-3(d), four learning curves of the total misalignment of the TLMS algorithm are shown, corresponding to four such values of $\mu$, and it is observed that the robustness increases uniformly from one value to the next. On the other hand, if $\mu$ is chosen randomly, then the change in the convergence behavior is also random, although Figure 3(a) shows that the algorithm still converges.

4.2. Convergence Behavior Corresponding to Correlation Parameter

To check the effect of changes in the correlation parameter $a$ on the steady-state convergence behavior of the TLMS algorithm, different simulations are presented in Figure 3, each showing four learning curves of the total misalignment of the TLMS algorithm corresponding to the same four values of $\mu$. The correlation parameter takes its smallest values in the first two simulations in Figures 3(a) and 3(b); it is 0.6 in Figure 3(c) and 0.9 in Figure 3(d). It is clear from all of these simulation curves that an increase in the correlation of the data signals does not affect the steady-state performance of the algorithm; although the convergence speed slows down, all of the curves converge to the optimal solution.

4.3. Comparison

Figure 4 shows a comparison of the misalignment of the three algorithms, that is, the LMS, NLMS, and TLMS algorithms. The first two compute a least squares solution of the adaptive filtering problem, while the third computes the TLS solution of the adaptive total least squares problem. The correlation parameter and the stepsize parameters of the LMS, NLMS, and TLMS algorithms are fixed for this comparison. The results in this simulation show that the TLMS algorithm continues to converge as the iterations proceed and presents a better solution of the adaptive TLS problem.
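For completeness, the two baseline recursions used in this comparison can be sketched as follows; these are the standard LMS and NLMS updates from [1], with the small regularization constant in the NLMS denominator added here as an assumption to avoid division by zero.

import numpy as np

def lms(X, d, mu):
    """Standard LMS recursion: w(n+1) = w(n) + mu * e(n) * x(n)."""
    n_samples, N = X.shape
    w, W = np.zeros(N), np.zeros((n_samples, N))
    for n in range(n_samples):
        e = d[n] - w @ X[n]
        w = w + mu * e * X[n]
        W[n] = w
    return W

def nlms(X, d, mu, delta=1e-6):
    """Normalized LMS: the stepsize is divided by the instantaneous input power."""
    n_samples, N = X.shape
    w, W = np.zeros(N), np.zeros((n_samples, N))
    for n in range(n_samples):
        e = d[n] - w @ X[n]
        w = w + (mu / (delta + X[n] @ X[n])) * e * X[n]
        W[n] = w
    return W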

5. Conclusion

In this paper, an efficient TLMS algorithm is presented for the total least squares solution of the adaptive filtering problem. The proposed algorithm is derived by using a cost function of weighted instantaneous error signals and an efficient computation of the misalignment in terms of mean square deviation. The TLMS algorithm is better able to handle perturbations in both input and output signals, because it is derived specifically for that purpose. Since in real-life problems both input and output signals are contaminated by noise, the TLMS algorithm has wide applicability. The convergence analysis shows that the proposed algorithm converges globally, provided that the stepsize parameter is chosen appropriately. Furthermore, it is computationally simple and requires only $O(N)$ complexity, while other algorithms for TLS problems either require higher complexity or are sensitive to the correlation properties of the data signals.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous referees for their valuable comments, appreciation, and recommendation to publish this paper. This work is supported by the School of Mathematical Sciences, Universiti Sains Malaysia.