International Journal of Antennas and Propagation
Volume 2012 (2012), Article ID 891932, 10 pages
http://dx.doi.org/10.1155/2012/891932
Research Article

Efficient Rank-Adaptive Least-Square Estimation and Multiple-Parameter Linear Regression Using Novel Dyadically Recursive Hermitian Matrix Inversion

1School of Electrical Engineering and Computer Science, Louisiana State University, 3183 Patrick F. Taylor Hall Baton Rouge, LA 70803, USA
2Department of Computer Science, National Tsing Hua University, Hsinchu 30013, Taiwan
3Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada H3A 0G4
4Department of Television Networks and Transmission, Communications Research Centre, Ottawa, ON, Canada K2H 8S2

Received 27 July 2012; Accepted 3 October 2012

Academic Editor: Ai Bo

Copyright © 2012 Hsiao-Chun Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Least-square estimation (LSE) and multiple-parameter linear regression (MLR) are important estimation techniques for engineering and science, especially in mobile communications and signal processing applications. Most of the computational complexity incurred in LSE and MLR arises from a Hermitian matrix inversion. In practice, the Yule-Walker equations are not valid, and hence the Levinson-Durbin algorithm cannot be employed for general LSE and MLR problems. Therefore, the most efficient Hermitian matrix inversion method is based on the Cholesky factorization. In this paper, we derive a new dyadic recursion algorithm for sequential rank-adaptive Hermitian matrix inversions. In addition, we provide theoretical computational complexity analyses to compare our new dyadic recursion scheme with the conventional Cholesky factorization. We can thereupon design a variable model-order LSE (MLR) using this proposed dyadic recursion approach. Through our complexity analyses and Monte Carlo simulations, we show that our new dyadic recursion algorithm is more efficient than the conventional Cholesky factorization for sequential rank-adaptive LSE (MLR), and that the associated variable model-order LSE (MLR) can seek the trade-off between the targeted estimation performance and the required computational complexity. Our proposed new scheme can benefit future portable and mobile signal processing or communications devices.

1. Introduction

Least-square estimation (LSE) or multiple-parameter linear regression (MLR) is a crucial mathematical technique used ubiquitously across science and engineering. A wide variety of applications in signal processing devices, mobile communications, computational physics, and statistics arise from the LSE or MLR. It has been adopted as a powerful tool in nuclear science [1, 2], system identification [3], data mining [4], linear system modeling [5], medical imaging analysis [6], and so forth. In particular, for signal processing and communications, the least-square approach dominates a large majority of applications, such as equalization [7-16] as well as interference estimation [17-19]. An overview of linear regression can be found in [20].

However, all of the aforementioned applications of LSE or MLR require a predetermined model order (the number of parameters to be estimated), and therefore the associated computational complexities are fixed and not flexible. In practice, the predetermined model order should be large enough to assure that the square error can be minimized owing to a sufficient degree of freedom. Therefore, model-order selection is essential to all LSE and MLR problems. Intuitively, we may adjust the model order until an appropriate model order is achieved that satisfies a preset square-error requirement. In the meantime, the computational complexity becomes variable and depends on the error tolerance when such an adjustable model order is used. Nevertheless, the computational complexity would grow tremendously during the model-order adjustment. How to make the most use of the estimates calculated at the previous model order and estimate the new parameters thereupon has always drawn researchers' interest.

The rank-one update scheme, or iterative Levinson-Durbin algorithm, is the well-known approach for computing the linear prediction coefficients with adjustable model orders [21]. The Levinson-Durbin algorithm is based on the Yule-Walker equations [22] and the strict assumption of a Hermitian Toeplitz correlation matrix [21], but such an assumption is valid only when the multiple time series (the observed data) possess an exact temporal relationship (one is the delayed version of another) and the sample size approaches infinity under ergodicity [23]. Besides, the Levinson-Durbin algorithm can only increase the model order by one at each iteration. A new class of extended Levinson-Durbin algorithms has been developed to achieve computational efficiency: in speech and audio coders, an adaptive multistage Levinson-Durbin algorithm was investigated in [24] and shown to be more robust than the conventional Levinson-Durbin algorithm when the input signal has large spectral dynamics; low-complexity approaches were also provided to determine the optimal delay and calculate the linear prediction filter coefficients in [7, 25]; iterative Levinson-Durbin algorithms were investigated for the frequency-domain decision feedback and MMSE equalizers in [8-12]. However, the Levinson-Durbin family of algorithms is still restricted by its limited model-order scalability due to the common rank-one update characteristic and the Hermitian Toeplitz correlation matrix assumption.

For a general linear regression problem, the temporal relationship among the multiple observed time series does not necessarily hold. Hence, a Hermitian Toeplitz correlation matrix arises only under particular circumstances. To be more precise, the correlation matrix for a general linear regression problem is merely Hermitian [22, 26]. If the correlation matrix is Hermitian, the most efficient technique for the least-square solution involves the Cholesky factorization [21]. Recursion-based factorization methods were proposed to reduce the memory usage and computational complexity [27, 28]. However, they do not address the direct Hermitian matrix inversion, which is the backbone of linear estimation and multiple-parameter linear regression. The existing Cholesky-factorization-based Hermitian matrix inversion procedure has to recalculate all the estimates when the model order is enlarged. In this paper, we present a novel rank-adaptive Cholesky factorization scheme, which can be employed to establish a low-complexity variable model-order LSE or MLR algorithm. In our newly derived LSE or MLR algorithm, the model order can be adjusted dyadically (the model order is $2^k$, $k \in \mathbb{Z}^{+}$), whereas that in the Levinson-Durbin algorithm can only be incremented one at a time. To the best of our knowledge, our new algorithm is the only estimation or regression method which can adjust the model order dyadically. The dyadic-scalability property of our proposed LSE and MLR algorithm can well serve future signal processing and communications applications which demand adaptability in both memory storage and computational complexity.

The rest of this paper is organized as follows. In Section 2, the problem formulation for the LSE and MLR is presented, where the Wiener-Hopf equations result in a Hermitian correlation matrix inversion. In Section 3.1, we derive the dyadic recursion formula for such a Hermitian matrix inversion. In Section 3.2, we design a novel dyadic rank-adaptive Hermitian matrix inversion algorithm based on the results in Section 3.1. The computational complexity analyses for the fixed model-order and rank-adaptive Hermitian matrix inversions using the conventional Cholesky factorization and our new dyadic recursion are presented in Section 4. Communications and signal processing applications that use the sequential rank-adaptive LSE for channel estimation and equalization are introduced in Section 5. The numerical evaluations comparing the computational complexities and the least mean-squared errors of our new dyadic recursion scheme and the conventional method are delineated in Section 6. Concluding remarks are drawn in Section 7.

Notations: $\mathbb{Z}^{+}$ and $\mathbb{C}$ denote the sets of positive integers and complex numbers, respectively. Boldface lowercase letters denote column vectors and boldface uppercase letters denote matrices; $(\cdot)^{T}$ and $(\cdot)^{H}$ denote the transpose and the Hermitian adjoint, respectively. A few symbols are reserved as integer indices. A subscripted square matrix, such as $\mathbf{R}_{n}$, has dimensionality $n \times n$, and such a matrix may appear as a block inside a larger matrix that contains it. The symbol $\triangleq$ represents a mathematical definition and $(\cdot)^{*}$ denotes the complex-conjugate operator. The computational complexity function specifies the number of complex multiplications incurred when a quantity of a given dimension is calculated.

2. Least-Square Estimation and Multiple-Parameter Linear Regression

The LSE and MLR can be formulated as follows: given the degree of freedom or model order $M$ and the observed time series $x_m[n]$, $m = 1, 2, \ldots, M$, and the target time series (desired response) $d[n]$, where $N$ is the sample size for all time series ($n = 1, 2, \ldots, N$), we want to search for the best-fitting coefficients $w_m$, for $m = 1, 2, \ldots, M$, such that $\sum_{n=1}^{N} \big| d[n] - \sum_{m=1}^{M} w_m x_m[n] \big|^2$ is minimized. In general, $x_m[n]$, $d[n]$, and $w_m$ are complex-valued. Thereby, in matrix form, the LSE or MLR problem can be formulated as
\[ \min_{\mathbf{w}} \; \|\mathbf{d} - \mathbf{X}\mathbf{w}\|^2, \tag{1} \]
where
\[ \mathbf{X} \triangleq \big[ x_m[n] \big]_{N \times M}, \qquad \mathbf{d} \triangleq \big[ d[1], d[2], \ldots, d[N] \big]^{T}, \tag{2} \]
\[ \mathbf{w} \triangleq \big[ w_1, w_2, \ldots, w_M \big]^{T} \tag{3} \]
and $\mathbf{w}$ is the collection of the best-fitting (optimal) coefficients. Equation (1) can be rewritten as
\[ J(\mathbf{w}) = \mathbf{d}^{H}\mathbf{d} - \mathbf{w}^{H}\mathbf{p} - \mathbf{p}^{H}\mathbf{w} + \mathbf{w}^{H}\mathbf{R}\mathbf{w}, \tag{4} \]
where
\[ \mathbf{R} \triangleq \mathbf{X}^{H}\mathbf{X}, \tag{5} \]
\[ \mathbf{p} \triangleq \mathbf{X}^{H}\mathbf{d}. \tag{6} \]
According to [22], the least-square solution to the LSE or MLR problem stated in (4) is given by the Wiener-Hopf equation as follows:
\[ \mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p}, \tag{7} \]
and the least mean-squared error is given by
\[ J_{\min} = \mathbf{d}^{H}\mathbf{d} - \mathbf{p}^{H}\mathbf{R}^{-1}\mathbf{p}. \tag{8} \]
It is noted from (7) that the major portion of the computational complexity for solving $\mathbf{w}_{\mathrm{opt}}$ involves the calculation of $\mathbf{R}^{-1}$. Therefore, the computational complexity can be further reduced if the special mathematical properties of the correlation matrix $\mathbf{R}$ are exploited.
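The Wiener-Hopf route above can be sketched numerically. The following NumPy snippet (the names `X`, `d`, and `lse_coefficients` are ours, and the energy normalization discussed later is omitted) forms the correlation matrix and cross-correlation vector and solves for the optimal coefficients:

```python
import numpy as np

def lse_coefficients(X, d):
    """Solve the LSE/MLR problem: find w minimizing ||d - X w||^2.

    X : (N, M) complex matrix whose M columns are the observed time
        series (model order M), each of sample size N.
    d : (N,) desired response.

    Returns the Wiener-Hopf solution w = R^{-1} p, with R = X^H X
    (Hermitian correlation matrix) and p = X^H d, plus the residual
    squared error.
    """
    R = X.conj().T @ X            # Hermitian correlation matrix
    p = X.conj().T @ d            # cross-correlation vector
    w = np.linalg.solve(R, p)     # w_opt = R^{-1} p
    err = np.linalg.norm(d - X @ w) ** 2  # least squared error
    return w, err
```

The solution agrees with a generic least-squares solver whenever `R` is nonsingular; the point of the paper is that exploiting the Hermitian structure of `R` makes the inversion cheaper.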

In particular, when the LSE or MLR is adopted for filter or equalizer design, the temporal relationship $x_m[n] = x_1[n-m+1]$, for $m = 2, 3, \ldots, M$, for all $n$, is assumed. Thus, as the sample size is large ($N \to \infty$), the correlation matrix $\mathbf{R}$ is both Hermitian and Toeplitz under the stationary and ergodic assumption [23]. However, very often, the multiple time series $x_m[n]$, $m = 1, 2, \ldots, M$, specify different observations generated from independent sources, or the sample size is not large. Consequently,
\[ \mathbf{R}(i, j) = \sum_{n=1}^{N} x_i^{*}[n]\, x_j[n], \quad 1 \le i, j \le M. \tag{9} \]
According to (9), the correlation matrix is not necessarily Toeplitz. Based on (5), the only ever-sustaining property is that $\mathbf{R}$ is always Hermitian. Instead of employing the QR factorization to compute $\mathbf{R}^{-1}$ with high complexity [21], we can benefit from this Hermitian property to carry out the inversion. Since $\mathbf{R}$ is Hermitian, we can use the Cholesky factorization such that
\[ \mathbf{R} = \mathbf{L}\mathbf{L}^{H}, \tag{11} \]
where $\mathbf{L}$ and $\mathbf{L}^{H}$ are the lower- and upper-triangular matrices, respectively. If $\mathbf{L}^{-1}$ exists, according to (11), we can calculate $\mathbf{R}^{-1}$ as
\[ \mathbf{R}^{-1} = \mathbf{L}^{-H}\mathbf{L}^{-1}, \tag{12} \]
where $\mathbf{L}^{-H} \triangleq (\mathbf{L}^{-1})^{H}$, which is a more efficient procedure than the inversion based on the QR factorization [21].
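The Cholesky route to the Hermitian inverse can be illustrated as follows; the explicit forward substitution stands in for whatever triangular-inversion routine an implementation would use, and the function names are ours:

```python
import numpy as np

def forward_substitute(L, B):
    """Solve L X = B for lower-triangular L by forward substitution."""
    n = L.shape[0]
    X = np.zeros_like(B)
    for i in range(n):
        X[i] = (B[i] - L[i, :i] @ X[:i]) / L[i, i]
    return X

def hermitian_inverse_cholesky(R):
    """Invert a Hermitian positive-definite R via R = L L^H, so that
    R^{-1} = (L^{-1})^H L^{-1}, avoiding a general-purpose inversion."""
    n = R.shape[0]
    L = np.linalg.cholesky(R)                       # lower-triangular factor
    L_inv = forward_substitute(L, np.eye(n, dtype=R.dtype))
    return L_inv.conj().T @ L_inv                   # L^{-H} L^{-1}
```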

3. Novel Dyadically Recursive Hermitian Matrix Inversion

3.1. Recursion of Hermitian Matrix Inversion Using Cholesky Outer Product

From the previous studies in Section 2, we cannot adopt the Levinson-Durbin algorithm for LSE or MLR when the correlation matrix is not Toeplitz. To the best of our knowledge, no existing literature has addressed any form of recursive formula for the procedure of Hermitian matrix inversion. In this section, we introduce such a new procedure, which can dyadically extend the inverse of any Hermitian matrix from the inverses of its submatrices with half the dimension. This novel dyadic recursion for Hermitian matrix inversion can reduce the computational complexity, as we will analyze later on.

Any lower-triangular matrix $\mathbf{L}_{2n}$ of dimension $2n \times 2n$ can be decomposed as
\[ \mathbf{L}_{2n} = \begin{bmatrix} \mathbf{L}_{n} & \mathbf{0}_{n} \\ \mathbf{V}_{n} & \widetilde{\mathbf{L}}_{n} \end{bmatrix}, \tag{13} \]
where $\mathbf{0}_{n}$ is the $n \times n$ matrix containing all zeros, $\mathbf{L}_{n}$, $\widetilde{\mathbf{L}}_{n}$ are the two $n \times n$ submatrices both of which are lower-triangular, and $\mathbf{V}_{n}$ is the $n \times n$ submatrix. According to (11) and (13), we have
\[ \mathbf{R}_{2n} = \mathbf{L}_{2n}\mathbf{L}_{2n}^{H} = \begin{bmatrix} \mathbf{R}_{n} & \mathbf{B}_{n} \\ \mathbf{B}_{n}^{H} & \widetilde{\mathbf{R}}_{n} \end{bmatrix}, \tag{14} \]
where
\[ \mathbf{R}_{n} = \mathbf{L}_{n}\mathbf{L}_{n}^{H}, \qquad \mathbf{B}_{n} = \mathbf{L}_{n}\mathbf{V}_{n}^{H}, \qquad \widetilde{\mathbf{R}}_{n} = \mathbf{V}_{n}\mathbf{V}_{n}^{H} + \widetilde{\mathbf{L}}_{n}\widetilde{\mathbf{L}}_{n}^{H}. \tag{15} \]
According to (11)-(14), we obtain
\[ \mathbf{R}_{2n}^{-1} = \mathbf{L}_{2n}^{-H}\mathbf{L}_{2n}^{-1}, \qquad \mathbf{L}_{2n}^{-1} = \begin{bmatrix} \mathbf{L}_{n}^{-1} & \mathbf{0}_{n} \\ -\widetilde{\mathbf{L}}_{n}^{-1}\mathbf{V}_{n}\mathbf{L}_{n}^{-1} & \widetilde{\mathbf{L}}_{n}^{-1} \end{bmatrix}, \tag{16} \]
where
\[ \mathbf{V}_{n} = \mathbf{B}_{n}^{H}\mathbf{L}_{n}^{-H}. \tag{17} \]
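A minimal sketch of one such dyadic expansion follows, written in the mathematically equivalent Schur-complement form of the block inverse (the derivation above works with the triangular factors themselves; the partition names `R11`, `B`, `R22` are ours):

```python
import numpy as np

def block_hermitian_inverse(R, R11_inv):
    """One dyadic expansion step: given the 2n x 2n Hermitian matrix R
    partitioned as [[R11, B], [B^H, R22]] and the precomputed inverse of
    its n x n leading block, assemble R^{-1} from half-size pieces via
    the Schur complement S = R22 - B^H R11^{-1} B."""
    n = R11_inv.shape[0]
    B = R[:n, n:]
    R22 = R[n:, n:]
    S = R22 - B.conj().T @ R11_inv @ B        # Schur complement (Hermitian)
    S_inv = np.linalg.inv(S)                  # only a half-size inversion
    top_right = -R11_inv @ B @ S_inv
    top_left = R11_inv + R11_inv @ B @ S_inv @ B.conj().T @ R11_inv
    return np.block([[top_left, top_right],
                     [top_right.conj().T, S_inv]])
```

The key point mirrors the derivation: only half-size inversions and block products are needed, and the previously computed half-size inverse is reused rather than discarded.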

3.2. New Efficient Dyadic Recursion for Hermitian Matrix Inversion

In this paper, we assume that ill-posed problems can be neglected during the matrix inversion and no additional pivoting technique is required. Therefore, the computational complexity for pivoting is not considered here. From the results in the previous subsection, we can design the following efficient dyadic recursion algorithm for any Hermitian matrix inversion.

Step 1 (Initialization). Set $k = 0$ and $n = 2^{k} = 1$. Since the observed time series are energy-normalized, $\mathbf{R}_{1} = [1]$. Hence, $\mathbf{R}_{1}^{-1} = [1]$.

Step 2. Construct the correlation matrix $\mathbf{R}_{2n}$ of dimension $2n \times 2n$ such that
\[ \mathbf{R}_{2n} = \begin{bmatrix} \mathbf{R}_{n} & \mathbf{B}_{n} \\ \mathbf{B}_{n}^{H} & \widetilde{\mathbf{R}}_{n} \end{bmatrix}. \tag{21} \]

Step 3 (Dyadic Expansion). Compute
\[ \mathbf{V}_{n} = \mathbf{B}_{n}^{H}\mathbf{L}_{n}^{-H}. \tag{22} \]
Use the Cholesky factorization to calculate
\[ \widetilde{\mathbf{L}}_{n}\widetilde{\mathbf{L}}_{n}^{H} = \widetilde{\mathbf{R}}_{n} - \mathbf{V}_{n}\mathbf{V}_{n}^{H}. \tag{23} \]
Then, calculate $\widetilde{\mathbf{L}}_{n}^{-1}$ accordingly.
Compute $-\widetilde{\mathbf{L}}_{n}^{-1}\mathbf{V}_{n}\mathbf{L}_{n}^{-1}$.

Step 4 (Dyadically Recursive Hermitian Matrix Inversion). Construct
\[ \mathbf{L}_{2n}^{-1} = \begin{bmatrix} \mathbf{L}_{n}^{-1} & \mathbf{0}_{n} \\ -\widetilde{\mathbf{L}}_{n}^{-1}\mathbf{V}_{n}\mathbf{L}_{n}^{-1} & \widetilde{\mathbf{L}}_{n}^{-1} \end{bmatrix}, \qquad \mathbf{R}_{2n}^{-1} = \mathbf{L}_{2n}^{-H}\mathbf{L}_{2n}^{-1}. \tag{24} \]

Step 5. Repeat Steps 2-4 for $n = 2, 4, 8, \ldots$, and stop at the terminal dimension, which is determined by the preset square-error tolerance. Use the ultimate $\mathbf{R}^{-1}$ to calculate $\mathbf{w}_{\mathrm{opt}}$ according to (7).
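Steps 1-5 can be sketched as a loop that doubles the dimension and reuses the previous inverse at each expansion. As a sketch under our own naming, this again uses the equivalent Schur-complement form of the block inverse rather than the explicit triangular factors:

```python
import numpy as np

def dyadic_inverse(R, k_max):
    """Sequential rank-adaptive inversion: starting from the 1 x 1 leading
    block of the Hermitian matrix R, repeatedly double the dimension
    (2, 4, ..., 2**k_max), reusing the previous inverse at each dyadic
    expansion. Returns the inverses of all leading 2^k x 2^k blocks."""
    inv = np.array([[1.0 / R[0, 0]]], dtype=R.dtype)
    inverses = [inv]
    for k in range(k_max):
        n = 2 ** k
        Rk = R[:2 * n, :2 * n]
        B = Rk[:n, n:]
        S = Rk[n:, n:] - B.conj().T @ inv @ B   # Schur complement
        S_inv = np.linalg.inv(S)                # half-size inversion only
        tr = -inv @ B @ S_inv
        tl = inv + inv @ B @ S_inv @ B.conj().T @ inv
        inv = np.block([[tl, tr], [tr.conj().T, S_inv]])
        inverses.append(inv)
    return inverses
```

In a variable model-order LSE, the loop would terminate as soon as the square-error tolerance is met, and every intermediate inverse in `inverses` corresponds to one dyadic model order.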

4. Computational Complexity Analyses for Hermitian Matrix Inversion

In this section, we will study the computational complexities of the two algorithms for the Hermitian matrix inversion, namely (i) the conventional Cholesky factorization and (ii) our new dyadic recursion presented in Section 3.2. Since the correlation matrix assumption for the Levinson-Durbin method is different and it also has to be based on the Yule-Walker equations as previously discussed in Section 2, we cannot compare the Levinson-Durbin method with the two underlying schemes here on a fair ground. Without loss of generality, we assume in our subsequent analyses that all of the observed time series are normalized with respect to their energies such that $\sum_{n=1}^{N} |x_m[n]|^2 = 1$, for $m = 1, 2, \ldots, M$. Thus, the diagonal elements in the correlation matrices of any dimension are always 1.

4.1. Computational Complexity for Hermitian Matrix Inversion Using Conventional Cholesky Factorization

The Hermitian matrix inversion using the conventional Cholesky factorization has to be based on the predetermined model order . Since the diagonal elements of are 1, it yields According to (26), the corresponding computational complexity in terms of the total complex multiplications is To compute , we can write where , for , are the values to be determined from . Since where is the identity matrix, the computational complexity for calculating is also Once is available, the inverse matrix can be calculated accordingly and the corresponding computational complexity for this multiplication is Thus, from (27), (29), and (30), the total computational complexity for calculating using the conventional Cholesky factorization is

4.2. Computational Complexity for Hermitian Matrix Inversion Using New Efficient Dyadic Recursion

From the previous discussion in Section 3.2, we can calculate the computational complexity for our new scheme thereupon. First, let us focus on the incurred computational complexity in terms of complex multiplications when we apply the recursion in Section 3.2 to solve from the given and , for . According to (22), the computational complexity for computing is Next, the calculation of involves the computational complexity Then, we can compute according to (23); the corresponding computational complexity, similar to (27), is In addition, similar to (29), the computational complexity for calculating is also According to (24), we can calculate the computational complexity for such that Thus, the computational complexity involved in the calculation of is equivalent to Consequently, according to the steps in Section 3.2 and (30), as well as (32)–(37), the total computational complexity for our new scheme in one dyadic recursion to solve can be calculated as

4.3. Computational Complexities for Sequential Rank-Adaptive Hermitian Matrix Inversions

From the studies in Sections 4.1 and 4.2, we have found the asymptotic complexity ratio where . It is noted from (39) that our new dyadic recursion scheme does not possess any advantage over the conventional Cholesky method. The reason is quite obvious. The ratio presented in (39) involves the fixed model order for the conventional Cholesky factorization. If the model order is predetermined as a constant, our new scheme described in Section 3.2 still needs to calculate all the intermediate matrices and , for all while the conventional Cholesky factorization method does not.

However, in practice, the model order should be variable and the sequential rank-adaptive LSE or MLR should take place. The least-square error will decrease when the model order increases, that is, . A stop criterion for the error tolerance (the maximum squared error which is allowed) can be introduced to terminate the model order enlargement. If such a sequential rank-adaptive LSE or MLR is considered, we need to recalculate the corresponding computational complexities for the sequential computations of , . Assume that the stop criterion terminates the sequential computations of at . We can compute the total computational complexity for such a sequential procedure using the conventional Cholesky factorization as On the other hand, if our new dyadic recursion scheme is used for the sequential calculations of up to , the corresponding computational complexity is where is given by (37). Similarly, we may rederive the complexity ratio for the sequential calculations such that

5. Applications of Sequential Rank-Adaptive Hermitian Matrix Inversions for Channel Estimation and Equalization

Based on the sequential rank-adaptive Hermitian matrix inversion approach in Section 4.3, we can design a new communication channel estimation and equalization scheme with model order adaptability. Other similar applications can be easily extended using our efficient dyadic Hermitian matrix inversion algorithm in Section 3.2. Here, we introduce this proposed new channel estimation and equalization method using our dyadic recursion for adjusting the model order.

A basic transmission model for communication systems can be formulated as
\[ r[n] = s[n] * h[n] + v[n], \tag{43} \]
where $*$ denotes the linear convolution; $s[n]$, $r[n]$, $h[n]$, and $v[n]$ represent the transmitted training sequence, the received signal, the channel impulse response, and the background noise, respectively; $N$ is the sample size. Without loss of generality, the windowing of observed input data is assumed to comply with the covariance method [22]. For the channel estimation problem, the multiple time series can be set as
\[ x_m[n] = s[n-m+1], \tag{44} \]
where $m = 1, 2, \ldots, M$ and $M$ is the estimated channel filter length. Thereby, the desired response is the received signal such that
\[ d[n] = r[n]. \tag{45} \]
On the other hand, for the equalization problem, the multiple time series are set as
\[ x_m[n] = r[n-m+1], \tag{46} \]
where $m = 1, 2, \ldots, M$ and $M$ is the estimated equalizer length. The desired response is therefore the transmitted training signal instead such that
\[ d[n] = s[n]. \tag{47} \]
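Under the covariance-method windowing, the channel-estimation data can be assembled as below; this is a sketch with our own names, and the delayed-copy indexing is one common convention that may differ from the exact alignment used in the text:

```python
import numpy as np

def channel_estimation_data(s, r, L):
    """Form the regression data for channel estimation from r = s*h + noise.

    The observed time series are delayed copies of the transmitted training
    sequence s, and the desired response is the received signal r.
    L is the assumed channel filter length. Covariance-method windowing:
    only rows with a full history are kept (no zero padding).
    """
    N = len(s)
    # column m holds s delayed by m samples, over times n = L-1, ..., N-1
    X = np.column_stack([s[L - 1 - m : N - m] for m in range(L)])
    d = r[L - 1 : N]
    return X, d
```

Feeding `X` and `d` into the Wiener-Hopf solution recovers the channel taps; swapping the roles of `s` and `r` yields the equalization data in the same way.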

According to (43), we can acquire the data specified by (44) and (45) for channel estimation and that specified by (46) and (47) for equalization. Thus, setting the model order $M$, we can carry out $\mathbf{R}$, $\mathbf{p}$, $\mathbf{w}_{\mathrm{opt}}$, and the least mean-squared error according to (5)-(8). Note that the least mean-squared error given by (8) is a function of both $N$ and $M$, and hence we can write
\[ J_{\min} = J_{\min}(N, M). \tag{48} \]
For a fixed sample size $N$, we can increase the model order $M$. For a sufficiently large $M$, we may achieve the error floor such that
\[ J_{\min}(N, 2M) \approx J_{\min}(N, M). \tag{49} \]
We can design a simple threshold criterion to determine the minimum model order for satisfying the condition in (49). For the dyadic model order adaptation, we propose a stop criterion
\[ \frac{\big| J_{\min}(N, 2^{k+1}) - J_{\min}(N, 2^{k}) \big|}{J_{\min}(N, 2^{k})} < \varepsilon. \tag{50} \]
Therefore, given a predetermined threshold $\varepsilon$, the minimum model order for the dyadic adaptation is obtained as
\[ M_{\mathrm{opt}} = 2^{K}, \qquad K \triangleq \inf\big\{ k \in \mathbb{Z}^{+} : \text{(50) is satisfied} \big\}, \tag{51} \]
where $\inf$ denotes the infimum operation [29]. Ultimately, we can apply (5)-(8) and (50)-(51) to build a sequential rank-adaptive LSE or MLR procedure which is capable of adapting the model order dyadically.
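The dyadic model-order adaptation can be sketched as follows; the relative-improvement test is our reading of the threshold criterion, and all names are illustrative:

```python
import numpy as np

def dyadic_order_lse(X_full, d, threshold, k_max):
    """Sequential rank-adaptive LSE: try dyadic model orders M = 1, 2, 4, ...
    and stop once the relative improvement of the least squared error
    between successive dyadic orders falls below `threshold`."""
    prev_err = None
    for k in range(k_max + 1):
        M = 2 ** k
        X = X_full[:, :M]                 # leading M observed time series
        w, *_ = np.linalg.lstsq(X, d, rcond=None)
        err = np.linalg.norm(d - X @ w) ** 2
        if prev_err is not None and abs(prev_err - err) <= threshold * prev_err:
            return M, w, err              # error floor reached
        prev_err = err
    return M, w, err                      # hit the allowed maximum order
```

In the full algorithm each doubling would reuse the previous Hermitian inverse via the dyadic recursion instead of re-solving from scratch, which is where the complexity saving comes from.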

6. Numerical Evaluations

In this section, we provide numerical evaluations to illustrate the computational complexities associated with the two Hermitian matrix inversion algorithms (the conventional Cholesky factorization and our new efficient dyadic recursion) and the LSE applications using the adaptive model order for channel equalization. Table 1 lists the comparison of the two methods for a few model orders. It is noted that when a Hermitian matrix inversion is carried out for a fixed model order, the conventional Cholesky factorization approach is more computationally efficient than our new dyadic recursion. However, when the total computational complexity for the sequential calculation of variable model-order Hermitian matrix inversions is compared, our new dyadic recursion is more efficient than the conventional Cholesky factorization for the same terminal model order, since the latter does not utilize any information from the previous calculation of the Hermitian inversion with a smaller model order.

Table 1: Computational complexities of Hermitian matrix inversion algorithms in comparison.
6.1. Comparison of Computational Complexities

Figures 1 and 2 illustrate the total computational complexities in terms of complex multiplications for the Hermitian matrix inversion using the conventional Cholesky factorization and our new dyadic recursion method; they depict the complexities versus the model order for the fixed model order (one-time LSE or MLR) and the variable model order (sequential rank-adaptive LSE or MLR), respectively. According to Figures 1 and 2, it is clear that our dyadic recursion algorithm is more efficient than the conventional Cholesky factorization for the sequential LSE (MLR), whereas the converse holds for the one-time fixed model-order LSE (MLR). The computational complexity margin ratios for the two situations are also depicted in Figure 3. The asymptotic complexity margin ratios derived in (39) and (42) can be observed therein.

Figure 1: Computational complexity comparison (model order is fixed for one-time LSE or MLR) for Hermitian matrix inversion using conventional Cholesky factorization and our proposed new dyadic recursion .
Figure 2: Computational complexity comparison (accumulated complexity for the sequential rank-adaptive LSE or MLR with terminal model order ) for Hermitian matrix inversion using conventional Cholesky factorization and our proposed new dyadic recursion .
Figure 3: Computational complexity margin ratio versus model order (fixed model order: , variable model order: , equality line is for the margin ratio = ; is the fixed or the terminal model order therein).
6.2. Sequential Rank-Adaptive LSE for Channel Equalization

Next, we would like to present some simulations for illustrating the application of our proposed sequential rank-adaptive Hermitian matrix inversions for channel equalization as discussed in Section 5. Assume that the modulation type of the transmitted signal is BPSK and the channel noise is additive white Gaussian noise (AWGN). The channel transfer function is arbitrarily chosen, where the channel gain is normalized as unity, and it complies with the LOS (line-of-sight) channel model such that the leading coefficient is the largest among all in magnitude. The received signals are generated by the computer using (43).

We carry out 200 Monte Carlo trials with randomly generated AWGN (various signal-to-noise ratios from 0 to 30 dB) and the aforementioned transmission model. The least mean-squared errors on average are calculated using (8). Here, we compare three schemes, namely (i) one-time Hermitian matrix inversion using the Cholesky factorization with the fixed model order (specified as “fixed ”), (ii) one-time Hermitian matrix inversion using the Cholesky factorization with the fixed model order (specified as “fixed ”), and (iii) sequential rank-adaptive Hermitian matrix inversion using our proposed dyadic recursion with variable model order (specified as “Variable Model Order”). Two different threshold values are employed, namely . It is noted that the allowed maximum model order for our dyadic recursion scheme is defaulted as , where is the flooring operator.
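A pared-down version of this experiment can be reproduced as below; it uses a fixed equalizer length rather than the dyadic adaptation, illustrative channel taps (the paper's exact transfer function is not reproduced here), and our own names:

```python
import numpy as np

def equalizer_mse(snr_db, L_eq=8, N=1000, trials=20, seed=0):
    """Monte Carlo sketch: BPSK training symbols through a fixed FIR
    channel plus AWGN, followed by a linear LSE equalizer of length L_eq.
    Returns the average least mean-squared error over the trials."""
    rng = np.random.default_rng(seed)
    h = np.array([0.9, 0.3, 0.2, 0.1])
    h = h / np.linalg.norm(h)                  # unit channel gain, LOS-like
    noise_std = 10 ** (-snr_db / 20)
    errs = []
    for _ in range(trials):
        s = rng.choice([-1.0, 1.0], size=N)    # BPSK training sequence
        r = np.convolve(s, h)[:N] + noise_std * rng.standard_normal(N)
        # covariance-method data: columns are delayed received samples
        X = np.column_stack([r[L_eq - 1 - m : N - m] for m in range(L_eq)])
        d = s[L_eq - 1 : N]                    # desired response: training symbols
        w, *_ = np.linalg.lstsq(X, d, rcond=None)
        errs.append(np.mean(np.abs(d - X @ w) ** 2))
    return float(np.mean(errs))
```

As in the reported simulations, the residual error falls as the signal-to-noise ratio improves; wrapping the inner solve with a variable equalizer length reproduces the rank-adaptive variant.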

Figures 4-7 show the simulation results. According to Figures 4 and 5, our proposed new dyadic recursion method achieves the smallest least mean-squared errors most of the time, especially for low threshold values. The trade-off of our scheme is the increased average computational complexity, which is shown in Figures 6 and 7. From Figures 6 and 7, it can be observed that we may save a huge computational burden under a bad channel condition (low signal-to-noise ratio) at the cost of a very slight equalization-quality deterioration. According to Figures 1-7, we can justify that our dyadic recursion algorithm is more efficient than the conventional Cholesky factorization in the Hermitian matrix inversion for the sequential rank-adaptive LSE or MLR. Besides, the sequential rank-adaptive LSE or MLR with variable model orders using our recursion scheme can also seek the trade-off between the estimation performance and the required computational complexity.

Figure 4: Various least mean-squared errors versus signal-to-noise ratio (the threshold for our variable model scheme is ).
Figure 5: Various least mean-squared errors versus signal-to-noise ratio (the threshold for our variable model scheme is ).
Figure 6: The corresponding computational complexities to Figure 4 versus signal-to-noise ratio (the threshold for our variable model scheme is ).
Figure 7: The corresponding computational complexities to Figure 5 versus signal-to-noise ratio (the threshold for our variable model scheme is ).

7. Conclusion

Hermitian matrix inversion is the pivotal computation for least-square estimation and multiple-parameter linear regression. The conventional Cholesky factorization can be applied for one-time LSE and MLR with a predetermined fixed model order. In this paper, we decompose any Hermitian matrix into submatrices with half of the original dimension and accordingly derive a new dyadic recursion algorithm for the Hermitian matrix inversion. Moreover, for the theoretical comparison, we elaborate the computational complexity analyses to derive the asymptotic complexity ratios for the Hermitian matrix inversion using the conventional Cholesky factorization and our new dyadic recursion scheme. We show that our new method achieves a favorable complexity margin ratio over the conventional Cholesky factorization method for the sequential rank-adaptive Hermitian matrix inversions, although not for the one-time Hermitian matrix inversion. We also present the applications of the sequential rank-adaptive LSE with variable model order using our new dyadic recursion procedure for channel estimation and equalization in telecommunications. Our proposed new variable model-order LSE scheme can seek the trade-off between the estimation performance and the computational complexity.

Acknowledgments

This work was supported by the Information Technology Research Award for National Priorities (NSF-ECS 0426644) from the National Science Foundation, the Faculty Research Initiation Award from the Southeastern Center for Electrical Engineering Education, the Research Enhancement Award from the Louisiana-NASA Space Consortium, and the Faculty Research Award from Louisiana State University.

References

  1. Y. Zhou, S. C. Huang, and M. Bergsneider, “Linear ridge regression with spatial constraint for generation of parametric images in dynamic positron emission tomography studies,” IEEE Transactions on Nuclear Science, vol. 48, no. 1, pp. 125–130, 2001.
  2. Y. Zhou, S. C. Huang, and Y. I. Hser, “Generalized ridge regression versus simple ridge regression for generation of kinetic parametric images in PET,” in Proceedings of IEEE Nuclear Science Symposium, vol. 3, pp. 1551–1555, 1999.
  3. B. J. Park, W. Pedrycz, and S. K. Oh, “Identification of fuzzy models with the aid of evolutionary data granulation,” IEE Proceedings: Control Theory and Applications, vol. 148, no. 5, pp. 406–418, 2001.
  4. C. K. Mai, I. V. M. Krishna, and A. V. Reddy, “Polyanalyst application for forest data mining,” in Proceedings of IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05), vol. 2, pp. 756–759, July 2005.
  5. G. N. Fouskitakis, “Pseudolinear estimation of fractionally integrated ARMA (ARFIMA) models with automotive application,” IEEE Transactions on Signal Processing, vol. 47, no. 12, pp. 3365–3380, 1999.
  6. G. Sebastiani, F. Godtliebsen, R. A. Jones, O. Haraldseth, T. B. Müller, and P. A. Rinck, “Analysis of dynamic magnetic resonance images,” IEEE Transactions on Medical Imaging, vol. 15, no. 3, pp. 268–277, 1996.
  7. L. Szczecinski, “Low-complexity search for optimal delay in linear FIR MMSE equalization,” IEEE Signal Processing Letters, vol. 12, no. 8, pp. 549–552, 2005.
  8. T. Walzman and M. Schwartz, “Automatic equalization using the discrete frequency domain,” IEEE Transactions on Information Theory, vol. 19, no. 1, pp. 59–68, 1973.
  9. D. K. Kim and P. G. Park, “Adaptive self-orthogonalizing per-tone decision feedback equalizer for single carrier modulations,” IEEE Signal Processing Letters, vol. 13, no. 1, pp. 21–24, 2006.
  10. F. Pancaldi and G. M. Vitetta, “Frequency-domain equalization for space-time block-coded systems,” IEEE Transactions on Wireless Communications, vol. 4, no. 6, pp. 2907–2916, 2005.
  11. F. Pancaldi and G. M. Vitetta, “Block channel equalization in the frequency domain,” IEEE Transactions on Communications, vol. 53, no. 3, pp. 463–471, 2005.
  12. F. Pancaldi and G. M. Vitetta, “Block equalization techniques in the frequency domain,” in Proceedings of IEEE Wireless Communications and Networking Conference (WCNC '04), vol. 4, pp. 2295–2300, 2004.
  13. J. G. Proakis, Digital Communications, McGraw-Hill, 4th edition, 2001.
  14. J. Salz, “Optimal mean-square decision feedback equalization,” Bell System Technical Journal, vol. 52, no. 8, pp. 1341–1373, 1973.
  15. Y. Kojima, K. Takeda, and F. Adachi, “Polynomial prediction RLS channel estimation for DS-CDMA frequency-domain equalization,” in Proceedings of IEEE 70th Vehicular Technology Conference (VTC '09 Fall), pp. 1–5, September 2009.
  16. F. Chen, S. Kwong, and C. W. Kok, “Blind MMSE equalization of FIR/IIR channels using oversampling and multichannel linear prediction,” ETRI Journal, vol. 31, no. 2, pp. 162–172, 2009.
  17. X. Huang and H. C. Wu, “Robust and efficient intercarrier interference mitigation for OFDM systems in time-varying fading channels,” IEEE Transactions on Vehicular Technology, vol. 56, no. 5, pp. 2517–2528, 2007.
  18. H. C. Wu and Y. Wu, “Distributive pilot arrangement based on modified M-sequences for OFDM intercarrier interference estimation,” IEEE Transactions on Wireless Communications, vol. 6, no. 5, pp. 1605–1609, 2007.
  19. H. C. Wu and Y. Wu, “A new ICI matrices estimation scheme using Hadamard sequence for OFDM systems,” IEEE Transactions on Broadcasting, vol. 51, no. 3, pp. 305–314, 2005.
  20. K. D. Concannon and W. J. Thompson, “Regression lines: more than meets the eye,” Computing in Science and Engineering, vol. 2, no. 4, pp. 78–81, 2000.
  21. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 2nd edition, 1991.
  22. S. Haykin, Adaptive Filter Theory, Prentice-Hall, 4th edition, 2001.
  23. S. M. Kay, Modern Spectral Estimation: Theory and Application, Prentice-Hall, 1987.
  24. R. Yu, X. Lin, and C. C. Ko, “A multi-stage Levinson-Durbin algorithm,” in Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 218–221, November 2002.
  25. L. M. Lee and H. C. Wang, “An extended Levinson-Durbin algorithm for the analysis of noisy autoregressive process,” IEEE Signal Processing Letters, vol. 3, no. 1, pp. 13–15, 1996.
  26. B. Farhang-Boroujeny, Adaptive Filters, Wiley, 2003.
  27. F. G. Gustavson and I. Jonsson, “Minimal-storage high-performance Cholesky factorization via blocking and recursion,” IBM Journal of Research and Development, vol. 44, no. 6, pp. 823–850, 2000.
  28. B. S. Andersen, J. Waśniewski, and F. G. Gustavson, “A recursive formulation of Cholesky factorization of a matrix in packed storage,” ACM Transactions on Mathematical Software, vol. 27, no. 2, pp. 214–244, 2001.
  29. H. L. Royden, Real Analysis, Prentice-Hall, 3rd edition, 1988.