Abstract

Least-square estimation (LSE) and multiple-parameter linear regression (MLR) are important estimation techniques for engineering and science, especially in mobile communications and signal processing applications. The majority of the computational complexity incurred in LSE and MLR arises from a Hermitian matrix inversion. In practice, the Yule-Walker equations do not generally hold, and hence the Levinson-Durbin algorithm cannot be employed for general LSE and MLR problems. Therefore, the most efficient Hermitian matrix inversion method is based on the Cholesky factorization. In this paper, we derive a new dyadic recursion algorithm for sequential rank-adaptive Hermitian matrix inversions. In addition, we provide theoretical computational complexity analyses to compare our new dyadic recursion scheme with the conventional Cholesky factorization. We can thereupon design a variable model-order LSE (MLR) using this proposed dyadic recursion approach. Through our complexity analyses and Monte Carlo simulations, we show that our new dyadic recursion algorithm is more efficient than the conventional Cholesky factorization for the sequential rank-adaptive LSE (MLR), and that the associated variable model-order LSE (MLR) can seek the trade-off between the targeted estimation performance and the required computational complexity. Our proposed new scheme can benefit future portable and mobile signal processing or communications devices.

1. Introduction

Least-square estimation (LSE) or multiple-parameter linear regression (MLR) is a crucial mathematical technique used ubiquitously in science and engineering. A wide variety of applications in signal processing devices, mobile communications, computational physics, and statistics arise from LSE or MLR. It has been adopted as a powerful tool in nuclear science [1, 2], system identification [3], data mining [4], linear system modeling [5], medical imaging analysis [6], and so forth. In particular, for signal processing and communications, the least-square approach dominates a large majority of applications, such as equalization [7–16] as well as interference estimation [17–19]. An overview of linear regression can be found in [20].

However, all of the aforementioned applications of LSE or MLR require a predetermined model order (the number of parameters to be estimated), and therefore the associated computational complexities are fixed and not flexible. In practice, the predetermined model order should be large enough to assure that the square error can be minimized with a sufficient degree of freedom. Therefore, model-order selection is essential to all LSE and MLR problems. Intuitively, we may adjust the model order such that an appropriate model order can be achieved to satisfy a preset square-error requirement. In the meantime, the computational complexity can be variable and depends on the error tolerance when such an adjustable model order is used. Nevertheless, the computational complexity would grow tremendously during the model-order adjustment. How to make the most use of the estimates calculated for the previous model order when estimating the new parameters has therefore always drawn researchers' interest.

The rank-one update scheme, or iterative Levinson-Durbin algorithm, is a well-known approach for computing the linear prediction coefficients with adjustable model orders [21]. The Levinson-Durbin algorithm is based on the Yule-Walker equations [22] and the strict assumption of a Hermitian Toeplitz correlation matrix [21], but such an assumption is valid only when the multiple time series (the observed data) possess the exact temporal relationship (one is the delayed version of another) and the sample size approaches infinity under ergodicity [23]. Besides, the Levinson-Durbin algorithm can only increase the model order by one at each iteration. A new class of extended Levinson-Durbin algorithms has been developed to achieve computational efficiency: in speech or audio coders, an adaptive multi-stage Levinson-Durbin algorithm was investigated in [24], and it was shown to be more robust than the conventional Levinson-Durbin algorithm when the input signal has large spectral dynamics; low-complexity approaches were also provided to determine the optimal delay and calculate the linear prediction filter coefficients in [7, 25]; iterative Levinson-Durbin algorithms were investigated for the frequency-domain decision feedback and MMSE equalizers in [8–12]. However, the Levinson-Durbin family of algorithms is still restricted by its limited model-order scalability due to the common rank-one update characteristics and the Hermitian Toeplitz correlation matrix assumption.

For a general linear regression problem, the temporal relationship among multiple observed time series does not necessarily hold. Hence, the Hermitian Toeplitz correlation matrix exists only under some circumstances. To be more precise, the correlation matrix for a general linear regression problem should be Hermitian only [22, 26]. If the correlation matrix is Hermitian, the most efficient technique for the least-square solutions involves the Cholesky factorization [21]. Recursion-based factorization methods were proposed to reduce the memory usage and computational complexity [27, 28]. However, they are not related to the direct Hermitian matrix inversion, which is the backbone of linear estimation and multiple-parameter linear regression. The existing Cholesky-factorization-based Hermitian matrix inversion procedure has to recalculate all the estimates when the model order is enlarged. In this paper, we present a novel rank-adaptive Cholesky factorization scheme, which can be employed to establish a low-complexity variable model-order LSE or MLR algorithm. In our newly derived LSE or MLR algorithm, the model order can be adjusted dyadically (the model order is $2^{m}$, $m \in \mathbb{Z}^{+}$), while that in the Levinson-Durbin algorithm can only be incremented one by one. To the best of our knowledge, our new algorithm is the only estimation or regression method which can adjust the model order dyadically. The remarkable dyadic-scalability property of our proposed LSE and MLR algorithm can well serve future signal processing or communications applications which demand adaptability in both memory storage and computational complexity.

The rest of this paper is organized as follows. In Section 2, the problem formulation for the LSE and MLR is presented, where the Wiener-Hopf equations result in a Hermitian correlation matrix inversion. In Section 3.1, we derive the dyadic recursion formula for such a Hermitian matrix inversion. In Section 3.2, we design a novel dyadic rank-adaptive Hermitian matrix inversion algorithm based on the results in Section 3.1. The computational complexity analyses for the fixed model-order and rank-adaptive Hermitian matrix inversions using the conventional Cholesky factorization and our new dyadic recursion are presented in Section 4. The frequent communications and signal processing applications using the sequential rank-adaptive LSE for channel estimation and equalization are introduced in Section 5. The numerical evaluations comparing the computational complexities and the least mean-square errors of our new dyadic recursion scheme and the conventional method are delineated in Section 6. Concluding remarks are drawn in Section 7.

Notations: $\mathbb{Z}^{+}$, $\mathbb{C}$ denote the sets of positive integers and complex numbers, respectively. $\underline{v}$ denotes a column vector and $\mathbf{A}$ denotes a matrix; $\mathbf{A}^{T}$, $\mathbf{A}^{H}$ denote the transpose and the Hermitian adjoint of $\mathbf{A}$, respectively. The symbols $i$, $k$, and $m$ are reserved as the integer indices. The symbol $\mathbf{A}_{N}$ represents a square matrix with the dimensionality $N \times N$, and such a matrix may appear in a larger matrix which contains $\mathbf{A}_{N}$ as its $N \times N$ block. $\triangleq$ represents the mathematical definition and $(\cdot)^{*}$ denotes the complex-conjugate operator. The computational complexity function $\zeta(\mathbf{X}_{N})$ specifies the number of complex multiplications incurred when $\mathbf{X}_{N}$ of dimension $N \times N$ is calculated.

2. Least-Square Estimation and Multiple-Parameter Linear Regression

The LSE and MLR can be formulated as follows: given the degree of freedom or model order $M$ and the observed time series $x_{i}[k]$, $i = 1, 2, \ldots, M$, and the target time series (desired response) $d[k]$, for $k = 1, 2, \ldots, K$, where $K$ is the sample size for all time series, we want to search for the best-fitting coefficients $w_{i}$, for $i = 1, 2, \ldots, M$, such that the sum of squared errors $\sum_{k=1}^{K} \bigl| d[k] - \sum_{i=1}^{M} w_{i}\, x_{i}[k] \bigr|^{2}$ is minimized. In general, $x_{i}[k]$, $w_{i}$, and $d[k]$ are complex-valued, for $i = 1, 2, \ldots, M$ and $k = 1, 2, \ldots, K$. Thereby, in matrix form, the LSE or MLR problem can be formulated as
$$\underline{w}_{\mathrm{opt}} = \arg\min_{\underline{w}} \left\| \underline{d} - \mathbf{X}\,\underline{w} \right\|^{2}, \tag{1}$$
where
$$\mathbf{X} \triangleq \left[\underline{x}_{1}, \underline{x}_{2}, \ldots, \underline{x}_{M}\right], \quad \underline{x}_{i} \triangleq \left[x_{i}[1], x_{i}[2], \ldots, x_{i}[K]\right]^{T}, \tag{2}$$
$$\underline{d} \triangleq \left[d[1], d[2], \ldots, d[K]\right]^{T}, \quad \underline{w} \triangleq \left[w_{1}, w_{2}, \ldots, w_{M}\right]^{T}, \tag{3}$$
and $\underline{w}_{\mathrm{opt}}$ is the collection of the best-fitting (optimal) coefficients. Equation (1) can be rewritten as
$$\underline{w}_{\mathrm{opt}} = \arg\min_{\underline{w}} \left( \underline{d}^{H}\underline{d} - \underline{p}^{H}\underline{w} - \underline{w}^{H}\underline{p} + \underline{w}^{H}\mathbf{R}\,\underline{w} \right), \tag{4}$$
where
$$\mathbf{R} \triangleq \mathbf{X}^{H}\,\mathbf{X}, \tag{5}$$
$$\underline{p} \triangleq \mathbf{X}^{H}\,\underline{d}. \tag{6}$$
According to [22], the least-square solution to the LSE or MLR problem stated in (4) is given by the Wiener-Hopf equation as follows:
$$\underline{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\,\underline{p}, \tag{7}$$
and the least mean-squared error is given by
$$\epsilon \triangleq \frac{1}{K}\left( \underline{d}^{H}\underline{d} - \underline{p}^{H}\,\mathbf{R}^{-1}\,\underline{p} \right). \tag{8}$$
It is noted from (7) that the major portion of the computational complexity for solving $\underline{w}_{\mathrm{opt}}$ involves the calculation of $\mathbf{R}^{-1}$. Therefore, the computational complexity can be further reduced if the special mathematical properties of the correlation matrix $\mathbf{R}$ are exploited.
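For concreteness, the Wiener-Hopf solution (7) and the error (8) can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the original derivation; the function name and the $1/K$ normalization of the error follow our reconstruction of (8).

```python
import numpy as np

def lse_wiener_hopf(X, d):
    """Solve the Wiener-Hopf (normal) equations w_opt = R^{-1} p, where
    R = X^H X per (5) and p = X^H d per (6).

    X : (K, M) complex matrix whose columns are the observed time series.
    d : (K,) desired response.
    """
    R = X.conj().T @ X            # M x M Hermitian correlation matrix (5)
    p = X.conj().T @ d            # cross-correlation vector (6)
    w = np.linalg.solve(R, p)     # least-square coefficients (7)
    K = len(d)
    err = (d.conj() @ d - p.conj() @ w).real / K   # least mean-squared error (8)
    return w, err
```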

In particular, when the LSE or MLR is adopted for filter or equalizer design, the temporal relationship $x_{i}[k] = x_{1}[k - i + 1]$, for $i = 2, 3, \ldots, M$, for all $k$, is assumed. Thus, as the sample size is large ($K \to \infty$), the correlation matrix $\mathbf{R}$ is both Hermitian and Toeplitz under the stationary and ergodic assumption [23]. However, very often, the multiple time series $x_{i}[k]$, $i = 1, 2, \ldots, M$, specify the different observations generated from independent sources, or the sample size is not large. Consequently,
$$[\mathbf{R}]_{i,j} = \underline{x}_{i}^{H}\,\underline{x}_{j}, \quad 1 \leq i, j \leq M. \tag{9}$$
According to (9), the correlation matrix is not necessarily Toeplitz. Based on (5), the only ever-sustaining property is that $\mathbf{R}$ is always Hermitian. Instead of employing the QR factorization to compute $\mathbf{R}^{-1}$ with high complexity [21], we can benefit from this Hermitian property to carry out the inversion. Since $\mathbf{R}$ is Hermitian, we can use the Cholesky factorization such that
$$\mathbf{R} = \mathbf{L}\,\mathbf{L}^{H}, \tag{11}$$
where $\mathbf{L}$ and $\mathbf{L}^{H}$ are the lower- and upper-triangular matrices, respectively. If $\mathbf{R}^{-1}$ exists, according to (11), we can calculate $\mathbf{R}^{-1}$ as
$$\mathbf{R}^{-1} = \mathbf{L}^{-H}\,\mathbf{L}^{-1}, \tag{12}$$
where $\mathbf{L}^{-H} \triangleq \left(\mathbf{L}^{-1}\right)^{H}$, which is a more efficient procedure than the inversion based on the QR factorization [21].
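A minimal sketch of the Cholesky-based inversion (11)-(12), using SciPy's triangular solver; the helper name is illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def hermitian_inverse_cholesky(R):
    """Invert a Hermitian positive-definite R via R^{-1} = L^{-H} L^{-1}."""
    L = cholesky(R, lower=True)                                 # R = L L^H, per (11)
    Linv = solve_triangular(L, np.eye(L.shape[0]), lower=True)  # L^{-1}
    return Linv.conj().T @ Linv                                 # L^{-H} L^{-1}, per (12)
```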

3. Novel Dyadically Recursive Hermitian Matrix Inversion

3.1. Recursion of Hermitian Matrix Inversion Using Cholesky Outer Product

From the previous studies in Section 2, we cannot adopt the Levinson-Durbin algorithm for LSE or MLR when the correlation matrix is not Toeplitz. To the best of our knowledge, no existing literature has ever addressed any form of recursive formula for the procedure of Hermitian matrix inversion. In this section, we would like to introduce such a new procedure, which can dyadically extend the inverse of any Hermitian matrix from those of its submatrices of half the dimension. This novel dyadic recursion for Hermitian matrix inversion can reduce the computational complexity, as we will analyze later on.

Any lower-triangular matrix $\mathbf{L}_{N}$ of dimension $N \times N$ can be decomposed as
$$\mathbf{L}_{N} = \begin{bmatrix} \mathbf{L}_{N/2} & \mathbf{0}_{N/2} \\ \mathbf{B}_{N/2} & \tilde{\mathbf{L}}_{N/2} \end{bmatrix}, \tag{13}$$
where $\mathbf{0}_{N/2}$ is the $(N/2) \times (N/2)$ matrix containing all zeros, $\mathbf{L}_{N/2}$, $\tilde{\mathbf{L}}_{N/2}$ are the two $(N/2) \times (N/2)$ submatrices both of which are lower-triangular, and $\mathbf{B}_{N/2}$ is the $(N/2) \times (N/2)$ submatrix. According to (11) and (13), we have
$$\mathbf{R}_{N} = \mathbf{L}_{N}\,\mathbf{L}_{N}^{H} = \begin{bmatrix} \mathbf{R}_{N/2} & \mathbf{C}_{N/2} \\ \mathbf{C}_{N/2}^{H} & \hat{\mathbf{R}}_{N/2} \end{bmatrix}, \tag{14}$$
where
$$\mathbf{R}_{N/2} = \mathbf{L}_{N/2}\,\mathbf{L}_{N/2}^{H}, \quad \mathbf{C}_{N/2} = \mathbf{L}_{N/2}\,\mathbf{B}_{N/2}^{H}, \quad \hat{\mathbf{R}}_{N/2} = \mathbf{B}_{N/2}\,\mathbf{B}_{N/2}^{H} + \tilde{\mathbf{L}}_{N/2}\,\tilde{\mathbf{L}}_{N/2}^{H}. \tag{15}$$
According to (11)–(14), we obtain
$$\mathbf{L}_{N}^{-1} = \begin{bmatrix} \mathbf{L}_{N/2}^{-1} & \mathbf{0}_{N/2} \\ \mathbf{D}_{N/2} & \tilde{\mathbf{L}}_{N/2}^{-1} \end{bmatrix}, \tag{16}$$
where
$$\mathbf{D}_{N/2} \triangleq -\tilde{\mathbf{L}}_{N/2}^{-1}\,\mathbf{B}_{N/2}\,\mathbf{L}_{N/2}^{-1}. \tag{17}$$
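The block relations (13)–(17) translate directly into code. The following sketch assembles $\mathbf{L}_{N}^{-1}$ from the half-dimension quantities; the function and variable names are ours, chosen to mirror the symbols above.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def block_cholesky_inverse(L_half_inv, C, R_hat):
    """Given L_{N/2}^{-1} for the top-left block, the off-diagonal block
    C = C_{N/2}, and the bottom-right block R_hat, build L_N^{-1} per (16).
    """
    n = L_half_inv.shape[0]
    B = C.conj().T @ L_half_inv.conj().T                 # B = C^H L^{-H}, from (15)
    Ltilde = cholesky(R_hat - B @ B.conj().T, lower=True)  # Schur complement, from (15)
    Ltilde_inv = solve_triangular(Ltilde, np.eye(n), lower=True)
    D = -Ltilde_inv @ B @ L_half_inv                     # per (17)
    L_inv = np.zeros((2 * n, 2 * n), dtype=complex)      # assemble (16)
    L_inv[:n, :n] = L_half_inv
    L_inv[n:, :n] = D
    L_inv[n:, n:] = Ltilde_inv
    return L_inv
```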

3.2. New Efficient Dyadic Recursion for Hermitian Matrix Inversion

In this paper, we assume that ill-posedness can be neglected during the matrix inversion and that no additional pivoting technique is required. Therefore, the computational complexity for pivoting is not considered here. From the results in the previous subsection, we can design the following efficient dyadic recursion algorithm for any Hermitian matrix inversion.

Step 1 (Initialization). Set $m = 1$ and $N = 2^{m} = 2$. With the observed time series normalized to unit energy, $\mathbf{R}_{1} = [\mathbf{R}]_{1,1} = 1$. Hence, $\mathbf{L}_{1} = \mathbf{L}_{1}^{-1} = 1$.

Step 2. Construct the correlation matrix $\mathbf{R}_{N}$ of dimension $N \times N$ such that
$$\mathbf{R}_{N} = \begin{bmatrix} \mathbf{R}_{N/2} & \mathbf{C}_{N/2} \\ \mathbf{C}_{N/2}^{H} & \hat{\mathbf{R}}_{N/2} \end{bmatrix}. \tag{21}$$

Step 3 (Dyadic Expansion). Compute
$$\mathbf{B}_{N/2} = \mathbf{C}_{N/2}^{H}\,\mathbf{L}_{N/2}^{-H}. \tag{22}$$
Use the Cholesky factorization to calculate $\tilde{\mathbf{L}}_{N/2}$ from
$$\tilde{\mathbf{L}}_{N/2}\,\tilde{\mathbf{L}}_{N/2}^{H} = \hat{\mathbf{R}}_{N/2} - \mathbf{B}_{N/2}\,\mathbf{B}_{N/2}^{H}. \tag{23}$$
Then, calculate $\tilde{\mathbf{L}}_{N/2}^{-1}$ accordingly. Compute
$$\mathbf{D}_{N/2} = -\tilde{\mathbf{L}}_{N/2}^{-1}\,\mathbf{B}_{N/2}\,\mathbf{L}_{N/2}^{-1}. \tag{24}$$

Step 4 (Dyadically Recursive Hermitian Matrix Inversion). Construct
$$\mathbf{L}_{N}^{-1} = \begin{bmatrix} \mathbf{L}_{N/2}^{-1} & \mathbf{0}_{N/2} \\ \mathbf{D}_{N/2} & \tilde{\mathbf{L}}_{N/2}^{-1} \end{bmatrix}. \tag{25}$$

Step 5. Repeat Steps 2–4 for $m = 2, 3, \ldots$ (with $N = 2^{m}$), and stop at $m = m^{\ast}$, which is determined by the preset square-error tolerance. Use the ultimate $\mathbf{L}_{N}^{-1}$ to calculate $\mathbf{R}_{N}^{-1} = \mathbf{L}_{N}^{-H}\,\mathbf{L}_{N}^{-1}$.
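Putting Steps 1–5 together, a runnable sketch of the sequential rank-adaptive inversion is given below. It assumes a well-conditioned correlation matrix and at least two observed series; the stopping rule (break once the error improvement falls below `tol`) is a simplified stand-in for the preset square-error tolerance of Step 5, and the names are ours.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def dyadic_hermitian_inverse(X, d, tol=1e-3, max_order=None):
    """Sequential rank-adaptive inversion of R_N = X_N^H X_N for
    N = 2, 4, 8, ..., reusing L_{N/2}^{-1} at every doubling (Steps 1-5).
    Returns (R_N^{-1}, w_opt, N) at the terminal model order.
    """
    K, M_max = X.shape                    # assumes M_max >= 2
    max_order = max_order or 2 ** int(np.floor(np.log2(M_max)))
    # Step 1 (initialization): 1 x 1 base case.
    R1 = (X[:, :1].conj().T @ X[:, :1]).real
    L_inv = np.array([[1.0 / np.sqrt(R1[0, 0])]], dtype=complex)
    prev_err, N = np.inf, 1
    while 2 * N <= max_order:
        N *= 2
        Xn = X[:, :N]
        # Step 2: blocks of R_N per (21).
        C = Xn[:, :N // 2].conj().T @ Xn[:, N // 2:]      # C_{N/2}
        R_hat = Xn[:, N // 2:].conj().T @ Xn[:, N // 2:]  # bottom-right block
        # Step 3 (dyadic expansion): (22), (23), (24).
        B = C.conj().T @ L_inv.conj().T
        Lt = cholesky(R_hat - B @ B.conj().T, lower=True)
        Lt_inv = solve_triangular(Lt, np.eye(N // 2), lower=True)
        D = -Lt_inv @ B @ L_inv
        # Step 4: assemble L_N^{-1} per (25).
        L_new = np.zeros((N, N), dtype=complex)
        L_new[:N // 2, :N // 2] = L_inv
        L_new[N // 2:, :N // 2] = D
        L_new[N // 2:, N // 2:] = Lt_inv
        L_inv = L_new
        # Step 5: R_N^{-1} and a simple square-error stop criterion.
        R_inv = L_inv.conj().T @ L_inv
        w = R_inv @ (Xn.conj().T @ d)
        err = np.linalg.norm(d - Xn @ w) ** 2 / K
        if prev_err - err < tol:          # error floor reached
            break
        prev_err = err
    return R_inv, w, N
```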

4. Computational Complexity Analyses for Hermitian Matrix Inversion

In this section, we will study the computational complexities of the two algorithms for the Hermitian matrix inversion, namely, (i) the conventional Cholesky factorization and (ii) our new dyadic recursion presented in Section 3.2. Since the correlation matrix assumption for the Levinson-Durbin method is different and it also has to be based on the Yule-Walker equations as previously discussed in Section 2, we cannot compare the Levinson-Durbin method with the two underlying schemes here on a fair ground. Without loss of generality, we assume that all of the observed time series are normalized with respect to their energies such that $\sum_{k=1}^{K} \left| x_{i}[k] \right|^{2} = 1$, for $i = 1, 2, \ldots, M$, in our subsequent analyses. Thus, the diagonal elements in the correlation matrices of any dimension are always 1.
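In code, this unit-energy normalization is a one-liner; a minimal sketch (the helper name is ours):

```python
import numpy as np

def normalize_energy(X):
    """Scale each observed time series (column) to unit energy so that the
    correlation matrix R = X^H X has an all-ones diagonal, as assumed here."""
    return X / np.linalg.norm(X, axis=0, keepdims=True)
```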

4.1. Computational Complexity for Hermitian Matrix Inversion Using Conventional Cholesky Factorization

The Hermitian matrix inversion using the conventional Cholesky factorization has to be based on the predetermined model order $M$. Since the diagonal elements of $\mathbf{R}_{M}$ are 1, it yields
$$[\mathbf{L}_{M}]_{j,j} = \sqrt{1 - \sum_{k=1}^{j-1}\left|[\mathbf{L}_{M}]_{j,k}\right|^{2}}, \qquad [\mathbf{L}_{M}]_{i,j} = \frac{1}{[\mathbf{L}_{M}]_{j,j}}\left([\mathbf{R}_{M}]_{i,j} - \sum_{k=1}^{j-1}[\mathbf{L}_{M}]_{i,k}\,[\mathbf{L}_{M}]_{j,k}^{*}\right), \quad i > j. \tag{26}$$
According to (26), the corresponding computational complexity in terms of the total complex multiplications is
$$\zeta\left(\mathbf{L}_{M}\right) \approx \frac{M^{3}}{6}. \tag{27}$$
To compute $\mathbf{L}_{M}^{-1}$, we can write
$$\mathbf{L}_{M}\,\mathbf{L}_{M}^{-1} = \mathbf{I}_{M}, \tag{28}$$
where $[\mathbf{L}_{M}^{-1}]_{i,j}$, for $i \geq j$, are the values to be determined from (28) by forward substitution. Since $\mathbf{I}_{M}$ is the $M \times M$ identity matrix, the computational complexity for calculating $\mathbf{L}_{M}^{-1}$ is also
$$\zeta\left(\mathbf{L}_{M}^{-1}\right) \approx \frac{M^{3}}{6}. \tag{29}$$
Once $\mathbf{L}_{M}^{-1}$ is available, the inverse matrix $\mathbf{R}_{M}^{-1} = \mathbf{L}_{M}^{-H}\,\mathbf{L}_{M}^{-1}$ can be calculated accordingly and the corresponding computational complexity for this multiplication is
$$\zeta\left(\mathbf{L}_{M}^{-H}\,\mathbf{L}_{M}^{-1}\right) \approx \frac{M^{3}}{6}. \tag{30}$$
Thus, from (27), (29), and (30), the total computational complexity for calculating $\mathbf{R}_{M}^{-1}$ using the conventional Cholesky factorization is
$$\zeta_{\mathrm{Chol}}\left(\mathbf{R}_{M}^{-1}\right) = \zeta\left(\mathbf{L}_{M}\right) + \zeta\left(\mathbf{L}_{M}^{-1}\right) + \zeta\left(\mathbf{L}_{M}^{-H}\,\mathbf{L}_{M}^{-1}\right) \approx \frac{M^{3}}{2}. \tag{31}$$

4.2. Computational Complexity for Hermitian Matrix Inversion Using New Efficient Dyadic Recursion

From the previous discussion in Section 3.2, we can calculate the computational complexity for our new scheme thereupon. First, let us focus on the incurred computational complexity in terms of complex multiplications when we apply the recursion in Section 3.2 to solve $\mathbf{R}_{N}^{-1}$ from the given $\mathbf{L}_{N/2}^{-1}$ and $\mathbf{R}_{N/2}^{-1}$, for $N = 2^{m}$, $m \in \mathbb{Z}^{+}$. According to (22), the computational complexity for computing $\mathbf{B}_{N/2}$ is
$$\zeta\left(\mathbf{B}_{N/2}\right) \approx \frac{1}{2}\left(\frac{N}{2}\right)^{3}. \tag{32}$$
Next, the calculation of $\mathbf{B}_{N/2}\,\mathbf{B}_{N/2}^{H}$ involves the computational complexity
$$\zeta\left(\mathbf{B}_{N/2}\,\mathbf{B}_{N/2}^{H}\right) \approx \frac{1}{2}\left(\frac{N}{2}\right)^{3}. \tag{33}$$
Then, we can compute $\tilde{\mathbf{L}}_{N/2}$ according to (23); the corresponding computational complexity, similar to (27), is
$$\zeta\left(\tilde{\mathbf{L}}_{N/2}\right) \approx \frac{1}{6}\left(\frac{N}{2}\right)^{3}. \tag{34}$$
In addition, similar to (29), the computational complexity for calculating $\tilde{\mathbf{L}}_{N/2}^{-1}$ is also
$$\zeta\left(\tilde{\mathbf{L}}_{N/2}^{-1}\right) \approx \frac{1}{6}\left(\frac{N}{2}\right)^{3}. \tag{35}$$
According to (24), we can calculate the computational complexity for $\mathbf{D}_{N/2}$ such that
$$\zeta\left(\mathbf{D}_{N/2}\right) \approx \left(\frac{N}{2}\right)^{3}. \tag{36}$$
Thus, the computational complexity involved in the calculation of $\mathbf{L}_{N}^{-1}$ is equivalent to
$$\zeta\left(\mathbf{L}_{N}^{-1}\right) = \zeta\left(\mathbf{B}_{N/2}\right) + \zeta\left(\mathbf{B}_{N/2}\,\mathbf{B}_{N/2}^{H}\right) + \zeta\left(\tilde{\mathbf{L}}_{N/2}\right) + \zeta\left(\tilde{\mathbf{L}}_{N/2}^{-1}\right) + \zeta\left(\mathbf{D}_{N/2}\right). \tag{37}$$
Consequently, according to the steps in Section 3.2 and (30), as well as (32)–(37), the total computational complexity for our new scheme in one dyadic recursion to solve $\mathbf{R}_{N}^{-1}$ can be calculated as
$$\zeta_{\mathrm{rec}}\left(\mathbf{R}_{N}^{-1}\right) = \zeta\left(\mathbf{L}_{N}^{-1}\right) + \zeta\left(\mathbf{L}_{N}^{-H}\,\mathbf{L}_{N}^{-1}\right). \tag{38}$$

4.3. Computational Complexities for Sequential Rank-Adaptive Hermitian Matrix Inversions

From the studies in Sections 4.1 and 4.2, we have found that the asymptotic complexity ratio for a one-time (fixed model-order) inversion satisfies
$$\gamma_{\mathrm{fix}} \triangleq \frac{\left[\sum_{m=1}^{m^{\ast}} \zeta\left(\mathbf{L}_{2^{m}}^{-1}\right)\right] + \zeta\left(\mathbf{L}_{N}^{-H}\,\mathbf{L}_{N}^{-1}\right)}{\zeta_{\mathrm{Chol}}\left(\mathbf{R}_{N}^{-1}\right)} \geq 1, \tag{39}$$
where $N = 2^{m^{\ast}}$, $m^{\ast} \in \mathbb{Z}^{+}$. It is noted from (39) that our new dyadic recursion scheme does not possess any advantage over the conventional Cholesky method. The reason is quite obvious. The ratio presented in (39) involves the fixed model order for the conventional Cholesky factorization. If the model order is predetermined as a constant, our new scheme described in Section 3.2 still needs to calculate all the intermediate matrices $\mathbf{B}_{2^{m-1}}$ and $\mathbf{D}_{2^{m-1}}$, for all $m = 1, 2, \ldots, m^{\ast}$, while the conventional Cholesky factorization method does not.

However, in practice, the model order should be variable and the sequential rank-adaptive LSE or MLR should take place. The least-square error will decrease when the model order increases; that is, the least-square error is a nonincreasing function of the model order $N$. A stop criterion for the error tolerance (the maximum squared error which is allowed) can be introduced to terminate the model-order enlargement. If such a sequential rank-adaptive LSE or MLR is considered, we need to recalculate the corresponding computational complexities for the sequential computations of $\mathbf{R}_{N}^{-1}$, $N = 2, 4, 8, \ldots$. Assume that the stop criterion terminates the sequential computations of $\mathbf{R}_{N}^{-1}$ at $N = 2^{m^{\ast}}$. We can compute the total computational complexity for such a sequential procedure using the conventional Cholesky factorization as
$$\zeta_{\mathrm{seq,Chol}} = \sum_{m=1}^{m^{\ast}} \zeta_{\mathrm{Chol}}\left(\mathbf{R}_{2^{m}}^{-1}\right). \tag{40}$$
On the other hand, if our new dyadic recursion scheme is used for the sequential calculations of $\mathbf{R}_{N}^{-1}$ up to $N = 2^{m^{\ast}}$, the corresponding computational complexity is
$$\zeta_{\mathrm{seq,rec}} = \sum_{m=1}^{m^{\ast}} \zeta_{\mathrm{rec}}\left(\mathbf{R}_{2^{m}}^{-1}\right), \tag{41}$$
where $\zeta\left(\mathbf{L}_{2^{m}}^{-1}\right)$ in each term of (41) is given by (37). Similarly, we may rederive the complexity ratio for the sequential calculations such that
$$\gamma_{\mathrm{seq}} \triangleq \frac{\zeta_{\mathrm{seq,rec}}}{\zeta_{\mathrm{seq,Chol}}} < 1. \tag{42}$$
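The sequential costs (40) and (41) can also be checked empirically. The benchmark below is our own illustration rather than the paper's experiment: it times the two sequential strategies on a random unit-diagonal Hermitian matrix, with wall-clock time serving only as a rough proxy for the complex-multiplication counts.

```python
import time
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
m_star = 9
M = 2 ** m_star
A = rng.standard_normal((4 * M, M)) + 1j * rng.standard_normal((4 * M, M))
A /= np.linalg.norm(A, axis=0)             # unit-energy columns
R = A.conj().T @ A                         # Hermitian, all-ones diagonal

def tri_inv(L):
    return solve_triangular(L, np.eye(len(L)), lower=True)

# (a) conventional: full Cholesky inversion from scratch at every order, per (40)
t0 = time.perf_counter()
for m in range(1, m_star + 1):
    N = 2 ** m
    Li = tri_inv(cholesky(R[:N, :N], lower=True))
    R_inv = Li.conj().T @ Li
t_conv = time.perf_counter() - t0

# (b) dyadic recursion: grow L_N^{-1} from L_{N/2}^{-1}, per (41)
t0 = time.perf_counter()
L_inv = 1.0 / np.sqrt(R[:1, :1]).astype(complex)
for m in range(1, m_star + 1):
    N = 2 ** m
    n = N // 2
    B = R[:n, n:N].conj().T @ L_inv.conj().T            # (22)
    Lt_inv = tri_inv(cholesky(R[n:N, n:N] - B @ B.conj().T, lower=True))
    L_new = np.zeros((N, N), dtype=complex)             # assemble (25)
    L_new[:n, :n], L_new[n:, :n], L_new[n:, n:] = L_inv, -Lt_inv @ B @ L_inv, Lt_inv
    L_inv = L_new
    R_inv = L_inv.conj().T @ L_inv                      # R_N^{-1} at this order
t_dyadic = time.perf_counter() - t0

print(f"sequential conventional: {t_conv:.3f}s, dyadic recursion: {t_dyadic:.3f}s")
```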

5. Applications of Sequential Rank-Adaptive Hermitian Matrix Inversions for Channel Estimation and Equalization

Based on the sequential rank-adaptive Hermitian matrix inversion approach in Section 4.3, we can design a new communication channel estimation and equalization scheme with model order adaptability. Other similar applications can be easily extended using our efficient dyadic Hermitian matrix inversion algorithm in Section 3.2. Here, we introduce this proposed new channel estimation and equalization method using our dyadic recursion for adjusting the model order.

A basic transmission model for communication systems can be formulated as
$$r[k] = s[k] \ast h[k] + v[k], \tag{43}$$
where $\ast$ denotes the linear convolution; $s[k]$, $r[k]$, $h[k]$, and $v[k]$ represent the transmitted training sequence, the received signal, the channel impulse response, and the background noise, respectively; $k = 1, 2, \ldots, K$, where $K$ is the sample size. Without loss of generality, the windowing of observed input data is assumed to comply with the covariance method [22]. For the channel estimation problem, the multiple time series can be set as
$$x_{i}[k] = s[k - i + 1], \tag{44}$$
where $i = 1, 2, \ldots, M$ and $M$ is the estimated channel filter length. Thereby, the desired response is the received signal such that
$$d[k] = r[k]. \tag{45}$$
On the other hand, for the equalization problem, the multiple time series are set as
$$x_{i}[k] = r[k - i + 1], \tag{46}$$
where $i = 1, 2, \ldots, M$ and $M$ is the estimated equalizer length. The desired response is therefore the transmitted training signal instead such that
$$d[k] = s[k]. \tag{47}$$
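The delayed-copy data matrices in (44)–(47) can be built as follows; the zero-padding at the start of the record is our assumption about the windowing convention, and the helper name is illustrative.

```python
import numpy as np

def delay_matrix(x, M):
    """Build X with columns x_i[k] = x[k - i + 1], zero-padded at the start,
    in the spirit of the covariance-method windowing of [22]."""
    K = len(x)
    X = np.zeros((K, M), dtype=complex)
    for i in range(M):
        X[i:, i] = x[:K - i]
    return X

# Channel estimation (44)-(45): X = delay_matrix(s, M), d = r.
# Equalization (46)-(47):       X = delay_matrix(r, M), d = s.
```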

According to (43), we can acquire the data specified by (44) and (45) for channel estimation and that specified by (46) and (47) for equalization. Thus, setting the model order $M = N = 2^{m}$, we can carry out $\mathbf{R}_{N}$, $\underline{p}$, $\underline{w}_{\mathrm{opt}}$, and $\epsilon$ according to (5)–(8). Note that the least mean-squared error given by (8) is a function of both $K$ and $N$, and hence we can write
$$\epsilon = \epsilon(K, N). \tag{48}$$
For a fixed sample size $K$, we can increase the model order $N$. For a sufficiently large $N$, we may achieve the error floor such that
$$\epsilon(K, 2N) \approx \epsilon(K, N). \tag{49}$$
We can design a simple threshold criterion to determine the minimum model order for satisfying the condition in (49). For the dyadic model order adaptation, we propose a stop criterion
$$\left| \epsilon(K, 2N) - \epsilon(K, N) \right| < \tau. \tag{50}$$
Therefore, given a predetermined threshold $\tau$, the minimum model order for the dyadic adaptation is obtained as
$$N^{\ast} = \inf\left\{ N : N = 2^{m},\ m \in \mathbb{Z}^{+},\ \left| \epsilon(K, 2N) - \epsilon(K, N) \right| < \tau \right\}, \tag{51}$$
where $\inf$ denotes the infimum operation [29]. Ultimately, we can apply (5)–(8) and (50)-(51) to build a sequential rank-adaptive LSE or MLR procedure which is capable of adapting the model order dyadically.
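A sketch of the stop criterion (50)-(51) follows. The absolute-difference form of the test mirrors our reconstruction of (50), and `err_fn` stands for any routine returning $\epsilon(K, N)$ from (8).

```python
def dyadic_model_order(err_fn, tau, m_max):
    """Return N* per (51): the smallest dyadic order N = 2^m whose error
    improvement upon doubling falls below the threshold tau, per (50).

    err_fn(N) should return the least mean-squared error eps(K, N) from (8).
    """
    prev = err_fn(2)
    for m in range(2, m_max + 1):
        cur = err_fn(2 ** m)
        if abs(prev - cur) < tau:      # error floor (49) reached
            return 2 ** (m - 1)
        prev = cur
    return 2 ** m_max                  # tolerance never met within the budget
```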

6. Numerical Evaluations

In this section, we would like to provide the numerical evaluations illustrating the computational complexities associated with the two Hermitian matrix inversion algorithms (the conventional Cholesky factorization and our new efficient dyadic recursion) and the LSE applications using the adaptive model order for channel equalization. Table 1 lists the comparison of the two methods for a few model orders. It is noted that when a Hermitian matrix inversion is carried out for a fixed model order, the conventional Cholesky factorization approach is more computationally efficient than our new dyadic recursion. However, in a comparison of the total computational complexity for the sequential calculation of the variable model-order Hermitian matrix inversion, our new dyadic recursion is more efficient than the conventional Cholesky factorization when the same terminal model order is set, since the latter does not utilize any information from the previous calculations of the Hermitian inversions with smaller model orders.

6.1. Comparison of Computational Complexities

Figures 1 and 2 illustrate the total computational complexities in terms of complex multiplications for the Hermitian matrix inversion using the conventional Cholesky factorization and our new dyadic recursion method; they depict the complexities versus the model order for the fixed model order (one-time LSE or MLR) and the variable model order (sequential rank-adaptive LSE or MLR), respectively. According to Figures 1 and 2, it is clear that our dyadic recursion algorithm is more efficient than the conventional Cholesky factorization for the sequential rank-adaptive LSE (MLR), whereas the conventional Cholesky factorization is more efficient for the one-time LSE (MLR). The computational complexity margin ratios for the two situations are also depicted in Figure 3. The asymptotic complexity margin ratios derived in (39) and (42) can be observed therein.

6.2. Sequential Rank-Adaptive LSE for Channel Equalization

Next, we would like to present some simulations illustrating the application of our proposed sequential rank-adaptive Hermitian matrix inversions for channel equalization as discussed in Section 5. Assume that the modulation type of the transmitted signal is BPSK and the channel noise is additive white Gaussian noise (AWGN). The channel transfer function is arbitrarily chosen such that the channel gain is normalized to unity, and it complies with the LOS (line-of-sight) channel model such that the leading coefficient is the largest among all in magnitude. The received signals are generated by the computer using (43). The signal sample size $K$ is fixed for all trials.
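A sketch of the signal generation in (43) under the stated assumptions; the channel taps, sample size, and SNR value below are illustrative placeholders, since the paper's exact values are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
K = 1000                                  # illustrative sample size
s = 2.0 * rng.integers(0, 2, K) - 1.0     # BPSK training sequence (+1/-1)
h = np.array([0.9, 0.3, -0.2, 0.1])       # hypothetical LOS-style channel taps
h /= np.linalg.norm(h)                    # unit channel gain; leading tap largest
snr_db = 10.0                             # one of the simulated SNR points
sigma2 = 10.0 ** (-snr_db / 10.0)         # noise variance for unit signal power
v = np.sqrt(sigma2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
r = np.convolve(s, h)[:K] + v             # received signal per (43)
```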

We carry out 200 Monte Carlo trials with randomly generated AWGN (various signal-to-noise ratios from 0 to 30 dB) and the aforementioned transmission model. The least mean-squared errors on average are calculated using (8). Here, we compare three schemes, namely, (i) one-time Hermitian matrix inversion using the Cholesky factorization with a smaller fixed model order, (ii) one-time Hermitian matrix inversion using the Cholesky factorization with a larger fixed model order, and (iii) sequential rank-adaptive Hermitian matrix inversion using our proposed dyadic recursion with variable model order (specified as "Variable Model Order"). Two different threshold values $\tau$ are employed. It is noted that the allowed maximum model order for our dyadic recursion scheme is defaulted as the largest admissible power of two, $2^{\lfloor \log_{2} K \rfloor}$, where $\lfloor \cdot \rfloor$ is the flooring operator.
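For a compact stand-in of the compared schemes, the helpers below compute the fixed-order and variable model-order errors; `np.linalg.lstsq` replaces the dyadic inversion here purely for brevity, so the returned errors (the quantities compared in Figures 4 and 5) match, while the complexities do not.

```python
import numpy as np

def mse_for_order(X, d, N):
    """Least mean-squared error (8) at model order N (via lstsq for brevity)."""
    w, *_ = np.linalg.lstsq(X[:, :N], d, rcond=None)
    return np.mean(np.abs(d - X[:, :N] @ w) ** 2)

def variable_order_mse(X, d, tau, m_max):
    """Double the model order until the improvement drops below tau, per (50)."""
    prev = mse_for_order(X, d, 2)
    for m in range(2, m_max + 1):
        cur = mse_for_order(X, d, 2 ** m)
        if prev - cur < tau:           # error floor reached; keep the smaller order
            return prev, 2 ** (m - 1)
        prev = cur
    return prev, 2 ** m_max
```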

Figures 4–7 show the simulation results. According to Figures 4 and 5, our proposed new dyadic recursion method achieves the smallest least mean-squared errors most of the time, especially for low threshold values. The trade-off of our scheme is the increased average computational complexity, which is shown in Figures 6 and 7. From Figures 6 and 7, it can be observed that we may save a huge computational burden under a bad channel condition (low signal-to-noise ratio) at the cost of a very slight equalization quality deterioration. According to Figures 1–7, we can justify that our dyadic recursion algorithm is more efficient than the conventional Cholesky factorization in the Hermitian matrix inversion for the sequential rank-adaptive LSE or MLR. Besides, the sequential rank-adaptive LSE or MLR with variable model orders using our recursion scheme can also seek the trade-off between the estimation performance and the required computational complexity.

7. Conclusion

Hermitian matrix inversion is the pivotal computation for least-square estimation and multiple-parameter linear regression. The conventional Cholesky factorization can be applied for one-time LSE and MLR with a predetermined fixed model order. In this paper, we decompose any Hermitian matrix into submatrices with half of the original dimension and derive the new dyadic recursion algorithm for the Hermitian matrix inversion accordingly. Moreover, for the theoretical comparison, we elaborate the computational complexity analyses to derive the asymptotic complexity ratios for the Hermitian matrix inversion using the conventional Cholesky factorization and our new dyadic recursion scheme. We show that our new method achieves the favorable complexity margin ratio in (42) for the sequential rank-adaptive Hermitian matrix inversions, while the conventional Cholesky factorization method retains the advantage given by (39) for the one-time Hermitian matrix inversion. We also present the applications of the sequential rank-adaptive LSE with variable model order using our new dyadic recursion procedure for channel estimation and equalization in telecommunications. Our proposed new variable model-order LSE scheme can seek the trade-off between the estimation performance and the computational complexity.

Acknowledgments

This work was supported by the Information Technology Research Award for National Priorities (NSF-ECS 0426644) from the National Science Foundation, the Faculty Research Initiation Award from the Southeastern Center for Electrical Engineering Education, the Research Enhancement Award from the Louisiana-NASA Space Consortium, and the Faculty Research Award from Louisiana State University.