Abstract
A variable forgetting factor (VFF) least squares (LS) algorithm for the polynomial channel paradigm is presented for improved tracking performance in nonstationary environments. The main focus is on updating the VFF when each time-varying fading channel is modeled as a first-order Markov process. In addition to efficient tracking under frequency-selective fading channels, incorporating the proposed numeric variable forgetting factor (NVFF) in the LS algorithm reduces the computational complexity.
1. Introduction
Time-varying frequency-selective fading wireless channels can be modeled using a tapped-delay-line filter in which each channel tap coefficient is an independent autoregressive process [1]. The analytical and simulation results presented in [2] show that the first-order Markov channel provides a mathematically tractable model for time-varying channels. Under such a Rayleigh fading environment, a linear least squares algorithm (the linear polynomial model-based approach [3]) using a variable forgetting factor (LSn-VFF) was developed for channel estimation in [4]. In the channel estimation error, the lag error variance due to time variations and the additive white Gaussian noise (AWGN) variance trade off against each other. The VFF can be determined from the degree of nonstationarity and the signal-to-noise ratio by using the well-known least mean square (LMS) algorithm. The LSn-VFF algorithm is reported to perform well only at high signal-to-noise ratios [4]; moreover, the LMS-based VFF update increases the computational complexity.
Based on an extended estimation error criterion, which accounts for the nonstationarity of the signal, a method for determining the numeric variable forgetting factor (NVFF) is presented in [5]. When the signal experiences nonstationarity, the NVFF decreases automatically so that the global trend is estimated quickly. Conversely, under stationary conditions the NVFF increases, lengthening the memory for accurate estimation. In this correspondence, we propose a channel estimation method using an NVFF least squares algorithm combined with the polynomial time-varying channel paradigm.
This correspondence is organized as follows. Section 2 describes the time-varying frequency-selective wireless system model. Section 3 presents the second-order polynomial model-based least squares algorithm using a VFF (LSn2-VFF) and introduces the mathematical formulation of the NVFF based on the extended estimation error criterion. Section 4 presents simulation results comparing the performance of the VFF and NVFF under a nonstationary environment. Finally, conclusions are given in Section 5.
2. System Model
Let the received signal be
$$y(n) = \mathbf{x}^H(n)\,\mathbf{h}(n) + v(n), \qquad (1)$$
where $\mathbf{x}(n)$ is the transmitted data symbol vector with $E\{|x(n)|^2\} = \sigma_x^2$, $\mathbf{h}(n)$ is the frequency-selective time-varying channel coefficient vector, which changes after each symbol period $T_s$, and $v(n)$ is the zero-mean AWGN with variance $\sigma_v^2$. Here $(\cdot)^H$ denotes conjugate transposition, and $N$ is the length of the multipath fading channel. Each channel coefficient is an independent stationary ergodic first-order Markov process with correlation coefficient $\alpha = J_0(2\pi f_d T_s)$, where $f_d$ is the maximum Doppler frequency and $J_0(\cdot)$ is the Bessel function of the first kind and zeroth order [1]. It follows that
$$h_i(n) = \alpha\,h_i(n-1) + w_i(n), \qquad (2)$$
where $w_i(n)$ is the zero-mean process noise with variance $\sigma_w^2$. Using the estimated channel coefficient vector $\hat{\mathbf{h}}(n)$, the estimated received signal is
$$\hat{y}(n) = \mathbf{x}^H(n)\,\hat{\mathbf{h}}(n). \qquad (3)$$
The estimation error $e(n) = y(n) - \hat{y}(n)$ has zero mean and variance $\sigma_e^2$.

By invoking Taylor's theorem, the time variation of each channel coefficient is explicitly represented in terms of the polynomial paradigm [3]. It results in
$$h_i(n+1) \approx h_i(n) + T_s\,h_i^{(1)}(n) + \frac{T_s^2}{2}\,h_i^{(2)}(n), \qquad (4)$$
where $h_i^{(k)}(n)$ is the $k$th-order time-variation parameter for the $i$th coefficient, $i = 0, 1, \ldots, N-1$. The least squares algorithm using the above second-order channel model is called the LSn2 estimation algorithm, which is a modification of the least squares algorithm using the first-order channel model (LSn) in [4]. The tracking capability of the LSn2 algorithm in a time-varying environment can be further improved by incorporating a variable forgetting factor (LSn2-VFF) without explicit knowledge of the process noise variance.
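As a concrete illustration, the first-order Markov tap model above can be simulated as in the following sketch. The tap count, Doppler rate, and process-noise level are illustrative choices, not parameters taken from this correspondence; the correlation coefficient is computed as $J_0(2\pi f_d T_s)$ via the power-series definition of the Bessel function.

```python
import numpy as np

def j0(x, terms=20):
    """Bessel function of the first kind, order zero, via its power series:
    J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2 (accurate for small |x|)."""
    s, term = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            term *= -((x / 2.0) ** 2) / k**2
        s += term
    return s

def simulate_markov_channel(n_symbols, n_taps, fd_ts, sigma_w=0.01, seed=0):
    """Generate fading taps following h_i(n) = alpha*h_i(n-1) + w_i(n)."""
    rng = np.random.default_rng(seed)
    alpha = j0(2.0 * np.pi * fd_ts)          # tap correlation coefficient
    h = np.zeros((n_symbols, n_taps), dtype=complex)
    h[0] = (rng.standard_normal(n_taps)
            + 1j * rng.standard_normal(n_taps)) / np.sqrt(2)
    for n in range(1, n_symbols):
        w = sigma_w * (rng.standard_normal(n_taps)
                       + 1j * rng.standard_normal(n_taps)) / np.sqrt(2)
        h[n] = alpha * h[n - 1] + w          # first-order Markov recursion
    return h, alpha

h, alpha = simulate_markov_channel(n_symbols=500, n_taps=3, fd_ts=0.01)
print(alpha)  # close to 1 for slow fading (small fd*Ts)
```

For slow fading ($f_d T_s \ll 1$), $\alpha$ is close to unity, so consecutive channel realizations are strongly correlated, which is exactly the regime in which a long estimator memory pays off.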
3. LS Algorithms with Polynomial Channel Model
3.1. LSn2-VFF Algorithm for Time-Varying Channels
The polynomial channel model-based LS-VFF algorithm uses a forgetting factor $\lambda(n)$ to update the channel state after each symbol duration. The adaptive weight vector of the LSn2-VFF algorithm is obtained from the least squares algorithm proposed by Song et al. in [4]; it follows that
$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,e^{*}(n), \qquad (5)$$
where the gain vector and inverse correlation matrix are updated as
$$\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\,\mathbf{x}(n)}{\lambda(n) + \mathbf{x}^H(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}, \qquad \mathbf{P}(n) = \frac{1}{\lambda(n)}\left[\mathbf{P}(n-1) - \mathbf{k}(n)\,\mathbf{x}^H(n)\,\mathbf{P}(n-1)\right]. \qquad (6)$$
The minimum mean square error (MMSE) in channel estimation is $\xi_{\min} = \sigma_v^2$. For unknown process noise variance, the VFF is updated by an LMS-type gradient recursion
$$\lambda(n+1) = \Big[\lambda(n) - \mu\,\nabla_{\lambda}\,\hat{\xi}(n)\Big]_{\lambda_{\min}}^{\lambda_{\max}}, \qquad (7)$$
where $\mu$ is the step size [6], which controls the convergence and stability of the LMS algorithm in (7), and $[\cdot]_{\lambda_{\min}}^{\lambda_{\max}}$ denotes truncation to the range $\lambda_{\min} \le \lambda(n) \le \lambda_{\max}$ to ensure a bounded nonnegative VFF. This per-iteration LMS update increases the computational complexity of the LSn2-VFF algorithm and, moreover, requires knowledge of $\sigma_v^2$ at the receiver.
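To make the role of the forgetting factor concrete, the sketch below shows a standard exponentially weighted RLS step in which $\lambda$ enters the gain and inverse-correlation updates. It is a simplified real-valued illustration, not the exact complex-valued LSn2-VFF recursion of [4]; the static-channel sanity check at the end is an illustrative assumption.

```python
import numpy as np

def rls_vff_step(w, P, x, d, lam):
    """One exponentially weighted RLS update with forgetting factor lam.

    w   : current channel estimate, shape (N,)
    P   : inverse correlation matrix, shape (N, N)
    x   : regressor (transmitted symbol vector), shape (N,)
    d   : desired output (received sample)
    lam : forgetting factor, 0 < lam <= 1
    """
    pi = P @ x
    k = pi / (lam + x @ pi)              # gain vector
    e = d - w @ x                        # a priori estimation error
    w = w + k * e                        # channel estimate update
    P = (P - np.outer(k, pi)) / lam      # inverse-correlation update
    return w, P, e

# Sanity check: track a static 3-tap channel from BPSK inputs (noiseless).
rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2])
w, P = np.zeros(3), 100.0 * np.eye(3)
for _ in range(100):
    x = rng.choice([-1.0, 1.0], size=3)  # BPSK symbols
    w, P, e = rls_vff_step(w, P, x, x @ h_true, lam=0.99)
print(np.round(w, 3))                    # approaches h_true
```

A smaller $\lambda$ discounts old samples faster, which is what gives the VFF its tracking ability at the cost of higher steady-state noise.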
3.2. LSn2-NVFF Algorithm for Time-Varying Channels
Equations (5)-(6) are used in combination with the NVFF to develop the LSn2-NVFF algorithm. The speed of adaptation is governed by the asymptotic memory length [5],
$$N_a(n) = \frac{1}{1 - \lambda(n)}. \qquad (8)$$
The memory lengths corresponding to $\lambda_{\min}$ and $\lambda_{\max}$ are denoted by $N_{\min}$ and $N_{\max}$, respectively. If the process noise variance is small in comparison with the AWGN variance, that is, $\sigma_w^2 \ll \sigma_v^2$ (see the appendix), then $\sigma_e^2 \approx \sigma_v^2$. The extended estimation error is determined by averaging the squared a priori error over a sliding window of length $L$:
$$\xi_{\mathrm{ext}}(n) = \frac{1}{L}\sum_{k=0}^{L-1} \left|e(n-k)\right|^2. \qquad (9)$$

To ensure that the averaging in (9) does not obscure the nonstationarity introduced by the time-varying channel, $L$ is kept smaller than the minimum asymptotic memory length, that is, $L < N_{\min}$. The NVFF is determined by using the extended estimation error in (9); it follows that
$$\lambda(n) = \left[\frac{\sigma_v^2}{\xi_{\mathrm{ext}}(n)}\right]_{\lambda_{\min}}^{1}, \qquad (10)$$
where $[\cdot]_{\lambda_{\min}}^{1}$ denotes truncation to the interval $[\lambda_{\min}, 1]$.

For a value of the NVFF close to unity, accurate parameter estimation takes a relatively long time when the signal is stationary [5]; thus the NVFF controls the speed of adaptation. Under a nonstationary environment, a small value of the NVFF is beneficial; it is bounded from below by $\lambda_{\min}$ to guarantee positive nonzero values of the NVFF.
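The behavior described above can be sketched as follows. This is an illustrative reading, not the exact recursion of [5]: the NVFF is taken as the ratio of the nominal noise floor $\sigma_v^2$ to the windowed error power, clipped to $[\lambda_{\min}, 1]$, and the window length and bound values are assumed for the example.

```python
import numpy as np
from collections import deque

def make_nvff(sigma_v2, window_len, lam_min=0.75):
    """Return an update function mapping each new a priori error to an NVFF.

    Assumed form (illustrative): lambda(n) = sigma_v^2 / xi_ext(n),
    clipped to [lam_min, 1], where xi_ext(n) averages |e|^2 over the
    last `window_len` samples.
    """
    buf = deque(maxlen=window_len)
    def update(e):
        buf.append(abs(e) ** 2)
        xi_ext = sum(buf) / len(buf)            # extended estimation error
        lam = sigma_v2 / max(xi_ext, 1e-12)     # large error -> small lambda
        return float(np.clip(lam, lam_min, 1.0))
    return update

nvff = make_nvff(sigma_v2=0.01, window_len=8)
lam_quiet = nvff(0.1)   # |e|^2 equals the noise floor: stationary regime
lam_burst = nvff(1.0)   # sudden large error: nonstationary regime
print(lam_quiet, lam_burst)
```

When the error power sits at the noise floor, the NVFF stays near unity (long memory); when an error burst signals nonstationarity, it drops to its lower bound (short memory), matching the qualitative behavior described in the text. No LMS step size is needed, which is the computational advantage over the VFF update.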
4. Simulation Results
For the simulations, BPSK independent and identically distributed data is used as the input. The presented results are based on the ensemble average of 250 independent simulation runs, with the remaining system parameters held fixed across runs. The channel tracking performances of the LSn2-VFF and LSn-VFF algorithms are compared in Figure 1 for two normalized Doppler rates $f_d T_s$, where $\lambda_{\max}$, $\lambda_{\min}$, and the step size $\mu$ are empirically chosen as 0.99, 0.75, and 0.005, respectively, for all cases (see [4]). The actual channel coefficient is denoted as "TRUE" in Figures 1 and 2. The tracking results in Figure 1 show that the LSn2 algorithm (with the second-order channel model) combats lag noise more efficiently than the LSn algorithm, but at the cost of increased computational complexity. It is apparent from the simulation results in Figures 1 and 2 that the LSn-VFF and LSn-NVFF algorithms outperform the LSn2-VFF and LSn2-NVFF algorithms.
Both variable forgetting factors compensate for the loss in tracking capability caused by the first-order channel model. The simulation results are in good agreement with previous studies [4]. For smoothly fading channels, the LSn-NVFF algorithm reduces the tracking weight error more than the LSn-VFF algorithm (as shown in Figure 3). At a moderate Doppler rate, the tracking performances of the two adaptive algorithms are observed to be approximately equal. For fast fading channels, however, LSn-VFF outperforms LSn-NVFF because the extended estimation error used in determining the NVFF becomes inaccurate.
5. Concluding Remarks
The LSn-NVFF algorithm not only performs better than the LSn-VFF algorithm but also precludes the need for an LMS update of the variable forgetting factor at each iteration, which in turn reduces the computational burden. The simulation results indicate that higher-order polynomial model-based LS algorithms (e.g., LSn2) in conjunction with the NVFF provide no additional advantage. The linear least squares algorithm using the NVFF is found to be efficient under slow and smoothly time-varying fading channels.
Appendix
Using (1) and (3), the estimation error is
$$e(n) = y(n) - \hat{y}(n) = \mathbf{x}^H(n)\left[\mathbf{h}(n) - \hat{\mathbf{h}}(n)\right] + v(n).$$
The above equation can be simplified using (2) as
$$e(n) = \mathbf{x}^H(n)\left[\alpha\,\mathbf{h}(n-1) + \mathbf{w}(n) - \hat{\mathbf{h}}(n)\right] + v(n).$$
Under optimum conditions, it is assumed that $\hat{\mathbf{h}}(n) \approx \alpha\,\mathbf{h}(n-1)$. It leads to
$$e(n) \approx \mathbf{x}^H(n)\,\mathbf{w}(n) + v(n).$$
Therefore, the estimation error variance is
$$\sigma_e^2 = N\,\sigma_x^2\,\sigma_w^2 + \sigma_v^2.$$
However, $\sigma_e^2 \approx \sigma_v^2$ for $\sigma_w^2 \ll \sigma_v^2$.
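Under the optimum-tracking assumption, the residual error reduces to a process-noise term plus AWGN, giving a variance of the form $\sigma_e^2 = N\,\sigma_x^2\,\sigma_w^2 + \sigma_v^2$. The following Monte Carlo check (a real-valued sketch with illustrative variances, not the paper's simulation setup) verifies that relation numerically.

```python
import numpy as np

# Monte Carlo check of sigma_e^2 = N*sigma_x^2*sigma_w^2 + sigma_v^2,
# assuming e(n) = x^T(n) w(n) + v(n) with independent zero-mean terms.
rng = np.random.default_rng(42)
N, trials = 4, 200_000
sigma_x2, sigma_w2, sigma_v2 = 1.0, 0.05, 0.1

x = rng.standard_normal((trials, N)) * np.sqrt(sigma_x2)   # data symbols
w = rng.standard_normal((trials, N)) * np.sqrt(sigma_w2)   # process noise
v = rng.standard_normal(trials) * np.sqrt(sigma_v2)        # AWGN
e = np.einsum('ij,ij->i', x, w) + v                        # error samples

print(e.var())   # ~ N*sigma_x2*sigma_w2 + sigma_v2 = 0.3
```

The empirical variance matches the closed-form value, and shrinking `sigma_w2` toward zero drives it toward `sigma_v2`, consistent with the approximation used in Section 3.2.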