Journal of Stochastics

Volume 2014, Article ID 502406, 7 pages

http://dx.doi.org/10.1155/2014/502406

## Adaptive Algorithm for Multichannel Autoregressive Estimation in Spatially Correlated Noise

Alimorad Mahmoudi

Electrical Engineering Department, Shahid Chamran University, Ahvaz, Iran

Received 24 April 2014; Accepted 5 June 2014; Published 19 June 2014

Academic Editor: Chi-Yi Tsai

Copyright © 2014 Alimorad Mahmoudi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper addresses the problem of multichannel autoregressive (MAR) parameter estimation in the presence of spatially correlated noise by a steepest descent (SD) method that combines low-order and high-order Yule-Walker (YW) equations. In addition, to yield an unbiased estimate of the MAR model parameters, we apply inverse filtering to estimate the noise covariance matrix. In a simulation study, the performance of the proposed unbiased estimation algorithm is evaluated and compared with existing parameter estimation methods.

#### 1. Introduction

Noisy MAR modeling has many applications, such as high-resolution multichannel spectral estimation [1], parametric multichannel speech enhancement [2], MIMO-AR time-varying fading channel estimation [3], and adaptive signal detection [4].

When noise-free observations are available, the Nuttall-Strand method [5], the maximum likelihood (ML) estimator [6], and extensions of standard scalar schemes to the multichannel case can be used to estimate the MAR model parameters. The relevance of the Nuttall-Strand method is demonstrated in [7] through a comparative study of these methods. However, the noise-free MAR estimation methods are sensitive to the presence of additive noise in the MAR process, which limits their utility [1].

The modified Yule-Walker (MYW) method is a conventional method for noisy MAR parameter estimation. This method uses estimated correlation at lags beyond the AR order [1]. The MYW method is perhaps the simplest one from the computational point of view [8], but it exhibits poor estimation accuracy and relatively low efficiency due to the use of large-lag autocovariance estimates [8]. Moreover, numerical instability issues may occur when it is used in online parameter estimation [9].

The least-squares (LS) method is another approach to noisy MAR parameter estimation. Additive noise causes the LS estimates of the MAR parameters to be biased. In [10], an improved LS (ILS) based method was developed for the estimation of noisy MAR signals; in this method, bias correction is performed using an estimate of the observation noise covariance.

The method proposed in [10], denoted the vector ILS based (ILSV) method, is an extension of Zheng’s method [11] to the multichannel case. In the ILSV method, the channel noises may be correlated, and no constraint is imposed on the covariance matrix of the channel noises. Nevertheless, this method exhibits poor convergence when the SNR is low.

In [12], the ILSV algorithm is modified using the symmetry property of the observation noise covariance matrix, yielding the advanced least-squares vector (ALSV) algorithm. One step, called symmetrization, is added to the ILSV algorithm to estimate the observation noise covariance matrix.

In [13, 14], methods based on errors-in-variables and on cross-coupled Kalman and ${H}_{\infty}$ filters are suggested, respectively. These methods are extensions of the scalar methods presented in [15] and [16, 17], respectively. In these methods, the observation noise covariance matrix is assumed to be diagonal, which means that the channel noises are uncorrelated. Such methods are therefore not suitable for the spatially correlated noise case.

Note that the cross-coupled filter idea consists of using two mutually interactive filters. Each time a new observation is available, the signal is estimated using the latest estimate of the parameters, and conversely the parameters are estimated using the latest a posteriori signal estimate.

In this paper, we combine low-order and high-order YW equations in a new scheme and use an adaptive steepest descent (SD) algorithm to estimate the parameters of the MAR process. We also apply the inverse filtering idea to estimate the observation noise covariance matrix. Moreover, we theoretically analyze the convergence of the proposed method. Using computer simulations, the performance of the proposed algorithm is evaluated and compared with that of the LS, MYW, ILSV, and ALSV methods in the spatially correlated noise case. Note that we do not compare with [13, 14] because they assume that the channel noises are uncorrelated.

The proposed estimation algorithm provides more accurate parameter estimates, and its convergence behaviour is better than those of the ILSV and ALSV methods.

#### 2. Data Model

A real noisy $m$-channel AR signal of order $p$ can be modeled as
$$\mathbf{x}(n)=\sum_{i=1}^{p}\mathbf{A}_{i}\,\mathbf{x}(n-i)+\mathbf{w}(n),\qquad(1)$$
$$\mathbf{y}(n)=\mathbf{x}(n)+\mathbf{v}(n),\qquad(2)$$
where $\mathbf{w}(n)$ is the input noise vector and $\mathbf{x}(n)$ is the MAR output vector. The $\mathbf{A}_{i}$'s are the $m\times m$ MAR parameter matrices, and $\mathbf{v}(n)$ is the observation noise vector. The noises $\mathbf{w}(n)$ and $\mathbf{v}(n)$ are assumed to be zero-mean stationary white noises with the covariance matrices given by
$$E\big[\mathbf{w}(n)\,\mathbf{w}^{T}(l)\big]=\mathbf{Q}\,\delta_{n,l},\qquad E\big[\mathbf{v}(n)\,\mathbf{v}^{T}(l)\big]=\mathbf{R}\,\delta_{n,l},$$
where $\delta_{n,l}$ is the Kronecker delta and $\mathbf{Q}$ and $\mathbf{R}$ are the unknown input noise covariance matrix and the observation noise covariance matrix, respectively, which are generally nondiagonal. This means that the observation noise is spatially correlated. $E[\cdot]$ is the expectation operator and $(\cdot)^{T}$ denotes the transpose operation.
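As an illustration of this data model, the following sketch simulates a noisy two-channel AR(2) process with spatially correlated observation noise. The parameter matrices, noise covariances, and the use of numpy are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_noisy_mar(A, Q, R, N, rng):
    """Simulate x(n) = sum_i A[i] x(n-1-i) + w(n), y(n) = x(n) + v(n)."""
    p, m = len(A), A[0].shape[0]
    x = np.zeros((N + p, m))
    for n in range(p, N + p):
        w = rng.multivariate_normal(np.zeros(m), Q)          # input noise w(n)
        x[n] = sum(A[i] @ x[n - 1 - i] for i in range(p)) + w
    v = rng.multivariate_normal(np.zeros(m), R, size=N)      # correlated v(n)
    return x[p:] + v                                         # observed y(n)

rng = np.random.default_rng(0)
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[-0.2, 0.0], [0.1, -0.1]])]                   # illustrative A_i
Q = np.eye(2)
R = np.array([[0.5, 0.2], [0.2, 0.5]])  # nondiagonal: spatially correlated noise
y = simulate_noisy_mar(A, Q, R, 1000, rng)
print(y.shape)  # (1000, 2)
```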

In this data model, we have the following assumptions.

(i) The AR order $p$ and the number of channels $m$ are known.

(ii) The MAR parameter matrices are constrained such that the roots of
$$\det\big(\mathbf{A}(z)\big)=0,\qquad \mathbf{A}(z)=\mathbf{I}_{m}-\sum_{i=1}^{p}\mathbf{A}_{i}\,z^{-i},\qquad(3)$$
lie inside the unit circle, where $\mathbf{I}_{m}$ is the $m\times m$ identity matrix and $\det$ denotes the determinant of a matrix. This condition on the $\mathbf{A}_{i}$ matrices guarantees the stability of the MAR model [1]. Note that the roots of (3) are the poles of the MAR model.

(iii) The observation noise $\mathbf{v}(n)$ is uncorrelated with the input noise $\mathbf{w}(n)$; that is, $E[\mathbf{w}(n)\,\mathbf{v}^{T}(l)]=\mathbf{0}$ for all $n$ and $l$.
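Assumption (ii) can be checked numerically: the model is stable if and only if the block companion matrix built from the $\mathbf{A}_i$'s has spectral radius below one, which is an equivalent form of the root condition. The example matrices below are illustrative.

```python
import numpy as np

def is_stable(A):
    """Stability check for a MAR model: all eigenvalues of the block
    companion matrix [[A_1 ... A_p], [I, 0]] must lie inside the unit circle."""
    p, m = len(A), A[0].shape[0]
    C = np.zeros((p * m, p * m))
    C[:m, :] = np.hstack(A)              # top block row: [A_1 ... A_p]
    C[m:, :-m] = np.eye((p - 1) * m)     # shifted identity blocks
    return np.max(np.abs(np.linalg.eigvals(C))) < 1.0

A_stable = [np.array([[0.5, 0.1], [0.0, 0.4]]),
            np.array([[-0.2, 0.0], [0.1, -0.1]])]
A_unstable = [np.array([[1.2, 0.0], [0.0, 1.2]])]
print(is_stable(A_stable), is_stable(A_unstable))  # True False
```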

Using (1), we can write the following linear regression for $\mathbf{y}(n)$:
$$\mathbf{y}(n)=\boldsymbol{\Theta}^{T}\boldsymbol{\varphi}(n)+\mathbf{e}(n),\qquad(4)$$
where $\boldsymbol{\Theta}$ is the parameter matrix defined as
$$\boldsymbol{\Theta}=\big[\mathbf{A}_{1}\;\;\mathbf{A}_{2}\;\;\cdots\;\;\mathbf{A}_{p}\big]^{T}$$
and $\boldsymbol{\varphi}(n)$ is the regression vector defined as
$$\boldsymbol{\varphi}(n)=\big[\mathbf{y}^{T}(n-1)\;\;\mathbf{y}^{T}(n-2)\;\;\cdots\;\;\mathbf{y}^{T}(n-p)\big]^{T}.$$
The equation error vector is
$$\mathbf{e}(n)=\mathbf{w}(n)+\mathbf{v}(n)-\sum_{i=1}^{p}\mathbf{A}_{i}\,\mathbf{v}(n-i).$$

The covariance matrices of $\mathbf{y}(n)$ and $\boldsymbol{\varphi}(n)$ are given as
$$\mathbf{R}_{y}(k)=E\big[\mathbf{y}(n)\,\mathbf{y}^{T}(n-k)\big],\qquad \mathbf{R}_{\varphi}=E\big[\boldsymbol{\varphi}(n)\,\boldsymbol{\varphi}^{T}(n)\big],$$
where $k$ denotes the correlation lag.

The main objective in this paper is to estimate $\boldsymbol{\Theta}$, $\mathbf{Q}$, and $\mathbf{R}$ using the given samples $\{\mathbf{y}(n)\}_{n=1}^{N}$.
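Since only the samples are available, the covariance matrices above are in practice replaced by sample averages. A minimal numpy sketch follows; the biased estimator and the white test signal are illustrative assumptions.

```python
import numpy as np

def sample_autocov(y, k):
    """Biased sample estimate of R_y(k) = E[y(n) y^T(n-k)] from N samples."""
    N = y.shape[0]
    return y[k:].T @ y[:N - k] / N

rng = np.random.default_rng(1)
y = rng.standard_normal((5000, 2))   # white test signal: R_y(0) ~ I, R_y(k>0) ~ 0
R0 = sample_autocov(y, 0)
R3 = sample_autocov(y, 3)
```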

#### 3. Steepest Descent Algorithm for Noisy MAR Models

The proposed estimation algorithm uses steepest descent method. It iterates between the following two steps until convergence: (I) given noise covariance matrix, estimate MAR matrices using steepest descent method; (II) given MAR matrices, estimate noise covariance matrix based on inverse filtering idea.

##### 3.1. MAR Parameter Estimation Given Observation Noise Covariance Matrix

The Yule-Walker equations for the noisy MAR process given by (4) are obtained by postmultiplying (4) by $\mathbf{y}^{T}(n-k)$ and taking expectations:
$$\mathbf{R}_{y}(k)=\sum_{i=1}^{p}\mathbf{A}_{i}\,\mathbf{R}_{y}(k-i)+E\big[\mathbf{e}(n)\,\mathbf{y}^{T}(n-k)\big],\qquad k\geq 1.\qquad(11)$$
Evaluating (11) for $k=1,\ldots,p$ and using $E[\mathbf{e}(n)\,\mathbf{y}^{T}(n-k)]=-\mathbf{A}_{k}\mathbf{R}$ give the following system of linear equations:
$$\mathbf{r}=\big(\mathbf{R}_{\varphi}-\mathbf{D}\big)\,\boldsymbol{\Theta},\qquad(12)$$
where $\mathbf{r}=E[\boldsymbol{\varphi}(n)\,\mathbf{y}^{T}(n)]$, $\mathbf{R}_{\varphi}=E[\boldsymbol{\varphi}(n)\,\boldsymbol{\varphi}^{T}(n)]$, and $\mathbf{D}$ is a block-diagonal matrix given by
$$\mathbf{D}=\mathbf{I}_{p}\otimes\mathbf{R}=\operatorname{blkdiag}\big(\mathbf{R},\ldots,\mathbf{R}\big).$$

Evaluating (11) for $k=p+1,\ldots,2p+M$ and using $E[\mathbf{e}(n)\,\mathbf{y}^{T}(n-k)]=\mathbf{0}$ for $k>p$ give the following modified Yule-Walker (MYW) equations:
$$\mathbf{r}_{H}=\mathbf{R}_{H}\,\boldsymbol{\Theta},\qquad(15)$$
where $\mathbf{r}_{H}$ and $\mathbf{R}_{H}$ are built from the autocovariance matrices at lags greater than $p$ and $M$ is an arbitrary nonnegative integer.

Combining the low-order Yule-Walker equations given by (12) and the high-order Yule-Walker equations given by (15), we obtain
$$\begin{bmatrix}\mathbf{r}\\ \mathbf{r}_{H}\end{bmatrix}=\left(\begin{bmatrix}\mathbf{R}_{\varphi}\\ \mathbf{R}_{H}\end{bmatrix}-\begin{bmatrix}\mathbf{D}\\ \mathbf{0}\end{bmatrix}\right)\boldsymbol{\Theta}.\qquad(17)$$

Premultiplying (17) by the transpose of its coefficient matrix, we obtain the square system of normal equations
$$\tilde{\mathbf{r}}=\big(\tilde{\mathbf{R}}-\tilde{\mathbf{D}}\big)\,\boldsymbol{\Theta},\qquad(20)$$
where $\tilde{\mathbf{r}}$ is the resulting $pm\times m$ right-hand side and $\tilde{\mathbf{R}}-\tilde{\mathbf{D}}$ is the $pm\times pm$ coefficient matrix. The stacked covariance matrix in (17), of dimension $(2p+M)m\times pm$, can be partitioned into its low-order and high-order blocks, whose dimensions are $pm\times pm$ and $(p+M)m\times pm$, respectively.

Substituting (19) and (22) into (20) and assuming that $\mathbf{R}$ is known, we can iteratively estimate $\boldsymbol{\Theta}$ using the steepest descent algorithm as follows:
$$\boldsymbol{\Theta}(k+1)=\boldsymbol{\Theta}(k)+\mu\Big(\tilde{\mathbf{r}}-\big(\tilde{\mathbf{R}}-\tilde{\mathbf{D}}\big)\boldsymbol{\Theta}(k)\Big),\qquad(24)$$
where $\mu$ is the step size parameter, and the right-hand side $\tilde{\mathbf{r}}$ and coefficient matrix $\tilde{\mathbf{R}}-\tilde{\mathbf{D}}$ of the normal equations in (20) can be estimated from the observations. We can also control the stability and the rate of convergence of the algorithm by changing $\mu$. The above iteration converges if $\mu$ is selected between zero and $2/\lambda_{\max}$, where $\lambda_{\max}$ is the maximum eigenvalue of the coefficient matrix. Furthermore, for a particular choice of its parameters, (24) reduces to the adaptive algorithm previously derived in [8].
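The steepest descent recursion and its step-size bound can be sketched on a small illustrative system; the symmetric positive definite matrix and right-hand side below are stand-ins for the normal-equation quantities, not values from the paper.

```python
import numpy as np

# Illustrative SPD coefficient matrix (stand-in for the normal-equation
# matrix in (20)) and right-hand side.
M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
b = np.array([1.0, -2.0, 0.5])

lam_max = np.max(np.linalg.eigvalsh(M))
mu = 1.9 / lam_max                         # any 0 < mu < 2/lambda_max converges
theta = np.zeros(3)
for _ in range(2000):
    theta = theta + mu * (b - M @ theta)   # steepest descent update, cf. (24)

print(np.allclose(theta, np.linalg.solve(M, b)))  # True
```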

##### 3.2. Application of Inverse Filtering for Estimation of Observation Noise Covariance Matrix Given MAR Matrices

If the observed process $\mathbf{y}(n)$ is filtered by the MAR matrices (inverse filtering), the output process of the filter is obtained as
$$\boldsymbol{\alpha}(n)=\mathbf{y}(n)-\sum_{i=1}^{p}\mathbf{A}_{i}\,\mathbf{y}(n-i)=\mathbf{w}(n)+\mathbf{v}(n)-\sum_{i=1}^{p}\mathbf{A}_{i}\,\mathbf{v}(n-i).\qquad(26)$$
Evaluating the covariance of $\boldsymbol{\alpha}(n)$ at nonzero lags, where the driving noise $\mathbf{w}(n)$ does not contribute, yields a set of equations that is linear in the entries of $\mathbf{R}$. Applying the vec operator, this set can be rewritten as a standard linear system and solved in the least-squares sense for $\operatorname{vec}(\mathbf{R})$; here vec is the column vectorization operator, that is,
$$\operatorname{vec}\big([\mathbf{c}_{1}\;\;\mathbf{c}_{2}\;\;\cdots\;\;\mathbf{c}_{m}]\big)=\big[\mathbf{c}_{1}^{T}\;\;\mathbf{c}_{2}^{T}\;\;\cdots\;\;\mathbf{c}_{m}^{T}\big]^{T}.$$
By exploiting the symmetry property of $\mathbf{R}$, we can improve the estimate as [12]
$$\hat{\mathbf{R}}\leftarrow\operatorname{unvec}\Big(\tfrac{1}{2}\big(\operatorname{vec}(\hat{\mathbf{R}})+\operatorname{vec}(\hat{\mathbf{R}}^{T})\big)\Big)=\tfrac{1}{2}\big(\hat{\mathbf{R}}+\hat{\mathbf{R}}^{T}\big),$$
where unvec is the inverse operation of vec.

Using (26), the covariance of $\boldsymbol{\alpha}(n)$ is evaluated from the observed signal by replacing ensemble averages with sample averages, as in (32). In (32), the parameter matrix $\boldsymbol{\Theta}$ is unknown. So, we will use a recursive algorithm for estimating $\boldsymbol{\Theta}$ and $\mathbf{R}$. This algorithm is discussed in Section 3.3.

Now, we assume that $\boldsymbol{\Theta}$ and $\mathbf{R}$ are known and derive an estimate for $\mathbf{Q}$.

Evaluating the covariance of $\boldsymbol{\alpha}(n)$ at lag zero, we obtain
$$\mathbf{Q}=\mathbf{R}_{\alpha}(0)-\mathbf{R}-\sum_{i=1}^{p}\mathbf{A}_{i}\,\mathbf{R}\,\mathbf{A}_{i}^{T},\qquad(33)$$
where
$$\mathbf{R}_{\alpha}(0)=E\big[\boldsymbol{\alpha}(n)\,\boldsymbol{\alpha}^{T}(n)\big].$$
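The lag-zero identity used in (33) can be verified by Monte Carlo simulation of the inverse-filter residual; the model matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, p, N = 2, 2, 200000
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[-0.2, 0.0], [0.1, -0.1]])]
Q = np.array([[1.0, 0.3], [0.3, 1.0]])
R = np.array([[0.5, 0.2], [0.2, 0.5]])

w = rng.multivariate_normal(np.zeros(m), Q, size=N)
v = rng.multivariate_normal(np.zeros(m), R, size=N)
# inverse-filter residual: alpha(n) = w(n) + v(n) - sum_i A_i v(n-i)
alpha = w[p:] + v[p:] - sum(v[p - (i + 1):N - (i + 1)] @ A[i].T
                            for i in range(p))

R_alpha0_hat = alpha.T @ alpha / alpha.shape[0]
R_alpha0_theory = Q + R + sum(Ai @ R @ Ai.T for Ai in A)
print(np.allclose(R_alpha0_hat, R_alpha0_theory, atol=0.05))  # True
```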

##### 3.3. The Algorithm

The results derived in the previous sections and subsections are summarized to propose a recursive algorithm for estimating $\mathbf{R}$, $\mathbf{Q}$, and the $\mathbf{A}_{i}$'s. The improved least-squares algorithm for multichannel processes based on inverse filtering (IFILSM) is as follows.

*Step 1.* Compute autocovariance estimates by means of the given samples $\{\mathbf{y}(n)\}_{n=1}^{N}$:
$$\hat{\mathbf{R}}_{y}(k)=\frac{1}{N}\sum_{n=k+1}^{N}\mathbf{y}(n)\,\mathbf{y}^{T}(n-k),$$
and use them to construct the covariance matrix estimates $\hat{\mathbf{r}}$, $\hat{\mathbf{R}}_{\varphi}$, $\hat{\mathbf{r}}_{H}$, and $\hat{\mathbf{R}}_{H}$.

*Step 2.* Initialize $\hat{\mathbf{R}}^{(0)}=\mathbf{0}$ and $j=0$, where $j$ denotes the iteration number. Then, compute the initial estimate $\hat{\boldsymbol{\Theta}}^{(0)}$ from the steepest descent recursion (24).

*Step 3.* Set $j=j+1$ and calculate the estimate of the observation noise covariance matrix $\hat{\mathbf{R}}^{(j)}$ by the inverse filtering procedure of Section 3.2.

*Step 4.* Perform the bias correction by recomputing $\hat{\boldsymbol{\Theta}}^{(j)}$ from (24) using the current estimate $\hat{\mathbf{R}}^{(j)}$.

*Step 5.* If
$$\big\|\hat{\boldsymbol{\Theta}}^{(j)}-\hat{\boldsymbol{\Theta}}^{(j-1)}\big\|<\epsilon,$$
where $\|\cdot\|$ is the Euclidean norm and $\epsilon$ is an appropriate small positive number, the convergence is achieved and the iteration process must be terminated; otherwise, go to Step 3.

*Step 6.* Estimate $\hat{\mathbf{Q}}$ via (33).
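To illustrate the alternating structure of Steps 3-5, here is a scalar (single-channel, AR(1)) analogue in which step (A) performs the bias-corrected Yule-Walker estimate given the current noise-variance estimate and step (B) re-estimates the noise variance from the inverse-filter residual. The closed-form updates are a simplified sketch of the alternating idea, not the IFILSM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, var_w, var_v, N = 0.8, 1.0, 1.0, 50000

# Noisy scalar AR(1): x(n) = a x(n-1) + w(n), y(n) = x(n) + v(n).
x = np.zeros(N)
for n in range(1, N):
    x[n] = a_true * x[n - 1] + rng.normal(0.0, np.sqrt(var_w))
y = x + rng.normal(0.0, np.sqrt(var_v), size=N)

r = [np.dot(y[k:], y[:N - k]) / N for k in range(3)]  # sample r_y(0), r_y(1), r_y(2)

# Alternate: (A) bias-corrected low-order YW given var_v; (B) lag-1 covariance
# of the inverse-filter residual alpha(n) = y(n) - a y(n-1) equals -a var_v.
var_v_hat = 0.0
for _ in range(50):
    a_hat = r[1] / (r[0] - var_v_hat)                            # step (A)
    r_alpha1 = r[1] - a_hat * r[0] - a_hat * r[2] + a_hat**2 * r[1]
    var_v_hat = -r_alpha1 / a_hat                                # step (B)

print(round(a_hat, 2), round(var_v_hat, 2))
```

The population version of this alternation contracts with factor $1-a^2$ near the true parameters, so for moderate SNR it converges quickly.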

#### 4. Simulation Result

In the simulation study, we have compared the IFILSM method with the other existing methods. Two examples have been considered: first, frequency estimation and, second, synthetic noisy MAR model estimation. In all simulations, the parameters of the proposed IFILSM algorithm are set as specified in each example.

##### 4.1. Estimating the Frequency of Multiple Sinusoids Embedded in Spatially Colored Noise

Consider several sinusoidal signals in spatially colored noise as follows: It is well known that a sum of sinusoids can be modeled as an MAR process, of order twice the number of sinusoids per channel, with a zero driving noise vector ($\mathbf{w}(n)=\mathbf{0}$). So, we can apply the method proposed in the previous subsection to estimate the frequencies of sinusoidal signals corrupted with spatially colored observation noise. Note that the frequencies of the sinusoids can be estimated by locating the peaks of the MAR spectrum.
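The sinusoid-to-AR connection can be illustrated in the scalar case: a sinusoid $s(n)=\cos(\omega n+\phi)$ satisfies $s(n)=2\cos(\omega)\,s(n-1)-s(n-2)$ exactly, so $\omega$ is recovered as the angle of a root of the AR polynomial (a minimal sketch; the frequency value is illustrative).

```python
import numpy as np

omega = 2 * np.pi * 0.12                 # true normalized angular frequency
# AR(2) recursion satisfied exactly by s(n) = cos(omega*n + phi):
a1, a2 = 2 * np.cos(omega), -1.0
n = np.arange(64)
s = np.cos(omega * n + 0.7)
# check the recursion s(n) = a1 s(n-1) + a2 s(n-2)
assert np.allclose(s[2:], a1 * s[1:-1] + a2 * s[:-2])

# frequency recovery: roots of z^2 - a1 z - a2 lie on the unit circle
# at e^{+/- j omega}, i.e., at the poles of the (noise-free) AR model
roots = np.roots([1.0, -a1, -a2])
omega_hat = np.abs(np.angle(roots[0]))
print(np.isclose(omega_hat, omega))  # True
```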

*Example 1.* We consider data samples of a signal composed of two sinusoids corrupted by spatially colored noise:
where the phases are independent random phases uniformly distributed over $[0, 2\pi)$, and the noise is Gaussian and spatially colored, having the following covariance matrix:
The signal-to-noise ratio (SNR) of each channel is defined as
which is set to 5 and 10 dB. The number of trials is set to 1000, and the number of high-order equations $M$ in the IFILSM and MYW methods is set to 4. The threshold $\epsilon$ in the ILSV, ALSV, and IFILSM methods is set to 0.01. We compare the proposed method with the LS, MYW, ALSV, and ILSV methods in terms of the mean square error (MSE) of the frequency estimates. Note that the other existing methods cannot handle spatially correlated observation noise. In Table 1, we tabulate the MSEs in dB for the LS, MYW, ALSV, ILSV, and proposed methods. From the table, we see that the proposed method has the best performance.

##### 4.2. Synthetic MAR Processes

*Example 2. *Consider a noisy MAR model with , , and coefficient matrices
The driving process and the observation noise are mutually uncorrelated, Gaussian, temporally white, and spatially correlated processes with covariance matrices
We compare the ILSV, the ALSV, and the proposed IFILSM methods in terms of the MSE of the MAR coefficient estimates and the convergence probability, defined as the ratio of the number of runs in which an iterative algorithm converges to the total number of runs. The number of high-order equations $M$ in the IFILSM method is set to 4. The threshold $\epsilon$ in the ILSV, ALSV, and IFILSM methods is set to 0.01.

In Case 1 and Case 2, the SNR of each channel is set to 15 and 10 dB, respectively. In this example, the number of data samples $N$ is varied from 1000 to 4000 in steps of 1000. The MSE and convergence probability values are plotted versus $N$ in Figures 1, 2, and 3. The probability of convergence is one for all algorithms in Case 1, so that figure is not plotted here. From the figures, we can see that both the accuracy and the convergence of the IFILSM method are better than those of the ILSV and ALSV methods. Note that the ILSV and ALSV methods have poor convergence when the SNR is lower than or equal to 10 dB. It can also be seen that the performance of all algorithms is nearly constant over the sample sizes considered, because the autocovariance estimates of the observations change little in this range of data sample size.

#### 5. Conclusion

A steepest descent (SD) algorithm has been used for estimating the parameters of a noisy multichannel AR model. The observation noise covariance matrix is nondiagonal; that is, the channel noises are assumed to be spatially correlated. The inverse filtering idea is used to estimate the observation noise covariance matrix. Computer simulations showed that the proposed method has better accuracy and convergence than the ILSV and ALSV methods.

#### Appendix

The following result discusses the convergence conditions of the proposed algorithm.

Theorem A.1. *The necessary and sufficient condition for the convergence of the proposed algorithm is that the step size parameter $\mu$ satisfy
$$0<\mu<\frac{2}{\lambda_{\max}},$$
where $\lambda_{\max}$ is the maximum eigenvalue of the coefficient matrix of the normal equations in (20).*

*Proof.* Defining the estimation error matrix $\mathbf{V}(k)=\boldsymbol{\Theta}(k)-\boldsymbol{\Theta}$, substituting it into (24), and using (20), we obtain
$$\mathbf{V}(k+1)=\Big(\mathbf{I}-\mu\big(\tilde{\mathbf{R}}-\tilde{\mathbf{D}}\big)\Big)\,\mathbf{V}(k).$$
The eigenvalues of $\mathbf{I}-\mu(\tilde{\mathbf{R}}-\tilde{\mathbf{D}})$ are $1-\mu\lambda_{i}$, where the $\lambda_{i}$ are the eigenvalues of the coefficient matrix.

The error recursion converges to zero if and only if $|1-\mu\lambda_{i}|<1$ for all $i$; that is, $0<\mu<2/\lambda_{i}$ for each $i$, and the intersection of these intervals is $0<\mu<2/\lambda_{\max}$.
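The bound in Theorem A.1 can be checked numerically: the error recursion contracts for step sizes below $2/\lambda_{\max}$ and diverges above it. The diagonal matrix below is an illustrative stand-in for the coefficient matrix.

```python
import numpy as np

M = np.diag([0.5, 1.0, 4.0])            # stand-in for the PSD system matrix
lam_max = 4.0

def error_norm_after(mu, steps=200):
    """Iterate the error recursion V(k+1) = (I - mu*M) V(k)."""
    V = np.ones(3)
    for _ in range(steps):
        V = (np.eye(3) - mu * M) @ V
    return np.linalg.norm(V)

print(error_norm_after(0.9 * (2 / lam_max)) < 1e-3)   # converges: True
print(error_norm_after(1.1 * (2 / lam_max)) > 1e3)    # diverges:  True
```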

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### References

1. S. M. Kay, *Modern Spectral Estimation: Theory and Application*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1988.
2. S. Srinivasan, R. Aichner, W. B. Kleijn, and W. Kellermann, “Multichannel parametric speech enhancement,” *IEEE Signal Processing Letters*, vol. 13, no. 5, pp. 304–307, 2006.
3. C. Komninakis, C. Fragouli, A. H. Sayed, and R. D. Wesel, “Multi-input multi-output fading channel tracking and equalization using Kalman estimation,” *IEEE Transactions on Signal Processing*, vol. 50, no. 5, pp. 1065–1076, 2002.
4. P. Wang, H. Li, and B. Himed, “A new parametric GLRT for multichannel adaptive signal detection,” *IEEE Transactions on Signal Processing*, vol. 58, no. 1, pp. 317–325, 2010.
5. S. Marple, *Digital Spectral Analysis with Applications*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1987.
6. D. T. Pham and D. Q. Tong, “Maximum likelihood estimation for a multivariate autoregressive model,” *IEEE Transactions on Signal Processing*, vol. 42, no. 11, pp. 3061–3072, 1994.
7. A. Schlogl, “A comparison of multivariate autoregressive estimators,” *Signal Processing*, vol. 86, no. 9, pp. 2426–2429, 2006.
8. A. Nehorai and P. Stoica, “Adaptive algorithms for constrained ARMA signals in the presence of noise,” *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 36, no. 8, pp. 1282–1291, 1988.
9. W. X. Zheng, “Fast identification of autoregressive signals from noisy observations,” *IEEE Transactions on Circuits and Systems II: Express Briefs*, vol. 52, no. 1, pp. 43–48, 2005.
10. A. Mahmoudi and M. Karimi, “Estimation of the parameters of multichannel autoregressive signals from noisy observations,” *Signal Processing*, vol. 88, no. 11, pp. 2777–2783, 2008.
11. W. X. Zheng, “Autoregressive parameter estimation from noisy data,” *IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing*, vol. 47, no. 1, pp. 71–75, 2000.
12. X. M. Qu, J. Zhou, and Y. T. Luo, “A new noise-compensated estimation scheme for multichannel autoregressive signals from noisy observations,” *Journal of Supercomputing*, vol. 58, no. 1, pp. 34–49, 2011.
13. J. Petitjean, E. Grivel, W. Bobillet, and P. Roussilhe, “Multichannel AR parameter estimation from noisy observations as an errors-in-variables issue,” *Signal, Image and Video Processing*, vol. 4, no. 2, pp. 209–220, 2010.
14. A. Jamoos, E. Grivel, N. Shakarneh, and H. Abdel-Nour, “Dual optimal filters for parameter estimation of a multivariate autoregressive process from noisy observations,” *IET Signal Processing*, vol. 5, no. 5, pp. 471–479, 2011.
15. D. Labarre, E. Grivel, Y. Berthoumieu, E. Todini, and M. Najim, “Consistent estimation of autoregressive parameters from noisy observations based on two interacting Kalman filters,” *Signal Processing*, vol. 86, no. 10, pp. 2863–2876, 2006.
16. D. Labarre, E. Grivel, M. Najim, and N. Christov, “Dual ${H}_{\infty}$ algorithms for signal processing-application to speech enhancement,” *IEEE Transactions on Signal Processing*, vol. 55, no. 11, pp. 5195–5208, 2007.
17. W. Bobillet, R. Diversi, E. Grivel, R. Guidorzi, M. Najim, and U. Soverini, “Speech enhancement combining optimal smoothing and errors-in-variables identification of noisy AR processes,” *IEEE Transactions on Signal Processing*, vol. 55, no. 12, pp. 5564–5578, 2007.