Abstract

This paper deals with the problem of two-dimensional autoregressive (AR) estimation from noisy observations. The Yule-Walker equations are solved using an adaptive steepest descent (SD) algorithm. Performance comparisons with existing methods demonstrate the merits of the proposed method.

1. Introduction

Two-dimensional (2D) autoregressive modeling is important in many signal processing applications, including image processing, radar, sonar, and communications. In image processing, it has been applied to image modeling [1], texture analysis [2], and hyperspectral imagery [3]. In radar and sonar, it appears in direction finding, model-based detection, and spectral estimation [4-6]. It can also be applied to fading channel estimation in communications [7].

In the noise-free case, 2D AR estimation is investigated in [8-10]. A 2D lattice structure is proposed for 2D AR modeling in [8], which is capable of simultaneously providing all possible types of 2D causal quarter-plane (QP) and asymmetric half-plane (ASHP) AR models. Model identification of a noncausal 2D AR process is presented in [9], where an order-recursive algorithm is proposed to solve the 2D Yule-Walker equations. Modeling of 2D AR processes with various regions of support is considered in [10].

One-dimensional AR parameter estimation from noisy observations is well investigated in the literature (see [11-14]). The literature on two-dimensional noisy autoregressive fields, however, is sparse. In [15], a method based on a combination of the Yule-Walker equations and third-order moments is proposed; it requires the driving noise process to be non-Gaussian. Recently, in [16], combinations of the low-order and high-order Yule-Walker equations are solved as a quadratic eigenvalue problem. This method is an extension of the method proposed by Davila in [11].

In this paper, we propose a Yule-Walker based method for the estimation of 2D AR parameters from noisy observations. Because of the observation noise variance, the Least-Squares (LS) estimate of the parameters is biased. We propose an estimation scheme that uses the additional Yule-Walker equations beyond the order of the model to estimate the observation noise variance. Then, using a steepest descent (SD) algorithm, we compensate for the resulting bias in the estimate. Numerical examples are given to show the effectiveness of the proposed estimation method.

2. Problem Formulation

A 2D autoregressive (AR) field is given by
$$\sum_{i=0}^{p_1}\sum_{j=0}^{p_2} a(i,j)\,x(m-i,\,n-j) = w(m,n), \qquad a(0,0) = 1, \tag{1}$$
where $a(i,j)$ are the AR parameters, $(p_1,p_2)$ is the order of the field, $x(m,n)$ is the output of the model, and the driving input field, $w(m,n)$, is a 2D white zero-mean stationary field having variance $\sigma_w^2$. In practice, the observed field is given by
$$y(m,n) = x(m,n) + v(m,n), \tag{2}$$
where $v(m,n)$ is the observation noise, a zero-mean stationary field having variance $\sigma_v^2$, assumed to be uncorrelated with $x(m,n)$.
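To make the signal model concrete, the following minimal sketch (not the author's code) generates a causal quarter-plane AR field according to (1) and the noisy observations of (2). The order $(1,1)$, the coefficient values, and the noise levels are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

p1, p2 = 1, 1                                   # assumed model order
a = {(0, 1): -0.5, (1, 0): -0.5, (1, 1): 0.25}  # hypothetical a(i,j); a(0,0) = 1
sigma_w, sigma_v = 1.0, 0.5                     # driving / observation noise std

M, N = 128, 128
w = sigma_w * rng.standard_normal((M, N))
x = np.zeros((M, N))

# Causal QP recursion from (1): x(m,n) = -sum_{(i,j)!=(0,0)} a(i,j) x(m-i,n-j) + w(m,n)
for m in range(M):
    for n in range(N):
        acc = w[m, n]
        for (i, j), aij in a.items():
            if m - i >= 0 and n - j >= 0:
                acc -= aij * x[m - i, n - j]
        x[m, n] = acc

# Noisy observations, as in (2)
y = x + sigma_v * rng.standard_normal((M, N))
```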

The model is assumed to be causal, stable, and with quarter-plane (QP) support. The order of the model is assumed to be known. Some order selection methods for the 2D AR case are presented in [17].

The power spectral density (PSD) corresponding to the noiseless 2D AR field in (1) is given by [18]
$$P_x(\omega_1,\omega_2) = \frac{\sigma_w^2}{\Bigl|\sum_{i=0}^{p_1}\sum_{j=0}^{p_2} a(i,j)\,e^{-\mathrm{j}(\omega_1 i + \omega_2 j)}\Bigr|^2}. \tag{3}$$
The Yule-Walker (YW) equations for the noiseless 2D AR field given by (1) are as follows [19]:
$$\sum_{i=0}^{p_1}\sum_{j=0}^{p_2} a(i,j)\,r_x(k-i,\,l-j) = \sigma_w^2\,\delta(k,l), \qquad k \ge 0,\ l \ge 0, \tag{4}$$
where $r_x(k,l) = E\{x(m,n)\,x(m+k,\,n+l)\}$ is the 2D autocorrelation function, $E\{\cdot\}$ is the expectation operator, and $\delta(k,l)$ is the Kronecker delta function, defined by
$$\delta(k,l) = \begin{cases} 1, & (k,l) = (0,0),\\ 0, & \text{otherwise}. \end{cases} \tag{5}$$
Due to the uncorrelatedness of $x(m,n)$ and $v(m,n)$, we have $E\{x(m,n)\,v(m+k,\,n+l)\} = 0$. So, the autocorrelation function of the observed field is given by
$$r_y(k,l) = r_x(k,l) + \sigma_v^2\,\delta(k,l). \tag{6}$$
Our objective is to estimate $a(i,j)$ for $0 \le i \le p_1$, $0 \le j \le p_2$, $(i,j) \ne (0,0)$, from the observations $y(m,n)$ for $0 \le m \le M-1$, $0 \le n \le N-1$.
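As a companion to (6), and for later use in Step 1 of the algorithm in Section 3.2, the sketch below computes a biased sample estimate of $r_y(k,l)$. The estimator form and the function name autocorr_2d are assumptions of this rewrite; negative lags are handled through the symmetry $r_y(k,l) = r_y(-k,-l)$.

```python
import numpy as np

def autocorr_2d(y, k, l):
    """Biased sample estimate of r_y(k,l); negative lags use r_y(k,l) = r_y(-k,-l)."""
    M, N = y.shape
    if k < 0:
        k, l = -k, -l
    if l >= 0:
        prod = y[:M - k, :N - l] * y[k:, l:]
    else:
        prod = y[:M - k, -l:] * y[k:, :N + l]
    return prod.sum() / (M * N)

# Stand-in data; in practice y is the observed field of (2).
y = np.random.default_rng(1).standard_normal((64, 64))
print(autocorr_2d(y, 0, 0), autocorr_2d(y, 1, -1))
```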

3. The Algorithm

The YW equations in (4), for the lags $0 \le k \le p_1$ and $0 \le l \le p_2$, can be written in matrix form as
$$\mathbf{R}_x\,\mathbf{a} = \sigma_w^2\,\mathbf{e}_1, \tag{7}$$
where $\mathbf{R}_x$ is the matrix of autocorrelations $r_x(k-i,\,l-j)$ and
$$\mathbf{a} = [1,\ a(0,1),\ \ldots,\ a(p_1,p_2)]^T, \qquad \mathbf{e}_1 = [1,\ 0,\ \ldots,\ 0]^T \tag{8}$$
are $(p_1+1)(p_2+1)\times 1$ vectors.

Because $\sigma_w^2$ is unknown, the first row in (7) is removed; after rearranging the remaining equations in terms of the unknown parameter vector $\boldsymbol{\theta}$ and using (6), the YW equations can be rewritten as follows:
$$\bigl(\mathbf{R}_y - \sigma_v^2\,\mathbf{I}\bigr)\,\boldsymbol{\theta} = -\mathbf{r}, \tag{9}$$
where $\mathbf{R}_y$ is the $N_a \times N_a$ matrix of observed autocorrelations $r_y(k-i,\,l-j)$, with $N_a = (p_1+1)(p_2+1)-1$, and $\boldsymbol{\theta}$ and $\mathbf{r} = [r_y(0,1),\ \ldots,\ r_y(p_1,p_2)]^T$ are $N_a \times 1$ vectors. Note that $\boldsymbol{\theta}$ is $\mathbf{a}$ after removing the first element $a(0,0) = 1$.

Multiplying both sides of (9) by $\mathbf{R}_y^{-1}$, we obtain
$$\boldsymbol{\theta} = -\mathbf{R}_y^{-1}\mathbf{r} + \sigma_v^2\,\mathbf{R}_y^{-1}\boldsymbol{\theta} = \hat{\boldsymbol{\theta}}_{LS} + \sigma_v^2\,\mathbf{R}_y^{-1}\boldsymbol{\theta}. \tag{10}$$
If $\sigma_v^2$ is assumed to be known, we can iteratively estimate $\boldsymbol{\theta}$ using the SD algorithm as follows [20]:
$$\hat{\boldsymbol{\theta}}^{(k+1)} = \hat{\boldsymbol{\theta}}^{(k)} + \mu\Bigl(\hat{\boldsymbol{\theta}}_{LS} + \sigma_v^2\,\mathbf{R}_y^{-1}\hat{\boldsymbol{\theta}}^{(k)} - \hat{\boldsymbol{\theta}}^{(k)}\Bigr), \tag{11}$$
where $\hat{\boldsymbol{\theta}}_{LS} = -\mathbf{R}_y^{-1}\mathbf{r}$ is the Least-Squares (LS) estimate of $\boldsymbol{\theta}$, $\mu$ is the step size parameter, and $\mathbf{R}_y$ and $\mathbf{r}$ can be estimated from the observations. We can also control the stability and rate of convergence of the algorithm by changing $\mu$. The recursion converges if $\mu$ is selected between zero and $2/(1-\lambda_{\min})$, where $\lambda_{\min}$ is the minimum eigenvalue of the matrix $\sigma_v^2\mathbf{R}_y^{-1}$.
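The following sketch illustrates (9)-(11) under the notation assumed in this rewrite: it builds $\mathbf{R}_y$ and $\mathbf{r}$ from an autocorrelation function, computes the LS estimate, and runs the SD recursion for a known $\sigma_v^2$. The separable stand-in autocorrelation and the helper names are illustrative choices, not taken from the paper.

```python
import numpy as np

def build_yw(r, p1, p2):
    """R_y and r of (9) from an autocorrelation function r(k, l)."""
    lags = [(k, l) for k in range(p1 + 1) for l in range(p2 + 1) if (k, l) != (0, 0)]
    R_y = np.array([[r(k - i, l - j) for (i, j) in lags] for (k, l) in lags])
    r_vec = np.array([r(k, l) for (k, l) in lags])
    return R_y, r_vec, lags

def sd_estimate(r, p1, p2, sigma_v2, mu=1.0, n_iter=50):
    """Iterate the SD recursion (11) for a known observation-noise variance."""
    R_y, r_vec, lags = build_yw(r, p1, p2)
    Ry_inv = np.linalg.inv(R_y)
    theta_ls = -Ry_inv @ r_vec                   # biased LS estimate of theta
    theta = theta_ls.copy()
    for _ in range(n_iter):
        theta = theta + mu * (theta_ls + sigma_v2 * (Ry_inv @ theta) - theta)
    return theta, lags

# Separable stand-in autocorrelation with a sigma_v^2 = 0.1 spike at lag (0,0), cf. (6).
r = lambda k, l: 0.7 ** abs(k) * 0.7 ** abs(l) + (0.1 if (k, l) == (0, 0) else 0.0)
theta, lags = sd_estimate(r, p1=1, p2=1, sigma_v2=0.1)
print(dict(zip(lags, np.round(theta, 3))))
```

For this stand-in, the recursion converges to the coefficients of the underlying separable field, approximately $a(0,1) = a(1,0) = -0.7$ and $a(1,1) = 0.49$, whereas $\hat{\boldsymbol{\theta}}_{LS}$ alone is biased by the lag-zero noise term.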

In many signal processing applications, the observation noise variance is unknown, so we must estimate it. In the following subsection, we present a method to estimate the observation noise variance.

3.1. The Observation Noise Variance Estimation Given $\boldsymbol{\theta}$

Consider the YW equations (4) for lags beyond the model order, that is, $k > p_1$ or $l > p_2$ (with $k,l \ge 0$). None of the autocorrelation lags involved is $(0,0)$, so by (6) the observed autocorrelations can be used directly and we have
$$\sum_{i=0}^{p_1}\sum_{j=0}^{p_2} a(i,j)\,r_y(k-i,\,l-j) = 0. \tag{12}$$
We arrange a set of the equations in (12) in terms of $\boldsymbol{\theta}$ as
$$\mathbf{C}\,\boldsymbol{\theta} = -\mathbf{c}, \tag{13}$$
where $\mathbf{C}$ is an $L \times N_a$ matrix and $\mathbf{c}$ is an $L \times 1$ vector, both built from the observed autocorrelations at the higher lags, with $L$ the number of higher-lag equations used.

Multiplying both sides of (10) by $\mathbf{C}$ and using (13), we obtain
$$-\mathbf{c} = \mathbf{C}\,\hat{\boldsymbol{\theta}}_{LS} + \sigma_v^2\,\mathbf{C}\mathbf{R}_y^{-1}\boldsymbol{\theta}. \tag{14}$$
The LS estimate of $\sigma_v^2$ can be obtained via
$$\hat{\sigma}_v^2 = -\frac{\mathbf{g}^T\bigl(\mathbf{c} + \mathbf{C}\,\hat{\boldsymbol{\theta}}_{LS}\bigr)}{\|\mathbf{g}\|^2}, \qquad \mathbf{g} = \mathbf{C}\mathbf{R}_y^{-1}\boldsymbol{\theta}, \tag{15}$$
where $\|\cdot\|$ is the Euclidean norm. Now we can estimate $\boldsymbol{\theta}$ and $\sigma_v^2$ by an iterative algorithm, which is summarized in the following subsection.
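For concreteness, the sketch below implements the high-order YW system (12)-(13) and the closed-form LS estimate (15) as reconstructed in this rewrite; the particular choice of higher lags (the rows of $\mathbf{C}$) and the function names are assumptions, not the paper's specification.

```python
import numpy as np

def high_order_yw(r, p1, p2, extra=2):
    """C and c of (13) from lags just beyond the QP support (assumed lag choice)."""
    param_lags = [(i, j) for i in range(p1 + 1) for j in range(p2 + 1) if (i, j) != (0, 0)]
    hi_lags = [(k, l) for k in range(p1 + 1, p1 + 1 + extra) for l in range(p2 + 1)]
    C = np.array([[r(k - i, l - j) for (i, j) in param_lags] for (k, l) in hi_lags])
    c = np.array([r(k, l) for (k, l) in hi_lags])
    return C, c

def noise_var_estimate(C, c, theta_ls, Ry_inv, theta):
    """LS solution of  -c = C theta_LS + sigma_v^2 C Ry^{-1} theta,  i.e. (15)."""
    g = C @ (Ry_inv @ theta)
    return float(-(g @ (c + C @ theta_ls)) / (g @ g))
```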

3.2. The Proposed Algorithm

The proposed estimation algorithm can be summarized as follows; a code sketch of the complete procedure is given after Step 5.

Step 1. Estimate the autocorrelations $\hat{r}_y(k,l)$, for all lags required in (9) and (13), from the data samples as follows [15]:
$$\hat{r}_y(k,l) = \frac{1}{MN}\sum_{m}\sum_{n} y(m,n)\,y(m+k,\,n+l), \tag{16}$$
where the sums run over all $(m,n)$ for which both samples lie inside the observed $M \times N$ field.

Step 2. Form $\hat{\mathbf{R}}_y$, $\hat{\mathbf{r}}$, $\mathbf{C}$, and $\mathbf{c}$, and compute $\hat{\boldsymbol{\theta}}_{LS} = -\hat{\mathbf{R}}_y^{-1}\hat{\mathbf{r}}$.

Step 3. Set $k = 0$ and
$$\hat{\boldsymbol{\theta}}^{(0)} = \hat{\boldsymbol{\theta}}_{LS}. \tag{17}$$

Step 4. Set $k = k + 1$, compute $\hat{\sigma}_v^2$ from (15) with $\boldsymbol{\theta}$ replaced by $\hat{\boldsymbol{\theta}}^{(k-1)}$, and update
$$\hat{\boldsymbol{\theta}}^{(k)} = \hat{\boldsymbol{\theta}}^{(k-1)} + \mu\Bigl(\hat{\boldsymbol{\theta}}_{LS} + \hat{\sigma}_v^2\,\hat{\mathbf{R}}_y^{-1}\hat{\boldsymbol{\theta}}^{(k-1)} - \hat{\boldsymbol{\theta}}^{(k-1)}\Bigr). \tag{18}$$

Step 5. If [13]
$$\frac{\bigl\|\hat{\boldsymbol{\theta}}^{(k)} - \hat{\boldsymbol{\theta}}^{(k-1)}\bigr\|}{\bigl\|\hat{\boldsymbol{\theta}}^{(k-1)}\bigr\|} \le \varepsilon, \tag{19}$$
where $\varepsilon$ is a small positive number, convergence is achieved and the iteration process is terminated; otherwise, go to Step 4.
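Putting Steps 1-5 together, the following sketch alternates the noise-variance estimate (15) with the SD update (11) until the criterion (19) is met. It reuses the helpers autocorr_2d, build_yw, high_order_yw, and noise_var_estimate from the earlier sketches, and the default step size, threshold, and lag choices are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def estimate_2d_ar(y, p1, p2, mu=1.0, eps=1e-6, max_iter=200):
    r = lambda k, l: autocorr_2d(y, k, l)             # Step 1: sample autocorrelations
    R_y, r_vec, lags = build_yw(r, p1, p2)            # Step 2: form R_y, r, C, c
    C, c = high_order_yw(r, p1, p2)
    Ry_inv = np.linalg.inv(R_y)
    theta_ls = -Ry_inv @ r_vec                        #         and the LS estimate
    theta = theta_ls.copy()                           # Step 3: initialization
    sigma_v2 = 0.0
    for _ in range(max_iter):                         # Step 4: alternate (15) and (11)
        sigma_v2 = noise_var_estimate(C, c, theta_ls, Ry_inv, theta)
        theta_new = theta + mu * (theta_ls + sigma_v2 * (Ry_inv @ theta) - theta)
        done = np.linalg.norm(theta_new - theta) <= eps * np.linalg.norm(theta)
        theta = theta_new
        if done:                                      # Step 5: criterion (19)
            break
    return dict(zip(lags, theta)), sigma_v2

# Example call, with y the observed field of (2) and a hypothetical order (1,1):
# theta_hat, sigma_v2_hat = estimate_2d_ar(y, 1, 1)
```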

3.3. Convergence Analysis

The convergence analysis of the proposed algorithm is similar to that of the method proposed in [21]. The following result gives the convergence condition of the proposed algorithm. In this analysis, the observation noise variance is assumed to be known.

Theorem 1. The necessary and sufficient condition for the convergence of the proposed algorithm is that the step size parameter satisfies
$$0 < \mu < \frac{2}{1 - \lambda_{\min}},$$
where $\lambda_{\min}$ is the minimum eigenvalue of the matrix $\sigma_v^2\mathbf{R}_y^{-1}$.

Proof. Defining the estimation error vector $\mathbf{e}^{(k)} = \hat{\boldsymbol{\theta}}^{(k)} - \boldsymbol{\theta}$, substituting it into (11), and using (10), we obtain
$$\mathbf{e}^{(k+1)} = \bigl[(1-\mu)\,\mathbf{I} + \mu\,\sigma_v^2\mathbf{R}_y^{-1}\bigr]\,\mathbf{e}^{(k)}.$$
The eigenvalues of $(1-\mu)\,\mathbf{I} + \mu\,\sigma_v^2\mathbf{R}_y^{-1}$ are $1 - \mu(1-\gamma_i)$, where $\gamma_i$, the $i$-th eigenvalue of $\sigma_v^2\mathbf{R}_y^{-1}$, is between 0 and 1.
If $0 < \mu < 2/(1-\gamma_i)$, then $|1 - \mu(1-\gamma_i)| < 1$. In this case, every error mode converges, and the intersection of these intervals over $i$ is $0 < \mu < 2/(1-\lambda_{\min})$.
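A small numerical check of Theorem 1 as reconstructed here: for a symmetric positive definite $\mathbf{R}_y = \mathbf{R}_x + \sigma_v^2\mathbf{I}$, the spectral radius of the iteration matrix $(1-\mu)\mathbf{I} + \mu\,\sigma_v^2\mathbf{R}_y^{-1}$ stays below one for step sizes inside the bound and exceeds one just outside it. The matrix below is a random stand-in, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_v2 = 0.2
A = rng.standard_normal((4, 4))
R_y = A @ A.T + sigma_v2 * np.eye(4)          # R_y = R_x + sigma_v^2 I (SPD stand-in)

B = sigma_v2 * np.linalg.inv(R_y)             # eigenvalues of B lie in (0, 1)
lam_min = np.linalg.eigvalsh(B).min()
mu_max = 2.0 / (1.0 - lam_min)                # bound of Theorem 1 (as reconstructed)

for mu in (0.5 * mu_max, 0.99 * mu_max, 1.01 * mu_max):
    T = (1.0 - mu) * np.eye(4) + mu * B       # iteration matrix of the error recursion
    rho = np.abs(np.linalg.eigvals(T)).max()
    print(f"mu = {mu:.3f}   spectral radius = {rho:.3f}")
```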

4. Simulation Results

In order to evaluate the performance of the proposed method and to compare it with the Least-Squares (LS) method, two examples are presented. In the first example, we generate data using a synthetic 2D noisy AR model, and in the second, we apply both methods to 2D sinusoidal spectral estimation.

Example 1. Consider a synthetic 2D noisy AR model of the form (1)-(2), where the driving noise $w(m,n)$ and the observation noise $v(m,n)$ are white Gaussian fields that are mutually uncorrelated.
The signal-to-noise ratio (SNR) is calculated by
$$\mathrm{SNR} = 10\log_{10}\frac{\sigma_x^2}{\sigma_v^2},$$
where $\sigma_x^2$ is the variance of the signal $x(m,n)$. The driving noise variance $\sigma_w^2$ is held fixed, and $\sigma_v^2$ is adjusted to produce the desired value of the SNR. The step size, $\mu$, is set to one.
The methods are compared in terms of the normalized root mean squared error (RMSE), which is defined by
$$\mathrm{RMSE} = \frac{1}{\|\mathbf{a}\|}\sqrt{\frac{1}{T}\sum_{t=1}^{T}\bigl\|\hat{\mathbf{a}}_t - \mathbf{a}\bigr\|^2},$$
where $\hat{\mathbf{a}}_t$ is the estimate of $\mathbf{a}$ in the $t$-th trial and $T$ is the total number of trials. The mean number of iterations per test (NIPT) for the proposed method is also reported, using the stopping criterion (19).
In this example, we set SNR = 5 dB. The results of the simulations are summarized in Table 1, from which it can be seen that the proposed method performs better than the LS method.
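The two performance measures of Example 1, as reconstructed in this rewrite (the exact normalization used in the paper may differ), can be computed as follows; the helper names are illustrative.

```python
import numpy as np

def snr_db(x, sigma_v2):
    """SNR = 10 log10(sigma_x^2 / sigma_v^2), with sigma_x^2 the sample variance of x."""
    return 10.0 * np.log10(np.var(x) / sigma_v2)

def normalized_rmse(estimates, a_true):
    """sqrt( (1/T) sum_t ||a_hat_t - a||^2 ) / ||a|| over T independent trials."""
    a_true = np.asarray(a_true, dtype=float)
    sq_err = [np.linalg.norm(np.asarray(a_hat) - a_true) ** 2 for a_hat in estimates]
    return float(np.sqrt(np.mean(sq_err)) / np.linalg.norm(a_true))
```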

Example 2. Consider a 2D sinusoidal signal observed in noise, $y(m,n) = x(m,n) + v(m,n)$, where $x(m,n)$ is a sinusoid with fixed spatial frequencies (in rad/sample), its phase is random and uniformly distributed over $[0, 2\pi)$, and $v(m,n)$ is zero-mean Gaussian noise with variance $\sigma_v^2$. Based on the linear prediction property of sinusoidal signals, we can model $x(m,n)$ as a 2D AR field of appropriate order with zero input driving noise. We then estimate the model parameters and compute the corresponding normalized power spectrum.
In this example, the noise variance $\sigma_v^2$ is adjusted to produce an SNR of -10 dB. The mean of the estimated spectrum is depicted in Figures 1 and 2 for the proposed and the LS methods, respectively.
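Spectra such as those in Figures 1 and 2 can be produced by evaluating the normalized AR power spectrum (3) on a frequency grid from the estimated coefficients. The sketch below does this with a zero-padded 2D FFT; the coefficient values shown are placeholders, since the paper's estimates are not reproduced here.

```python
import numpy as np

def ar_spectrum(theta, p1, p2, nfft=256):
    """Normalized P(w1,w2) proportional to 1 / |sum_{i,j} a(i,j) e^{-j(w1 i + w2 j)}|^2."""
    a = np.zeros((p1 + 1, p2 + 1))
    a[0, 0] = 1.0
    for (i, j), aij in theta.items():
        a[i, j] = aij
    A = np.fft.fft2(a, s=(nfft, nfft))        # A(w1, w2) sampled on an nfft x nfft grid
    P = 1.0 / np.abs(A) ** 2
    return P / P.max()                        # normalized power spectrum

# Placeholder coefficients for a quick run; in Example 2 they come from the
# estimator applied to the noisy sinusoidal field.
P = ar_spectrum({(0, 1): -0.5, (1, 0): -0.5, (1, 1): 0.25}, p1=1, p2=1)
```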

It can be seen from the power spectrum figures that the proposed method yields a sharper spectral estimate than the LS method. Note that, in all simulations presented in this section, the proposed method usually converged within a few iterations (NIPT = 6 iterations on average).

5. Conclusion

The two-dimensional noisy AR estimation problem has been addressed. The Yule-Walker equations are solved using an adaptive steepest descent algorithm. The bias induced by the observation noise variance is removed using the Yule-Walker equations beyond the order of the model. Simulation results showed that the proposed method estimates the frequencies of sinusoidal signals more sharply than the LS method.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.