Abstract

The standard Yule-Walker equations, as they are known for an autoregression, are generalized to involve the moments of a moving-average process indexed on any number of dimensions. Once observations become available, new method-of-moments estimators are defined to imitate the theoretical equations. These estimators are not only consistent but also asymptotically normal for any number of indexes. Their variance matrix resembles a standard result from maximum Gaussian likelihood estimation. A simulation study is added to assess their efficiency.

1. Introduction

At first, processes taking place on more than one dimension were considered on surfaces or in the three-dimensional space for the sake of spatial dynamics. Next, the time series Autoregressive Moving-Average model could also be used to describe the covariance dependence of spatial point processes. Nevertheless, the assumptions of causality and invertibility were based on conventional orderings of the vector indexes, as these were introduced by Whittle [1] and generalized by Guyon [2], and they failed to look natural on the spatial axes. The more general bilateral equations considered by Whittle [1] also gave rise to serious problems in the estimation of the parameters. Besag’s [3] automodels as well as Besag’s [4] equations for observations on irregular sites moved in a new direction, but the weaknesses in the estimation remained.

The inclusion of the time axis has provided more convenience for spatial analysis, which may be summarized in two ways. On the one hand, when the observations are collected on irregular sites, the asymptotic properties of the estimators for the spatial dependence parameters can now be established as the number of timings only increases to infinity; for example, Dimitriou-Fakalou [5] has proposed a methodology on how to proceed. On the other hand, when the regular recordings are increasing towards all the spatial as well as the time axes, the unilateral ordering of indexes can now be arranged in a meaningful way and the causal and invertible ARMA model may be preferred as a natural spatiotemporal equation.

Special cases of the ARMA model are the pure autoregressive and moving-average equations, and both possess useful properties reflected in the spectral density of interest. The autoregressive process uses a spectral density with a “finite denominator”, which for Gaussian processes [3] translates into a finite conditional expectation of any value based on all other values. A moving-average process uses a “finite numerator” in the spectral density, resulting in an autocovariance function with nonzero values on a finite set of “lags” only.

For a causal and finite autoregressive equation, the method of moments estimators according to the Yule-Walker equations almost coincide with the least squares or maximum Gaussian likelihood estimators, and their consistency as well as their asymptotic normality may be established. Nevertheless, if the estimation for a pure moving-average or ARMA multi-indexed process takes place using its infinite autoregressive representation, a problem known as the “edge-effect” [2] resurfaces and the asymptotic normality cannot be guaranteed. A modification of the Gaussian likelihood, written from the autoregressive representation of each observation based on its “previous” ones, has been proposed by Yao and Brockwell [6] to confine the edge-effect under a weak condition of a finite second moment, but this works for the case of processes indexed on two dimensions only.

In this paper, for the multidimensional processes that may be described by an invertible and finite moving-average equation, a new method of estimation is proposed imitating the Yule-Walker equations. The finite number of nonzero autocovariances of the moving-average process is used most advantageously to eliminate the edge-effect. As a result, the proposed estimators are not only consistent but also asymptotically normal under relaxed conditions of finite second moments.

2. Theoretical Yule-Walker Equations

For $d$ any positive integer, real-valued processes indexed on $\mathbb{Z}^d$ will be considered, such as time series or spatial processes on one line transect ($d = 1$), spatial processes in the plane ($d = 2$) or in the three-dimensional space ($d = 3$), as well as the combination of spatiotemporal processes on $\mathbb{Z}^2$, $\mathbb{Z}^3$, or $\mathbb{Z}^4$, respectively. For any $\mathbf{v} \in \mathbb{Z}^d$, an invertible moving-average process $\{X(\mathbf{v})\}$ will be defined by the following equation:

$X(\mathbf{v}) = \varepsilon(\mathbf{v}) + \theta_1\,\varepsilon(\mathbf{v}-\mathbf{i}_1) + \cdots + \theta_q\,\varepsilon(\mathbf{v}-\mathbf{i}_q)$,  (2.1)

where $\{\varepsilon(\mathbf{v})\}$ are uncorrelated, zero-mean random variables with positive and finite variance $\sigma^2$. In (2.1) a unilateral ordering $\mathbf{0} < \mathbf{i}_1 < \cdots < \mathbf{i}_q$ of the fixed lags has been considered. For $d = 1$, this is the standard ordering of integer numbers and, for $d = 2$, the conventional unilateral ordering has been introduced by Whittle [1]. Guyon [2] has generalized the ordering of two vector indexes on $\mathbb{Z}^d$, $d > 2$. As for the invertibility condition, it is only imposed to make sure that it can be written $\varepsilon(\mathbf{v}) = \sum_{\mathbf{j} \geq \mathbf{0}} \pi_{\mathbf{j}}\, X(\mathbf{v}-\mathbf{j})$ with absolutely summable coefficients, also using the ordering $\geq$. Anderson and Jury [7] have provided a sufficient condition for the invertibility of such filters.
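
To make the setup concrete, the following minimal sketch (our own illustration, not code from the paper; the grid size, the Gaussian errors, and the lags $\mathbf{i}_1 = (1,0)$ and $\mathbf{i}_2 = (0,1)$ are arbitrary choices) simulates the model (2.1) on a rectangle of $\mathbb{Z}^2$:

```python
# Minimal sketch (not from the paper): simulate the moving-average field (2.1)
# on Z^2 with two illustrative unilateral lags i1 = (1, 0) and i2 = (0, 1).
import numpy as np

rng = np.random.default_rng(0)

def simulate_ma_field(n1, n2, theta, lags, sigma=1.0):
    """X(v) = eps(v) + sum_k theta[k] * eps(v - lags[k]) on an n1 x n2 grid."""
    pad1 = max(a for a, _ in lags)          # how far back the lags reach
    pad2 = max(b for _, b in lags)
    eps = rng.normal(0.0, sigma, size=(n1 + pad1, n2 + pad2))
    X = eps[pad1:, pad2:].copy()            # the eps(v) term
    for t, (a, b) in zip(theta, lags):
        X += t * eps[pad1 - a : pad1 - a + n1, pad2 - b : pad2 - b + n2]
    return X

X = simulate_ma_field(100, 100, theta=[0.4, -0.3], lags=[(1, 0), (0, 1)])
print(X.shape, round(float(X.var()), 2))    # variance close to 1 + 0.4^2 + 0.3^2
```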

For the complex numbers $z_1, \ldots, z_d$ and the vector $\mathbf{z} = (z_1, \ldots, z_d)'$, the polynomial $\theta(\mathbf{z}) = 1 + \theta_1\,\mathbf{z}^{\mathbf{i}_1} + \cdots + \theta_q\,\mathbf{z}^{\mathbf{i}_q}$ is considered, where $\mathbf{z}^{\mathbf{i}} = \prod_{n=1}^{d} z_n^{i_n}$ for $\mathbf{i} = (i_1, \ldots, i_d)'$. Similarly, the backwards operators $B_1, \ldots, B_d$ and the vector backwards operator $\mathbf{B} = (B_1, \ldots, B_d)'$ are considered, such that $\mathbf{B}^{\mathbf{i}}\, X(\mathbf{v}) = X(\mathbf{v}-\mathbf{i})$. So (2.1) may now be rewritten as

$X(\mathbf{v}) = \theta(\mathbf{B})\, \varepsilon(\mathbf{v})$.  (2.2)

Writing $\mathbf{B}^{-1} = (B_1^{-1}, \ldots, B_d^{-1})'$, the autoregressive sequence $\{Y(\mathbf{v})\}$ may be defined from $\{\varepsilon(\mathbf{v})\}$ according to the following:

$\theta(\mathbf{B}^{-1})\, Y(\mathbf{v}) = \varepsilon(\mathbf{v})$.  (2.3)

The auto-regression is unilateral but not causal, in the sense that it can be written that $Y(\mathbf{v})$ is a function of the “future” terms $\varepsilon(\mathbf{v}+\mathbf{j})$, $\mathbf{j} \geq \mathbf{0}$. Moreover, writing the two polynomials as the product

$\theta(\mathbf{z})\, \theta(\mathbf{z}^{-1}) = \sum_{\mathbf{i} \in \mathcal{F}} \theta^{(\mathbf{i})}\, \mathbf{z}^{\mathbf{i}}$, with $\mathcal{F} \equiv \{\mathbf{i}_k - \mathbf{i}_l,\ k, l = 0, 1, \ldots, q\}$ and $\mathbf{i}_0 \equiv \mathbf{0}$,  (2.4)

it holds that

$\gamma_X(\mathbf{i}) \equiv E\{X(\mathbf{v})\, X(\mathbf{v}+\mathbf{i})\} = \sigma^2\, \theta^{(\mathbf{i})}$ for $\mathbf{i} \in \mathcal{F}$, and $\gamma_X(\mathbf{i}) = 0$ otherwise.  (2.5)

The set $\mathcal{F}$ in (2.4) has as members the vector “lags” on which $\{X(\mathbf{v})\}$ may have nonzero autocorrelations. The set $\mathcal{F}^{+} \equiv \{\mathbf{i} \in \mathcal{F} : \mathbf{i} > \mathbf{0}\}$ that uses the “positive” lags of $\mathcal{F}$ will have cardinality $I$.

The relationship in (2.5) provides the way to write Yule-Walker equations for a moving-average process. The processes $\{X(\mathbf{v})\}$ and $\{Y(\mathbf{v})\}$ are not only “caused” by the same sequence $\{\varepsilon(\mathbf{v})\}$, one from its “past” and the other from its “future” values, but also have “reverse” second-order dependence; that is, the spectral density of the auto-regression is

$g_Y(\boldsymbol{\omega}) = \dfrac{\sigma^2}{(2\pi)^d\, \theta(e^{\mathrm{i}\boldsymbol{\omega}})\, \theta(e^{-\mathrm{i}\boldsymbol{\omega}})}$, $\boldsymbol{\omega} \in [-\pi, \pi]^d$,  (2.8)

where $\theta(e^{\mathrm{i}\boldsymbol{\omega}})\, \theta(e^{-\mathrm{i}\boldsymbol{\omega}}) = \sum_{\mathbf{i} \in \mathcal{F}} \theta^{(\mathbf{i})}\, e^{\mathrm{i}\boldsymbol{\omega}'\mathbf{i}}$ according to (2.4), and the spectral density of the moving-average process is

$g_X(\boldsymbol{\omega}) = \dfrac{\sigma^2}{(2\pi)^d}\, \theta(e^{\mathrm{i}\boldsymbol{\omega}})\, \theta(e^{-\mathrm{i}\boldsymbol{\omega}})$,  (2.9)

which also implies that $g_X(\boldsymbol{\omega})\, g_Y(\boldsymbol{\omega}) = \sigma^4 / (2\pi)^{2d}$. While the original Yule-Walker equations for $\{Y(\mathbf{v})\}$ dictate that $\gamma_Y(\mathbf{i}) + \sum_{k=1}^{q} \theta_k\, \gamma_Y(\mathbf{i}-\mathbf{i}_k) = 0$, $\mathbf{i} > \mathbf{0}$, from (2.9) the following equations may be written for $\{X(\mathbf{v})\}$ and $\{Y(\mathbf{v})\}$ together, that is, $E\{X(\mathbf{v})\, Y(\mathbf{v}+\mathbf{i})\} = 0$ and $E\{X(\mathbf{v}+\mathbf{i})\, Y(\mathbf{v})\} = 0$ for $\mathbf{i} \in \mathcal{F}^{+}$, and their sum, which may be rewritten as

$E\{X(\mathbf{v})\, Y(\mathbf{v}+\mathbf{i}) + X(\mathbf{v}+\mathbf{i})\, Y(\mathbf{v})\} = 0$, $\mathbf{i} \in \mathcal{F}^{+}$.  (2.11)

Further to (2.11), it holds that

$E\{X(\mathbf{v})\, Y(\mathbf{v})\} = \sigma^2$  (2.12)

from (2.8) and (2.9). It is true that (2.11) and (2.12) are equivalent to $E\{X(\mathbf{v})\, Y(\mathbf{v}+\mathbf{i})\} = 0$, $\mathbf{i} \neq \mathbf{0}$, together with $E\{X(\mathbf{v})\, Y(\mathbf{v})\} = \sigma^2$. Like the standard Yule-Walker equations, the general versions (2.11) and (2.12) are constructed from the coefficients $\theta_1, \ldots, \theta_q$ and the variance $\sigma^2$. As the finite number of nonzero autocovariances of $\{X(\mathbf{v})\}$ is involved now, they will be used as theoretical prototypes to be imitated by their sample analogues for the sake of estimation of the moving-average parameters.
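
The relations (2.11) and (2.12) can be checked numerically in the simplest case $d = q = 1$. The sketch below (an illustration under our assumptions, with the series for $Y$ in “future” errors truncated at an arbitrary point) simulates $X_t = \varepsilon_t + \theta\,\varepsilon_{t-1}$ together with $Y_t = \sum_{j \geq 0} (-\theta)^j\, \varepsilon_{t+j}$:

```python
# Numerical sanity check (a sketch, not from the paper): for d = q = 1, the
# moving average X_t = e_t + theta*e_{t-1} and the non-causal auto-regression
# Y_t + theta*Y_{t+1} = e_t, i.e. Y_t = sum_{j>=0} (-theta)^j e_{t+j}, should
# satisfy E{X_t Y_{t+1} + X_{t+1} Y_t} = 0 and E{X_t Y_t} = sigma^2.
import numpy as np

rng = np.random.default_rng(1)
theta, n, trunc = 0.6, 100_000, 60            # trunc: cut-off of the Y series
e = rng.normal(size=n + trunc + 1)            # sigma^2 = 1
X = e[1 : n + 1] + theta * e[:n]              # X_t, t = 1, ..., n
w = (-theta) ** np.arange(trunc)              # weights of the "future" errors
Y = np.correlate(e, w, mode="valid")[1 : n + 1]   # Y_t = sum_j w_j e_{t+j}
print(round(float(np.mean(X[:-1] * Y[1:] + X[1:] * Y[:-1])), 3))  # ~ 0, cf. (2.11)
print(round(float(np.mean(X * Y)), 3))                            # ~ 1, cf. (2.12)
```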

Finally, as the assumption of invertibility of the moving-average equation used for parameterization will be essential, it should be noted that Dimitriou-Fakalou [8] has presented in detail the merits of a causal and invertible spatiotemporal ARMA model. As a special case, the moving-average process $\{X(\mathbf{s}, t)\}$, where $t \in \mathbb{Z}$ is the time index and $\mathbf{s} \in \mathbb{Z}^{d-1}$ are the spatial indexes, may satisfy the equation

$X(\mathbf{s}, t) = \varepsilon(\mathbf{s}, t) + \sum_{j=1}^{p} \sum_{\mathbf{s}' \in S_j} \theta_{j, \mathbf{s}'}\, \varepsilon(\mathbf{s}-\mathbf{s}', t-j)$,

where $\{\varepsilon(\mathbf{s}, t)\}$ are zero-mean, uncorrelated variables with identical, finite variance, $p$ is a finite positive integer, the spatial sets $S_j \subset \mathbb{Z}^{d-1}$ have a finite cardinality, and it holds that

$1 + \sum_{j=1}^{p} \sum_{\mathbf{s}' \in S_j} \theta_{j, \mathbf{s}'}\, z^{j}\, \mathbf{z}^{\mathbf{s}'} \neq 0$ for $|z| \leq 1$ and $|z_1| = \cdots = |z_{d-1}| = 1$,

that is, the model is invertible. Note that, in either equation, no unnatural ordering is forced over space, as the unilateral conventions “run” over the past timings only. Hence the model both uses a space-time unilateral ordering of indexes and is meaningful. As a result, for the special case of a stationary process with an autocovariance function identified to take zero values after a reasonably small number of lags on all spatial and temporal dimensions, the invertible moving-average equation should be preferred and the new results presented next will be useful.

3. Method of Moments Estimators

For $S \subset \mathbb{Z}^d$ being a set of finite cardinality $N$, there are available observations $\{X(\mathbf{v}),\ \mathbf{v} \in S\}$ from (2.2) with true parameters $\theta_{0,1}, \ldots, \theta_{0,q}$ and $\sigma^2$ that are to be estimated. The following condition will be assumed to be true.

Condition C1. The parameter space $\Theta \subset \mathbb{R}^{q}$ is a compact set containing the true value $\boldsymbol{\theta}_0 \equiv (\theta_{0,1}, \ldots, \theta_{0,q})'$ as an inner point. Further, for any $\boldsymbol{\theta} \in \Theta$, the moving-average model (2.2) is invertible.

Based on $S$, a set $S_{\mathbf{i}} \subseteq S$ will be defined for any $\mathbf{i} \in \mathcal{F}^{+}$. More specifically, it will hold that $\mathbf{v} \in S_{\mathbf{i}}$ only if $\mathbf{v}, \mathbf{v}+\mathbf{i} \in S$. Next, for the corrected set $S^{*}$ with $N^{*}$ elements, it will hold $\mathbf{v} \in S^{*}$ if $\mathbf{v}+\mathbf{i} \in S$ for every $\mathbf{i} \in \mathcal{F}$. Thus for any $\mathbf{i} \in \mathcal{F}^{+}$, it holds that $S^{*} \subseteq S_{\mathbf{i}} \subseteq S$. The reduction from $S$ to $S^{*}$ is essential as it will guarantee later that the edge-effect will fade away; this is because the set $S^{*}$ includes the locations $\mathbf{v}$ that have all their “neighbors” $\mathbf{v}+\mathbf{i}$, $\mathbf{i} \in \mathcal{F}$, with $X(\mathbf{v}+\mathbf{i})$ being available. As the source of the edge-effect is the speed of the absolute bias to zero, $S^{*}$ will guarantee that whatever “is missing” from the finite sample will not add at all to the bias of the estimators.
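
In code, the corrected set can be obtained by a direct scan; the sketch below is a plain illustration of the definition above (the set and lag names follow this rewrite's notation, not the paper's):

```python
# Plain illustration of the edge-correction (notation from this rewrite):
# S* keeps the sites v in S whose "neighbors" v + i are in S for every
# nonzero lag i of the set F on which gamma_X may be nonzero.
def corrected_set(S, F_nonzero):
    return {v for v in S
            if all((v[0] + i[0], v[1] + i[1]) in S for i in F_nonzero)}

# Example: a 10 x 10 grid and MA lags i1 = (1, 0), i2 = (0, 1), for which
# F \ {0} = {+-(1, 0), +-(0, 1), +-(1, -1)} (differences of the lags).
S = {(a, b) for a in range(10) for b in range(10)}
F_nonzero = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
S_star = corrected_set(S, F_nonzero)
print(len(S), len(S_star))   # 100 64: the border sites are trimmed
```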

By imitating (2.11), the estimators $\hat{\theta}_1, \ldots, \hat{\theta}_q$ are set to be the solutions of the equations

$\sum_{\mathbf{v} \in S^{*}} \{X(\mathbf{v})\, Y_{\hat{\boldsymbol{\theta}}}(\mathbf{v}+\mathbf{i}) + X(\mathbf{v}+\mathbf{i})\, Y_{\hat{\boldsymbol{\theta}}}(\mathbf{v})\} = 0$, $\mathbf{i} \in \mathcal{F}^{+}$,  (3.1)

where $\hat{\boldsymbol{\theta}} \equiv (\hat{\theta}_1, \ldots, \hat{\theta}_q)'$ and $Y_{\boldsymbol{\theta}}(\mathbf{v})$ is the finite-sample analogue of the auto-regression (2.3), computed from the available observations at the parameter value $\boldsymbol{\theta}$. Using Proposition 3.1, the equations for all $I$ lags of $\mathcal{F}^{+}$ will be used additionally, in order to make sure that the estimators are consistent. The proposition guarantees that the use of the $I$ equations as prototypes for estimation can provide a unique way to compute the parameters of the invertible equation.

Proposition 3.1. The polynomials $\theta(\mathbf{z}) = 1 + \sum_{k=1}^{q} \theta_k\, \mathbf{z}^{\mathbf{i}_k}$ and $\theta^{*}(\mathbf{z}) = 1 + \sum_{k=1}^{q} \theta^{*}_k\, \mathbf{z}^{\mathbf{i}_k}$ with $\boldsymbol{\theta}, \boldsymbol{\theta}^{*} \in \Theta$ are considered, as well as the variances $\sigma^{2}, \sigma^{*2} > 0$. For the autocovariances $\gamma(\mathbf{i})$ and $\gamma^{*}(\mathbf{i})$ generated by the two invertible moving-average equations, such that it holds that $\gamma^{*}(\mathbf{i}) = \gamma(\mathbf{i})$, $\mathbf{i} \in \mathcal{F}^{+}$, and $\gamma^{*}(\mathbf{0}) = \gamma(\mathbf{0})$, the unique solution that satisfies the equations is $\theta^{*}_k = \theta_k$, $k = 1, \ldots, q$, and $\sigma^{*2} = \sigma^{2}$.

For mathematical convenience, new variables depending on the sampling sets $S^{*}$ and $S_{\mathbf{i}}$, $\mathbf{i} \in \mathcal{F}^{+}$, are defined, and (3.1) may be rewritten in the vector forms (3.3) or (3.4), where the zero subindex relates to the true parameter value $\boldsymbol{\theta}_0$. In (3.3) and (3.4), a matrix appears with elements as in (3.5). The last term in (3.5) will become more obvious when the properties of the estimators are established, but for now it might be justified by a second-order Taylor expansion, where the remainder is negligible due to the fact that $\hat{\theta}_1, \ldots, \hat{\theta}_q$ will be shown to be consistent estimators in Theorem 4.1. Further, the sums over $S^{*}$ involving the second derivatives of the Taylor expansion, when divided by $N^{*}$ or $N_{\mathbf{i}}$ ($\mathbf{i} \in \mathcal{F}^{+}$), will also tend in probability to the relevant finite expectations when $\{\varepsilon(\mathbf{v})\}$ will be assumed to be independent and identically distributed random variables with a finite variance.

Finally, by imitating (2.12), the estimator of the error variance may be defined as

$\hat{\sigma}^{2} \equiv N^{*-1} \sum_{\mathbf{v} \in S^{*}} X(\mathbf{v})\, Y_{\hat{\boldsymbol{\theta}}}(\mathbf{v})$.  (3.7)

Example 3.2. For the simplest one-dimensional case when $d = q = 1$, it is demonstrated in detail how to compute the new estimator; some other cases will briefly be considered later. Suppose that $X_1, \ldots, X_N$ are available from $X_t = \varepsilon_t + \theta\, \varepsilon_{t-1}$, where $0 < |\theta| < 1$ and $\{\varepsilon_t\}$ are uncorrelated random variables with variance unity. The original set $S = \{1, \ldots, N\}$ will be corrected to $S^{*} = \{2, \ldots, N-1\}$, which is the maximal set such that, for all $t \in S^{*}$, it holds that $t-1, t+1 \in S$. Note that the “lags” on which the autocorrelations of $\{X_t\}$ are not equal to zero are as in the set $\mathcal{F} = \{-1, 0, 1\}$; further, for $\mathcal{F}^{+} = \{1\}$ it holds that $I = 1$ and, thus, for any $t \in S^{*}$ it is true that $t, t+1 \in S$.
The autocovariance of the relevant auto-regression at the “lag” $i \in \mathbb{Z}$ is $\gamma_Y(i) = (-\theta)^{|i|}/(1-\theta^{2})$. As a result, the estimator $\hat{\theta}$ will be the solution of the equation that imitates (2.11) at the single lag $i = 1$. Furthermore, when $\{\varepsilon_t\}$ are independent and identically distributed random variables, it will hold in probability that the sample second moments tend to their theoretical analogues, and the estimator may be approximated as a solution of the quadratic equation:

$\hat{\rho}\, \theta^{2} - \theta + \hat{\rho} = 0$,

where $\hat{\rho}$ is the sample analogue of the lag-one autocorrelation $\rho = \theta/(1+\theta^{2})$. Later, it will be investigated whether the approximation is worthwhile. The estimator reduces to $(1 - \sqrt{1 - 4\hat{\rho}^{2}})/(2\hat{\rho})$ or $(1 + \sqrt{1 - 4\hat{\rho}^{2}})/(2\hat{\rho})$, with the two roots multiplying to unity. For the actual nonzero auto-correlation $\rho$, it holds that $0 < |\rho| \leq 1/2$ and $\mathrm{sign}(\rho) = \mathrm{sign}(\theta)$. As a result, if $\hat{\rho} > 0$ and $\hat{\rho} \leq 1/2$, then $\sqrt{1 - 4\hat{\rho}^{2}} < 1$, and the value $(1 + \sqrt{1 - 4\hat{\rho}^{2}})/(2\hat{\rho})$ is bigger than 1. If, on the other hand, $\hat{\rho} < 0$ and $\hat{\rho} \geq -1/2$, then the same value is smaller than −1. Thus, the estimator is approximately

$\hat{\theta} = \dfrac{1 - \sqrt{1 - 4\hat{\rho}^{2}}}{2\hat{\rho}}$.

Although the distribution of $\hat{\rho}$ is not known for small sample sizes and it risks falling outside the range $[-1/2, 1/2]$ with some positive probability, both the consistency and the asymptotic normality will be established next. As opposed to what is known for the maximum Gaussian likelihood estimators, the proposed estimators will be asymptotically normal even for processes on more than one dimension. Thus, the new methods might be more useful for a higher number of dimensions as well as large sample sizes.
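
The approximate estimator of this example is a one-line computation once the lag-one sample autocorrelation is available. The sketch below is our own illustration; in particular, the fallback used when $|\hat{\rho}| \geq 1/2$ is an arbitrary choice and not the paper's prescription:

```python
# Sketch of the approximate MA(1) estimator of Example 3.2: solve
# rho*theta^2 - theta + rho = 0 at the lag-one sample autocorrelation and
# keep the invertible root; clipping when |rho| >= 1/2 is our own fallback.
import numpy as np

def ma1_moment_estimate(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    rho = (x[:-1] @ x[1:]) / (x @ x)       # lag-one sample autocorrelation
    if rho == 0.0:
        return 0.0
    if abs(rho) >= 0.5:                    # no real root: clip to the boundary
        return float(np.sign(rho))
    return float((1.0 - np.sqrt(1.0 - 4.0 * rho**2)) / (2.0 * rho))

rng = np.random.default_rng(2)
theta = 0.5
e = rng.normal(size=5001)
x = e[1:] + theta * e[:-1]                 # X_t = e_t + theta*e_{t-1}
print(ma1_moment_estimate(x))              # close to 0.5 for large samples
```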

4. Properties of Estimators

The asymptotic normality of standard estimators for the parameters of stationary processes indexed on $\mathbb{Z}^{d}$, $d \geq 2$, has not been established yet. More specifically, as Guyon [2] demonstrated, the bias of the genuine Gaussian likelihood estimators, computed from regular recordings increasing to infinity on all dimensions and at equal speeds, is of order $N^{-1/d}$ for a sample of size $N$. Nevertheless, in order to secure the asymptotic normality, the absolute bias when multiplied by $N^{1/2}$ should tend to zero, which is only true for $d = 1$, as in Brockwell and Davis [9].

Regarding the proposed estimators of this paper, their bias will stem from (3.3), as in the expected value of the term that expresses what is “missing” from the sample. Even when the bias is multiplied by $N^{*1/2}$, the relevant random variable has a zero mean, due to the selection of $S^{*}$ and the fact that the “finite” autocovariance function of a moving-average process is being used. Thus, the edge-effect will not be an obstacle to establishing the asymptotic normality of the estimators.

A series of ARMA model-based arguments has been used before to deal with the edge-effect by Yao and Brockwell [6] for two-dimensional processes, under a weak condition of finite second moments. Guyon [2], on the other hand, used the periodogram as in the Gaussian likelihood of observations proposed by Whittle [1] and required that the fourth moments be finite. Dahlhaus and Künsch [10] improved certain weaknesses of Guyon’s [2] modification of the likelihood but paid the price of having to reduce the dimensionality to secure their results. The proposed estimators of this paper will also use the weak condition of finite second moments, which is a great advantage, since they will refer to moving-average processes on any number of dimensions.

Condition C2. (i) For a set $S$ of cardinality $N$, it is written $S \to \infty$ if the length $M$ of the minimal hypercube including $S$ and the length $m$ of the maximal hypercube included in $S$ are such that $m, M \to \infty$. (ii) Further, as $S \to \infty$, it holds that the ratio $m/M$ is bounded away from $0$.

Theorem 4.1. If $\{\varepsilon(\mathbf{v})\}$ are independent and identically distributed random variables with $E\{\varepsilon(\mathbf{v})\} = 0$ and $E\{\varepsilon(\mathbf{v})^{2}\} = \sigma^{2} < \infty$, then under Conditions C1 and C2(i) and as $S \to \infty$, it holds that $\hat{\theta}_1, \ldots, \hat{\theta}_q$ and $\hat{\sigma}^{2}$ tend in probability to the true parameter values.

From (3.3) and (3.4), all the equations together may be written in the vector form (4.2), with a matrix (4.3) that depends on the sampling set $S^{*}$. The next proposition reveals what happens to the matrix part in (4.2). Then Theorem 4.3 establishes the asymptotic normality of the estimators.

Proposition 4.2. Let the polynomial $\pi(\mathbf{z}) \equiv 1/\theta(\mathbf{z}) = \sum_{\mathbf{j} \geq \mathbf{0}} \pi_{\mathbf{j}}\, \mathbf{z}^{\mathbf{j}}$ be considered. If $\{\varepsilon(\mathbf{v})\}$ are independent and identically distributed random variables with zero mean and finite variance, then under Conditions C1 and C2(i) and as $S \to \infty$, it holds that the matrix in (4.3) tends in probability to a finite and nonsingular limit.

Theorem 4.3. For $\mathbf{v} \in \mathbb{Z}^{d}$, the auto-regression $\{Y(\mathbf{v})\}$ is defined by (2.3). Also, the vector $\mathbf{Y}(\mathbf{v})$ with elements $Y(\mathbf{v}+\mathbf{i})$, $\mathbf{i} \in \mathcal{F}^{+}$, and the variance matrix $\mathbf{V} \equiv \mathrm{Var}\{\mathbf{Y}(\mathbf{v})\}$ are considered. If $\{\varepsilon(\mathbf{v})\}$ are independent and identically distributed random variables with zero mean and finite variance $\sigma^{2}$, then under Conditions C1 and C2 and as $S \to \infty$, it holds that $N^{*1/2}\, (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}_0)$ converges in distribution to a zero-mean normal random vector with a variance matrix determined by $\sigma^{2}$ and $\mathbf{V}$.

5. Empirical Results

In this section, an empirical comparison of the proposed estimators to the maximum Gaussian likelihood estimators for moving-average processes is presented in detail. The theoretical foundations in favour of the new estimators have been provided already, since their asymptotic normality on any number of dimensions has been established under finite second moments only. As a result, the speed of the bias to zero is not expected to cause them to perform worse than the maximum likelihood estimators, especially when the dimensionality is large.

On the other hand, Theorem 4.3 attributes to the new estimators a variance matrix based on $\mathbf{V}$ when, according to Hannan [11], efficient estimation results in the variance matrix $\sigma^{2}\, \mathbf{W}^{-1}$, with $\mathbf{W} \equiv \mathrm{Var}\{(Y(\mathbf{v}+\mathbf{i}_1), \ldots, Y(\mathbf{v}+\mathbf{i}_q))'\}$ and the same notation as in Theorem 4.3. $\mathbf{V}$ and $\mathbf{W}$ are defined similarly, but they are not, in general, equal. It seems that, as the number of moving-average parameters of the process increases, the elements of the two types of matrices get closer. Nevertheless, as the pure moving-average model will be preferred when it is parsimonious, a decision is made here on whether the new estimators are efficient then.

The investigation has started with the one-dimensional case by generating observations from the model $X_t = \varepsilon_t + \theta\, \varepsilon_{t-1}$ with one parameter only. The moments estimator has been approximated as in the example earlier, while the minimizer of the (conditional) sum of squared innovations with respect to the parameter has been considered to be the Gaussian likelihood estimator. The true values of $\theta$ and the sample sizes considered are those reported in Table 1. Very encouraging results for the efficiency of the proposed estimator are presented in Table 1 as, even when the sample size is still small, extreme differences in the variances of the two types of estimators cannot be detected. It is the bias of the moments estimator that seems to be the only reason why it might be outperformed by the likelihood estimator in terms of the Mean Squared Error. Nevertheless, the speed with which its bias tends to zero is much faster, as one would expect, and, eventually, the new estimator performs better altogether.
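
A minimal re-creation of this comparison can be scripted as below (a sketch under our assumptions: an arbitrary true value, sample size, and number of replications rather than Table 1's settings, with the conditional sum of squares standing in for the Gaussian likelihood):

```python
# Sketch of a one-dimensional comparison (our own re-creation, not Table 1):
# the moment estimator of Example 3.2 versus the minimizer of the conditional
# sum of squares, used as a stand-in for the Gaussian likelihood estimator.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

def css(theta, x):
    """Conditional sum of squares of e_t = x_t - theta*e_{t-1}, with e_0 = 0."""
    e = np.zeros_like(x)
    for t in range(len(x)):
        e[t] = x[t] - theta * (e[t - 1] if t > 0 else 0.0)
    return e @ e

def ma1_moment_estimate(x):                # as sketched after Example 3.2
    rho = (x[:-1] @ x[1:]) / (x @ x)
    if abs(rho) >= 0.5:
        return float(np.sign(rho))
    return float((1 - np.sqrt(1 - 4 * rho**2)) / (2 * rho)) if rho else 0.0

theta0, n, reps = 0.5, 200, 500
mom, lik = [], []
for _ in range(reps):
    e = rng.normal(size=n + 1)
    x = e[1:] + theta0 * e[:-1]
    x = x - x.mean()
    mom.append(ma1_moment_estimate(x))
    lik.append(minimize_scalar(css, bounds=(-0.99, 0.99),
                               args=(x,), method="bounded").x)
for name, v in (("moments", np.array(mom)), ("likelihood", np.array(lik))):
    print(f"{name:>10}: bias {v.mean() - theta0:+.4f}, var {v.var():.4f}")
```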

Next, for the two-dimensional case, observations have been generated from the model $X(v_1, v_2) = \varepsilon(v_1, v_2) + \theta_1\, \varepsilon(v_1 - 1, v_2) + \theta_2\, \varepsilon(v_1, v_2 - 1)$, with $\{\varepsilon(v_1, v_2)\}$ being independent, Gaussian random variables with zero mean and variance unity. Using similar arguments as in the example, the moments estimators of $\theta_1$ and $\theta_2$ might be derived by solving simultaneously the first two of the equations imitating (2.11), as well as the third equation which is needed to secure the consistency (Proposition 3.1). All of the sums above extend over the corrected set $S^{*}$. The estimators introduced here are $\hat{\theta}_1$ and $\hat{\theta}_2$, and, in the three equations, the auto-regression $\{Y(\mathbf{v})\}$ generated by the same polynomial and by zero-mean, uncorrelated random variables all with variance unity has been considered. Under a condition on $\theta_1$ and $\theta_2$, a special form of “causality” is implied and the standard Yule-Walker equations may be used to easily write the needed autocorrelations of $\{Y(\mathbf{v})\}$ as functions of $\theta_1$ and $\theta_2$. Again, the maximum likelihood estimators have been approximated by the minimizers of the sum of squared innovations with respect to $\theta_1$ and $\theta_2$.
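
For reference, the sketch below generates such a two-dimensional field and computes the sample autocovariances at the lags entering the moment equations (the model form is as reconstructed above; the grid size and parameter values are arbitrary choices of this rewrite):

```python
# Sketch for the two-dimensional experiment: generate
# X(v) = e(v) + theta1*e(v-(1,0)) + theta2*e(v-(0,1)) and the sample
# autocovariances at the lags that enter the moment equations.
import numpy as np

rng = np.random.default_rng(4)
n1 = n2 = 60
theta1, theta2 = 0.4, 0.3
e = rng.normal(size=(n1 + 1, n2 + 1))
X = e[1:, 1:] + theta1 * e[:-1, 1:] + theta2 * e[1:, :-1]
X = X - X.mean()

def gamma_hat(X, a, b):
    """Sample autocovariance at lag (a, b), averaged over overlapping sites."""
    n1, n2 = X.shape
    A = X[max(a, 0):n1 + min(a, 0), max(b, 0):n2 + min(b, 0)]
    B = X[max(-a, 0):n1 - max(a, 0), max(-b, 0):n2 - max(b, 0)]
    return float((A * B).mean())

for lag in [(0, 0), (1, 0), (0, 1), (1, -1)]:
    print(lag, round(gamma_hat(X, *lag), 3))
# theory: gamma(0,0) = 1 + theta1^2 + theta2^2, gamma(1,0) = theta1,
#         gamma(0,1) = theta2, and gamma(1,-1) = theta1*theta2
```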

Table 2 verifies once more the conclusions drawn from the one-dimensional case. It is safe to consider that the new estimators are efficient, and this is very apparent in the case that the parameters are in absolute value further away from zero. The striking case with the small sample size of around 50 points observed in the plane reveals that the variances of the moments estimators may even happen to be smaller. On the other hand, the bias heavily affects the results for the MSE for small sample sizes. Nevertheless, as the sample size increases, the absolute bias of the likelihood estimators does not seem to decrease at all, versus the bias of the proposed estimators, which speedily reaches the zero value. As a result, the new estimators eventually match the MSE performance of the standard estimators.

Appendix

A. Outline Proofs

A.1. Proof of Proposition 3.1

Under the invertibility condition for the polynomials $\theta(\mathbf{z})$, $\boldsymbol{\theta} \in \Theta$, there is a one-to-one correspondence between the coefficients $\theta_1, \ldots, \theta_q$ and the autocorrelations $\rho(\mathbf{i})$, $\mathbf{i} \in \mathcal{F}^{+}$. It is true that the coefficients $\theta_{0,1}, \ldots, \theta_{0,q}$ generate the numbers $\rho_{0}(\mathbf{i})$, $\mathbf{i} \in \mathcal{F}^{+}$, and they are a solution to the equations of interest. If there is another solution, say $\theta^{*}_1, \ldots, \theta^{*}_q$, generating $\rho^{*}(\mathbf{i})$, $\mathbf{i} \in \mathcal{F}^{+}$, then it must hold that $\rho^{*}(\mathbf{i}) = \rho_{0}(\mathbf{i})$, $\mathbf{i} \in \mathcal{F}^{+}$. On the other hand, the general Yule-Walker equations for this solution imply a set of linear equations in the unknowns $\theta^{*}_1, \ldots, \theta^{*}_q$, with a unique solution, as it is explained next; the autocovariances involved refer to a (weakly) stationary process with spectral density bounded away from $0$ under Condition C1. Consequently, the relevant variance matrix is nonsingular and there is a unique solution to the equations above, that is, $\theta^{*}_k = \theta_{0,k}$, $k = 1, \ldots, q$.

A.2. Proof of Theorem 4.1

For $\mathbf{i} \in \mathcal{F}^{+}$, (3.1) may be rewritten as the sum of two terms, as in (A.5). Under the assumption that $\{\varepsilon(\mathbf{v})\}$ are independent and identically distributed, it holds as $S \to \infty$ that the sample second moments tend in probability to their theoretical analogues, according to Proposition 6.3.10 or the Weak Law of Large Numbers and Proposition 7.3.5 of Brockwell and Davis [9], extended to include the cases $d > 1$. Then, for the first of the two terms in (A.5), it may be written that it tends in probability to its zero expectation. For the next term, an upper bound may be written, due to the Cauchy-Schwarz inequality and the independence of the random variables $\varepsilon(\mathbf{v})$, $\mathbf{v} \in \mathbb{Z}^{d}$. Now, for any $\boldsymbol{\theta} \in \Theta$, the quantity involved is the corresponding autocovariance function of a causal auto-regression. This guarantees that the autocovariance function decays at an exponential rate and there are constants $C > 0$ and $\alpha \in (0, 1)$, such that $|\gamma_{Y}(\mathbf{i})| \leq C\, \alpha^{\|\mathbf{i}\|}$, $\mathbf{i} \in \mathbb{Z}^{d}$. Similarly for the estimators, it holds with probability one that they lie in the compact parameter space and the same exponential bound applies. For the case of observations on a hyperrectangle and under Condition C2(ii), it can be easily verified that $N^{*}/N \to 1$; see, for example, the arguments of Yao and Brockwell [6] when $d = 2$. In general, it can be written from Condition C2(i) that $N^{*}/N \to 1$ as $S \to \infty$. After combining the results for the two terms in (A.5), it can be written that the sample equations tend in probability to the theoretical equations (2.11), where the first term has been set equal to zero; thus, the solutions mimic exactly what the theoretical analogue implies. Since $I$ instead of $q$ equations have been used, there is a unique solution, according to Proposition 3.1, and consistency follows as $S \to \infty$ and C2(i) holds. Finally, the consistency for $\hat{\theta}_1, \ldots, \hat{\theta}_q$ implies, according to (3.7), that $\hat{\sigma}^{2}$ tends in probability to $\sigma^{2}$.

A.3. Proof of Proposition 4.2

According to (3.5), for the $(k, l)$th element of the matrix involved, it suffices to look at a sum of two terms, where the last term tends to zero in probability, thanks to the consistency of the estimators from the use of all the equations. For the polynomial $\pi(\mathbf{z}) = 1/\theta(\mathbf{z}) = \sum_{\mathbf{j} \geq \mathbf{0}} \pi_{\mathbf{j}}\, \mathbf{z}^{\mathbf{j}}$, the second term in (A.1) can be written as an average that tends in probability to the relevant finite expectation. This comes straight from the fact that $N^{*}/N \to 1$ as $S \to \infty$ and Condition C2(i) holds. The limit comes from the same argument as for the proof of the consistency of the estimators. For example, if Condition C2(ii) is true, the truncation error may be bounded, since for any $\boldsymbol{\theta} \in \Theta$, it holds that $|\pi_{\mathbf{j}}| \leq C\, \alpha^{\|\mathbf{j}\|}$ for constants $C > 0$ and $\alpha \in (0, 1)$. Similar action might be taken for the first term in (A.1).

For the auto-regression as it was defined in (2.3), it can be seen immediately that $\varepsilon(\mathbf{v})$ is uncorrelated to $Y(\mathbf{v}+\mathbf{i})$, $\mathbf{i} > \mathbf{0}$, since these are linear functions of “future” error terms only. In general, it can be written according to (2.5) that the cross-moments $E\{X(\mathbf{v})\, Y(\mathbf{v}+\mathbf{i})\}$ reduce back to the general Yule-Walker equations. Thus, $E\{X(\mathbf{v})\, Y(\mathbf{v})\} = \sigma^{2}$ and $X(\mathbf{v})$ is uncorrelated to $Y(\mathbf{v}+\mathbf{i})$, $\mathbf{i} \neq \mathbf{0}$. As a result, it holds for $\mathbf{i} \in \mathcal{F}^{+}$ that the expectation of the summands in (3.1) equals zero and, for the lag $\mathbf{0}$, that the corresponding expectation equals $\sigma^{2}$. The proof is completed when it is seen that both $X(\mathbf{v})$ and $Y(\mathbf{v})$ are linear functions of members from the sequence $\{\varepsilon(\mathbf{v})\}$ and, thus, the limiting expectations are the ones computed above.

A.4. Proof of Theorem 4.3

From (3.3), (4.2), and (4.3) and for $\mathbf{i} \in \mathcal{F}^{+}$, it can be written that the standardized equations decompose into a main term and a remainder. The convergence in probability to zero of the remainder might be justified by the fact that its expected value is equal to zero, due to the selection of $S^{*}$, and that its variance tends to zero, where the moments are computed given the sampling set $S^{*}$. First, $\{\varepsilon(\mathbf{v})\}$ are independent and identically distributed, and the argument then proceeds without the assumption of a finite third or fourth moment. Under Condition C2(ii), it can be written that the variance of the remainder is of order $N^{*-1}$, and a similar argument may be used for the cross-terms, due to the Cauchy-Schwarz inequality. For the case $d = 2$ and recordings on a rectangle, there is a justification for that by Yao and Brockwell [6]. Thus, the remainder vanishes as $S \to \infty$ and Condition C2 holds, which guarantees the convergence in probability to zero.

Equation (4.2) may now be rewritten as a standardized sum of terms over $\mathbf{v} \in S^{*}$. It holds that one factor of each summand is a linear function of the “past” error terms and the other is a function of the “future” ones. Then, the expectations of the summands can be written down from the general Yule-Walker equations and, thus, for any two locations, their covariances depend on the lag between them only.

Now, for any positive integer $m$, a truncation set of “future” lags is defined. According to the infinite moving-average representation of $\{Y(\mathbf{v})\}$, for fixed $m$ a new process $\{Y_{m}(\mathbf{v})\}$ may be defined from the truncation of the series after the lags of that set. Similarly, the truncated analogues of the remaining variables are defined here. For the same reasons as before, it holds that the truncated variables have finite second moments and that their autocovariances vanish outside a finite set of lags. For any vector $\boldsymbol{\lambda} \neq \mathbf{0}$, it holds that the implied linear combination is a strictly stationary and $m^{*}$-dependent process, for a positive and finite integer $m^{*}$. The definition of $m$-dependent processes, as well as a theorem for the asymptotic normality of strictly stationary and $m$-dependent processes on $\mathbb{Z}^{d}$, might be given similarly to the one-dimensional case by Brockwell and Davis [9]. Then, as $S \to \infty$ and under Condition C2, the truncated standardized sums tend in distribution to zero-mean normal random variables, and their variances tend, as $m \to \infty$, to the variance required. Similarly, the same limits hold for the untruncated sums as $m \to \infty$. Using Chebychev’s inequality, it may be verified that the difference between the truncated and the untruncated standardized sums tends to zero in probability and, thus, the asymptotic normality holds as $S \to \infty$ and under Condition C2. According to (A.14), the $k$th element of the limit reduces back to the general Yule-Walker equations with $\mathbf{I}_{q}$, the identity matrix of order $q$. Equation (A.16) may be rewritten accordingly as $S \to \infty$ and under Condition C2. Combining (A.6) and (A.18) with Proposition 4.2, it can be written that the standardized estimators tend in distribution to the normal limit of Theorem 4.3 as $S \to \infty$ and under Condition C2. The proof will be completed when it is shown that the variance matrix $\mathbf{V}$ is nonsingular. Indeed, for any vector $\boldsymbol{\lambda} \neq \mathbf{0}$, the linear combination $\boldsymbol{\lambda}' \mathbf{Y}(\mathbf{v})$ may be decomposed into a multiple of one error term plus a random variable that is independent of it, since the latter is a linear function of “future” error terms only; the required result might be obtained then, as the variance of the combination is bounded below by a positive quantity. Note that this decomposition guarantees that $\mathbf{V}$ has a nonzero determinant.

Acknowledgments

Some results of this paper are part of the Ph.D. thesis titled “Statistical inference for spatial and spatiotemporal processes,” which has been submitted to the University of London. The author would like to thank the Leverhulme Trust, which funded her studies, and Professor Q. Yao.