Journal of Applied Mathematics, Volume 2010, Article ID 494070, 17 pages
https://doi.org/10.1155/2010/494070

Research Article | Open Access

Asymptotic Behavior of the Likelihood Function of Covariance Matrices of Spatial Gaussian Processes

Ralf Zimmermann

Academic Editor: F. Marcellan
Received: 16 Apr 2010
Revised: 03 Nov 2010
Accepted: 28 Nov 2010
Published: 13 Jan 2011

Abstract

The covariance structure of spatial Gaussian predictors (also known as Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood of spatial Gaussian predictor models as a function of their hyperparameters is investigated theoretically. Asymptotic sandwich bounds for the maximum likelihood function in terms of the condition number of the associated covariance matrix are established. As a consequence, the main result is obtained: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrarily ill-conditioned correlation matrices. The implications of this theorem for Kriging hyperparameter optimization are discussed. A nonartificial example is presented, where maximum likelihood-based Kriging model training is necessarily bound to fail.

1. Introduction

Spatial Gaussian processing, also known as best linear unbiased prediction, refers to a statistical data interpolation method, which is nowadays applied in a wide range of scientific fields, including computer experiments in modern engineering contexts; see, for example, [1–5]. As a powerful tool for geostatistics, it was pioneered by Krige in 1951 [6], and to pay tribute to his achievements, the method is also termed Kriging; see [7, 8] for the geostatistical background.

In practical applications, the data's covariance structure is modeled through covariance functions depending on so-called hyperparameters. These, in turn, are estimated by optimizing the corresponding maximum likelihood function. It has been demonstrated by many authors that the accuracy of Kriging predictors relies heavily both on hyperparameter-based model training and, from the numerical point of view, on the condition number of the associated Kriging correlation matrix. In this regard, we refer to the following nonexhaustive selection of papers: Warnes and Ripley [9] and Mardia and Watkins [10] present numerical examples of difficult-to-optimize covariance model functions. Ababou et al. [11] show that likelihood-optimized hyperparameters may correspond to ill-conditioned correlation matrices. Diamond and Armstrong [12] prove error estimates under perturbation of covariance models, demonstrating a strong dependence on the condition number of the correlation matrix. In the same setting, Posa [13] investigates numerically the behavior of this precise condition number for different covariance models and varying hyperparameters. An extensive experimental study of the condition number as a function of all parameters in the Kriging exercise is provided by Davis and Morris [14]. Schöttle and Werner [15] propose Kriging model training under suitable conditioning constraints. Related is the work of Ying [16] and Zhang and Zimmerman [17], who prove asymptotic results on limiting distributions of maximum likelihood estimators when the number of sample points approaches infinity. Radial basis function interpolant limits are investigated in [18]. Modern textbooks covering recent results are [19, 20].

In this paper, the connection between hyperparameter optimization and the condition number of the correlation matrix is investigated from a theoretical point of view. The setting is as follows. All sample data are considered fixed. An arbitrary feasible covariance model function is chosen once and for all, so that only the covariance model's hyperparameters are allowed to vary in order to adjust the model likelihood. This is exactly the situation as it occurs in the context of computer experiments, where, based on a fixed set of sample data, predictor models have to be trained numerically. We prove that, under weak conditions, the limit values of the quantities in the model training exercise exist. Subsequently, by establishing asymptotic sandwich bounds on the model likelihood based on the condition number of the associated correlation matrix, it is shown that ill-conditioning eventually also decreases the model likelihood. This result implies a strategy for choosing good starting solutions for hyperparameter-based model training. We emphasize that all covariance models applied in the papers briefly reviewed above fall within the theoretical setting of this work.

The paper is organized as follows. In the next section, a short review of the basic theory behind Kriging is given. The main theorem is stated and proved in Section 3. In Section 4, an example of a Kriging data set is presented, which illustrates the limitations of classical model training.

2. Kriging in a Nutshell

Kriging is a statistical approach for estimating an unknown (scalar) function $y\colon \mathbb{R}^d \to \mathbb{R}$ based on a finite data set of sample locations $\{x^{(1)}, \dots, x^{(n)}\} \subset \mathbb{R}^d$ with corresponding responses $y_i = y(x^{(i)})$, $i = 1, \dots, n$, obtained from measurements or numerical computations. The collection of responses is denoted by $\mathbf{y} = (y_1, \dots, y_n)^T \in \mathbb{R}^n$. The function to be estimated is assumed to be the realization of an underlying random process $Y$ given by a regression model and a random error function $\varepsilon$ with zero mean. More precisely,

$$Y(x) = \mathbf{f}(x)\boldsymbol{\beta} + \varepsilon(x),$$

where the components $f_1, \dots, f_p$ of the row vector function $\mathbf{f}(x) = (f_1(x), \dots, f_p(x))$ are the basis functions of the regression model and $\boldsymbol{\beta} \in \mathbb{R}^p$ is the corresponding vector of regression coefficients. By assumption, $E[\varepsilon(x)] = 0$. The component functions of $\mathbf{f}$ can be chosen arbitrarily, yet they should form a function basis suitable to the specific application. The most common choices for practical applications are

(1) constant regression (ordinary Kriging): $p = 1$, $f_1(x) \equiv 1$,
(2) linear regression (universal Kriging): $p = d + 1$, $f_1(x) \equiv 1$, $f_{1+j}(x) = x_j$ for $j = 1, \dots, d$,

and higher-order polynomials.

Introducing the regression design matrix $F = (f_j(x^{(i)}))_{i=1,\dots,n;\, j=1,\dots,p} \in \mathbb{R}^{n \times p}$, the vector of errors at the sampled sites can be written as $\boldsymbol{\varepsilon} = (\varepsilon(x^{(1)}), \dots, \varepsilon(x^{(n)}))^T = \mathbf{y} - F\boldsymbol{\beta}$. Note that the first column of $F$ equals $\mathbb{1} = (1, \dots, 1)^T$ for all polynomial regression models.

The Kriging predictor estimates $y$ at an untried site $x$ as a linear combination of the sampled data, $\hat{y}(x) = \mathbf{w}(x)^T \mathbf{y}$. For each $x$, the unique vector of weights $\mathbf{w}(x)$ that leads to an unbiased prediction minimizing the mean squared error is given by the solution of the Kriging equation system

$$\begin{pmatrix} C & F \\ F^T & 0 \end{pmatrix}\begin{pmatrix} \mathbf{w}(x) \\ \boldsymbol{\mu}(x) \end{pmatrix} = \begin{pmatrix} \mathbf{c}(x) \\ \mathbf{f}(x)^T \end{pmatrix}.$$

Here, $C = (\mathrm{Cov}(Y(x^{(i)}), Y(x^{(j)})))_{i,j} \in \mathbb{R}^{n \times n}$ and $\mathbf{c}(x) = (\mathrm{Cov}(Y(x), Y(x^{(i)})))_{i} \in \mathbb{R}^{n}$ denote the covariance matrix and the covariance vector, respectively, and the entries of the vector $\boldsymbol{\mu}(x)$ are Lagrange multipliers. Solving (2.8) by Schur matrix complement inversion and substituting in (2.7) leads to the Kriging predictor formula

$$\hat{y}(x) = \mathbf{f}(x)\hat{\boldsymbol{\beta}} + \mathbf{c}(x)^T C^{-1}(\mathbf{y} - F\hat{\boldsymbol{\beta}}),$$

where $\hat{\boldsymbol{\beta}} = (F^T C^{-1} F)^{-1} F^T C^{-1} \mathbf{y}$ is the generalized least squares solution to the regression problem $F\boldsymbol{\beta} \approx \mathbf{y}$. For details, see, for example, [20, 21].
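
To make the preceding formulas concrete, the following minimal NumPy sketch assembles an ordinary-Kriging predictor, assuming constant regression and the Gaussian correlation model introduced below; the model choices, function names, and interface are illustrative assumptions rather than part of the original exposition.

```python
import numpy as np

def kriging_predictor(X, y, theta):
    """Ordinary-Kriging predictor: constant regression f(x) = 1 and Gaussian
    correlation R_ik = prod_j exp(-theta_j * (x_ij - x_kj)**2).
    X: (n, d) sample locations, y: (n,) responses, theta: (d,) distance weights."""
    n = X.shape[0]
    D2 = (X[:, None, :] - X[None, :, :]) ** 2               # squared coordinate-wise distances
    R = np.exp(-np.tensordot(D2, theta, axes=([2], [0])))   # correlation matrix R(theta)
    F = np.ones((n, 1))                                     # regression design matrix (constant model)
    R_inv_y = np.linalg.solve(R, y)
    R_inv_F = np.linalg.solve(R, F)
    beta = np.linalg.solve(F.T @ R_inv_F, F.T @ R_inv_y)    # generalized least squares solution
    R_inv_resid = np.linalg.solve(R, y - F @ beta)          # R^{-1} (y - F beta_hat)

    def predict(x):
        r = np.exp(-((X - x) ** 2) @ theta)                 # correlation vector between x and the samples
        return float(beta[0] + r @ R_inv_resid)             # f(x) beta_hat + r^T R^{-1} (y - F beta_hat)

    return predict
```

Note that the process variance $\sigma^2$ appears in both the covariance matrix $C = \sigma^2 R(\theta)$ and the covariance vector $\mathbf{c}(x) = \sigma^2 \mathbf{r}(x)$, so it cancels in the predictor; only correlation quantities enter the computation above.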

For setting up Kriging predictors, it is therefore mandatory to estimate the covariances based on the sampled data set. The two most popular approaches to tackle this problem are variogram fitting (geostatistical literature; see [7]) and the application of spatial correlation functions (computer experiments; see [4, 20]). The latter are usually of the form

$$\mathrm{Cov}(Y(x), Y(x')) = \sigma^2 R(\theta, x, x'), \qquad R(\theta, x, x') = \prod_{j=1}^{d} r_j\bigl(\theta_j, |x_j - x'_j|\bigr).$$

Here $\theta = (\theta_1, \dots, \theta_d)$ is a vector of distance weights, which models the influence of the coordinate-wise spatial correlation on the prediction. The correlation matrix is defined by $R(\theta) = (R(\theta, x^{(i)}, x^{(j)}))_{i,j=1,\dots,n}$. In order to avoid ambiguity due to different parameterizations of the correlation models, we fix for the rest of the paper the following.

Convention 1
Large distance weight values correspond to weak spatial correlation, and small distance weight values correspond to strong spatial correlation. More precisely, we assume that feasible spatial correlation functions are always parameterized such that

$$\lim_{\theta \to \infty} R(\theta, x, x') = 0, \qquad \lim_{\theta \to 0} R(\theta, x, x') = 1$$

at distinct locations $x \neq x'$.

All correlation models applied in the papers briefly reviewed in the introduction can be parameterized accordingly. A collection of spatial correlation functions is given in several publications on Cokriging/Kriging, including [21, Table 2.1]. For example, the Gaussian correlation function parameterized with respect to the convention above is given by

$$R(\theta, x, x') = \prod_{j=1}^{d} \exp\bigl(-\theta_j (x_j - x'_j)^2\bigr).$$

The results and proofs presented below hold true without change for every admissible spatial correlation model, assuming that the sample errors are normally distributed and that the process variance is stationary; that is, $\sigma^2 = \mathrm{Var}(\varepsilon(x))$ is independent of the location $x$. In this setting, hyperparameter training for Kriging models consists of optimizing the corresponding maximum likelihood function

$$L(\theta, \boldsymbol{\beta}, \sigma^2) = \frac{1}{\sqrt{(2\pi\sigma^2)^n \det(R(\theta))}} \exp\Bigl(-\frac{(\mathbf{y} - F\boldsymbol{\beta})^T R(\theta)^{-1} (\mathbf{y} - F\boldsymbol{\beta})}{2\sigma^2}\Bigr).$$

For fixed $\theta$, optima for $\boldsymbol{\beta}$ and $\sigma^2$ can be derived analytically (see [20]), so that hyperparameter training for Kriging models is reduced to the following optimization problem:

$$\min_{\theta}\; \mathrm{ML}(\theta) := \det(R(\theta))^{1/n}\, \hat{\sigma}^2(\theta),$$

where the dependency on $\theta$ is as follows:

$$\hat{\boldsymbol{\beta}}(\theta) = (F^T R(\theta)^{-1} F)^{-1} F^T R(\theta)^{-1} \mathbf{y}, \qquad \hat{\sigma}^2(\theta) = \frac{1}{n}\,(\mathbf{y} - F\hat{\boldsymbol{\beta}}(\theta))^T R(\theta)^{-1} (\mathbf{y} - F\hat{\boldsymbol{\beta}}(\theta)).$$

Because the logarithm is monotonic, this is equivalent to minimizing the so-called condensed log-likelihood function $n\ln(\hat{\sigma}^2(\theta)) + \ln(\det(R(\theta)))$ often encountered in the literature.
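
For fixed $\theta$, the quantities above can be evaluated in a few lines; the following NumPy sketch computes $\hat{\boldsymbol{\beta}}(\theta)$, $\hat{\sigma}^2(\theta)$, and the condensed log-likelihood, again under the illustrative assumptions of constant regression and the Gaussian correlation model.

```python
import numpy as np

def condensed_log_likelihood(theta, X, y):
    """Return n*log(sigma2_hat(theta)) + log(det(R(theta))) for constant
    regression and Gaussian correlation (illustrative model choices)."""
    n = X.shape[0]
    D2 = (X[:, None, :] - X[None, :, :]) ** 2
    R = np.exp(-np.tensordot(D2, np.atleast_1d(theta), axes=([2], [0])))
    F = np.ones((n, 1))
    L = np.linalg.cholesky(R)                    # fails if R(theta) is not numerically SPD
    Li_y = np.linalg.solve(L, y)                 # whitened data   L^{-1} y
    Li_F = np.linalg.solve(L, F)                 # whitened design L^{-1} F
    beta = np.linalg.solve(Li_F.T @ Li_F, Li_F.T @ Li_y)   # beta_hat(theta)
    sigma2 = np.sum((Li_y - Li_F @ beta) ** 2) / n          # sigma2_hat(theta)
    log_det_R = 2.0 * np.sum(np.log(np.diag(L)))
    return n * np.log(sigma2) + log_det_R
```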

3. Asymptotic Behavior of the Maximum Likelihood Function—Why Kriging Model Training Is Tricky: Part I

The condition number of a regular matrix $A$ with respect to a given matrix norm is defined as $\mathrm{cond}(A) = \|A\|\,\|A^{-1}\|$. For the matrix norm induced by the Euclidean vector norm, one can show that $\mathrm{cond}_2(A) = \lambda_{\max}/\lambda_{\min}$ for symmetric positive definite $A$, where $\lambda_{\max}$, $\lambda_{\min}$ are the largest and the smallest eigenvalue of $A$, respectively. In order to prevent the solution of the Kriging equation system (2.8) from being spoiled by numerical errors, it is important to prevent the covariance matrix from being severely ill-conditioned. However, the next theorem shows that eventually, when the condition number approaches infinity, so does the associated likelihood function. (Keep in mind that we have formulated likelihood estimation as a minimization problem; see (2.16).) Throughout this section, we will assume that the regression design matrix $F$ from (2.5) features the vector $\mathbb{1} = (1, \dots, 1)^T$ as its first column. This is the case of the highest practical relevance and, in fact, is of particular difficulty, since in this case the first column of $F$ coincides with a limit eigenvector of the correlation matrix, as will be seen in the following.
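
The blow-up of $\mathrm{cond}_2(R(\theta))$ for strong spatial correlation is easy to observe numerically; the toy configuration below (five equidistant one-dimensional sites with Gaussian correlation) is a hypothetical example chosen purely for illustration.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])                  # hypothetical 1-D sample sites
for theta in (1.0, 0.1, 0.01):
    R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)  # Gaussian correlation matrix
    lam = np.linalg.eigvalsh(R)                          # eigenvalues in ascending order
    print(theta, lam[-1] / lam[0])                       # cond_2(R) = lambda_max / lambda_min
```

The printed ratio grows rapidly as θ decreases, that is, as the correlation between the sampled sites becomes stronger.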

Theorem 3.1. Let , be a data set of sampled sites and responses. Let be the associated spatial correlation matrix, and let be the vector of errors with respect to the chosen regression model. Furthermore, let be the condition number of . Suppose that the following conditions hold true: (1)the eigenvalues of are mutually distinct for small , ,(2)the derivatives of the eigenvalues do not vanish in the limit: for all ,(3), . Then, and there exist constants such that for . The constants are independent of .

Remark 3.2. (1) The conditions given in the above theorem cannot be proven to hold true in general, since they depend on the data set in question. However, they hold true for nondegenerate data sets. In Appendix A, a relationship between condition 2 and the regularity of is established, giving strong support that condition 2 is generally valid. Concerning the third condition, it will be shown in Lemma 3.4 that the limit exists, given conditions 1 and 2. Note that the set is of Lebesgue measure zero in . In all practical applications known to the author, these conditions were fulfilled.
(2) It holds that . Hence, the likelihood function approaches a constant limit for and . The corresponding predictor behavior is investigated in Section 4.
(3) Even though Theorem 3.1 shows that the model likelihood becomes arbitrarily bad for hyperparameters $\theta \to 0$, the optimum might lie very close to the blowup region of the condition number, leading to still quite ill-conditioned covariance matrices [11]. This fact, as well as the general behavior of the likelihood function as predicted by Theorem 3.1, is illustrated in Figure 1.
(4) Figure 2(b) provides an additional illustration of Theorem 3.1.
(5) Theorem 3.1 offers a strategy for choosing starting solutions for the optimization problem (2.16): take each $\theta_l$, $l = 1, \dots, d$, as small as possible such that the corresponding correlation matrix is still (numerically) positive definite; a minimal sketch of this strategy is given after this remark.
(6) A related investigation of interpolant limits has been performed in [18] but for standard radial basis functions.
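
As a minimal sketch of the starting-solution strategy of item (5), assuming for simplicity a single common distance weight for all coordinates and the Gaussian correlation model (both illustrative simplifications, not choices made in the paper), one may shrink θ until the Cholesky factorization of R(θ) fails and start the optimization from the last numerically positive definite value:

```python
import numpy as np

def smallest_feasible_theta(X, theta0=1.0, shrink=0.5, max_iter=60):
    """Shrink a common distance weight until R(theta) loses numerical positive
    definiteness; return the last feasible value as a starting solution.
    Assumes that theta0 itself yields a numerically positive definite matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # isotropic squared distances
    theta, feasible = theta0, theta0
    for _ in range(max_iter):
        R = np.exp(-theta * d2)              # Gaussian correlation with a common weight
        try:
            np.linalg.cholesky(R)            # succeeds only if R is numerically SPD
            feasible = theta
            theta *= shrink                  # try a smaller weight (stronger correlation)
        except np.linalg.LinAlgError:
            break
    return feasible
```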

In order to support readability, we divide the proof of Theorem 3.1 into smaller units, organized as follows. As a starting point, we establish two auxiliary lemmata on the existence of limits of eigenvalue quotients and of error vectors. Subsequently, the proof of the main theorem is conducted relying on these lemmata.

Lemma 3.3. In the setting of Theorem 3.1, let , be the eigenvalues of , ordered by size. Then,

Proof. Because of (2.13), it holds that for every admissible spatial correlation function. Since and for all , the limit eigenvalues of the correlation matrix ordered by size are given by .
Under the present conditions, the eigenvalues are differentiable with respect to . Hence, it is sufficient to prove the lemma for and . Now, condition 2 and L'Hospital's rule imply the result.

Lemma 3.4. In the setting of Theorem 3.1, let be defined by (2.18) and (2.19). Then, exists.

Proof. We prove Lemma 3.4 by showing that exists.
Remember that , with depending on the chosen regression model. As in the above lemma, we can restrict the considerations to the direction .
Let , be the eigenvalues of ordered by size with corresponding eigenvector matrix such that . For brevity, define and so forth.
It holds that ; see Lemma 3.3. Hence, for and . In the present setting, the derivatives of eigenvalues and (normalized) eigenvectors exist and can be extended to 0; see, for example, [22]. By another application of L'Hospital's rule, Introducing we can restate (2.19) as It is sufficient to show that exists.
Writing columnwise , a direct computation shows Note that for the default choices of regression basis functions, such that and for . The desired result follows from (3.6) and Lemma 3.3.
Remark 3.5. Actually, one cannot prove for to be regular in general, since this matrix depends on the chosen sample locations. It might be possible to artificially choose samples such that, for example, has not full rank. Yet if so, the whole Kriging exercise cannot be performed, since (2.19) is not well defined in this case. For constant regression, that is, , this is impossible. Note that is independent of .

Now, let us prove Theorem 3.1 using notation as introduced above.

Proof. As shown in the proof of Lemma 3.3 Because the correlation matrix is symmetric and positive definite, a decomposition with orthogonal and , exists.
If necessary, renumber such that . Let . By Lemma 3.4, exists. Condition 3 ensures that .
Case 1. Suppose that for all .
By continuity, for and small enough. Since is a compact set, exists. Then, for ,
Case 2. Suppose that Case 1 does not hold true.
From , it follows that for sufficiently small. Let . Then, .
Define For the index defined by , it holds that .
By Lemma 3.3, exists. Using together with the result can be established as in Case 1.
The estimate of the upper bound of ML is obtained in an analogous manner. Let Then, where we used Bernoulli's inequality at , Lemma 3.3, and the fact that .

Remark 3.6. The extension of the main theorem to Cokriging prediction [7, 20] is a straightforward exercise, since the limit eigenvectors of the Cokriging correlation matrix corresponding to nonzero eigenvalues can also be determined explicitly.

4. Why Kriging Model Training Is Tricky: Part II

The following simple observation illustrates the Kriging predictor behavior for large distance weights $\theta$. Notation is to be understood as introduced in Section 3.

Observation 1. Suppose that sample locations $x^{(1)}, \dots, x^{(n)}$ and responses $y_1, \dots, y_n$ are given. Let $\hat{y}_\theta$ be the corresponding Kriging predictor according to (2.10). Then, for $x \notin \{x^{(1)}, \dots, x^{(n)}\}$ and distance weights $\theta_l \to \infty$, $l = 1, \dots, d$, it holds that

$$\hat{y}_\theta(x) - \mathbf{f}(x)\hat{\boldsymbol{\beta}}(\theta) \longrightarrow 0.$$

Put in simple words: if the distance weights are chosen too large, then the resulting predictor function has the shape of the regression model, $\mathbf{f}(x)\hat{\boldsymbol{\beta}}$, with peaks at the sample sites; compare Figure 2, dashed curve.

Proof. According to (2.10), it holds that $\hat{y}_\theta(x) - \mathbf{f}(x)\hat{\boldsymbol{\beta}}(\theta) = \mathbf{c}(x)^T C^{-1}(\mathbf{y} - F\hat{\boldsymbol{\beta}}(\theta))$, where $C = \sigma^2 R(\theta)$. By (2.13), it holds that $\mathbf{c}(x) \to 0$ for $\theta_l \to \infty$ for every admissible spatial correlation model of the form of (2.11). By the Cauchy–Schwarz inequality, $|\mathbf{c}(x)^T C^{-1}(\mathbf{y} - F\hat{\boldsymbol{\beta}}(\theta))| \le \|\mathbf{c}(x)\|\,\|C^{-1}(\mathbf{y} - F\hat{\boldsymbol{\beta}}(\theta))\|$, where the first factor tends to zero and the second factor remains bounded for $\theta_l \to \infty$.

Remark 4.1. The same predictor behavior arises at locations far away from the sampled sites, that is, for $\min_i \|x - x^{(i)}\| \to \infty$. This has to be considered when extrapolating beyond the sample data set.

Figure 2 shows an example data set for which the Kriging maximum likelihood function is constant over a large range of distance weight values. This example was not constructed artificially but occurred in the author's daily work of computing approximate fluid flow solutions based on proper orthogonal decomposition (POD) followed by coefficient interpolation, as described in [23, 24].

The sample data set is given in Table 1. The Kriging estimator given by the dashed line shows a behavior as predicted by Observation 1. Note that from the model training point of view, all distance weights are equally likely, yet lead to quite different predictor functions. Since the ML features no local minimum, classical hyperparameter estimation is impossible.


Table 1: Sample data set.

x:   0.0        2.0        4.0        6.0        8.0
y:  −0.229334   0.277018   0.455534  −0.769558   0.26634
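
The flat behavior of the likelihood can be examined directly with the data of Table 1. The sketch below evaluates the condensed log-likelihood of Section 2 over a range of distance weights, assuming the Gaussian correlation model and constant regression (illustrative assumptions; the correlation model underlying Figure 2 is not restated here). For the larger weights the value settles near a constant, in line with Remark 3.2(2).

```python
import numpy as np

x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                             # Table 1: sample locations
y = np.array([-0.229334, 0.277018, 0.455534, -0.769558, 0.26634])   # Table 1: responses

for theta in (0.5, 1.0, 2.0, 5.0, 10.0):
    R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)   # Gaussian correlation (assumed model)
    L = np.linalg.cholesky(R)
    Li_y = np.linalg.solve(L, y)
    f = np.linalg.solve(L, np.ones(5))                    # whitened constant regressor
    beta = (f @ Li_y) / (f @ f)                           # beta_hat(theta)
    sigma2 = np.mean((Li_y - beta * f) ** 2)              # sigma2_hat(theta)
    psi = 5 * np.log(sigma2) + 2 * np.sum(np.log(np.diag(L)))  # condensed log-likelihood
    print(f"theta = {theta:5.1f}   psi(theta) = {psi: .4f}")
```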

Appendices

A. On the Validity of Condition 2 in Theorem 3.1

The next lemma strongly indicates that the second condition in the main Theorem 3.1 is satisfied in nondegenerate cases.

Lemma A.1. Let be the correlation matrix function corresponding to a given set of Kriging data and a fixed spatial correlation model.
Let , be the eigenvalues of ordered by size with corresponding eigenvector matrix , and define , . Suppose that the eigenvalues are mutually distinct for close to zero.
Denote the directional derivative in the direction with respect to by a prime , that is, and so forth. Then, it holds that If is regular, then for all , with at most one possible exception.

Proof. For every admissible spatial correlation function of the form (2.11) and , it holds that Thus, .
It holds that and for all ; therefore, the limits of the eigenvalues of the correlation matrix ordered by size are given by . The assumption that no multiple eigenvalues occur ensures that the eigenvalues and corresponding (normalized, oriented) eigenvectors are differentiable with respect to . Let be the (orthogonal) matrix of eigenvectors, such that Then, where is the continuous extension of ; see, for example, [22].
Hence,
Let us assume that there exist two indices such that .
Let . Then, contradicting the regularity of , if .
If , replace by and repeat the above argument.

For most correlation models, the derivative can be computed explicitly.
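
For instance, for the Gaussian correlation model recalled in Section 2 (used here purely as an illustration; any other admissible model is handled analogously), the matrix entries along a ray $\theta = t\,v$, $t \ge 0$, with fixed direction $v$ and their derivative at $t = 0$ read

$$R_{ik}(t\,v) = \exp\Bigl(-t \sum_{j=1}^{d} v_j \bigl(x^{(i)}_j - x^{(k)}_j\bigr)^2\Bigr), \qquad \frac{\mathrm{d}}{\mathrm{d}t}\, R_{ik}(t\,v)\Big|_{t=0} = -\sum_{j=1}^{d} v_j \bigl(x^{(i)}_j - x^{(k)}_j\bigr)^2.$$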

B. Test Setting Corresponding to Figure 1

In order to produce Figure 1, the following test function has been applied:

The Kriging predictor function displayed in this figure has been constructed based on the fifteen (randomly chosen) sample points shown in Table 2.


Location

184.018839.4383−15.0146
278.309979.84453.5481
391.164719.755120.0921
433.522376.82313.506
527.777555.3972.10686
647.739762.88715.07917
736.478462.8871−1.23344
895.22351.340126.5839
963.571271.729727.5219
1014.160360.69694.74213
111.6300624.28875.00422
1213.723280.41776.48784
1315.667940.09444.24907
1412.97910.88095.16235
1599.892521.825722.8288

Nomenclature

$d$: Dimension of parameter space
$n$: (Fixed) number of sample points
$x^{(i)}$: $i$th sample location
$y_i$: Sample value at sample location $x^{(i)}$
$I$: Unit matrix
$\mathbb{1}$: Vector with all entries equal to 1
$e_i$: $i$th standard basis vector
$\mathbb{1}^{\perp}$: Subspace of all vectors orthogonal to $\mathbb{1}$
$R(\theta)$: Correlation matrix
$C$: Covariance matrix
$\mathrm{cond}(R)$: Condition number of $R$
$\langle \cdot, \cdot \rangle$: Euclidean scalar product
$e$: Euler's number
$p$: Dimension of regression model
$\boldsymbol{\beta}$: Vector of regression coefficients
$\mathbf{f}(x)\boldsymbol{\beta}$: Regression model
$\varepsilon$: Random error function
$E[\cdot]$: Expectation value
$\sigma$: Standard deviation
$\theta$: Distance weights vector, model hyperparameters.

Acknowledgments

This research was partly sponsored by the European Regional Development Fund and Economic Development Fund of the Federal German State of Lower Saxony, contract/grant no. W3-80026826.

References

  1. Z. H. Han, S. Görtz, and R. Zimmermann, “On improving efficiency and accuracy of variable-fidelity surrogate modeling in aero-data for loads context,” in Proceedings of the European Air and Space Conference (CEAS '09), Manchester, UK, October 2009.
  2. M. C. Kennedy and A. O'Hagan, “Predicting the output from a complex computer code when fast approximations are available,” Biometrika, vol. 87, no. 1, pp. 1–13, 2000.
  3. J. Laurenceau and P. Sagaut, “Building efficient response surfaces of aerodynamic functions with kriging and cokriging,” AIAA Journal, vol. 46, no. 2, pp. 498–507, 2008.
  4. J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn, “Design and analysis of computer experiments,” Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.
  5. R. Zimmermann and Z. H. Han, “Simplified cross-correlation estimation for multi-fidelity surrogate cokriging models,” Advances and Applications in Mathematical Sciences, vol. 7, no. 2, pp. 181–202, 2010.
  6. D. Krige, “A statistical approach to some basic mine valuation problems on the Witwatersrand,” Journal of the Chemical, Metallurgical and Mining Engineering Society of South Africa, vol. 52, no. 6, pp. 119–139, 1951.
  7. A. G. Journel and C. J. Huijbregts, Mining Geostatistics, The Blackburn Press, Caldwell, NJ, USA, 5th edition, 1991.
  8. G. Matheron, “Principles of geostatistics,” Economic Geology, vol. 58, pp. 1246–1266, 1963.
  9. J. J. Warnes and B. D. Ripley, “Problems with likelihood estimation of covariance functions of spatial Gaussian processes,” Biometrika, vol. 74, no. 3, pp. 640–642, 1987.
  10. K. V. Mardia and A. J. Watkins, “On multimodality of the likelihood in the spatial linear model,” Biometrika, vol. 76, no. 2, pp. 289–295, 1989.
  11. R. Ababou, A. C. Bagtzoglou, and E. F. Wood, “On the condition number of covariance matrices in kriging, estimation, and simulation of random fields,” Mathematical Geology, vol. 26, no. 1, pp. 99–133, 1994.
  12. P. Diamond and M. Armstrong, “Robustness of variograms and conditioning of kriging matrices,” Journal of the International Association for Mathematical Geology, vol. 16, no. 8, pp. 809–822, 1984.
  13. D. Posa, “Conditioning of the stationary kriging matrices for some well-known covariance models,” Mathematical Geology, vol. 21, no. 7, pp. 755–765, 1989.
  14. G. J. Davis and M. D. Morris, “Six factors which affect the condition number of matrices associated with kriging,” Mathematical Geology, vol. 29, no. 5, pp. 669–683, 1997.
  15. K. Schöttle and R. Werner, “Improving the most general methodology to create a valid correlation matrix,” Management Information Systems, vol. 9, pp. 701–710, 2004.
  16. Z. Ying, “Asymptotic properties of a maximum likelihood estimator with data from a Gaussian process,” Journal of Multivariate Analysis, vol. 36, no. 2, pp. 280–296, 1991.
  17. H. Zhang and D. L. Zimmerman, “Towards reconciling two asymptotic frameworks in spatial statistics,” Biometrika, vol. 92, no. 4, pp. 921–936, 2005.
  18. M. D. Buhmann, S. Dinew, and E. Larsson, “A note on radial basis function interpolant limits,” IMA Journal of Numerical Analysis, vol. 30, no. 2, pp. 543–554, 2010.
  19. C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, Cambridge, Mass, USA, 2006.
  20. T. J. Santner, B. J. Williams, and W. I. Notz, The Design and Analysis of Computer Experiments, Springer, New York, NY, USA, 2003.
  21. S. Lophaven, H. B. Nielsen, and J. Søndergaard, “DACE—a MATLAB kriging toolbox, version 2.0,” Tech. Rep. IMM-TR-2002-12, Technical University of Denmark, 2002.
  22. N. P. van der Aa, H. G. ter Morsche, and R. R. M. Mattheij, “Computation of eigenvalue and eigenvector derivatives for a general complex-valued eigensystem,” Electronic Journal of Linear Algebra, vol. 16, pp. 300–314, 2007.
  23. R. Zimmermann and S. Görtz, “Non-linear reduced order models for steady aerodynamics,” Procedia Computer Science, vol. 1, no. 1, pp. 165–174, 2010.
  24. T. Bui-Thanh, M. Damodaran, and K. Willcox, “Proper orthogonal decomposition extensions for parametric applications in transonic aerodynamics,” in Proceedings of the 21st AIAA Applied Aerodynamics Conference, Orlando, Fla, USA, 2003, AIAA Paper 2003-4213.

Copyright © 2010 Ralf Zimmermann. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

