Abstract
Covariance is used as an inner product on a formal vector space built on $n$ random variables to define measures of correlation across a set of vectors in an $n$-dimensional space. For $k = 1$, one has the diameter; for $k = 2$, one has an area. These concepts are directly applied to correlation studies in climate science.
1. Introduction
In a study of the earth's climate system, Douglass [1] considered the correlation among a set of climate indices. A distance between two indices $i$ and $j$ was defined as
$$d(i,j) = \arccos\left(|r_{ij}|\right), \tag{1.1}$$
where $r_{ij}$ is the Pearson correlation coefficient. It was stated that $d$ satisfies the conditions to be a metric. The measure of correlation, or closeness, among the indices was taken to be the diameter
$$D = \max_{i,j} d(i,j). \tag{1.2}$$
Equation (1.2) was applied to the data from a global set of four climate indices to determine the correlation among them (minima in $D$) and to infer 18 changes in the state since 1970 (see Section 8). It was pointed out that the topological diameter $D$, as a measure of phase locking among the indices, is convenient for computation but was probably not the best measure. It was suggested that a better measure of correlation among the indices could be based upon the area of the spherical triangles created by the vectors on the unit sphere.
This paper gives a proof that $d$ is a metric and generalizes the diameter $D$ to higher dimensions. In addition, the data of [1] are analyzed using this generalization to areas (see Section 8), and many new abrupt climate changes are identified.
2. Probability
Let $X$ and $Y$ be random variables with expected values $\mu_X = E[X]$ and $\mu_Y = E[Y]$. With these values, we make several standard definitions.
Definition 2.1. The variance of $X$ is defined as $\operatorname{Var}(X) = E\left[(X - \mu_X)^2\right]$.
Definition 2.2. The covariance of $X$ and $Y$ is defined as $\operatorname{Cov}(X,Y) = E\left[(X - \mu_X)(Y - \mu_Y)\right]$.
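As a numerical check of Definitions 2.1 and 2.2, the following minimal Python sketch computes the variance and covariance over a finite joint distribution. The distribution shown (two independent fair bits) is a made-up toy example, not data from [1].

```python
# Toy joint distribution {(x, y): probability}: two independent fair bits.
joint = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def expect(f):
    """E[f(X, Y)] over the joint distribution."""
    return sum(p * f(x, y) for (x, y), p in joint.items())

mu_x = expect(lambda x, y: x)                          # E[X]
mu_y = expect(lambda x, y: y)                          # E[Y]
var_x = expect(lambda x, y: (x - mu_x) ** 2)           # Definition 2.1
cov_xy = expect(lambda x, y: (x - mu_x) * (y - mu_y))  # Definition 2.2
```

Since the two bits are independent, the covariance vanishes, while each bit has variance $1/4$.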
We now list a few basic properties of variance and covariance (found in [2]).
Properties 2.3
For $X$ and $Y$ as above: (i) $\operatorname{Cov}$ is symmetric: $\operatorname{Cov}(X,Y) = \operatorname{Cov}(Y,X)$. (ii) $\operatorname{Cov}$ is bilinear. (iii) $Q(X) = \operatorname{Cov}(X,X)$ is a quadratic form. (iv) $\operatorname{Cov}(X,X) \ge 0$. (v) $\operatorname{Cov}(X,X) = \operatorname{Var}(X)$, the variance of $X$.
Proof. (i) See [2, page 323]. (ii) Follows easily from the definition. (iii) See [2, page 323]. (iv) See [2, page 323].
3. Vector Spaces
The first way most students learn to compare two vectors is through the dot product. The dot product is one example of the more general idea of an inner product. Here we define an inner product and prove that covariance is an inner product.
Definition 3.1. For any real vector space $V$, an inner product is a map $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$ that satisfies the following properties for every $u, v, w \in V$ and $c \in \mathbb{R}$: (i) $\langle u, v\rangle = \langle v, u\rangle$; (ii) $\langle u + v, w\rangle = \langle u, w\rangle + \langle v, w\rangle$; (iii) $\langle cu, v\rangle = c\langle u, v\rangle$; (iv) $\langle u, u\rangle \ge 0$, and $\langle u, u\rangle = 0$ if and only if $u = 0$.
We will now construct a vector space for which covariance is an inner product. Let $\{X_1, \dots, X_n\}$ be a set of random variables. Also let $V = \bigoplus_{i=1}^{n} \mathbb{R}X_i$, the formal $\mathbb{R}$-vector space with basis elements $X_i$. We must put one mild hypothesis upon $V$ in order for it to have the desired properties. The hypothesis is that the vectors must be “probabilistically independent.” That is, for any $X \in V$, we have that $\operatorname{Cov}(X,X) = 0$ if and only if $X = 0$. It should be noted that this independence is in no way related to the linear independence of the random variables.
Proposition 3.2. Let $V = \bigoplus_{i=1}^{n} \mathbb{R}X_i$, the formal $\mathbb{R}$-vector space generated by the random variables $X_1, \dots, X_n$, which are probabilistically independent. Then covariance is an inner product on $V$.
Proof. We must prove the four properties from Definition 3.1.
(i), (ii), and (iii) follow immediately from Properties 2.3.
(iv) $\operatorname{Cov}(X,X) = E\left[(X - \mu_X)^2\right] \ge 0$. The nonnegativity is obvious, as we are squaring a real number. The condition that $\operatorname{Cov}(X,X) = 0$ if and only if $X = 0$ follows from the probabilistic independence of the $X_i$.
The proposition implies that $V$ is an inner product space (a vector space equipped with an inner product), and as such it has a norm defined by $\|X\| = \sqrt{\operatorname{Cov}(X,X)} = \sigma_X$, where $\sigma_X$ is the standard deviation of $X$. Additionally, it follows from the Cauchy-Schwarz inequality [3] that $|\operatorname{Cov}(X,Y)| \le \sigma_X \sigma_Y$.
Using the inner product on $V$, we are able to define an angle between two vectors. To do this, we first define a new map using the standard definition of correlation:
$$\rho(X,Y) = \frac{\operatorname{Cov}(X,Y)}{\sigma_X \sigma_Y}.$$
By the Cauchy-Schwarz inequality, we can easily see that $|\rho(X,Y)| \le 1$; as such, we implicitly define $\theta(X,Y)$, the angle between $X$ and $Y$, as follows:
$$\cos\theta(X,Y) = \rho(X,Y).$$
Therefore, $\theta(X,Y) = \arccos(\rho(X,Y)) \in [0, \pi]$.
Definition 3.3. $\theta(X,Y) = \arccos(\rho(X,Y))$ is the “Correlation Angle” of $X$ and $Y$.
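The correlation angle of Definition 3.3 can be sketched numerically as follows; `correlation_angle` is a hypothetical helper name, and the clamp guards against floating-point rounding pushing the ratio slightly outside $[-1, 1]$.

```python
import math

def correlation_angle(cov_xy, var_x, var_y):
    """theta(X, Y) = arccos(rho), with rho = Cov(X, Y) / (sigma_X * sigma_Y)."""
    rho = cov_xy / math.sqrt(var_x * var_y)
    rho = max(-1.0, min(1.0, rho))  # clamp against rounding outside [-1, 1]
    return math.acos(rho)           # radians, in [0, pi]
```

For $\rho = 1$ the angle is $0$; for $\rho = 0$ it is $\pi/2$; for $\rho = -1$ it is $\pi$.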
Our definition of $\theta$ is the standard method of defining an angle from the covariance (or any other) inner product. We will show that $\theta$ is a “metric” on the unit sphere of $V$.
Definition 3.4. For any set $S$, a map $d : S \times S \to \mathbb{R}$ is a metric if for any $x, y, z \in S$ the following properties are satisfied:
(a) $d(x,y) \ge 0$, with $d(x,y) = 0$ if and only if $x = y$ (positive definite); (b) $d(x,y) = d(y,x)$ (symmetry); (c) $d(x,z) \le d(x,y) + d(y,z)$ (triangle inequality).
Theorem 3.5. The map $\theta$ from Definition 3.3 is a metric on the unit sphere of $V$.
Proof. We must prove that $\theta$ satisfies the 3 conditions in Definition 3.4. (a) $\theta = \arccos(\rho) \in [0, \pi]$, so the nonnegativity is satisfied trivially. It remains to show that $\theta(X,Y) = 0$ if and only if $X = Y$. This is true because if the angle between two vectors is zero, then they are (positive) scalar multiples of each other. Thus, since $X$ and $Y$ are unit vectors, if $\theta(X,Y) = 0$, we must have $X = Y$. (b) $\theta(X,Y) = \theta(Y,X)$ follows from the symmetry of $\operatorname{Cov}$. (c) To prove the triangle inequality, a geometric idea in itself, we delve into the geometry being defined. We will complete this part of the proof in Section 4.
Our metric allows us to measure the correlation between two vectors.
Definition 3.6. For $X$, $Y$, $\rho$, and $\theta$ as above: (i) if $\theta = 0$, then $X$ and $Y$ are maximally positively correlated. (ii) If $\theta = \pi$, then they are maximally negatively correlated. (iii) If $\theta = \pi/2$, then $X$ and $Y$ are uncorrelated.
It should be noted that cases (i) and (ii) are both considered to be “maximally correlated.”
4. A Geometric Interpretation
The vector space $V$ with inner product $\operatorname{Cov}$ lends itself nicely to a geometric interpretation. First we must establish a small amount of background.
Consider $S^{n-1}$, the standard unit sphere in Euclidean $n$-space ($\mathbb{R}^n$). Great circles are the intersections of planes through the origin with $S^{n-1}$. They share many properties with the standard idea of lines in Euclidean space, including the property that they define the shortest path between any two points. For a thorough treatment of great circles as lines on a sphere, see [4–6] or [7].
For any two nonzero vectors $u$ and $v$ in $\mathbb{R}^n$, let $\theta$ be the (minimal) angle formed by $u$ and $v$. The unit vectors $\hat{u} = u/\|u\|$ and $\hat{v} = v/\|v\|$, corresponding to $u$ and $v$, define two points $P$ and $Q$ on $S^{n-1}$. In order to measure the distance from $P$ to $Q$ along $S^{n-1}$, we take the length of the arc on the great circle between the two points. By definition, this is the radian measure of $\theta$.
If $V$, the vector space considered in Section 3, is thought of as $\mathbb{R}^n$, with $X$ and $Y$ any two vectors, then we can compute the spherical distance between $X$ and $Y$, namely, the distance between $\hat{X}$ and $\hat{Y}$ on $S^{n-1}$. We call this quantity $d_S(X,Y) = \theta(X,Y)$.
Thus far we have identified the inner product space $(V, \operatorname{Cov})$ as $\mathbb{R}^n$. We solidify this intuition with the following proposition. First we define the matrix $A = (a_{ij})$ with $a_{ij} = \operatorname{Cov}(X_i, X_j)$, a real-valued symmetric matrix. As in [3], we use $A$ to create the inner product on $\mathbb{R}^n$.
Proposition 4.1. The inner product space $(V, \operatorname{Cov})$ is isomorphic to $(\mathbb{R}^n, \langle\cdot,\cdot\rangle_A)$, where $\langle\cdot,\cdot\rangle_A$ is a “twisted dot product” defined for two vectors $u$ and $v$ as
$$\langle u, v\rangle_A = u^{T} A v = \sum_{i,j} u_i\, a_{ij}\, v_j.$$
Proof. This follows from the standard method of representing an inner product by a matrix (see [3, chapter 8.1]).
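As a sketch of the twisted dot product, assuming a hypothetical symmetric $2 \times 2$ covariance matrix `A` (made-up numbers, not taken from the paper's data):

```python
# Hypothetical covariance matrix A, with A[i][j] = Cov(X_i, X_j).
A = [[2.0, 0.5],
     [0.5, 1.0]]

def twisted_dot(u, v):
    """<u, v>_A = u^T A v, the inner product induced by the matrix A."""
    return sum(u[i] * A[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))
```

Because $A$ is symmetric, `twisted_dot(u, v) == twisted_dot(v, u)`, matching property (i) of Definition 3.1.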
Now we return to our proof of Theorem 3.5.
Proof of Theorem 3.5(c). Let $X, Y, Z \in V$ be unit vectors. We have left to show that $\theta(X,Z) \le \theta(X,Y) + \theta(Y,Z)$.
Because $X$ and $Y$ are unit vectors, $\theta(X,Y)$ is the geodesic distance between $\hat{X}$ and $\hat{Y}$ on $S^{n-1}$. Since geodesic distance satisfies the triangle inequality, $\theta$ must as well.
5. Projective Metric
For scientists, $\rho = 1$ and $\rho = -1$ (equivalently $\theta = 0$ or $\theta = \pi$) are often both considered to be “maximally correlated”; for example, see [1]. To take this into account, we modify our metric on the unit sphere of $V$. We think of $S^{n-1}$ as a projective space, the space of lines through the origin of $\mathbb{R}^n$. We denote this space as $\mathbb{RP}^{n-1}$.
Our original correlation angle is modified to be
$$\theta_P(X,Y) = \min\left(\theta(X,Y),\ \pi - \theta(X,Y)\right) = \arccos\left(|\rho(X,Y)|\right).$$
Proposition 5.1. $\theta_P$ is a metric on $\mathbb{RP}^{n-1}$.
Proof. We must show that the three conditions of Definition 3.4 are met.
(i) $\theta_P(X,Y) = 0$ corresponds to a correlation angle of $0$ or $\pi$. The two vectors are either in the same direction or opposite directions. In either case, they determine the same line through the origin and hence correspond to the same point in projective space. (ii) As in Definition 3.4, the symmetry of $\theta_P$ follows from the symmetry of $\theta$. (iii) As before, the triangle inequality follows as $\theta_P$ is the geodesic distance for a projective space.
The metric $\theta_P$ gives the angular distance between $X$ and $Y$. If $\rho = \pm 1$ (what we called a “maximal correlation”), then $\theta_P = 0$; however, if $\rho = 0$, which we called orthogonality or noncorrelation, then $\theta_P = \pi/2$.
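A minimal sketch of the projective distance in terms of the correlation coefficient (`projective_distance` is a hypothetical helper name):

```python
import math

def projective_distance(rho):
    """theta_P = arccos(|rho|): 0 for rho = +/-1 (maximal correlation),
    pi/2 for rho = 0 (no correlation)."""
    return math.acos(min(1.0, abs(rho)))  # clamp guards rounding above 1
```

Note that both $\rho = 1$ and $\rho = -1$ map to distance $0$, collapsing antipodal points to the same point of projective space.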
Proposition 5.2. Let $d_P$ be the metric $\theta_P$. Then the pair $(\mathbb{RP}^{n-1}, d_P)$ is a projective metric space.
Proof. This is by construction.
6. Time Dependence
Until this point, we have treated our random variables as being time independent. However, random variables often depend on time. Therefore, we will now consider each random variable as depending discretely on time. It should be noted that what follows is essentially a replication of what has come before; however, $X$ and $Y$ are now treated as vectors instead of singleton points. Vectors, however, are just points of $\mathbb{R}^N$. The additional theory and notation is simply a means of dealing with the additional information.
To make our random variables time dependent, they will now be given as $X(t_k) = x_k$ for discrete times $t_1, t_2, \dots$.
We must now redefine the covariance. We do this by looking at a time window starting at time $t_0$ with a duration of $N$ steps, where $N$ is called the summation window:
$$\operatorname{Cov}(X,Y) = \frac{1}{N} \sum_{k=t_0}^{t_0+N-1} (x_k - \bar{x})(y_k - \bar{y}),$$
where $\bar{x}$ and $\bar{y}$ are the sample means in the summation window of $X$ and $Y$, respectively. That is, $\bar{x} = \frac{1}{N}\sum_{k=t_0}^{t_0+N-1} x_k$.
If we think of $X$ and $Y$ as the vectors $\tilde{X} = (x_{t_0} - \bar{x}, \dots, x_{t_0+N-1} - \bar{x})$ (resp., $\tilde{Y}$ for $Y$), then we get that
$$\operatorname{Cov}(X,Y) = \frac{1}{N}\, \tilde{X} \cdot \tilde{Y},$$
where “$\cdot$” is the standard Euclidean dot product, and $\tilde{X}$ is the length-$N$ vector above (resp., $\tilde{Y}$ for $Y$). This is called the “Pearson Covariance.”
In other words, with the vectors $\tilde{X}$ and $\tilde{Y}$ defined as above, we define the Pearson Correlation as follows.
Definition 6.1. $r(X,Y) = \dfrac{\tilde{X} \cdot \tilde{Y}}{\|\tilde{X}\|\,\|\tilde{Y}\|}$, where “$\cdot$” is the usual Euclidean inner product.
Now we define the Pearson Correlation Angle as
$$\theta(X,Y) = \arccos\left(r(X,Y)\right).$$
Here $\theta$ again corresponds to the standard Euclidean angle, and the resulting metric is the standard metric studied in classical spherical geometry (see [4–6] or [7]).
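The Pearson correlation over a single summation window can be sketched in pure Python as follows (`pearson_r` is a hypothetical helper name; the sample values are made up):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation of two equal-length samples (one summation window)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    xc = [x - xbar for x in xs]  # the centered vector X-tilde
    yc = [y - ybar for y in ys]  # the centered vector Y-tilde
    dot = sum(a * b for a, b in zip(xc, yc))
    norm = math.sqrt(sum(a * a for a in xc)) * math.sqrt(sum(b * b for b in yc))
    return dot / norm
```

The $1/N$ factor in the Pearson covariance cancels in the ratio, so it does not appear in the code.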
Remark 6.2. The angle between $\tilde{X}$ and $\tilde{Y}$ is the same as the angle between $\hat{X}$ and $\hat{Y}$, the unit vectors corresponding to $\tilde{X}$ and $\tilde{Y}$.
7. Correlation Measures: $M_k$ and $H_k$
To this point, we have developed a method that will numerically tell us the correlation between two vectors. In this section, we will create two sets of functions that allow us to measure the correlation across a set of vectors. The first set, $M_k$, is based upon taking the volumes of $k$-simplices (a 1-simplex is a line segment, a 2-simplex a triangle, a 3-simplex a tetrahedron, etc.). The set of $M_k$ benefits from computability but is not as precise as the second set of measures, $H_k$, which measure the volume of $k$-dimensional convex hulls.
Given a set of vectors $\{X_1, \dots, X_n\}$, let $\{\hat{X}_1, \dots, \hat{X}_n\}$ be the set of corresponding unit vectors. We will define a way to measure the closeness of the $\hat{X}_i$ to each other using the metric $\theta$. To do this, we define the diameter of $\{\hat{X}_i\}$ as
$$D = \max_{i,j} \theta(\hat{X}_i, \hat{X}_j).$$
If all of the vectors are taken in the standard way to be points on the unit sphere, then the diameter is a measure of the overall spread of the points. If the diameter is small, then the vectors are all close together, hence highly correlated; whereas if the diameter is large, at least some of the points are far apart, hence not highly correlated. The benefit of the diameter is that it is an easy quantity to calculate; however, it can be somewhat misleading. If, for instance, a large number of points are clustered together and there is one outlying point, the diameter can be quite large despite the fact that the points are generally quite correlated.
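The diameter can be sketched as a maximum over pairwise angular distances between unit vectors (hypothetical helper names; the coordinate-axis points in the usage note are a made-up example):

```python
import math

def angular_distance(u, v):
    """Angle (geodesic distance) between two unit vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding

def diameter(points):
    """D: the largest pairwise angular distance among unit vectors."""
    return max(angular_distance(p, q)
               for i, p in enumerate(points) for q in points[i + 1:])
```

For the three coordinate axes in $\mathbb{R}^3$, every pairwise angle is $\pi/2$, so the diameter is $\pi/2$.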
We now proceed to generalize the correlation measure defined by $D$. Let $P$ be a collection of points on the $(n-1)$-sphere, and let $S_k$ be the set of $k$-simplices made up of points in $P$.
Definition 7.1. $M_k(P) = \max_{\sigma \in S_k} \operatorname{Vol}_k(\sigma)$.
This maximum is taken over the different $k$-simplices $\sigma$ made of points in $P$.
Definition 7.2. $H_k(P) = \operatorname{Vol}_k\left(\operatorname{Hull}(P)\right)$.
The volume used in Definition 7.2 is the spherical volume, and $\operatorname{Hull}(P)$ is the convex hull of the points of $P$ with respect to the spherical measure. That is, it is the smallest geodesically convex set containing $P$. (Geodesically convex means that any two points in the set have the minimal geodesic between them completely in the set as well.)
The volume $H_k$ is computed by constructing the convex hull of $P$, then disregarding all the points of $P$ not contributing to the hull. The hull is then divided into its “essential” $k$-simplices, and the volumes of these simplices are summed.
$M_k$ and $H_k$ are each measures of $k$-dimensional volume. $M_k$ benefits from being easily computable. $H_k$, though harder to compute, gives a better measure of the overall spread of the vectors. In the one-dimensional case, however, we have that $M_1 = H_1 = D$, the diameter. The reason for this is that, when making the hull to compute $H_1$, all but the two furthermost points will be disregarded. This equality is not true in general, a fact which can be easily observed by plotting four points forming a quadrilateral, where $M_2 < H_2$. In the general case, however, we do have the inequality $M_k \le H_k$. This follows since the maximal simplex will necessarily be a subset of the convex hull; since volume is monotonic, we have the inequality.
Assume that $m$ of the points of $P$ are essential to the convex hull. There is a constant $c(m)$ giving the number of essential simplices that compose the convex hull. That is,
$$H_k = \sum_{i=1}^{c(m)} \operatorname{Vol}_k(\sigma_i).$$
Replacing the volume of each spherical simplex with the maximal one, that is, $\operatorname{Vol}_k(\sigma_i) \le M_k$, we get the following inequalities:
$$M_k \le H_k \le c(m)\, M_k.$$
Since $c(m)$ depends only on the number of points in $P$, we see that, for a fixed data set, $M_k$ and $H_k$ differ by at most a fixed constant.
To relate $M_k$ and $H_k$ to Section 6, we note that, when $n$ time-dependent random variables are looked at over a summation window of length $N$, we get $n$ points on the unit sphere in $\mathbb{R}^N$. In this situation, we can apply the measures of spread given by $D$, $M_k$, or $H_k$ for $k \le n - 1$.
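For points on the ordinary 2-sphere, the maximal-simplex area measure ($k = 2$) can be sketched as the maximal spherical-triangle area over all triples, with each triangle's area given by Girard's theorem (angle excess). This is an illustrative sketch assuming three-dimensional data, with hypothetical helper names:

```python
import math
from itertools import combinations

def _arc(u, v):
    """Arc length (angle) between unit vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

def spherical_triangle_area(p, q, r):
    """Area on the unit 2-sphere via Girard's theorem:
    area = (sum of the three vertex angles) - pi."""
    a, b, c = _arc(q, r), _arc(p, r), _arc(p, q)  # side arc lengths
    def vertex(opp, s1, s2):
        # vertex angle from the spherical law of cosines, clamped for rounding
        val = (math.cos(opp) - math.cos(s1) * math.cos(s2)) / (math.sin(s1) * math.sin(s2))
        return math.acos(max(-1.0, min(1.0, val)))
    return vertex(a, b, c) + vertex(b, a, c) + vertex(c, a, b) - math.pi

def max_triangle_area(points):
    """The maximal spherical-triangle area over all triples of points."""
    return max(spherical_triangle_area(p, q, r)
               for p, q, r in combinations(points, 3))
```

For example, the triangle spanned by the three coordinate axes is an octant of the sphere, with area $\pi/2$; adding a fourth point between two axes does not increase the maximal area.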
8. Topology of Earth's Climate Indices and Phase-Locked States
In this section, we apply our new correlation measure to data from Douglass's paper [1]. In [1], the diameter ($D$) is used to analyze a set of climate data; in this section, we use the maximal area $M_2$ to analyze the same data. Comparing the results of the new analysis to Douglass's original analysis shows the increased effectiveness of the new correlation measure.
Various regions of the Earth's climate system are characterized by temperature and pressure indices. Douglass [1], in a study of a global set of four indices, defines a distance $d(i,j) = \arccos(|r_{ij}|)$ between indices that satisfies the properties required to be a metric (Definition 3.4), where $r_{ij}$ is the Pearson correlation coefficient. Note that the distance $d(i,j)$ is an angle.
In Section 7 we showed that the correlation among a set of indices can be measured using $M_k$, by taking the volumes of $k$-simplices. In [1], Douglass uses the diameter of the metric space, defined as $D = \max_{i,j} d(i,j)$.
In the notation of Section 7, $D = M_1$. Geometrically, $D$ selects the largest angle among the set. The diameter may be considered a “dissimilarity” index, because large $D$ means weak correlation. Thus, the minima in $D$ are associated with high correlation among the elements of the set. In Douglass [1], two cases were considered: (1) the set of 3 Pacific Ocean indices and (2) the global set of 4 indices (6 independent pairs). The $D$ of the global set is shown (in red) in Figure 1.
The maximal area $M_2$, the generalized correlation measure, was computed for the same four indices of [1]. The plot of the $M_2$ calculation is shown (blue) in Figures 1(a) and 1(b). Comparison of the two plots shows that the area measure $M_2$ reveals more minima than the diameter $D$. The various minima are indicated by arrows in Figures 1(a) and 1(b), and a list of dates is given in Table 1.
9. Summary
By using covariance on a set of time-independent random variables, or the covariance defined by the Pearson correlation on a set of time-dependent variables, we create metrics $\theta$ and $\theta_P$ (resp.) on the unit sphere (resp., projective space) of the corresponding formal vector spaces. If $V$ is the $n$-dimensional formal vector space whose basis is the set of random variables $\{X_1, \dots, X_n\}$, we use $\theta$ or $\theta_P$ to create $M_k$ or $H_k$, two measures of spread on values taken by the $X_i$. In Section 8, we give an explicit example showing the use of $M_2$ on a global set of climate indices.
The two measures of spread differ by at most a fixed multiplicative constant, so for theoretical purposes, they are of equivalent use. However, when applied, they can have different values. The volume of the convex hull created by the $\hat{X}_i$, given by $H_k$, is the most precise measure of the correlation of the $X_i$; however, it is computationally difficult. The maximal volume of all possible $k$-simplices defined by the $\hat{X}_i$, given by $M_k$, is a rougher measure of correlation. However, $M_k$ is a simpler computation than $H_k$.
In the 2-dimensional example, where all the vectors lie on the 2-sphere, one can apply $D$, $M_2$, or $H_2$. In general, $M_2$ is coarser than $H_2$ but is significantly easier to compute. For example, in [1] and Section 8, the use of $M_2$ yields much finer and cleaner results than the use of $D$. More generally, in $n$ dimensions and for any $k \le n - 1$, $M_k \le H_k$, and one sacrifices accuracy for ease.