Advances in Decision Sciences
Volume 2013 (2013), Article ID 671204, 15 pages
Research Article

Revision: Variance Inflation in Regression

D. R. Jensen1 and D. E. Ramirez2

1Department of Statistics, Virginia Tech, Blacksburg, VA 24061, USA
2Department of Mathematics, University of Virginia, P.O. Box 400137, Charlottesville, VA 22904, USA

Received 12 October 2012; Accepted 10 December 2012

Academic Editor: Khosrow Moshirvaziri

Copyright © 2013 D. R. Jensen and D. E. Ramirez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Variance Inflation Factors (VIFs) are reexamined as conditioning diagnostics for models with intercept, with and without centering regressors to their means, as oft debated. Conventional VIFs, both centered and uncentered, are flawed. To rectify matters, two types of orthogonality are noted: vector-space orthogonality and uncorrelated centered regressors. The key to our approach lies in feasible Reference models encoding orthogonalities of these types. For models with intercept it is found that (i) uncentered VIFs are not ratios of variances as claimed, owing to infeasible Reference models; (ii) instead they supply informative angles between subspaces of regressors; (iii) centered VIFs are incomplete, if not misleading, masking collinearity of regressors with the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

1. Introduction


Given the model $\{Y = X\beta + \epsilon\}$ with $X$ of full rank and zero-mean, uncorrelated, and homoscedastic errors, the normal equations yield the estimators $\hat{\beta} = (X^\top X)^{-1}X^\top Y$ for $\beta$ as unbiased with dispersion matrix $V(\hat{\beta}) = \sigma^2(X^\top X)^{-1}$. Ill-conditioning, as near-dependent columns of $X$, exerts profound and interlinked consequences, causing "crucial elements of $(X^\top X)^{-1}$ to be large and unstable," "creating inflated variances"; estimates excessive in magnitude, irregular in sign, and "very sensitive to small changes in $X$"; and unstable algorithms having "degraded numerical accuracy." See [1-3] for example.

Ill-conditioning diagnostics include the condition number $c(X^\top X) = \lambda_1/\lambda_p$, the ratio of its largest to smallest eigenvalues, and the Variance Inflation Factors $\{\mathrm{VIF}_j = a_{jj}a^{jj};\ 1 \le j \le p\}$, with $a_{jj}$ and $a^{jj}$ as diagonal elements of $X^\top X$ and $(X^\top X)^{-1}$; that is, ratios of actual to "ideal" variances, had the columns of $X$ been orthogonal. On scaling the latter to unit lengths and ordering as $\mathrm{VIF}_1 \ge \cdots \ge \mathrm{VIF}_p$, $\mathrm{VIF}_1$ is identified in [4] as "the best single measure of the conditioning of the data." In addition, the bounds of [5] apply also in stepwise regression as in [6-9].

Users deserve to be apprised not only that data are ill conditioned, but also about workings of the diagnostics themselves. Accordingly, we undertake here to rectify long-standing misconceptions in the use and properties of VIFs, necessarily retracing through several decades to their beginnings.

To choose a model, with or without intercept, is substantive, is specific to each experimental paradigm, and is beyond the scope of the present study. Whatever the choice, fixed in advance in a particular setting, these models follow on specializing $\{Y = X\beta + \epsilon\}$ as $\{Y = \beta_0\mathbf{1}_n + X_1\beta_1 + \cdots + X_k\beta_k + \epsilon\}$ with intercept, giving models designated $\mathcal{M}_1$, and without intercept, giving $\mathcal{M}_0$; here $[X_1, \ldots, X_k]$ comprise vectors of regressors, with $\beta_0$ as intercept and $[\beta_1, \ldots, \beta_k]$ as slopes. The VIFs as defined are essentially undisputed for models in $\mathcal{M}_0$, as noted in [10], serving to gage effects of nonorthogonal regressors as ratios of variances. In contrast, a yet unresolved debate surrounds the choice of conditioning diagnostics for models in $\mathcal{M}_1$, namely, between uncentered regressors, giving $\mathrm{VIF}_u$'s, and regressors centered to their means, giving $\mathrm{VIF}_c$'s. Specifically, the uncentered versions take ratios of diagonal elements of $A^{-1}$ to reciprocals of diagonal elements of $A = [\mathbf{1}_n, X]^\top[\mathbf{1}_n, X]$. In contrast, letting $R$ be the correlation matrix for the centered and scaled regressors and $R^{-1} = [r^{ij}]$ its inverse, the centered versions are $\{\mathrm{VIF}_c(\hat{\beta}_j) = r^{jj};\ 1 \le j \le k\}$, for slopes only.
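For concreteness, the conventional quantities just defined admit a direct computation. The following is a minimal sketch in Python with NumPy; the toy design, the function name conventional_vifs, and the printed labels are illustrative assumptions, not data or code from this article.

```python
import numpy as np

def conventional_vifs(X):
    """X holds the k regressor columns; the intercept is appended here."""
    n, _ = X.shape
    Xs = np.column_stack([np.ones(n), X])      # [1_n, X_1, ..., X_k]
    A = Xs.T @ Xs                              # second-moment matrix
    Ainv = np.linalg.inv(A)
    # Uncentered VIFs: diag(A^{-1}) against reciprocals of diag(A),
    # i.e., against a diagonal reference that is generally infeasible.
    vif_u = np.diag(Ainv) * np.diag(A)
    # Centered VIFs: diagonals of the inverse correlation matrix of the
    # mean-centered, unit-scaled regressors -- defined for slopes only.
    Z = X - X.mean(axis=0)
    Z = Z / np.linalg.norm(Z, axis=0)
    vif_c = np.diag(np.linalg.inv(Z.T @ Z))
    return vif_u, vif_c

X = np.array([[1., 2.], [2., 1.], [3., 5.], [4., 4.], [5., 6.]])
vif_u, vif_c = conventional_vifs(X)
print("uncentered (incl. intercept):", vif_u)
print("centered (slopes only):     ", vif_c)
```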

It is seen here that (i) these differ profoundly in regard to their respective concepts of orthogonality; (ii) objectives and meanings differ accordingly; (iii) sharp divisions trace to muddling these concepts; and (iv) this distinction assumes a pivotal role here. Over time a large body of widely held beliefs, conjectures, intrinsic propositions, and conventional wisdom has accumulated, much from flawed logic, some to be dispelled here. The key to our studies is that VIFs, to be meaningful, must compare actual variances to those of an "ideal" second-moment matrix as reference, the latter to embody the conjectured type of orthogonality. This differs between centered and uncentered diagnostics and for both types requires the reference matrix to be feasible. An outline follows.

Our undertaking consists essentially of four parts. The first is a literature survey of some essentials of ill-conditioning, to include the divide between centered and uncentered diagnostics and conventional VIFs. The structure of orthogonality is examined next. Anomalies in usage, meaning, and interpretation of conventional VIFs are exposed analytically and through elementary and transparent case studies. Long-standing but ostensible foundations in turn are reassessed and corrected through the construction of "Reference models." These are moment arrays constrained to encode orthogonalities of the types considered. Neither array returns the conventional VIF_u's nor the VIF_c's. Direct rules for finding the amended Reference models are given, preempting the need for constrained numerical algorithms. Finally, studies of ill-conditioned data from the literature are reexamined in light of these findings.

2. Preliminaries

2.1. Notation

Designate by $\mathbb{R}^n$ the Euclidean $n$-space. Matrices and vectors are set in bold type; the transpose and inverse of $A$ are $A^\top$ and $A^{-1}$; and $a_{ij}$ refers on occasion to the $(i,j)$ element of $A$. Special arrays are the identity $I_n$; the unit vector $\mathbf{1}_n = [1, \ldots, 1]^\top$; the diagonal matrix $D = \mathrm{Diag}(d_1, \ldots, d_k)$; and $B_n = I_n - n^{-1}\mathbf{1}_n\mathbf{1}_n^\top$, as idempotent of rank $n-1$. The Frobenius norm of $A$ is $\|A\|_F = [\mathrm{tr}(A^\top A)]^{1/2}$. For $X$ of order $(n \times p)$ and rank $p \le n$, a generalized inverse is designated as $X^\dagger$; its ordered singular values as $\{\xi_1 \ge \xi_2 \ge \cdots \ge \xi_p > 0\}$; and by $\mathrm{Sp}(X)$ the subspace of $\mathbb{R}^n$ spanned by the columns of $X$. By accepted convention its condition number is $c(X) = \xi_1/\xi_p$. For our model $\{Y = X\beta + \epsilon\}$, with dispersion $V(\epsilon) = \sigma^2 I_n$, we take $\sigma^2 = 1$ unless stated otherwise, since variance ratios are scale invariant.

2.2. Case Study 1: A First Look

That anomalies pervade conventional VIF_c's and VIF_u's may be seen as follows. Given the design $X_* = [\mathbf{1}_5, X_1, X_2]$ of order $(5 \times 3)$, with second-moment matrix $X_*^\top X_*$ and its inverse as in expressions (1), conventional centered and uncentered VIFs follow at (2) and (3), respectively, the former for slopes only and the latter taking reciprocals of the diagonals of $X_*^\top X_*$ as reference.

A Critique. The following points are basic. Note first that model (1) is nonorthogonal in both the centered and uncentered regressors.

Remark 1. The VIF_u's are not ratios of variances and thus fail to gage relative increases in variances owing to nonorthogonal columns of $X_*$. This follows since the first row and column of the second-moment matrix $X_*^\top X_*$ are fixed and nonzero by design, so that taking $X_*^\top X_*$ to be diagonal as reference cannot be feasible.

Remark 1 runs contrary to assertions throughout the literature. In consequence, for models in $\mathcal{M}_1$ the mainstay VIF_u's in recent vogue are largely devoid of meaning. Subsequently these are identified instead with angles quantifying degrees of multicollinearity among the regressors.

On the other hand, feasible Reference models for all parameters, as identified later for centered and uncentered data in Definition 13 of Section 4.2, give the Variance Factors at (4) and (5) in lieu of conventional VIF_c's and VIF_u's, respectively. The former comprise corrected VIF_c's, extended to include the intercept. Both sets in fact are genuine variance inflation factors, as ratios of variances in the model (1), relative to those in Reference models feasible for centered and for uncentered regressors, respectively.

This example flagrantly contravenes conventional wisdom: (i) variances for slopes are inflated at (4), but for the intercept deflated, in comparison with the feasible centered reference. Specifically, $\beta_0$ is estimated here with greater efficiency in the initial design (1), despite nonorthogonality of its centered regressors. (ii) Variances are uniformly smaller in the model (1) than in its feasible uncentered reference from (5), thus exhibiting Variance Deflation, despite nonorthogonality of the uncentered regressors. A full explication of the anatomy of this study appears later. In support, we next examine technical details needed in subsequent developments.

2.3. Types of Orthogonality

The ongoing divergence between centered and uncentered diagnostics traces in part to meanings ascribed to orthogonality. Specifically, the orthogonality of columns $\{X_i, X_j\}$ of $X$ refers unambiguously to the vector-space concept $X_i^\top X_j = 0$; so also does the notion of collinearity of regressors with the constant vector $\mathbf{1}_n$. We refer to this as V-orthogonality, in short $V^\perp$. In contrast, nonorthogonality in $\mathcal{M}_1$ often refers instead to the statistical concept of correlation among columns of $X$ when scaled and centered to their means. We refer to its negation as C-orthogonality, or $C^\perp$. Distinguishing between these notions is fundamental, as confusion otherwise is evident. For example, it is asserted in [11, p.125] that "the simple correlation coefficient does measure linear dependency between $x$ and $y$ in the data."
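To make the distinction concrete, the following minimal sketch (Python with NumPy; the function names and the toy matrix are illustrative assumptions) tests each notion directly from its definition.

```python
import numpy as np

def is_V_orthogonal(X, tol=1e-10):
    """Vector-space orthogonality: X_i' X_j = 0 for all i != j."""
    G = X.T @ X
    return bool(np.all(np.abs(G - np.diag(np.diag(G))) < tol))

def is_C_orthogonal(X, tol=1e-10):
    """C-orthogonality: zero cross-products after centering to means."""
    Z = X - X.mean(axis=0)                 # Z = B_n X
    G = Z.T @ Z
    return bool(np.all(np.abs(G - np.diag(np.diag(G))) < tol))

# Columns with nonzero means cannot enjoy both properties at once:
X = np.array([[1., 0.], [0., 1.], [0., 0.]])
print(is_V_orthogonal(X), is_C_orthogonal(X))   # True False
```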

2.4. The Models

Consider $\{Y = \beta_0\mathbf{1}_n + X\beta + \epsilon\}$ in $\mathcal{M}_1$, with $X = [X_1, \ldots, X_k]$ and $X_* = [\mathbf{1}_n, X]$, together with

$$A = X_*^\top X_* = \begin{bmatrix} n & n\bar{x}^\top \\ n\bar{x} & X^\top X \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} a^{00} & \mathbf{a}_0^\top \\ \mathbf{a}_0 & G \end{bmatrix}, \tag{6}$$

where $\bar{x} = [\bar{x}_1, \ldots, \bar{x}_k]^\top$ holds the column means and the blocks $a^{00}$, $\mathbf{a}_0$, and $G$ follow from block-partitioned inverses. In later sections we often will denote the mean-centering matrix by $B_n = I_n - n^{-1}\mathbf{1}_n\mathbf{1}_n^\top$. In particular, the centered form arises exclusively in models with intercept, with or without reparametrizing in the mean-centered regressors $Z = B_nX$, where $Z^\top Z = X^\top X - n\bar{x}\bar{x}^\top$. Scaling the centered regressors to unit column lengths gives $Z^\top Z$ in correlation form $R$ with unit diagonals.

3. Historical Perspective

Our objective is an overdue revision of the tenets of variance inflation in regression. To provide context, we next survey extenuating issues from the literature. Direct quotations are intended so as not to subvert stances taken by the cited authors. Models in $\mathcal{M}_1$ are at issue since, as noted in [10], centered diagnostics have no place in $\mathcal{M}_0$.

3.1. Background

Aspects of ill-conditioning span a considerable literature over many decades. Regarding $c(X)$, scaling columns of $X$ to equal lengths approximately minimizes the condition number [12, p.120], based on [13]. Nonetheless, $c(X)$ is cast in [9] as a blunt instrument for ill-conditioning, prompting the need for VIFs and other local constructs. Stewart [9] credits VIFs in concept to Daniel and in name to Marquardt.

Ill-conditioning points beyond $c(X)$ in view of difficulties cited earlier. Remedies proffered in [14, 15] include transforming variables, adding new data, and deleting variable(s) after checking critical features of the reduced model. Other intended palliatives include Ridge and Partial Least Squares regression, as compared in [16]; Principal Components regression; and Surrogate models as in [17]. All are intended to return reduced standard errors at the expense of bias. Moreover, Surrogate solutions more closely resemble those from an orthogonal system than do those from Ridge [18]. Together the foregoing and other options comprise a substantial literature, collateral to, but apart from, the present study.

3.2. To Center

Advocacy for centering includes the following.
(i) VIFs often are defined categorically as the diagonals of the inverse of the correlation matrix of scaled and centered regressors; see [4, 9, 11, 19] for example. These are VIF_c's, widely adopted without justification as default, to the exclusion of VIF_u's.
(ii) It is asserted in [4] that "centering removes the nonessential ill-conditioning, thus reducing the variance inflation in the coefficient estimates."
(iii) Centering is advocated when predictor variables are far removed from origins on the basic data scales [10, 11].

3.3. Not to Center

Advocacy for uncentered diagnostics includes the following caveats from proponents of centering.
(i) Uncentered data should be examined only if an estimate of the intercept is of interest [9, 10, 20].
(ii) "If the domain of prediction includes the full range from the natural origin through the range of data, the collinearity diagnostics should not be mean centered" [10, p.84].

Other issues against centering derive in part from numerical analysis and from work by Belsley.
(i) Belsley [1] identifies the condition number for a system as "the potential relative change in the LS solution that can result from a small relative change in the data."
(ii) Condition numbers require structurally interpretable variables, as "ones whose numerical values and (relative) differences derive meaning and interpretability from knowledge of the structure of the underlying 'reality' being modeled" [1, p.75].
(iii) "There is no such thing as 'nonessential' ill-conditioning," and "mean-centering can remove from the data the information needed to assess conditioning correctly" [1, p.74].
(iv) "Collinearity with the intercept can quite generally corrupt the estimates of all parameters in the model whether or not the intercept is itself of interest and whether or not the data have been centered" [21, p.90].
(v) An example [22, p.121] gives severely ill-conditioned data that are perfectly conditioned in centered form: "centering alters neither" inflated variances nor extremely sensitive parameter estimates in the basic data; moreover, "diagnosing the conditioning of the centered data (which are perfectly conditioned) would completely overlook this situation, whereas diagnostics based on the basic data would not."
(vi) To continue from [22], ill-conditioning persists in the propagation of disturbances, in that a 1 percent relative change in the data results in over a 40% relative change in the estimates, despite perfect conditioning in centered form, and "knowledge of the effect of small relative changes in the centered data is not meaningful for assessing the sensitivity of the basic LS problem," since relative changes and their meanings in centered and uncentered data often differ markedly.
(vii) Regarding choice of origin, "... the investigator must be able to pick an origin against which small relative changes can be appropriately assessed and it is the data measured relative to this origin that are relevant to diagnosing the conditioning of the LS problem" [22, p.126].

Other desiderata pertain.
(i) "Because rewriting the model (in standardized variables) does not affect any of the implicit estimates, it has no effect on the amount of information contained in the data" [23, p.76].
(ii) Consequences of ill-advised diagnostics can be severe. Degraded numerical accuracy traces to near collinearity of regressors with the constant vector. In short, centering fails to prevent a loss in numerical accuracy; centered diagnostics are unable to discern these potential accuracy problems, whereas uncentered diagnostics are seen to work well. Two widely used statistical packages, SAS and SPSS-X, fail to detect this type of ill-conditioning through use of centered diagnostics and thus return highly inaccurate coefficient estimates. For further details see [3].

On balance, for models in $\mathcal{M}_1$ the jury is out regarding the use of centered or uncentered diagnostics, to include VIFs. Even Belsley [1] (and elsewhere) concedes circumstances where centering does achieve structurally interpretable models. Of note is that the foregoing citations to Belsley apply strictly to condition numbers; other purveyors of ill-conditioning, specifically VIFs, are not treated explicitly.

3.4. A Synthesis

It bears notice that (i) the origin, however remote from the cluster of regressors, is essential for prediction, and (ii) the prediction variance is invariant to parametrizing in centered or uncentered forms. Additional remarks are codified next for subsequent referral.

Remark 2. Typically $Y$ represents response to input variables $[X_1, \ldots, X_k]$. In a controlled experiment, levels are determined beforehand by subject-matter considerations extraneous to the experiment, to include minimal values. However remote the origin on the basic data scales, it seems informative in such circumstances to identify the origin with these minima. In such cases the intercept is often of singular interest, since $\beta_0$ is then the standard against which changes in $Y$ are to be gaged as regressors vary. We adopt this convention in subsequent studies from the archival literature.

Remark 3. In summary, the divergence in views, whether to center or not, appears to be that critical aspects of ill-conditioning, known and widely accepted for models in $\mathcal{M}_0$, have been expropriated over decades, mindlessly and without verification, to apply point-by-point for models in $\mathcal{M}_1$.

4. The Structure of Orthogonality

This section develops the foundations for Reference models capturing orthogonalities of types $V^\perp$ and $C^\perp$. Essential collateral results are given in support as well.

4.1. Collinearity Indices

Stewart [9] reexamines ill-conditioning from the perspective of numerical analysis. Details follow, where $X$ is a generic matrix of regressors having columns $\{X_j\}$ and $X^\dagger$ is the generalized inverse of note, having $\{x_j^\dagger\}$ as its typical rows. Each corresponding collinearity index is defined in [9, p.72] as $\kappa_j = \|X_j\|\,\|x_j^\dagger\|$, constructed so as to be scale invariant. Observe that $\|x_j^\dagger\|^2$ is found along the principal diagonal of $(X^\top X)^{-1}$. When $X$ is centered and scaled, Section 3 of [9] shows that the centered collinearity indices satisfy $\kappa_{cj}^2 = \mathrm{VIF}_c(\hat{\beta}_j)$. In $\mathcal{M}_1$ with parameters $[\beta_0, \beta_1, \ldots, \beta_k]$, the values $\{\|x_j^\dagger\|^2;\ 0 \le j \le k\}$ lie along the principal diagonal of $A^{-1}$; the uncentered collinearity indices now satisfy $\kappa_j^2 = \mathrm{VIF}_u(\hat{\beta}_j)$. In particular, since $X_0 = \mathbf{1}_n$, we have $\kappa_0 = \sqrt{n}\,\|x_0^\dagger\|$. Moreover, in $\mathcal{M}_1$ it follows that the uncentered VIFs are squares of the collinearity indices, that is, $\mathrm{VIF}_u(\hat{\beta}_j) = \kappa_j^2$. Note the asymmetry that VIF_c's exclude the intercept, in contrast to the inclusive VIF_u's. That the label Variance Inflation Factors for the latter is a misnomer is covered in Remark 1. Nonetheless, we continue the familiar notation VIF_u.

Transcending the essential developments of [9] are connections between collinearity indices and angles between subspaces. To these ends choose a typical $X_j$ in $X$, and rearrange as $[X_j, X_{(j)}]$, with $X_{(j)}$ comprising the remaining columns. We next seek elements of the inverse of $[X_j, X_{(j)}]^\top[X_j, X_{(j)}]$ as reordered by the permutation matrix. From the clockwise rule the $(1,1)$ element of the inverse is $[X_j^\top X_j - X_j^\top P_{(j)}X_j]^{-1}$, in succession for each $j$, where $P_{(j)}$ is the projection operator onto the subspace spanned by the columns of $X_{(j)}$. These in turn enable us to connect $\{\kappa_j\}$, and similarly the centered values $\{\kappa_{cj}\}$, to the geometry of ill-conditioning as follows.

Theorem 4. For models in $\mathcal{M}_1$ let $\{\mathrm{VIF}_u(\hat{\beta}_j) = \kappa_j^2;\ 0 \le j \le k\}$ be conventional uncentered VIFs in terms of Stewart's [9] uncentered collinearity indices. These in turn quantify the extent of collinearities between subspaces through angles (in degrees) as follows.
(i) Angles between $X_j$ and $\mathrm{Sp}(X_{(j)})$ are given by
$$\theta_j = \arccos\big[(1 - 1/\kappa_j^2)^{1/2}\big], \tag{10}$$
in succession for $j = 0, 1, \ldots, k$.
(ii) In particular, $\theta_0$ quantifies the degree of collinearity between the regressors and the constant vector.
(iii) Similarly, let $\{Z_j = B_nX_j\}$ be regressors centered to their means, rearrange as $[Z_j, Z_{(j)}]$, and let $\{\mathrm{VIF}_c(\hat{\beta}_j) = \kappa_{cj}^2;\ 1 \le j \le k\}$ be centered VIFs in terms of Stewart's centered collinearity indices. Then angles (in degrees) between $Z_j$ and $\mathrm{Sp}(Z_{(j)})$ are given by $\theta_{cj} = \arccos[(1 - 1/\kappa_{cj}^2)^{1/2}]$.

Proof. From the geometry of the right triangle formed by $X_j$, its projection $P_{(j)}X_j$, and the residual $X_j - P_{(j)}X_j$, the squared lengths satisfy $\|X_j\|^2 = \|P_{(j)}X_j\|^2 + S_j^2$, where $S_j^2$ is the residual sum of squares from the projection. It follows that the principal angle between $X_j$ and $\mathrm{Sp}(X_{(j)})$ is given by $\cos\theta_j = \|P_{(j)}X_j\|/\|X_j\| = (1 - 1/\kappa_j^2)^{1/2}$ for $j = 0, 1, \ldots, k$, to give conclusion (i). Conclusion (ii) follows on specializing with $X_j = \mathbf{1}_n$ and $X_{(j)}$ as the regressors. Conclusion (iii) follows similarly from the geometry of the right triangle formed by $Z_j$, $P_{c(j)}Z_j$, and their difference, where now $P_{c(j)}$ is the projection operator onto the subspace spanned by the columns of $Z_{(j)}$, to complete our proof.
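A numerical check of Theorem 4 may clarify the construction. The sketch below (Python with NumPy; the design is an arbitrary stand-in) computes each angle twice: directly, by projecting a column onto the span of the others, and through the identity $\theta_j = \arccos[(1 - 1/\kappa_j^2)^{1/2}]$.

```python
import numpy as np

def angle_via_projection(Xs, j):
    """Principal angle (degrees) between column j and the span of the rest."""
    xj, Xrest = Xs[:, j], np.delete(Xs, j, axis=1)
    proj = Xrest @ np.linalg.lstsq(Xrest, xj, rcond=None)[0]
    return np.degrees(np.arccos(np.linalg.norm(proj) / np.linalg.norm(xj)))

def angle_via_vif(Xs, j):
    """The same angle from kappa_j^2 = VIF_u,j, per Theorem 4."""
    A = Xs.T @ Xs
    vif = np.diag(np.linalg.inv(A)) * np.diag(A)
    return np.degrees(np.arccos(np.sqrt(1.0 - 1.0 / vif[j])))

Xs = np.column_stack([np.ones(5),
                      [-1.5, -0.5, 0.5, 1.5, 2.5],
                      [1.0, -1.0, 2.0, -2.0, 0.5]])
for j in range(Xs.shape[1]):
    print(j, round(angle_via_projection(Xs, j), 3), round(angle_via_vif(Xs, j), 3))

# Per the rules of thumb quoted in Remark 5: VIF = 10 and VIF = 4 give
# arccos(sqrt(0.9)) ~= 18.43 degrees and arccos(sqrt(0.75)) = 30 degrees.
```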

Remark 5. Rules of thumb in common use flag as problematic those VIFs exceeding 10, or even 4; see [11, 24] for example. In angular measure these correspond, respectively, to $\theta = 18.43^\circ$ and $\theta = 30.00^\circ$.

4.2. Reference Models

We seek as Reference feasible models encoding orthogonalities of types $V^\perp$ and $C^\perp$. The keys are as follows: (i) to retain essentials of the experimental structure and (ii) to alter what may be changed to achieve orthogonality. For a model in $\mathcal{M}_0$ with moment matrix $X^\top X$, our opening paragraph prescribes as reference the model with moment matrix $D = \mathrm{Diag}(d_1, \ldots, d_k)$, as diagonal elements of $X^\top X$, for assessing $V$-orthogonality. Moreover, on scaling columns of $X$ to equal lengths, $D$ is perfectly conditioned with $c(D) = 1$. In addition, every model in $\mathcal{M}_0$ clearly conforms with its Reference, in the sense that $D$ is positive definite, as distinct from models in $\mathcal{M}_1$ to follow.

Consider again models in $\mathcal{M}_1$ as in (6); let $Z = B_nX$ with $B_n$ as the mean-centering matrix; and again let $[d_1, \ldots, d_k]$ comprise the diagonal elements of $X^\top X$.

(i) The $V^\perp$ Reference Model. The uncentered VIF_u's in $\mathcal{M}_1$, defined as ratios of diagonal elements of $A^{-1}$ to reciprocals of diagonal elements of $A$, appear to have seen exclusive usage, apparently in keeping with Remark 3. However, the following disclaimer must be registered as the formal equivalent of Remark 1.

Theorem 6. Statements that conventional VIF_u's quantify variance inflation owing to nondiagonal $A$ are false for models in $\mathcal{M}_1$ having $\bar{x} \ne \mathbf{0}$.

Proof. Since the Reference variances are reciprocals of diagonal elements of $A$, this usage is predicated on the false postulate that $A$ can be diagonal for $\bar{x} \ne \mathbf{0}$. Specifically, $\{\mathbf{1}_n, X_1, \ldots, X_k\}$ are mutually orthogonal, with $\mathbf{1}_n^\top X_j = n\bar{x}_j = 0$ for each $j$, if and only if $X$ has been mean centered beforehand.

To the contrary, Gunst [25] purports to show that $\mathrm{VIF}_u(\hat{\beta}_0)$ registers genuine variance inflation, namely, the price to be paid in variance for designing an experiment having $\bar{x} \ne \mathbf{0}$, as opposed to $\bar{x} = \mathbf{0}$. Since the variances for intercepts are $a^{00}$ and $1/n$ from (6), their ratio is shown in [25] to be $n\,a^{00}$, in the parlance of Section 2.3. We concede this to be a ratio of variances but, to the contrary, not a VIF, since the parameters differ: one intercept is $\beta_0$, whereas the other is $\beta_0 + \bar{x}^\top\beta$, with slopes in centered regressors. Nonetheless, we still find the conventional $\mathrm{VIF}_u(\hat{\beta}_0)$ to be useful for purposes to follow.
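The distinction drawn here is easily exhibited numerically. In the sketch below (Python with NumPy; the design is an arbitrary stand-in), Gunst's ratio $n\,a^{00}$ is indeed a ratio of variances, but its denominator $1/n$ is the variance attached to the centered intercept $\alpha = \beta_0 + \bar{x}^\top\beta$, a different parameter than $\beta_0$.

```python
import numpy as np

Xs = np.column_stack([np.ones(5),
                      [1., 2., 3., 4., 5.],
                      [2., 1., 5., 4., 6.]])
n = Xs.shape[0]
a00 = np.linalg.inv(Xs.T @ Xs)[0, 0]   # Var(beta0-hat) with sigma^2 = 1
print(n * a00)                          # Gunst's ratio: a00 / (1/n)
# The reference variance 1/n belongs to the *centered* intercept
# alpha = beta0 + xbar' beta, not to beta0 itself.
```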

Remark 7. Section 3 highlights continuing concern in regard to collinearity of regressors with the constant vector. Theorem 4(ii) and expression (10) support the use of $\theta_0$ as an informative gage on the extent of this occurrence. Specifically, the smaller the angle, the greater the extent of such collinearity.

Instead of conventional VIF_u's, given the foregoing disclaimer, we have the following amended version as Reference for uncentered diagnostics, altering what may be changed but leaving the first row and column of $A$ intact.

Definition 8. Given a model in $\mathcal{M}_1$ with second-moment matrix $A$ as in (6), the amended Reference model for assessing $V$-orthogonality has second-moment matrix
$$R_V = \begin{bmatrix} n & n\bar{x}^\top \\ n\bar{x} & D \end{bmatrix},$$
with $D = \mathrm{Diag}(d_1, \ldots, d_k)$ as diagonal elements of $X^\top X$, provided that $R_V$ is positive definite. We identify a model to be $V$-orthogonal when $A = R_V$.

As anticipated, a prospective $R_V$ fails to conform to experimental data if not positive definite. These and further prospects are covered in the following, where $\theta_0$ designates the angle between $\mathbf{1}_n$ and $\mathrm{Sp}(X_1, \ldots, X_k)$.

Lemma 9. Take $R_V$ as a prospective Reference for $V$-orthogonality.
(i) In order that $R_V$ may be positive definite, it is necessary that $n\bar{x}^\top D^{-1}\bar{x} < 1$.
(ii) Equivalently, it is necessary that $\cos^2\theta_0 = n\bar{x}^\top D^{-1}\bar{x} < 1$, with $\theta_0$ as the angle between $\mathbf{1}_n$ and $\mathrm{Sp}(X_1, \ldots, X_k)$.
(iii) The Reference variance for $\hat{\beta}_0$ is $[n(1 - n\bar{x}^\top D^{-1}\bar{x})]^{-1}$.
(iv) The Reference variances for slopes are given by the diagonal elements of $[D - n\bar{x}\bar{x}^\top]^{-1} = D^{-1} + nD^{-1}\bar{x}\bar{x}^\top D^{-1}/(1 - n\bar{x}^\top D^{-1}\bar{x})$.

Proof. The clockwise rule for determinants gives $\det(R_V) = \det(D)\,n(1 - n\bar{x}^\top D^{-1}\bar{x})$. Conclusion (i) follows since $\det(D) > 0$ and positive definiteness requires $\det(R_V) > 0$. The computation $\cos^2\theta_0 = \|P\mathbf{1}_n\|^2/\|\mathbf{1}_n\|^2 = n\bar{x}^\top D^{-1}\bar{x}$, in parallel with (10), gives conclusion (ii). Using the clockwise rule for block-partitioned inverses, the leading element of $R_V^{-1}$ is $[n(1 - n\bar{x}^\top D^{-1}\bar{x})]^{-1}$, giving conclusion (iii). Similarly, the lower right block of $R_V^{-1}$, of order $(k \times k)$, is the inverse of $[D - n\bar{x}\bar{x}^\top]$. On identifying the terms in the rank-one inversion theorem of [26], we have $[D - n\bar{x}\bar{x}^\top]^{-1} = D^{-1} + nD^{-1}\bar{x}\bar{x}^\top D^{-1}/(1 - n\bar{x}^\top D^{-1}\bar{x})$. Conclusion (iv) follows on extracting its diagonal elements, to complete our proof.
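The construction of Definition 8 and the feasibility check of Lemma 9(i) admit a direct implementation. The following is a minimal sketch (Python with NumPy; the helper names and the design are illustrative assumptions), which signals an error when the prospective $R_V$ fails to be positive definite.

```python
import numpy as np

def v_orthogonal_reference(A):
    """R_V of Definition 8: keep row/column 0 and the diagonal of A."""
    R = np.diag(np.diag(A)).astype(float)
    R[0, :], R[:, 0] = A[0, :], A[:, 0]
    return R

def vf_u(Xs):
    A = Xs.T @ Xs
    R = v_orthogonal_reference(A)
    if np.min(np.linalg.eigvalsh(R)) <= 0:   # feasibility, Lemma 9(i)
        raise ValueError("R_V is not positive definite: VF_u undefined")
    return np.diag(np.linalg.inv(A)) / np.diag(np.linalg.inv(R))

Xs = np.column_stack([np.ones(5),
                      [-1.5, -0.5, 0.5, 1.5, 2.5],
                      [1.0, -1.0, 2.0, -2.0, 0.5]])
print(vf_u(Xs))    # entries below 1 signal variance deflation
```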

Corollary 10. For the case $k = 2$, in order that $R_V$ may be positive definite, it is necessary that $n(\bar{x}_1^2/d_1 + \bar{x}_2^2/d_2) < 1$.

Proof. Beginning with Lemma 9(ii), compute $\cos^2\theta_0 = n\bar{x}^\top D^{-1}\bar{x} = n(\bar{x}_1^2/d_1 + \bar{x}_2^2/d_2)$; the quantity $1 - \cos^2\theta_0$ is $<0$ when $n(\bar{x}_1^2/d_1 + \bar{x}_2^2/d_2) > 1$, precluding positive definiteness.

Moreover, the matrix $R_V$ itself is intrinsically ill conditioned owing to $\bar{x} \ne \mathbf{0}$, its condition number depending on $\bar{x}$. To quantify this dependence, we have the following, where columns of $X$ have been standardized to common lengths $d_1 = \cdots = d_k = d$.

Lemma 11. Let $R_V$ be as in Definition 8, with $D = dI_k$, $c = n\|\bar{x}\|^2$, and $k \ge 2$.
(i) The ordered eigenvalues of $R_V$ are $\{\lambda_1, d, \ldots, d, \lambda_2\}$, where $\lambda_1 \ge \lambda_2$ are the roots of $\lambda^2 - (n + d)\lambda + (nd - nc) = 0$.
(ii) The roots are positive, and $R_V$ is positive definite, if and only if $c < d$.
(iii) If $R_V$ is positive definite, its condition number is $c(R_V) = \lambda_1/\lambda_2$ and is increasing in $c$.

Proof. Eigenvalues are roots of the determinantal equation $\det(R_V - \lambda I) = 0$; from the clockwise rule this factors as $(d - \lambda)^{k-1}[\lambda^2 - (n + d)\lambda + (nd - nc)] = 0$, giving $k - 1$ values equal to $d$ and two roots of the quadratic equation, to give conclusion (i). Conclusion (ii) holds since the product of roots of the quadratic equation is $nd - nc$ and the greater root is positive. Conclusion (iii) follows directly, to complete our proof.
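A quick numerical check of Lemma 11 (Python with NumPy; the values of $n$, $d$, and $\bar{x}$ are arbitrary) confirms that the spectrum of $R_V$ consists of $d$ with multiplicity $k - 1$ together with the two roots of the stated quadratic.

```python
import numpy as np

n, d, k = 5.0, 10.0, 3
xbar = np.array([0.4, 0.3, 0.2])
c = n * np.dot(xbar, xbar)                 # here c < d, so R_V is PD
R = np.block([[np.array([[n]]), n * xbar[None, :]],
              [n * xbar[:, None], d * np.eye(k)]])
roots = np.roots([1.0, -(n + d), n * d - n * c])   # quadratic of Lemma 11
print(np.sort(np.linalg.eigvalsh(R)))   # d appears k-1 times, plus the roots
print(np.sort(roots))
```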

(ii) The $C^\perp$ Reference Model. As noted, the notion of $C$-orthogonality applies exclusively for models in $\mathcal{M}_1$. Accordingly, as Reference we seek to alter $A$ so that the matrix comprising sums of squares and sums of products of deviations from means, thus altered, is diagonal. To achieve this canon of $C$-orthogonality, and to anticipate notation for subsequent use, we have the following.

Definition 12. (i) For a model in $\mathcal{M}_1$ with second-moment matrix $A$ and inverse as in (6), let $Z = B_nX$, and identify an indicator vector $\delta = [\delta_{12}, \delta_{13}, \ldots, \delta_{(k-1)k}]$, of order $\binom{k}{2}$ in lexicographic order, where $\delta_{ij} = 0$ if the $(i,j)$ element of the altered centered matrix is zero and $\delta_{ij} = 1$ if that element is $Z_i^\top Z_j$, that is, unchanged from $Z = B_nX$, with $Z$ as the regressors centered to their means.
(ii) In particular, the Reference model in $\mathcal{M}_1$ for assessing $C$-orthogonality is such that the centered matrix and its inverse from (6) are diagonal; that is, taking $\delta = [0, \ldots, 0]$ and altering the off-diagonal elements of $X^\top X$ to $X_i^\top X_j = n\bar{x}_i\bar{x}_j$, so that $Z_i^\top Z_j = X_i^\top X_j - n\bar{x}_i\bar{x}_j = 0$ for all $i \ne j$. Designate the altered second-moment matrix as $R_C$. In this case, we identify the model to be $C$-orthogonal.

Recall that conventional VIF_c's for slopes are ratios of diagonal elements of $G$ in (6) to reciprocals of the diagonal elements of the centered $Z^\top Z$. Apparently this rests on the logic that $C$-orthogonality in $\mathcal{M}_1$ implies that the centered moment matrix is diagonal. However, that logic addresses the slopes alone; the complete Reference for all parameters instead is embodied in $R_C$. In consequence, conventional VIF_c's are deficient in applying only to slopes, whereas the VF_c's resulting from Definition 12(ii) apply informatively for all of $[\beta_0, \beta_1, \ldots, \beta_k]$.
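The direct rule of Definition 12(ii), anticipated in Remark 15 below, is mechanical: each off-diagonal cross-product is replaced by $n\bar{x}_i\bar{x}_j$, so that the centered cross-products vanish. A minimal sketch follows (Python with NumPy; the function names and the toy design are illustrative assumptions).

```python
import numpy as np

def c_orthogonal_reference(Xs):
    """R_C of Definition 12(ii): centered cross-products forced to zero."""
    n = Xs.shape[0]
    xbar = Xs[:, 1:].mean(axis=0)
    R = (Xs.T @ Xs).astype(float)
    k = xbar.size
    for i in range(k):
        for j in range(k):
            if i != j:
                # (X'X)_ij - n*xbar_i*xbar_j = 0  <=>  Z_i' Z_j = 0
                R[1 + i, 1 + j] = n * xbar[i] * xbar[j]
    return R

def vf_c(Xs):
    A = Xs.T @ Xs
    R = c_orthogonal_reference(Xs)
    return np.diag(np.linalg.inv(A)) / np.diag(np.linalg.inv(R))

Xs = np.column_stack([np.ones(5),
                      [1., 2., 3., 4., 5.],
                      [2., 1., 5., 4., 6.]])
print(vf_c(Xs))    # covers the intercept as well as the slopes
```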

Definition 13. Designate $\{\mathrm{VF}_u\}$ and $\{\mathrm{VF}_c\}$ as Variance Factors resulting from the Reference models of Definition 8 and Definition 12(ii), respectively. On occasion these represent Variance Deflation in addition to VIFs.

Essential invariance properties pertain. Conventional VIFs are scale invariant; see Section 3 of [9]. We see next that they are translation invariant as well. To these ends we shift the columns of $X$ to a new origin through $X \to X - \mathbf{1}_nc^\top$, where $c = [c_1, \ldots, c_k]^\top$. The resulting model is $\{Y = \alpha_0\mathbf{1}_n + (X - \mathbf{1}_nc^\top)\beta + \epsilon\}$, thus preserving slopes, where $\alpha_0 = \beta_0 + c^\top\beta$. Corresponding to (6), the shifted second-moment matrix and its inverse take the same block-partitioned form, as in (16). This pertains to subsequent developments, and basic invariance results emerge as follows.

Lemma 14. Consider $\{Y = \beta_0\mathbf{1}_n + X\beta + \epsilon\}$ together with the shifted version $\{Y = \alpha_0\mathbf{1}_n + (X - \mathbf{1}_nc^\top)\beta + \epsilon\}$, both in $\mathcal{M}_1$. Then the matrices $G$ appearing in (6) and (16) are identical.

Proof. Rules for block-partitioned inverses again assert that $G$ of (16) is the inverse of $(X - \mathbf{1}_nc^\top)^\top B_n(X - \mathbf{1}_nc^\top) = X^\top B_nX$, since $B_n\mathbf{1}_n = \mathbf{0}$, to complete our proof.

These facts in turn support subsequent claims that centered VFs are translation and scale invariant for the slopes $[\beta_1, \ldots, \beta_k]$, apart from the intercept $\beta_0$.
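These invariance claims are easily verified numerically. In the sketch below (Python with NumPy; slope_vf_c is an illustrative helper, and for slopes these factors coincide with the conventional centered VIFs), random shifts and rescalings of the regressors leave the slope factors unchanged.

```python
import numpy as np

def slope_vf_c(X):
    """Centered variance factors for the slopes alone."""
    Z = X - X.mean(axis=0)
    G = np.linalg.inv(Z.T @ Z)              # slope covariances, sigma^2 = 1
    return np.diag(G) * np.diag(Z.T @ Z)    # reference variances are 1/ss_j

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
shift, scale = np.array([10., -5., 100.]), np.array([2., 0.5, 30.])
print(np.allclose(slope_vf_c(X), slope_vf_c(X * scale + shift)))   # True
```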

4.3. A Critique

Again we distinguish VFs and VIFs from centered and uncentered regressors. The following comments apply.
(C1) A design is either $V$- or $C$-orthogonal, respectively, according as the lower right block of $A$, or the centered matrix $Z^\top Z$ from expression (6), is diagonal. Orthogonalities of type $V^\perp$ and $C^\perp$ are exclusive and hence work at crossed purposes.
(C2) In particular, with nonzero column means, $C$-orthogonality holds if and only if the columns of $X$ are $V$-nonorthogonal. If the design is $C$-orthogonal, then $X_i^\top X_j = n\bar{x}_i\bar{x}_j \ne 0$ for $i \ne j$. Conversely, if the design is $V$-orthogonal, so that $X_i^\top X_j = 0$ and $Z_i^\top Z_j = -n\bar{x}_i\bar{x}_j$, then the centered matrix cannot be diagonal as Reference, in which case the conventional VIF_c's are tenuous.
(C3) Conventional VIF_u's, based on uncentered $A$ in $\mathcal{M}_1$ with $\bar{x} \ne \mathbf{0}$, do not gage variance inflation as claimed, founded on the false tenet that $A$ can be diagonal.
(C4) To detect influential observations and to classify high leverage points, case-influence diagnostics, namely ratios of VIFs with and without subsets of cases, are studied in [27] for assessing the impact of subsets on variance inflation; there the numerator is from the full data and the denominator follows on deleting observations in an index set $I$. Similarly, [28] proposes using such ratios on deleting the $i$th observation. In the present context these would gain substance on modifying the Reference models accordingly.

5. Case Study 1: Continued

5.1. The Setting

We continue an elementary and transparent example to illustrate essentials. Recall from Section 2.2 the design $X_* = [\mathbf{1}_5, X_1, X_2]$ of order $(5 \times 3)$, with its second-moment matrix and inverse as in expressions (1). The design is neither $V$- nor $C$-orthogonal, since neither the lower right block of $X_*^\top X_*$ nor the centered $Z^\top Z$ is diagonal. Moreover, the uncentered VIF_u's as listed in (3) are not the vaunted relative increases in variances owing to nonorthogonal columns of $X_*$. Indeed, the only opportunity for $V$-orthogonality here is between the columns $X_1$ and $X_2$.

Nonetheless, from Section 4.1 we utilize Theorem 4(i) and (10) to connect the collinearity indices of [9] to angles between subspaces. Specifically, Minitab recovered values for the residuals; further computations proceed routinely, as listed in Table 1. In particular, the principal angle $\theta_0$ between the constant vector and the span of the regressor vectors is recovered, as anticipated in Remark 7.

Table 1: Values of the residuals $S_j$, the collinearity indices $\kappa_j$, and the angles $\theta_j$, for $j = 0, 1, 2$, for the design of expressions (1).
5.2. V-Orthogonality

For subsequent reference, let the design be as in (1) but with its second-moment matrix altered. Invoking Definition 8, we seek a $V$-orthogonal Reference with moment matrix $R_V$ having $X_1^\top X_2 = 0$. This is found constructively on retaining the first row and column together with the diagonal of $X_*^\top X_*$, but replacing $X_1^\top X_2$ by zero, as listed in Table 2, giving the design with $V$-orthogonal columns as Reference.

Table 2: Designs $\{X_{*j}\}$, obtained by substituting candidate columns into the template $[\mathbf{1}_5, \bullet, X_2]$ of order $(5 \times 3)$, and corresponding variances.

Accordingly, the Reference is "as $V$-orthogonal as it can get," given the lengths and sums of columns as prescribed in the experiment and as preserved in the Reference model. Lemma 9, with its quantities evaluated from (1), gives $\cos^2\theta_0 = n\bar{x}^\top D^{-1}\bar{x} < 1$, so the Reference is feasible. Applications of Lemma 9(iii)-(iv) in succession give the reference variances at (19).

As Reference, these combine with actual variances from $X_*$ at (1), giving the VF_u's at (5) for the original design relative to its $V$-orthogonal Reference. For example, one factor is 0.7222/0.9375 = 0.7704, with 0.7222 from (1) and 0.9375 from (19). This contrasts with the conventional VIF_u's in (3). As in Section 2.2 and Table 2, it is noteworthy that the nonorthogonal design yields uniformly smaller variances than the $V$-orthogonal Reference, namely, VF_u's all below unity.

Further versions $\{X_{*j}\}$ of our basic design are listed for comparison in Table 2, along with their variances. The designs themselves are exhibited on transposing rows from Table 2 and substituting each into the template $[\mathbf{1}_5, \bullet, X_2]$. One further design was constructed but not listed, since its second-moment matrix is not invertible. Clearly these are actual designs amenable to experimental deployment.

5.3. C-Orthogonality

Continuing, and invoking Definition 12(ii), we seek a $C$-orthogonal Reference having the centered moment matrix diagonal. This is identified in Table 2. From this the matrix $R_C$ and its inverse follow at (20). The variance factors are listed in (4) where, for example, one slope factor is 0.3889/0.3571 = 1.0889. As distinct from conventional VIF_c's, defined for the slopes $\beta_1$ and $\beta_2$ only, our $\mathrm{VF}_c(\hat{\beta}_0)$ here reflects Variance Deflation, wherein $\beta_0$ is estimated more precisely in the initial $C$-nonorthogonal design.

Remark 15. Observe that the choice for $R_C$ may be posed as seeking a second-moment matrix whose centered form is diagonal, then solving numerically, using Maple for example. However, the algorithm in Definition 12(ii) affords a direct solution: that the centered matrix should be diagonal stipulates its off-diagonal element as $X_1^\top X_2 - n\bar{x}_1\bar{x}_2 = 0$, so that $X_1^\top X_2 = n\bar{x}_1\bar{x}_2 = 0.6$ in $R_C$ at expression (20).

To illustrate Theorem 4(iii), we compute $\cos\theta_c$ = 0.2857 and $\theta_c$ = 73.398° as the angle between the vectors of (1) when centered to their means. For slopes in $\mathcal{M}_1$, [9] shows that $\kappa_{cj} \le \kappa_j$. This follows since their numerators are equal, but denominators are reciprocals of the lengths of the centered and uncentered regressor vectors, and centering can only shorten a column. To illustrate, numerators are equal for $X_1$ and its centered version $Z_1$, but denominators are reciprocals of $\|X_1\|$ and $\|Z_1\|$.

A further note on orthogonality is germane. Suppose that the actual experiment is $V$-orthogonal, yet, towards a thorough diagnosis, the user evaluates the conventional VIF_c's as ratios of variances from Table 2. Unfortunately, their meaning is obscured, since a design cannot at once be $V$- and $C$-orthogonal.

As in Remark 2, we next alter the basic design at (1) on shifting columns of the measurements to their minima as origins and scaling to have equal squared lengths. The resulting design follows directly from (1). The new moment matrix and its inverse are given at (21), with conventional VIF_u's as listed. Against $C$-orthogonality, this gives the diagnostics at (22), demonstrating, in comparison with (4), that VF_c's, apart from that for $\beta_0$, are invariant under translation and scaling, a consequence of Lemma 14.

We further seek to compare variances for the shifted and scaled design against $V$-orthogonality as Reference. However, from the moment matrix at (21) we determine that $n\bar{x}^\top D^{-1}\bar{x}$ exceeds unity. We thus infer from Lemma 9(i) that $V$-orthogonality is incompatible with this configuration of the data, so that the VF_u's are undefined. Equivalently, on evaluating $n\bar{x}_1^2/d_1$ and $n\bar{x}_2^2/d_2$, Corollary 10 asserts that $R_V$ is not positive definite. This appears to be anomalous, until we recall that $V$-orthogonality is invariant under rescaling, but not under recentering, the regressors.

5.4. A Critique

We reiterate apparent ambiguity ascribed in the literature to orthogonality. A number of widely held guiding precepts has evolved, as enumerated in part in our opening paragraph and in Section 3. As in Remark 3, these clearly have been taken to apply verbatim in $\mathcal{M}_1$, to be paraphrased in part as follows.
(P1) Ill-conditioning espouses inflated variances; that is, VIFs necessarily equal or exceed unity.
(P2) $V$-orthogonal designs are "ideal," in that VIFs for such designs are all unity; see [11] for example.
We next reassess these precepts as they play out under $V$- and $C$-orthogonality in connection with Table 2.
(C1) For models in $\mathcal{M}_1$ having $\bar{x} \ne \mathbf{0}$, we reiterate that the uncentered VIF_u's at expression (3) overstate adverse effects on variances of the $V$-nonorthogonal array, since $A$ cannot be diagonal. Moreover, these conventional values fail to discern that the revised values for VF_u's at (5) reflect that the coefficients are estimated with greater efficiency in the $V$-nonorthogonal array.
(C2) For $V$-orthogonality, the claim P1 thus is false by counterexample. The VF_u's at (5) are Variance Deflation Factors for the design at (1), relative to its $V$-orthogonal Reference.
(C3) Additionally, the variances in Table 2 are all seen to decrease in the transition from the $V$-orthogonal Reference to the $V$-nonorthogonal design. Similar trends are confirmed in other ill-conditioned data sets from the literature.
(C4) For $C$-orthogonal designs, the claim P2 is false by counterexample. Such designs need not be "ideal," as $\beta_0$ may be estimated more efficiently in a $C$-nonorthogonal design, as demonstrated by the VF_c's at (4) and (22). Similar trends are confirmed elsewhere, as seen subsequently.
(C5) VFs for $\hat{\beta}_0$ are critical in prediction, where prediction variances necessarily depend on the intercept, especially for predicting near the origin of the system of coordinates.
(C6) Dissonance between $V^\perp$ and $C^\perp$ is seen in Table 2, where the $V$-orthogonal design is the antithesis of the $C$-orthogonal one.
(C7) In short, these transparent examples serve to dispel the decades-old mantra that ill-conditioning necessarily spawns inflated variances for models in $\mathcal{M}_1$, and they serve to illuminate the contributing structures.

6. Orthogonal and Linked Arrays

A genuine $V$-orthogonal array was generated as eigenvectors from a positive definite matrix (see Table 8.3 of [11, p.377]) using a standard eigendecomposition routine. The columns $[X_1, X_2, X_3]$, as the second through fourth columns of Table 3, comprise the first three eigenvectors scaled to length 8. These apply for the models with and without intercept to be analyzed. In addition, linked vectors $[Z_1, Z_2, Z_3]$ were constructed from $[X_1, X_2, X_3]$ so as to be mutually dependent. Clearly these arrays, as listed in Table 3, are not archaic abstractions, as both are amenable to experimental implementation.

Table 3: Basic $V$-orthogonal design $X_0 = [\mathbf{1}_8, X_1, X_2, X_3]$, and a linked design $Z_0 = [\mathbf{1}_8, Z_1, Z_2, Z_3]$, of order $(8 \times 4)$.
6.1. The Model $\mathcal{M}_1$

We consider in turn the orthogonal and the linked series.

6.1.1. Orthogonal Data

Matrices $X_0^\top X_0$ and $(X_0^\top X_0)^{-1}$ for the orthogonal data under model $\mathcal{M}_1$ are listed in Table 4, where variances occupy diagonals of $(X_0^\top X_0)^{-1}$. The conventional uncentered VIF_u's follow directly. Since the regressor means are nonzero, we find from Theorem 4(ii) the angle $\theta_0$ between the constant vector and the span of the regressor vectors. Moreover, the angle between $X_1$ and the span of $[\mathbf{1}_8, X_2, X_3]$ is not $90^\circ$, because of collinearity with the constant vector, despite the mutual orthogonality of $[X_1, X_2, X_3]$. Observe here that $X_0$ already is $V$-orthogonal; accordingly, $R_V = X_0^\top X_0$; and thus the VF_u's are all unity.

Table 4: Matrices $X_0^\top X_0$ and $(X_0^\top X_0)^{-1}$ for the orthogonal data under model $\mathcal{M}_1$.

In view of dissonance between $V$- and $C$-orthogonality, the sums of squares and products for the mean-centered $[X_1, X_2, X_3]$ reflect nonnegligible negative dependencies, distinguishing the $V$-orthogonal $X_0$ from its $C$-nonorthogonal matrix of deviations.

To continue, we suppose instead that the $V$-orthogonal model was to be recast as $C$-orthogonal. The Reference model $R_C$ and its inverse are listed in Table 5. Variance factors, as ratios of diagonal elements of $(X_0^\top X_0)^{-1}$ in Table 4 to those of $R_C^{-1}$ in Table 5, reflect negligible differences in precision. This parallels results reported in Table 2 comparing the basic design to its $C$-orthogonal Reference, except for larger differences in Table 2.

Table 5: Matrices $R_C$ and $R_C^{-1}$ for checking $C$-orthogonality in the orthogonal data $X_0$ under model $\mathcal{M}_1$.
6.1.2. Linked Data

For the Linked Data under model $\mathcal{M}_1$, the matrix $Z_0^\top Z_0$ follows routinely from Table 3; its inverse is listed in Table 6. The conventional uncentered VIF_u's are now substantial. These again fail to gage variance inflation since $Z_0^\top Z_0$ cannot be diagonal. From (10) we compute the principal angle $\theta_0$, in degrees, between the constant vector and the span of the regressor vectors.

Table 6: Matrices $(Z_0^\top Z_0)^{-1}$ and $R_V^{-1}$ for checking $V$-orthogonality for the linked data under model $\mathcal{M}_1$.

(i) The $V^\perp$ Reference Model. To rectify this malapropism, we seek in Definition 8 a Reference model with moment matrix $R_V$. This is found on setting all off-diagonal elements of $Z_0^\top Z_0$ to zero, excluding the first row and column. Its inverse is listed in Table 6. The VF_u's are ratios of diagonal elements of $(Z_0^\top Z_0)^{-1}$ on the left to diagonal elements of $R_V^{-1}$ on the right. Against conventional wisdom, it is counterintuitive that all coefficients are estimated with greater precision in the $V$-nonorthogonal model than in the $V$-orthogonal model of Definition 8. As in Section 5.4, this serves again to refute the tenet that ill-conditioning espouses inflated variances for models in $\mathcal{M}_1$, that is, that VIFs necessarily equal or exceed unity.

(ii) The $C^\perp$ Reference Model. Further consider variances, were the Linked Data to be centered. As in Definition 12(ii), we seek a Reference model such that the matrix of centered sums of squares and products is diagonal. The required $R_C$ and its inverse are listed in Table 7. Since variances in the Linked Data are the diagonals of $(Z_0^\top Z_0)^{-1}$ from Table 6, and Reference values appear as diagonal elements of $R_C^{-1}$ on the right in Table 7, their ratios give the centered VF_c's. We infer that changes in precision would be negligible if instead the Linked Data experiment was to be recast as a $C$-orthogonal experiment.

Table 7: Matrices $R_C$ and $R_C^{-1}$ for checking $C$-orthogonality in the linked data under model $\mathcal{M}_1$.

As in Remark 15, the reference matrix $R_C$ is constrained by the first and second moments from the data and has free parameters for the off-diagonal entries. The constraints for $C$-orthogonality are $X_i^\top X_j = n\bar{x}_i\bar{x}_j$ for $i \ne j$. Although this system of equations can be solved with numerical software such as Maple, the algorithm stated in Definition 12 easily yields the direct solution given here.

6.2. The Model $\mathcal{M}_0$

For the Linked Data take the model in $\mathcal{M}_0$ as $\{Y = Z_1\beta_1 + Z_2\beta_2 + Z_3\beta_3 + \epsilon\}$, where the expected response at the origin is zero, and there is no occasion to translate the regressors. The lower right $(3 \times 3)$ submatrix of $Z_0^\top Z_0$ is $Z^\top Z$; its inverse is listed in Table 8. The ratios of diagonal elements of $(Z^\top Z)^{-1}$ to reciprocals of diagonal elements of $Z^\top Z$ are the conventional uncentered VIFs. Equivalently, scaling $Z^\top Z$ to have unit diagonals and inverting gives the matrix in Table 8, whose diagonal elements are the VIFs. Throughout Section 6, these comprise the only correct interpretation of conventional VIFs as genuine variance ratios.

Table 8: Matrices $(Z^\top Z)^{-1}$ and its counterpart scaled to unit diagonals, for the linked data under model $\mathcal{M}_0$.

7. Case Study: Body Fat Data

7.1. The Setting

Body fat and morphogenic measurements were reported for human subjects in Table D.11 of [29, p.717]. Response and regressor variables are $Y$: amount of body fat; $X_1$: triceps skinfold thickness; $X_2$: thigh circumference; and $X_3$: mid-arm circumference, under a model in $\mathcal{M}_1$. From the original data as reported, the condition number of $X_*^\top X_*$ is 1,039,931, with the VIF_u's as computed. On scaling columns of $X_*$ to equal column lengths, the condition number of the revised $X_*^\top X_*$ is 2,969.6518, and the uncentered VIF_u's are as before, from invariance under scaling.

Against complaints that regressors are remote from their natural origins, we shift regressors to new origins on subtracting the minimal element from each column, as in Remark 2, and scale the shifted columns to common lengths. This is motivated on grounds that centered diagnostics, apart from that for the intercept, are invariant under shifts and scalings from Lemma 14. In addition, under the new origin, $\beta_0$ assumes prominence as the baseline response against which changes due to regressors are to be gaged. The original data are available on the publisher's web page for [29].
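The preprocessing just described is mechanical; a sketch follows (Python with NumPy). The body fat data are not reproduced here, so an arbitrary near-collinear matrix stands in, and the common column length is likewise an arbitrary choice.

```python
import numpy as np

def shift_to_minima_and_scale(X, length=10.0):
    """Shift each column so its minimum is zero (Remark 2), then scale
    all columns to a common length."""
    X = X - X.min(axis=0)
    return X * (length / np.linalg.norm(X, axis=0))

def condition_number_xtx(Xs):
    """Condition number of X'X: squared ratio of extreme singular values."""
    s = np.linalg.svd(Xs, compute_uv=False)
    return (s[0] / s[-1]) ** 2

rng = np.random.default_rng(1)
t = rng.normal(size=(20, 1))
X = np.hstack([t + 0.05 * rng.normal(size=(20, 1)) for _ in range(3)])
Xs = np.column_stack([np.ones(20), shift_to_minima_and_scale(X)])
print(condition_number_xtx(Xs))
```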

Taking these shifted and scaled vectors as columns of $X$ in the revised $X_* = [\mathbf{1}_n, X]$, values for $X_*^\top X_*$ and its inverse are listed in Table 9. The condition number of $X_*^\top X_*$ is now 113.6969, with its VIF_u's following from the diagonals. Additionally, from Theorem 4(ii) we recover the angle $\theta_0$ to quantify collinearity of regressors with the constant vector.

Table 9: Matrices $X_*^\top X_*$ and $(X_*^\top X_*)^{-1}$ for the body fat data under model $\mathcal{M}_1$, centered to minima and scaled to equal lengths.

In addition, the scaled and centered correlation matrix for $[X_1, X_2, X_3]$, given at (27), indicates strong linkage between mean-centered versions of $X_1$ and $X_2$. Moreover, the experimental design is neither $V$- nor $C$-orthogonal, since neither the lower right block of $X_*^\top X_*$ nor the centered matrix is diagonal.

(i) The $V^\perp$ Reference Model. To compare this with a $V$-orthogonal design, we turn again to Definition 8. However, evaluating the test criterion in Lemma 9(i) shows that this configuration is not commensurate with $V$-orthogonality, so that the VF_u's are undefined. Comparisons with $C$-orthogonal models are addressed next.

(ii) The $C^\perp$ Reference Model. As in Definition 12(ii), the required alterations follow from the centered moments. To check against $C$-orthogonality, we obtain the matrix $R_C$ of Definition 12(ii) on replacing off-diagonal elements of the lower right submatrix of $X_*^\top X_*$ by the corresponding values $n\bar{x}_i\bar{x}_j$. The result is the matrix $R_C$ and its inverse as given in Table 10, where diagonal elements of $R_C^{-1}$ are the Reference variances. The VF_c's, found as ratios of diagonal elements of $(X_*^\top X_*)^{-1}$ relative to those of $R_C^{-1}$, are listed in Table 12 under $\delta = (0,0,0)$, the indicator of Definition 12(i) for this case.

Table 10: Matrices $R_C$ and $R_C^{-1}$ for the body fat data, adjusted to give off-diagonal centered elements the value 0.0, with code $\delta = (0,0,0)$.
7.2. Variance Factors and Linkage

Traditional gages of ill-conditioning are patently absurd on occasion. By convention VIFs are "all or none," wherein Reference models entail strictly diagonal components. In practice some regressors are inextricably linked: to get, or even to visualize, orthogonal regressors may go beyond feasible experimental ranges, require extrapolation beyond practical limits, and challenge credulity. Response functions so constrained are described in [30] as "picket fences." In such cases, taking the centered moment matrix to be diagonal as Reference is moot, at best an academic folly, abetted in turn by default in standard software packages. In short, conventional diagnostics here offer answers to irrelevant questions. Given pairs of regressors irrevocably linked, we seek instead to assess effects on variances for other pairs that could be unlinked by design. These comments apply especially to entries under $\delta = (0,0,0)$ in Table 12.

To proceed, we define ill-conditioning as essential if regressors are inextricably linked, necessarily to remain so, and as nonessential if regressors could be unlinked by design. As a followup to Section 3.2, for models in $\mathcal{M}_1$ we infer that the constant vector is inextricably linked with regressors, thus accounting for essential ill-conditioning not removed by centering, contrary to claims in [4]. Indeed, this is the essence of Definition 8.

Fortunately, these limitations may be remedied through Reference models adapted to this purpose. This in turn exploits the special notation and trappings of Definition 12(i) as follows. In addition to $\delta = (0,0,0)$, we iterate for indicators taking values in $\{(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1)\}$. Values of VFs thus obtained are listed in Table 12. To fix ideas, the Reference for $\delta = (1,0,0)$, that is, for the constraint that $X_1$ and $X_2$ remain linked, is given in Table 11, together with its inverse. Found as ratios of diagonal elements of $(X_*^\top X_*)^{-1}$ relative to those in Table 11, the VFs are listed in Table 12 under $\delta = (1,0,0)$. In short, this Reference is "as $C$-orthogonal as it could be" given the constraint. Other VFs in Table 12 proceed similarly. For all cases in Table 12, with one exception, $\beta_0$ is estimated with greater efficiency in the original model than in any of the postulated Reference models. A computational sketch of these partially constrained References follows.
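The sketch below (Python with NumPy) applies the direct rule only to the pairs to be unlinked; the indicator is encoded, for convenience, as a symmetric 0/1 matrix rather than the lexicographic vector of Definition 12(i), and the names and data are illustrative assumptions.

```python
import numpy as np

def partial_reference(Xs, delta):
    """Unlink only the pairs with delta[i][j] == 0; keep linked pairs intact."""
    n = Xs.shape[0]
    xbar = Xs[:, 1:].mean(axis=0)
    R = (Xs.T @ Xs).astype(float)
    k = xbar.size
    for i in range(k):
        for j in range(k):
            if i != j and delta[i][j] == 0:
                R[1 + i, 1 + j] = n * xbar[i] * xbar[j]   # centered product -> 0
    return R

def vf(Xs, delta):
    A = Xs.T @ Xs
    R = partial_reference(Xs, delta)
    return np.diag(np.linalg.inv(A)) / np.diag(np.linalg.inv(R))

# Keep X1 and X2 linked, unlink X3 from both -- one of the Table 12 cases:
delta = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 1]]
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3)) + 5.0
print(vf(np.column_stack([np.ones(20), X]), delta))
```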

Table 11: Matrices $R_C$ and $R_C^{-1}$ for the body fat data, adjusted to give off-diagonal centered elements the value 0.0 except $X_1^\top X_2$, with code $\delta = (1,0,0)$.
Table 12: Summary VFs for the body fat data centered to minima and scaled to common lengths, identified with $\delta$ varying as in Definition 12.

As in Remark 15, the reference matrix for $\delta = (1,0,0)$ is constrained by $X_1^\top X_2$, to be retained, whereas $X_1^\top X_3$ and $X_2^\top X_3$ are to be determined from the $C$-orthogonality conditions. Although this system can be solved numerically as noted, the algorithm stated in Definition 12 easily yields the direct solution given here.

Recall $X_1$: triceps skinfold thickness; $X_2$: thigh circumference; and $X_3$: mid-arm circumference; and from (27) that $X_1$ and $X_2$ are strongly linked. Invoking the cutoff rule [24] for VIFs in excess of 4.0, it is seen for $\delta = (0,0,0)$ in Table 12 that the factors for $\hat{\beta}_1$ and $\hat{\beta}_2$ are excessive, where $[X_1, X_2, X_3]$ are forced to be mutually uncorrelated as Reference. Remarkably similar values are reported for the remaining cases having $\delta_{12} = 0$, all gaged against intractable Reference models wherein $X_1$ and $X_2$ are postulated to be uncorrelated, however incredibly.

On the other hand, VFs for the cases having $\delta_{12} = 1$ in Table 12 are likewise comparable, where $X_1$ and $X_2$ are allowed to remain linked at $X_1^\top X_2$ = 24.2362. Here the VFs reflect negligible changes in efficiency of the estimates in comparison with the original model, where all regressors are linked. In summary, negligible changes in overall efficiency would accrue on recasting the experiment so that $X_3$ and the pair $[X_1, X_2]$ were pairwise uncorrelated.

Table 12 conveys further useful information on noting that VFs, as ratios, are multiplicative and divisible. For example, take the model coded $\delta = (1,1,0)$, where two cross moments are retained at their initial values from Table 9, namely, 19.4533 and 24.2362, with the remaining pair unlinked. Now taking this model as reference, we extract the VFs on dividing elements in Table 12 at $\delta = (0,0,0)$ by those at $\delta = (1,1,0)$; the result follows elementwise.