Research Article  Open Access
D. R. Jensen, D. E. Ramirez, "Revision: Variance Inflation in Regression", Advances in Decision Sciences, vol. 2013, Article ID 671204, 15 pages, 2013. https://doi.org/10.1155/2013/671204
Revision: Variance Inflation in Regression
Abstract
Variance Inflation Factors (VIFs) are reexamined as conditioning diagnostics for models with intercept, with and without centering regressors to their means as oft debated. Conventional VIFs, both centered and uncentered, are flawed. To rectify matters, two types of orthogonality are noted: vector-space orthogonality and uncorrelated centered regressors. The key to our approach lies in feasible Reference models encoding orthogonalities of these types. For models with intercept it is found that (i) uncentered VIFs are not ratios of variances as claimed, owing to infeasible Reference models; (ii) instead they supply informative angles between subspaces of regressors; (iii) centered VIFs are incomplete if not misleading, masking collinearity of regressors with the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
1. Introduction
Given the model {Y = X*beta + epsilon} with X of full rank and with zero-mean, uncorrelated, and homoscedastic errors, the normal equations yield the estimators beta-hat = (X'X)^{-1}X'Y for beta as unbiased with dispersion matrix V(beta-hat) = sigma^2 (X'X)^{-1}. Ill-conditioning, as near-dependent columns of X, exerts profound and interlinked consequences, causing "crucial elements of [(X'X)^{-1}] to be large and unstable," "creating inflated variances"; estimates excessive in magnitude, irregular in sign, and "very sensitive to small changes in [X]"; and unstable algorithms having "degraded numerical accuracy." See [1–3] for example.
Ill-conditioning diagnostics include the condition number of X'X, the ratio of its largest to smallest eigenvalues, and the Variance Inflation Factors {VIF_j}, that is, ratios of actual to "ideal" variances, had the columns of X been orthogonal. On scaling the latter to unit lengths and ordering as VIF_1 >= VIF_2 >= ..., the largest, VIF_1, is identified in [4] as "the best single measure of the conditioning of the data." In addition, the bounds of [5] apply also in stepwise regression as in [6–9].
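These conventional diagnostics are easy to compute. The following Python sketch uses a hypothetical two-regressor design (not data from this study) to evaluate the condition number of X'X as an eigenvalue ratio, and the centered VIFs as diagonals of the inverse correlation matrix of the regressors.

```python
import numpy as np

# Hypothetical regressors (n = 6, k = 2); any numeric columns would do.
Z = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0],
              [4.0, 4.0], [5.0, 7.0], [6.0, 8.0]])
n, k = Z.shape
X = np.column_stack([np.ones(n), Z])        # model with intercept

# Condition number of X'X: ratio of largest to smallest eigenvalue.
eigvals = np.linalg.eigvalsh(X.T @ X)       # ascending order
cond = eigvals[-1] / eigvals[0]

# Centered VIFs: diagonals of the inverse correlation matrix of Z.
R = np.corrcoef(Z, rowvar=False)
vif_c = np.diag(np.linalg.inv(R))

print(cond, vif_c)
```

For k = 2 both centered VIFs equal 1/(1 - r^2), with r the correlation between the two regressors, so they coincide; VIFs of this kind are always at least unity.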
Users deserve to be apprised not only that data are ill-conditioned, but also about workings of the diagnostics themselves. Accordingly, we undertake here to rectify longstanding misconceptions in the use and properties of VIFs, necessarily retracing through several decades to their beginnings.
To choose a model, with or without intercept, is substantive, is specific to each experimental paradigm, and is beyond the scope of the present study. Whatever the choice, fixed in advance in a particular setting, the two cases arise on specializing the model {Y = X*beta + epsilon}: with intercept, X = [1_n, Z], with Z comprising the vectors of regressors and beta = (beta_0, beta_1, ..., beta_k)', beta_0 as intercept and the rest as slopes; without intercept, the columns of X comprise the regressors alone. The VIFs as defined are essentially undisputed for models without intercept, as noted in [10], serving to gage effects of nonorthogonal regressors as ratios of variances. In contrast, a yet unresolved debate surrounds the choice of conditioning diagnostics for models with intercept, namely, between uncentered regressors, giving uncentered VIFs computed from X'X and its inverse, and regressors centered to their means. For the latter, letting R be the correlation matrix of the regressors and [r^{jj}] the diagonals of its inverse, the centered versions are {VIF_c(beta_j-hat) = r^{jj}}, for slopes only.
It is seen here that (i) these differ profoundly in regard to their respective concepts of orthogonality; (ii) objectives and meanings differ accordingly; (iii) sharp divisions trace to muddling these concepts; and (iv) this distinction assumes a pivotal role here. Over time a large body of widely held beliefs, conjectures, intrinsic propositions, and conventional wisdom has accumulated, much from flawed logic, some to be dispelled here. The key to our studies is that VIFs, to be meaningful, must compare actual variances to those of an "ideal" second-moment matrix as reference, the latter to embody the conjectured type of orthogonality. This differs between centered and uncentered diagnostics and for both types requires the reference matrix to be feasible. An outline follows.
Our undertaking consists essentially of four parts. The first is a literature survey of some essentials of ill-conditioning, to include the divide between centered and uncentered diagnostics and conventional VIFs. The structure of orthogonality is examined next. Anomalies in usage, meaning, and interpretation of conventional VIFs are exposed analytically and through elementary and transparent case studies. Longstanding but ostensible foundations in turn are reassessed and corrected through the construction of "Reference models." These are moment arrays constrained to encode orthogonalities of the types considered. Neither array returns the conventional centered nor uncentered VIFs. Direct rules for finding the amended Reference models are given, preempting the need for constrained numerical algorithms. Finally, studies of ill-conditioned data from the literature are reexamined in light of these findings.
2. Preliminaries
2.1. Notation
Designate by R^n the Euclidean n-space. Matrices and vectors are set in bold type; the transpose and inverse of A are A' and A^{-1}; and a_{ij} refers on occasion to the (i, j)th element of A. Special arrays are the identity I_n; the unit vector 1_n = [1, ..., 1]'; the diagonal matrix D = Diag(a_1, ..., a_k); and the centering matrix I_n - (1/n)1_n 1_n', as idempotent of rank n - 1. The Frobenius norm of x is ||x|| = (x'x)^{1/2}. For A of order n x k and rank k, a generalized inverse is designated as A^+; its ordered singular values as xi_1 >= xi_2 >= ... >= xi_k > 0; and by Sp(A) the subspace of R^n spanned by the columns of A. By accepted convention its condition number is the ratio of extreme singular values, c(A) = xi_1/xi_k. For our model with dispersion V(beta-hat) = sigma^2 (X'X)^{-1}, we take sigma^2 = 1 unless stated otherwise, since variance ratios are scale invariant.
2.2. Case Study 1: A First Look
That anomalies pervade conventional centered and uncentered VIFs may be seen as follows. Consider the design X, with second-moment matrix X'X and its inverse as in expressions (1). Conventional centered and uncentered VIFs are then as listed at (3), respectively, the former for slopes only and the latter taking reciprocals of the diagonals of X'X as reference.
A Critique. The following points are basic. Note first that model (1) is nonorthogonal in both the centered and uncentered regressors.
Remark 1. The uncentered VIFs are not ratios of variances and thus fail to gage relative increases in variances owing to nonorthogonal columns of X. This follows since the first row and column of the second-moment matrix X'X are fixed and nonzero by design, so that taking X'X to be diagonal as reference cannot be feasible.
Remark 1 runs contrary to assertions throughout the literature. In consequence, for models with intercept the mainstay uncentered VIFs in recent vogue are largely devoid of meaning. Subsequently these are identified instead with angles quantifying degrees of multicollinearity among the regressors.
On the other hand, feasible Reference models for all parameters, as identified later for centered and uncentered data in Definition 13, Section 4.2, give the factors at (4) and (5) in lieu of conventional centered and uncentered VIFs, respectively. The former comprise corrected centered VIFs, extended to include the intercept. Both sets in fact are genuine variance inflation factors, as ratios of variances in the model (1), relative to those in Reference models feasible for centered and for uncentered regressors, respectively.
This example flagrantly contravenes conventional wisdom: (i) variances for slopes are inflated in (4), but for the intercept deflated, in comparison with the feasible centered Reference. Specifically, the intercept is estimated here with greater efficiency in the initial design (1), despite nonorthogonality of its centered regressors. (ii) Variances are uniformly smaller in the model (1) than in its feasible uncentered Reference from (5), thus exhibiting Variance Deflation, despite nonorthogonality of the uncentered regressors. A full explication of the anatomy of this study appears later. In support, we next examine technical details needed in subsequent developments.
2.3. Types of Orthogonality
The ongoing divergence between centered and uncentered diagnostics traces in part to meaning ascribed to orthogonality. Specifically, the orthogonality of columns of X in models without intercept refers unambiguously to the vector-space concept, that is, x_i'x_j = 0, as does the notion of collinearity of regressors with the constant vector in models with intercept. We refer to this as vector-space orthogonality, in short V-orthogonality. In contrast, nonorthogonality in models with intercept often refers instead to the statistical concept of correlation among columns of X when scaled and centered to their means. We refer to its negation as centered orthogonality, or C-orthogonality. Distinguishing between these notions is fundamental, as confusion otherwise is evident. For example, it is asserted in [11, p.125] that "the simple correlation coefficient does measure linear dependency between [the variables] in the data."
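The two concepts can part ways on the same data. A small Python check (the vectors are hypothetical, chosen only for illustration) exhibits a V-orthogonal pair whose centered cross-product is nonzero, using the identity (u - ubar)'(v - vbar) = u'v - n*mean(u)*mean(v).

```python
import numpy as np

# V-orthogonal pair: zero dot product in R^4, both with nonzero means.
u = np.array([1.0, 0.0, 0.0, 1.0])
v = np.array([1.0, 2.0, -1.0, -1.0])
assert abs(u @ v) < 1e-12                # vector-space (V) orthogonality

# Centered product: u'v - n*mean(u)*mean(v) = 0 - 4*(0.5)*(0.25) = -0.5
uc, vc = u - u.mean(), v - v.mean()
print(uc @ vc)                           # -0.5: correlated after centering
```

So a pair can be V-orthogonal yet fail C-orthogonality whenever both means are nonzero, which is the crux of the divide surveyed below.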
2.4. The Models
Consider models with intercept, {Y = beta_0*1_n + Z*beta_s + epsilon}, together with the second-moment matrix

X'X = [ n        n*zbar' ]
      [ n*zbar   Z'Z     ]     (6)

and its inverse in block-partitioned form, with zbar the vector of column means of Z. In later sections, we often will denote the mean-centering matrix by B_n = I_n - (1/n)1_n 1_n'. In particular, the centered form Z_c = B_n Z arises exclusively in models with intercept, with or without reparametrizing in the mean-centered regressors, where Z_c'Z_c = Z'Z - n*zbar*zbar'. Scaling Z_c to unit column lengths gives Z_c'Z_c in correlation form with unit diagonals.
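The properties of the mean-centering matrix just introduced can be verified numerically; this sketch (arbitrary random data, not from the study) confirms that it is idempotent of rank n - 1 and that it centers each column of Z.

```python
import numpy as np

n = 5
B = np.eye(n) - np.ones((n, n)) / n              # mean-centering matrix B_n
Z = np.random.default_rng(0).normal(size=(n, 3))

assert np.allclose(B @ B, B)                      # idempotent
assert np.linalg.matrix_rank(B) == n - 1          # rank n - 1
assert np.allclose(B @ Z, Z - Z.mean(axis=0))     # B_n Z centers each column
assert np.allclose(B @ np.ones(n), 0)             # annihilates the constant
```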
3. Historical Perspective
Our objective is an overdue revision of the tenets of variance inflation in regression. To provide context, we next survey extenuating issues from the literature. Direct quotations are intended so as not to subvert stances taken by the cited authors. Models with intercept are at issue since, as noted in [10], centered diagnostics have no place in models without intercept.
3.1. Background
Aspects of ill-conditioning span a considerable literature over many decades. Regarding the condition number, scaling columns of X to equal lengths approximately minimizes c(X) [12, p.120] based on [13]. Nonetheless, c(X) is cast in [9] as a blunt instrument for ill-conditioning, prompting the need for VIFs and other local constructs. Stewart [9] credits VIFs in concept to Daniel and in name to Marquardt.
Ill-conditioning points beyond c(X) in view of difficulties cited earlier. Remedies proffered in [14, 15] include transforming variables, adding new data, and deleting variable(s) after checking critical features of the reduced model. Other intended palliatives include Ridge and Partial Least Squares, as compared in [16]; Principal Components regression; and Surrogate models as in [17]. All are intended to return reduced standard errors at the expense of bias. Moreover, Surrogate solutions more closely resemble those from an orthogonal system than Ridge [18]. Together the foregoing and other options comprise a substantial literature as collateral to, but apart from, the present study.
3.2. To Center
Advocacy for centering includes the following. (i) VIFs often are defined categorically as the diagonals of the inverse of the correlation matrix of scaled and centered regressors; see [4, 9, 11, 19] for example. These are centered VIFs, widely adopted without justification as default to the exclusion of uncentered VIFs. (ii) It is asserted [4] that "centering removes the nonessential illconditioning, thus reducing the variance inflation in the coefficient estimates." (iii) Centering is advocated when predictor variables are far removed from origins on the basic data scales [10, 11].
3.3. Not to Center
Advocacy for uncentered diagnostics includes the following caveats from proponents of centering. (i) Uncentered data should be examined only if an estimate of the intercept is of interest [9, 10, 20]. (ii) "If the domain of prediction includes the full range from the natural origin through the range of data, the collinearity diagnostics should not be mean centered" [10, p.84].
Other issues against centering derive in part from numerical analysis and work by Belsley. (i) Belsley [1] identifies the condition number of a linear system as "the potential relative change in the LS solution that can result from a small relative change in the data." (ii) These require structurally interpretable variables as "ones whose numerical values and (relative) differences derive meaning and interpretability from knowledge of the structure of the underlying 'reality' being modeled" [1, p.75]. (iii) "There is no such thing as 'nonessential' illconditioning," and "meancentering can remove from the data the information needed to assess conditioning correctly" [1, p.74]. (iv) "Collinearity with the intercept can quite generally corrupt the estimates of all parameters in the model whether or not the intercept is itself of interest and whether or not the data have been centered" [21, p.90]. (v) An example [22, p.121] gives severely ill-conditioned data perfectly conditioned in centered form: "centering alters neither" inflated variances nor extremely sensitive parameter estimates in the basic data; moreover, "diagnosing the conditioning of the centered data (which are perfectly conditioned) would completely overlook this situation, whereas diagnostics based on the basic data would not." (vi) To continue from [22], ill-conditioning persists in the propagation of disturbances, in that "a 1 percent relative change in the [data] results in over a 40% relative change in the estimates," despite perfect conditioning in centered form, and "knowledge of the effect of small relative changes in the centered data is not meaningful for assessing the sensitivity of the basic LS problem," since relative changes and their meanings in centered and uncentered data often differ markedly. (vii) Regarding choice of origin, "the investigator must be able to pick an origin against which small relative changes can be appropriately assessed and it is the data measured relative to this origin that are relevant to diagnosing the conditioning of the LS problem" [22, p.126].
Other desiderata pertain. (i) "Because rewriting the model (in standardized variables) does not affect any of the implicit estimates, it has no effect on the amount of information contained in the data" [23, p.76]. (ii) Consequences of ill-advised diagnostics can be severe. Degraded numerical accuracy traces to near collinearity of regressors with the constant vector. In short, centering fails to prevent a loss in numerical accuracy; centered diagnostics are unable to discern these potential accuracy problems, whereas uncentered diagnostics are seen to work well. Two widely used statistical packages, SAS and SPSSX, fail to detect this type of ill-conditioning through use of centered diagnostics and thus return highly inaccurate coefficient estimates. For further details see [3].
On balance, for models with intercept the jury is out regarding the use of centered or uncentered diagnostics, to include VIFs. Even Belsley [1] (and elsewhere) concedes circumstances where centering does achieve structurally interpretable models. Of note is that the foregoing listed citations to Belsley apply strictly to condition numbers; other purveyors of ill-conditioning, specifically VIFs, are not treated explicitly.
3.4. A Synthesis
It bears notice that (i) the origin, however remote from the cluster of regressors, is essential for prediction, and (ii) the prediction variance is invariant to parametrizing in centered or uncentered forms. Additional remarks are codified next for subsequent referral.
Remark 2. Typically Y represents response to input variables x_1, ..., x_k. In a controlled experiment, levels are determined beforehand by subject-matter considerations extraneous to the experiment, to include minimal values. However remote the origin on the basic data scales, it seems informative in such circumstances to identify the origin with these minima. In such cases the intercept is often of singular interest, since it is then the standard against which changes in Y are to be gaged as regressors vary. We adopt this convention in subsequent studies from the archival literature.
Remark 3. In summary, the divergence in views, whether to center or not, appears to be that critical aspects of ill-conditioning, known and widely accepted for models without intercept, have been expropriated over decades, mindlessly and without verification, to apply point-by-point for models with intercept.
4. The Structure of Orthogonality
This section develops the foundations for Reference models capturing orthogonalities of types V and C. Essential collateral results are given in support as well.
4.1. Collinearity Indices
Stewart [9] reexamines ill-conditioning from the perspective of numerical analysis. Details follow, where X is a generic matrix of regressors having columns {x_j} and X^+ = (X'X)^{-1}X' is the generalized inverse of note, having {x_j^+} as its typical rows. Each corresponding collinearity index is defined in [9, p.72] as kappa_j = ||x_j|| * ||x_j^+||, constructed so as to be scale invariant. Observe that ||x_j^+||^2 is found along the principal diagonal of (X'X)^{-1}. When X is centered and scaled, Section 3 of [9] shows that the centered collinearity indices satisfy kappa_j^2 = VIF_c(beta_j-hat). In models with intercept and parameters (beta_0, beta_1, ..., beta_k), the values ||x_j^+||^2 lie along the principal diagonal of (X'X)^{-1}; the uncentered collinearity indices now satisfy kappa_j^2 = ||x_j||^2 * ||x_j^+||^2. In particular, since the initial column is 1_n with squared length n, the index for the intercept is kappa_0^2 = n*||x_0^+||^2. Moreover, it follows that the uncentered VIFs are squares of the collinearity indices, that is, VIF_u(beta_j-hat) = kappa_j^2. Note the asymmetry that centered VIFs exclude the intercept, in contrast to the inclusive uncentered versions. That the label Variance Inflation Factors for the latter is a misnomer is covered in Remark 1. Nonetheless, we continue the familiar notation VIF_u.
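Stewart's indices and their link to uncentered VIFs can be checked directly. In this Python sketch (a random design with intercept, purely illustrative), kappa_j is the product of column and pseudo-inverse row norms, and kappa_j^2 reproduces the products diag((X'X)^{-1}) * diag(X'X).

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(8), rng.normal(size=(8, 2))])  # intercept model

Xdag = np.linalg.pinv(X)                  # generalized inverse; rows x_j^+
kappa = np.linalg.norm(X, axis=0) * np.linalg.norm(Xdag, axis=1)

# Uncentered VIFs: diag((X'X)^{-1}) * diag(X'X), equal to kappa^2.
G = X.T @ X
vif_u = np.diag(np.linalg.inv(G)) * np.diag(G)
assert np.allclose(vif_u, kappa**2)
```

Since X^+ X = I forces ||x_j|| * ||x_j^+|| >= 1 by Cauchy-Schwarz, these squared indices are never below unity.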
Transcending the essential developments of [9] are connections between collinearity indices and angles between subspaces. To these ends choose a typical column x_j of X, and rearrange the columns as [X_(j), x_j], with X_(j) the remaining columns. We next seek elements of the inverse second-moment matrix as reordered by the corresponding permutation. From the clockwise rule the final diagonal element of the inverse is [x_j'(I - P_j)x_j]^{-1}, in succession for each j, where P_j is the projection operator onto the subspace spanned by the columns of X_(j). These in turn enable us to connect the uncentered indices {kappa_j}, and similarly the centered values, to the geometry of ill-conditioning as follows.
Theorem 4. For models with intercept let {VIF_u(beta_j-hat) = kappa_j^2} be conventional uncentered VIFs in terms of Stewart's [9] uncentered collinearity indices. These in turn quantify the extent of collinearities between subspaces through angles (in degrees) as follows. (i) Angles between x_j and Sp(X_(j)) are given by theta_j = arcsin(1/kappa_j), in succession for each j. (ii) In particular, the angle for the constant column quantifies the degree of collinearity between the regressors and the constant vector. (iii) Similarly let Z_c comprise the regressors centered to their means, rearrange as [Z_c(j), z_cj], and let {VIF_c(beta_j-hat) = kappa_cj^2} be centered VIFs in terms of Stewart's centered collinearity indices. Then angles (in degrees) between z_cj and Sp(Z_c(j)) are given by theta_cj = arcsin(1/kappa_cj).
Proof. From the geometry of the right triangle formed by {x_j, P_j x_j, (I - P_j)x_j}, the squared lengths satisfy ||x_j||^2 = ||P_j x_j||^2 + S_j^2, where S_j^2 = x_j'(I - P_j)x_j is the residual sum of squares from the projection. It follows that the principal angle between x_j and Sp(X_(j)) is given by sin(theta_j) = S_j/||x_j|| = 1/kappa_j for each j, to give conclusion (i). Conclusion (ii) follows on specializing with x_0 = 1_n. Conclusion (iii) follows similarly from the geometry of the right triangle formed by {z_cj, P_cj z_cj, (I - P_cj)z_cj}, where now P_cj is the projection operator onto the subspace spanned by the columns of Z_c(j), to complete our proof.
Remark 5. Rules of thumb in common use for problematic VIFs include those exceeding 10 or even 4; see [11, 24] for example. In angular measure these correspond, respectively, to angles below 18.435 and 30.0 degrees.
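Under Theorem 4 the conversion from a VIF threshold to an angle is theta = arcsin(1/sqrt(VIF)); a two-line helper (our own naming) confirms the figures quoted in Remark 5.

```python
import math

def vif_to_angle_deg(vif):
    # theta_j = arcsin(1/kappa_j), with kappa_j = sqrt(VIF_j)
    return math.degrees(math.asin(1.0 / math.sqrt(vif)))

print(vif_to_angle_deg(10.0))   # about 18.435 degrees
print(vif_to_angle_deg(4.0))    # 30 degrees
```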
4.2. Reference Models
We seek as Reference feasible models encoding orthogonalities of types V and C. The keys are as follows: (i) to retain essentials of the experimental structure and (ii) to alter what may be changed to achieve orthogonality. For a model without intercept with moment matrix X_0'X_0, our opening paragraph prescribes as reference the model with diagonal moment matrix D = Diag(d_1, ..., d_k), the {d_j} as diagonal elements of X_0'X_0, for assessing V-orthogonality. Moreover, on scaling the columns of X_0 to equal lengths, the Reference is perfectly conditioned with condition number unity. In addition, every model without intercept clearly conforms with its Reference, in the sense that D is positive definite, as distinct from models with intercept to follow.
Consider again models with intercept as in (6); let Z_c = B_n Z, with B_n as the mean-centering matrix; and again let {d_j} comprise the diagonal elements of Z'Z.
(i) V-Orthogonal Reference Model. The uncentered VIFs in models with intercept, defined as ratios of diagonal elements of (X'X)^{-1} to reciprocals of diagonal elements of X'X, appear to have seen exclusive usage, apparently in keeping with Remark 3. However, the following disclaimer must be registered as the formal equivalent of Remark 1.
Theorem 6. Statements that conventional uncentered VIFs quantify variance inflation owing to nondiagonal X'X are false for models with intercept having zbar != 0.
Proof. Since the Reference variances are reciprocals of diagonal elements of X'X, this usage is predicated on the false postulate that X'X can be diagonal for models with intercept. Specifically, the columns of Z are orthogonal to 1_n, so that X'X is diagonal, if and only if Z has been mean centered beforehand, contradicting zbar != 0.
To the contrary, Gunst [25] purports to show that VIF_u(beta_0-hat) registers genuine variance inflation, namely, the price to be paid in variance for designing an experiment having zbar != 0, as opposed to zbar = 0. Since the two variances for intercepts follow from (6), their ratio is shown in [25] to compare a design that is not V-orthogonal against one that is, in the parlance of Section 2.3. We concede this to be a ratio of variances but, to the contrary, not a VIF, since the parameters differ. In particular, one is the intercept beta_0 itself, whereas the other is the intercept after reparametrizing in centered regressors. Nonetheless, we still find the conventional VIF_u(beta_0-hat) to be useful for purposes to follow.
Remark 7. Section 3 highlights continuing concern in regard to collinearity of regressors with the constant vector. Theorem 4(ii) and expression (10) support the use of the corresponding angle as an informative gage on the extent of this occurrence. Specifically, the smaller the angle, the greater the extent of such collinearity.
Instead of conventional uncentered VIFs, given the foregoing disclaimer, we have the following amended version as Reference for uncentered diagnostics, altering what may be changed but leaving the first row and column of X'X intact.
Definition 8. Given a model with intercept and second-moment matrix X'X as in (6), the amended Reference model for assessing V-orthogonality retains the first row and column of X'X but replaces its lower right block Z'Z by D = Diag(d_1, ..., d_k), with {d_j} as diagonal elements of Z'Z, provided that the resulting matrix, say R_V, is positive definite. We identify a model to be V-orthogonal when its second-moment matrix is of this form.
As anticipated, a prospective R_V fails to conform to experimental data if not positive definite. These and further prospects are covered in the following, where theta_j designates the angle between 1_n and z_j, so that cos^2(theta_j) = n*zbar_j^2/d_j.

Lemma 9. Take R_V as a prospective Reference for V-orthogonality, and set delta = sum_j cos^2(theta_j). (i) In order that R_V may be positive definite, it is necessary that sum_j n*zbar_j^2/d_j < 1, that is, that delta < 1. (ii) Equivalently, it is necessary that the angles satisfy sum_j cos^2(theta_j) < 1, with theta_j as the angle between 1_n and z_j. (iii) The Reference variance for the intercept is 1/[n(1 - delta)]. (iv) The Reference variances for slopes are given by (1/d_j)[1 + cos^2(theta_j)/(1 - delta)], where delta = sum_j cos^2(theta_j).

Proof. The clockwise rule for determinants gives det(R_V) = [n - n^2*zbar'D^{-1}zbar]*det(D) = n(1 - delta)*det(D). Conclusion (i) follows since det(D) > 0. The computation cos^2(theta_j) = (1_n'z_j)^2/(n*z_j'z_j) = n*zbar_j^2/d_j, in parallel with (10), gives conclusion (ii). Using the clockwise rule for block-partitioned inverses, the leading element of R_V^{-1} is [n - n^2*zbar'D^{-1}zbar]^{-1}, giving conclusion (iii). Similarly, the lower right block of R_V^{-1}, of order k x k, is the inverse of D - n*zbar*zbar'. On identifying the arrays in the block-inversion theorem of [26], we have (D - n*zbar*zbar')^{-1} = D^{-1} + n*D^{-1}zbar*zbar'D^{-1}/(1 - delta). Conclusion (iv) follows on extracting its diagonal elements, to complete our proof.

Corollary 10. For the case of k = 2 regressors, in order that R_V may be positive definite, it is necessary that theta_1 + theta_2 > 90 degrees.

Proof. Beginning with Lemma 9(ii), compute 1 - cos^2(theta_1) - cos^2(theta_2) = sin^2(theta_2) - cos^2(theta_1) = [sin(theta_2) - cos(theta_1)][sin(theta_2) + cos(theta_1)], which is <= 0 when theta_1 + theta_2 <= 90 degrees.
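Definition 8 and Lemma 9 can be explored numerically. The sketch below uses hypothetical data, and v_reference is our own helper name, not from the paper: it builds the amended Reference by zeroing the off-diagonal of the lower right block of X'X and tests positive definiteness. For this particular configuration the feasibility condition of Lemma 9(i) fails.

```python
import numpy as np

def v_reference(X):
    # Amended V-orthogonal Reference (a sketch of Definition 8): keep the
    # first row/column of X'X, zero the off-diagonal of the lower block.
    G = X.T @ X
    R = np.diag(np.diag(G)).astype(float)
    R[0, :], R[:, 0] = G[0, :], G[:, 0]
    feasible = bool(np.all(np.linalg.eigvalsh(R) > 0))
    return R, feasible

# Hypothetical 4-run design: intercept plus two correlated regressors.
X = np.column_stack([np.ones(4), [1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 5.0, 4.0]])
R, ok = v_reference(X)

# Lemma 9(i): feasibility requires delta = sum_j n*zbar_j^2/d_j < 1.
n = X.shape[0]
d = np.diag(X.T @ X)[1:]
delta = np.sum(n * X[:, 1:].mean(axis=0) ** 2 / d)
print(round(delta, 3), ok)   # delta > 1 here, so the Reference is infeasible
```

This mirrors the later finding for Case Study 1 that some data configurations admit no feasible V-orthogonal Reference at all.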
Moreover, the matrix R_V itself is intrinsically ill-conditioned owing to its fixed first row and column, its condition number depending on zbar. To quantify this dependence, we have the following, where columns of Z have been standardized to a common squared length c.

Lemma 11. Let R_V be as in Definition 8, with common diagonal d_1 = ... = d_k = c and a = n*zbar. (i) The ordered eigenvalues of R_V are {lambda_1 >= c >= lambda_2}, with c of multiplicity k - 1, where lambda_1 and lambda_2 are the roots of lambda^2 - (n + c)lambda + (nc - a'a) = 0. (ii) The roots are positive, and R_V is positive definite, if and only if a'a < nc. (iii) If R_V is positive definite, its condition number is lambda_1/lambda_2 and is increasing in a'a.

Proof. Eigenvalues are roots of the determinantal equation det(R_V - lambda*I) = 0; from the clockwise rule this factors as (c - lambda)^{k-1}[(n - lambda)(c - lambda) - a'a] = 0, giving the value c of multiplicity k - 1 and two roots of the quadratic equation (n - lambda)(c - lambda) - a'a = 0, to give conclusion (i). Conclusion (ii) holds since the product of roots of the quadratic equation is nc - a'a and the greater root is positive. Conclusion (iii) follows directly, to complete our proof.
(ii) C-Orthogonal Reference Model. As noted, the notion of C-orthogonality applies exclusively for models with intercept. Accordingly, as Reference we seek to alter Z'Z so that the matrix Z_c'Z_c, comprising sums of squares and sums of products of deviations from means, thus altered, is diagonal. To achieve this canon of orthogonality, and to anticipate notation for subsequent use, we have the following.
Definition 12. (i) For a model with intercept with second-moment matrix X'X and inverse as in (6), let Q = Z_c'Z_c, and identify an indicator vector q = [q_12, q_13, ..., q_(k-1)k], of order k(k - 1)/2 in lexicographic order, where q_ij = 0 if the (i, j)th element of Q is set to zero and q_ij = 1 if the (i, j)th element of Q is left unchanged, with Z_c as the regressors centered to their means.
(ii) In particular, the Reference model for assessing C-orthogonality is such that Z_c'Z_c, and hence the lower right block of the inverse in (6), are diagonal; that is, taking q = [0, 0, ..., 0], so that the off-diagonal elements of Z_c'Z_c all vanish or, equivalently, the centered regressors are mutually uncorrelated. In this case, we identify the model to be C-orthogonal.
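Since Z_c'Z_c = Z'Z - n*zbar*zbar', the centered cross-products vanish exactly when each off-diagonal element of Z'Z equals n*zbar_i*zbar_j. The Python sketch below (hypothetical data; c_reference_moments is our own name for an illustrative helper) constructs a moment matrix altered in this way and verifies the resulting C-orthogonality.

```python
import numpy as np

def c_reference_moments(Z):
    # Alter off-diagonal elements of Z'Z so centered cross-products vanish:
    # set (Z'Z)[i, j] = n * zbar_i * zbar_j for i != j.
    n, k = Z.shape
    zbar = Z.mean(axis=0)
    ref = (Z.T @ Z).astype(float)
    mask = ~np.eye(k, dtype=bool)
    ref[mask] = (n * np.outer(zbar, zbar))[mask]
    return ref

Z = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0], [4.0, 4.0]])
n, zbar = Z.shape[0], Z.mean(axis=0)
ref = c_reference_moments(Z)
centered = ref - n * np.outer(zbar, zbar)   # Z_c'Z_c under the reference
print(np.round(centered, 10))               # off-diagonals are zero
```

This is the direct construction echoed in Remark 15 below, where setting the off-diagonal centered element to zero fixes the amended moment immediately, with no numerical search.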
Recall that conventional centered VIFs for slopes are ratios of diagonal elements of the inverse in (6) to reciprocals of the diagonal elements of the centered Z_c'Z_c. Apparently this rests on the logic that C-orthogonality implies that Z_c'Z_c is diagonal. However, the converse fails, and it is the converse that is embodied in Definition 12(ii). In consequence, conventional centered VIFs are deficient in applying only to slopes, whereas the factors resulting from Definition 12(ii) apply informatively for all of (beta_0, beta_1, ..., beta_k).
Definition 13. Designate {VF_u} and {VF_c} as Variance Factors resulting from the Reference models of Definition 8 and Definition 12(ii), respectively. On occasion these represent Variance Deflation in addition to inflation.
Essential invariance properties pertain. Conventional VIFs are scale invariant; see Section 3 of [9]. We see next that the centered factors are translation invariant as well. To these ends we shift the columns of Z to a new origin through Z -> Z - 1_n*c', where c = [c_1, ..., c_k]'. The resulting model is {Y = alpha_0*1_n + (Z - 1_n*c')beta_s + epsilon}, thus preserving slopes, where alpha_0 = beta_0 + c'beta_s. Corresponding to X'X and its inverse at (6) are the shifted second-moment matrix and its inverse in the form of (16). This pertains to subsequent developments, and basic invariance results emerge as follows.
Lemma 14. Consider the model with intercept together with its shifted version, as above. Then the lower right blocks of the inverses appearing in (6) and (16) are identical.

Proof. Rules for block-partitioned inverses again assert that the lower right block of the inverse in (16) is the inverse of (Z - 1_n*c')'B_n(Z - 1_n*c') = Z_c'Z_c, since B_n*1_n = 0, to complete our proof.
These facts in turn support subsequent claims that centered factors are translation and scale invariant for slopes, apart from the intercept.
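Lemma 14's invariance claim is easy to confirm numerically: centered VIFs computed from the correlation matrix are unchanged by shifting and (positively) rescaling columns. A quick check on arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(10, 3))

def vif_c(Z):
    # Centered VIFs: diagonals of the inverse correlation matrix.
    R = np.corrcoef(Z, rowvar=False)
    return np.diag(np.linalg.inv(R))

# Shift each column to a new origin and rescale; slopes' VIFs are unchanged.
shifted_scaled = (Z - np.array([5.0, -2.0, 0.5])) * np.array([2.0, 10.0, 0.1])
assert np.allclose(vif_c(Z), vif_c(shifted_scaled))
```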
4.3. A Critique
Again we distinguish centered from uncentered factors. The following comments apply. (C1) A design is either V- or C-orthogonal, respectively, according as the lower right block Z'Z of X'X, or Z_c'Z_c from expression (6), is diagonal. Orthogonalities of types V and C are exclusive and hence work at crossed purposes. (C2) In particular, since Z'Z = Z_c'Z_c + n*zbar*zbar', C-orthogonality with zbar != 0 holds only if the columns of Z are V-nonorthogonal. Conversely, if Z is V-orthogonal, so that Z'Z is diagonal, then Z_c'Z_c cannot be diagonal as Reference, in which case the conventional centered VIFs are tenuous. (C3) Conventional uncentered VIFs, based on uncentered X in models with intercept having zbar != 0, do not gage variance inflation as claimed, founded on the false tenet that X'X can be diagonal. (C4) To detect influential observations and to classify high leverage points, case-influence diagnostics, namely, ratios of VIFs computed with and without subsets of observations, are studied in [27] for assessing the impact of subsets on variance inflation, comparing values from the full data against those on deleting observations in an index set. Similarly, [28] proposes such ratios on deleting single observations. In the present context these would gain substance on modifying the Reference models accordingly.
5. Case Study 1: Continued
5.1. The Setting
We continue an elementary and transparent example to illustrate essentials. Recall from Section 2.2 the design X, with second-moment matrix X'X and its inverse as in expressions (1). The design is neither V- nor C-orthogonal, since neither the lower right block of X'X nor the centered Z_c'Z_c is diagonal. Moreover, the uncentered VIFs as listed in (3) are not the vaunted relative increases in variances owing to nonorthogonal columns of X. Indeed, the only opportunity for V-orthogonality here is between the regressor columns themselves.
Nonetheless, from Section 4.1 we utilize Theorem 4(i) and (10) to connect the collinearity indices of [9] to angles between subspaces. Specifically, Minitab recovered values for the residuals; further computations proceed routinely as listed in Table 1. In particular, the principal angle between the constant vector and the span of the regressor vectors is recovered, as anticipated in Remark 7.

5.2. V-Orthogonality
For subsequent reference let X_V be a design as in (1) but with amended second-moment matrix. Invoking Definition 8, we seek a V-orthogonal Reference with moment matrix retaining the first row and column of X'X but with diagonal lower right block. This is found constructively on retaining the column lengths and sums in X'X, but replacing the off-diagonal products by zeros, as listed in Table 2, giving the design X_V with its columns as Reference.

Accordingly, X_V is "as V-orthogonal as it can get," given the lengths and sums of columns as prescribed in the experiment and as preserved in the Reference model. Lemma 9, with the angles theta_1 and theta_2 as computed, gives delta < 1, so the Reference is feasible. Applications of Lemma 9(iii)-(iv) in succession give the reference variances
As Reference, these combine with actual variances from (1), giving the factors {VF_u} at (5) for the original design relative to X_V. For example, VF_u = 0.7222/0.9375 = 0.7704, with 0.7222 from (1) and 0.9375 from (19). This contrasts with the conventional values in (3). As in Section 2.2 and Table 2, it is noteworthy that the nonorthogonal design yields uniformly smaller variances than the V-orthogonal X_V, namely, deflation throughout.
Further versions of our basic design are listed for comparison in Table 2, with variances as diagonal elements of the corresponding inverses for the various designs. The designs themselves are exhibited on transposing rows from Table 2 and substituting each into the design template. One further design was constructed but not listed, since its second-moment matrix is not invertible. Clearly these are actual designs amenable to experimental deployment.
5.3. C-Orthogonality
Continuing, and invoking Definition 12(ii), we seek a C-orthogonal Reference having the matrix Z_c'Z_c diagonal. This is identified among the designs in Table 2. From this the second-moment matrix and its inverse are as given at (20). The variance factors {VF_c} are listed in (4), where, for example, VF_c = 0.3889/0.3571 = 1.0889. As distinct from conventional centered VIFs for slopes only, our factor for the intercept here reflects Variance Deflation, wherein the intercept is estimated more precisely in the initial non-C-orthogonal design.
Remark 15. Observe that the choice of the Reference may be posed as seeking an amended moment such that Z_c'Z_c is diagonal, then solving numerically, using Maple for example. However, the algorithm in Definition 12(ii) affords a direct solution: that Z_c'Z_c should be diagonal stipulates its off-diagonal element as zero, giving the value 0.6 in the matrix at expression (20).
To illustrate Theorem 4(iii), we compute 1/kappa_c^2 = 1/1.0889 and theta = arcsin(1/kappa_c) = 73.398 degrees as the angle between the vectors of (1) when centered to their means. For slopes, [9] relates the centered and uncentered collinearity indices: their numerators are equal, but their denominators are reciprocals of the lengths of the centered and uncentered regressor vectors, respectively.
A further note on C-orthogonality is germane. Suppose that the actual experiment is the C-orthogonal design, yet, towards a thorough diagnosis, the user evaluates the conventional uncentered VIFs as ratios of variances from Table 2. Unfortunately, their meaning is obscured, since a design cannot at once be V- and C-orthogonal.
As in Remark 2, we next alter the basic design at (1) on shifting columns of the measurements to their minima as origin and scaling to have equal squared lengths. The resulting design follows directly from (1). The new second-moment matrix and its inverse are given at (21), giving conventional centered VIFs as before. Against C-orthogonality, this gives the diagnostics at (22), demonstrating, in comparison with (4), that the factors {VF_c}, apart from the intercept, are invariant under translation and scaling, a consequence of Lemma 14.
We further seek to compare variances for the shifted and scaled design against V-orthogonality as Reference. However, from (21) we determine that the quantity cos^2(theta_1) + cos^2(theta_2) exceeds unity, so we infer from Lemma 9(i) that V-orthogonality is incompatible with this configuration of the data, and the factors {VF_u} are undefined. Equivalently, on evaluating theta_1 and theta_2, Corollary 10 asserts that the prospective Reference matrix is not positive definite, since theta_1 + theta_2 < 90 degrees. This appears to be anomalous, until we recall that V-orthogonality is invariant under rescaling, but not under recentering, the regressors.
5.4. A Critique
We reiterate apparent ambiguity ascribed in the literature to orthogonality. A number of widely held guiding precepts has evolved, as enumerated in part in our opening paragraph and in Section 3. As in Remark 3, these clearly have been taken to apply verbatim for models with intercept, to be paraphrased in part as follows. (P1) Ill-conditioning espouses inflated variances; that is, VIFs necessarily equal or exceed unity. (P2) V-orthogonal designs are "ideal" in that VIFs for such designs are all unity; see [11] for example. We next reassess these precepts as they play out under V- and C-orthogonality in connection with Table 2. (C1) For models with intercept having zbar != 0, we reiterate that the uncentered VIFs at expression (3) overstate adverse effects on variances of the nonorthogonal array, since X'X cannot be diagonal. Moreover, these conventional values fail to discern that the revised values at (5) reflect that the parameters are estimated with greater efficiency in the nonorthogonal array. (C2) For V-orthogonality, the claim P1 thus is false by counterexample. The factors at (5) are Variance Deflation Factors for the original design relative to the V-orthogonal X_V. (C3) Additionally, the variances in Table 2 are all seen to decrease in the transition from the V-orthogonal design to the nonorthogonal one. Similar trends are confirmed in other ill-conditioned data sets from the literature. (C4) For C-orthogonal designs, the claim P2 is false by counterexample. Such designs need not be "ideal," as the intercept may be estimated more efficiently in a nonorthogonal design, as demonstrated by the factors at (4) and (22). Similar trends are confirmed elsewhere, as seen subsequently. (C5) Factors for the intercept are critical in prediction, where prediction variances necessarily depend on it, especially for predicting near the origin of the system of coordinates. (C6) Dissonance between V- and C-orthogonality is seen in Table 2, where the V-orthogonal design, with diagonal lower right block in X'X, is the antithesis of the C-orthogonal design, with diagonal Z_c'Z_c. (C7) In short, these transparent examples serve to dispel the decades-old mantra that ill-conditioning necessarily spawns inflated variances for models with intercept, and they serve to illuminate the contributing structures.
6. Orthogonal and Linked Arrays
A genuine V-orthogonal array was generated as eigenvectors from a positive definite matrix (see Table 8.3 of [11, p.377]) using a standard routine of the programming package. The columns, as the second through fourth columns of Table 3, comprise the first three eigenvectors scaled to length 8. These apply for the models with and without intercept to be analyzed. In addition, linked vectors were constructed as functions of these columns. Clearly these arrays, as listed in Table 3, are not archaic abstractions, as both are amenable to experimental implementation.

6.1. The Model
We consider in turn the orthogonal and the linked series.
6.1.1. Orthogonal Data
Matrices X'X and (X'X)^{-1} for the orthogonal data under the model with intercept are listed in Table 4, where variances occupy the diagonals of the inverse. The conventional uncentered VIFs follow as in Section 4.1. From these, we find the angle between the constant vector and the span of the regressor vectors as in Theorem 4(ii). Moreover, the angle between a regressor and the span of the others is not 90 degrees, because of collinearity with the constant vector, despite the mutual orthogonality of the regressors. Observe here that X'X already is V-orthogonal; accordingly, the Reference model coincides with the model itself, and thus the factors {VF_u} are all unity.

In view of dissonance between V- and C-orthogonality, the sums of squares and products for the mean-centered regressors reflect nonnegligible negative dependencies, distinguishing the V-orthogonal array from its nonorthogonal matrix of deviations.
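The dissonance noted here is generic: for any V-orthogonal array Q with Q'Q = I, the centered cross-products obey Qc'Qc = I - n*qbar*qbar', so the off-diagonals are -n*qbar_i*qbar_j, nonzero whenever the relevant column means are nonzero. A short check on random orthonormal columns (illustrative only, not the Table 3 data):

```python
import numpy as np

# Mutually orthonormal columns: a V-orthogonal array.
Q, _ = np.linalg.qr(np.random.default_rng(3).normal(size=(8, 3)))
n = Q.shape[0]
qbar = Q.mean(axis=0)

Qc = Q - qbar                        # mean-centered columns
# Identity: Qc'Qc = Q'Q - n * qbar qbar' = I - n * qbar qbar'
assert np.allclose(Qc.T @ Qc, np.eye(3) - n * np.outer(qbar, qbar))
print(np.round(Qc.T @ Qc, 4))        # off-diagonals generally nonzero
```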
To continue, we suppose instead that the V-orthogonal model was to be recast as C-orthogonal. The Reference model and its inverse are listed in Table 5. Variance factors, as ratios of diagonal elements of the inverse in Table 4 to those in Table 5, are