ISRN Computational Mathematics
Volume 2012 (2012), Article ID 340415, 12 pages
On Estimating the Linear-by-Linear Parameter for Ordinal Log-Linear Models: A Computational Study
1School of Mathematical and Physical Sciences, University of Newcastle, Callaghan, NSW 2308, Australia
2School of Veterinary Medicine, University of California, Davis, Davis, CA 95616, USA
Received 17 January 2012; Accepted 5 March 2012
Academic Editors: T. Allahviranloo, H. J. Ruskin, P. B. Vasconcelos, and Q.-W. Wang
Copyright © 2012 Eric J. Beh and Thomas B. Farver. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Estimating linear-by-linear association has long been an important topic in the analysis of contingency tables. For ordinal variables, log-linear models may be used to detect the strength and magnitude of the association between such variables, and iterative procedures are traditionally used. Recently, studies have shown, by way of example, that three non-iterative techniques can be used to quickly and accurately estimate the parameter. This paper provides a computational study of these procedures, and the results show that they are extremely accurate when compared with estimates obtained using Newton's unidimensional method.
During the 1970s and 1980s, parameter estimation procedures for ordinal log-linear and ordinal association models gained considerable attention in the literature. For example, Haberman [1], Goodman [2, 3], Agresti [4], and Gilula and Haberman [5] all considered various approaches to obtaining the maximum likelihood estimate (MLE) of the parameters of such models. Recently there has been renewed interest in estimation methods for these ordinal models. Galindo-Garre and Vermunt [6] considered two adaptations of Goodman's [2] approach, while Aït-Sidi-Allal et al. [7] considered an iterative procedure based on Fisher's scoring algorithm. These approaches can involve computationally intensive procedures and are therefore subject to a number of problems, including a poor choice of starting values and slow (or even a lack of) convergence of the algorithm.
A different approach to estimating parameters of ordinal log-linear (or association) models is to consider the direct, iteration-free, estimation procedure originally proposed by Beh and Davy [8] and recently extended by Beh and Farver [9]. The latter authors presented three non-iterative procedures for estimating the association parameter of such models and empirically demonstrated (by considering 14 commonly analysed contingency tables of varying dimension and sample size) that these non-iterative procedures can provide estimates that are statistically indistinguishable from the estimate obtained using Newton's unidimensional method.
By performing a computational study, this paper explores the accuracy and reliability of Newton's method and four non-iterative procedures on randomly generated contingency tables with a fixed association parameter. The study proceeds by first briefly describing the ordinal log-linear model (Section 2) and Newton's method (Section 3.1). A brief mathematical description of the non-iterative techniques of Beh and Farver [9] is given in Section 3.2. The calculation of the standard error of these estimates is considered in Section 3.3. Section 4.1 describes the mechanism by which contingency tables of varying dimension and association parameter are generated, while the results of the study are discussed in Section 4.2. Section 5 provides an overview of these findings.
2. The Ordinal Log-Linear Model
Suppose we consider an I × J two-way contingency table N, where the (i, j)th cell entry is given by n_ij for i = 1, 2, …, I and j = 1, 2, …, J. Let the grand total of N be n, and let the matrix of joint relative frequencies be P, so that its (i, j)th cell entry is p_ij = n_ij/n and Σ_i Σ_j p_ij = 1. Define the ith row marginal proportion by p_i• = Σ_j p_ij, and define the jth column marginal proportion by p_•j = Σ_i p_ij. When the row and column variables both consist of ordered categories, the logarithm of the (i, j)th cell may be reconstituted by considering the following ordinal log-linear model:

ln m_ij = μ + α_i + β_j + φ(u_i − ū)(v_j − v̄),  (1)

where m_ij is the expected frequency of the (i, j)th cell. For this model, μ is the grand mean, and α_i and β_j are the main effects associated with the ith row category and the jth column category, respectively; they are constrained so that Σ_i α_i = Σ_j β_j = 0. These parameters may be estimated directly in closed form; see Beh and Davy [8]. Model (1) requires the specification of u_i, the score associated with the ith row category. When the row categories are ordered, these scores can be arranged in ascending, or descending, order depending on the structure of the row variable. Similarly, v_j is the score associated with the jth column category for an ordinal column variable; these scores are also arranged in ascending (or descending) order. For (1), ū and v̄ are the weighted means of the row and column scores, respectively, where the weights are the marginal relative proportions, so that ū = Σ_i p_i• u_i and v̄ = Σ_j p_•j v_j.
The relationship between the two ordered categorical variables may be determined by estimating the parameter φ. This parameter is the measure of linear-by-linear association, and its interpretation is aided by noting that the local log-odds ratio is

θ_ij = ln[(m_ij m_(i+1)(j+1)) / (m_i(j+1) m_(i+1)j)] = φ(u_(i+1) − u_i)(v_(j+1) − v_j).

Therefore, if equidistant scores are chosen to reflect the ordered structure of the variables, so that u_(i+1) − u_i = 1 and v_(j+1) − v_j = 1, then φ is the common log-odds ratio of the contingency table.
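The constant log-odds-ratio property of model (1) can be checked numerically. The short sketch below uses hypothetical parameter values (the μ, α, β, and φ shown are illustrative only, and unweighted score means are used for simplicity, since re-centring the scores only shifts the main-effect terms):

```python
import numpy as np

# Hypothetical parameter values, chosen for illustration only
I, J = 3, 4
u = np.arange(1, I + 1, dtype=float)      # natural row scores 1, 2, 3
v = np.arange(1, J + 1, dtype=float)      # natural column scores 1, ..., 4
alpha = np.array([0.2, -0.1, -0.1])       # row main effects (sum to zero)
beta = np.array([0.3, 0.1, -0.2, -0.2])   # column main effects (sum to zero)
mu, phi = 2.0, 0.5

# Expected frequencies under the linear-by-linear model (1):
# ln m_ij = mu + alpha_i + beta_j + phi (u_i - u_bar)(v_j - v_bar)
u_bar, v_bar = u.mean(), v.mean()
m = np.exp(mu + alpha[:, None] + beta[None, :]
           + phi * np.outer(u - u_bar, v - v_bar))

# Local log-odds ratios: ln[(m_ij m_{i+1,j+1}) / (m_{i,j+1} m_{i+1,j})]
theta = (np.log(m[:-1, :-1]) + np.log(m[1:, 1:])
         - np.log(m[:-1, 1:]) - np.log(m[1:, :-1]))
print(theta)  # every entry equals phi = 0.5 for unit-spaced scores
```

The main effects cancel in the second difference, so every local log-odds ratio reduces to φ(u_(i+1) − u_i)(v_(j+1) − v_j) = φ for unit-spaced scores.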
If one considers the log-linear model (1), then the (i, j)th cell may equivalently be reconstituted by considering the association model

m_ij = a_i b_j exp(φ(u_i − ū)(v_j − v̄)),  (4)

where a_i and b_j are subject to an identifiability constraint and are treated here as nuisance parameters. The estimation of φ can be carried out by using Newton's unidimensional method, considered by Goodman [2], if only one set of scores is fixed. One may also refer to Haberman [1], Agresti ([4], Appendix B), Nelder and Wedderburn [10], and Galindo-Garre and Vermunt [6] for iterative algorithms. Recently, non-iterative approaches have been proposed for estimating φ. Beh and Davy [8] originally considered such an approach. This work has since been expanded upon by Beh and Farver [9], who proposed another three non-iterative procedures for estimating this parameter and discussed the calculation of the corresponding standard errors. By performing an empirical study, they demonstrated that such non-iterative procedures estimate φ with as much precision as Newton's method. Such an approach allows one to easily, and directly, estimate the linear-by-linear parameter without resorting to iterative techniques. This paper continues the study undertaken by Beh and Farver [9] by providing an evaluation of the performance and accuracy of Newton's method, and the four non-iterative procedures, for estimating the parameter φ of model (1).
3. On Estimating φ
3.1. Newton’s Unidimensional Method
There are a number of iterative procedures that may be used for estimating φ from the log-linear model (1). These include the iterative proportional fitting algorithm, as described by Agresti ([4], pp. 64–66), and Nelder and Wedderburn's [10] quasi-likelihood approach. In this paper we consider Newton's unidimensional method because of its relative simplicity and popularity. A brief description of Newton's method for estimating φ is provided here; Beh and Farver [9] describe it in more detail.
Suppose that the cell frequencies n_ij of a contingency table N are independent Poisson random variables, and denote the expected value of n_ij (under some model) by m_ij. In this paper the expected value of n_ij is determined by considering (4). The (t + 1)th iteration of Newton's method updates the MLE of φ, denoted φ̂^(t+1), using the estimate φ̂^(t) obtained from the tth iteration, such that

φ̂^(t+1) = φ̂^(t) + [Σ_i Σ_j (n_ij − m̂_ij^(t))(u_i − ū)(v_j − v̄)] / [Σ_i Σ_j m̂_ij^(t) (u_i − ū)²(v_j − v̄)²],  (5)

until convergence is reached. This is achieved when |φ̂^(t+1) − φ̂^(t)| < ε, for some reasonably small value of ε. Here m̂_ij^(t) is just (4) with φ̂^(t) in place of φ. When this stopping criterion is satisfied, the MLE of φ using Newton's method is the final iterative estimate. This method is adopted in this paper for obtaining the MLE of φ. To initiate Newton's method, a starting value of φ̂^(0) = 0 is used, which coincides with independence between the two categorical variables. The standard error of the MLE of φ using the iterative procedure described by (5) is

se(φ̂) = [Σ_i Σ_j m̂_ij (u_i − ū)²(v_j − v̄)²]^(−1/2),  (6)

where m̂_ij is the estimated value of m_ij given φ̂. There are some adaptations of Newton's unidimensional method that can entice it to converge faster, be more accurate, or both; Beh and Farver [9] consider some possible ways of doing this.
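The general scheme described above can be sketched as follows. This is not the authors' code: the function name and the handling of the nuisance parameters (an inner iterative-proportional-fitting step that refits the main effects between Newton updates on φ) are assumptions made for illustration.

```python
import numpy as np

def newton_phi(n, u, v, eps=1e-8, max_iter=50000):
    """Estimate the linear-by-linear parameter phi by Newton's
    unidimensional method, refitting the nuisance (main-effect)
    parameters by iterative proportional fitting at each step.
    A sketch of the general approach, not the authors' code."""
    n = np.asarray(n, dtype=float)
    p_r, p_c = n.sum(axis=1), n.sum(axis=0)       # observed margins
    u_bar = (p_r @ u) / n.sum()                   # weighted mean scores
    v_bar = (p_c @ v) / n.sum()
    w = np.outer(u - u_bar, v - v_bar)            # centred score products
    phi = 0.0                                     # phi = 0 <=> independence
    m = np.outer(p_r, p_c) / n.sum()              # independence fit
    for _ in range(max_iter):
        # refit the main effects so the margins of m match those of n
        for _ in range(25):
            m *= (p_r / m.sum(axis=1))[:, None]
            m *= (p_c / m.sum(axis=0))[None, :]
        # one Newton step on the Poisson log-likelihood in phi:
        # score = sum (n - m) w,  information = sum m w^2
        step = ((n - m) * w).sum() / (m * w * w).sum()
        phi += step
        m *= np.exp(step * w)
        if abs(step) < eps:
            break
    return phi
```

On a table generated exactly from the model, the routine recovers the generating φ, since the score equation is then satisfied at the true value.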
3.2. Noniterative Procedures
The computational study considered in this paper evaluates four non-iterative procedures for estimating φ. The first approach is

φ̂_Log = [Σ_i Σ_j p_i• p_•j (u_i − ū)(v_j − v̄) ln p_ij] / (σ_u² σ_v²),  (7)

where σ_u² = Σ_i p_i•(u_i − ū)² and σ_v² = Σ_j p_•j(v_j − v̄)², and is termed the log-noniterative, or LogNI, estimate of φ. The estimate (7) is guaranteed to exist when n_ij > 0 for all i and j, and so is restricted to the case where none of the cell frequencies observed in a contingency table are zero. When there are zero cell frequencies in the contingency table, this problem may be simply overcome by substituting small values where the zeros are observed. Agresti ([11], p. 397) points out that in some situations, especially where a saturated model is considered, replacing zero cells with 0.5 is appropriate, although in other circumstances adding 0.5 may create problems.
An alternative approach is to replace the natural logarithm appearing in the estimate (7) with a first-order approximation of the logarithm function. Alternatively, a second-order approximation may also be considered. By adopting these two approximations of the logarithm function, two further estimates of φ, given by (10), are obtained; they are referred to here as the first-order and second-order estimates of φ, respectively. An alternative way to deal with the problem of zero cell frequencies is to use another approximation of the logarithm function present in the LogNI estimate (7). For example, by considering only the first term in the power series expansion of the logarithm function in (11), the estimate (7) can be approximated to yield the alternative estimate (12). This estimate was originally proposed by Beh and Davy [8] and so is termed the BD-noniterative, or simply BD, estimate of φ; it is the fourth non-iterative estimation approach evaluated in the present simulation study.
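The aggregation idea behind the log-based estimate can be sketched numerically: multiplying model (1) by p_i• p_•j (u_i − ū)(v_j − v̄) and summing over i and j removes the grand-mean and main-effect terms, leaving φ times the product of the weighted score variances. The function below is a minimal sketch assuming this form of the estimator; the function name is hypothetical.

```python
import numpy as np

def phi_log_ni(n, u, v):
    """Noniterative estimate of phi: weight ln(p_ij) by
    p_i. * p_.j * (u_i - u_bar)(v_j - v_bar) and sum; the grand-mean
    and main-effect terms of model (1) all cancel in the sum.
    Requires all cell frequencies to be strictly positive."""
    p = np.asarray(n, dtype=float) / np.sum(n)    # joint proportions
    pr, pc = p.sum(axis=1), p.sum(axis=0)         # marginal proportions
    u_bar, v_bar = pr @ u, pc @ v                 # weighted mean scores
    su2 = pr @ (u - u_bar) ** 2                   # weighted score variances
    sv2 = pc @ (v - v_bar) ** 2
    w = np.outer(pr, pc) * np.outer(u - u_bar, v - v_bar)
    return (w * np.log(p)).sum() / (su2 * sv2)
```

Applied to exact model probabilities the cancellation is exact, so the generating φ is recovered without iteration.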
3.3. Standard Error of the Noniterative Estimates
Beh and Farver [9] show, by considering the estimate (12), that its standard error is

se = [n σ_u² σ_v²]^(−1/2),  (13)

where σ_u² = Σ_i p_i•(u_i − ū)² and σ_v² = Σ_j p_•j(v_j − v̄)². However, (13) is equivalent to the standard error (6) when the row and column variables are completely independent. To show this, consider the standard error (6). When the two categorical variables are independent, so that p_ij = p_i• p_•j, then m̂_ij = n p_i• p_•j. Therefore (6) can be alternatively expressed as

se = [n Σ_i p_i•(u_i − ū)² Σ_j p_•j(v_j − v̄)²]^(−1/2).

This standard error provides an indication of the maximum standard error that one would obtain for a contingency table with sample size n. For the computational study considered in this paper, the standard error of the non-iterative estimates is calculated using (6).
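A sketch of the standard-error calculation, assuming the information-based form se = 1/sqrt(Σ_i Σ_j m̂_ij (u_i − ū)²(v_j − v̄)²) discussed above (the function name is hypothetical):

```python
import numpy as np

def se_phi(m_hat, u, v):
    """Asymptotic standard error of phi-hat of the form
    1 / sqrt(sum_ij m_ij (u_i - u_bar)^2 (v_j - v_bar)^2),
    i.e. the inverse square root of the Poisson information for phi
    with the nuisance parameters held fixed (a sketch)."""
    n = m_hat.sum()
    pr, pc = m_hat.sum(axis=1) / n, m_hat.sum(axis=0) / n
    u_bar, v_bar = pr @ u, pc @ v
    info = (m_hat * np.outer((u - u_bar) ** 2, (v - v_bar) ** 2)).sum()
    return 1.0 / np.sqrt(info)
```

Under independence the fitted values factorise as m̂_ij = n p_i• p_•j, so the information splits into n times the product of the two weighted score variances, reproducing the closed form given above.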
4. The Computational Study
4.1. Randomly Generating a Contingency Table
In order to study the accuracy and reliability of Newton's method and the four non-iterative procedures described above, we consider a simple procedure for generating a two-way contingency table of dimension I × J given a value of φ and a sample size n. The procedure involves selecting an independent random sample of I pseudo-marginal row proportions, denoted by c_1, c_2, …, c_I, from a uniform distribution on the interval (0, 1). A sample of J pseudo-marginal column proportions, d_1, d_2, …, d_J, is generated in a similar manner. The random marginal proportions of the ith row and jth column can therefore be obtained such that

p_i• = c_i / Σ_i c_i,  p_•j = d_j / Σ_j d_j,

respectively, ensuring that Σ_i p_i• = 1 and Σ_j p_•j = 1. Given these randomly generated marginal proportions and the predefined scores used to reflect the ordered structure of the categories, the mean row and column scores are

ū = Σ_i p_i• u_i,  v̄ = Σ_j p_•j v_j,
respectively. For this computational study, natural row scores u_i = i (for i = 1, 2, …, I) and natural column scores v_j = j (for j = 1, 2, …, J) were chosen to reflect the ordinal structure of the categories. However, any scoring procedure that reflects the ordinal structure of the categorical variables may be considered, such as midrank scores [12] and Nishisato scores [13]. The (i, j)th joint cell frequency is then calculated from the ordinal log-linear model (1) such that

n_ij = n p_i• p_•j exp(φ(u_i − ū)(v_j − v̄)),

where n_ij is the frequency generated for the (i, j)th cell, given the randomly generated marginal proportions and the predetermined value of φ.
It is apparent that the cell values obtained are not guaranteed to be integers. Therefore, each cell value is rounded to the nearest integer to obtain the randomly generated contingency table. Thus, the sample size n specified in the algorithm is only an approximate sample size.
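The generation recipe of Section 4.1 can be sketched as follows. The exponential form of the cell-frequency formula is an assumption based on model (1), and the function name is hypothetical:

```python
import numpy as np

def random_table(I, J, phi, n=1000, rng=None):
    """Generate a random I x J contingency table with linear-by-linear
    parameter phi: uniform pseudo-marginals, natural scores, model-based
    cell frequencies, rounded to the nearest integer (so the realised
    total is only approximately n). A sketch of the Section 4.1 recipe."""
    rng = np.random.default_rng(rng)
    pr = rng.uniform(size=I); pr /= pr.sum()   # random row marginals
    pc = rng.uniform(size=J); pc /= pc.sum()   # random column marginals
    u = np.arange(1.0, I + 1)                  # natural row scores
    v = np.arange(1.0, J + 1)                  # natural column scores
    u_bar, v_bar = pr @ u, pc @ v              # weighted mean scores
    m = n * np.outer(pr, pc) * np.exp(phi * np.outer(u - u_bar, v - v_bar))
    return np.rint(m)                          # round to integer counts
```

Rounding is the reason the realised sample size only approximates n, as noted above.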
4.2. Results of the Study

The computational study performed in this paper is based on random contingency tables (generated using the procedure described in Section 4.1) of different dimensions, each with a sample size of approximately 1000. Tables were generated with the following dimensions: 2 × 2, 2 × 3, 2 × 4, 2 × 5, 3 × 3, 3 × 4, 3 × 5, 4 × 4, 4 × 5, and 5 × 5. For each dimension, 200 random tables were generated, each with a fixed value of φ; the values considered in this paper were 0, 0.01, 0.05, and 0.1 to 1.5 at increments of 0.1. In terms of the exponent of φ (i.e., the common odds ratio of the table), these are equivalent to 1, 1.01, 1.05, and 1.11 to 4.48, respectively. In practice, the common log-odds ratio may be negative; however, for the purposes of exploring the behaviour of Newton's unidimensional method and the four non-iterative procedures (the LogNI, first-order, second-order, and BD estimates) against the true value of φ, we restricted our attention to positive values. A preliminary investigation revealed that randomly generated contingency tables with a negative φ behaved in a similar manner to tables generated with a positive value.
Tables 1, 2, 3, 4, and 5 provide a summary of the mean estimate and standard error (in parentheses) of φ from the 200 randomly generated contingency tables. Table 1 summarises the results obtained using Newton's method, and Table 2 provides a summary of the accuracy of (7) for estimating φ. Table 3 summarises the results obtained by considering the first-order estimate of (10), while Tables 4 and 5 summarise the output obtained when using the second-order estimate and (12), respectively. We also took into consideration the number of iterations needed for Newton's method to converge to four decimal places. Thus, Table 6 summarises the median number of iterations needed for convergence to occur. Also summarised in this table is the range (maximum and minimum) of the number of iterations observed for each set of 200 simulated tables.
Table 1 and Figure 1 show that Newton's method is very accurate for estimating φ. The Wald statistic of the mean parameter estimate indicates that the procedure gives estimates of φ that are statistically indistinguishable from the true value for all sizes of contingency table and all values of φ. However, the number of iterations needed for Newton's method to estimate the parameter varies greatly depending on the size of the contingency table and the magnitude of φ (Table 6). Contingency tables of small dimension and small φ required no more than about 10 iterations for convergence to four decimal places. Due to the accuracy of Newton's method, the legends attached to Figure 1 highlighting the varying table sizes have been omitted.
Table 6 shows that, for contingency tables of all dimensions considered, if φ is larger than 0.8, then the number of iterations required for convergence was nearly always larger than 10, with a median number of iterations ranging from 20 (for a 2 × 2 contingency table) to 9769 (for a 5 × 5 table). For the 200 randomly generated 5 × 5 contingency tables, the number of iterations until convergence was achieved ranged from an economical 9 iterations to nearly 35000.
Table 2 and Figure 2 show that, of the four non-iterative procedures, using (7) to estimate φ came closest to matching the performance of Newton's method. There were 15 cases out of 180 where the mean estimate of φ obtained using (7) was different from the true φ at the 1% level of significance. Ten of these 15 cases occurred when estimating φ in a contingency table of dimension 4 × 5 or 5 × 5. Due to the accuracy of the LogNI estimate, the legends attached to Figure 2 highlighting the varying table sizes have been omitted.
Tables 3, 4, and 5 and the corresponding figures show that the first-order, second-order, and BD estimators are stable and accurate for relatively small contingency tables where φ is also small (not exceeding approximately 0.5). However, they performed poorly for relatively large tables with a large parameter value, continually underestimating the parameter. Figures 3, 4, and 5 graphically verify these results. Using the first-order estimate resulted in 30 cases (out of 180) where the mean estimate of φ was different from the true value (at the 1% level of significance), with 23 of these 30 cases occurring when estimating φ in a contingency table of dimension 3 × 5, 4 × 4, 4 × 5, or 5 × 5. Using the second-order estimate resulted in 35 cases where the mean estimate of φ was statistically different from the true value, with 27 of these 35 cases occurring for tables of dimension 3 × 5, 4 × 4, 4 × 5, or 5 × 5. Using the BD estimate resulted in 43 cases where the mean estimate of φ was statistically different from the true value, with 31 of these 43 cases occurring for tables of dimension 3 × 5, 4 × 4, 4 × 5, or 5 × 5.
Two hundred (200) contingency tables of dimension 4 × 5 were randomly generated (using the algorithm described in Section 4.1) with a fixed value of φ and a sample size of approximately 1000. For these tables, Figure 6 illustrates the stability of each of the five estimation procedures and summarises the p value obtained from the Wald test of the Newton, LogNI, first-order, second-order, and BD estimates when compared with the true value of φ. All five methods provide reliable and accurate estimates of the parameter and, for all the tables generated, are statistically indistinguishable from the true value of the parameter. These figures show that Newton's method (Figure 6(a)) provides the most reliable estimate, since the p value calculated from each of the 200 tables is consistently near 1.0. Of the non-iterative procedures, Figure 6(b) shows that the LogNI estimate is nearly as stable as Newton's method, although slightly more variable.
The three remaining non-iterative procedures also behave well, but provide poorer estimates of φ than Newton's method or the LogNI estimate. Figures 6(c) and 6(d) summarise the behaviour of the first-order and second-order estimates: the p value in Figure 6(d) is centred around 0.7, while that in Figure 6(c) is centred around 0.9. This difference is to be expected, since one of these estimates is a first-order approximation of the estimate (7) while the other is its second-order approximation. For the 200 4 × 5 tables, these two non-iterative estimation procedures provide more accurate estimates of φ than the BD estimate; Figure 6(e) shows that the p value from the Wald test of the BD estimate is centred around 0.5.
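The Wald test p values reported in Figure 6 can be computed as sketched below; the two-sided form of the test is an assumption, since the article does not reproduce the test statistic explicitly, and the function name is hypothetical.

```python
import math

def wald_p_value(phi_hat, phi_true, se):
    """Two-sided p value of the Wald test of H0: phi = phi_true.
    The standard normal CDF is evaluated via math.erf. A p value
    near 1 indicates an estimate statistically indistinguishable
    from phi_true; a small p value indicates a significant difference."""
    z = (phi_hat - phi_true) / se
    # 2 * (1 - Phi(|z|)), with Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

For example, an estimate exactly equal to the true value gives a p value of 1, while an estimate two standard errors away gives a p value of about 0.046.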
5. Discussion

Iterative procedures can be computationally very intensive. Certainly, for contingency tables that are of large dimension, or where φ is large (but unknown), the number of iterations needed for convergence can be excessively large. This is not necessarily a major issue for those who are experienced in statistical programming. However, for those researchers who are not, a simple direct estimation procedure can prove advantageous. This paper has demonstrated the benefit of considering alternative, iteration-free, procedures for estimating the linear-by-linear parameter of an ordinal log-linear model and its related association model.
For small contingency tables (in terms of dimension and φ), any of the non-iterative procedures considered in this paper will produce estimates that are statistically similar to the true value. For large contingency tables with moderately small values of φ (less than 0.4), any of the non-iterative procedures will work extremely well without the need to program and execute iterative procedures.
Of the four non-iterative procedures considered in this paper, (7) provides more accurate and reliable results than its counterparts. In fact, there were only a few cases where the estimate it produced was statistically different (according to the Wald test) from the true parameter value. However, an inspection of Table 2 reveals that these differences appear negligible and that such a result can be attributed to the very small standard error of the non-iterative estimate. Certainly, Figure 2 shows the accurate performance of the LogNI estimate.
The enhanced accuracy of the LogNI estimate, when compared with the remaining non-iterative estimates, is not unexpected, since it is easily obtained by rearranging (4) and aggregating across i and j. Since the first-order, second-order, and BD estimates are calculated using functional approximations of the logarithm, the LogNI procedure will always produce more accurate estimates. Such a conclusion is borne out by the results in Section 4.
One can also compare the relative accuracy of the non-iterative methods with one another. Since the right-hand side of (11) is less than its left-hand side, the BD estimate will lie below the LogNI estimate for all contingency tables. This is indeed the case when comparing the results summarised in Tables 2 and 5. This inequality becomes more severe for contingency tables of relatively large dimension. For similar reasons, the first-order estimate is less accurate than the second-order estimate, and this is clear when comparing Table 3 with Table 4. However, such a consideration is not a trivial exercise, since it allows one to better understand how many terms from the approximation of the logarithm function will yield an accurate and reliable estimate of φ.
Beh and Davy [8], who originally considered the accuracy of non-iterative estimation procedures for ordinal log-linear models, showed from a practical perspective that (12) will produce accurate estimates of the true parameter. The more extensive study by Beh and Farver [9] confirmed these findings. However, the approximation (11) is valid only when its argument lies within the region of validity of the expansion. If only the positive values of this ratio are considered, (12) will provide reasonably good estimates of φ only when this condition holds for all i and j. Therefore, such a result suggests that the BD procedure will produce better estimates of φ for relatively small contingency tables than for larger tables.
The computations performed here have focused on generating contingency tables with a sample size of approximately 1000. Further investigations need to be undertaken to study the behaviour of the non-iterative estimation procedures for varying sample sizes. Preliminary investigations reveal that, for contingency tables with a sample size exceeding about 200, the non-iterative estimates are similar to the estimate obtained using Newton's unidimensional method and to the true parameter. Further work is also needed on the application of such non-iterative procedures to other types of association models for ordinal contingency tables.
- S. J. Haberman, “Log linear models for frequency tables with ordered classifications,” Biometrics, vol. 30, no. 4, pp. 589–600, 1974.
- L. A. Goodman, “Simple models of the analysis of association in cross-classifications having ordered categories,” Journal of the American Statistical Association, vol. 74, pp. 537–552, 1979.
- L. A. Goodman, “The analysis of cross-classified data having ordered and/or unordered categories: association models, correlation models, and asymmetry models for contingency tables with or without missing entries,” The Annals of Statistics, vol. 13, pp. 10–69, 1985.
- A. Agresti, Analysis of Ordinal Categorical Data, Wiley, New York, NY, USA, 1984.
- Z. Gilula and S. J. Haberman, “Canonical analysis of contingency tables by maximum likelihood,” Journal of the American Statistical Association, vol. 81, pp. 780–788, 1986.
- F. Galindo-Garre and J. K. Vermunt, “The order-restricted association model: two estimation algorithms and issues in testing,” Psychometrika, vol. 69, no. 4, pp. 641–654, 2004.
- M. L. Aït-Sidi-Allal, A. Baccini, and A. M. Mondot, “A new algorithm for estimating the parameters and their asymptotic covariance in correlation and association models,” Computational Statistics and Data Analysis, vol. 45, no. 3, pp. 389–421, 2004.
- E. J. Beh and P. J. Davy, “A non-iterative alternative to ordinal log-linear models,” Journal of Applied Mathematics and Decision Sciences, vol. 8, no. 2, pp. 67–86, 2004.
- E. J. Beh and T. B. Farver, “An evaluation of non-iterative methods for estimating the linear-by-linear parameter of ordinal log-linear models,” The Australian and New Zealand Journal of Statistics, vol. 51, no. 3, pp. 335–352, 2009.
- J. A. Nelder and R. W. M. Wedderburn, “Generalized linear models,” Journal of the Royal Statistical Society A, vol. 135, pp. 370–384, 1972.
- A. Agresti, Categorical Data Analysis, Wiley, Hoboken, NJ, USA, 2nd edition, 2002.
- V. N. Nair, “Testing in industrial experiments with ordered categorical data,” Technometrics, vol. 28, no. 4, pp. 283–291, 1986.
- S. Nishisato and P. S. Arri, “Nonlinear programming approach to optimal scaling of partially ordered categories,” Psychometrika, vol. 40, no. 4, pp. 525–548, 1975.