
Computational and Mathematical Methods in Medicine

Volume 2012 (2012), Article ID 986176, 9 pages

http://dx.doi.org/10.1155/2012/986176

## Survival Data Analysis with Time-Dependent Covariates Using Generalized Additive Models

^{1}Department of Engineering Informatics, Osaka Electro-Communication University, Osaka 572-8530, Japan
^{2}Clinical Information Division, Data Science Center, EPS Corporation, Osaka 532-0003, Japan
^{3}Nishinomiya Municipal Central Hospital, Hyogo 663-8014, Japan

Received 16 July 2011; Accepted 11 January 2012

Academic Editor: Hugo Palmans

Copyright © 2012 Masaaki Tsujitani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We discuss a flexible method for modeling survival data using penalized smoothing splines when the values of covariates change over the duration of the study. The Cox proportional hazards model has been widely used for the analysis of treatment and prognostic effects with censored survival data. However, a number of theoretical problems with respect to the baseline survival function remain unsolved. We use generalized additive models (GAMs) with *B*-splines to estimate the survival function and select the optimum smoothing parameters based on a variant multifold cross-validation (CV) method. The methods are compared with the generalized cross-validation (GCV) method using data from a long-term study of patients with primary biliary cirrhosis (PBC).

#### 1. Introduction

Several prognostic models for PBC data have been developed using the Cox proportional hazards model, with the values of all covariates determined at the time the patient entered the study [1]. However, situations exist in which the values of covariates change over the duration of the study. The time-dependent model uses follow-up data to estimate the effect of the evolution of the covariates during the course of the disease; see, for example, Cox [2], Altman and de Stavola [3], and Collett [4].

Let $T$ be a continuous lifetime variable and $\mathbf{x}$ a vector of *time-fixed* covariates. Cox's proportional hazards model postulates that the hazard at time $t$ is the product of two components [5, 6]:
$$h(t \mid \mathbf{x}) = h_0(t)\exp(\boldsymbol{\beta}^{\top}\mathbf{x}), \tag{1}$$
where $\boldsymbol{\beta}$ is a vector of coefficients. The proportional hazards assumption is that the baseline hazard $h_0(t)$ is a function of $t$ alone and does not involve the values of the covariates, which are measured at the beginning of an interval to predict short-term survival.
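To make the proportional hazards property concrete, here is a minimal numeric sketch; the baseline hazard and coefficients below are invented for illustration, not estimates from the PBC data. The hazard ratio between two fixed covariate vectors does not depend on $t$:

```python
import numpy as np

def hazard(t, x, beta, h0):
    """Hazard h(t | x) = h0(t) * exp(beta' x) under the Cox model (1)."""
    return h0(t) * np.exp(beta @ x)

beta = np.array([0.8, -0.5])              # illustrative coefficients
h0 = lambda t: 0.1 * (1.0 + 0.2 * t)      # arbitrary baseline hazard
x1, x2 = np.array([1.0, 2.0]), np.array([0.0, 1.0])

# The hazard ratio exp(beta'(x1 - x2)) is the same at every t: proportionality.
for t in (0.5, 1.0, 5.0):
    ratio = hazard(t, x1, beta, h0) / hazard(t, x2, beta, h0)
    assert np.isclose(ratio, np.exp(beta @ (x1 - x2)))
```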

We investigate PBC data for 312 patients who were seen at the Mayo Clinic and were monitored for the duration of the study, as described in Murtaugh et al. [7] and Therneau and Grambsch [8]. The Cox proportional hazards model was developed based on the relationship between survival and the patient characteristics observed when the patient entered the study. The precision of time-fixed models used in PBC is rather low, partly because these models are based on data for which the covariates were measured at the time when the patient entered the study.

For the analysis of data with time-dependent covariates, however, the survivor function for any individual depends on time and the baseline hazard function. This means that the survivor function cannot be expressed as a power of the baseline survivor function and is generally difficult to obtain for any individual; see, for example, Kalbfleisch and Prentice [9] and Marubini and Valsecchi [10]. The Mayo updated model (e.g., [7]) and the European new version model (e.g., [3, 11, 12]) have been commonly used to improve the accuracy of survival predictions as a function of covariates measured at any time during the course of the disease. In the present article, we propose the variant multifold CV method for GAMs for choosing the optimum smoothing parameters, in order to estimate the survival function and predict short-term survival (say, over the following six months) at any time during the course of the disease.

Another useful idea in our analysis is the concept of competing risk. In the PBC data, liver transplantation is a competing risk, and it has conventionally been treated as censoring. By adding liver transplantation as a time-dependent covariate, one can test its significance.

#### 2. Model Building

By extending the Cox proportional hazards model (1), a flexible survival model
$$h(t \mid \mathbf{x}) = h_0(t)\exp\Big(\sum_{j} g_j(x_j)\Big) \tag{2}$$
has been examined, where $g_j$ is a spline function for the $j$-th covariate [13–15]. The proportional hazards model (2) uses the time-fixed values of covariates, as shown in Dickson et al. [1]. The hazard ratio estimates of the regressive relative survival model [16] with time-dependent covariates have been compared with those of the Cox proportional hazards model. A new approach [17, 18] has been proposed for the PBC data, aiming to capture nonlinear patterns of bilirubin time courses and their relationship with the survival time of patients. However, because most patients with PBC make repeated visits to the clinic, it is natural to ask for the optimum timing of liver transplantation by predicting short-term survival at any time in the course of the disease.

The time-dependent covariates $\mathbf{x}_i(t_{ik})$ are provided for patient $i$, where $t_{ik}$ is the midpoint of the time interval for the $k$-th clinic visit. The event times may be subject to the usual random censoring; then only the minimum of the survival and censoring times, together with the censoring indicator
$$\delta_i = \begin{cases} 1 & \text{if the survival time is observed (death)}, \\ 0 & \text{if the survival time is censored}, \end{cases} \tag{3}$$
is observed.

The relative hazard then depends on time $t$, and thus the proportional hazards assumption is no longer satisfied, as described in Altman and de Stavola [3] and Arjas [19].

For example, Table 1 shows the values of *age*, *prothrombin time*, and *bilirubin* as time-dependent covariates for patient #9, who died; for details, see Table 4 in Murtaugh et al. [7]. Patients were scheduled to return for further observations at six months, 12 months, and yearly thereafter; the 312 patients thus generate 1945 observations in total. The covariate values for each patient were allowed to vary with the time interval for the $k$-th clinic visit.

A grouped version of Cox's proportional hazards model with time-fixed covariates has been considered in the framework of discrete grouped data for the feed-forward neural network. Given continuous survival time, piecewise models arise from the partition of the time axis into disjoint intervals. Biganzoli et al. [20, 21] show that, by treating the time interval as an input variable in a feed-forward neural network, it is possible to estimate smoothed discrete hazards as conditional probabilities of failure. Biganzoli et al. [20] also pointed out that an advantage of this kind of data structure is the straightforward use of time-dependent covariates, since each subject is represented, for each observation interval, by one input vector, which can change across intervals. In order to apply this neural network approach, called the partial logistic regression model [20], a discretization of the continuous survival time with time-fixed covariates into, say, one-month or one-week intervals must be applied. We cannot determine which discretization, one-month or one-week intervals, must be applied; that is, the discretization is not unique. For the data in Table 1, however, the choice of discretization of the time axis for the partial logistic regression model is generally determined by clinical relevance, possibly according to the scheduled time intervals between follow-up visits.
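The interval-based data structure just described can be sketched in code: each patient's follow-up is expanded into one binary record per observation interval, so covariates may change across intervals. The field names and toy values below are hypothetical, not the PBC coding.

```python
def expand_person_period(patients):
    """Expand each patient's follow-up into one binary record per interval.

    patients: list of dicts with hypothetical fields 'id', 'died', and
    'visits' = list of (interval midpoint, covariate list) pairs.
    """
    rows = []
    for p in patients:
        for k, (midpoint, cov) in enumerate(p["visits"]):
            is_last = k == len(p["visits"]) - 1
            rows.append({
                "id": p["id"], "interval": k, "t": midpoint, "cov": cov,
                # y = 1 only in the interval in which the patient died
                "y": int(is_last and p["died"]),
            })
    return rows

patients = [
    {"id": 9, "died": True,  "visits": [(0.25, [3.2]), (0.75, [3.6]), (1.5, [5.1])]},
    {"id": 1, "died": False, "visits": [(0.25, [1.0]), (0.75, [1.1])]},
]
rows = expand_person_period(patients)  # 5 records; only the 3rd has y = 1
```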

The primary goal of the present study is to predict short-term survival in patients on the basis of measurements of several characteristics having time-dependent covariates, for the purpose of facilitating the decision as to when to undertake liver transplantation. Based on the partial logistic model due to Cox [22] and Efron [23] for grouped data, Tsujitani and Sakon [24] proposed a partial logistic model with a discrete hazard rate for ungrouped data having time-dependent covariates:
$$\operatorname{logit} h_i(t_{ik}) = \log\frac{h_i(t_{ik})}{1-h_i(t_{ik})} = \boldsymbol{\beta}^{\top}\mathbf{x}_i(t_{ik}), \tag{4}$$
where $\boldsymbol{\beta}$ is a vector of coefficients. The modeled response is the logit of the hazard rate, and the logit is linear in the covariates. However, this assumption is violated when covariate effects are best represented by smooth, nonlinear functions. In recent years, a variety of powerful techniques have been developed for exploring the functional form of effects. We examine here a flexible survival model, the GAM, that does not require linearity of the covariate function, extending the generalized linear model (GLM); see, for example, Hastie and Tibshirani [13] and McCullagh and Nelder [25]. By identifying nonlinear covariate effects, we can estimate a patient's prognosis more accurately and thus time a liver transplant based on the prediction of short-term survival.

The linear predictor in (4) is specified as a sum of smooth functions, with continuous second derivatives, of some or all of the covariates for the discrete hazard rate of patient $i$ at the time interval $t_{ik}$:
$$\operatorname{logit} h_i(t_{ik}) = \sum_{j} s_j\big(x_{ij}(t_{ik})\big). \tag{5}$$

The smooth functions in (5) can be represented as linear combinations of basis functions,
$$s_j(x) = \sum_{m=1}^{q_j} \beta_{jm} B_{jm}(x), \tag{6}$$
where the $q_j$ are the numbers of knots and the functions $B_{jm}(\cdot)$ are *B*-spline basis functions. Thus, (5) can be rewritten as a linear predictor in the basis coefficients $\beta_{jm}$, where *B*-splines with 10 interior knots will be used for each continuous covariate. The number of knots is arbitrary but appears to have little effect on the results, provided that the number is not too small, as described in Gray [15].
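As an illustration of the basis functions, the following sketch evaluates a clamped cubic *B*-spline basis with 10 interior knots via the standard Cox-de Boor recursion; the equally spaced knot placement is a simplification for illustration. Inside the domain the basis functions are nonnegative and sum to one.

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions at scalar x (Cox-de Boor recursion)."""
    t = np.asarray(knots, dtype=float)
    # degree-0 basis: indicator of the half-open knot span containing x
    B = np.array([1.0 if t[i] <= x < t[i + 1] else 0.0 for i in range(len(t) - 1)])
    for d in range(1, degree + 1):
        B_new = np.zeros(len(t) - d - 1)
        for i in range(len(t) - d - 1):
            left = 0.0 if t[i + d] == t[i] else (x - t[i]) / (t[i + d] - t[i]) * B[i]
            right = 0.0 if t[i + d + 1] == t[i + 1] else \
                (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[i + 1]
            B_new[i] = left + right
        B = B_new
    return B  # one value per basis function (len(knots) - degree - 1 of them)

# Clamped cubic basis with 10 equally spaced interior knots on [0, 1].
knots = np.concatenate([np.zeros(4), np.linspace(0.0, 1.0, 12)[1:-1], np.ones(4)])
basis = bspline_basis(0.37, knots)
assert np.isclose(basis.sum(), 1.0)   # partition of unity inside the domain
assert (basis >= 0).all()
```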

At the time interval $t_{ik}$ for the $k$-th clinic visit of patient $i$, we define the binary response
$$y_{ik} = \begin{cases} 1 & \text{if patient } i \text{ dies in the } k\text{-th time interval}, \\ 0 & \text{otherwise}, \end{cases}$$
where $H_{i,k-1}$ is the history of death and censoring over the time intervals for the first $k-1$ clinic visits of patient $i$, and $H_{ik}$ is the same history extended to include $y_{ik}$. Tsujitani and Sakon [24] derived the full log likelihood for all patients with partial log likelihood
$$\ell = \sum_{i}\sum_{k}\Big[\, y_{ik}\log h_i(t_{ik}) + (1-y_{ik})\log\{1-h_i(t_{ik})\}\Big]. \tag{12}$$
The unknown parameters in (9) can thus be estimated by maximizing the partial log likelihood (12), which is the log likelihood for independent Bernoulli trials. Although $\ell$ is not a log likelihood in the usual sense, it possesses the usual asymptotic properties under fairly broad conditions; see, for example, Andersen and Gill [26].
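A minimal sketch of the Bernoulli form of the partial log likelihood (12) over person-period records, with toy hazard values:

```python
import numpy as np

def partial_loglik(y, h):
    """Bernoulli partial log likelihood of (12) over person-period records:
    sum of y*log(h) + (1 - y)*log(1 - h)."""
    y = np.asarray(y, dtype=float)
    h = np.asarray(h, dtype=float)
    return float(np.sum(y * np.log(h) + (1.0 - y) * np.log(1.0 - h)))

# Toy example: one patient observed over three intervals, dying in the last.
y = [0, 0, 1]
h = [0.1, 0.2, 0.6]
ll = partial_loglik(y, h)   # log(0.9) + log(0.8) + log(0.6)
```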

To avoid overfitting, such models are estimated by penalized maximum likelihood,
$$\ell_p = \ell - \frac{1}{2}\sum_{j}\lambda_j \int \big\{s_j''(x)\big\}^2\,dx, \tag{13}$$
where the $\lambda_j$ are smoothing parameters that control the tradeoff between fit and smoothness, and $s_j''$ is the second derivative of $s_j$ with respect to $x$. The advantages of penalized estimation are highlighted in Wood ([27], Section 4.1).
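A sketch of the penalized criterion in (13). For simplicity, the integrated squared second derivative is replaced here by a squared second-order difference penalty on the spline coefficients (a P-spline-style approximation, not the paper's exact penalty matrix):

```python
import numpy as np

def penalized_loglik(loglik, coef_list, lambdas):
    """Log likelihood minus 0.5 * sum_j lambda_j * ||second differences of beta_j||^2."""
    penalty = 0.0
    for beta, lam in zip(coef_list, lambdas):
        d2 = np.diff(np.asarray(beta, dtype=float), n=2)  # discrete second derivative
        penalty += lam * float(d2 @ d2)
    return loglik - 0.5 * penalty

# A smooth (linear) coefficient sequence incurs no penalty; a wiggly one does.
smooth = np.linspace(0.0, 1.0, 8)
wiggly = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
assert np.isclose(penalized_loglik(-10.0, [smooth], [1.0]), -10.0)
assert penalized_loglik(-10.0, [wiggly], [1.0]) < -10.0
```

Larger `lambdas` push the fit toward linearity; smaller values allow more wiggle, which is exactly the tradeoff the smoothing parameters control.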

Two model-fitting issues remain. The first concerns the selection of the smoothing parameters in (13). The effort of careful smoothing parameter selection is outweighed by the easy identification of a covariate's functional form and the applicability of established inferential methods to short-term survival prediction. To select the smoothing parameters, the algorithm due to Wood [27–29] can be used, minimizing the GCV score as an approximation to leave-one-out CV. Note, however, that each patient contributes several observations: the dead patient #9, for example, generated seven observations, as shown in Table 1, because patients were scheduled to return for further observations at six months, 12 months, and yearly thereafter. In total, the patients generate 1945 observations.

We propose a natural extension of the $K$-fold CV algorithm: a "leave-one-out" CV based on patients rather than individual observations. Ordinary $K$-fold CV divides the data randomly into $K$ groups whose sizes are as nearly equal as possible; the partition should be made so as to avoid possible biases, as described in Zhang [30]. For time-dependent covariates, however, ordinary $K$-fold CV is unsatisfactory in several respects, because the several observations of one patient may be scattered across groups. Applying the interval-based data structure to the CV algorithm suggests how the partition should be done: set $K$ equal to the number of patients and delete, at each step, one patient together with all of that patient's observations. The variant $K$-fold CV is given as follows.

*Step 1. *Split the original sample into $n$ parts $D_1, \dots, D_n$, where $D_i$ consists of all observations of patient $i$.

*Step 2. *Fit the model to the data with the $i$-th subject (i.e., patient) deleted, and predict for the deleted $i$-th sample $D_i$.

*Step 3. *Do the above for $i = 1, \dots, n$ and combine the CV estimates.
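The three steps above can be sketched as a patient-wise splitter: each fold deletes one patient together with all of that patient's observations. The toy records are hypothetical (patient id, observation) pairs.

```python
def leave_one_patient_out(rows):
    """Yield (train, test) splits, one per patient id; the test set holds all
    observations of exactly one patient, mirroring Steps 1-3 above."""
    for pid in sorted({pid for pid, _ in rows}):
        train = [r for r in rows if r[0] != pid]
        test = [r for r in rows if r[0] == pid]
        yield train, test

# Hypothetical (patient id, observation) records: patient 9 has three visits,
# patient 1 has two.
rows = [(9, "a"), (9, "b"), (9, "c"), (1, "d"), (1, "e")]
splits = list(leave_one_patient_out(rows))   # two folds, one per patient
```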

A second issue is the goodness-of-fit test of the model. After choosing the optimum smoothing parameters via the variant $K$-fold CV algorithm, the deviance allows us to test the goodness of fit:
$$D = -2\big\{\hat{\ell}_{\text{current}} - \hat{\ell}_{\text{full}}\big\}, \tag{15}$$
where $\hat{\ell}_{\text{current}}$ denotes the maximized partial log likelihood under some current GAM, and the log likelihood $\hat{\ell}_{\text{full}}$ for the maximum (full) model is zero. The deviance given by (15) is, however, not even approximately $\chi^2$-distributed in the case in which ungrouped binary responses are available; see, for example, Landwehr et al. [31], Tsujitani and Koshimizu [32], and Collett [33]. The number of degrees of freedom required for the significance test using the assumed $\chi^2$ distribution for the deviance is a contentious issue; no adequate distribution theory exists for the deviance. The reason for this is somewhat technical; for details, see Section 3.8 in Collett [33]. Consequently, the deviance on fitting a model to binary response data cannot be used as a summary measure of the goodness of fit of the model.

Based on the above discussion, we employ bootstrapping to the deviance of (15) in order to obtain the goodness-of-fit test due to Efron and Tibshirani [34].

*Step 1. *Generate $B$ bootstrap samples from the original sample. Let $D^{*(b)}$ denote the $b$-th bootstrap sample.

*Step 2. *For the bootstrap sample $D^{*(b)}$, compute the deviance of (15).

This process is repeated independently $B$ times, and the computed values are arranged in ascending order.

*Step 3. *The value of the $j$-th order statistic of the $B$ replications can be taken as an estimate of the quantile of order $j/(B+1)$.

*Step 4. *The estimate of the $100(1-\alpha)$-th percentile (i.e., the $100\alpha\%$ critical point) is used to test the goodness of fit of a model at a specified significance level $\alpha$. A value of the deviance of (15) greater than the estimate of this percentile indicates that the model fits poorly.
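Steps 1-4 can be sketched as follows; `compute_deviance` is a hypothetical stand-in for refitting the model and evaluating the deviance (15) on each resample.

```python
import random

def bootstrap_percentile(sample, compute_deviance, B=200, alpha=0.05, seed=0):
    """Estimate the 100(1 - alpha)-th percentile of the bootstrapped deviance."""
    rng = random.Random(seed)
    devs = []
    for _ in range(B):
        boot = [rng.choice(sample) for _ in sample]   # resample with replacement
        devs.append(compute_deviance(boot))
    devs.sort()                                       # ascending order statistics
    j = int((1.0 - alpha) * (B + 1)) - 1              # order statistic of the quantile
    return devs[min(max(j, 0), B - 1)]

# Toy stand-in for refitting and evaluating (15): deviance = sum of the resample.
crit = bootstrap_percentile(list(range(10)), lambda s: float(sum(s)), B=199)
```

The observed deviance is then compared with `crit`; a larger value indicates poor fit at level `alpha`.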

#### 3. Results

The survival function for our discretized situation is
$$S_i(t_{ik}) = \prod_{l \le k}\big\{1 - h_i(t_{il})\big\}. \tag{16}$$
The conditional probability of survival over a short time interval (say, six months) after time $t_{ik}$ can be estimated as
$$\Pr\big(\text{survive to } t_{i,k+1} \mid \text{alive at } t_{ik}\big) = \frac{S_i(t_{i,k+1})}{S_i(t_{ik})}. \tag{17}$$
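A minimal sketch of the discretized survival function and the conditional short-term survival probability, assuming the product form $S(t_k) = \prod_{l \le k} (1 - h_l)$ over the interval hazards (the hazard values are toys):

```python
import math

def survival(hazards, k):
    """S(t_k): probability of surviving through the first k intervals."""
    s = 1.0
    for h in hazards[:k]:
        s *= 1.0 - h
    return s

def conditional_survival(hazards, k, m=1):
    """P(survive intervals k+1 .. k+m | alive after interval k)."""
    return survival(hazards, k + m) / survival(hazards, k)

hazards = [0.05, 0.10, 0.20, 0.40]   # toy discrete interval hazards
# Over one further interval, the conditional survival is just 1 - h_{k+1}.
assert math.isclose(conditional_survival(hazards, 1), 1.0 - 0.10)
```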

By using the variant $K$-fold CV and GCV, the optimum smoothing parameters for the GAM are determined as shown in Table 2. Table 3 summarizes the $p$ values for testing the nonparametric effects of the covariates for the model *s*(time) + *s*(age) + *s*(pro) + *s*(bili), with the optimum smoothing parameters chosen by the variant $K$-fold CV and by GCV. From Table 3, all covariates are highly significant under GCV; however, time is not significant under the variant $K$-fold CV. GCV is only an approximation to leave-one-out CV, whereas the variant $K$-fold CV is a leave-one-out CV based on patients, so the variant $K$-fold CV is preferable to GCV. For the purpose of comparison, the results obtained using GCV are included in Tables 2 and 3. Table 4 shows the results of a number of models that were fit to the data.

The likelihood ratio (LR) test based on the deviance can be conducted to test whether a spline effect provides a significantly better fit than a linear effect. Table 5 shows the tests of significance for spline effects based on the models in Table 4. It is clear from Table 5 that the spline effects of prothrombin time and bilirubin are strongly significant, whereas no spline provides a significantly better fit than a linear model for age. Thus, we accept the final model age + *s*(pro) + *s*(bili), with edf (effective degrees of freedom) 3.954 and 4.477 for *s*(pro) and *s*(bili), respectively. We used the likelihood ratio test instead of information criteria as a valid alternative approach for model selection; from Table 4, however, it is found that the same final model is selected by using AIC.
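The LR comparison can be sketched numerically: refer the reduction in deviance to a chi-squared distribution on the difference in degrees of freedom. The deviance values and d.f. below are invented for illustration (not Table 5's numbers), and the chi-squared survival function is implemented only for even integer d.f. to stay dependency-free.

```python
import math

def chi2_sf_even_df(x, df):
    """P(Chi2_df > x) for even integer df, via the closed-form Poisson sum."""
    assert df % 2 == 0 and df > 0
    k = df // 2
    return math.exp(-x / 2.0) * sum((x / 2.0) ** i / math.factorial(i) for i in range(k))

# Hypothetical deviances for a linear-age versus spline-age model (NOT Table 5's
# numbers): the reduction in deviance is referred to chi-squared on df_diff d.f.
dev_linear, dev_spline, df_diff = 120.0, 108.0, 4
p_value = chi2_sf_even_df(dev_linear - dev_spline, df_diff)
# p_value is about 0.017, so here the spline effect would be judged significant.
```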

Figure 1 shows the histogram of the bootstrapped deviance for the optimum model. The bootstrap estimate of the 95th percentile (i.e., the 5% critical point) is compared with the deviance of (15); the comparison suggests that the model fits the data.

Figure 2 shows the prediction of the probability of surviving beyond the next six months for patient #9, who died. For the purpose of comparison, the results obtained using the partial logistic regression, Mayo updated, and European new version models are also provided. Figure 2 indicates that the six-month survival probabilities predicted by the GAM are lower than those predicted by the other models. Because patient #9 died, the lower predicted probabilities are better. The conditional probability of survival over a short time interval (say, six months) after any time during the course of the disease can thus be predicted from the data collected for censored and dead patients.

For the graphical representation, the individual predicted survival probabilities are averaged in order to compare the Mayo updated model, the European new version model, the partial logistic regression model, and the GAM. We can predict the probability of survival over the following six months using the four models with respect to the dead data and the censored data of the 312 patients. For the dead group and the censored group, the probability of surviving over the next six months for the $k$-th clinic visit of patient $i$ in group $g$ is denoted by $S_{ik}^{(g)}$. The average probability of survival over the next six months for the $k$-th clinic visit in group $g$ can be estimated as
$$\bar{S}_{k}^{(g)} = \frac{1}{n_k^{(g)}}\sum_{i=1}^{n_k^{(g)}} S_{ik}^{(g)}, \tag{18}$$
where $n_k^{(g)}$ is the total number of patients for the $k$-th clinic visit in group $g$, and $S_{ik}^{(g)}$ is the survival function of patient $i$ at the time interval $t_{ik}$ in group $g$; see, for example, Markus et al. [35], Marubini and Valsecchi [10], and Thomsen et al. [36].

Figure 3 shows a comparison of the probability of survival over the next six months using the four models with respect to the dead and censored data among all 312 patients. The figure clarifies that:

(i) for the dead data, the six-month survival probabilities predicted by the GAM are lower than those predicted by the other models, and

(ii) for the censored data, the difference among the four models is very small.

Figure 4 also shows the box-and-whisker plots of the probability of survival over the next six months using the GAM with respect to the dead data among all 312 patients. It should be noted that the variance of the probabilities of survival over the next six months is much higher at the fourth clinic visit than at the other clinic visits. We now return to the competing risk of liver transplantation, which has so far been treated as censoring. By adding a time-dependent covariate for liver transplantation, one can test its significance. The covariate for liver transplantation is taken as a binary variable (coded 0 before liver transplantation, 1 at liver transplantation), as shown in Giorgi and Gouvernet [16] and Crowley and Hu [37]. Table 6 shows the three types of combinations of "censored" and "liver transplantation." Table 7 shows the values of the covariates for the liver-transplanted patient #5.

In order to test the significance of “liver transplantation,” we consider two models:

Model IV: age + *s*(pro) + *s*(bili).

Model IV′: age + *s*(pro) + *s*(bili) + liver transplantation.

The values of the deviance and d.f. are given in Table 8. The reduction in deviance, on 0.973 d.f., is significant at the 1% level.

For the purpose of comparison, the hazard of the cumulative incidence function (CIF) may be modeled in the presence of competing risks. The model is based on
$$h_s(t \mid \mathbf{x}) = h_{s0}(t)\exp(\boldsymbol{\beta}^{\top}\mathbf{x}), \tag{19}$$
where $t$ is the time of the last observation (not the midpoint of the time interval), $h_s$ is the hazard of the subdistribution, and $h_{s0}$ is the baseline hazard of the subdistribution ([38], Section 6.2). The $p$ values for testing the significance of the covariates using model (19) are summarized in Table 8. From Tables 3 and 8, there is little difference between our method and the CIF approach.

#### 4. Discussion

In this paper, we introduced the probabilistic interpretation of the GAM and constructed the maximum likelihood principle of the GAM for the analysis of survival data having time-dependent covariates. We proposed a criterion based on the variant $K$-fold CV for choosing the optimal smoothing parameters when applying the GAM. By introducing the maximum likelihood principle into the GAM, the deviance allows us to test the goodness of fit of the GAM. The proposed methods were illustrated by comparing the probability of survival over the next six months using the Mayo updated model, the European new version model, the partial logistic regression model, and the GAM with respect to the dead and censored PBC data. We expect that flexible methods for modeling survival data with time-dependent covariates using machine learning techniques such as the support vector machine will be very useful in such real-world contexts; see, for example, Hastie et al. [39]. Furthermore, the smoothing spline ANOVA models of Gu [40] will enable us to include interactions between the covariates.

We assume that there is only one cause of failure; that is, the event is allowed to occur only once for each patient. However, there is increasing interest in applying survival analysis to data sets with multiple events per patient [8, 41]. Wei et al. [42] analyzed bladder cancer data by modeling the marginal distributions of multivariate failure times with proportional hazards models. The model may violate the proportional hazards assumption, even when the overall data set does not (Table 9). By modifying the binary response so that the event may occur more than once for each patient, Equation (11) extends directly.

Thus, the ideas presented in this paper can be extended to the identification of prognostic factors relative to survival time in the case that the same event may recur during a follow-up study and the covariate values change with time.

#### References

1. E. R. Dickson, P. M. Grambsch, T. R. Fleming, L. D. Fisher, and A. Langworthy, "Prognosis in primary biliary cirrhosis: model for decision making," *Hepatology*, vol. 10, no. 1, pp. 1–7, 1989.
2. D. R. Cox, "Regression models and life tables (with discussion)," *Journal of the Royal Statistical Society: Series B*, vol. 34, pp. 187–220, 1972.
3. D. G. Altman and B. L. de Stavola, "Practical problems in fitting a proportional hazards model to data with updated measurements of the covariates," *Statistics in Medicine*, vol. 13, no. 4, pp. 301–341, 1994.
4. D. Collett, *Modelling Survival Data in Medical Research*, Chapman & Hall/CRC, London, UK, 2nd edition, 2003.
5. J. P. Klein and M. L. Moeschberger, *Survival Analysis*, Springer, New York, NY, USA, 2nd edition, 2003.
6. J. F. Lawless, *Statistical Models and Methods for Lifetime Data*, John Wiley, New York, NY, USA, 2nd edition, 2003.
7. P. A. Murtaugh, E. R. Dickson, G. M. van Dam et al., "Primary biliary cirrhosis: prediction of short-term survival based on repeated patient visits," *Hepatology*, vol. 20, pp. 126–134, 1994.
8. T. M. Therneau and P. M. Grambsch, *Modeling Survival Data: Extending the Cox Model*, Springer, New York, NY, USA, 2000.
9. J. D. Kalbfleisch and R. L. Prentice, *The Statistical Analysis of Failure Time Data*, John Wiley, New York, NY, USA, 2nd edition, 2002.
10. E. Marubini and M. G. Valsecchi, *Analysing Survival Data from Clinical Trials and Observational Studies*, John Wiley, New York, NY, USA, 1995.
11. E. Christensen, P. Schlichting, P. K. Andersen et al., "Updating prognosis and therapeutic effect evaluation in cirrhosis with Cox's multiple regression model for time-dependent variables," *Scandinavian Journal of Gastroenterology*, vol. 21, no. 163, pp. 174–186, 1986.
12. E. Christensen, D. G. Altman, J. Neuberger et al., "Updating prognosis in primary biliary cirrhosis using a time-dependent Cox regression model," *Gastroenterology*, vol. 105, pp. 1865–1876, 1993.
13. T. J. Hastie and R. J. Tibshirani, *Generalized Additive Models*, Chapman & Hall, London, UK, 1990.
14. L. A. Sleeper and D. P. Harrington, "Regression splines in the Cox model with application to covariate effects in liver disease," *Journal of the American Statistical Association*, vol. 85, pp. 941–949, 1990.
15. R. J. Gray, "Flexible methods for analyzing survival data using splines, with applications to breast cancer prognosis," *Journal of the American Statistical Association*, vol. 87, pp. 942–951, 1992.
16. R. Giorgi and J. Gouvernet, "Analysis of time-dependent covariates in a regressive relative survival model," *Statistics in Medicine*, vol. 24, no. 24, pp. 3863–3870, 2005.
17. J. Ding and J. L. Wang, "Modeling longitudinal data with nonparametric multiplicative random effects jointly with survival data," *Biometrics*, vol. 64, no. 2, pp. 546–556, 2008.
18. R. Schoop, E. Graf, and M. Schumacher, "Quantifying the predictive performance of prognostic models for censored survival data with time-dependent covariates," *Biometrics*, vol. 64, pp. 603–610, 2008.
19. E. Arjas, "A graphical method for assessing goodness of fit in Cox's proportional hazards model," *Journal of the American Statistical Association*, vol. 83, pp. 204–212, 1988.
20. E. Biganzoli, P. Boracchi, L. Mariani, and E. Marubini, "Feed forward neural networks for the analysis of censored survival data: a partial logistic regression approach," *Statistics in Medicine*, vol. 17, no. 10, pp. 1169–1186, 1998.
21. E. Biganzoli, P. Boracchi, and E. Marubini, "A general framework for neural network models on censored survival data," *Neural Networks*, vol. 15, no. 2, pp. 209–218, 2002.
22. D. R. Cox, "Partial likelihood," *Biometrika*, vol. 62, no. 2, pp. 269–276, 1975.
23. B. Efron, "Logistic regression, survival analysis, and the Kaplan-Meier curve," *Journal of the American Statistical Association*, vol. 83, pp. 414–425, 1988.
24. M. Tsujitani and M. Sakon, "Analysis of survival data having time-dependent covariates," *IEEE Transactions on Neural Networks*, vol. 20, no. 3, pp. 389–394, 2009.
25. P. McCullagh and J. A. Nelder, *Generalized Linear Models*, Chapman & Hall, London, UK, 2nd edition, 1989.
26. P. K. Andersen and R. D. Gill, "Cox's regression model for counting processes: a large sample study," *Annals of Statistics*, vol. 10, pp. 1100–1120, 1982.
27. S. N. Wood, *Generalized Additive Models: An Introduction with R*, Chapman & Hall, London, UK, 2006.
28. S. N. Wood, "Stable and efficient multiple smoothing parameter estimation for generalized additive models," *Journal of the American Statistical Association*, vol. 99, pp. 673–686, 2004.
29. S. N. Wood, "Fast stable direct fitting and smoothness selection for generalized additive models," *Journal of the Royal Statistical Society: Series B*, vol. 70, no. 3, pp. 495–518, 2008.
30. P. Zhang, "Model selection via multifold cross validation," *Annals of Statistics*, vol. 21, pp. 299–313, 1993.
31. J. M. Landwehr, D. Pregibon, and A. C. Shoemaker, "Graphical methods for assessing logistic regression models," *Journal of the American Statistical Association*, vol. 79, pp. 61–71, 1984.
32. M. Tsujitani and T. Koshimizu, "Neural discriminant analysis," *IEEE Transactions on Neural Networks*, vol. 11, no. 6, pp. 1394–1401, 2000.
33. D. Collett, *Modelling Binary Data*, Chapman & Hall/CRC, London, UK, 2nd edition, 2003.
34. B. Efron and R. J. Tibshirani, *An Introduction to the Bootstrap*, Chapman & Hall, London, UK, 1993.
35. B. H. Markus, E. R. Dickson, P. M. Grambsch et al., "Efficacy of liver transplantation in patients with primary biliary cirrhosis," *The New England Journal of Medicine*, vol. 320, no. 26, pp. 1709–1713, 1989.
36. B. L. Thomsen, N. Keiding, and D. G. Altman, "A note on the calculation of expected survival, illustrated by the survival of liver transplant patients," *Statistics in Medicine*, vol. 10, no. 5, pp. 733–738, 1991.
37. J. Crowley and M. Hu, "Covariance analysis of heart transplant survival data," *Journal of the American Statistical Association*, vol. 72, pp. 27–36, 1977.
38. M. Pintilie, *Competing Risks: A Practical Perspective*, John Wiley, New York, NY, USA, 2006.
39. T. J. Hastie, R. J. Tibshirani, and J. Friedman, *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*, Springer, New York, NY, USA, 2001.
40. C. Gu, *Smoothing Spline ANOVA Models*, Springer, New York, NY, USA, 2002.
41. E. T. Lee and J. W. Wang, *Statistical Methods for Survival Data Analysis*, John Wiley, New York, NY, USA, 3rd edition, 2003.
42. L. J. Wei, D. Y. Lin, and L. Weissfeld, "Regression analysis of multivariate incomplete failure time data by modeling marginal distributions," *Journal of the American Statistical Association*, vol. 84, pp. 1065–1073, 1989.