Journal of Probability and Statistics
Volume 2013 (2013), Article ID 753930, 7 pages
Nonlinear Survival Regression Using Artificial Neural Network
1Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences (USWRS), Tehran 1985713834, Iran
2Department of Biostatistics, Faculty of Paramedical Sciences, Shahid Beheshti University of Medical Sciences, Tehran 1971653313, Iran
3Hospital Management Research Center, Tehran University of Medical Sciences (TUMS), Tehran 1996713883, Iran
Received 9 May 2012; Revised 21 November 2012; Accepted 23 November 2012
Academic Editor: Shein-Chung Chow
Copyright © 2013 Akbar Biglarian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Survival analysis methods deal with waiting-time data, that is, the time until an event occurs. One common method for analyzing this sort of data is Cox regression. Sometimes, however, the underlying assumptions of the model do not hold, such as proportionality of hazards for the Cox model. In model building, choosing an appropriate model depends on the complexity and characteristics of the data, which affect the appropriateness of the model. One strategy now used frequently is the artificial neural network (ANN) model, which requires minimal assumptions. This study aimed to compare the predictions of ANN and Cox models on simulated data sets, with average censoring rates from 20% to 80%, in both simple and complex models. All simulations and comparisons were performed in R 2.14.1.
Many different parametric, nonparametric, and semiparametric regression methods are increasingly examined to explore the relationship between a response variable and a set of covariates. The choice of an appropriate method for modeling depends on the methodology of the survey and the nature of the outcome and explanatory variables.
A common question in medical research is whether a set of covariates is correlated with survival or failure times. Two major characteristics of survival data are censoring and violation of the normality assumption required for ordinary least squares multiple regression. These two characteristics of the time variable are the reasons that straightforward multiple regression techniques cannot be used. Various parametric and semiparametric survival regression models have been introduced that model the survival or hazard function. Parametric models, for instance the exponential or Weibull, predict the survival function, while accelerated failure time models are parametric regression methods with the logarithm of failure time as the dependent variable [1, 2].
Choosing an appropriate model for the analysis of survival data depends on certain conditions called the underlying assumptions of the model. Sometimes these assumptions do not hold, for example: (a) lack of independence between consecutive waiting times to the occurrence of an event, or nonproportionality of hazards in semiparametric models; (b) lack of independence of the censoring, or misspecification of the distribution of failure times, in the case of parametric models [1–3].
Although the Cox regression model is an efficient strategy for analyzing survival data, when its assumptions fail, assumption-free methods may be more suitable.
Artificial neural network (ANN) models, which are completely nonparametric, have been used increasingly in different areas of sciences. Although analyzing the data using ANN methodology is usually more complex than traditional approaches, ANN models are more flexible and efficient when our main aim is prediction or classification of an outcome using different explanatory variables [4–17].
Note that when several covariates and complex interactions are of concern, ANN is the best method; otherwise, provided the model assumptions hold, simple regression models can be used appropriately.
In this study, simulated data sets with different rates of censoring were used to predict the outcome using ANN and traditional Cox regression models, and then the results of predictions were compared.
2.1. Cox Regression Model
Suppose that T denotes a continuous nonnegative random variable describing the failure time of an event (i.e., time-to-event) in a system. The probability density function of T, that is, of the actual survival time, is f(t). The survival function, S(t) = P(T > t), is the probability that the failure occurs later than time t. The related hazard function, h(t) = f(t)/S(t), denotes the probability density of an event occurring around time t, given that it has not occurred prior to time t.
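As an illustrative sketch (in Python rather than the R used in the paper), the survival and hazard functions of an exponential failure-time distribution with rate lam can be computed directly from the definitions above; the function names here are our own:

```python
import math

def exp_survival(t, lam):
    """S(t) = P(T > t) for an exponential failure time with rate lam."""
    return math.exp(-lam * t)

def exp_hazard(t, lam):
    """h(t) = f(t) / S(t); constant for the exponential distribution."""
    f = lam * math.exp(-lam * t)  # density f(t)
    return f / exp_survival(t, lam)

# The hazard of an Exp(2.0) failure time is constant and equal to 2.0.
print(exp_hazard(0.5, 2.0))
```

The constant hazard is what makes the exponential distribution the natural building block for the simulation schemes used later in the paper.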
As we know, an inherent characteristic of survival data is censoring. Right-censored data, the commonest form of censoring, occur when survival times are greater than some defined time point [1, 2]. The data generated for this study contain right-censored observations.
The proportional hazards model, also called Cox regression, is a popular method for the analysis of survival data. The model is presented as

h(t | x) = h_0(t) exp(β′x),  (1)

where β is the vector of regression coefficients, x is a vector of covariates, and h_0(t) is the baseline hazard, which is left unspecified as a function of time. The likelihood and partial likelihood functions for right-censored survival data (t_i, δ_i, x_i), i = 1, …, n, are given by (2) and (3):

L(β) = ∏_{i=1}^{n} h(t_i | x_i)^{δ_i} S(t_i | x_i),  (2)

PL(β) = ∏_{i=1}^{n} [ exp(β′x_i) / Σ_{j ∈ R(t_i)} exp(β′x_j) ]^{δ_i},  (3)

where δ_i is the censoring status, with δ_i = 1 if the observation is complete and δ_i = 0 if it is censored, and R(t_i) is defined as the risk set at time t_i.
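The partial likelihood in (3) can be sketched for a single covariate as follows; this is an illustrative Python implementation (the paper's own computations were done in R), with function and variable names of our choosing:

```python
import math

def cox_log_partial_likelihood(times, events, x, beta):
    """Log partial likelihood for a one-covariate Cox model.
    times: observed times; events: 1 = event, 0 = censored;
    x: covariate values; beta: scalar coefficient."""
    ll = 0.0
    for i, (t_i, d_i) in enumerate(zip(times, events)):
        if d_i == 0:
            continue  # censored subjects contribute only through risk sets
        # Risk set R(t_i): subjects still under observation at time t_i
        risk = [j for j, t_j in enumerate(times) if t_j >= t_i]
        ll += beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return ll

# Three subjects, one covariate; at beta = 0 each event term is -log|R(t_i)|,
# giving -log(3) - log(2) here.
print(cox_log_partial_likelihood([1.0, 2.0, 3.0], [1, 1, 0], [0.0, 1.0, 0.0], 0.0))
```

Maximizing this function over beta (e.g., by Newton-Raphson, as described below) yields the Cox estimate.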
To fit the Cox regression model and estimate β, the partial likelihood in (3) is maximized by the Newton-Raphson method, implemented as iteratively reweighted least squares. In the high-dimensional case, however, this approach cannot be used to estimate β; the estimate is not even unique.
2.2. Neural Networks Model
An ANN consists of several layers, each an interconnected group of artificial neurons. In addition, each connection between neurons has a weight that indicates the amount of influence the neurons exert on each other. Usually an ANN model has three layers, called the input, hidden (middle), and output layers. The input layer contains the predictors. The hidden layer contains unobservable nodes and applies a nonlinear transformation to a linear combination of the input layer; the value of each hidden node is a function of the predictors. The output layer contains the outcome, which is some function of the hidden units. In the hidden and output layers, the exact form of the function depends on the network type and the user's definition (based on the response variable).
There are different learning methods for NNs. For example, in the multilayer perceptron (MLP), which is the most commonly used, learning is performed by minimizing the mean square error of the output via the back-propagation algorithm [16, 19, 20].
In this paper, we use the sigmoid function, φ(z) = 1/(1 + e^{−z}), as the activation (transfer) function in both the hidden and the output layers. The kth hidden response for the predictor values is a nonlinear function

h_k(x_i) = φ(x_i′ w_k),  (4)

where x_i is the ith row of the input data matrix X, h_k is a nonlinear function of a linear combination of the input data, v = (v_1, …, v_H)′ is the vector of weights from the hidden to the output units, and W = (w_1, …, w_H) is the matrix of weights from the input to the hidden units. Together, the terms in (4) yield the MLP model

y_i = φ( Σ_{k=1}^{H} v_k h_k(x_i) ).  (5)

With the sigmoid activation function, (5) can be written as the nonlinear regression

y_i = φ( Σ_{k=1}^{H} v_k φ(x_i′ w_k) ) + ε_i,  (6)

where v and W are unknown parameter vectors, x_i is a vector of known constants, and the ε_i are residuals. The parameters (weights) can be estimated by optimizing some criterion function, such as maximizing the log-likelihood or minimizing the sum of squared errors.
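The MLP forward pass described above, sigmoid hidden units followed by a sigmoid output, can be sketched in a few lines. This is an illustrative Python version (the paper used R), with names of our own choosing:

```python
import math

def sigmoid(z):
    """Sigmoid activation: 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, W, v, b_hidden, b_out):
    """One forward pass of a single-hidden-layer MLP with sigmoid
    activations in both the hidden and output layers.
    x: input vector; W: hidden weights (one row per hidden node);
    v: hidden-to-output weights; b_hidden, b_out: bias terms."""
    hidden = [sigmoid(sum(w_jk * x_j for w_jk, x_j in zip(row, x)) + b)
              for row, b in zip(W, b_hidden)]
    return sigmoid(sum(v_k * h_k for v_k, h_k in zip(v, hidden)) + b_out)
```

Back-propagation adjusts W and v to reduce the output error; with all weights and biases at zero, the output is sigmoid(0) = 0.5 regardless of the input.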
A serious problem in the MLP framework is overfitting. To control overfitting, a penalty term is usually added to the optimization criterion. With this, the penalized least squares criterion for parameter estimation is given by

PLS = Σ_{i=1}^{n} (y_i − ŷ_i)² + λ J(v, W),  (7)

where the penalty term is J(v, W) = Σ_k v_k² + Σ_{j,k} w_{jk}².
In the likelihood schema, which is often used in shrinkage methods, an adaptation of (7) is [8, 21]

−log L(v, W) + λ J(v, W).  (8)

The penalty weight λ regulates the balance between over- and underfitting. A good value of λ lies between 0.001 and 0.1 and is chosen by cross-validation [1, 8]. In this paper, we use (8) to obtain the parameter estimates. Note that, for an outcome (the response variable) with two classes, y_i ∈ {0, 1}, where π_i is the probability of the event for the ith patient, the error function is the cross-entropy error

E = −Σ_{i=1}^{n} [ y_i log π_i + (1 − y_i) log(1 − π_i) ].  (9)

An ANN can be modeled as a generalized linear model with nonlinear predictors [8–11]. Biganzoli et al. [10] introduced a method called the partial logistic ANN (PLANN), and Lisboa [22] developed it to fit smooth estimates of the discrete-time hazard within this structure. It is similar to the MLP, with an additional covariate, namely time, as an input, and is given by

h(x_i, t_k) = φ( b_0 + Σ_{h=1}^{H} v_h φ( b_h + Σ_{j=1}^{J} w_{jh} x_{ij} + w_{(J+1)h} t_k ) ),  (10)

where J and H denote the number of input and hidden nodes, respectively, and b_h and b_0 denote the bias terms in the hidden and output layers, respectively. After estimation of the network weights, a single output node estimates the conditional failure probability from the connections with the hidden units, and the survivorship is calculated from the estimated discrete-time hazard by multiplying the conditional survival probabilities over the time intervals. The error statistic can then be obtained as

E = −Σ_i Σ_k [ d_{ik} log h(x_i, t_k) + (1 − d_{ik}) log(1 − h(x_i, t_k)) ],  (11)

where d_{ik} is the censoring (event) indicator function.
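The multiplication of conditional survival probabilities over time intervals described above can be sketched directly; an illustrative Python version (the paper used R), with a function name of our own choosing:

```python
def survivorship_from_hazards(hazards):
    """Survival probability at the end of each discrete interval,
    obtained by multiplying the conditional survival (1 - h_k)
    across the intervals, as in the partial logistic ANN setup."""
    surv, s = [], 1.0
    for h in hazards:
        s *= (1.0 - h)
        surv.append(s)
    return surv

# Example: a constant discrete hazard of 0.1 per interval
print(survivorship_from_hazards([0.1, 0.1, 0.1]))
```

Each entry is the product of the conditional survival probabilities up to that interval, so the sequence is nonincreasing.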
2.3. Model Fitting
The ultimate goal of the learning process is for the network to minimize the error. In the training step, to fit the model with a fixed number of hidden nodes, we use the penalized likelihood in (8). By using this, we improve the convergence of the optimization and also control the overfitting problem [1, 8, 9, 16, 21].
To identify the number of hidden nodes and then select the model, the Bayesian Information Criterion (BIC) and the Network Information Criterion (NIC) [8, 23, 24], the latter a generalization of the Akaike Information Criterion (AIC), are calculated. For instance,

BIC = −2 log L + p log(n),

where p is the number of estimated parameters and n is the number of observations in the training set. The best model is the one with the smallest value of these criteria. In addition, to assess prediction accuracy in the validation (testing) group, we calculated classification accuracy and the mean square error (MSE).
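The BIC trade-off between fit and model size is simple to compute; a minimal Python sketch (names are our own) with hypothetical log-likelihoods for two candidate networks on a 700-observation training set:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: -2 log L + p log(n).
    Smaller values indicate a better fit/complexity trade-off."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical example: a small network with slightly worse fit can still
# beat a larger one once the p*log(n) complexity penalty is applied.
small = bic(-350.0, 10, 700)
large = bic(-348.0, 25, 700)
print(small < large)
```

The log(n) factor penalizes extra hidden nodes more heavily than AIC's constant factor of 2, which is why BIC tends to select smaller networks.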
The best model is selected as the one with the smallest MSE. The models considered 2, 3, 4, 5, 10, 15, and 20 hidden nodes. The weight decay was set to 0.012, chosen on the basis of empirical study.
Finally, to compare the Cox and ANN predictions, classification accuracy and concordance indexes were calculated. All simulations and comparisons were performed in R 2.14.1.
To compare the accuracy of the predictions of ANN and Cox regression, four different simulation schemes based on Monte Carlo simulation were used. In each scheme, the hazard at any time was considered to have exponential form (Table 1). For each scheme, 1,000 independent random observations were generated, and then, based on the relationship between the exponential parameter and the independent variables, survival times were generated. Afterward, the survival times were right-censored: if a generated time exceeded the chosen quantile of the exponential distribution with its parameter, it was treated as censored. This process was repeated 100 times. To assess the accuracy of predictions, each sample was randomly divided into two parts: the first part, the training group, consisted of 700 observations, and the remaining 300 observations were allocated to the second, testing, group. Furthermore, in all simulations, the average censoring rates were set to 20%, 30%, 40%, 50%, 60%, 70%, and 80%. In addition, the models were considered with the main effects only (simple models) and with interaction terms (complex models).
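The censoring mechanism described above, marking a generated time as censored when it exceeds a quantile of its own exponential distribution, can be sketched as follows. This is an illustrative Python version under our own simplifying assumptions (one binary covariate, rate lam = exp(beta * x)); the paper's actual schemes use more covariates and interactions:

```python
import math
import random

def simulate_censored_exponential(n, beta, event_quantile, seed=0):
    """Generate right-censored exponential survival times.
    Each subject's hazard depends on a binary covariate x through the
    rate lam = exp(beta * x). A time exceeding the event_quantile of
    its own Exp(lam) distribution is censored (delta = 0) there, so
    the expected censoring rate is 1 - event_quantile."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.randint(0, 1)                            # binary covariate
        lam = math.exp(beta * x)                         # exponential rate
        t = rng.expovariate(lam)                         # latent failure time
        cutoff = -math.log(1.0 - event_quantile) / lam   # Exp(lam) quantile
        if t > cutoff:
            data.append((cutoff, 0, x))                  # right censored
        else:
            data.append((t, 1, x))                       # event observed
    return data

# event_quantile = 0.8 gives an expected censoring rate of about 20%.
sample = simulate_censored_exponential(1000, 0.5, 0.8)
```

Varying event_quantile reproduces the 20%–80% average censoring rates considered in the study.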
In simulations 1 and 2, two covariates were used, generated randomly from binomial and standard normal distributions; the models in these simulations included no interaction term and one interaction term, respectively. In simulation 3, three covariates were used, generated randomly from binomial and standard normal distributions; the model in this simulation had two interaction terms. In simulation 4, four covariates were used, generated randomly from binomial and standard normal distributions; the model in this simulation is complex and consists of two-, three-, and four-way interaction terms (Table 1).
Model selection was based on BIC for the learning set and on the SSE criterion for the testing subset as verification. The results in Table 2 show that the simple model performs well with fewer hidden nodes, while the complex model performs better with more hidden nodes. The MSE values confirm these results (Table 2).
In the next step, to compare the ANN and Cox regression predictions, concordance indexes were calculated from the classification accuracy table of the testing subset. The concordance index has been proposed as a generalization of the area under the receiver operating characteristic curve for censored data [27, 28]. It is the proportion of cases that are classified correctly in the noncensored (event) and censored groups, with values from 0 to 1 indicating the accuracy of the models. The concordance indexes of the ANN and Cox regression models are reported in Table 3. The results of the simulation study showed that, in the simpler models, there was no difference between the predictions of the Cox regression and NN models, but NN predictions were better than Cox regression predictions in the complex models with high rates of censoring.
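The pairwise logic behind Harrell's concordance index can be sketched as follows; an illustrative Python version (the paper used R), with names of our own choosing:

```python
def concordance_index(times, events, scores):
    """Harrell's concordance index for right-censored data.
    A pair (i, j) is usable when the subject with the shorter time had
    an event; it is concordant when that subject also has the higher
    predicted risk. scores: predicted risk (higher = earlier failure)."""
    usable = concordant = tied = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / usable

# Perfectly ordered risk scores give an index of 1.0.
print(concordance_index([1, 2, 3], [1, 1, 1], [3.0, 2.0, 1.0]))
```

A value of 0.5 corresponds to random ordering, which is why the index generalizes the area under the ROC curve to censored data.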
In this paper, we presented two approaches for modeling survival data with different degrees of censoring: Cox regression and neural network models. A Monte Carlo simulation study was performed to compare the predictive accuracy of the Cox and neural network models on simulated data sets.
In the simulation study, four different models were considered. The censoring rate in each of these models ranged from 20% up to 80%. These models were considered with main effects only and also with interaction terms, and their predictive ability was then evaluated. As seen, in simple models with fewer censored cases, there was little difference between the ANN and Cox regression predictions. It seems that for simpler models the level of censoring has no effect on the predictions, whereas predictions in more complex models do depend on the level of censoring. The results showed that the NN model provided better predictions for the more complex models, while for simpler models there was no difference in the results. This is consistent with the finding of Xiang's study [26]. Therefore, the NN model is proposed in two cases: (1) high censoring (i.e., a censoring rate of 60% or higher) and/or (2) complex models (i.e., with many covariates and interaction terms). This is a useful result for practical applications, which often involve many variables and many censored cases. For these reasons, in these two cases the ANN strategy can be used as an alternative to the traditional Cox model. Finally, we note that there are some flexible alternative methods, such as piecewise exponential and grouped-time models, which could also be applied to survival data and their performance compared with the ANN model.
The authors wish to express their special thanks to referees for their valuable comments.
- E. T. Lee and J. W. Wang, Statistical Methods for Survival Data Analysis, Wiley Series in Probability and Statistics, Wiley-Interscience, Hoboken, NJ, USA, 3rd edition, 2003.
- V. Lagani and I. Tsamardinos, “Structure-based variable selection for survival data,” Bioinformatics, vol. 26, no. 15, pp. 1887–1894, 2010.
- M. H. Kutner, C. J. Nachtsheim, and J. Neter, Applied Linear Regression Models, McGraw-Hill/Irwin, New York, NY, USA, 4th edition, 2004.
- W. G. Baxt and J. Skora, “Prospective validation of artificial neural networks trained to identify acute myocardial infarction,” Lancet, vol. 347, pp. 12–15, 1996.
- B. A. Mobley, E. Schecheer, and W. E. Moore, “Prediction of coronary artery stenosis by artificial networks,” Artificial Intelligence in Medicine, vol. 18, pp. 187–203, 2000.
- D. West and V. West, “Model selection for medical diagnostic decision support system: breast cancer detection case,” Artificial Intelligence in Medicine, vol. 20, pp. 183–204, 2000.
- F. Ambrogi, N. Lama, P. Boracchi, and E. Biganzoli, “Selection of artificial neural network models for survival analysis with genetic algorithms,” Computational Statistics & Data Analysis, vol. 52, no. 1, pp. 30–42, 2007.
- E. Biganzoli, P. Boracchi, L. Mariani, and E. Marubini, “Feed forward neural networks for the analysis of censored survival data a partial logistic regression approach,” Statistics in Medicine, vol. 17, pp. 1169–1186, 1998.
- E. Biganzoli, P. Boracchi, and E. Marubini, “A general framework for neural network models on censored survival data,” Neural Networks, vol. 15, pp. 209–218, 2002.
- E. Biganzoli and P. Boracchi, “The Partial Logistic Artificial Neural Network (PLANN): a tool for the flexible modelling of censored survival data,” in Proceedings of the European Conference on Emergent Aspects in Clinical Data Analysis (EACDA '05), 2005.
- E. Biganzoli, P. Boracchi, F. Ambrogi, and E. Marubini, “Artificial neural network for the joint modeling of discrete cause-specific hazards,” Artificial Intelligence in Medicine, vol. 37, pp. 119–130, 2006.
- R. Bittern, A. Cuschieri, S. D. Dolgobrodov, et al., “An artificial neural network for analysing the survival of patients with colorectal cancer,” in Proceedings of the European Symposium on Artificial Neural Networks (ESANN ’05), Bruges, Belgium, April 2005.
- K. U. Chen and C. J. Christian, “Using back-propagation neural network to forecast the production values of the machinery industry in Taiwan,” Journal of American Academy of Business, Cambridge, vol. 9, no. 1, pp. 183–190, 2006.
- C. L. Chia, W. Nick Street, and H. W. William, “Application of artificial neural network-based survival analysis on two breast cancer datasets,” in Proceedings of the American Medical Informatics Association Annual Symposium (AMIA '07), pp. 130–134, Chicago, Ill, USA, November 2007.
- A. Eleuteri, R. Tagliaferri, and L. Milano, “A novel neural network-based survival analysis model,” Neural Networks, vol. 16, pp. 855–864, 2003.
- B. D. Ripley and R. M. Ripley, “Neural networks as statistical methods in survival analysis,” in Clinical Applications of Artificial Neural Networks, pp. 237–255, Cambridge University Press, Cambridge, UK, 2001.
- B. Warner and M. Manavendra, “Understanding neural networks as statistical tools,” Amstat, vol. 50, no. 4, pp. 284–293, 1996.
- J. P. Klein and M. L. Moeschberger, Survival Analysis: Techniques for Censored and Truncated Data, Springer, New York, NY, USA, 2nd edition, 2003.
- J. W. Kay and D. M. Titterington, Statistics and Neural Networks, Oxford University Press, Oxford, UK, 1999.
- E. P. Goss and G. S. Vozikis, “Improving health care organizational management through neural network learning,” Health Care Management Science, vol. 5, pp. 221–227, 2002.
- R. M. Ripley, A. L. Harris, and L. Tarassenko, “Non-linear survival analysis using neural networks,” Statistics in Medicine, vol. 23, pp. 825–842, 2004.
- P. J. G. Lisboa, H. Wong, P. Harris, and R. Swindell, “A Bayesian neural network approach for modeling censored data with an application to prognosis after surgery for breast cancer,” Artificial Intelligence in Medicine, vol. 28, no. 1, pp. 1–25, 2003.
- C. M. Bishop, Neural Networks for Pattern Recognition, The Clarendon Press Oxford University Press, New York, NY, USA, 1995.
- B. D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge, UK, 1996.
- N. Fallah, G. U. Hong, K. Mohammad, et al., “Nonlinear Poisson regression using neural networks: a simulation study,” Neural Computing and Applications, vol. 18, no. 8, pp. 939–943, 2009.
- A. Xiang, P. Lapuerta, A. Ryutov, et al., “Comparison of the performance of neural network methods and Cox regression for censored survival data,” Computational Statistics & Data Analysis, vol. 34, no. 2, pp. 243–257, 2000.
- F. E. Harrell, R. M. Califf, D. B. Pryor, et al., “Evaluating the yield of medical tests,” The Journal of the American Medical Association, vol. 247, pp. 2543–2546, 1982.
- F. E. Harrell, K. L. Lee, R. M. Califf, et al., “Regression modeling strategies for improved prognostic prediction,” Statistics in Medicine, vol. 3, pp. 143–152, 1984.