
High-Dimensional Cox Regression Analysis in Genetic Studies with Censored Survival Outcomes

Jinfeng Xu

Journal of Probability and Statistics, vol. 2012, Article ID 478680, 14 pages. https://doi.org/10.1155/2012/478680

Academic Editor: Yongzhao Shao
Received: 22 Feb 2012; Revised: 21 May 2012; Accepted: 26 May 2012; Published: 15 Jul 2012

Abstract

With the advancement of high-throughput technologies, high-dimensional genomic and proteomic data are nowadays easy to obtain and have become increasingly important in unveiling the complex etiology of many diseases. For relating a large number of factors to a survival outcome through the Cox relative risk model, various techniques have been proposed in the literature. We review some recently developed methods for such analysis. For high-dimensional variable selection in the Cox model with parametric relative risk, we consider the univariate shrinkage method (US) using the lasso penalty and the penalized partial likelihood method using folded-concave penalties (PPL). The penalization methods are not restricted to the finite-dimensional case. For the high-dimensional (p → ∞, p ≪ n) or ultrahigh-dimensional case (n → ∞, n ≪ p), both the sure independence screening (SIS) method and the extended Bayesian information criterion (EBIC) can be incorporated into the penalization methods for variable selection. We also consider the penalization method for the Cox model with semiparametric relative risk and the modified partial least squares method for the Cox model. The comparison of different methods is discussed and numerical examples are provided for illustration. Finally, areas of further research are presented.

1. Introduction

Modern high-throughput technologies offer the possibility of a powerful, genome-wide search for the genetic and environmental factors that have influential effects on diseases. The identification of such factors and the discernment of such relationships can lead to a better understanding of disease causation and to better predictive models. In the presence of a large number of covariates, it is very challenging to build a model that fully utilizes all the information and excels in both parsimony and prediction accuracy. In classical settings where the number of covariates p is fixed and the sample size n is large, subset selection coupled with model selection criteria such as Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) can be used to identify relevant variables or choose the best model with the optimal prediction accuracy. However, subset selection is inherently unstable because of its discreteness [1]. To overcome this drawback of subset selection, Tibshirani [2] proposed the least absolute shrinkage and selection operator (LASSO) for simultaneous coefficient estimation and variable selection. Fan and Li [3] further proposed the penalization method with the smoothly clipped absolute deviation (SCAD) penalty and rigorously established its oracle properties. The optimal properties of the lasso- or SCAD-based penalization methods are not restricted to the finite-dimensional case. In the high-dimensional case (p → ∞, p ≪ n), Fan and Peng [4] proved that the oracle properties are well retained. In the ultrahigh-dimensional case (n → ∞, n ≪ p), Fan and Lv [5] proposed the sure independence screening (SIS) method, which first reduces the dimensionality to a moderate scale below the sample size and then applies a penalization method. In a general asymptotic framework, the sure independence screening method is shown to fare well even for exponentially growing dimensionality. In high-dimensional or ultrahigh-dimensional situations, J. Chen and Z. Chen [6] proposed the extended Bayesian information criterion (EBIC) and established its selection consistency under mild conditions. The EBIC has further been extended to the generalized linear model [7].

When the clinical outcome involves time to an event, such as age at disease onset or time to cancer recurrence, the regression analysis is often conducted with the Cox relative risk model. The classical Cox model is only applicable when the number of subjects is much larger than the number of covariates. Thus, to accommodate the large-p, small-n scenario, variable selection and dimension reduction techniques have to be incorporated into the regression analysis. Recently, a number of approaches to variable selection in the Cox model based on efficient shrinkage methods have been proposed and have gained increasing popularity. See, for example, LASSO [8], SCAD [9], and the adaptive lasso [10, 11].

For high-dimensional variable selection in the Cox model with parametric relative risk, we review the univariate shrinkage method (US) [12] and the penalized partial likelihood approach [13]. The univariate shrinkage method [12] assumes the independence of the covariates in each risk set, so that the partial likelihood factors into a product. This leads to an attractive procedure which is univariate in its operation and well suited to high-dimensional variable selection. The variables enter the model based on the size of their Cox score statistics, and the method is similar in nature to univariate thresholding in linear regression and to nearest shrunken centroids in classification. The univariate shrinkage method is applicable to settings with an arbitrary number of variables but is less informative in identifying joint effects of multiple variables. The penalized partial likelihood approach [13] applies a class of folded-concave penalties to the Cox parametric relative risk model, and strong oracle properties of the nonconcave penalized methods are established for nonpolynomial (NP) dimensional data. A coordinate-wise algorithm is used for finding the grid of solution paths. The penalized partial likelihood approach investigates joint effects of multiple variables and is applicable to both the finite-dimensional and high-dimensional cases. For the ultrahigh-dimensional case, preliminary procedures such as the sure independence screening (SIS) method and the extended Bayesian information criterion (EBIC) can be used to reduce the number of variables to moderately below the sample size before the penalized partial likelihood approach is formally adopted.

The aforementioned two methods both adopt the Cox parametric relative risk model for the covariate analysis. In practice, the parametric form of the relative risk is quite restrictive and may not be tenable. In Section 3, we review a penalization method for the Cox model with semiparametric relative risk [14]. The relative risk is assumed to be partially linear, with one parametric component and one nonparametric component. Two penalties are applied sequentially to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts. The semiparametric relative risk model greatly relaxes the restrictive assumption of the classical Cox model and facilitates its use in exploratory data analysis. Although the method was proposed for the finite-dimensional setting, it can be straightforwardly extended to the high-dimensional and ultrahigh-dimensional situations in the same way as the penalization method for the Cox model with parametric relative risk.

In Section 4, we review a modified partial least squares method for dimension reduction in Cox regression [15], which provides an alternative approach to the problem of high dimensionality. Mimicking partial least squares in the linear model, it constructs components which are linear combinations of the original covariates. By sequentially determining the components and using cross-validation to select their number, a parsimonious model with good predictive accuracy can be obtained.

In Section 5, we compare the different methods and provide numerical examples for illustration. Finally, several important problems for future research are presented in Section 6.

2. The Penalization Methods for the Cox Model with Parametric Relative Risk

2.1. The Cox Model with Parametric Relative Risk

We consider the setting where the time to event is subject to right censoring and the observations consist of {Y_i = T_i ∧ C_i, δ_i = I(T_i ≤ C_i), Z_i, i = 1, …, n}, where T_i is the survival time, C_i the censoring time, and Z_i the p-dimensional vector of covariates. The Cox relative risk model assumes that the conditional hazard function of T given the covariates Z = z takes the form

\[ \lambda(t \mid Z = z) = \lambda_0(t) \exp(\beta_0^T z), \tag{2.1} \]

where λ_0(t) is the unknown baseline hazard function and β_0 is the unknown vector of coefficients. The influential effects that the covariates might have on the survival time are examined through the relative risk. The unknown coefficient vector β_0 is estimated by maximizing the partial likelihood function

\[ L(\beta) = \prod_{i=1}^{n} \left\{ \frac{\exp(\beta^T Z_i)}{\sum_{j \in R_i} \exp(\beta^T Z_j)} \right\}^{\delta_i}, \tag{2.2} \]

or, equivalently, the log partial likelihood function

\[ \ell(\beta) = \sum_{i=1}^{n} \delta_i \left\{ \beta^T Z_i - \log\left[ \sum_{j \in R_i} \exp(\beta^T Z_j) \right] \right\}, \tag{2.3} \]

where R_i = {j : Y_j ≥ Y_i}. As in least squares estimation, estimating β from the partial likelihood function requires that the sample size n be much larger than the dimension p of the covariate vector. In practice, a marginal approach is often adopted, which includes one covariate at a time and maximizes

\[ L_k(\beta_k) = \prod_{i=1}^{n} \left\{ \frac{\exp(\beta_k Z_{ik})}{\sum_{j \in R_i} \exp(\beta_k Z_{jk})} \right\}^{\delta_i}, \tag{2.4} \]

or

\[ \ell_k(\beta_k) = \sum_{i=1}^{n} \delta_i \left\{ \beta_k Z_{ik} - \log\left[ \sum_{j \in R_i} \exp(\beta_k Z_{jk}) \right] \right\}, \tag{2.5} \]

for k = 1, …, p.
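
To make the estimator concrete, here is a minimal NumPy sketch of the log partial likelihood (2.3); the function name, the data layout, and the Breslow-style handling of ties are our own illustrative choices rather than anything specified in the paper.

```python
import numpy as np

def cox_log_partial_likelihood(beta, Z, y, delta):
    """Log partial likelihood (2.3), handling ties in the Breslow fashion.

    beta  : (p,) coefficient vector
    Z     : (n, p) covariate matrix
    y     : (n,) observed times Y_i = min(T_i, C_i)
    delta : (n,) event indicators I(T_i <= C_i)
    """
    eta = Z @ beta                        # linear predictors beta^T Z_i
    ll = 0.0
    for i in np.flatnonzero(delta):       # only events contribute
        risk_set = y >= y[i]              # R_i = {j : Y_j >= Y_i}
        ll += eta[i] - np.log(np.exp(eta[risk_set]).sum())
    return ll
```

Maximizing this function (for example, by minimizing its negative with scipy.optimize.minimize) gives the maximum partial likelihood estimate when n is much larger than p.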

2.2. The Univariate Shrinkage Method

To identify the variables that are associated with T, multiple testing procedures can be used to make valid statistical inferences. However, Tibshirani [12] looks at the problem from another perspective. Since the maximizer of the partial likelihood is not unique when n ≪ p, he proposes the regularized partial likelihood approach using the lasso penalty:

\[ J(\beta) = \ell(\beta) - \lambda \sum_{k=1}^{p} |\beta_k|. \tag{2.6} \]

By assuming that, both conditionally on each risk set and marginally, the covariates are independent of one another, and using Bayes' theorem, Tibshirani [12] shows that the log partial likelihood factors as

\[ \ell(\beta) = \text{constant} + \sum_{k=1}^{p} \ell_k(\beta_k). \tag{2.7} \]

The regularized partial likelihood function becomes

\[ J(\beta) = \text{constant} + \sum_{k=1}^{p} \ell_k(\beta_k) - \lambda \sum_{k=1}^{p} |\beta_k|, \tag{2.8} \]

and its maximizer is the Cox univariate shrinkage (CUS) estimator. Since the maximization reduces to a set of one-dimensional maximizations of ℓ_k(β_k) − λ|β_k|, k = 1, …, p, for a range of λ, the penalized estimates β̂_k can be obtained fairly easily. In fact, the entire paths of the regularization estimates can be obtained. It can also be shown that

\[ \hat{\beta}_k \neq 0 \iff \frac{|U_k|}{\sqrt{V_k}} > \lambda, \tag{2.9} \]

where U_k and V_k are the gradient of the (unpenalized) log partial likelihood and the (negative) observed Fisher information, respectively. This is similar to soft/hard thresholding. Hence, the Cox univariate shrinkage method ranks all the covariates by their Cox score statistics. As the Cox score is often used for determining the univariate significance of covariates, the results are easy to interpret. The tuning parameter λ can be selected by cross-validation as in Verweij and van Houwelingen [16] or directly determined as in Donoho and Johnstone [17]. The Cox univariate shrinkage method presents a numerically convenient approach for high-dimensional variable selection in the Cox model. In the literature, the modified shooting algorithm [10] and the least squares approximation based algorithm [11] both yield the entire solution paths, but only when n is much larger than p.
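
The thresholding rule (2.9) can be sketched directly, taking U_k and V_k to be the score and information of the k-th univariate Cox model evaluated at β_k = 0 (the ingredients of the usual Cox score statistic) — an assumption of this sketch; all names below are illustrative.

```python
import numpy as np

def cox_univariate_scores(Z, y, delta):
    """Score U_k and information V_k of each univariate Cox model at beta_k = 0."""
    n, p = Z.shape
    U, V = np.zeros(p), np.zeros(p)
    for i in np.flatnonzero(delta):
        R = Z[y >= y[i]]                  # covariate rows of the risk set R_i
        m = R.mean(axis=0)                # risk-set means (uniform weights at beta = 0)
        U += Z[i] - m                     # score contribution of event i
        V += ((R - m) ** 2).mean(axis=0)  # risk-set variances: information contribution
    return U, V

def cus_support(Z, y, delta, lam):
    """Indices with nonzero CUS estimate according to the rule (2.9)."""
    U, V = cox_univariate_scores(Z, y, delta)
    return np.flatnonzero(np.abs(U) / np.sqrt(V) > lam)
```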

One drawback of the Cox univariate shrinkage procedure is that the variables enter the model based on their univariate Cox scores. Thus, when two predictors are both strongly predictive and highly correlated with each other, both will appear in the model. In that case, it may be more desirable to include just one of them for parsimony. This can be done using preconditioning [18], as demonstrated by Tibshirani [12].

2.3. The Penalized Partial Likelihood Method

Penalized partial likelihood estimation with nonconcave penalties has been extensively studied by Fan and Li [9] for the case where the sample size n is much larger than the dimension of Z. Bradic et al. [13] considered folded-concave penalties for penalized partial likelihood estimation when the dimension of Z is nonpolynomial (NP). The folded-concave penalties include the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP) as special cases. The penalized log partial likelihood becomes

\[ \ell(\beta) - \lambda_n \sum_{k=1}^{p} p_{\lambda_n}(|\beta_k|), \tag{2.10} \]

where p_{λ_n}(·) is a penalty function and λ_n is a nonnegative tuning parameter. For a class of folded-concave penalties, by clarifying the identification problem of the penalized partial likelihood estimates and deriving a large deviation result for the divergence of a martingale from its compensator, Bradic et al. [13] establish strong oracle properties for the penalized estimates. Note that their results also hold for the lasso penalty. The strong oracle property indicates that, as both n and p go to ∞, with probability tending to 1, the penalized estimator behaves as if the true relevant variables in the model were known. This differs from the classical notion of oracle, which only requires that the estimator behave like the oracle rather than be an actual oracle itself. The strong oracle property implies the classical oracle property of Fan and Li [9] and the sign consistency of Bickel et al. [19]. This tighter notion of an oracle property was first mentioned in Kim et al. [20] for the SCAD estimator of the linear model with polynomial dimensionality and then extended by Bradic et al. [21] to penalized M-estimators in the ultrahigh-dimensional setting. Bradic et al. [13] further extended it to the Cox model by employing sophisticated techniques dealing with the martingale and censoring structures.
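
For concreteness, the two folded-concave penalties named above can be written as follows; the defaults a = 3.7 (suggested by Fan and Li) and γ = 3 are conventional choices, not values taken from this review.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty p_lambda(|t|): linear, then quadratic, then constant."""
    t = np.abs(t)
    quad = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t,
                    np.where(t <= a * lam, quad, (a + 1) * lam ** 2 / 2))

def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty (MCP): quadratic up to gamma*lam, then constant."""
    t = np.abs(t)
    return np.where(t <= gamma * lam, lam * t - t ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)
```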

Analogous to the Cox univariate shrinkage method, the penalized Cox relative risk method [13] uses a coordinate-wise algorithm, which is especially attractive when p ≫ n and has previously been studied for linear and generalized linear models [5, 22, 23]. Since coordinate-wise maximization in each iteration yields limits that are stationary points of the overall optimization, each output of the iterative coordinate ascent algorithm (ICA) proposed by Bradic et al. [13] is a stationary point.

In each iteration, sequentially for k = 1, …, p, the k-th coordinate of the estimate is updated by maximizing a univariate penalized likelihood obtained from a partial quadratic approximation of ℓ(β) at the current estimate along the k-th coordinate, with the other coordinates held fixed. Due to its univariate nature, this problem can be solved analytically, avoiding the challenges of nonconcave optimization. Each coordinate is updated only if the maximizer of the penalized univariate problem also increases the penalized objective function. The algorithm stops when two successive values of the penalized objective function differ by no more than a small threshold.
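
The following schematic conveys the structure of such a coordinate-wise ascent. It is illustrative rather than the ICA of Bradic et al.: the analytic univariate update is replaced by generic one-dimensional numerical maximization, it reuses cox_log_partial_likelihood and scad_penalty from the sketches above, and the penalty scaling ℓ(β) − n Σ_k p_λ(|β_k|) is one common convention (scalings differ across papers).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ica_sketch(Z, y, delta, lam, max_sweeps=20, tol=1e-6):
    """Coordinate-wise ascent on a SCAD-penalized log partial likelihood."""
    n, p = Z.shape
    beta = np.zeros(p)

    def objective(b):
        return (cox_log_partial_likelihood(b, Z, y, delta)
                - n * scad_penalty(b, lam).sum())

    current = objective(beta)
    for _ in range(max_sweeps):
        previous = current
        for k in range(p):
            def neg_obj_k(bk, k=k):
                trial = beta.copy()
                trial[k] = bk
                return -objective(trial)
            bk_new = minimize_scalar(neg_obj_k, bounds=(-10, 10),
                                     method="bounded").x
            if -neg_obj_k(bk_new) > current:   # accept ascent steps only
                beta[k] = bk_new
                current = -neg_obj_k(bk_new)
        if abs(current - previous) < tol:      # objective has stabilized
            break
    return beta
```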

Although both the iterative coordinate ascent algorithm (ICA) and the univariate shrinkage method exploit the convenience of univariate optimization, the univariate shrinkage method decouples the coefficient estimates, while in the iterative coordinate ascent algorithm the coefficient estimates remain related to one another through the iterative updating. The two methods also require different conditions. The univariate shrinkage method assumes that, both conditionally on each risk set and marginally, the covariates are independent of one another, while the penalized Cox relative risk method imposes conditions on the folded-concave penalties, the sparsity level, the dimensionality of the covariate vector, and the magnitude of the tuning parameter λ_n.

2.4. The Penalized Partial Likelihood Approach for the Ultrahigh-Dimensional Case

While the univariate shrinkage method is applicable to an arbitrary dimensionality, the penalized partial likelihood requires that the sample size be larger than the number of variables. Thus, to apply the penalized partial likelihood approach to the ultrahigh-dimensional case, a preliminary screening procedure is needed. Fan and Lv [5] proposed the sure independence screening procedure, which first shrinks the full model {1, …, p} straightforwardly and accurately down to a submodel of size d = o(n). The original problem of estimating the sparse p-vector β thus reduces to estimating a sparse d-vector based on the now much smaller submodel. The penalized partial likelihood method of Section 2.3 can then be applied to the submodel. Fan and Lv [5] proved that the sure independence screening method has optimal theoretical properties even for exponentially growing dimensionality.
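
A minimal sketch of such a screening step. Fan and Lv rank covariates by marginal fits; here the univariate Cox score statistic from Section 2.2 serves as a convenient marginal utility (an assumption of this sketch), reusing cox_univariate_scores from above. The default d = n/(4 log n) mirrors the choice used later in Section 5.2.

```python
import numpy as np

def sis_screen(Z, y, delta, d=None):
    """Keep the d covariates with the largest marginal Cox score statistics."""
    n = len(y)
    if d is None:
        d = int(n / (4 * np.log(n)))            # submodel size d = o(n)
    U, V = cox_univariate_scores(Z, y, delta)   # sketched in Section 2.2
    marginal = np.abs(U) / np.sqrt(V)           # |U_k| / sqrt(V_k)
    return np.argsort(marginal)[::-1][:d]       # indices of the retained submodel
```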

For small-n-large-p problems, traditional model selection criteria such as AIC, BIC, and cross-validation choose too many features. To overcome this difficulty, J. Chen and Z. Chen [6] developed a family of extended Bayesian information criteria (EBIC). The EBIC is shown to be consistent, with nice finite sample properties, in both the linear model [6] and the generalized linear model [7]. For any subset model s ⊂ {1, 2, …, p}, denote its size by ν(s), and let β̂(s) be the maximum partial likelihood estimate corresponding to the subset model s. The extended Bayesian information criterion is defined as

\[ -2\,\ell\bigl(\hat{\beta}(s)\bigr) + \nu(s) \log n + 2\,\nu(s)\,\gamma \log p, \tag{2.11} \]

where γ is a prespecified constant, which can be chosen to be 0.5 as suggested by J. Chen and Z. Chen [7]. Optimal theoretical properties such as the selection consistency of the EBIC have been rigorously established by J. Chen and Z. Chen for the linear model [6] and for the generalized linear model [7]. The EBIC can readily be applied to the Cox model, and it is worthwhile to further investigate its theoretical properties there, which have not yet been addressed in the literature.
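
A small sketch of EBIC-based selection via (2.11), reusing cox_log_partial_likelihood from Section 2.1; the helper names and the way candidate submodels are supplied are our own.

```python
import numpy as np
from scipy.optimize import minimize

def ebic(max_loglik, nu, n, p, gamma=0.5):
    """EBIC (2.11): smaller is better."""
    return -2.0 * max_loglik + nu * np.log(n) + 2.0 * nu * gamma * np.log(p)

def submodel_loglik(Z, y, delta, support):
    """Maximized log partial likelihood of the subset model `support`."""
    Zs = Z[:, list(support)]
    fit = minimize(lambda b: -cox_log_partial_likelihood(b, Zs, y, delta),
                   np.zeros(Zs.shape[1]), method="BFGS")
    return -fit.fun

def select_by_ebic(Z, y, delta, candidate_models, gamma=0.5):
    """Pick the candidate subset model with the smallest EBIC."""
    n, p = Z.shape
    return min(candidate_models,
               key=lambda s: ebic(submodel_loglik(Z, y, delta, s),
                                  len(s), n, p, gamma))
```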

3. The Penalization Method for the Cox Model with Semiparametric Relative Risk

The Cox relative risk model is sometimes too restrictive for examining covariate effects. It seems implausible that the linearity assumption holds in the presence of a large number of predictors. Intuitively, at least for some of them, the linearity assumption might be violated, and modeling covariate effects via the parametric relative risk model might then lead to erroneous results. Moreover, there are two objectives in the high-dimensional regression analysis of genetic studies with censored survival outcomes: we want not only to identify the predictor variables associated with the survival time but also to discern the form of the relationship when an association does exist. Therefore, it is worth considering alternative survival models for examining covariate effects.

Du et al. [14] proposed a penalized method for the Cox model with semiparametric relative risk. Let Z^T = (U^T, W^T), where U and W are subvectors of Z with dimensions d = p − q and q, respectively. Instead of (2.1), they assume that

\[ \lambda(t \mid Z = z) = \lambda_0(t) \exp\bigl[\beta_0^T u + \eta(w)\bigr], \tag{3.1} \]

where η(w) = η(w_1, …, w_q) is an unknown multivariate smooth function. The model assumes the additivity of the effects of U and W, and only the effect of U is postulated to be linear; the effect of W can be of any form. This greatly enhances the flexibility and facilitates a more robust investigation of the covariate effects across a large number of genetic and environmental factors. Similarly to (2.3), the log partial likelihood is

\[ \ell(\beta, \eta) = \sum_{i=1}^{n} \delta_i \left\{ \beta^T U_i + \eta(W_i) - \log\left[ \sum_{j \in R_i} \exp\bigl(\beta^T U_j + \eta(W_j)\bigr) \right] \right\}. \tag{3.2} \]

Du et al. [14] proposed two penalties for the model (3.1): one penalizes the roughness of the function η, and the other enables simultaneous coefficient estimation and variable selection. The estimation iterates between estimating η given an estimate of β and estimating β given an estimate of η. Given an estimate β̂ of β, η is estimated by maximizing

\[ \ell(\hat{\beta}, \eta) - \lambda J(\eta), \tag{3.3} \]

where J is a roughness penalty specifying the smoothness of η and λ > 0 is a smoothing parameter controlling the tradeoff. A popular choice for J is the L2-penalty, which yields tensor product cubic splines for multivariate W. Given an estimate η̂ of η, β can be estimated by maximizing

\[ \ell(\beta, \hat{\eta}) - \sum_{k=1}^{d} p_{\theta_n}(|\beta_k|), \tag{3.4} \]

where p_{θ_n}(·) is the SCAD penalty function and θ_n is the tuning parameter. In the numerical implementation, the SCAD penalty is approximated by a one-step approximation which transforms the SCAD problem into a LASSO-type optimization, for which the celebrated LARS algorithm [24] readily yields the entire solution path.
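
As a toy illustration of the two-penalty idea — emphatically not the algorithm of Du et al., who use smoothing splines, SCAD, and a LARS-based one-step solver — the sketch below takes a single W, represents η by a cubic polynomial basis, penalizes the nonlinear basis coefficients as a crude roughness surrogate, replaces SCAD by a plain L1 penalty, and maximizes the penalized log partial likelihood jointly with a derivative-free method.

```python
import numpy as np
from scipy.optimize import minimize

def semiparametric_sketch(U, w, y, delta, lam=1.0, theta=0.1):
    """Toy version of (3.3)-(3.4) with eta(w) = alpha' [w^3, w^2, w, 1]."""
    B = np.vander(w, 4)                        # cubic polynomial basis, q = 1
    d = U.shape[1]

    def neg_penalized(par):
        beta, alpha = par[:d], par[d:]
        risk = U @ beta + B @ alpha            # beta^T U_i + eta(W_i)
        ll = sum(risk[i] - np.log(np.exp(risk[y >= y[i]]).sum())
                 for i in np.flatnonzero(delta))
        rough = alpha[0] ** 2 + alpha[1] ** 2  # penalize the nonlinear terms
        return -(ll - lam * rough - theta * np.abs(beta).sum())

    fit = minimize(neg_penalized, np.zeros(d + 4), method="Powell")
    return fit.x[:d], fit.x[d:]                # beta estimate, basis coefficients
```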

The algorithm converges quickly within a few iterations. In this approach, the SCAD penalty enables simultaneous coefficient estimation and variable selection in the parametric component of the relative risk model. As the multivariate smooth function η also involves multiple predictor variables, it is necessary to identify the correct structure of η and the relevant variables in W as well. Having taken care of variable selection for the parametric component, we still need an approach for assessing the structure of the nonparametric component. By transforming the profile partial likelihood into a density estimation problem with biased sampling, Du et al. [14] further derive a model selection tool based on Kullback-Leibler geometry for the nonparametric component η. Specifically, a quantity based on the ratio of two Kullback-Leibler distances can be used to diagnose the feasibility of a reduced model for η: the smaller the ratio, the more feasible the reduced model. Thus, the penalized Cox semiparametric relative risk approach provides a flexible tool for identifying relevant variables in both the parametric and nonparametric components.

4. The Modified Partial Least Squares Method for Dimension Reduction in the Cox Model

Partial least squares (PLS) [25] is a classical dimension reduction method for dealing with a large number of covariates. By constructing new variables that are linear combinations of the original variables, it fully utilizes the information, and a proper regression analysis can be conducted using the new variables. Unlike principal components (PC) analysis, partial least squares uses the information contained in both the response variable and the predictor variables to construct the new variables. This complicates its direct application to censored survival data, since the response variable is subject to right censoring. Nguyen and Rocke [26] applied the standard PLS method of Wold [25] directly to survival data and used the resulting PLS components in the Cox model for predicting survival time. Since this approach does not take into account that some of the survival times are censored and are not exactly the underlying times to event, the resulting components are questionable and may induce bias. Alternatively, by reformulating the Cox model as a generalized linear model, Park et al. [27] applied the PLS formulation of Marx [28] to derive the PLS components. Despite its validity, the introduction of many additional nuisance parameters in the reformulation makes the algorithm fail to converge when the number of covariates is large.

Li and Jiang [15] proposed a modified partial least squares method for the Cox model which constructs the components by repeated least squares fitting of residuals and Cox regression fitting. Let w_{ij} ∝ var(V_{ij}) be weights normalized to sum to one. First, let X_j = (Z_{1j}, …, Z_{nj})^T and define

\[ V_{1j} = X_j - \bar{z}_{\cdot j} \mathbf{1}, \tag{4.1} \]

where z̄_{·j} = (1/n) Σ_{i=1}^{n} Z_{ij} and 1 is an n-dimensional vector of ones. After fitting the Cox model with one covariate at a time, we obtain the maximum partial likelihood estimate β̂_{1j} for the predictor variable V_{1j}, j = 1, …, p. Combining these estimates, we get the first component

\[ T_1 = \sum_{j=1}^{p} w_{1j} \hat{\beta}_{1j} V_{1j}. \tag{4.2} \]

The information in X that is not in T_1 can be written as the residuals of regressing V_{1,j} on T_1:

\[ V_{2,j} = V_{1,j} - \frac{T_1^T V_{1j}}{T_1^T T_1} T_1. \tag{4.3} \]

By performing the Cox regression analysis with T_1 and V_{2,j} (one j at a time), we obtain the maximum partial likelihood estimates β̂_{2j} and consequently the second component

\[ T_2 = \sum_{j=1}^{p} w_{2j} \hat{\beta}_{2j} V_{2j}. \tag{4.4} \]

This procedure extends iteratively in a natural way to give the components T_2, …, T_K, where the maximum value of K is the sample size n. Specifically, suppose that T_i has just been constructed. To construct T_{i+1}, we first regress V_{ij} against T_i and denote the residual by V_{(i+1),j}, which can be written as

\[ V_{(i+1),j} = V_{i,j} - \frac{T_i^T V_{ij}}{T_i^T T_i} T_i. \tag{4.5} \]

Then we fit the Cox relative risk model

\[ \lambda(t \mid Z) = \lambda_0(t) \exp\bigl(\beta_1 T_1 + \cdots + \beta_i T_i + \beta_{(i+1),j} V_{(i+1),j}\bigr), \tag{4.6} \]

obtain the maximum partial likelihood estimates β̂_{(i+1),j}, and set

\[ T_{i+1} = \sum_{j=1}^{p} w_{(i+1)j} \hat{\beta}_{(i+1)j} V_{(i+1)j}. \tag{4.7} \]

With the components T_1, …, T_K, a standard Cox regression model can be fitted and the risk score obtained as

\[ \hat{\beta}_1 T_1 + \cdots + \hat{\beta}_K T_K, \tag{4.8} \]

where β̂_j, j = 1, …, K, is the maximum partial likelihood estimate of β_j in the Cox relative risk model

\[ \lambda(t \mid Z) = \lambda_0(t) \exp\bigl(\beta_1 T_1 + \cdots + \beta_K T_K\bigr). \tag{4.9} \]

This risk score can then be used for estimating the hazard function of future samples on the basis of their X values. By examining the coefficients of the X values in the final model with K components, one can rank the covariate effects by the risk score. The number of components K can be chosen by cross-validation.
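
A compact sketch of the construction (4.1)-(4.7), with a generic numerical maximizer of the log partial likelihood (reusing cox_log_partial_likelihood from Section 2.1) standing in for the Cox fits; the function name and implementation details are ours.

```python
import numpy as np
from scipy.optimize import minimize

def modified_pls_components(X, y, delta, K=3):
    """Sketch of the Li-Jiang component construction (4.1)-(4.7)."""
    n, p = X.shape
    V = X - X.mean(axis=0)                       # (4.1): center each column
    T = []                                       # components T_1, ..., T_K
    for _ in range(K):
        w = V.var(axis=0)
        w = w / w.sum()                          # weights w_ij ~ var(V_ij), normalized
        beta = np.empty(p)
        for j in range(p):
            D = np.column_stack(T + [V[:, j]])   # previous components plus V_ij, as in (4.6)
            fit = minimize(lambda b: -cox_log_partial_likelihood(b, D, y, delta),
                           np.zeros(D.shape[1]), method="BFGS")
            beta[j] = fit.x[-1]                  # coefficient of V_ij
        t = V @ (w * beta)                       # (4.2), (4.4), (4.7)
        T.append(t)
        V = V - np.outer(t, t @ V) / (t @ t)     # (4.3), (4.5): residualize on t
    return np.column_stack(T)
```

The returned components can then be entered into a standard Cox fit to form the risk score (4.8), with K chosen by cross-validation.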

5. Comparison of Different Methods and Numerical Examples

5.1. Comparison Using Survival Prediction

In the previous sections, four different approaches were reviewed for identifying the relevant factors that have influential effects on the survival time. In practice, it is important and interesting to compare the different methods, which can be done using some measure of survival prediction. To assess the performance of the methods, the data set is first divided randomly into a training sample and a test sample. For example, ν-fold cross-validation divides the sample randomly into ν parts; one part is retained as the test set while the remaining ν − 1 folds are used as the training set. The training sample gives the estimated risk score for a given model (method), which is then used in the test sample for prediction. There are many measures of survival prediction. One of them mimics a randomized clinical trial in assigning the test sample to two groups, one "good" and one "bad". Whether a subject in the test sample falls into the good group or the bad group depends on whether his or her risk score is smaller than a threshold value. The log-rank test can then be used to test the hypothesis that there is no difference between the two groups. The smaller the P value of the resulting log-rank test, the better the predictive power of the estimated risk score, which translates into better performance of the method/model. The dataset is randomly split into training and test samples many times, so the comparison of different methods can be made by looking at a summary of the P values of the log-rank tests over a large number of replications, say the median.

The disadvantage of the log-rank test is that the subjects are only assigned to two groups, so the risk score is used only through its comparison with a threshold value; the information contained in the continuous risk score is not fully utilized for survival prediction. Alternatively, we can fit a Cox regression on the test sample using the risk score estimated from the training sample as a single covariate. The predictive power of the estimated risk score is then indicated by the significance of the risk score covariate in the fitted Cox regression model for the test sample. Again, the P values obtained with different methods over a large number of replications can help us assess their performance in terms of survival prediction.
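
Both evaluation schemes need a two-sample log-rank test and a random train/test split; the sketch below implements the test from scratch and dichotomizes the test sample at the training-median risk score. Here fit_method stands for any estimator from Sections 2-4 that returns a coefficient vector, and all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def logrank_pvalue(y, delta, group):
    """Two-sample log-rank test; `group` is a boolean high-risk indicator."""
    O = E = V = 0.0
    for t in np.unique(y[delta == 1]):           # distinct event times
        at_risk = y >= t
        n_all = at_risk.sum()
        n_1 = (at_risk & group).sum()
        d_all = ((y == t) & (delta == 1)).sum()  # events at time t
        d_1 = ((y == t) & (delta == 1) & group).sum()
        O += d_1                                 # observed events in group 1
        E += d_all * n_1 / n_all                 # expected under the null
        if n_all > 1:                            # hypergeometric variance term
            V += d_all * (n_1 / n_all) * (1 - n_1 / n_all) * (n_all - d_all) / (n_all - 1)
    return chi2.sf((O - E) ** 2 / V, df=1)

def evaluate_split(Z, y, delta, fit_method, rng):
    """One random split: train a risk score, dichotomize the test sample at the
    training median, and report the test-sample log-rank P value."""
    n = len(y)
    idx = rng.permutation(n)
    tr, te = idx[: n // 2], idx[n // 2:]
    beta = fit_method(Z[tr], y[tr], delta[tr])   # any method from Sections 2-4
    threshold = np.median(Z[tr] @ beta)
    high_risk = (Z[te] @ beta) > threshold
    return logrank_pvalue(y[te], delta[te], high_risk)
```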

5.2. Simulation Studies

We conduct simulation studies to compare the different methods. As a simple illustration, we focus on the univariate shrinkage method (US) and the penalized shrinkage method (PS) reviewed in Section 2. We set the sample size n = 500 and the number of covariates p = 250, 500, and 1000, respectively. The covariates are jointly normally distributed with equal correlation coefficient ρ = 0.5. The first six covariates are the only relevant variables, with β_1 = β_3 = β_5 = 1 and β_2 = β_4 = β_6 = −1. The baseline hazard function in (2.1) is set to the constant 1, and the censoring time is generated from the Uniform(0, τ) distribution, where τ is chosen to yield a censoring proportion of 30%. For the univariate shrinkage method, the top-ranked variables significant at the 0.05 level after Bonferroni correction are selected. For the penalized shrinkage method, the sure independence screening procedure preselects n/(4 log n) = 20 variables and the penalized partial likelihood method is then applied to obtain the final model. As a third method, we directly use the EBIC to select a subset model. We report the median squared estimation error (MSE), where the squared estimation error is defined as

\[ \sum_{j=1}^{p} |\hat{\beta}_j - \beta_j|^2. \tag{5.1} \]

We also report the average number of selected variables (MMS) and the average positive selection and false discovery rates (PSR and FDR), where

\[ \text{PSR} = \frac{\sum_{j=1}^{N} \nu(s_j^* \cap s_0)}{N\,\nu(s_0)}, \qquad \text{FDR} = \frac{\sum_{j=1}^{N} \nu(s_j^* \setminus s_0)}{\sum_{j=1}^{N} \nu(s_j^*)}, \tag{5.2} \]

N = 200 is the number of replications, s_0 denotes the true model, and s_j^* denotes the selected model in the j-th replication. The simulation results are summarized in Table 1, from which we can see that both the PS and the EBIC perform better than the US. Compared with the EBIC, the PS selects slightly more variables and has relatively larger FDRs and PSRs.
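
Before turning to the results, here is a sketch of the data-generating mechanism just described. The text does not report the value of τ, so the default below is a placeholder that would have to be tuned to reach the 30% censoring rate.

```python
import numpy as np

def simulate(n=500, p=250, rho=0.5, tau=2.7, seed=0):
    """Equicorrelated normal covariates, constant baseline hazard 1,
    Uniform(0, tau) censoring; tau is a placeholder, not the paper's value."""
    rng = np.random.default_rng(seed)
    # Equicorrelation(rho): Z = sqrt(rho)*common + sqrt(1-rho)*idiosyncratic
    common = rng.standard_normal((n, 1))
    Z = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:6] = [1, -1, 1, -1, 1, -1]              # the six relevant coefficients
    T = rng.exponential(1.0 / np.exp(Z @ beta))   # lambda_0 = 1: exponential given Z
    C = rng.uniform(0, tau, n)                    # Uniform(0, tau) censoring
    return Z, np.minimum(T, C), (T <= C).astype(int)
```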


š‘ MethodMSEFDRPSRMMS

250US1.620.060.583.97
PS0.920.160.766.55
EBIC0.960.100.694.70

500US1.710.070.553.95
PS1.010.180.746.63
EBIC1.060.110.664.73

1000US1.840.080.523.91
PS1.120.200.706.69
EBIC1.150.130.634.78

5.3. A Real Example

We analyze the lung cancer microarray dataset of Beer et al. [29]. The dataset consists of the expression levels of 4966 genes for 83 patients. The patients were classified according to the progression of the disease: sixty-four patients were classified as stage I and nineteen as stage III. For each of the 83 patients, the survival time as well as the censoring status is available. Covariates in addition to the gene expressions are age, gender, and smoking status. Our aim is to study the association of survival time with the gene expressions, adjusting for the effects of the other covariates, via the Cox model with parametric relative risk. The US, PS, and EBIC are used to select variables. We divide the 83 patients into two groups by randomly assigning 32 of the 64 stage I and 9 of the 19 stage III patients to the training group and the remaining patients to the test group. Adjusting for the covariate (gender, age, smoking) effects, we fit the Cox model with the selected genes and construct a risk index. The 50th percentile of the risk index from the training group is employed as the threshold. We then apply the threshold to the test dataset to define low-risk and high-risk groups. To assess the predictability of the so-defined discriminant criterion, we perform a log-rank test of the difference in survival between the two groups defined by the risk index. If the survival times of the two groups are well separated (as measured by the P value of the log-rank test), then the method has better predictability. We therefore use the resulting median P value (over 1000 random splits of the data into training and test sets) as the measure of prediction accuracy of the different methods. The results are summarized in Table 2. The PS and EBIC have comparable predictability, which is much better than that of the US.


Table 2: Results for the lung cancer data over 1000 random splits.

Method  Number of selected genes  Median P value (×10^-4)
US       5                        10.06
PS      13                         0.064
EBIC     8                         0.082

6. Further Work

In this paper we have reviewed some recently developed methods for high-dimensional regression analysis in genetic studies with censored survival outcomes. The identification of relevant variables that have influential effects on the survival time leads to a better understanding of disease and gene/environment associations for many complex diseases. Although the Cox model is widely used to examine covariate effects through the relative risk, the proportional hazards assumption may be violated in practice, for example, when there are long-term survivors. In some situations, alternative models such as the additive risks model, the proportional odds model, or, more generally, semiparametric transformation models may fare better. Furthermore, as discussed before, the linearity assumption may not be tenable either. It would be interesting to develop parallel methodologies for these alternative models.

Although the Cox semiparametric relative risk model relaxes the assumptions to some extent, the classification of the covariates into the parametric component (with the linearity assumption) and the nonparametric component (without it) is challenging and unsolved. The problem becomes more difficult when both the proportional hazards assumption and the linearity assumption are violated. In the presence of a large number of genetic and environmental factors, we undoubtedly have to make assumptions on the underlying structure to proceed. It is worth investigating how nonproportionality and nonlinearity, alone or jointly, impact high-dimensional regression analysis; in particular, how sensitive the identification of relevant variables is to model misspecification, and whether other structures can be postulated that have appealing properties and are well suited to high-dimensional regression analysis.

It is also worth noting that model selection serves two different purposes. One is selection consistency, as captured by the oracle properties; the other is prediction accuracy. While prediction accuracy can be well assessed by cross-validation, selection consistency should be assessed using the FDRs and PSRs.

Acknowledgments

The author is very grateful to Professor Yongzhao Shao and three anonymous referees for many helpful comments which improved the presentation of the paper. This work was supported by a grant from the National University of Singapore (R-155-000-112-112).

References

  1. L. Breiman, "Heuristics of instability and stabilization in model selection," The Annals of Statistics, vol. 24, pp. 2350–2383, 1996.
  2. R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.
  3. J. Fan and R. Li, "Variable selection via nonconcave penalized likelihood and its oracle properties," Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
  4. J. Fan and H. Peng, "Nonconcave penalized likelihood with a diverging number of parameters," The Annals of Statistics, vol. 32, no. 3, pp. 928–961, 2004.
  5. J. Fan and J. Lv, "Sure independence screening for ultrahigh dimensional feature space," Journal of the Royal Statistical Society B, vol. 70, no. 5, pp. 849–911, 2008.
  6. J. Chen and Z. Chen, "Extended Bayesian information criteria for model selection with large model spaces," Biometrika, vol. 95, no. 3, pp. 759–771, 2008.
  7. J. Chen and Z. Chen, "Extended BIC for small-n-large-P sparse GLM," Statistica Sinica. In press.
  8. R. Tibshirani, "The lasso method for variable selection in the Cox model," Statistics in Medicine, vol. 16, pp. 385–395, 1997.
  9. J. Fan and R. Li, "Variable selection for Cox's proportional hazards model and frailty model," The Annals of Statistics, vol. 30, no. 1, pp. 74–99, 2002.
  10. H. H. Zhang and W. Lu, "Adaptive Lasso for Cox's proportional hazards model," Biometrika, vol. 94, no. 3, pp. 691–703, 2007.
  11. H. Zou, "A note on path-based variable selection in the penalized proportional hazards model," Biometrika, vol. 95, no. 1, pp. 241–247, 2008.
  12. R. J. Tibshirani, "Univariate shrinkage in the Cox model for high dimensional data," Statistical Applications in Genetics and Molecular Biology, vol. 8, pp. 3498–3528, 2009.
  13. J. Bradic, J. Fan, and J. Jiang, "Regularization for Cox's proportional hazards model with NP-dimensionality," The Annals of Statistics, vol. 39, no. 6, pp. 3092–3120, 2011.
  14. P. Du, S. Ma, and H. Liang, "Penalized variable selection procedure for Cox models with semiparametric relative risk," The Annals of Statistics, vol. 38, no. 4, pp. 2092–2117, 2010.
  15. H. Li and G. Jiang, "Partial Cox regression analysis for high-dimensional microarray gene expression data," Bioinformatics, vol. 20, suppl. 1, pp. i208–i215, 2004.
  16. P. Verweij and H. van Houwelingen, "Cross-validation in survival analysis," Statistics in Medicine, vol. 12, pp. 2305–2314, 1993.
  17. D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
  18. D. Paul, E. Bair, T. Hastie, and R. Tibshirani, "'Preconditioning' for feature selection and regression in high-dimensional problems," The Annals of Statistics, vol. 36, no. 4, pp. 1595–1618, 2008.
  19. P. J. Bickel, Y. Ritov, and A. B. Tsybakov, "Simultaneous analysis of LASSO and Dantzig selector," The Annals of Statistics, vol. 37, pp. 1705–1732, 2009.
  20. Y. Kim, H. Choi, and H.-S. Oh, "Smoothly clipped absolute deviation on high dimensions," Journal of the American Statistical Association, vol. 103, no. 484, pp. 1665–1673, 2008.
  21. J. Bradic, J. Fan, and W. Wang, "Penalized composite quasi-likelihood for ultrahigh-dimensional variable selection," Journal of the Royal Statistical Society B. In press.
  22. T. T. Wu and K. Lange, "Coordinate descent algorithms for lasso penalized regression," The Annals of Applied Statistics, vol. 2, no. 1, pp. 224–244, 2008.
  23. J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, pp. 1–22, 2010.
  24. B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004. With discussion and a rejoinder by the authors.
  25. H. Wold, "Estimation of principal components and related models by iterative least squares," in Multivariate Analysis, P. R. Krishnaiah, Ed., pp. 391–420, Academic Press, New York, NY, USA, 1966.
  26. D. Nguyen and D. M. Rocke, "Partial least squares proportional hazard regression for application to DNA microarray data," Bioinformatics, vol. 18, pp. 1625–1632, 2002.
  27. P. J. Park, L. Tian, and I. S. Kohane, "Linking expression data with patient survival times using partial least squares," Bioinformatics, vol. 18, pp. S120–S127, 2002.
  28. B. D. Marx, "Iteratively reweighted partial least squares estimation for generalized linear regression," Technometrics, vol. 38, pp. 374–381, 1996.
  29. D. G. Beer, S. L. Kardia, C. C. Huang et al., "Gene-expression profiles predict survival of patients with lung adenocarcinoma," Nature Medicine, vol. 8, pp. 816–824, 2002.

Copyright © 2012 Jinfeng Xu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

