Abstract

Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization problem can be efficiently solved using our proposed direct path seeking approach, which produces solutions that closely approximate those for the convex loss function with the nonconvex regularization. Simulation results on artificial datasets and four real microarray gene expression datasets, including the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.

1. Introduction

One of the most important objectives of survival analysis is to select a small number of key risk factors from many potential predictors [1]. Commonly, the Cox proportional hazards model [2, 3] is used to study the relationship between predictor variables and survival time. Suppose a dataset has a sample size of $n$ for studying the survival time on $p$ covariates; we use $(t_i, \delta_i, x_i)$ to represent the $i$th individual's sample, where $t_i$ is the observed survival time, which is complete if $\delta_i = 1$ and right censored if $\delta_i = 0$. As in regression, $x_i = (x_{i1}, \ldots, x_{ip})^{T}$ is the vector of potential predictors.

By the Cox proportional hazards model, the hazard function is given as
$$h(t \mid x) = h_0(t)\exp(x^{T}\beta), \qquad (1)$$
where the baseline hazard function $h_0(t)$ is unspecified and $\beta = (\beta_1, \ldots, \beta_p)^{T}$ is the regression coefficient vector of the $p$ variables. Cox's partial log-likelihood is expressed as
$$l(\beta) = \sum_{i:\,\delta_i = 1} \Bigl[ x_i^{T}\beta - \log\Bigl(\sum_{j \in R_i} \exp(x_j^{T}\beta)\Bigr) \Bigr], \qquad (2)$$
where $R_i$ denotes the set of indices of the individuals still at risk at time $t_i$.
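As a concrete illustration, the following minimal Python sketch evaluates the partial log-likelihood (2) for a given coefficient vector. It assumes the Breslow convention and no tied event times; the function name and data layout are our own choices, not part of the original paper.

```python
import numpy as np

def cox_partial_loglik(beta, X, time, status):
    """Cox partial log-likelihood (2); status[i] = 1 for an observed event,
    0 for right censoring.  Assumes no tied event times (Breslow form)."""
    eta = X @ beta                          # linear predictors x_i^T beta
    loglik = 0.0
    for i in np.where(status == 1)[0]:      # sum over uncensored subjects
        at_risk = time >= time[i]           # risk set R_i at time t_i
        loglik += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return loglik

# tiny usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
time = rng.exponential(size=6)
status = np.array([1, 0, 1, 1, 0, 1])
print(cox_partial_loglik(np.zeros(3), X, time, status))
```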

In practice, only a small number of the predictor variables actually affect the hazard rate. The goal of variable selection in the Cox proportional hazards model is to identify these key risk factors. Recently, a series of penalized partial likelihood methods, such as the $L_1$-type [4–7], $L_q$ ($0 < q < 1$) [8], and $L_{1/2}$ [9, 10] penalized methods, were proposed for the Cox proportional hazards model. These penalized partial likelihood methods find important risk factors by shrinking some regression coefficients to zero.

The standard penalized methods cannot be applied directly to the nonlinear Cox model to obtain parameter estimates. Therefore, Tibshirani [11] proposed an iterative procedure to transform Cox's partial log-likelihood function (2) into a linear regression problem. Let $X$ denote the predictor variable matrix, and let $\eta = X\beta$, $\mu = \partial l / \partial \eta$, $A = -\partial^{2} l / \partial \eta\, \partial \eta^{T}$, and $z = \eta + A^{-}\mu$, where $A^{-}$ is a generalized inverse of $A$. Since the resulting general quadratic programming problem cannot be solved directly when $p \gg n$, Gui and Li [12] applied the Cholesky decomposition to obtain $T$ such that $A = T^{T}T$, $\tilde{X} = TX$, and $\tilde{y} = Tz$. By the Taylor expansion, the partial log-likelihood is then approximated by the quadratic form
$$l(\beta) \approx -\tfrac{1}{2}\,\|\tilde{y} - \tilde{X}\beta\|_{2}^{2}. \qquad (3)$$
Thus, the regularization methods can directly solve the penalized regression problem
$$\hat{\beta} = \arg\min_{\beta}\Bigl\{\tfrac{1}{2}\,\|\tilde{y} - \tilde{X}\beta\|_{2}^{2} + \lambda P(\beta)\Bigr\}, \qquad (4)$$
where $\lambda$ is the tuning parameter and $P(\beta)$ is the penalty function.
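A sketch of this transformation, in our own notation: it computes the gradient and Hessian of the partial log-likelihood with respect to the linear predictor $\eta = X\beta$, forms the working response $z$, and uses a Cholesky factor of $A$ to obtain the transformed data $\tilde{X}$, $\tilde{y}$. The small ridge (jitter) added to keep $A$ invertible and the no-ties assumption are implementation choices of ours, not part of the paper.

```python
import numpy as np

def cox_working_data(beta, X, time, status, jitter=1e-8):
    """Quadratic approximation (3): returns (X_tilde, y_tilde) such that the
    partial log-likelihood is locally ~ -0.5 * ||y_tilde - X_tilde @ beta||^2.
    Assumes no tied event times; the jitter term is an ad hoc stabiliser."""
    n = X.shape[0]
    eta = X @ beta
    w = np.exp(eta)
    mu = status.astype(float).copy()        # gradient  d l / d eta
    A = np.zeros((n, n))                    # minus Hessian  -d^2 l / (d eta d eta^T)
    for i in np.where(status == 1)[0]:
        r = time >= time[i]                 # risk set at event time t_i
        p = np.where(r, w, 0.0) / w[r].sum()
        mu -= p
        A += np.diag(p) - np.outer(p, p)
    A += jitter * np.eye(n)                 # keep A positive definite
    z = eta + np.linalg.solve(A, mu)        # working response  z = eta + A^- mu
    T = np.linalg.cholesky(A).T             # A = T^T T
    return T @ X, T @ z                     # X_tilde, y_tilde
```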

Tibshirani [5] proposed the Lasso (least absolute shrinkage and selection operator) method, which uses the penalty $P(\beta) = \sum_{j}|\beta_j|$; it shrinks small coefficients to zero and hence results in a sparse representation of the solution. Fan and Li [4] proposed the smoothly clipped absolute deviation (SCAD) penalty, which avoids excessive penalties on large coefficients and enjoys the oracle properties. Zhang [7] proposed the minimax concave penalty (MCP), a continuous and nearly unbiased approach for high-dimensional linear regression. Zhang and Lu [6] suggested an adaptive Lasso method, which uses the weighted penalty $P(\beta) = \sum_{j} w_j |\beta_j|$, where the weights $w_j$ are chosen adaptively from the data. Zou and Hastie [13] proposed the elastic net method, which combines the $L_1$ and $L_2$ penalties.

The above-mentioned regularized regression methods are based on the $L_1$ penalty. Recently, several works on learning sparse models have stressed the need for other penalties to achieve a better sparsity profile. For instance, Rosset and Zhu [14] suggested replacing the $L_1$ norm with a nonconvex $L_q$ norm ($0 < q < 1$). Zhang [15] presented a multistage convex relaxation scheme, which relaxes a nonconvex regularization into a sequence of convex problems. Mazumder et al. [16] pursued a coordinate-descent approach with nonconvex penalties (SparseNet) and studied its convergence properties. Xu et al. [9, 10] further explored the properties of the $L_q$ ($0 < q < 1$) penalty and revealed the extreme importance and special role of the $L_{1/2}$ regularization. In our previous work [17, 18], we developed several fast algorithms using the $L_{1/2}$ penalty to solve the logistic regression model and the Cox model. Our computational results showed that $L_{1/2}$ regularization outperforms some $L_1$ regularization methods. In this paper, we propose a novel harmonic regularization method that can approximate the nonconvex $L_q$ penalties. We also develop a fast harmonic regularization algorithm to solve the Cox model for the high-dimension, low-sample-size problem (the “large $p$, small $n$” problem).

The rest of the paper is organized as follows. Section 2 describes the harmonic regularization method. Section 3 gives a harmonic regularization algorithm for obtaining estimates from the Cox model. Section 4 evaluates our method through simulation studies and applications to four real microarray datasets, such as the diffuse large B-cell lymphoma (DLBCL) datasets with survival times and gene expression data. Section 5 concludes the paper with some remarks.

2. Harmonic Regularization

In general, a unified framework for regularization in machine learning has the form
$$\min_{\beta}\ \bigl\{ L(\beta) + \lambda P(\beta) \bigr\}, \qquad (5)$$
where $L(\beta)$ is a loss function, $P(\beta)$ is a penalty function, and $\lambda$ is a tuning parameter. Different penalty functions $P(\beta)$ correspond to different constraints on the model and therefore lead to different solutions. The penalized constraint is the weakest when $\lambda = 0$ and becomes stronger as $\lambda$ increases.

Obviously, a regularization problem (5) consists of two elements, the loss function $L(\beta)$ and the penalty function $P(\beta)$, and different choices of loss and penalty result in different algorithms. For example, when the loss function is the hinge loss and the penalty is the $L_2$ norm, the result is a support vector machine. Let the loss function be the squared loss and use $L_q$ to denote the corresponding regularization method. If $q = 2$, it is ridge regression [19], which can be used to solve ill-posed problems. If $q = 0$, it is best subset regression [20], which applies $L_0$ regularization with a penalty counting the nonzero coefficients. When $q = 1$, it is the Lasso algorithm [21], which applies $L_1$ regularization. The Lasso and its variations (the Lasso-type algorithms), such as the elastic net [13], SCAD [4], MCP [7], adaptive Lasso [6], and stage-wise Lasso [22], have been extensively studied and applied in recent years in statistics and machine learning.

It is well known that $L_0$ regularization yields the ideal, sparsest solutions for variable selection. Unfortunately, $L_0$ regularization is a combinatorial optimization problem, which is difficult to solve. In contrast, $L_1$ regularization leads to a convex optimization problem that is easy to solve, but it does not yield sufficiently sparse variable selection. Donoho et al. [23, 24] have shown that $L_1$ regularization is equivalent to $L_0$ regularization under certain conditions. These conditions characterize the problems for which, no matter whether $L_0$ or $L_1$ regularization is applied, the same sparse solutions will be produced. However, for many practical problems, the sparsity of the solutions yielded by $L_0$ and $L_1$ regularization is far from equivalent. In particular, the solutions found with $L_1$ regularization are very often less sparse than the solutions found with $L_0$ regularization.

In fact, when $0 < q < 1$, the $L_q$ regularization automatically performs variable selection by removing predictors with very small nonzero estimated coefficients. The smaller $q$ is, the sparser the solutions found with $L_q$ regularization will be. This leads researchers to study $L_q$ regularization with $0 < q < 1$, because it can find sparser solutions than $L_1$ regularization and is easier to solve than $L_0$ regularization. For example, Zhang [15] presented a multistage convex relaxation scheme for solving problems with nonconvex objective functions and analyzed the behavior of a specific multistage relaxation scheme for learning formulations with sparse regularization.

Nevertheless, the $L_q$ penalty with $0 < q < 1$ has not attracted much attention in applications, mainly because when $q < 1$ the penalty function changes from convex to nonconvex, so the corresponding optimization problem is not easy to solve. A further difficulty is that the derivative of the penalty function at the origin is infinite, which invalidates the ordinary optimization algorithms.

In this paper, we propose the harmonic regularization, which can approximate the $L_q$ penalties with $1/2 \le q \le 1$, since research has shown that the $L_{1/2}$ penalty can be taken as a representative of the $L_q$ ($0 < q < 1$) penalties [22]. The harmonic regularization scheme replaces the penalty in (4) with a harmonic penalty $P_a(\beta)$ governed by a shrinkage parameter $a$ with $1 < a < 2$. As the shrinkage parameter $a$ moves between its two extreme values 1 and 2, the harmonic regularization interpolates between the convex $L_1$ regularization and the nonconvex $L_{1/2}$ regularization. Moreover, compared with the $L_q$ ($0 < q < 1$) penalties, the harmonic regularization has the property that its first derivative is finite at the origin, which implies that the corresponding regularization problem can be efficiently solved via direct path seeking techniques.

3. The Harmonic Regularization Algorithm for the Cox Model

In this section, we propose a generalized path seeking algorithm for the harmonic regularization of the Cox model. As mentioned in the previous section, when $L_q$ ($0 < q < 1$) regularization is applied, an inevitable difficulty is how to efficiently solve the resulting nonconvex optimization problem (it is easy to see that when $q \ge 1$ the penalty term is convex and this difficulty does not arise). Fortunately, direct path seeking makes it possible to overcome this difficulty. Direct path seeking sequentially constructs a path directly in the parameter space that closely approximates the path corresponding to a penalty function, without having to repeatedly solve numerical optimization problems. Popular path seeking methods based on squared-error loss include partial least squares regression (PLS) [23], forward stepwise regression [22], least angle regression [25], piecewise linear paths [14], and gradient boosting. Friedman [26] proposed the generalized path seeking algorithm, which can produce solutions that closely approximate those for any convex loss function and nonconvex penalty. The advantages of path seeking methods provide a new way to solve regularization problems with nonconvex penalties. We therefore propose a new generalized path seeking method to solve the harmonic regularization problem.

We write the harmonic penalty as a sum of separable terms, $P_a(\beta) = \sum_{j=1}^{p} p_a(|\beta_j|)$, where each term depends on a single coefficient. Note that $\partial p_a(|\beta_j|)/\partial |\beta_j| > 0$, which shows that each additive term is a monotonically increasing function of the absolute value of its argument. This implies that the harmonic penalty function we have suggested satisfies the validity condition of the generalized path seeking algorithm [11]. Let $\nu$ measure length along the path and $\Delta\nu$ be a small increment. Define
$$g_j(\nu) = -\Bigl[\frac{\partial L(\beta)}{\partial \beta_j}\Bigr]_{\beta = \beta(\nu)}, \qquad p_j(\nu) = \Bigl[\frac{\partial P_a(\beta)}{\partial |\beta_j|}\Bigr]_{\beta = \beta(\nu)},$$
and let $\lambda_j(\nu) = g_j(\nu)/p_j(\nu)$. Then, the harmonic regularization algorithm for the Cox model proceeds as follows:
(1) initialize $\nu = 0$, $\beta_j(0) = 0$ for $j = 1, \ldots, p$, and choose a small increment $\Delta\nu > 0$;
(2) compute $\tilde{y}$ and $\tilde{X}$ based on (3) using the current value of $\beta$;
(3) loop {
(4) compute $\lambda_j(\nu) = g_j(\nu)/p_j(\nu)$ for $j = 1, \ldots, p$;
(5) $S = \{\, j : \beta_j(\nu) \neq 0 \ \text{and}\ \operatorname{sign}(\beta_j(\nu)) \neq \operatorname{sign}(\lambda_j(\nu)) \,\}$;
(6) if $S = \emptyset$, $j^{*} = \arg\max_{j} |\lambda_j(\nu)|$;
(7) else $j^{*} = \arg\max_{j \in S} |\lambda_j(\nu)|$;
(8) $\beta_{j^{*}}(\nu + \Delta\nu) = \beta_{j^{*}}(\nu) + \Delta\nu\,\operatorname{sign}(\lambda_{j^{*}}(\nu))$;
(9) $\beta_j(\nu + \Delta\nu) = \beta_j(\nu)$ for all $j \neq j^{*}$;
(10) $\nu = \nu + \Delta\nu$;
(11) } until $\lambda_j(\nu) = 0$ for all $j$;
(12) update $\beta$ with the current path solution, and go back to Step 2 until the convergence criterion is met.

In the above algorithm, after Step 2, at each step the coefficients whose sign is opposite to that of the corresponding $\lambda_j(\nu)$ are identified. When this set $S$ is empty, the coefficient corresponding to the largest component of $\lambda(\nu)$ in absolute value is selected at Step 6. When there are one or more elements in the set $S$, the coefficient with the largest corresponding $|\lambda_j(\nu)|$ within this subset is selected instead. The selected coefficient is then incremented by a small amount in the direction of the sign of its corresponding $\lambda_{j^{*}}(\nu)$, while all other coefficients remain unchanged, yielding the solution for the next path point $\nu + \Delta\nu$. The iterations continue until all components of $\lambda(\nu)$ vanish, and the algorithm then reaches a regularized solution for the harmonic regularized Cox model.
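To make the path seeking loop concrete, here is a minimal Python sketch of the generalized path seeking iteration for the squared-error loss, written against an arbitrary separable penalty supplied through its derivative `penalty_deriv`. Since the explicit form of the harmonic penalty is not reproduced here, the example plugs in the $L_1$ derivative as a stand-in, so it should be read as an illustration of the algorithmic skeleton rather than the paper's exact method.

```python
import numpy as np

def gps_path(X, y, penalty_deriv, dnu=0.01, n_steps=2000):
    """Generalized path seeking for 0.5*||y - X beta||^2 + penalty.
    penalty_deriv(b) returns dP/d|beta_j| evaluated elementwise at b = |beta_j|.
    Returns the list of coefficient vectors along the path."""
    n, p = X.shape
    beta = np.zeros(p)
    path = [beta.copy()]
    for _ in range(n_steps):
        g = X.T @ (y - X @ beta)                  # negative gradient of the loss
        lam = g / penalty_deriv(np.abs(beta))     # lambda_j = g_j / p_j
        if np.max(np.abs(lam)) < 1e-8:            # no descent direction left
            break
        S = np.where((beta != 0) & (np.sign(beta) != np.sign(lam)))[0]
        if S.size == 0:
            j = int(np.argmax(np.abs(lam)))       # largest |lambda_j| overall
        else:
            j = int(S[np.argmax(np.abs(lam[S]))])  # largest |lambda_j| within S
        beta[j] += dnu * np.sign(lam[j])          # small step on one coefficient
        path.append(beta.copy())
    return path

# stand-in penalty derivative (L1); the harmonic penalty derivative would go here
l1_deriv = lambda b: np.ones_like(b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=50)
print(gps_path(X, y, l1_deriv)[-1].round(2))
```

In the full Cox algorithm above, this inner loop would operate on the working data $\tilde{X}$, $\tilde{y}$ from Step 2, and the outer iteration would refresh them from the current $\beta$.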

4. Simulation

4.1. Selection of the Shrinkage Parameter and the Tuning Parameter

To select the shrinkage parameter $a$ and the tuning parameter $\lambda$, we use maximization of the cross-validated partial likelihood (CVPL) proposed by van Houwelingen et al. [27], which is defined as
$$\mathrm{CVPL}(a, \lambda) = \sum_{i=1}^{n} \Bigl[ l\bigl(\hat{\beta}_{(-i)}(a, \lambda)\bigr) - l_{(-i)}\bigl(\hat{\beta}_{(-i)}(a, \lambda)\bigr) \Bigr],$$
where $\hat{\beta}_{(-i)}(a, \lambda)$ is the estimate of $\beta$ obtained by the harmonic regularization procedure with parameters $a$ and $\lambda$ from the data without the $i$th subject. The terms $l(\cdot)$ and $l_{(-i)}(\cdot)$ are the log partial likelihoods with all the subjects and without the $i$th subject, respectively. The optimal values of the parameters $a$ and $\lambda$ are chosen to maximize the sum of the contributions of each subject to the log partial likelihood over a grid of $(a, \lambda)$ values. CVPL is a special case of a more general cross-validated likelihood approach to model selection and has been shown to perform well for prediction in the context of penalized Cox regression.
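The following sketch shows how the CVPL criterion can be evaluated over a parameter grid. Here `fit_harmonic_cox` is a hypothetical fitting routine standing in for the harmonic regularization procedure of Section 3, and the partial log-likelihood helper mirrors the one given earlier; both the interface and the grid are our own illustrative choices.

```python
import numpy as np

def cox_partial_loglik(beta, X, time, status):
    eta = X @ beta
    return sum(eta[i] - np.log(np.exp(eta[time >= time[i]]).sum())
               for i in np.where(status == 1)[0])

def cvpl(fit_harmonic_cox, X, time, status, a, lam):
    """Cross-validated partial likelihood for one (a, lambda) pair.
    fit_harmonic_cox(X, time, status, a, lam) -> beta is assumed (hypothetical)."""
    n = X.shape[0]
    total = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        beta_i = fit_harmonic_cox(X[keep], time[keep], status[keep], a, lam)
        total += (cox_partial_loglik(beta_i, X, time, status)
                  - cox_partial_loglik(beta_i, X[keep], time[keep], status[keep]))
    return total

# grid search: pick (a, lambda) maximizing CVPL (a_grid, lam_grid are placeholders)
# best = max(((a, l) for a in a_grid for l in lam_grid),
#            key=lambda pair: cvpl(fit_harmonic_cox, X, time, status, *pair))
```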

4.2. Model Validation Measures

Measuring performance on censored survival data is more complicated: the prediction error can be computed directly only for cases that are not right censored. Thus, several specially designed measures have been proposed in the literature. In this paper, we employ the integrated Brier score (IBS) [28] and the concordance index (CI) [29] to evaluate the prediction ability of the regularization methods.

Integrated Brier Score (IBS). The Brier score (BS) is defined as a function of time by
$$\mathrm{BS}(t) = \frac{1}{n}\sum_{i=1}^{n}\Biggl[ \frac{\hat{S}(t \mid x_i)^{2}\, I(t_i \le t,\ \delta_i = 1)}{\hat{G}(t_i)} + \frac{\bigl(1 - \hat{S}(t \mid x_i)\bigr)^{2}\, I(t_i > t)}{\hat{G}(t)} \Biggr],$$
where $\hat{G}(\cdot)$ denotes the Kaplan-Meier estimate of the censoring distribution and $\hat{S}(t \mid x_i)$ stands for the estimated survival probability of patient $i$. Note that $\mathrm{BS}(t)$ depends on the point in time $t$, and its values lie between 0 and 1. Good predictions at time $t$ result in small values of BS. The integrated Brier score (IBS) is given by
$$\mathrm{IBS} = \frac{1}{\max(t_i)} \int_{0}^{\max(t_i)} \mathrm{BS}(t)\, dt.$$
The IBS is used to assess the goodness of the predicted survival functions of all observations at every time between 0 and $\max(t_i)$.
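A minimal numpy sketch of the Brier score and its integral, following the inverse-probability-of-censoring weighting above [28]. The reverse Kaplan-Meier helper, the linear interpolation of $\hat{G}$, and the function interface (predicted survival probabilities supplied by a callable) are implementation choices of ours.

```python
import numpy as np

def km_censoring(time, status):
    """Reverse Kaplan-Meier estimate G(t) of the censoring distribution."""
    order = np.argsort(time)
    t, cens = time[order], 1 - status[order]        # censoring "events"
    g, surv, n = [], 1.0, len(t)
    for k in range(n):
        surv *= 1.0 - cens[k] / (n - k)             # at-risk count is n - k
        g.append(surv)
    return lambda s: np.interp(s, t, g, left=1.0)   # interpolated approximation

def brier_score(t, surv_pred, time, status, G):
    """BS(t) with IPCW weights; surv_pred[i] = predicted S(t | x_i)."""
    event_before = (time <= t) & (status == 1)
    still_alive = time > t
    w1 = np.where(event_before, 1.0 / np.maximum(G(time), 1e-8), 0.0)
    w2 = np.where(still_alive, 1.0 / max(G(t), 1e-8), 0.0)
    return np.mean(w1 * surv_pred**2 + w2 * (1.0 - surv_pred)**2)

def integrated_brier(surv_fn, time, status, grid):
    """IBS over the grid (which should start at 0) by the trapezoidal rule;
    surv_fn(t) returns the vector of predicted survival probabilities at t."""
    G = km_censoring(time, status)
    bs = np.array([brier_score(t, surv_fn(t), time, status, G) for t in grid])
    return np.sum((bs[1:] + bs[:-1]) / 2.0 * np.diff(grid)) / grid[-1]
```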

Concordance Index (CI). The concordance index (CI) can be interpreted as the fraction of all pairs of subjects whose predicted survival times are correctly ordered among all pairs that can actually be ordered. By the CI definition, a pair of subjects $(i, j)$ can be ordered when $t_i < t_j$ and $\delta_i = 1$; the pair is counted as concordant when the predicted survival of subject $i$ is lower than that of subject $j$, that is, $\hat{S}(t \mid x_i) < \hat{S}(t \mid x_j)$, where $\hat{S}$ is the estimated survival function. The pairs for which neither ordering can be determined are excluded from the calculation of CI. Thus, the CI is defined as the number of concordant pairs divided by the number of comparable pairs. Note that the values of CI are between 0 and 1; perfect predictions by the fitted model would lead to a CI of 1, while random predictions have a CI value of 0.5.
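A small sketch of the concordance index in the Harrell style: pairs are comparable when the shorter observed time corresponds to an event, and a pair is concordant when the higher predicted risk is assigned to the subject that failed earlier. Counting tied risk scores as 1/2 is a common convention we adopt here, not something stated in the paper.

```python
import numpy as np

def concordance_index(risk, time, status):
    """Harrell's C: risk[i] is a predicted risk score (higher = worse outcome)."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if status[i] != 1:
            continue                        # pair usable only if the earlier time is an event
        for j in range(n):
            if time[i] < time[j]:           # subject i failed before subject j was observed
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```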

4.3. Analyses of the Simulated Data

In this section, we evaluate the performance of the harmonic regularization method for the Cox model in a simulation study. We generate high-dimensional, low-sample-size data that contain many irrelevant features. Six methods are compared with our proposed harmonic regularization approach (HRA): the Lasso ($L_1$) penalty, the smoothly clipped absolute deviation (SCAD) penalty, the minimax concave penalty (MCP), the adaptive Lasso (A-Lasso), the elastic net, and the $L_{1/2}$ penalty.

We adopted the Cox model simulation scheme in Bender's work [30]. The data generation procedure is as follows; a code sketch is given after Step 4.

Step 1. We generate latent vectors independently from a standard normal distribution and construct the predictor vectors from them so that the pairwise correlation between predictors equals $\rho$, where $\rho$ is the correlation parameter of the predictor vectors.

Step 2. The survival time $t_i$ ($i = 1, \ldots, n$, where $n$ indicates the sample size) is constructed from a uniformly distributed variable $u_i$ by
$$t_i = \Bigl( \frac{-\log(u_i)}{\mu \exp\bigl(x_i^{T}\beta^{*} + \sigma \varepsilon_i\bigr)} \Bigr)^{1/v},$$
where $v$ is the shape parameter, $\mu$ is the scale parameter, $\beta^{*}$ is the ground-truth regression coefficient vector, $\varepsilon_i$ is an independent random error generated from $N(0, 1)$, and $\sigma$ is the parameter that controls the signal-to-noise ratio.

Step 3. The censoring time $c_i$ is obtained from an exponential distribution $E(\theta)$, where $\theta$ is determined by the specified censoring rate.

Step 4. We define the observed time $\tilde{t}_i = \min(t_i, c_i)$ and the censoring indicator $\delta_i = I(t_i \le c_i)$; the observed data $(\tilde{t}_i, \delta_i, x_i)$, $i = 1, \ldots, n$, for the Cox model (1) are thus generated.
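The four steps above can be sketched as follows. The equicorrelated construction of the predictors, the symbols ($\rho$, $\sigma$, shape, scale, censoring rate), and the illustrative nonzero coefficient values are our reading of the setup, so this should be treated as an illustrative reconstruction rather than the authors' exact generator.

```python
import numpy as np

def simulate_cox_data(n, p, beta_true, rho=0.1, sigma=0.2,
                      shape=1.0, scale=1.0, cens_rate=1.0, seed=0):
    """Generate (time, status, X) following a Weibull-type Cox scheme [30].
    Predictors are equicorrelated with correlation rho (an assumption here)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, p))
    w = rng.normal(size=(n, 1))
    X = np.sqrt(1 - rho) * z + np.sqrt(rho) * w            # Step 1: corr(X_j, X_k) = rho
    u = rng.uniform(size=n)
    eps = rng.normal(size=n)
    hazard = scale * np.exp(X @ beta_true + sigma * eps)   # Step 2: noisy linear predictor
    t = (-np.log(u) / hazard) ** (1.0 / shape)             # Weibull survival times
    c = rng.exponential(1.0 / cens_rate, size=n)           # Step 3: exponential censoring
    time = np.minimum(t, c)                                # Step 4: observed data
    status = (t <= c).astype(int)
    return time, status, X

beta_true = np.zeros(1000); beta_true[:5] = 1.0            # illustrative nonzero values
time, status, X = simulate_cox_data(200, 1000, beta_true)
print(status.mean())   # fraction of uncensored observations
```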

In every simulation, the dimension of the predictor vector is 1000; the first five true coefficients are nonzero and the remaining coefficients are zero. About 25% of the data are right censored. We consider training sample sizes $n = 100$, 150, and 200, correlation coefficients $\rho = 0.1$ and 0.5, and noise control parameters $\sigma = 0.2$ and 0.5. To assess the variability of the experiment, each method is evaluated on a test set of 100 randomly generated samples.

The optimal tuning parameter in the regularization models can be estimated in many ways; it is often chosen by $K$-fold cross-validation (CV). Note that the choice of $K$ depends on the size of the training set. In our experiments, we use 10-fold cross-validation ($K = 10$). The elastic net method has two tuning parameters, so we need to cross-validate on a two-dimensional surface.

Table 1 shows the average number of variables selected and the recovery rate of each regularization method over 500 runs. The recovery rate is defined as the ratio of the average number of selected relevant variables to the average number of selected variables [9]. As shown in Table 1, when the sample size increases, the performance of all seven methods improves. For example, with $\rho = 0.1$ and $\sigma = 0.2$, the average number of variables selected by the harmonic regularization method decreased from 6.6 to 5.8 and its recovery rate improved from 0.72 to 0.86 as the sample size increased from 100 to 200. When the correlation parameter and the noise parameter increase, the variable selection performance of all seven methods degrades. For example, at $n = 200$ with $\rho = 0.1$, the average recovery rate of the harmonic method decreased from 0.86 to 0.71 as $\sigma$ increased from 0.2 to 0.5; with $\sigma = 0.5$, the average recovery rate of the harmonic method decreased from 0.63 to 0.50 as $\rho$ increased from 0.1 to 0.5. Moreover, in our simulation, the influence of the noise appears to be slightly larger than that of the variable correlation on the performance of all seven methods. On the other hand, under the same parameter settings, the recovery rates of the harmonic method and the $L_{1/2}$ penalty are almost always better than those of the other five methods. For example, in the $n = 100$ setting described above, the recovery rate of the harmonic method is 0.72, much better than the rates of 0.14, 0.43, 0.54, 0.28, and 0.13 obtained by the Lasso, SCAD, MCP, adaptive Lasso, and elastic net, respectively, and slightly better than the 0.71 obtained by the $L_{1/2}$ penalty method.

To evaluate the prediction performance of the seven regularization methods for the Cox model, Table 2 presents their average IBS and CI values on the simulated datasets over 500 runs.

In terms of IBS and CI, across the different parameter settings, no method uniformly outperformed the others, and the differences in prediction performance were small. For example, in one setting the average IBS of the harmonic method is 0.079, better than the values of 0.081, 0.087, 0.084, 0.08, 0.08, and 0.084 obtained by the Lasso, SCAD, MCP, adaptive Lasso, elastic net, and $L_{1/2}$ penalty, respectively. In another setting, the average CI of the harmonic method is 0.845, better than the values of 0.749, 0.788, 0.822, 0.757, and 0.838 obtained by the Lasso, SCAD, MCP, adaptive Lasso, and $L_{1/2}$ penalty, but slightly worse than the 0.851 obtained by the elastic net.

Combined with the results reported in Table 1, we conclude that the harmonic penalized method shows predictive performance better than or equivalent to that of the other regularization methods.

4.4. Analysis of the Real Microarray Datasets

In this section, we evaluate the performance of the harmonic regularization method on real survival gene expression datasets. Four publicly available datasets are used in this part. A brief description of these datasets is given below and summarized in Table 3.

Diffuse Large B-cell Lymphoma Dataset (DLBCL) 2002. This dataset was published by Rosenwald et al. [31]. It consists of 240 patient samples, each with 7399 gene expression measurements. The clinical outcome is survival time, either observed or censored.

Diffuse Large B-cell Lymphoma Dataset (DLBCL) 2003. This dataset is from Rosenwald et al. [32]. It consists of 92 lymphoma patients, each with expression measurements for 8810 genes.

Lung Cancer Dataset. The lung cancer dataset is from Beer et al. [33]. It consists of gene expressions of 4966 genes for 83 patients. The survival time as well as the censoring status is available.

AML Dataset. The AML dataset is from Bullinger et al. [34]. It contains the expression profiles of 6283 genes for 116 patients, and the number of censored cases is 49.

We evaluated the prediction accuracy of the seven regularization methods using random partitioning: a training set of about 2/3 of the patients is used for estimation and a test set of about 1/3 of the patients is used for assessing prediction capability. For estimating the tuning parameters, we employed a five-fold cross-validation scheme on the training set. We repeated each procedure 200 times; a sketch of this protocol is given below.
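The evaluation protocol amounts to repeated random splitting. A compact sketch follows, with `fit_and_score` as a hypothetical stand-in for fitting a penalized Cox model on the training part and computing IBS/CI on the test part.

```python
import numpy as np

def repeated_holdout(fit_and_score, X, time, status, n_repeats=200, seed=0):
    """About 2/3 of the patients for training and 1/3 for testing,
    repeated n_repeats times; returns the average score vector."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    scores = []
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        tr, te = perm[: 2 * n // 3], perm[2 * n // 3:]
        scores.append(fit_and_score(X[tr], time[tr], status[tr],
                                    X[te], time[te], status[te]))
    return np.mean(scores, axis=0)
```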

Table 4 reports the average number of genes selected by each method. The harmonic regularization method selects fewer genes than the $L_1$-type methods (Lasso, SCAD, MCP, adaptive Lasso, and elastic net) and is close to, in some cases slightly better than, the $L_{1/2}$ penalty. As shown in Table 4, for the DLBCL (2002) dataset, the harmonic penalized method selected about 71 genes, compared with about 174, 129, 129, 146, and 180 genes selected by the Lasso, SCAD, MCP, adaptive Lasso, and elastic net, respectively, and slightly fewer than the 76 selected by the $L_{1/2}$ penalty. For the DLBCL (2003) and AML datasets, the $L_{1/2}$ penalty selects the fewest genes and the harmonic method is second.

To assess predictive performance, Table 5 summarizes the IBS and CI results obtained by the seven methods. For both IBS and CI, the results of all the regularization methods were not very different, with the elastic net and the harmonic penalized method slightly outperforming the other five penalized methods. Combined with the results reported in Tables 4 and 5, we conclude that the harmonic penalized method selects a smaller subset of key genes while giving the best or equivalent predictive performance.

5. Conclusion

Variable selection is a fundamental problem in statistics and machine learning, and regularization is one way to address it. Generally speaking, a regularization algorithm combines a loss function with a penalty function. In the variable selection procedure, the harmonic regularization acts like a net that tends to capture the correct model, demonstrating the stronger sparsity and better selection accuracy of the harmonic regularization. We have provided a series of simulations showing that the $L_1$-type regularization methods are less efficient for this task, while the harmonic regularization and $L_{1/2}$ penalty methods are efficient and effective.

In the experimental part, we used four real datasets: the DLBCL (2002), the DLBCL (2003), the lung cancer, and the AML datasets. The results indicate that our harmonic regularization algorithm is very competitive for analyzing high-dimensional survival data in terms of sparsity: it reduces the size of the predictor set even further at only a moderate cost in prediction accuracy [8]. The harmonic penalized Cox model thus provides an efficient tool for building prediction models for survival time from high-dimensional biological data.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by Macau Science and Technology Development Funds (Grant no. 099/2013/A3) of Macau Special Administrative Region of the People’s Republic of China.