Abstract

Variable selection for regression with high-dimensional big data has found many applications in bioinformatics and computational biology. One appealing approach is $L_0$ regularized regression, which directly penalizes the number of nonzero features in the model. However, it is well known that $L_0$ optimization is NP-hard and computationally challenging. In this paper, we propose efficient EM and dual EM (DEM) algorithms that directly approximate the $L_0$ optimization problem. While EM is efficient with large sample sizes, DEM is efficient with high-dimensional ($p \gg n$) data. They also provide a natural solution to general $L_p$ problems, including lasso ($p = 1$) and elastic net. The regularization parameter $\lambda$ can be determined through cross validation or with AIC and BIC. We demonstrate our methods through simulation and high-dimensional genomic data. The results indicate that $L_0$ has better performance than lasso, SCAD, and MC+, and that $L_0$ with AIC or BIC has performance similar to computationally intensive cross validation. The proposed algorithms are efficient in identifying the nonzero variables with less bias and in constructing biologically important networks from high-dimensional big data.

1. Introduction

Variable selection with regularized regression has been one of the most active topics in machine learning and statistics. Regularized regressions identify outcome-associated features and estimate nonzero parameters simultaneously and are particularly useful for high-dimensional big data with small sample sizes. Big data are datasets with either a huge sample size or very high dimensionality, or both. In many real applications, such as bioinformatics, image and signal processing, and engineering, a large number of features are measured, but only a small number of features are associated with the dependent variables. Including irrelevant variables in the model leads to overfitting and deteriorates the prediction performance. Therefore, different regularized regression methods have been proposed for variable selection and model construction. $L_0$ regularized regression, which directly penalizes the number of nonzero parameters, is the most essential sparsity measure. Several popular information criteria, including the Akaike information criterion (AIC) [1], the Bayesian information criterion (BIC) [2], and the risk inflation criterion (RIC) [3], are based on the $L_0$ penalty and have been used extensively for variable selection. However, solving a general $L_0$ regularized optimization problem is NP-hard and computationally challenging. Exhaustive search with AIC or BIC over all possible combinations of features is computationally infeasible with high-dimensional big data.

Different alternatives have been proposed for the $L_0$ regularized regression problem. One common approach is to replace $L_0$ with $L_1$, which is known as the best convex relaxation of $L_0$. $L_1$ regularized regression (lasso) [4] is convex and can be solved by efficient gradient descent algorithms. It has found many applications in statistical genetics, bioinformatics, and medicine [5, 6]. Minimizing $L_1$ is equivalent to minimizing $L_0$ under certain conditions. However, the estimates from $L_1$ regularized regression are asymptotically biased, and lasso may not always choose the true model consistently [7]. Experimental results by Mancera and Portilla [8] also posed additional doubt about the equivalence of minimizing $L_1$ and $L_0$. Moreover, there are theoretical results [9] showing that while $L_1$ regularized regression never outperforms $L_0$ by more than a constant factor, in some cases $L_1$ regularized regression performs infinitely worse than $L_0$. Lin et al. [9] also showed that the optimal $L_1$ solutions are often inferior to the $L_0$ solutions found using greedy classic stepwise regression, even though solutions with the $L_1$ penalty can be found efficiently. More recent approaches that aim to reduce bias and overcome discontinuity include SCAD [10], $L_p$ regularization [11, 12], and MC+ [13]. However, multiple free parameters (the regularization parameter $\lambda$ and a shape parameter) must be tuned in those approaches, which is computationally intensive, so they are not well suited for big data mining. Even though there have been some efforts to solve $L_0$ regularized optimization problems [14, 15], $L_0$ was either approximated by a different continuous smooth function or transformed into a much larger ranking optimization problem. To the best of our knowledge, there is currently no efficient method that directly approximates $L_0$ for big data problems.

In this paper, we propose efficient EM algorithms that directly approximate the $L_0$ regularized regression problem. Our proposed approaches deal with the $L_0$ optimization effectively by solving a sequence of convex optimization problems and are efficient for high-dimensional data. They also provide a natural solution to general $L_p$ problems, including lasso with $p = 1$, elastic net [16], and combinations of different penalties [17]. Similar to lasso, the regularization parameter $\lambda$ can be determined by the generalized information criterion [18]; the optimal $\lambda$ for $L_0$ regularized regression can also be predetermined with different model selection criteria such as AIC, BIC, and RIC. The $L_0$ local graphical model with either AIC or BIC is much faster than the $L_1$ model with cross validation. We demonstrate our methods through simulation and high-dimensional genomic data. The proposed methods identify the nonzero variables with less bias and outperform lasso, SCAD, and MC+ by a large margin. They can also choose important genes and construct biological networks effectively.

2. Methods

Given an $n \times 1$ dependent variable $y$ and an $n \times p$ feature matrix $X$, a linear model is defined as
$$y = X\beta + \epsilon, \quad (1)$$
where $n$ is the number of samples, $p$ is the number of variables, $\beta = (\beta_1, \ldots, \beta_p)^T$ are the parameters to be estimated, and $\epsilon$ are the random errors with mean $0$ and variance $\sigma^2$. Assume that only a small subset of $X$ has nonzero $\beta_j$s. Let $S$ be the subset index of relevant variables with nonzero coefficients, and let $S^c$ be the index of irrelevant features with zero coefficients; we have $S \cup S^c = \{1, \ldots, p\}$, $S \cap S^c = \emptyset$, and $|S| = k$, where $k \ll p$. The error function for $L_0$ regularized regression is
$$E(\beta) = \|y - X\beta\|_2^2 + \lambda \|\beta\|_0, \quad (2)$$
where $\|\beta\|_0 = \#\{j : \beta_j \neq 0\}$ counts the number of nonzero parameters. One observation is that (2) is equivalent to
$$E(\beta, \eta) = \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \frac{\beta_j^2}{\eta_j}, \quad \eta_j = \beta_j^2, \quad (3)$$
when reaching the optimal solution (with the convention that terms with $\beta_j = 0$ contribute zero). Our EM methods will be derived from (3). We can rewrite (3) as the following two equations:
$$E(\beta \mid \eta) = \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \frac{\beta_j^2}{\eta_j}, \quad (4)$$
$$\eta = \beta \circ \beta. \quad (5)$$
Given $\eta$, (4) is a convex quadratic function and can be optimized by taking the first-order derivative:
$$\frac{\partial E(\beta \mid \eta)}{\partial \beta} = -2 X^T (y - X\beta) + 2\lambda (\beta \oslash \eta) = 0, \quad (6)$$
where $\oslash$ indicates element-wise division. Rewriting (6), we have
$$X^T X \beta + \lambda (\beta \oslash \eta) = X^T y. \quad (7)$$
In addition, since $\eta = \beta \circ \beta$, where $\circ$ is element-wise multiplication,
$$\eta \circ (\beta \oslash \eta) = \beta. \quad (8)$$
Let $D_\eta$ be a diagonal matrix with the $\eta_j$s on the diagonal and combine (7) and (8) together; we have
$$(D_\eta X^T X + \lambda I)\beta = D_\eta X^T y. \quad (9)$$
Solving (9) leads to the following explicit solution:
$$\beta = (D_\eta X^T X + \lambda I)^{-1} D_\eta X^T y, \quad (10)$$
$$\eta = \beta \circ \beta, \quad (11)$$
where (10) can be considered as the M-step of the EM algorithm maximizing the negative cost function $-E(\beta \mid \eta)$ and (11) can be regarded as the E-step updating $\eta$. Equations (10) and (11) together can also be treated as a fixed point iteration in nonlinear optimization. It is guaranteed to have optimal solutions under certain conditions, as shown in Theorem 1.
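To make the iteration concrete, here is a minimal NumPy sketch of the fixed point scheme in (10) and (11); the function name, the ridge-based initialization, the convergence tolerance, and the final hard-thresholding of numerically tiny coefficients are our own illustrative choices rather than a definitive implementation.

```python
import numpy as np

def l0_em(X, y, lam, eps=1e-6, max_iter=100):
    """Sketch of the EM fixed point iteration for L0 regularized regression.

    E-step (11): eta = beta * beta (element-wise).
    M-step (10): beta = (D_eta X'X + lam I)^{-1} D_eta X'y, with D_eta = diag(eta).
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    # Initialize beta with the L2 penalized ridge estimate, as recommended in the text.
    beta = np.linalg.solve(XtX + lam * np.eye(p), Xty)
    for _ in range(max_iter):
        eta = beta * beta                                   # E-step
        A = eta[:, None] * XtX + lam * np.eye(p)            # D_eta X'X + lam I
        beta_new = np.linalg.solve(A, eta * Xty)            # M-step
        if np.linalg.norm(beta_new - beta) < eps:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < eps] = 0.0  # treat numerically tiny coefficients as exact zeros
    return beta
```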

Theorem 1. Given an input matrix $X$, output $y$, and an initialized solution $\eta^0$, the nonlinear system determined by (10) and (11) will converge to an optimal solution when $\lambda$ is large enough that the corresponding fixed point mapping is a contraction.

Proof. Substituting (10) into (11) defines a fixed point mapping $\eta^{k+1} = f(\eta^k)$, where $f(\eta) = \beta(\eta) \circ \beta(\eta)$ and $\beta(\eta) = (D_\eta X^T X + \lambda I)^{-1} D_\eta X^T y$. First, $f$ is Lipschitz continuous on a bounded neighborhood of the initial value, and its Lipschitz constant decreases as $\lambda$ grows; when $\lambda$ is sufficiently large, the Lipschitz constant $L$ is smaller than 1, so the directional derivatives of $f$ are all less than 1 in magnitude. Now, given the initial value $\eta^0$ for (10) and (11), the sequence $\{\eta^k\}$ remains bounded, and $\|\eta^{k+1} - \eta^k\| \le L \|\eta^k - \eta^{k-1}\| \le L^k \|\eta^1 - \eta^0\|$. Therefore, for any $m > 0$, $\|\eta^{k+m} - \eta^k\| \le \sum_{i=0}^{m-1} \|\eta^{k+i+1} - \eta^{k+i}\| \le \frac{L^k}{1-L}\|\eta^1 - \eta^0\|$. Hence $\{\eta^k\}$ is a Cauchy sequence that has a limit solution $\eta^*$.
Note that $E(\beta)$ is not a convex function; multiple local optimal solutions may exist. However, given the initial value $\eta^0$, our algorithm always reaches the same optimal solution closest to $\eta^0$. Assume that there were two limit solutions $\eta_1^*$ and $\eta_2^*$ reachable from the same $\eta^0$; then $\|\eta_1^* - \eta_2^*\| = \|f(\eta_1^*) - f(\eta_2^*)\| \le L \|\eta_1^* - \eta_2^*\|$. Since $L < 1$, this inequality can only hold if $\|\eta_1^* - \eta_2^*\| = 0$. That is, $\eta_1^* = \eta_2^*$, so the optimal solution reached by the EM algorithm is always the same.
Finally, the EM algorithm moves closer to the optimal solution at each step, because $\|\eta^{k+1} - \eta^*\| = \|f(\eta^k) - f(\eta^*)\| \le L\|\eta^k - \eta^*\| < \|\eta^k - \eta^*\|$ whenever $\eta^k \neq \eta^*$.

Theorem 1 indicates that both the regularization parameter $\lambda$ and the initial values of the parameters are important. Given an initial value $\eta^0$, the method converges to an optimal solution as long as the contraction condition on $\lambda$ holds; Lemma 2 makes this more explicit.

Lemma 2. When the initial value $\eta^0$ is nonzero and $\lambda$ is not too large, the algorithm will find a nontrivial optimal solution for $\beta$. More specifically, it will converge to a nontrivial optimal solution when $\lambda$ stays below explicit bounds determined by $X$, $y$, and $D_{\eta^0}$ for the EM and DEM algorithms, respectively, where $D_{\eta^0}$ is a diagonal matrix with $\eta^0$ on the diagonal.

Proof. Starting from $\eta^0$, one can bound the first EM update from below and derive an inequality guaranteeing that the updated $\eta$ does not collapse to zero; this inequality is satisfied whenever $\lambda$ is below a bound determined by $X$, $y$, and $D_{\eta^0}$, and an analogous bound holds for the dual update. Taking $\lambda$ within these bounds, the algorithm will find a nontrivial optimal solution. Both Theorem 1 and Lemma 2 provide useful guidance for implementing the method and choosing an appropriate $\lambda$ and initial solution $\eta^0$. They show that the EM algorithm always converges to an optimal solution given a suitable $\lambda$ and initial solution $\eta^0$, and that the estimated value moves closer to that optimal solution after each EM iteration. Note that there is a trivial solution $\beta = 0$, $\eta = 0$, for (10) and (11); therefore, a nonzero initial $\eta^0$ is critical for finding a nontrivial solution. Our experience with this method indicates that initializing with the estimates from $L_2$ penalized ridge regression leads to quick convergence and superior performance. The algorithm with such initialization usually converges in under one hundred iterations, and the performance is substantially better than lasso, as shown in Section 3. The EM algorithm is shown in Algorithm 1.
To deal with big data problems where $p \gg n$, we also propose an efficient algorithm that solves a much smaller ($n \times n$) matrix inverse problem, similar to [19]. The algorithm is based on the following fact:
$$(D_\eta X^T X + \lambda I)^{-1} D_\eta X^T = D_\eta X^T (X D_\eta X^T + \lambda I)^{-1}.$$
So $\beta$ has the analytical solution
$$\beta = D_\eta X^T (X D_\eta X^T + \lambda I)^{-1} y, \quad (28)$$
which requires inverting only an $n \times n$ matrix instead of a $p \times p$ one. The dual EM (DEM) algorithm, which handles the $p \gg n$ problem with (28), is shown in Algorithm 2.
Apparently, while the EM algorithm is efficient for solving large-sample ($n \gg p$) big data problems, DEM can handle the $p \gg n$ problem more efficiently.
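As a rough back-of-the-envelope cost comparison (our own accounting, ignoring constants and any sparsity or caching), the per-iteration work of each variant is dominated by its linear solve:
$$\text{EM: } O(np^2) \text{ once for } X^T X \;+\; O(p^3) \text{ per iteration}, \qquad \text{DEM: } O(pn^2 + n^3) \text{ per iteration},$$
which is why EM is preferable when $n \gg p$ and DEM when $p \gg n$.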

Algorithm 1 (EM).
Given a regularization parameter $\lambda$, a small number $\epsilon > 0$,
and training data $X$ and $y$,
Initializing $\beta$ with the $L_2$ penalized ridge estimate $\beta = (X^T X + \lambda I)^{-1} X^T y$,
While 1,
  E-step: $\eta = \beta \circ \beta$, and $D_\eta = \mathrm{diag}(\eta)$
  M-step: $\beta = (D_\eta X^T X + \lambda I)^{-1} D_\eta X^T y$
  if the change in $\beta$ is less than $\epsilon$, Break; End
End

Algorithm 2 (DEM).
Given a regularization parameter $\lambda$, a small number $\epsilon > 0$,
and training data $X$ and $y$,
Initializing $\beta$ with the dual ridge estimate $\beta = X^T (X X^T + \lambda I)^{-1} y$,
While 1,
  E-step: $\eta = \beta \circ \beta$, and $D_\eta = \mathrm{diag}(\eta)$
  M-step: $\beta = D_\eta X^T (X D_\eta X^T + \lambda I)^{-1} y$
  if the change in $\beta$ is less than $\epsilon$, Break; End
End
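For completeness, here is a matching NumPy sketch of the dual update in Algorithm 2, under the same caveats as the primal sketch above (illustrative names, initialization, and tolerances; not a definitive implementation):

```python
import numpy as np

def dem_l0(X, y, lam, eps=1e-6, max_iter=100):
    """Sketch of the dual EM (DEM) iteration using the n x n system in (28)."""
    n, p = X.shape
    # Dual ridge initialization: beta = X'(XX' + lam I)^{-1} y.
    beta = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    for _ in range(max_iter):
        eta = beta * beta                                   # E-step
        XD = X * eta                                        # X D_eta: scale columns by eta
        beta_new = XD.T @ np.linalg.solve(XD @ X.T + lam * np.eye(n), y)  # M-step (28)
        if np.linalg.norm(beta_new - beta) < eps:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < eps] = 0.0  # treat numerically tiny coefficients as exact zeros
    return beta
```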

Lemma 3. Given an appropriate initial $\eta^0$, the final solution of the EM and DEM algorithms is an optimal solution for the $L_0$ approximation problem that minimizes the cost function in (3).

Proof. First, we show that the proposed algorithm is an $L_0$ approximation. Given a high-dimensional matrix $X$ and a threshold on the coefficient estimates, $L_0$ regularization sets all coefficient estimates below the threshold to 0 and keeps the large coefficients unchanged. This is the same as defining a binary vector $\gamma$, with a value of 0 or 1 for each feature, where $\gamma_j = 1$ if the coefficient estimate for that feature is above the threshold and $\gamma_j = 0$ otherwise. Let $D_\gamma$ be a diagonal matrix with $\gamma$ on its diagonal; we have the selected feature matrix $X D_\gamma$. We can build standard models with the matrix $X D_\gamma$ if we know $\gamma$ in advance. For instance, we can estimate the coefficients of a ridge regression given $X D_\gamma$ and $y$ with
$$\beta = (D_\gamma X^T X D_\gamma + \lambda I)^{-1} D_\gamma X^T y, \quad (29)$$
where $(X D_\gamma)^T = D_\gamma X^T$ because of the special structure of the diagonal matrix $D_\gamma$. It is guaranteed that the estimate for feature $j$ is 0 when $\gamma_j = 0$. However, in reality, we do not know $\gamma$. Estimating both $\gamma$ and $\beta$ is an NP-hard problem, since we need to solve a mixed-integer optimization problem. Comparing (29) with the M-step of the primal algorithm, $\beta = (D_\eta X^T X + \lambda I)^{-1} D_\eta X^T y$, where $D_\eta = \mathrm{diag}(\eta)$, it is clear that $D_\gamma$ is replaced by $D_\eta$ and the binary $\gamma$ is approximated by the continuous $\eta$ in the proposed algorithm. Therefore, the proposed algorithm is a direct $L_0$ approximation.
Next, we show that the proposed algorithm leads to a sparse solution. Note that the penalties for the regularized regression in (4) are inversely proportional to the squared magnitudes of the parameters. That is, the penalty on $\beta_j$ is
$$\lambda \frac{\beta_j^2}{\eta_j} = \lambda \frac{\beta_j^2}{\hat{\beta}_j^2}, \quad (30)$$
and $\eta_j = \hat{\beta}_j^2$ when the EM or DEM algorithm converges. Equation (30) shows that when the true parameter $\beta_j$ approaches 0, the penalty on that parameter grows without bound, so $\hat{\beta}_j$ must be 0 with the proposed algorithms. In addition, when the true parameter $\beta_j \neq 0$,
$$\lambda \frac{\beta_j^2}{\eta_j} \to \lambda, \quad (31)$$
because $\eta_j \to \beta_j^2$ when the algorithm converges. Therefore, Lemma 3 holds. Note that our proposed methods will find a sparse solution with a large number of iterations and a small $\epsilon$, even though the solution of an $L_2$ regularized (ridge) regression is not sparse. Small parameters ($\beta_j$s) become smaller at each iteration and will eventually go to zero (below the machine epsilon). We can also set a parameter to 0 if it falls below a predefined threshold $\epsilon$ to speed up the convergence of the algorithm.
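To spell out the connection to the $L_0$ penalty, note that at a converged solution $\hat{\beta}$ with $\eta_j = \hat{\beta}_j^2$ (and with the same convention as in (3) that terms with $\hat{\beta}_j = 0$ contribute zero), the weighted penalty in (4) reduces to a count of nonzero coefficients:
$$\lambda \sum_{j=1}^{p} \frac{\hat{\beta}_j^2}{\eta_j} = \lambda \sum_{j:\,\hat{\beta}_j \neq 0} \frac{\hat{\beta}_j^2}{\hat{\beta}_j^2} = \lambda\, \#\{j : \hat{\beta}_j \neq 0\} = \lambda \|\hat{\beta}\|_0.$$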
The proposed algorithms are similar to the iteratively reweighted least squares approach for $L_p$ optimization in the literature [20, 21]. However, instead of solving the $L_0$ optimization problem directly, those methods add a small value $\epsilon$ to the denominator to handle the undefined problem when a parameter equals 0, leading to an approximation of $L_0$ and biased estimates. In our proposed algorithm, the 0s are multiplied into the feature matrix ($X D_\eta$), so there is no undefined quantity anywhere in the iteration. Finally, similar procedures can be extended to general $L_p$ penalties without much difficulty. The $L_p$-based EM algorithm (LpEM) and the statistical properties of $L_0$ penalized regression are reported in the Appendix in the Supplementary Material available online at http://dx.doi.org/10.1155/2016/3456153. The proposed EM algorithm is similar to adaptive lasso [7] in that both use a weighted penalty. However, the weights in adaptive lasso are predetermined by ordinary least squares estimates when $n > p$ and by univariate regression coefficients when $p > n$, which may lead to inaccurate estimates. In contrast, our proposed EM updates the weights with an analytical solution at each iteration and automatically finds the optimal weights and estimates.

$L_0$-Based Local Graphical Model. One important application of $L_0$ regularized regression is to detect high-order correlation structures, which have numerous real-world applications including gene network analysis. Given a matrix $X$, let $x_i$ be the $i$th variable, and let $X_{-i}$ be the remaining variables; we have $x_i = X_{-i}\beta^{(i)} + \epsilon_i$, where the coefficients $\beta^{(i)}$ measure the partial correlations between $x_i$ and the rest of the variables. Therefore, we will minimize
$$\|x_i - X_{-i}\beta^{(i)}\|_2^2 + \lambda \|\beta^{(i)}\|_0, \quad i = 1, \ldots, p.$$
The high-order structure of $X$ is determined via a series of $L_0$ regularized regressions of each $x_i$ on the remaining variables $X_{-i}$. The collected nonzero regression coefficients are the edges of the graph. The $L_0$ local graphical model without cross validation is much more efficient computationally than the $L_1$ local graphical model. The $L_1$ local graphical model is computationally intensive because the regularization parameter for $L_1$ has to be determined through cross validation [22, 23]. For instance, given a matrix with 100 variables, to find the optimal $\lambda$ from 100 candidate $\lambda$'s with 5-fold cross validation, 500 models need to be evaluated for each variable $x_i$. Therefore, a total of $100 \times 500 = 50{,}000$ models have to be estimated to detect the dependency structure with lasso, which usually takes hours. However, only 100 models are required to identify the same correlation structure with $L_0$ regularized regression and AIC or BIC, which takes less than one minute on a similar problem. Notice that negative correlations between genes are difficult to confirm and are seemingly less biologically relevant [24]; most curated databases are constructed with similarity (dependency) measures. It is straightforward to study only the positive dependencies by simply setting the negative coefficient estimates to 0 in the EM algorithm.
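A minimal sketch of this neighborhood-selection procedure in NumPy is given below; it reuses the l0_em function sketched earlier, and the BIC-style default for $\lambda$, the post hoc truncation of negative coefficients, and the OR-symmetrization of edges are our own simplifications for illustration.

```python
import numpy as np

def l0_local_graph(X, lam=None, positive_only=True):
    """Sketch of the L0 local graphical model: regress each variable on the
    remaining ones and connect two variables when the coefficient is nonzero."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        y_j = X[:, j]
        # Default lambda in the spirit of BIC, with a crude variance plug-in
        # (an assumption of this sketch, not the paper's exact recipe).
        lam_j = lam if lam is not None else np.var(y_j) * np.log(n)
        beta = l0_em(X[:, others], y_j, lam_j)   # l0_em: the EM sketch from Section 2
        if positive_only:
            beta = np.where(beta > 0, beta, 0.0)  # keep positive dependencies only
        adj[j, others] = beta != 0
    return adj | adj.T                            # symmetrize the collected edges
```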

Determination of $\lambda$. The regularization parameter $\lambda$ determines the sparsity of the model. The standard approach for choosing $\lambda$ is cross validation, where the optimal $\lambda$ is determined by the minimal mean squared error (MSE) on the test data. One could also adapt the stability selection (SS) approach for determining $\lambda$ [25, 26]: it chooses the smallest $\lambda$ that minimizes the inconsistency in the number of nonzero parameters across the cross validation folds. We first calculate the mean and standard deviation (SD) of the number of nonzero parameters for each $\lambda$ and then find the smallest $\lambda$ with SD = 0, where SD = 0 indicates that all models in the $k$-fold cross validation have the same number of nonzero estimates. Our experience indicates that taking the larger of the $\lambda$'s chosen by minimal MSE and by stability selection gives the best performance. Choosing the optimal $\lambda$ from cross validation is computationally intensive and time-consuming. Fortunately, unlike lasso, identifying the optimal $\lambda$ for $L_0$ does not require cross validation. The optimal $\lambda$ can be determined by variable selection criteria: it can be directly set from the AIC, BIC, or RIC criterion with $\lambda = 2\hat{\sigma}^2$, $\lambda = \hat{\sigma}^2 \log(n)$, or $\lambda = 2\hat{\sigma}^2 \log(p)$, respectively. Each of these criteria is known to be optimal under certain conditions. This is a huge advantage of $L_0$, especially for big data problems. In general, we recommend using BIC as the information criterion for high-dimensional problems ($p \gg n$) and AIC when $n > p$.
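As a sketch of how $\lambda$ could be set directly from an information criterion without cross validation (the ridge-based noise variance plug-in below is our own illustrative choice, not a recipe prescribed by the text):

```python
import numpy as np

def choose_lambda(X, y, criterion="BIC"):
    """Pick lambda for the L0 penalty from AIC, BIC, or RIC,
    using a preliminary ridge fit to estimate the noise variance."""
    n, p = X.shape
    beta_ridge = np.linalg.solve(X.T @ X + np.eye(p), X.T @ y)  # arbitrary unit ridge penalty
    sigma2 = np.mean((y - X @ beta_ridge) ** 2)                 # crude noise variance estimate
    if criterion == "AIC":
        return 2.0 * sigma2
    if criterion == "BIC":
        return sigma2 * np.log(n)
    if criterion == "RIC":
        return 2.0 * sigma2 * np.log(p)
    raise ValueError("criterion must be AIC, BIC, or RIC")
```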

3. Results

3.1. Simulation Studies

To evaluate the performance of $L_0$ and $L_1$ regularization, we assume a linear model $y = X\beta + \epsilon$, where the input matrix $X$ is drawn from a Gaussian distribution with mean $0$ and different covariance structures $\Sigma$ indexed by a correlation parameter $\rho$ ranging from 0 to 0.8. The true model is sparse: only three features are associated with the output $y$, and the rest of the $\beta_j$'s are zero. In our first simulation, we compare $L_0$ and $L_1$ regularized regressions with a relatively small number of features and a moderate sample size. Fivefold cross validation is used to determine the optimal $\lambda$ and to compare the models' performances. We fit the regularized regression models over a range of regularization parameters, with 100 equally log-spaced values of $\lambda$ between lower and upper limits set separately for $L_0$ and lasso. The lasso function in the Statistics Toolbox of MATLAB (http://www.mathworks.com/) is used for comparison; cross validation with MSE is implemented in the toolbox. The computational results are reported in Table 1. Table 1 shows that $L_0$ outperforms lasso in all categories by a substantial margin when using the popular test MSE measure for model selection. In particular, the numbers of variables selected by $L_0$ are close to 3, the true number of nonzero variables, while lasso selects more than 11 features on average across the different correlation structures. The test MSEs and bias both increase with the correlation among features for both $L_0$ and lasso, but the test MSE and bias of $L_0$ are substantially lower than those of lasso: the maximal MSE of $L_0$ is 1.06, while the smallest MSE of lasso is 1.19, and the largest bias of $L_0$ is 0.28, while the smallest bias of lasso is 0.38. In addition (results not shown in Table 1), $L_0$ correctly identifies the true model 81, 74, 81, and 82 times out of 100 simulations under the four correlation settings (up to $\rho = 0.8$), respectively, while lasso never chooses the correct model. Therefore, compared to $L_0$ regularized regression, lasso selects more features than necessary and has larger bias in parameter estimation. Even though it is possible to get a correct model with lasso using a larger $\lambda$, the estimated parameters will have a bigger bias and worse predicted MSE.
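For reference, here is a sketch of a data-generating setup in the spirit of this simulation; the sample size, the AR(1)-style form of the covariance, and the three nonzero coefficient values are illustrative placeholders rather than the exact design used in the study.

```python
import numpy as np

def simulate(n=100, p=20, rho=0.5, seed=0):
    """Gaussian features with correlation controlled by rho and a sparse true beta.
    The covariance form and the coefficient values below are illustrative only."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    cov = rho ** np.abs(np.subtract.outer(idx, idx))   # cov[i, j] = rho^|i-j| (one common choice)
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.zeros(p)
    beta[:3] = [3.0, 1.5, 2.0]                         # three associated features (placeholder values)
    y = X @ beta + rng.standard_normal(n)
    return X, y, beta
```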

The same parameter setting is used for our second simulation, but the regularization parameter $\lambda$ is determined by the larger of the values chosen by minimal MSE and by stability selection. The computational results are reported in Table 2. Table 2 shows that the average number of selected features is much closer to 3, with slightly larger test MSEs. The maximal average number of features is 3.1, reduced from 3.49 when using the test MSE alone. In fact, with this combined model selection criterion and 100 simulations, EM identified the true model with three nonzero parameters 95, 95, 95, and 97 times, respectively (not shown in the table), while lasso did not choose any correct models. The average bias of the estimates with EM is also reduced. These results indicate that the combination of test MSE and stability selection in cross validation leads to better model selection than MSE alone with EM. However, the computational results did not improve much with lasso: over 13 features on average were selected under the different correlation structures, suggesting that lasso is inclined to select more spurious features than necessary. A much more conservative criterion with a larger $\lambda$ would be required to select the right number of features, which would induce larger MSE and bias and deteriorate the prediction performance.

Simulation with High-Dimensional Data. Our third simulation deals with high-dimensional data in which the number of samples $n$ is much smaller than the number of features $p$. The same correlation structures and the same true model are used for evaluating model performance. Besides lasso, $L_0$ was also compared with two other cutting-edge regularization methods, SCAD and MC+, implemented in the SparseReg package [27]. The simulation was repeated 100 times. The computational results are reported in Table 3. Table 3 shows that $L_0$ outperforms lasso by a large margin when correlations among features are low. When there is no correlation among features, 100 out of 100 simulations identify the true model with $L_0$, and 78 out of 100 simulations choose the correct model under mild correlation, while lasso, SCAD, and MC+ choose more features than necessary and no true model is found under any correlation setting. However, when correlations among features are large, the results are mixed. $L_0$ can still identify 23 out of 100 correct models, but the test MSE and bias of the $L_0$ parameter estimates are slightly larger than those of lasso, MC+, and SCAD. MC+ has the second best performance, with less bias and smaller test MSE, but it chooses more features than necessary. In addition, we notice that $L_0$ yields a sparser model as the correlation increases, indicating that $L_0$ tends to choose independent features. One reason SCAD and MC+ select more features may be that we only tuned the regularization parameter $\lambda$ and fixed the shape parameters of SCAD and MC+, respectively. A regularization path of $L_0$ regression is shown in Figure 1. As shown in Figure 1(a), the three associated features first increase their values as $\lambda$ grows and then go to zero when $\lambda$ becomes extremely large, while the irrelevant features all go to zero as $\lambda$ increases. Unlike lasso, which shrinks all parameters uniformly, $L_0$ only forces the estimates of the irrelevant features to zero, while keeping the estimates of the relevant features near their true values. This is the well-known oracle property of $L_0$: the penalized estimator is asymptotically equivalent to the oracle estimator obtained with only the signal variables and without penalization. For this specific simulation, the three estimated parameters are very close to their true values. Figures 1(b) and 1(c) show the test MSE and the standard deviation of the number of nonzero variables. The optimal $\lambda$ is chosen as the larger of the values from minimal test MSE and stability selection, as indicated by the vertical lines in Figure 1.

$L_0$ Regularized Regression without Cross Validation. Choosing the optimal parameter $\lambda$ with cross validation is time-consuming, especially with big data. As mentioned previously, the optimal $\lambda$ can be picked from theory instead of cross validation. Since we are dealing with a big data problem, RIC, with $\lambda = 2\hat{\sigma}^2\log(p)$, tends to penalize the parameters too much, so computational results with AIC and BIC without cross validation are reported in Table 4. Table 4 shows that $L_0$ regularized regression with AIC and BIC performs very well when compared with the results from computationally intensive cross validation in Table 3. Without correlation, BIC identifies the true model (100%), which is the same as cross validation in Table 3 and better than AIC's 78%. The bias of BIC (0.16) is only slightly higher than that of cross validation (0.14) but lower than that of AIC (0.19). Even though the MSEs with AIC and BIC are in-sample mean squared errors, which are not directly comparable to the test MSE from cross validation, the larger MSE with BIC indicates that BIC is a more stringent criterion than AIC and selects fewer variables. With mild correlation, and with some sacrifice in bias and MSE, BIC performs better than AIC in variable selection: the average number of features selected is exactly 3 and 94% of the simulations recognize the true model, while AIC chooses more features (3.72) than necessary and only 73% of the simulations are on target. Cross validation is the tightest measure, with 2.9 features on average and 75% of the simulations finding the correct model. When the correlations among the variables are high, the results are mixed: both BIC and AIC correctly identify more than half of the true models, while cross validation only recognizes 25% (5/20) of the models correctly. Therefore, compared with the computationally intensive cross validation, both BIC and AIC perform reasonably well, and the performance of BIC is similar to cross validation with far less computational time. In addition, we have suggested using the result of ridge regression as the initial value for the proposed algorithms; however, the proposed algorithm is quite stable under different initializations. With 100 randomized initializations, the estimates of the three nonzero parameters were essentially identical under the BIC criterion.

Simulations for Graphical Models. We simulate two network structures similar to those in Zhang and Mallick [28]: (i) a band 1 network, where the covariance matrix $\Sigma$ is chosen so that the resulting dependency graph has a band 1 structure (each variable is connected only to its immediate neighbors), and (ii) a more difficult band 2 network with weaker correlations, where variables are also connected to their second-order neighbors. The sample sizes considered include 100 and 200, and the number of variables is fixed. $L_0$ regularized regression with AIC and BIC is used to detect the network (correlation) structure. The consistency between the true and predicted structures is measured by the area under the ROC curve (AUC), the false discovery (positive) rate (FDR/FPR), and the false negative rate (FNR) of edges. The computational results are shown in Table 5. Table 5 shows that both AIC and BIC performed well: both achieved at least 0.90 AUC for the band 1 network and 0.80 AUC for the band 2 network with different sample sizes. AIC performed slightly better than BIC, especially for the band 2 network with weak correlations and small sample sizes. This is reasonable because BIC is a heavier penalty and forces most of the weaker partial correlations to 0. In addition, BIC has slightly larger AUCs for the band 1 network with strong correlations and larger sample sizes. One interesting observation is that the FDRs of both AIC and BIC are well controlled: the maximal FDRs of AIC for the band 1 and band 2 networks are 0.29% and 0.2%, while the maximal FDRs of BIC are only 0.1% and 0.03%, respectively. Controlling false discovery rates is crucial for identifying true associations with high-dimensional data in bioinformatics. In general, AUC increases and both FDR and FNR decrease as the sample size becomes larger, except for the band 2 network with BIC; the performance of BIC is not necessarily better with a larger sample size, since its penalty increases with the sample size. The $L_1$ graphical model was also used for comparison purposes [29, 30]. It performed as well as $L_0$ with AIC and BIC on the band 1 network but was the worst on the more difficult band 2 network. More interestingly, $L_1$ had the largest FDR, indicating that it selects more edges than necessary.

3.2. Application to Real Ovarian Cancer Data

The purpose of this application is to identify subnetworks and study the biological mechanisms of potential prognostic biomarkers for ovarian cancer with multisource gene expression data. The ovarian cancer data was downloaded from the KMplot website (http://www.kmplot.com/ovar/) [31]. The data were originally obtained by searching Gene Expression Omnibus (GEO; http://www.ncbi.nlm.nih.gov/geo/) and The Cancer Genome Atlas (TCGA; http://cancergenome.nih.gov/) across multiple platforms. All collected datasets have raw gene expression data, survival information, and at least 20 patients available, and the datasets were carefully merged across the different platforms. The final data has 1287 patient samples and 22277 probe sets representing 13435 common genes. We identified 112 top genes associated with patient survival times using univariate Cox regression. We then constructed a coexpression network from the 112 genes with $L_0$ regularized regression and identified biologically meaningful subnetworks (modules) associated with patient survival. The network is constructed with positive correlations only and the BIC criterion, and the computational time for constructing such a network is less than 2 seconds. One survival-associated subnetwork we identified is given in Figure 2. The 22 genes in the subnetwork were then uploaded to STRING (http://string-db.org/). STRING is an online database for exploring known and predicted protein-protein interactions (PPI). The interactions include direct (physical) and indirect (functional) associations. The prediction methods for PPI implemented in STRING include text mining, curated databases, experiments, coexpression, cooccurrence, gene fusion, and neighborhood on the chromosome. The PPI networks for the 22 genes are presented in Figure 3. Comparing Figures 2 and 3, we conclude that the 22 identified genes in the subnetwork of Figure 2 function together and are enriched with important biological interactions and associations. Nineteen of the 22 genes on the survival-associated subnetwork also have interactions on the known and predicted PPI network; the exceptions are LRRC15, ADAM12, and NKX3-2. Even though the two networks are not completely identical, many interactions in our subnetwork can also be verified on the PPI interaction network of Figure 3. For instance, the collagen gene COL5A2 is the most important gene, with the largest node degree in our subnetwork. Six of the 7 genes that link to COL5A2 also have direct edges on the PPI network; those directly connected genes (proteins) include FAP, CTSK, VCAN, COL1A1, COL5A1, and COL11A1. The remaining gene, SNAI2, is indirectly linked to COL5A2 through FBN1 on the PPI network. In addition, another important gene, with a node degree of 6, is Decorin (DCN). Four of the 6 genes directly connected to DCN in our subnetwork were confirmed on the PPI network, including FBN1, CTSK, LUM, and THBS2; the remaining two genes (SNAI2 and COLEC11) are indirectly connected to DCN on the PPI network. As indicated in Figure 2, the remaining 5 important genes with node degree 4 are POSTN, CTSK, COL1A1, COL5A1, and COL10A1, and the 8 genes with node degree 3 are FBN1, LUM, LRRC15, COL11A1, THBS2, SPARC, COL1A2, and FAP, respectively. Furthermore, those 22 genes are involved in biological process GO terms including extracellular matrix organization and disassembly and collagen catabolic, fibril, and metabolic processes.
They are also involved in several important KEGG pathways including ECM-receptor interaction, protein digestion and absorption, amoebiasis, focal adhesion, and TGF-β signaling pathways. Finally, a large proportion of the 22 genes are known to be associated with poor overall survival (OS) in ovarian cancer. For instance, VCAN and POSTN were demonstrated in vitro to be involved in ovarian cancer invasion induced by TGF-β signaling [32], and COL11A1 was shown to increase continuously during ovarian cancer progression and to be highly overexpressed in recurrent metastases; knockdown of COL11A1 reduces migration, invasion, and tumor progression in mice [33]. Other genes such as FAP, CTSK, FBN1, THBS2, SPARC, and COL1A1 are also known to be ovarian cancer associated [34–39]. Those genes contribute to cell migration and the progression of tumors and may be potential therapeutic targets for ovarian cancer, indicating that the proposed method can be used to construct biologically important networks efficiently.

4. Discussion

We proposed efficient EM algorithms for variable selection with $L_0$ regularized regression. The proposed algorithms find optimal solutions of the $L_0$ problem by solving a sequence of $L_2$-based ridge regressions. Given an initial solution, the algorithm is guaranteed to converge to a unique solution under mild conditions, and the EM estimate moves closer to the optimal solution after each iteration. Asymptotic properties, namely, consistency and the oracle property for the exact $L_0$ penalty, are established under mild conditions. Our method applies to fixed-, diverging-, and ultra-high-dimensional problems with tens or hundreds of thousands of features. We compared the performance of $L_0$ regularized regression and lasso with simulated low- and high-dimensional data. $L_0$ regularized regression outperforms lasso, SCAD, and MC+ by a substantial margin under different correlation structures. Unlike lasso, which selects more features than necessary, $L_0$ regularized regression chooses the true model with high accuracy, less bias, and smaller test MSE, especially when the correlation is weak. Cross validation with the computation of the entire regularization path is computationally intensive and time-consuming; fortunately, $L_0$ regularized regression does not require it. The optimal $\lambda$ can be directly determined from AIC, BIC, or RIC, and these criteria are optimal under appropriate conditions. We demonstrated that both AIC and BIC perform well when compared to cross validation. Therefore, there is a big computational advantage of $L_0$, especially with big data. In addition, we demonstrated with the graphical model simulations that $L_0$ regularized regression controls the false discovery (positive) rate (FDR) well with both AIC and BIC; the FDR is very low under different sample sizes with both criteria. Controlling FDR is crucial for biomarker discovery and computational biology, because further verifying candidate biomarkers is time-consuming and costly. We applied our proposed method to construct a network for ovarian cancer from multisource gene expression data and identified a subnetwork that is important both biologically and clinically, demonstrating that we can identify biologically important genes and pathways efficiently. Even though we demonstrated our method with gene expression data, the proposed method can be used for RNA-seq and metagenomic data, given that the data are appropriately normalized. Finally, because of the nonconvexity of $L_0$ regularized regression, there are multiple local optimal solutions, including the trivial solution $\beta = 0$, $\eta = 0$, as shown in (28). However, a nontrivial solution can be found efficiently as long as all parameters are initialized with nonzero values. We recommend the solution of ridge regression as the initial solution for the proposed algorithms.

Competing Interests

The authors declare that they have no competing interests.

Supplementary Materials

LpEM Algorithm and Statistical Properties.
