Abstract and Applied Analysis
Volume 2014 (2014), Article ID 569501, 7 pages
http://dx.doi.org/10.1155/2014/569501
Research Article

Multinomial Regression with Elastic Net Penalty and Its Grouping Effect in Gene Selection

1School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
2School of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China

Received 6 January 2014; Accepted 19 February 2014; Published 31 March 2014

Academic Editor: Yisheng Song

Copyright © 2014 Liuyuan Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

For the multiclass classification problem of microarray data, a new optimization model named multinomial regression with the elastic net penalty was proposed in this paper. The optimization model was constructed by combining the multinomial likelihood loss and the multiclass elastic net penalty, and it was proved to encourage a grouping effect in gene selection for multiclass classification.

1. Introduction

Support vector machine [1], lasso [2], and their extensions, such as the hybrid huberized support vector machine [3], the doubly regularized support vector machine [4], the 1-norm support vector machine [5], sparse logistic regression [6], the elastic net [7], and the improved elastic net [8], have been successfully applied to binary classification problems of microarray data. However, these binary classification methods cannot easily be extended to multiclass classification. Hence, multiclass classification problems remain difficult issues in microarray classification [9–11].

Besides improving accuracy, another challenge in the multiclass classification of microarray data is how to select the key genes [9–15]. By solving an optimization formula, a new multicategory support vector machine was proposed in [9] and successfully applied to microarray classification. However, this optimization model requires additional methods to select genes. To select genes automatically while performing multiclass classification, new optimization models [12–14], such as the L1-norm multiclass support vector machine in [12], the multicategory support vector machine with sup-norm regularization in [13], and the huberized multiclass support vector machine in [14], were developed.

Note that the logistic loss function not only has good statistical meaning but is also twice differentiable. Hence, regularized logistic regression optimization models have been successfully applied to binary classification problems [15–19]. Multinomial regression is obtained when logistic regression is applied to the multiclass classification problem. The emergence of sparse multinomial regression provides a reasonable approach to the multiclass classification of microarray data that also identifies important genes [20–22]. By using Bayesian regularization, a sparse multinomial regression model was proposed in [20]. By adopting a data augmentation strategy with Gaussian latent variables, a variational Bayesian multinomial probit model which can reduce the prediction error was presented in [21]. By using the elastic net penalty, a regularized multinomial regression model was developed in [22]; it can be applied to the multiple sequence alignment of proteins related to mutation. Although the above sparse multinomial models achieved good prediction results on real data, all of them failed to select genes (or variables) in groups.

For the multiclass classification of microarray data, this paper combines the multinomial likelihood loss function, which has explicit probability meaning [23], with the multiclass elastic net penalty, which selects genes in groups [14]; proposes a multinomial regression with the elastic net penalty; and proves that this model can encourage a grouping effect in gene selection while performing classification.

2. Problem Formulation and Preliminary

Given a training data set $T = \{(x_1, y_1), \dots, (x_n, y_n)\}$ of a $K$-class classification problem, where $x_i \in \mathbb{R}^p$ represents the input vector of the $i$th sample and $y_i \in \{1, 2, \dots, K\}$ represents the class label corresponding to $x_i$. For microarray data, $n$ and $p$ represent the number of experiments and the number of genes, respectively. Restricted by the high experiment cost, only a few (fewer than one hundred) samples can be obtained, with thousands of genes in each sample. Let $X = [x_1, \dots, x_n]^\top$ and $y = (y_1, \dots, y_n)^\top$, where $x_i = (x_{i1}, \dots, x_{ip})^\top$, $i = 1, \dots, n$. Without loss of generality, it is assumed that the predictors are standardized:
$$\sum_{i=1}^{n} x_{ij} = 0, \qquad \sum_{i=1}^{n} x_{ij}^{2} = 1, \quad j = 1, \dots, p. \tag{1}$$

For the binary classification problem, the class labels are assumed to belong to $\{0, 1\}$. The logistic regression model represents the following class-conditional probabilities; that is,
$$p(y = 1 \mid x) = \frac{\exp(x^\top \beta + b)}{1 + \exp(x^\top \beta + b)},$$
and then
$$p(y = 0 \mid x) = \frac{1}{1 + \exp(x^\top \beta + b)}.$$
According to the common linear regression model, the log-odds can be predicted as
$$\log \frac{p(y = 1 \mid x)}{p(y = 0 \mid x)} = x^\top \beta + b,$$
where $b$ represents the bias and $\beta \in \mathbb{R}^p$ represents the parameter vector.
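As an illustration, the binary class-conditional probability above can be sketched in plain Python (function and variable names are ours, not the paper's):

```python
import math

def binary_logistic_prob(x, beta, b):
    """P(y = 1 | x) for the logistic model:
    exp(x.beta + b) / (1 + exp(x.beta + b))."""
    z = sum(xj * wj for xj, wj in zip(x, beta)) + b  # linear score
    return 1.0 / (1.0 + math.exp(-z))                # logistic sigmoid of the score
```

The complementary probability $p(y = 0 \mid x)$ is then simply `1 - binary_logistic_prob(x, beta, b)`.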

In this paper, we pay attention to multiclass classification problems, which imply that $K \geq 3$. Let $f_k(x) = x^\top \beta_k + b_k$, $k = 1, \dots, K$, be the decision functions, where $\beta_k \in \mathbb{R}^p$ and $b_k \in \mathbb{R}$. The multiclass classifier can be represented as
$$\hat{y} = \arg\max_{k \in \{1, \dots, K\}} f_k(x).$$
Let $b = (b_1, \dots, b_K)^\top$ and $B = [\beta_1, \dots, \beta_K]$. For convenience, we further let $B^j$ and $\beta_k$ represent the $j$th row vector and the $k$th column vector of the parameter matrix $B$. Then, extending the class-conditional probabilities of the logistic regression model to $K$ logits, we model the log-odds of class $k$ against the reference class $K$ as
$$\log \frac{p(y = k \mid x)}{p(y = K \mid x)} = x^\top \beta_k + b_k, \quad k = 1, \dots, K,$$
where $(\beta_k, b_k)$ represents the pair of parameters corresponding to class $k$; the identity holds trivially for $k = K$ when $(\beta_K, b_K) = (0, 0)$. It can be easily obtained that the class-conditional probabilities of the multiclass classification problem can be represented as
$$p(y = k \mid x) = \frac{\exp(x^\top \beta_k + b_k)}{\sum_{l=1}^{K} \exp(x^\top \beta_l + b_l)}, \quad k = 1, \dots, K.$$
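A minimal sketch of these multiclass class-conditional probabilities (the softmax form; names are ours):

```python
import math

def softmax_probs(x, betas, b):
    """p(y = k | x) = exp(x.beta_k + b_k) / sum_l exp(x.beta_l + b_l).
    betas: list of K coefficient vectors; b: list of K intercepts."""
    scores = [sum(xj * wj for xj, wj in zip(x, beta_k)) + bk
              for beta_k, bk in zip(betas, b)]
    m = max(scores)                           # shift scores before exponentiating;
    exps = [math.exp(s - m) for s in scores]  # the probabilities are unchanged
    total = sum(exps)
    return [e / total for e in exps]
```

The classifier then predicts the class with the largest probability, which is equivalent to predicting the class with the largest score.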

3. Main Results

3.1. Multinomial Regression with the Multiclass Elastic Net Penalty

Following the idea of sparse multinomial regression [20–22], we fit the above class-conditional probability model by the regularized multinomial likelihood. The likelihood of the training data is $\prod_{i=1}^{n} p(y_i \mid x_i)$; hence, taking the negative logarithm and averaging over the samples, the multinomial likelihood loss function can be defined as
$$\ell(b, B) = -\frac{1}{n} \sum_{i=1}^{n} \Bigl[ x_i^\top \beta_{y_i} + b_{y_i} - \log \sum_{k=1}^{K} \exp\bigl(x_i^\top \beta_k + b_k\bigr) \Bigr]. \tag{17}$$
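The loss above can be sketched directly (assuming labels coded 0..K-1; names are ours):

```python
import math

def multinomial_nll(X, y, betas, b):
    """Average negative multinomial log-likelihood:
    -(1/n) * sum_i [ x_i.beta_{y_i} + b_{y_i} - log sum_k exp(x_i.beta_k + b_k) ]."""
    total = 0.0
    for x, yi in zip(X, y):
        scores = [sum(xj * wj for xj, wj in zip(x, beta_k)) + bk
                  for beta_k, bk in zip(betas, b)]
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))  # log-sum-exp
        total += scores[yi] - log_z
    return -total / len(X)
```

With all parameters zero, every class gets probability 1/K, so the loss equals log K regardless of the data, which is a handy sanity check.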

In order to improve the performance of gene selection, the following elastic net penalty for the multiclass classification problem was proposed in [14]:
$$P_{\lambda_1, \lambda_2}(B) = \lambda_1 \sum_{k=1}^{K} \sum_{j=1}^{p} |\beta_{kj}| + \lambda_2 \sum_{k=1}^{K} \sum_{j=1}^{p} \beta_{kj}^{2}, \tag{18}$$
where $\lambda_1, \lambda_2 \geq 0$ represent the regularization parameters. By combining the multiclass elastic net penalty (18) with the multinomial likelihood loss function (17), we propose the following multinomial regression model with the elastic net penalty:
$$\min_{b, B} \; \ell(b, B) + \lambda_1 \sum_{k=1}^{K} \sum_{j=1}^{p} |\beta_{kj}| + \lambda_2 \sum_{k=1}^{K} \sum_{j=1}^{p} \beta_{kj}^{2}. \tag{19}$$
Note that only the coefficients $\beta_{kj}$, not the intercepts $b_k$, are penalized. Hence, the optimization problem (19) can be simplified as (20).
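For concreteness, the penalty term can be sketched as follows; the $(\lambda_1, \lambda_2)$ parametrization mirrors (18), though other conventions rescale the two terms:

```python
def multiclass_elastic_net_penalty(betas, lam1, lam2):
    """lam1 * sum_k sum_j |B_kj|  +  lam2 * sum_k sum_j B_kj^2
    over all coefficients (intercepts excluded)."""
    l1 = sum(abs(w) for beta_k in betas for w in beta_k)
    l2 = sum(w * w for beta_k in betas for w in beta_k)
    return lam1 * l1 + lam2 * l2
```

The full objective (19) is then the multinomial loss plus this penalty; the l1 part drives sparsity while the l2 part is what yields the grouping effect proved below.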

3.2. Grouping Effect

For microarray classification, it is very important to identify related genes in groups. In this section, we will prove that the multinomial regression with the elastic net penalty can encourage a grouping effect in gene selection. To this end, we must first prove the inequality shown in Theorem 1.

Theorem 1. Let $(\hat b, \hat B)$ be the solution of the optimization problem (19) or (20). For any new parameter pairs $(\tilde b, \tilde B)$ which are selected as in (22), the inequality (21) holds, where $\tilde b$, $\hat b$ and $\tilde B$, $\hat B$ enter (21) through the first rows of the vectors and matrices, respectively.

Proof. Note that the elementary inequality used below holds for arbitrary real numbers, and hence for any pairs of parameters. From (22), it can be easily obtained that (24) holds. Combining this with (25), we can get (26). Equation (26) is equivalent to a further inequality.
Hence, inequality (21) holds. This completes the proof.

Using the results in Theorem 1, we prove that the multinomial regression with elastic net penalty (19) can encourage a grouping effect.

Theorem 2. Given the training data set, assume that the matrix $X$ and the vector $y$ satisfy (1). If the pairs $(\hat b, \hat B)$ are the optimal solution of the multinomial regression with the elastic net penalty (19), then the inequality (29) holds, where the two compared vectors are the $j$th and $l$th columns of the parameter matrix $\hat B$.

Proof. First of all, we construct the new parameter pairs as specified. Since the pairs $(\hat b, \hat B)$ are the optimal solution of the multinomial regression with the elastic net penalty (19), it can be easily obtained that (32) holds. Note that the function is Lipschitz continuous. Hence, we have (33). From (33) and (21) and the definition of the new parameter pairs, we have (34). Analogously, we have (35). Substituting (34) and (35) into (32) gives (36); that is, (37). From (37), it can be easily obtained that the claimed bound holds. This completes the proof.

According to the inequality shown in Theorem 2, the multinomial regression with the elastic net penalty can assign nearly identical parameter vectors to highly correlated predictors. This means that it can select genes in groups according to their correlation. Following the terminology in [14], this behavior is called the grouping effect in gene selection for multiclass classification. In particular, for binary classification, that is, $K = 2$, inequality (29) reduces to its two-class form, which is consistent with the results in [7].
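The grouping effect can be checked numerically. The sketch below uses our own simple proximal-gradient solver (not the coordinate descent algorithm of [23]) to fit the penalized multinomial model, intercepts omitted, to data containing two identical "genes" and verifies that they receive identical coefficients:

```python
import numpy as np

def soft_threshold(a, t):
    """Proximal operator of t * |.| (elementwise)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def fit_multinomial_enet(X, y, lam1=0.01, lam2=0.01, step=0.5, iters=2000):
    """Proximal gradient descent on (multinomial NLL + lam2 * sum B^2),
    with the l1 term lam1 * sum |B| handled by soft-thresholding."""
    n, p = X.shape
    K = int(y.max()) + 1
    Y = np.eye(K)[y]                       # one-hot labels, n x K
    B = np.zeros((p, K))                   # rows = genes, columns = classes
    for _ in range(iters):
        S = X @ B
        S -= S.max(axis=1, keepdims=True)  # stabilize softmax
        P = np.exp(S)
        P /= P.sum(axis=1, keepdims=True)
        grad = X.T @ (P - Y) / n + 2.0 * lam2 * B
        B = soft_threshold(B - step * grad, step * lam1)
    return B

rng = np.random.default_rng(0)
z = rng.normal(size=300)
X = np.column_stack([z, z, rng.normal(size=300)])  # genes 0 and 1 are identical
y = np.digitize(z, [-0.5, 0.5])                    # 3 classes driven by z
B = fit_multinomial_enet(X, y)
gap = np.abs(B[0] - B[1]).max()                    # coefficient gap between the twin genes
```

Because columns 0 and 1 of X are identical, their gradient updates coincide at every step, so the fitted coefficient rows agree exactly; for correlated rather than identical genes, Theorem 2 bounds this gap instead of forcing it to zero.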

3.3. Solving Algorithm

Microarray classification is a typical "small $n$, large $p$" problem. Because the number of genes in microarray data is very large, solving the proposed multinomial regression directly suffers from the curse of dimensionality. To improve the solving speed, Friedman et al. [23] proposed the coordinate descent algorithm, which takes advantage of the sparsity of the solution. Therefore, we choose the coordinate descent algorithm to solve the multinomial regression with the elastic net penalty. To this end, we convert (19) into the form (40), which can be easily solved by using the publicly available R package "glmnet".
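The paper fits the model with the R package glmnet. As a rough Python analogue (our suggestion, not the authors' setup), scikit-learn's LogisticRegression supports a multinomial elastic-net objective via the SAGA solver; note that its (C, l1_ratio) parametrization differs from glmnet's (lambda, alpha):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy stand-in for microarray data: n samples, p "genes", 3 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0).astype(int)  # labels in {0, 1, 2}

# l1_ratio plays the role of glmnet's alpha; C is an inverse penalty strength
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=10000)
clf.fit(X, y)
coefs = clf.coef_  # one row of p coefficients per class; the l1 part zeroes some out
```

On real microarray data one would tune C and l1_ratio by cross-validation, just as glmnet tunes lambda along a regularization path.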

4. Conclusion

By combining the multinomial likelihood loss function, which has explicit probability meaning, with the multiclass elastic net penalty, which selects genes in groups, a multinomial regression with the elastic net penalty for the multiclass classification problem of microarray data was proposed in this paper. The proposed multinomial regression is proved to encourage a grouping effect in gene selection. In future work, we will apply this optimization model to real microarray data and verify its specific biological significance.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by Natural Science Foundation of China (61203293, 61374079), Key Scientific and Technological Project of Henan Province (122102210131, 122102210132), Program for Science and Technology Innovation Talents in Universities of Henan Province (13HASTIT040), Foundation and Advanced Technology Research Program of Henan Province (132300410389, 132300410390, 122300410414, and 132300410432), Foundation of Henan Educational Committee (13A120524), and Henan Higher School Funding Scheme for Young Teachers (2012GGJS-063).

References

  1. I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene selection for cancer classification using support vector machines,” Machine Learning, vol. 46, no. 1–3, pp. 389–422, 2002.
  2. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society B, vol. 58, no. 1, pp. 267–288, 1996.
  3. L. Wang, J. Zhu, and H. Zou, “Hybrid huberized support vector machines for microarray classification and gene selection,” Bioinformatics, vol. 24, no. 3, pp. 412–419, 2008.
  4. L. Wang, J. Zhu, and H. Zou, “The doubly regularized support vector machine,” Statistica Sinica, vol. 16, no. 2, pp. 589–615, 2006.
  5. J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani, “1-norm support vector machines,” in Advances in Neural Information Processing Systems, vol. 16, pp. 49–56, MIT Press, 2004.
  6. G. C. Cawley and N. L. C. Talbot, “Gene selection in cancer classification using sparse logistic regression with Bayesian regularization,” Bioinformatics, vol. 22, no. 19, pp. 2348–2355, 2006.
  7. H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society B, vol. 67, no. 2, pp. 301–320, 2005.
  8. J. Li, Y. Jia, and Z. Zhao, “Partly adaptive elastic net and its application to microarray classification,” Neural Computing and Applications, vol. 22, no. 6, pp. 1193–1200, 2013.
  9. Y. Lee, Y. Lin, and G. Wahba, “Multicategory support vector machines: theory and application to the classification of microarray data and satellite radiance data,” Journal of the American Statistical Association, vol. 99, no. 465, pp. 67–81, 2004.
  10. X. Zhou and D. P. Tuck, “MSVM-RFE: extensions of SVM-RFE for multiclass gene selection on DNA microarray data,” Bioinformatics, vol. 23, no. 9, pp. 1106–1114, 2007.
  11. S. Student and K. Fujarewicz, “Stable feature selection and classification algorithms for multiclass microarray data,” Biology Direct, vol. 7, no. 33, pp. 133–140, 2012.
  12. L. Wang and X. Shen, “On L1-norm multiclass support vector machines: methodology and theory,” Journal of the American Statistical Association, vol. 102, no. 478, pp. 583–594, 2007.
  13. H. H. Zhang, Y. Liu, Y. Wu, and J. Zhu, “Variable selection for the multicategory SVM via adaptive sup-norm regularization,” Electronic Journal of Statistics, vol. 2, pp. 149–167, 2008.
  14. J.-T. Li and Y.-M. Jia, “Huberized multiclass support vector machine for microarray classification,” Acta Automatica Sinica, vol. 36, no. 3, pp. 399–405, 2010.
  15. M. You and G.-Z. Li, “Feature selection for multi-class problems by using pairwise-class and all-class techniques,” International Journal of General Systems, vol. 40, no. 4, pp. 381–394, 2011.
  16. M. Y. Park and T. Hastie, “Penalized logistic regression for detecting gene interactions,” Biostatistics, vol. 9, no. 1, pp. 30–50, 2008.
  17. K. Koh, S.-J. Kim, and S. Boyd, “An interior-point method for large-scale L1-regularized logistic regression,” Journal of Machine Learning Research, vol. 8, pp. 1519–1555, 2007.
  18. C. Xu, Z. M. Peng, and W. F. Jing, “Sparse kernel logistic regression based on L1/2 regularization,” Science China Information Sciences, vol. 56, no. 4, pp. 1–16, 2013.
  19. Y. Yang, N. Kenneth, and S. Kim, “A novel k-mer mixture logistic regression for methylation susceptibility modeling of CpG dinucleotides in human gene promoters,” BMC Bioinformatics, vol. 13, supplement 3, article S15, 2012.
  20. G. C. Cawley, N. L. C. Talbot, and M. Girolami, “Sparse multinomial logistic regression via Bayesian L1 regularization,” in Advances in Neural Information Processing Systems, vol. 19, pp. 209–216, MIT Press, 2007.
  21. N. Lama and M. Girolami, “vbmp: variational Bayesian multinomial probit regression for multi-class classification in R,” Bioinformatics, vol. 24, no. 1, pp. 135–136, 2008.
  22. J. Sreekumar, C. J. F. ter Braak, R. C. H. J. van Ham, and A. D. J. van Dijk, “Correlated mutations via regularized multinomial regression,” BMC Bioinformatics, vol. 12, article 444, 2011.
  23. J. Friedman, T. Hastie, and R. Tibshirani, “Regularization paths for generalized linear models via coordinate descent,” Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.