A New Conic Approach to Semisupervised Support Vector Machines

Ye Tian, Jian Luo, Xin Yan

Mathematical Problems in Engineering, vol. 2016, Article ID 6471672, 9 pages, 2016. https://doi.org/10.1155/2016/6471672

Research Article | Open Access
Academic Editor: Jean-Christophe Ponsart
Received: 01 Jan 2016. Accepted: 23 Feb 2016. Published: 17 Mar 2016.

Abstract

We propose a completely positive programming reformulation of the 2-norm soft margin semisupervised support vector machine (S3VM) model. Then, we construct a sequence of computable cones of nonnegative quadratic forms over a union of second-order cones to approximate the underlying completely positive cone. With this approach, an ε-optimal solution can be found in a finite number of iterations using semidefinite programming techniques. Moreover, in order to obtain a good lower bound efficiently, an adaptive scheme is adopted in our approximation algorithm. The numerical results show that the proposed algorithm achieves more accurate classifications than other well-known conic relaxations of semisupervised support vector machine models in the literature.

1. Introduction

The support vector machine (SVM) is an important machine learning method for classification and pattern recognition. Ever since the first appearance of SVM models around 1995 [1], they have attracted a great deal of attention from numerous researchers due to their attractive theoretical properties and wide range of applications over the past two decades [2-5].

Notice that traditional SVM models use only labeled data points to construct the separating hyperplane. However, labeled instances are often difficult, expensive, or time-consuming to obtain, since they require much effort from experienced human annotators [6]. Meanwhile, unlabeled data points (i.e., points with only feature information) are usually much easier to collect but are seldom used. Therefore, as natural extensions of SVM models, semisupervised support vector machine (S3VM) models address the classification problem by using the large amount of unlabeled data together with the labeled data to build better classifiers. The main idea of S3VM is to maximize the margin between the two classes in the presence of unlabeled data, by keeping the boundary traversing through low-density regions while respecting the labels in the input space [7].

To the best of our knowledge, most traditional SVM models are polynomial-time solvable. However, S3VM models are formulated as mixed integer quadratic programming (MIQP) problems, which are computationally difficult in general [8]. Therefore, researchers have proposed several optimization methods for solving the nonconvex quadratic programming problems associated with S3VM. Joachims [9] developed a local combinatorial search. Blum and Chawla [10] proposed a graph-based method. Lee et al. [11] applied the entropy minimization principle to semisupervised learning for image pixel classification. Besides, some classical techniques for solving MIQP problems have been used for S3VM models, such as the branch-and-bound method [12], the cutting plane method [13], the gradient descent method [14], convex-concave procedures [15], surrogate functions [16], deterministic annealing methods [17], and semidefinite relaxation [18]. For a comprehensive survey of these methods, we refer to Zhu and Goldberg [19]. It is worth pointing out that the linear conic approaches (semidefinite and doubly nonnegative relaxations) are generally quite efficient among these methods [20, 21].

Recently, a new and important linear conic tool called completely positive programming (CPP) has been used to study nonconvex quadratic programs with linear and binary constraints. Burer [22] pointed out that this type of quadratic program can be equivalently modeled as a linear conic program over the cone of completely positive matrices. Here, "equivalent" means that the two programs have the same optimal value. This result offers a new angle for analyzing the structure of quadratic and combinatorial problems and provides a new way to approach them. Unfortunately, the cone of completely positive matrices is not computable; that is, detecting whether a matrix belongs to this cone is NP-hard [23]. Thus, a natural way to derive a polynomial-time solvable approximation is to replace the cone of completely positive matrices with some computable cone. Two commonly used computable cones are the cone of positive semidefinite matrices and the cone of doubly nonnegative matrices, which lead to the semidefinite and doubly nonnegative relaxations, respectively [21].

However, the lower bounds obtained from these relaxations cannot be tightened further. Hence, these methods are not suitable for situations with high accuracy requirements. Therefore, in this paper, we propose a new approximation to the 2-norm soft margin S3VM model. The method is based on a sequence of computable cones of nonnegative quadratic forms over a union of second-order cones. It is worth pointing out that this approximation can attain an ε-optimal solution in a finite number of iterations. The method also provides a novel angle from which to approach the S3VM model. Moreover, we design an adaptive scheme to improve the efficiency of the algorithm. The numerical results show that our method achieves better classification rates than other benchmark conic relaxations.

The paper is organized as follows. In Section 2, we briefly review the basic S3VM model and show how to reformulate the corresponding MIQP problem as a completely positive programming problem. In Section 3, we use the computable cones of nonnegative quadratic forms over a union of second-order cones to approximate the underlying cone of completely positive matrices in the reformulation. Then, we prove that this approximation algorithm can obtain an ε-optimal solution in a finite number of iterations. An adaptive scheme is proposed in Section 4 to reduce the computational burden and improve the efficiency. In Section 5, we investigate the effectiveness of the proposed algorithm on several artificial and real-world benchmark datasets. Finally, we summarize the paper in the last section.

2. Semisupervised Support Vector Machines

Before we start, we introduce some notation used later. ℕ denotes the set of positive integers. e and 0 denote the vectors of appropriate dimension with all elements being 1 and 0, respectively. I_n is the identity matrix of order n. For a vector x, x_i denotes the ith component of x. For two vectors x and y, x ∘ y denotes the elementwise product of x and y. Moreover, S^n and S^n_+ denote the set of n × n real symmetric matrices and the set of positive semidefinite matrices, respectively. Besides, N^n denotes the set of n × n real symmetric matrices with all elements being nonnegative. For two matrices A, B ∈ S^n, A ∘ B denotes the elementwise product of A and B, that is, (A ∘ B)_{ij} = A_{ij} B_{ij}, where A_{ij} and B_{ij} denote the elements of A and B in the ith row and jth column, respectively. For a nonempty set S, int(S) denotes the interior of S, and cl(S) and cone(S) stand for the closure and the conic hull of S, respectively.

Now we briefly recall the basic 2-norm soft margin S3VM model. Given a dataset of n data points x_1, ..., x_n with x_i ∈ R^m, let y ∈ {-1, 1}^n be the indicator vector, where y_i is known for each labeled point and unknown for each unlabeled point. In order to handle nonlinearity in the data structure, a standard approach is to project the data into a high-dimensional feature space via a feature function φ : R^m → R^d, where d is the dimension of the feature space [24]. The points are then separated by a hyperplane in the new space, determined by a normal vector w and a bias term b. For linearly inseparable data points, slack variables ξ_i are used to measure the misclassification errors when the (labeled or unlabeled) points do not fall on the correct side of the hyperplane [12]. The error is penalized in the objective function of S3VM models through a positive penalty parameter C. Moreover, in order to avoid nonconvexity in the reformulated problem, a traditional trick is to drop the bias term b [8]. It is worth pointing out that this negative effect can be mitigated by centering the data at the origin [18]. Like the traditional SVM model, the main idea of S3VM models is to classify labeled and unlabeled data points into two classes with a maximum separation between them.

With the above notation, the 2-norm soft margin S3VM model with a kernel function can be written as problem (1).

To handle the kernel function, a kernel matrix K is introduced with entries K_{ij} = φ(x_i)ᵀφ(x_j). Cristianini and Shawe-Taylor [25] have pointed out that K is positive semidefinite for kernel functions such as the linear kernel and the Gaussian kernel. It is worth pointing out that problem (1) can be reformulated as problem (2) [21], where λ denotes the vector of dual variables. Note that this problem is an MIQP problem, which is generally difficult to solve.

In order to handle the nonconvex objective function of problem (2), we reformulate it as a completely positive programming problem. Note that Bai and Yan [21] have proved that the objective function of problem (2) can be rewritten equivalently in a more convenient quadratic form. Substituting the corresponding variables, problem (2) can be equivalently reformulated as problem (3). Moreover, with a further change of variables, the matrix appearing in the resulting quadratic form is positive semidefinite [8]. Introducing two index sets for the variables, it is easy to verify that problem (3) can be equivalently written as problem (4). Now, problem (4) is a nonconvex quadratic programming problem with mixed binary and continuous variables, and from any feasible solution of problem (4) we can recover a feasible solution of problem (2) through equation (5). Following the standard relaxation techniques in [22], we obtain the equivalent completely positive programming problem (6), where the cone of completely positive matrices is the closure of the conic hull of the rank-one matrices xxᵀ with x ≥ 0.

From the main result of [22], we know that the optimal value of problem (6) is equal to the optimal value of problem (2). However, detecting whether a matrix belongs to the cone of completely positive matrices is NP-hard [23]; thus, this cone is not computable. A natural way to derive a polynomial-time solvable approximation is to replace it with a commonly used computable cone, such as the positive semidefinite cone or the doubly nonnegative cone [21]. However, these are rather loose relaxations and may lead to weak lower bounds. Moreover, the resulting bounds cannot be improved to a desired accuracy. Thus, they are not appropriate for situations with high accuracy requirements. Therefore, we propose a new approximation that obtains an ε-optimal solution of the original MIQP problem in the next section.
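
To make the role of these computable relaxations concrete, the following minimal sketch forms a doubly nonnegative relaxation of a generic small binary-constrained quadratic program via the standard lifting of x to a matrix variable; the problem data and the use of Python with the cvxpy package are illustrative assumptions, and the sketch does not reproduce problem (6) itself.

```python
# A minimal sketch of a doubly nonnegative (DNN) relaxation of a generic
# binary-constrained quadratic program  min x'Qx + q'x  s.t.  x in {0,1}^n,
# using the lifting Y = [1 x'; x X] with X standing in for xx'.
# Q, q, and the solver choice are illustrative assumptions.
import numpy as np
import cvxpy as cp

n = 5
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                      # symmetric (possibly indefinite) cost matrix
q = rng.standard_normal(n)

Y = cp.Variable((n + 1, n + 1), PSD=True)   # Y = [[1, x'], [x, X]]
x = Y[0, 1:]
X = Y[1:, 1:]

constraints = [
    Y[0, 0] == 1,
    cp.diag(X) == x,    # relaxes the binary condition x_i^2 = x_i
    Y >= 0,             # entrywise nonnegativity: the "doubly nonnegative" part
]
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + q @ x), constraints)
prob.solve(solver=cp.SCS)
print("DNN lower bound:", prob.value)
```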

3. An ε-Optimal Approximation Method

We first introduce the definition of the cone of nonnegative quadratic forms and its dual cone. Given a nonempty set F ⊆ R^n, Sturm and Zhang [26] defined the cone of nonnegative quadratic forms over F as C(F) = {M ∈ S^n : xᵀMx ≥ 0 for all x ∈ F}. Its dual cone is C*(F) = cl cone{xxᵀ : x ∈ F}. From the definition, it is easy to see that if F_1 ⊆ F_2 then C*(F_1) ⊆ C*(F_2). Therefore, if F is a tight approximation of the nonnegative orthant R^n_+, then C*(F) is a good approximation of the completely positive cone C*(R^n_+). In particular, F can be a union of several computable cones. Let F_1, ..., F_p be nonempty cones; the summation of these sets is defined by F_1 + ... + F_p = {x_1 + ... + x_p : x_i ∈ F_i, i = 1, ..., p}. Note that a corollary in [27] and a lemma in [28] indicate that the summation of the cones C*(F_i) represents C*(F) when F = F_1 ∪ ... ∪ F_p.

In the rest of this paper, we will set each F_i to be a nontrivial second-order cone of the form SOC(A, b) = {x ∈ R^n : ‖Ax‖ ≤ bᵀx}, where A ∈ R^{m×n} and b ∈ R^n. Here, "nontriviality" means the second-order cone contains at least one point other than the origin. The first author and his coauthors have proved in [28] that C(SOC(A, b)) has a linear matrix inequality (LMI) representation and is thus computable. This result is stated in the next theorem.

Theorem 1 (see [28]). Let SOC(A, b) = {x ∈ R^n : ‖Ax‖ ≤ bᵀx} be a nontrivial second-order cone with A ∈ R^{m×n} and b ∈ R^n. Then M ∈ C(SOC(A, b)) if and only if there exists a scalar τ such that τ ≥ 0 and M + τ(AᵀA − bbᵀ) ⪰ 0.
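
As an illustration of how such an LMI representation can be checked numerically, the following sketch tests membership of a matrix M in C(SOC(A, b)) through the S-procedure-type condition stated above (existence of τ ≥ 0 with M + τ(AᵀA − bbᵀ) ⪰ 0); the data A, b, M and the cvxpy-based formulation are illustrative assumptions.

```python
# Sketch: verify that x'Mx >= 0 for all x in SOC(A, b) = {x : ||Ax|| <= b'x}
# by searching for tau >= 0 with  M + tau * (A'A - bb') >> 0  (an S-procedure
# style certificate).  A, b, and M below are illustrative assumptions.
import numpy as np
import cvxpy as cp

n = 4
A = np.eye(n)
b = 2.0 * np.ones(n)                      # nontrivial cone: ||x|| <= 2 * sum(x)
M = np.ones((n, n)) - 0.2 * np.eye(n)     # not PSD, but nonnegative over the cone

tau = cp.Variable(nonneg=True)
constraints = [M + tau * (A.T @ A - np.outer(b, b)) >> 0]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

if prob.status == cp.OPTIMAL:
    print("certificate found: tau =", tau.value)
else:
    print("no certificate: M is not in C(SOC(A, b))")
```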

The next issue is to cover R^n_+ by a union of second-order cones. Notice that R^n_+ = cone(Δ), where Δ = {x ∈ R^n_+ : eᵀx = 1} is the standard simplex. Let {P_1, ..., P_p} be a set of polyhedrons in Δ, and assume that {P_1, ..., P_p} is a partition of Δ; then R^n_+ = cone(P_1) ∪ ... ∪ cone(P_p). If we can find a corresponding second-order cone SOC_i such that cone(P_i) ⊆ SOC_i for i = 1, ..., p, then the partition leads to R^n_+ ⊆ SOC_1 ∪ ... ∪ SOC_p. Thus, we can replace the uncomputable cone of completely positive matrices by the computable cone C*(SOC_1) + ... + C*(SOC_p) in problem (6) to generate a better lower bound.

Now, we show how to generate a proper second-order cone to cover a polyhedron. Suppose v_1, ..., v_K are the vertices of a polyhedron P in the partition. Since P ⊆ Δ, the origin does not belong to P, and we let V denote the matrix formed by the vectors v_1, ..., v_K as columns. The covering second-order cone is determined by a positive definite matrix, and cone(P) is contained in the cone if and only if every vertex v_k satisfies the corresponding quadratic inequality. Therefore, we only need to find a proper positive definite matrix by solving the convex programming problem (8) proposed in [28]. It is worth pointing out that at the optimal solution of problem (8), the associated ellipsoid is the smallest ellipsoid that is centered at the origin and covers P. Hence, the resulting second-order cone is a tight cover of cone(P).
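
Problem (8) is not reproduced here, but the following sketch illustrates the underlying geometric task: computing the smallest ellipsoid {x : xᵀQx ≤ 1} centered at the origin that covers the vertices of a polyhedron (and hence the polyhedron itself), by maximizing log det Q; the vertices and the solver choice are illustrative assumptions.

```python
# Sketch: the smallest ellipsoid {x : x'Qx <= 1} centered at the origin that
# covers the vertices v_1, ..., v_K of a polyhedron P (and therefore P itself,
# since x'Qx is convex).  Minimizing the volume amounts to maximizing log det Q.
# The vertices below are illustrative assumptions.
import numpy as np
import cvxpy as cp

V = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5]])          # rows: vertices of a polyhedron inside the simplex
n = V.shape[1]

Q = cp.Variable((n, n), PSD=True)
constraints = [V[k] @ Q @ V[k] <= 1 for k in range(V.shape[0])]
prob = cp.Problem(cp.Maximize(cp.log_det(Q)), constraints)
prob.solve(solver=cp.SCS)
print("covering ellipsoid matrix Q:\n", np.round(Q.value, 3))
```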

Now suppose there is a fixed polyhedron partition {P_1, ..., P_p} of Δ and the corresponding second-order cone SOC_i covers cone(P_i) for i = 1, ..., p. Then R^n_+ ⊆ SOC_1 ∪ ... ∪ SOC_p, and the completely positive cone is contained in C*(SOC_1) + ... + C*(SOC_p). Replacing the completely positive cone in problem (6) by this computable cone, we obtain the computable approximation problem (9).

In order to measure the accuracy of the approximation, we define the maximum diameter of a polyhedron partition and the δ-neighborhood of a domain.

Definition 2. For a polyhedron partition {P_1, ..., P_p} of Δ, the maximum diameter of the partition is defined as the largest distance between any two points lying in a common polyhedron of the partition, that is, d({P_i}) = max_{1≤i≤p} max_{x,y∈P_i} ‖x − y‖.

Definition 3 (see [29]). For a set S ⊆ R^n and δ > 0, the δ-neighborhood of S is defined as N_δ(S) = {x ∈ R^n : ‖x − y‖_∞ ≤ δ for some y ∈ S}, where ‖·‖_∞ denotes the infinity norm.

Then the next theorem shows that the lower bounds obtained from this approximation can converge to the optimal value as the number of polyhedrons increases.

Theorem 4. Let v* be the optimal objective value of problem (2), and let v_p, p ∈ ℕ, be the lower bounds sequentially returned by solving the corresponding problem (9) with p polyhedrons in the partition of Δ. For any ε > 0, if the maximum diameter of the partition converges to 0 as the number of polyhedrons in the partition increases, then there exists p_0 ∈ ℕ such that v* − v_p ≤ ε for any p ≥ p_0.

Proof. Since the matrix in the quadratic objective is positive semidefinite, problem (4) is bounded below. Thus, the common optimal value of problems (2) and (6) is finite. Let g(δ) denote the optimal value of problem (6) when the completely positive cone is replaced by the cone associated with the δ-neighborhood. By definition, for 0 ≤ δ_1 ≤ δ_2 we have g(δ_2) ≤ g(δ_1); thus g is monotonically increasing as δ tends to zero. Note that g(0) = v*, so v* is an upper bound of the function g. It is easy to verify that g is also a continuous function. Thus, for any ε > 0, there exists δ̄ > 0 such that v* − g(δ) ≤ ε whenever δ ≤ δ̄. Because the maximum diameter converges to 0, there exists p_0 ∈ ℕ such that the maximum diameter of the partition is small enough when p ≥ p_0. Then, for each polyhedron in the partition, its corresponding second-order cone intersected with the simplex is contained in the δ̄-neighborhood. Thus, each dual cone C*(SOC_i), and hence their summation, is contained in the cone associated with the δ̄-neighborhood. Therefore, the feasible domain of problem (9) with p polyhedrons is contained in the feasible domain of the δ̄-relaxation of problem (6), and v_p ≥ g(δ̄) ≥ v* − ε for any p ≥ p_0.

In summary, we have shown that our new approximation method can indeed obtain an ε-optimal solution of problem (6) in a finite number of iterations. However, the exact number of polyhedrons required to achieve an ε-optimal solution depends on the instance and on the strategy used for partitioning Δ.

4. An Adaptive Approximation Algorithm

Theorem 4 guarantees that an ε-optimal solution can be obtained in a finite number of iterations. However, it may take an enormous amount of time to partition the standard simplex finely enough to meet an extremely high accuracy requirement. Therefore, considering the balance between computational burden and accuracy, in this section we design an adaptive approximation algorithm that partitions the underlying simplex Δ into special polyhedrons at one time to obtain a good approximate solution.

First, we solve problem (9) with the trivial partition (a single polyhedron) to find an optimal solution. According to the decomposition scheme in a lemma of [30], there exists a rank-one decomposition of the optimal matrix into vectors p_k, k ∈ I, where I is the index set of the decomposition, such that the matrix equals the sum of the rank-one terms p_k p_kᵀ. If p_k ≥ 0 for each k ∈ I, then the optimal matrix is completely positive and feasible to problem (6). Hence, the optimal value of problem (9) is also the optimal value of problem (6). Otherwise, there exists at least one k such that p_k has a negative component. Denote by J ⊆ I the index set of the decomposed vectors p_k that violate the nonnegativity constraints, and let (p_k)_i denote the ith element of the vector p_k. For any k ∈ J, an auxiliary vector is constructed from p_k to record how its components violate the nonnegativity constraints. Then we can define the sensitive index as follows.

Definition 5. Let {p_k : k ∈ I} be a rank-one decomposition of an optimal solution of problem (9). Among the decomposed vectors indexed by J, call a vector a sensitive solution if it attains the most negative auxiliary component. If several sensitive solutions exist, pick the one with the smallest index, and let i* be the smallest index such that the i*th component is the smallest among all the components of that sensitive solution; then i* is defined to be the sensitive index.

Actually, the sensitive index is the one in which the decomposed vector most “violates” the nonnegative constraint. Based on this information, we can shrink the feasible domain of problem (9) and cut off the corresponding optimal solution which is infeasible to problem (6).
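
The following sketch illustrates this step with a simple eigenvalue-based rank-one decomposition used as a stand-in for the specialized scheme of [30]; the test matrix and the way the candidate sensitive index is picked are illustrative assumptions.

```python
# Sketch: a simple eigenvalue-based rank-one decomposition X = sum_k p_k p_k'
# together with a check for vectors that violate nonnegativity, used here as a
# stand-in for the specialized decomposition scheme of [30].  The matrix X and
# the tie-breaking rule for the candidate sensitive index are illustrative.
import numpy as np

X = np.array([[2.0, 1.0, -0.5],
              [1.0, 1.5,  0.2],
              [-0.5, 0.2, 1.0]])         # a symmetric positive definite test matrix

eigvals, U = np.linalg.eigh(X)
P = [np.sqrt(lam) * U[:, k] for k, lam in enumerate(eigvals) if lam > 1e-9]

violating = []
for k, p in enumerate(P):
    q = p if p.sum() >= 0 else -p        # resolve the sign ambiguity of each p_k
    if (q < -1e-9).any():
        violating.append((k, q / np.linalg.norm(q)))

if not violating:
    print("every p_k is nonnegative: X is (numerically) completely positive")
else:
    k, q = min(violating, key=lambda item: item[1].min())   # most negative component
    i_star = int(np.argmin(q))
    print(f"vector {k} violates nonnegativity; candidate sensitive index: {i_star}")
```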

For i = 1, ..., n, let e_i denote the vector with all elements being 0 except the ith element being 1. Obviously, e_i is a vertex of the standard simplex Δ. Now suppose i* is the sensitive index; the simplex can be partitioned into small polyhedrons based on this index as follows. Let V be the set of all vertices of Δ except the vertex e_{i*}, and let 0 = μ_0 < μ_1 < ... < μ_K = 1 be a series of real values. Then we can generate new sets of points in Δ by convexly combining the vertices in V and the vertex e_{i*} according to the different weight values, W_j = {(1 − μ_j)v + μ_j e_{i*} : v ∈ V} for j = 0, 1, ..., K. Note that, for j < K, W_j contains n − 1 linearly independent points, while W_K contains only the single point e_{i*}. Let P_j be the polyhedron generated by the convex hull of the points in the union set W_{j−1} ∪ W_j for j = 1, ..., K. Let V_j be the matrix formed by the points in W_{j−1} and W_j as column vectors. It is easy to verify that rank(V_j) = n; hence the associated system of equations has a unique solution for each j. Thus, for each polyhedron P_j, we only need to solve problem (8) to find the corresponding second-order cone SOC_j such that cone(P_j) ⊆ SOC_j, as illustrated in the sketch below.
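
Under the assumption that the convex combinations take the form (1 − μ_j)v + μ_j e_{i*} and that the weight series is evenly spaced (our reading of the construction above), the following sketch generates the point sets W_j and the generating points of the polyhedrons P_j.

```python
# Sketch: build the point sets W_j and the generating points of the polyhedrons
# P_j around the sensitive index i*.  The combination rule (1 - mu_j) v + mu_j e_{i*}
# and the evenly spaced weight series are illustrative assumptions.
import numpy as np

n = 4                                    # dimension of the standard simplex
i_star = 2                               # sensitive index (illustrative)
mu = np.linspace(0.0, 1.0, 5)            # 0 = mu_0 < mu_1 < ... < mu_K = 1

E = np.eye(n)
V = [E[i] for i in range(n) if i != i_star]     # all vertices of the simplex except e_{i*}

# W_j collapses to the single point e_{i*} when mu_j = 1
W = [[(1.0 - m) * v + m * E[i_star] for v in V] for m in mu]

# P_j is generated by the convex hull of the points in W_{j-1} and W_j
polyhedrons = [np.unique(np.array(W[j - 1] + W[j]), axis=0) for j in range(1, len(mu))]
for j, P in enumerate(polyhedrons, start=1):
    print(f"P_{j}: {P.shape[0]} generating points")
```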

Remark 6. Since P_j is highly symmetric with respect to the sensitive index (permuting the vertices in V leaves P_j unchanged), the positive definite matrix of the second-order cone that covers cone(P_j) has a simple structure. Thus, problem (8) can be simplified and solved quickly.

Note that practical users can adjust the number of polyhedrons in the partition to obtain solutions with different accuracies. Besides, redundant constraints can be added to improve the performance of a relaxed problem [31]. Note that the original variables of problem (4) satisfy some simple valid equalities and bounds; therefore, the corresponding redundant constraints can be added to further tighten problem (9).

5. Numerical Tests

In this section, we compare our proposed algorithm with some existing tractable conic relaxations of the 2-norm soft margin S3VM model, namely, TSDP in [18] and SSDP and TDNNP in [21]. Besides, we also include the transductive support vector machine (TSVM) [9], a classical local search algorithm, in the comparison. Several artificial and real-world benchmark datasets are used to test the performance of these methods.

To obtain the artificial datasets for the computational experiment, we first generate different quadratic surfaces with various matrices and vectors in different dimensions. Here, the eigenvalues of each matrix are randomly selected from a fixed interval. Then, for each case, we randomly generate some points on one side of the surface (labeled as one class) and some points on the other side (labeled as the other class). Moreover, in order to prevent points from lying too far away from the separating surface, all the points are generated inside a ball of radius r, where r is a positive value.
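
The following sketch illustrates this generation procedure; the dimension, eigenvalue interval, ball radius, and sample size are illustrative assumptions rather than the exact settings used in the experiments.

```python
# Sketch: generate two classes of points separated by a random quadratic
# surface x'Wx + c'x + d = 0, keeping all points inside a ball of radius r.
# Dimension, eigenvalue range, r, and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
m, n_points, r = 3, 60, 5.0

# random symmetric matrix with eigenvalues drawn from a fixed interval
eigvals = rng.uniform(-2.0, 2.0, size=m)
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
W = U @ np.diag(eigvals) @ U.T
c = rng.standard_normal(m)
d = rng.standard_normal()

X, y = [], []
while len(X) < n_points:
    x = rng.uniform(-r, r, size=m)
    if np.linalg.norm(x) > r:
        continue                         # keep points inside the ball
    side = x @ W @ x + c @ x + d
    if abs(side) < 1e-3:
        continue                         # skip points lying (almost) on the surface
    X.append(x)
    y.append(1 if side > 0 else -1)
X, y = np.array(X), np.array(y)
print("class balance:", (y == 1).sum(), "vs", (y == -1).sum())
```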

The real-world datasets come from the semisupervised learning (SSL) benchmark datasets [32] and the UC Irvine Machine Learning Repository (UCI) [33]. We use four such datasets in our computational experiment, two from SSL (Digit 1 and USPS) and two from UCI (Iono and Sonar). In particular, since our main aim is to compare our method with the state-of-the-art methods in [21], we follow the same setting and restrict the total number of points to 70 in all real-world datasets.

For each dataset, we conduct 100 independent trials by randomly picking the labeled points. The information of these datasets is provided in Table 1.


Dataset Number of features Number of total points Number of labeled points

Art 1 3 60 10
Art 2 10 60 10
Art 3 20 60 10
Art 4 30 60 10
Digit 1 241 70 10
USPS 241 70 10
Sonar 60 70 20
Iono 34 70 20

The kernel matrices are constructed with the Gaussian kernel throughout the test, where the width parameter is chosen as the median of the pairwise distances between data points. Besides, the optimal penalty parameter C is tuned by a grid search. We also fix the number of polyhedrons used by the adaptive scheme and the corresponding series of weight values μ_0 < μ_1 < ... < μ_K. All the tests are carried out in Matlab 7.9.0 on a computer equipped with an Intel Core i5 3.3 GHz CPU and 4 GB of memory. Moreover, the CVX package [34] and the SeDuMi 1.3 solver [35] are used to solve the resulting conic problems.
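
For reference, the following sketch shows the Gaussian kernel construction with the median-distance heuristic described above; the kernel parameterization and the grid of candidate values for C are illustrative assumptions, since the exact grid is not reproduced here.

```python
# Sketch: Gaussian kernel matrix with the width set to the median pairwise
# distance, plus an illustrative grid for the penalty parameter C (the grid
# used in the paper is not reproduced here).
import numpy as np

def gaussian_kernel_matrix(X):
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)   # squared distances
    sigma = np.median(np.sqrt(D2[np.triu_indices_from(D2, k=1)]))     # median heuristic
    return np.exp(-D2 / (2.0 * sigma**2)), sigma   # one common Gaussian parameterization

rng = np.random.default_rng(0)
X = rng.standard_normal((70, 10))                  # 70 points, as in the real-world setting
K, sigma = gaussian_kernel_matrix(X)
print("sigma =", sigma, "| min eigenvalue of K =", np.linalg.eigvalsh(K).min())

C_grid = [2.0**k for k in range(-5, 6)]            # illustrative grid for the penalty C
```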

The performance of these methods is measured by their classification error rates, standard deviation of the error rates, and computational times. Here the classification error rate is the ratio of the number of misclassified points to the total number of unlabeled points, which is a very important index to reflect the classification accuracy of the method. Moreover, the standard deviation implies the stability of the method while the computational time indicates the efficiency of the method.

Table 2 summarizes the comparison results of the proposed method and the four benchmark methods on both the artificial and real-world datasets. The classification error rate is reported as a percentage and the CPU time in seconds; the number outside the brackets is the average value, while the number inside the brackets is the standard deviation. All of these results are derived from 100 independent trials.


Classification error rate, % (standard deviation)

Dataset   TSVM           TSDP           SSDP           TDNNP          Proposed
Art 1      4.01 (1.65)    6.98 (1.29)    6.31 (1.25)    3.30 (1.04)    1.04 (0.62)
Art 2      6.39 (3.21)   10.16 (3.47)    8.99 (3.53)    6.12 (2.71)    3.03 (1.43)
Art 3      7.83 (3.67)   12.72 (3.56)   11.12 (3.41)    7.61 (2.72)    3.55 (1.51)
Art 4     11.62 (4.39)   15.17 (4.10)   13.64 (3.85)    9.94 (2.74)    5.60 (1.48)
Digit 1   23.42 (8.82)   28.85 (7.21)   27.84 (7.45)   25.27 (7.21)   16.93 (5.52)
USPS      16.27 (6.83)   20.04 (6.33)   18.73 (6.20)   16.11 (5.48)   10.71 (3.91)
Sonar     19.34 (6.07)   27.44 (5.67)   25.78 (5.51)   18.45 (4.73)   12.42 (4.03)
Iono      12.35 (5.94)   15.73 (6.62)   15.34 (5.31)   13.03 (4.17)    7.22 (3.12)

CPU time, seconds (standard deviation)

Dataset   TSVM           TSDP           SSDP           TDNNP          Proposed
Art 1     15.4 (3.86)     9.41 (1.03)    5.56 (0.73)   22.7 (2.04)    89.6 (5.42)
Art 2     17.1 (4.28)     9.93 (1.21)    5.84 (0.81)   23.6 (1.85)    86.1 (5.21)
Art 3     19.3 (4.41)     9.77 (1.16)    6.12 (0.74)   24.8 (2.33)    87.5 (6.43)
Art 4     23.6 (4.91)     9.15 (1.13)    6.07 (0.78)   27.3 (2.40)    90.5 (6.87)
Digit 1   25.4 (5.37)    12.8 (1.40)     8.01 (0.95)   33.6 (2.87)   119.7 (7.01)
USPS      27.1 (5.31)    13.2 (1.37)     7.30 (1.13)   34.2 (3.04)   122.5 (7.27)
Sonar     25.8 (5.49)    12.4 (1.53)     7.62 (1.07)   33.1 (2.77)   116.1 (7.48)
Iono      26.3 (5.91)    12.7 (1.27)     7.28 (1.18)   34.7 (2.94)   113.2 (7.36)

The results in Table 2 clearly show that our proposed algorithm achieves promising classification error rates in all instances; it provides much better classification accuracy than any of the other methods in every case. Therefore, our method can be a very effective and powerful tool for solving the 2-norm soft margin S3VM model, especially in situations with high accuracy requirements. On the other hand, our method takes much longer than the other methods. This long computational time is mainly due to the speed of current SDP solvers and is the price paid for the improvement in classification accuracy.

Moreover, we analyze the impact of the number of second-order cones on the classification accuracy of the proposed method. As mentioned in Section 3, a finer cover of the completely positive cone leads to better classification accuracy. In this test, we take 9 different numbers of second-order cones: 1, 4, 7, 10, 13, 16, 19, 22, and 25. For simplicity, the weight values in the series are evenly spaced in each case. Besides, we also check the effect of the level of data uncertainty (i.e., the proportion of labeled points) on the classification accuracy for the different methods. Note that we take 6 different levels of data uncertainty (5%, 10%, 15%, 20%, 25%, and 30% of the data points are randomly picked as the labeled ones). The numerical results for these two tests on the artificial dataset Art 3 are summarized in Figure 1.

As expected, we achieve better classification accuracy by using more second-order cones. Moreover, as the number of second-order cones increases, the classification error rate decreases rapidly at the beginning and more slowly toward the end. Thus, the marginal contribution of additional second-order cones decreases as the total number increases. Similarly, the classification accuracy increases as the level of data uncertainty decreases. Our proposed method also beats the other methods at all levels of data uncertainty. Therefore, our method shows a good and robust performance under data uncertainty.

6. Conclusion

In this paper, we have provided a new conic approach to the 2-norm soft margin S3VM model. This new method achieves a better approximation and leads to more accurate classification than the classical local search method and other known conic relaxations in the literature. Moreover, in order to improve the efficiency of the method, an adaptive scheme is adopted. Eight datasets, including both artificial and real-world ones, have been used in the numerical experiments to test the performance of four different conic relaxations. The results show that our proposed approach produces a much smaller classification error rate than the other methods in all instances. This verifies that our method is quite effective and indicates considerable potential for real-life applications. Besides, our approach provides a novel angle from which to study conic relaxations and sheds some light on future research on approximation methods for S3VM.

The Achilles' heel of this method is the efficiency of solving the SDP relaxations. Thus, it is not yet suitable for large-sized problems. However, the computational time can be significantly shortened as the efficiency of SDP solvers improves. Note that some new techniques, such as the alternating direction method of multipliers (ADMM), have proved to be very efficient in solving SDP problems. Therefore, our future research may consider incorporating these techniques into our scheme.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Tian’s research has been supported by the National Natural Science Foundation of China Grants 11401485 and 71331004.

References

  1. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
  2. Z. Mingheng, Z. Yaobao, H. Ganglong, and C. Gang, "Accurate multisteps traffic flow prediction based on SVM," Mathematical Problems in Engineering, vol. 2013, Article ID 418303, 8 pages, 2013.
  3. X. Wang, J. Wen, S. Alam, X. Gao, Z. Jiang, and J. Zeng, "Sales growth rate forecasting using improved PSO and SVM," Mathematical Problems in Engineering, vol. 2014, Article ID 437898, 13 pages, 2014.
  4. F. E. H. Tay and L. Cao, "Application of support vector machines in financial time series forecasting," Omega, vol. 29, no. 4, pp. 309–317, 2001.
  5. H. Fu and Q. Xu, "Locating impact on structural plate using principal component analysis and support vector machines," Mathematical Problems in Engineering, vol. 2013, Article ID 352149, 8 pages, 2013.
  6. X. Zhu, "Semi-supervised learning literature survey," Computer Sciences Technical Report 1530, University of Wisconsin, Madison, Wis, USA, 2006.
  7. O. Chapelle, V. Sindhwani, and S. S. Keerthi, "Optimization techniques for semi-supervised support vector machines," Journal of Machine Learning Research, vol. 9, no. 1, pp. 203–233, 2008.
  8. Y. Q. Bai, Y. Chen, and B. L. Niu, "SDP relaxation for semi-supervised support vector machine," Pacific Journal of Optimization, vol. 8, no. 1, pp. 3–14, 2012.
  9. T. Joachims, "Transductive inference for text classification using support vector machines," in Proceedings of the 16th International Conference on Machine Learning (ICML '99), pp. 200–209, Bled, Slovenia, June 1999.
  10. A. Blum and S. Chawla, "Learning from labeled and unlabeled data using graph mincuts," in Proceedings of the 18th International Conference on Machine Learning (ICML '01), pp. 19–26, San Francisco, Calif, USA, 2001.
  11. C. Lee, S. Wang, F. Jiao, D. Schuurmans, and R. Greiner, "Learning to model spatial dependency: semi-supervised discriminative random fields," in Advances in Neural Information Processing Systems 19, The MIT Press, Cambridge, Mass, USA, 2006.
  12. K. P. Bennett and A. Demiriz, "Semi-supervised support vector machines," in Advances in Neural Information Processing Systems 11, pp. 368–374, MIT Press, 1999.
  13. B. Zhao, F. Wang, and C. Zhang, "CutS3VM: a fast semi-supervised SVM algorithm," in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), pp. 830–838, Las Vegas, Nev, USA, August 2008.
  14. F. Gieseke, A. Airola, T. Pahikkala, and O. Kramer, "Fast and simple gradient-based optimization for semi-supervised support vector machines," Neurocomputing, vol. 123, no. 1, pp. 23–32, 2014.
  15. R. Collobert, F. Sinz, J. Weston, and L. Bottou, "Large scale transductive SVMs," Journal of Machine Learning Research, vol. 7, no. 1, pp. 1687–1712, 2006.
  16. I. S. Reddy, S. Shevade, and M. N. Murty, "A fast quasi-Newton method for semi-supervised SVM," Pattern Recognition, vol. 44, no. 10-11, pp. 2305–2313, 2011.
  17. V. Sindhwani, S. Keerthi, and O. Chapelle, "Deterministic annealing for semi-supervised kernel machines," in Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 841–848, Pittsburgh, Pa, USA, 2006.
  18. T. de Bie and N. Cristianini, "Semi-supervised learning using semi-definite programming," in Semi-Supervised Learning, O. Chapelle, B. Schölkopf, and A. Zien, Eds., MIT Press, Cambridge, Mass, USA, 2006.
  19. X. Zhu and A. Goldberg, Introduction to Semi-Supervised Learning, Morgan & Claypool Publishers, New York, NY, USA, 2009.
  20. Y. Q. Bai, B. L. Niu, and Y. Chen, "New SDP models for protein homology detection with semi-supervised SVM," Optimization, vol. 62, no. 4, pp. 561–572, 2013.
  21. Y. Bai and X. Yan, "Conic relaxations for semi-supervised support vector machines," Journal of Optimization Theory and Applications, pp. 1–15, 2015.
  22. S. Burer, "On the copositive representation of binary and continuous nonconvex quadratic programs," Mathematical Programming, vol. 120, no. 2, pp. 479–495, 2009.
  23. K. G. Murty and S. N. Kabadi, "Some NP-complete problems in quadratic and nonlinear programming," Mathematical Programming, vol. 39, no. 2, pp. 117–129, 1987.
  24. B. Schölkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, Cambridge University Press, Cambridge, UK, 2002.
  25. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.
  26. J. F. Sturm and S. Zhang, "On cones of nonnegative quadratic functions," Mathematics of Operations Research, vol. 28, no. 2, pp. 246–267, 2003.
  27. R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.
  28. Y. Tian, S.-C. Fang, Z. Deng, and W. Xing, "Computable representation of the cone of nonnegative quadratic forms over a general second-order cone and its application to completely positive programming," Journal of Industrial and Management Optimization, vol. 9, no. 3, pp. 703–721, 2013.
  29. Z. Deng, S.-C. Fang, Q. Jin, and W. Xing, "Detecting copositivity of a symmetric matrix by an adaptive ellipsoid-based approximation scheme," European Journal of Operational Research, vol. 229, no. 1, pp. 21–28, 2013.
  30. Y. Ye and S. Zhang, "New results on quadratic minimization," SIAM Journal on Optimization, vol. 14, no. 1, pp. 245–267, 2003.
  31. K. M. Anstreicher, "Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming," Journal of Global Optimization, vol. 43, no. 2-3, pp. 471–484, 2009.
  32. O. Chapelle, B. Schölkopf, and A. Zien, Semi-Supervised Learning, MIT Press, Cambridge, Mass, USA, 2006.
  33. K. Bache and M. Lichman, UCI Machine Learning Repository, School of Information and Computer Science, University of California, 2013, http://archive.ics.uci.edu/ml.
  34. M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming," version 1.2, 2010, http://cvxr.com/cvx/.
  35. J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods & Software, vol. 11, no. 1, pp. 625–653, 1999.

Copyright © 2016 Ye Tian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

