Abstract

We propose a completely positive programming reformulation of the 2-norm soft margin semisupervised support vector machine (S³VM) model. We then construct a sequence of computable cones of nonnegative quadratic forms over a union of second-order cones to approximate the underlying completely positive cone. An ε-optimal solution can be found by our method in finitely many iterations using semidefinite programming techniques. Moreover, in order to obtain good lower bounds efficiently, an adaptive scheme is adopted in our approximation algorithm. The numerical results show that the proposed algorithm can achieve more accurate classifications than other well-known conic relaxations of semisupervised support vector machine models in the literature.

1. Introduction

Support vector machines (SVMs) are an important machine learning method for classification and pattern recognition. Ever since the first appearance of SVM models around 1995 [1], they have attracted a great deal of attention from numerous researchers due to their attractive theoretical properties and a wide range of applications over the past two decades [2–5].

Notice that traditional SVM models use only labeled data points to construct the separating hyperplane. However, labeled instances are often difficult, expensive, or time-consuming to obtain, since they require much effort from experienced human annotators [6]. Meanwhile, unlabeled data points (i.e., points with only feature information) are usually much easier to collect but seldom used. Therefore, as natural extensions of SVM models, semisupervised support vector machine (S³VM) models address the classification problem by using a large amount of unlabeled data together with the labeled data to build better classifiers. The main idea of S³VM is to maximize the margin between two classes in the presence of unlabeled data, by keeping the boundary traversing through low density regions while respecting labels in the input space [7].

To the best of our knowledge, most traditional SVM models are polynomial-time solvable problems. However, S³VM models are formulated as mixed integer quadratic programming (MIQP) problems, which cause computational difficulty in general [8]. Therefore, researchers have proposed several optimization methods for solving the nonconvex quadratic programming problems associated with S³VM. Joachims [9] developed a local combinatorial search. Blum and Chawla [10] proposed a graph-based method. Lee et al. [11] applied the entropy minimization principle to semisupervised learning for image pixel classification. Besides, some classical techniques for solving MIQP problems have been used for S³VM models, such as the branch-and-bound method [12], the cutting plane method [13], the gradient descent method [14], convex-concave procedures [15], surrogate functions [16], deterministic methods [17], and semidefinite relaxation [18]. For a comprehensive survey of these methods, we refer to Zhu and Goldberg [19]. It is worth pointing out that the linear conic approaches (semidefinite and doubly nonnegative relaxations) are generally quite efficient among these methods [20, 21].

Recently, a new and important linear conic tool called completely positive programming (CPP) has been used to study nonconvex quadratic programs with linear and binary constraints. Burer [22] has pointed out that this type of quadratic program can be equivalently modeled as a linear conic program over the cone of completely positive matrices. Here, “equivalent” means that the two programs have the same optimal value. This result offers a new angle from which to analyze the structure of quadratic and combinatorial problems and provides a new way to approach them. Unfortunately, however, the cone of completely positive matrices is not computable; that is, detecting whether a matrix belongs to the cone is NP-hard [23]. Thus, a natural way of deriving a polynomial-time solvable approximation is to replace the cone of completely positive matrices by some computable cones. Note that two commonly used computable cones are the cone of positive semidefinite matrices and the cone of doubly nonnegative matrices, which lead to the semidefinite and doubly nonnegative relaxations, respectively [21].

However, these relaxations provide fixed lower bounds that cannot be improved further. Hence, these methods are not suitable for situations with high accuracy requirements. Therefore, in this paper, we propose a new approximation to the 2-norm soft margin S³VM model. This method is based on a sequence of computable cones of nonnegative quadratic forms over a union of second-order cones. It is worth pointing out that this approximation can obtain an ε-optimal solution in finitely many iterations. The method also provides a novel angle from which to approach the S³VM model. Moreover, we design an adaptive scheme to improve the efficiency of the algorithm. The numerical results show that our method achieves better classification rates than other benchmark conic relaxations.

The paper is arranged as follows. In Section 2, we briefly review the basic S³VM models and show how to reformulate the corresponding MIQP problem as a completely positive programming problem. In Section 3, we use the computable cones of nonnegative quadratic forms over a union of second-order cones to approximate the underlying cone of completely positive matrices in the reformulation. Then, we prove that this approximation algorithm can obtain an ε-optimal solution in finitely many iterations. An adaptive scheme is proposed to reduce the computational burden and improve the efficiency in Section 4. In Section 5, we investigate the effectiveness of the proposed algorithm on some artificial and real-world benchmark datasets. Finally, we summarize the paper in the last section.

2. Semisupervised Support Vector Machines

Before we start the paper, we introduce some notation used later. ℕ denotes the set of positive integers. e and 0 denote the n-dimensional vectors with all elements being 1 and 0, respectively. I_n is the identity matrix of order n. For a vector x ∈ R^n, let x_i denote the i-th component of x. For two vectors x, y ∈ R^n, x ∘ y denotes the elementwise product of the vectors x and y. Moreover, S^n and S^n_+ denote the set of n × n real symmetric matrices and the set of n × n positive semidefinite matrices, respectively. Besides, N^n denotes the set of n × n real symmetric matrices with all elements being nonnegative. For two matrices A, B ∈ S^n, A ∘ B denotes the elementwise product of the matrices A and B, that is, (A ∘ B)_{ij} = A_{ij} B_{ij}, where A_{ij} and B_{ij} denote the elements of A and B in the i-th row and j-th column, respectively. For a nonempty set T, int T denotes the interior of T, while cl T and cone(T) stand for the closure and the conic hull of T, respectively.

Now we briefly recall the basic model of the 2-norm soft margin S³VM. Given a dataset of n data points {x_1, ..., x_n}, where x_i ∈ R^d, suppose the first l points are labeled and the remaining n − l points are unlabeled. Let y ∈ {−1, 1}^n be the indicator vector, where y_i is known for i = 1, ..., l while y_i is unknown for i = l + 1, ..., n. In order to handle nonlinearity in the data structure, researchers propose a method that projects these nonlinear problems into linear problems in a high-dimensional feature space via a feature function φ : R^d → R^D, where D is the dimension of the feature space [24]. Then the points are separated by the hyperplane w^T φ(x) = 0 in the new space, where w ∈ R^D. For linearly inseparable data points, the slack variables η_i are used to measure the misclassification errors when the (labeled or unlabeled) points do not fall on the correct side of the hyperplane [12]. The error is penalized in the objective function of S³VM models by multiplying it by a positive penalty parameter C. Moreover, in order to avoid nonconvexity in the reformulated problem, a traditional trick is to drop the bias term b [8]. It is worth pointing out that this negative effect can be mitigated by centering the data at the origin [18]. Like the traditional SVM model, the main idea of S³VM models is to classify labeled and unlabeled data points into two classes with a maximum separation between them.

Above all, the 2-norm soft margin S³VM model with kernel function κ can be written as follows:

min_{w, η, y_u}  (1/2) w^T w + C Σ_{i=1}^n η_i²
s.t.  y_i w^T φ(x_i) ≥ 1 − η_i,  i = 1, ..., n,
      y_i ∈ {−1, 1},  i = l + 1, ..., n.        (1)

To handle the kernel function κ, a kernel matrix K ∈ S^n is introduced as K_{ij} = κ(x_i, x_j) = φ(x_i)^T φ(x_j). Cristianini and Shawe-Taylor [25] have pointed out that K is positive semidefinite for kernel functions such as the linear kernel and the Gaussian kernel. It is worth pointing out that problem (1) can be reformulated as the following problem [21]:

min_{y}  max_{λ ≥ 0}  e^T λ − (1/2) λ^T (Q ∘ yy^T) λ
s.t.  y_i ∈ {−1, 1},  i = l + 1, ..., n,        (2)

where Q = K + I_n/C and λ ∈ R^n is the dual variables vector. Note that this problem is a MIQP problem which is generally difficult to solve.

In order to handle the nonconvex objective function of problem (2), we reformulate it into a completely positive programming problem. Note that Bai and Yan [21] have proved that the objective function of problem (2) can be equivalently written as (1/2) y^T Q^{−1} y. Moreover, let t = (y + e)/2 and replace y by 2t − e, where t ∈ {0, 1}^n. Then problem (2) can be equivalently reformulated as follows:

min_t  (1/2) (2t − e)^T Q^{−1} (2t − e)
s.t.  t_i = (y_i + 1)/2,  i = 1, ..., l,
      t_i ∈ {0, 1},  i = 1, ..., n.        (3)

Moreover, let W = Q^{−1}. It is worth pointing out that W is positive semidefinite [8]. Let I = {i ≤ l : y_i = 1} and J = {i ≤ l : y_i = −1} denote the two index sets of positively and negatively labeled points, respectively. It is easy to verify that problem (3) can be equivalently written as

min_t  (1/2) (2t − e)^T W (2t − e)
s.t.  t_i = 1,  i ∈ I,
      t_i = 0,  i ∈ J,
      t_i ∈ {0, 1},  i = 1, ..., n.        (4)

Now, problem (4) is a nonconvex quadratic programming problem with binary constraints, which falls into the class of mixed binary and continuous quadratic programs studied in [22]. Since y = 2t − e, we have t = (y + e)/2. Thus, for a feasible solution t of problem (4), we can get a feasible solution y of problem (2) by the following equation:

y = 2t − e.        (5)

Following the standard relaxation techniques in [22], we get the equivalent completely positive programming problem as follows:

min_{t, X}  2 tr(WX) − 2 e^T W t + (1/2) e^T W e
s.t.  t_i = 1, X_{ii} = 1,  i ∈ I,
      t_i = 0, X_{ii} = 0,  i ∈ J,
      X_{ii} = t_i,  i = 1, ..., n,
      [1, t^T; t, X] ∈ C^{n+1},        (6)

where C^{n+1} is the cone of (n + 1) × (n + 1) completely positive matrices; that is, C^{n+1} = cl cone{zz^T : z ∈ R^{n+1}, z ≥ 0}.

From the equivalence theorem of [22], we know that the optimal value of problem (6) is equal to the optimal value of problem (2). However, detecting whether a matrix belongs to C^{n+1} is NP-hard [23]. Thus, C^{n+1} is not computable. A natural way of deriving a polynomial-time solvable approximation is to replace C^{n+1} by some commonly used computable cones, such as S^{n+1}_+ or S^{n+1}_+ ∩ N^{n+1} [21]. However, these are simple relaxations which lead to loose lower bounds. Moreover, these conic relaxations cannot be improved to a desired accuracy. Thus, they are not proper for situations with high accuracy requirements. Therefore, we propose a new approximation that yields an ε-optimal solution of the original MIQP problem in the next section.
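
To make the replacement concrete, the sketch below sets up the doubly nonnegative relaxation of a problem in the form of (6), that is, with C^{n+1} replaced by S^{n+1}_+ ∩ N^{n+1}. The paper's experiments use Matlab with cvx and SeDuMi; this Python/CVXPY version is only an illustrative stand-in, and the function name and data layout are our own assumptions.

```python
import numpy as np
import cvxpy as cp

def dnn_relaxation(W, pos_idx, neg_idx):
    """Doubly nonnegative relaxation of a CPP problem in the form of (6):
    the completely positive cone C^{n+1} is replaced by the computable
    intersection of the PSD cone and the elementwise nonnegative cone."""
    n = W.shape[0]
    Y = cp.Variable((n + 1, n + 1), symmetric=True)  # Y plays [1, t'; t, X]
    t, X = Y[0, 1:], Y[1:, 1:]
    cons = [Y[0, 0] == 1, Y >> 0, Y >= 0,            # DNN cone membership
            cp.diag(X) == t]                         # X_ii = t_i (binary t)
    cons += [t[i] == 1 for i in pos_idx]             # labeled points, y_i = +1
    cons += [t[i] == 0 for i in neg_idx]             # labeled points, y_i = -1
    obj = 2 * cp.trace(W @ X) - 2 * cp.sum(W @ t) + 0.5 * np.sum(W)
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()                                     # any SDP-capable solver
    return prob.value, t.value
```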

3. An ε-Optimal Approximation Method

We first introduce the definitions of the cone of nonnegative quadratic forms and its dual cone. Given a nonempty set F ⊆ R^n, Sturm and Zhang [26] defined the cone of nonnegative quadratic forms over F as

D(F) = {M ∈ S^n : x^T M x ≥ 0 for all x ∈ F}.

Its dual cone is

D*(F) = cl cone{xx^T : x ∈ F}.

From the definition, it is easy to see that if F_1 ⊆ F_2 then D*(F_1) ⊆ D*(F_2). Therefore, if F is a tight cover of R^n_+, then D*(F) is a good approximation of C^n = D*(R^n_+). In particular, F can be a union of several computable cones. Let G_1, ..., G_m ⊆ S^n be nonempty cones; the summation of these sets is defined by

G_1 + ⋯ + G_m = {Y_1 + ⋯ + Y_m : Y_j ∈ G_j, j = 1, ..., m}.        (7)

Note that a corollary in [27] and a lemma in [28] indicate that the summation D*(F_1) + ⋯ + D*(F_m) of the dual cones of nonnegative forms over each cone equals D*(F_1 ∪ ⋯ ∪ F_m), and hence it is an approximation of C^n when F_1 ∪ ⋯ ∪ F_m ⊇ R^n_+.

In the rest of this paper, we will set each F_j to be a nontrivial second-order cone SOC_j = {x ∈ R^n : x^T U_j x ≤ 0, v_j^T x ≥ 0}, where U_j ∈ S^n and v_j ∈ R^n. Here, “nontriviality” means that the second-order cone contains at least one point other than the origin. The author and his coauthors have proved that D*(SOC_j) has a linear matrix inequality (LMI) representation and is thus computable. This result is stated in the next theorem.

Theorem 1 (see [28]). Let SOC = {x ∈ R^n : x^T U x ≤ 0, v^T x ≥ 0} be a nontrivial second-order cone with U ∈ S^n, v ∈ R^n, and U having exactly one negative eigenvalue. Then Z ∈ D*(SOC) if and only if Z satisfies that tr(UZ) ≤ 0 and Z ∈ S^n_+.
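
Assuming the parametrization above, membership in D*(SOC) reduces to two cheap numerical checks. A minimal sketch (the tolerance handling is our own choice):

```python
import numpy as np

def in_dual_cone(Z, U, tol=1e-9):
    """Theorem 1 membership test: Z is in D*(SOC) iff Z is positive
    semidefinite and trace(U Z) <= 0."""
    psd = np.linalg.eigvalsh(Z).min() >= -tol   # smallest eigenvalue of Z
    return psd and np.trace(U @ Z) <= tol
```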

Then the next issue is to cover R^n_+ by a union of second-order cones. Notice that R^n_+ = cone(Δ), where Δ = {x ∈ R^n_+ : e^T x = 1} is the standard simplex. Let P = {P_1, ..., P_m} be a set of polyhedrons in Δ, where P_j ⊆ Δ for j = 1, ..., m. Assume P is a partition of Δ; then we have ∪_{j=1}^m P_j = Δ. If we can find a corresponding second-order cone SOC_j such that cone(P_j) ⊆ SOC_j for j = 1, ..., m, then ∪_{j=1}^m cone(P_j) = R^n_+ leads to C^n ⊆ D*(SOC_1) + ⋯ + D*(SOC_m). Thus, we can replace the uncomputable cone C^{n+1} by the computable cone D*(SOC_1) + ⋯ + D*(SOC_m) (with n + 1 in place of n) in problem (6) to generate a better lower bound.

Now, we show how to generate a proper second-order cone to cover a polyhedron. Suppose u_1, ..., u_n are the vertices of P_j. Since P_j is full-dimensional in Δ, rank(M_j) = n, where M_j is the matrix formed by the vectors u_1, ..., u_n as columns. Note that v_j is the unique solution of the system of equations u_i^T v = 1, i = 1, ..., n. Let SOC_j = {x ∈ R^n : x^T W_j x ≤ (v_j^T x)², v_j^T x ≥ 0}; then we have cone(P_j) ⊆ SOC_j if and only if u_i^T W_j u_i ≤ 1 for i = 1, ..., n. Therefore, we only need to find a proper W_j for P_j by solving the following convex programming problem [28]:

max_{W ∈ S^n_+}  log det W
s.t.  u_i^T W u_i ≤ 1,  i = 1, ..., n.        (8)

It is worth pointing out that if W_j* is the optimal solution of problem (8), then {x ∈ R^n : x^T W_j* x ≤ 1} is the smallest ellipsoid that centers at the origin and covers u_1, ..., u_n. Hence, SOC_j (with U_j = W_j* − v_j v_j^T in the notation of Theorem 1) is a tight cover of cone(P_j).
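
Under the reconstruction above, problem (8) is the classical minimum-volume origin-centered covering ellipsoid program, a log-det maximization. A hedged CVXPY sketch (`covering_soc` is a hypothetical helper; the paper's experiments use Matlab/cvx):

```python
import numpy as np
import cvxpy as cp

def covering_soc(vertices):
    """Given the vertices u_1,...,u_n of a polyhedron P (as the columns of
    `vertices`), return v solving u_i'v = 1 and the W of the smallest
    origin-centered ellipsoid {x : x'Wx <= 1} covering the vertices, so
    that cone(P) lies in SOC = {x : x'Wx <= (v'x)^2, v'x >= 0}."""
    n = vertices.shape[0]
    v = np.linalg.solve(vertices.T, np.ones(n))   # u_i' v = 1 for all i
    W = cp.Variable((n, n), PSD=True)
    cons = [cp.quad_form(vertices[:, i], W) <= 1 for i in range(n)]
    cp.Problem(cp.Maximize(cp.log_det(W)), cons).solve()
    return W.value, v
```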

Now suppose there is a fixed polyhedron partition P = {P_1, ..., P_m} of the simplex Δ in R^{n+1} and the corresponding second-order cone SOC_j covers cone(P_j) for j = 1, ..., m. Then R^{n+1}_+ ⊆ ∪_{j=1}^m SOC_j and C^{n+1} ⊆ D*(SOC_1) + ⋯ + D*(SOC_m). Let G(P) = D*(SOC_1) + ⋯ + D*(SOC_m); then we have the following computable approximation:

min_{t, X}  2 tr(WX) − 2 e^T W t + (1/2) e^T W e
s.t.  t_i = 1, X_{ii} = 1,  i ∈ I,
      t_i = 0, X_{ii} = 0,  i ∈ J,
      X_{ii} = t_i,  i = 1, ..., n,
      [1, t^T; t, X] ∈ G(P).        (9)
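
Via Theorem 1, membership in G(P) can be imposed with one PSD variable and one trace inequality per second-order cone. A sketch of the constraint assembly, assuming each SOC_j is encoded by U_j = W_j − v_j v_j^T as above; these constraints can replace the DNN constraints in the earlier `dnn_relaxation` sketch:

```python
import cvxpy as cp

def sum_of_dual_cones(Y, U_list):
    """Constrain Y to lie in D*(SOC_1) + ... + D*(SOC_m):
    Y = Z_1 + ... + Z_m with each Z_j PSD and trace(U_j Z_j) <= 0."""
    n1 = Y.shape[0]
    Zs = [cp.Variable((n1, n1), PSD=True) for _ in U_list]
    cons = [Y == sum(Zs)]                                # the cone summation
    cons += [cp.trace(U @ Z) <= 0 for U, Z in zip(U_list, Zs)]
    return cons, Zs
```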

In order to measure the accuracy of the approximation, we define the maximum diameter of a polyhedron partition and the δ-neighborhood of a domain.

Definition 2. For a polyhedron partition P = {P_1, ..., P_m}, the maximum diameter of P is defined as

d(P) = max_{1≤j≤m} max_{x,y ∈ P_j} ‖x − y‖_∞.

Definition 3 (see [29]). For a set T ⊆ R^n and δ > 0, the δ-neighborhood of T is defined as

N_δ(T) = {x ∈ R^n : min_{y ∈ T} ‖x − y‖_∞ ≤ δ},

where ‖·‖_∞ denotes the infinity norm.
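
Definition 2's quantity is easy to evaluate from vertex lists, since a polytope's diameter is attained at a pair of vertices. A small sketch, assuming each polyhedron is given by its vertices and using the infinity norm as in Definition 3:

```python
import numpy as np
from itertools import combinations

def max_diameter(partition):
    """d(P): the largest pairwise vertex distance (infinity norm) over
    all polyhedrons, each given as a list of vertex arrays."""
    return max(np.max(np.abs(a - b))
               for vertices in partition
               for a, b in combinations(vertices, 2))
```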

Then the next theorem shows that the lower bounds obtained from this approximation can converge to the optimal value as the number of polyhedrons increases.

Theorem 4. Let v* be the optimal objective value of problem (2) and let v_k, k ∈ ℕ, be the lower bounds sequentially returned by solving the corresponding problem (9) with k polyhedrons in the partition P_k of Δ. For any ε > 0, if d(P_k) converges to 0 as the number of polyhedrons in P_k increases, then there exists K ∈ ℕ such that v* − v_k ≤ ε for any k ≥ K.

Proof. Since W is positive semidefinite, problem (4) is bounded below. Thus, the common optimal value v* of problems (2) and (6) is finite. Let Θ be a function, where Θ(δ) is the optimal value of problem (6) obtained by replacing C^{n+1} with D*(cone(N_δ(Δ))). By definition, for 0 ≤ δ_1 ≤ δ_2, we have Θ(δ_2) ≤ Θ(δ_1). Thus Θ is monotonically increasing as δ tends to zero. Note that when δ = 0, Θ(0) = v*; thus v* is an upper bound of the function Θ. It is easy to verify that Θ is also a continuous function. Thus, for any ε > 0, there exists δ̄ > 0 such that, when δ ≤ δ̄, we have v* − Θ(δ) ≤ ε. Because d(P_k) converges to 0, there exists K ∈ ℕ such that, when k ≥ K, d(P_k) is small enough that, for each polyhedron P in P_k and its corresponding second-order cone SOC_P, SOC_P ⊆ cone(N_{δ̄}(P)). Thus, ∪_P SOC_P ⊆ cone(N_{δ̄}(Δ)), and we have Σ_P D*(SOC_P) ⊆ D*(cone(N_{δ̄}(Δ))). Let F_k and v_k denote the feasible domain and optimal value of problem (9), where k polyhedrons are used in the partition. Then F_k is contained in the feasible domain of the problem defining Θ(δ̄), so v_k ≥ Θ(δ̄) ≥ v* − ε. Since problem (9) is a relaxation of problem (6), we also have v_k ≤ v*. Thus, v* − v_k ≤ ε for any k ≥ K.

In summary, we have shown that our new approximation method can indeed produce an ε-optimal solution of problem (6) in finitely many iterations. However, the exact number of polyhedrons required to achieve an ε-optimal solution depends on the instance and the strategy for partitioning Δ.

4. An Adaptive Approximation Algorithm

Theorem 4 guarantees that an ε-optimal solution can be obtained in finitely many iterations. However, it may take an enormous amount of time to partition the standard simplex finely enough to meet an extremely high accuracy requirement. Therefore, considering the balance between computational burden and accuracy, in this section we design an adaptive approximation algorithm that partitions the underlying simplex into special polyhedrons at one time to get a good approximate solution.

First, we solve problem (9) with m = 1, that is, with a single second-order cone SOC_1 such that R^{n+1}_+ ⊆ SOC_1, to find an optimal solution (t*, X*). Let Y* = [1, (t*)^T; t*, X*]; then Y* ∈ D*(SOC_1). According to the decomposition scheme in the lemma of [30], there exists a rank-one decomposition for Y* such that Y* = Σ_{k∈Λ} z_k z_k^T and z_k ∈ SOC_1, k ∈ Λ, where Λ is the index set for the decomposition. If z_k ≥ 0 for each k ∈ Λ, then Y* is completely positive and feasible to problem (6). Hence the optimal value of problem (9) is also the optimal value of problem (6). Otherwise, there exists at least one k ∈ Λ such that z_k has a negative component. Denote Λ⁻ as the index set of the decomposed vectors z_k which violate the nonnegative constraints. Let (z_k)_i denote the i-th element of the vector z_k. For any k ∈ Λ⁻, let ẑ_k be a new vector such that (ẑ_k)_i = (z_k)_i for (z_k)_i ≥ 0 and (ẑ_k)_i = 0 for (z_k)_i < 0, i = 1, ..., n + 1. Then we can define the sensitive index as follows.

Definition 5. Let {z_k}_{k∈Λ} be a rank-one decomposition of an optimal solution of problem (9). Define any z_{k*} with k* ∈ argmax_{k∈Λ⁻} ‖z_k − ẑ_k‖ to be a sensitive solution. Pick the sensitive solution with the smallest index among all sensitive solutions, and suppose î is the smallest index such that (z_{k*})_î is the smallest among all the components of that sensitive solution; then define î to be the sensitive index.

Actually, the sensitive index is the one in which the decomposed vector most “violates” the nonnegative constraint. Based on this information, we can shrink the feasible domain of problem (9) and cut off the corresponding optimal solution which is infeasible to problem (6).
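
The following sketch mimics the violation check and Definition 5 with a generic eigenvalue-based rank-one decomposition in place of the specific scheme of [30]; the names and tolerances are our own, and an empty violator list merely certifies complete positivity for this particular decomposition.

```python
import numpy as np

def rank_one_factors(Y, tol=1e-9):
    """Write the PSD matrix Y as sum_k z_k z_k' via an eigendecomposition
    (a generic stand-in for the decomposition scheme of [30]) and collect
    the indices of factors that violate nonnegativity."""
    vals, vecs = np.linalg.eigh(Y)
    zs = [np.sqrt(lam) * q for lam, q in zip(vals, vecs.T) if lam > tol]
    zs = [z if z.sum() >= 0 else -z for z in zs]   # pick a sign convention
    violators = [k for k, z in enumerate(zs) if z.min() < -tol]
    return zs, violators   # empty violators certifies complete positivity

def sensitive_index(zs, violators):
    """Definition 5: the sensitive solution is the violating factor whose
    negative part is largest in norm; the sensitive index is the position
    of its smallest component (smallest index on ties)."""
    neg_norm = lambda z: np.linalg.norm(np.minimum(z, 0.0))
    k_star = max(violators, key=lambda k: neg_norm(zs[k]))
    return int(np.argmin(zs[k_star]))  # argmin returns the smallest index
```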

For i = 1, ..., n + 1, let ê_i ∈ R^{n+1} denote the vector with all elements being 0 except the i-th element being 1. Obviously, ê_i is a vertex of the standard simplex Δ. Now suppose î is the sensitive index; the simplex can be partitioned into small polyhedrons based on this index as follows. Let V = {ê_i : i ≠ î} be the set of all vertices of Δ except the vertex ê_î and let 1 = w_0 > w_1 > ⋯ > w_s = 0 be a series of real values. Then we can generate some new sets of points in Δ by convexly combining the vertices in set V and the vertex ê_î according to different weight values:

V_r = {w_r ê_i + (1 − w_r) ê_î : ê_i ∈ V},  r = 0, 1, ..., s.

Note that, for r < s, V_r has n linearly independent points while V_s contains only one point ê_î. Let P_r be the polyhedron generated by the convex hull of the points in the union set V_{r−1} ∪ V_r for r = 1, ..., s. Let M_r be the matrix formed by the points in V_{r−1} and V_r as the column vectors. It is easy to verify that rank(M_r) = n + 1. Hence, there is only one unique solution v_r of the system of equations M_r^T v = e for r = 1, ..., s. Thus, for each polyhedron P_r, we only need to solve problem (8) to find the corresponding second-order cone SOC_r such that cone(P_r) ⊆ SOC_r.
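
A small sketch of the point generation for this partition, under the reconstruction above (the sensitive vertex is blended with every other vertex at each weight level; the names are our own):

```python
import numpy as np

def partition_point_sets(n_plus_1, i_hat, weights):
    """Generate V_0, ..., V_s: each V_r convexly combines every vertex
    e_i (i != i_hat) with the sensitive vertex e_{i_hat} using weight w_r;
    w_r = 0 collapses V_r to the single point e_{i_hat}."""
    E = np.eye(n_plus_1)
    others = [E[i] for i in range(n_plus_1) if i != i_hat]
    sets = []
    for w in weights:                # 1 = w_0 > w_1 > ... > w_s = 0
        if w == 0:
            sets.append([E[i_hat]])
        else:
            sets.append([w * u + (1 - w) * E[i_hat] for u in others])
    return sets                      # P_r = conv(sets[r-1] + sets[r])
```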

Remark 6. Since P_r is highly symmetric with respect to the sensitive index î (the vertices in V can be rotated into one another), the positive definite matrix W_r in the second-order cone which covers cone(P_r) has a simple structure. Thus, problem (8) can be simplified and quickly solved.

Note that practical users can adjust the number of polyhedrons in the partition to get solutions with different accuracies as they wish. Besides, redundant constraints can be added to improve the performance of a relaxed problem [31]. Note that X_{ij} represents the product t_i t_j and t_i ∈ {0, 1} for i, j = 1, ..., n. Therefore, we can add the redundant constraints 0 ≤ X_{ij} ≤ 1 to further improve problem (9).

5. Numerical Tests

In this section, we compare our algorithm to some existing solvable conic relaxations of the 2-norm soft margin S³VM model, namely, TSDP in [18] and SSDP and TDNNP in [21]. Besides, we also add the transductive support vector machine (TSVM) [9], which is a classical local search algorithm, to the comparison. Several artificial and real-world benchmark datasets are used to test the performances of these methods.

To obtain the artificial datasets for the computational experiment, we first generate different quadratic surfaces with various matrices and vectors in different dimensions. Here, the eigenvalues of each matrix are randomly selected from a fixed interval. Then, for each case, we randomly generate some points on one side of the surface (labeled as Class +1) and some points on the other side (labeled as Class −1). Moreover, in order to prevent some points from lying too far away from the separating surface, all the points are generated in a ball of radius R centered at the origin, where R is a positive value.
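
A sketch of this generation procedure (the interval for the eigenvalues, the rejection sampling, and all names are illustrative placeholders):

```python
import numpy as np

def quadratic_surface_data(n_points, dim, radius, rng=None):
    """Label points inside a ball by their side of a random quadratic
    surface x'Ax + b'x + c = 0, with the eigenvalues of A drawn at random."""
    rng = rng or np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    A = Q @ np.diag(rng.uniform(-1.0, 1.0, dim)) @ Q.T  # random eigenvalues
    b, c = rng.standard_normal(dim), rng.standard_normal()
    points, labels = [], []
    while len(points) < n_points:
        x = rng.uniform(-radius, radius, dim)
        if np.linalg.norm(x) <= radius:                 # keep inside the ball
            points.append(x)
            labels.append(1 if x @ A @ x + b @ x + c > 0 else -1)
    return np.array(points), np.array(labels)
```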

The real-world datasets come from the semisupervised learning (SSL) benchmark datasets [32] and the UC Irvine Machine Learning Repository (UCI) datasets [33]. We use four datasets in our computational experiment, two from SSL (Digit 1 and USPS) and two from UCI (Iono and Sonar). In particular, since our main aim is to compare our method with the state-of-the-art methods in [21], we follow the same setting and restrict the total number of points to 70 in all real-world datasets.

For each dataset, we conduct 100 independent trials by randomly picking the labeled points. The information of these datasets is provided in Table 1.

The kernel matrices are constructed by the Gaussian kernel throughout the test, where the kernel parameter is chosen as the median of the pairwise distances between the data points. Besides, the optimal penalty parameter C is tuned by the grid method, and the number of polyhedrons and the weight series w_0 > w_1 > ⋯ > w_s are set for each test. All the tests are carried out in Matlab 7.9.0 on a computer equipped with an Intel Core i5 CPU at 3.3 GHz and 4 GB of memory. Moreover, the cvx package [34] with the SeDuMi 1.3 solver [35] is used to solve these problems.
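
The kernel construction can be reproduced with a few lines; the exact parametrization of the Gaussian kernel below is our assumption, but the median heuristic is as stated:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_kernel_median(X):
    """Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)),
    with sigma set to the median of the nonzero pairwise distances."""
    D = squareform(pdist(X))         # pairwise Euclidean distances
    sigma = np.median(D[D > 0])      # the median heuristic
    return np.exp(-D**2 / (2 * sigma**2))
```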

The performance of these methods is measured by their classification error rates, the standard deviation of the error rates, and computational times. Here the classification error rate is the ratio of the number of misclassified points to the total number of unlabeled points, which directly reflects the classification accuracy of the method. Moreover, the standard deviation indicates the stability of the method, while the computational time indicates its efficiency.
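
For completeness, the error rate used throughout the tables is simply (a hypothetical helper):

```python
import numpy as np

def error_rate(y_pred, y_true, unlabeled_idx):
    """Fraction of unlabeled points that are misclassified."""
    return float(np.mean(y_pred[unlabeled_idx] != y_true[unlabeled_idx]))
```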

Table 2 summarizes the comparison results of four methods on both artificial and real-world datasets. In this table, “rate” denotes the classification error rate as a percentage and “time” denotes the corresponding CPU time in seconds. The number outside the bracket denotes the average value while the number inside the bracket denotes the standard deviation. Note that all of these results are derived from 100 independent trials.

The results from Table 2 clearly show that our algorithm achieves promising classification error rates in all instances. It provides much better classification accuracy than the other methods in every case. Therefore, our method can be a very effective and powerful tool for solving the 2-norm soft margin S³VM model, especially in situations with high accuracy requirements. On the other hand, our method takes much longer than the other methods. This long computational time is due to the slow speed of current SDP solvers, and it is the price paid for the improvement in classification accuracy.

Moreover, we analyze the impact of the number of second-order cones on the classification accuracy of our proposed method. As we mentioned in Section 3, a finer cover, which gives a tighter approximation of the completely positive cone, would lead to a better classification accuracy. In this test, we take 9 different numbers of second-order cones: 1, 4, 7, 10, 13, 16, 19, 22, and 25. For simplicity, we evenly space the values of the weight series in each case. Besides, we also check the effect of the level of data uncertainty on the classification accuracy for the different methods. Note that we take 6 different levels of data uncertainty (5%, 10%, 15%, 20%, 25%, and 30% of the data points are randomly picked as the labeled ones). The numerical results for these two tests on the artificial dataset (Artificial 3) are summarized in Figure 1.

As we expected, a better classification accuracy is achieved by using more second-order cones. Moreover, as the number of second-order cones increases, the classification error rate decreases rapidly at the beginning and slowly at the end. Thus, the marginal contribution of the second-order cones decreases as the total number increases. Similarly, the classification accuracy increases as the level of data uncertainty decreases. Moreover, our proposed method beats the other methods at all levels of data uncertainty. Therefore, our method has a very good and robust performance under data uncertainty.

6. Conclusion

In this paper, we have provided a new conic approach to the 2-norm soft margin S³VM model. This new method achieves a better approximation and leads to a more accurate classification than the classical local search method and other known conic relaxations in the literature. Moreover, in order to improve the efficiency of the method, an adaptive scheme is adopted. Eight datasets, including both artificial and real-world ones, have been used in the numerical experiments to test the performances of the different methods. The results show that our proposed approach produces a much smaller classification error rate than the other methods in all instances. This verifies that our method is quite effective and indicates great potential for real-life applications. Besides, our approach provides a novel angle to study conic relaxations and sheds some light on future research into approximation methods for S³VM.

The Achilles’ heel of this method is the efficiency of solving the SDP relaxations. Thus, it is not suitable for large-sized problems. However, the computational time can be significantly shortened as the efficiency of SDP solvers improves. Note that some new techniques, such as the alternating direction method of multipliers (ADMM), have proved to be very efficient in solving SDP problems. Therefore, our future research can consider incorporating these techniques into our scheme.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Tian’s research has been supported by the National Natural Science Foundation of China Grants 11401485 and 71331004.