Abstract

The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM seeks the largest margin between the positive and negative support vectors and neglects instances remote from the SVM interface. In order to avoid a position change of the SVM interface caused by erroneous system outliers, C-SVM was introduced to decrease the influence of such outliers. Traditional C-SVM holds a uniform penalty parameter $C$ for both positive and negative instances; however, according to the different number proportions and data distributions, positive and negative instances should be given different weights for the penalty parameter of the error term. Therefore, in this paper, we propose a density-based penalty parameter optimization of C-SVM. The experimental results indicate that our proposed algorithm has outstanding performance with respect to both precision and recall.

1. Introduction

Data classification algorithms, such as logistic regression (LR) [1–6] and support vector machine (SVM) [7–10], are crucial in many applications. SVM pursues a maximum-interval interface while disregarding the distances from the remote instances to the SVM interface [11–13]. The discriminant equation of the SVM model can be written as

    $f(x) = w^{T}x + b = \sum_{j=1}^{n} w_{j}x_{j} + b,$  (1)

where $x = (x_{1}, x_{2}, \ldots, x_{n})$ denotes the eigenvector of an arbitrary instance input and $x_{j}$ is a concrete feature in the eigenvector, in which $j = 1, 2, \ldots, n$. The model is trained with all the positive instances labeled $y = +1$ and the negative instances labeled $y = -1$ in order to pursue the appropriate values for the parameters $w$ and $b$. Thus, an unknown instance $x$ will be classified as a positive case when $f(x) \geq 0$, and vice versa.
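
As a minimal illustration of the decision rule in (1), the following sketch evaluates the discriminant for one instance; the weight, bias, and feature values are arbitrary placeholders, not learned from any dataset in this paper:

    import numpy as np

    w = np.array([0.8, -0.5, 1.2])   # weight vector (arbitrary example values)
    b = -0.3                         # bias term
    x = np.array([1.0, 2.0, 0.5])    # eigenvector of an unknown instance

    f = w @ x + b                    # discriminant value, eq. (1)
    label = 1 if f >= 0 else -1      # classified as positive when f(x) >= 0
    print(f, label)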

Traditional SVM guarantees a strict classification in which the classification model is constructed by the positive support vectors with $w^{T}x + b = +1$, the negative support vectors with $w^{T}x + b = -1$, and the SVM interface with $w^{T}x + b = 0$. Since all the positive instances hold distances $\geq 1$ and all the negative instances hold distances $\leq -1$, this leads to the following problems: in many datasets, positive and negative instances are interlaced and cannot be separated under a regular kernel function; a meticulous training may cause an overfitting phenomenon that satisfies the classification of the training set to the maximum extent by sacrificing the systematic performance on the data in the probe set; and overtraining usually costs more computation. In order to overcome these shortcomings, C-SVM is introduced to improve the adaptability of the traditional SVM model [14, 15]. In the C-SVM model, the coefficient $C$ is used to control the tolerance of the systematic outliers, where a larger $C$ allows fewer outliers to exist in the opponent classification. Coefficient $C$ is an empirical parameter that is usually worked out via a grid-search process. C-SVM holds a uniform $C$ for both positive and negative instances, which only suits datasets in which the two classes have similar distributions. In LIBSVM, the C-SVM model is improved by the number proportion of the positive instances to the negative ones [13, 14]; however, the spatial distribution of the initial instances is not involved in the model training process. In this paper, we aim to provide a better selection of the value of the parameter $C$, so that, under the same conditions, a relatively more accurate classification result can be achieved.

2. Traditional Model of SVM

According to (1), since $f(x_{i})$ is positive (or negative) when $y_{i} = +1$ (or $y_{i} = -1$), the functional interval can be denoted by $\hat{\gamma}_{i} = y_{i}(w^{T}x_{i} + b)$, where $\gamma_{i} = \hat{\gamma}_{i}/\|w\|$ is the distance between an arbitrary instance $x_{i}$ and the SVM interface. When seeking the appropriate $w$ and $b$ in order to maximize the distance between the support vectors and the SVM interface, scaling $w$ and $b$ proportionally will not change this distance. Thus, the objective can be presented by

    $\max_{w,b}\ \dfrac{\hat{\gamma}}{\|w\|}$ subject to $y_{i}(w^{T}x_{i} + b) \geq \hat{\gamma},\ i = 1, \ldots, m.$  (2)

Normalizing the interval to $\hat{\gamma} = 1$, (2) can be subjected to

    $\max_{w,b}\ \dfrac{1}{\|w\|}$ subject to $y_{i}(w^{T}x_{i} + b) \geq 1,\ i = 1, \ldots, m.$  (3)

Since maximizing $1/\|w\|$ is not a convex problem, (3) is converted to the equivalent form

    $\min_{w,b}\ \dfrac{1}{2}\|w\|^{2}$ subject to $y_{i}(w^{T}x_{i} + b) \geq 1,\ i = 1, \ldots, m.$  (4)

Computing the minimum of (4) under the condition of $y_{i}(w^{T}x_{i} + b) \geq 1$, the Lagrangian function can be imported by

    $L(w, b, \alpha) = \dfrac{1}{2}\|w\|^{2} - \sum_{i=1}^{m}\alpha_{i}\left[y_{i}(w^{T}x_{i} + b) - 1\right], \quad \alpha_{i} \geq 0.$  (5)

The minimum can be acquired by the derivation with respect to the parameters $w$ and $b$ such that

    $\dfrac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{i=1}^{m}\alpha_{i}y_{i}x_{i}, \qquad \dfrac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{m}\alpha_{i}y_{i} = 0.$  (6)

Integrating (5) and (6), we finally obtain the dual problem

    $\max_{\alpha}\ \sum_{i=1}^{m}\alpha_{i} - \dfrac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_{i}\alpha_{j}y_{i}y_{j}x_{i}^{T}x_{j}$ subject to $\alpha_{i} \geq 0,\ \sum_{i=1}^{m}\alpha_{i}y_{i} = 0.$  (7)

Combining (5) and (7), we get

    $f(x) = \sum_{i=1}^{m}\alpha_{i}y_{i}x_{i}^{T}x + b.$  (8)

In (8), the value of $f(x)$ is only related to the parameters $\alpha_{i}$. The training process of $\alpha$ can be solved by the sequential minimal optimization (SMO) algorithm [16–18].
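
The dual solution in (7) and (8) can be inspected directly with scikit-learn's interface to LIBSVM. The following toy sketch (arbitrary data; a very large $C$ approximates the hard-margin model of this section) recovers $w$ from the dual coefficients, as in (6):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.5]])
    y = np.array([1, 1, 1, -1, -1])

    clf = SVC(kernel='linear', C=1e6).fit(X, y)

    # dual_coef_ stores alpha_i * y_i for the support vectors, so w = sum_i alpha_i y_i x_i
    w = clf.dual_coef_ @ clf.support_vectors_
    b = clf.intercept_
    print(np.sign(X @ w.ravel() + b))   # reproduces the decision rule of eq. (8)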

3. C-SVM on the Penalty Parameter of the Error Term

The selection of the SVM interface is determined by the distribution of the support vectors. This means that a slight position change of one single support vector could lead to an obvious movement of the SVM interface. In another situation, if there is an instance that is a system outlier lying in the area of the opposite class, the SVM interface must be inflected and will no longer generate accurate classification results. Therefore, an error term is introduced to tolerate some erroneous instances in the opponent classification. In the C-support vector machine (C-SVM) model [19, 20], we use a nonnegative parameter $\xi_{i}$, the slack error term, which relaxes the geometric interval of (2) for some erroneous instances with respect to the SVM interface. Slackening the restriction, we must rebuild the constraint function with a penalty for the outliers:

    $\min_{w,b,\xi}\ \dfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{m}\xi_{i}$ subject to $y_{i}(w^{T}x_{i} + b) \geq 1 - \xi_{i},\ \xi_{i} \geq 0,\ i = 1, \ldots, m.$  (9)

In (9), coefficient $C$ is the penalty parameter of the error term, which is used to control the tolerance of the systematic outliers. A larger value of $C$ allows fewer outliers to exist in the opponent classification, and vice versa. Utilizing the Lagrangian function to calculate the extremum of (9), (5) can be rebuilt by

    $L(w, b, \xi, \alpha, r) = \dfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{m}\xi_{i} - \sum_{i=1}^{m}\alpha_{i}\left[y_{i}(w^{T}x_{i} + b) - 1 + \xi_{i}\right] - \sum_{i=1}^{m}r_{i}\xi_{i}.$  (10)

In (10), parameters $\alpha_{i}$ and $r_{i}$ are the Lagrangian factors for the training instances and the systematic outliers, respectively. The extremum of $L$ can be acquired in correspondence with (6). Since $\xi_{i}$ and $r_{i}$ are not related to $w$ and $b$ in the final SVM model, (9) can be subjected to the dual problem

    $\max_{\alpha}\ \sum_{i=1}^{m}\alpha_{i} - \dfrac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_{i}\alpha_{j}y_{i}y_{j}x_{i}^{T}x_{j}$ subject to $0 \leq \alpha_{i} \leq C,\ \sum_{i=1}^{m}\alpha_{i}y_{i} = 0.$  (11)

When calculating (11) via the SMO process, one can adjust only two factors $\alpha_{1}$ and $\alpha_{2}$ at each iteration and consider the rest as constants until all of them satisfy the Karush-Kuhn-Tucker (KKT) conditions [21–23]:

    $\alpha_{1}y_{1} + \alpha_{2}y_{2} = -\sum_{i=3}^{m}\alpha_{i}y_{i} = \zeta.$  (12)

The output $y$ is labeled with +1 or −1 as the positive or the negative instance. Thereby, when $y_{1} \neq y_{2}$, (12) can be regarded as a line with gradient of 1: $\alpha_{1} - \alpha_{2} = k$ (with $k = \zeta$ or $k = -\zeta$). When $y_{1} = y_{2}$, it can be regarded as a line with gradient of −1: $\alpha_{1} + \alpha_{2} = k$ ($k = \zeta$ or $k = -\zeta$). When adjusting $\alpha_{1}$ and $\alpha_{2}$, the values of the parameters should satisfy the functions of the lines according to Figure 1. Meanwhile, they must be restricted within the square with side length $C$, where $C$ is the penalty parameter of the error term in (9). Therefore, when $y_{1} \neq y_{2}$,

    $L = \max\left(0, \alpha_{2}^{old} - \alpha_{1}^{old}\right), \qquad H = \min\left(C, C + \alpha_{2}^{old} - \alpha_{1}^{old}\right);$  (13)

otherwise,

    $L = \max\left(0, \alpha_{2}^{old} + \alpha_{1}^{old} - C\right), \qquad H = \min\left(C, \alpha_{2}^{old} + \alpha_{1}^{old}\right).$  (14)

Continue the SMO process and set

    $E_{i} = f(x_{i}) - y_{i};$  (15)

then the unclipped update of $\alpha_{2}$ is

    $\alpha_{2}^{new,unc} = \alpha_{2}^{old} + \dfrac{y_{2}\left(E_{1} - E_{2}\right)}{\eta}, \qquad \eta = \left\|x_{1} - x_{2}\right\|^{2}.$  (16)

In (16), $E_{i}$ is the dissimilarity between the real value of the model $f(x_{i})$ and the output $y_{i}$ of instance $x_{i}$. By its definition, $\eta$ equals the square of the distance between the two vectors; that is, $\eta = \|x_{1} - x_{2}\|^{2}$. As the vectors follow a certain distribution, $\eta$ is a constant in (16). The training process of the Lagrangian factors $\alpha_{1}$ and $\alpha_{2}$ is calculated by (15) and (16), and it is limited by (13) and (14). Integrated with the KKT conditions, the final training process of $\alpha_{2}$ can be demonstrated by

    $\alpha_{2}^{new} = \begin{cases} H, & \alpha_{2}^{new,unc} > H, \\ \alpha_{2}^{new,unc}, & L \leq \alpha_{2}^{new,unc} \leq H, \\ L, & \alpha_{2}^{new,unc} < L, \end{cases}$  (17)

where $L$, $H$, and $\alpha_{2}^{new,unc}$ are given by (13), (14), and (16), respectively. The training process of $\alpha_{1}$ can be finalized by

    $\alpha_{1}^{new} = \alpha_{1}^{old} + y_{1}y_{2}\left(\alpha_{2}^{old} - \alpha_{2}^{new}\right).$  (18)

The training process stops when all $\alpha_{i}$ values satisfy the KKT conditions:

    $\alpha_{i} = 0 \Rightarrow y_{i}f(x_{i}) \geq 1, \qquad 0 < \alpha_{i} < C \Rightarrow y_{i}f(x_{i}) = 1, \qquad \alpha_{i} = C \Rightarrow y_{i}f(x_{i}) \leq 1.$  (19)
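
The clipped pairwise update of (13)-(18) can be sketched in a few lines of Python. This is a simplified, single-step illustration of our own: it uses a linear kernel, skips the working-set selection and the threshold update of a full SMO implementation, and all names are ours:

    import numpy as np

    def smo_step(X, y, alpha, b, C, i, j):
        """One simplified SMO update for the index pair (i, j) on a linear kernel."""
        K = X @ X.T                              # linear kernel matrix
        f = (alpha * y) @ K + b                  # current model outputs, eq. (8)
        E_i, E_j = f[i] - y[i], f[j] - y[j]      # prediction errors, eq. (15)
        eta = K[i, i] + K[j, j] - 2 * K[i, j]    # eq. (16)
        if eta <= 0:
            return alpha                         # skip degenerate pairs
        if y[i] != y[j]:                         # feasible bounds, eq. (13)
            L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
        else:                                    # eq. (14)
            L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
        a_j = np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H)   # eqs. (16)-(17)
        a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)            # eq. (18)
        alpha = alpha.copy()
        alpha[i], alpha[j] = a_i, a_j
        return alpha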

4. Optimization of the Penalty Parameter of the Error Term

Since there is no theoretical selection of the penalty parameter of the error term, a grid search on the value of $C$ using cross-validation is recommended. Once the appropriate $C$ is determined, the same value must be implemented on both the positive and the negative instances.
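
A minimal cross-validated grid search of this kind can be run with scikit-learn's LIBSVM wrapper; the candidate values below simply mirror those used in Section 5, and the toy data are arbitrary:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(1.0, 1.0, (60, 2)), rng.normal(-1.0, 1.0, (60, 2))])
    y = np.array([1] * 60 + [-1] * 60)

    search = GridSearchCV(SVC(kernel='linear'), {'C': [0.5, 1, 10, 50, 100]}, cv=5)
    search.fit(X, y)
    print(search.best_params_)   # one C shared by both classes in traditional C-SVM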

Hypothesis 1. In Figure 2, the red dots represent positive instances, and the blue diamonds represent negative instances. Assume four instances (two positive and two negative) are outliers, which are circled by the black ellipses. The following will happen: since there is a large number of positive instances, deleting two of them as support vectors may not change the position of the SVM interface. However, the same does not hold for the negative instances: deleting two negative support vectors will produce an obvious change in the position of the SVM interface. Thus, an unknown instance represented by the black dot will be erroneously classified to the positive set, although it should have belonged to the negative set had the error term not been implemented in the SVM model.

According to the analysis above, we provide different values of $C$ for the positive instances and the negative instances instead of a constant penalty parameter for all nodes. Thus, (9) can be improved by

    $\min_{w,b,\xi}\ \dfrac{1}{2}\|w\|^{2} + C_{+}\sum_{i:\,y_{i}=+1}\xi_{i} + C_{-}\sum_{i:\,y_{i}=-1}\xi_{i}$ subject to $y_{i}(w^{T}x_{i} + b) \geq 1 - \xi_{i},\ \xi_{i} \geq 0.$  (20)

In (20), $C_{+}$ is the penalty parameter for the positive instances, and $C_{-}$ is the penalty parameter for the negative instances. Since the positive instances can tolerate more system outliers due to their large number, $C_{+}$ can be assigned a smaller value than $C_{-}$.
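
Per-class penalties of this form are exposed by LIBSVM through its -wi option (the parameter C of class i becomes weight*C) and, equivalently, by the class_weight argument of scikit-learn's SVC. A brief sketch follows; the weight values are placeholders, not values recommended by the paper:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(1.0, 1.0, (200, 2)), rng.normal(-1.0, 1.0, (40, 2))])
    y = np.array([1] * 200 + [-1] * 40)

    # class_weight multiplies C per class, so here C_+ = 0.5*C and C_- = 2.0*C;
    # the LIBSVM command-line equivalent is: svm-train -c 1 -w1 0.5 -w-1 2.0
    clf = SVC(kernel='linear', C=1.0, class_weight={1: 0.5, -1: 2.0}).fit(X, y)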

Hypothesis 2. In Figure 3, the number of positive instances is equal to the number of negative instances, but the negative instances can tolerate more system outliers due to the initial distribution of the data.

Hypothesis 3. In Figure 4, the number of positive instances is even larger than the number of negative instances, but the penalty parameter for the positive instances can be stricter than that for the negative instances. Therefore, the corresponding penalty parameter must be assigned a larger value in Hypothesis 2 than in Hypothesis 3.

Integrating all the hypotheses, we find that the proportion of $C_{+}$ to $C_{-}$ is relevant to the numbers of positive and negative instances as well as to the distribution of the initial data samples. Therefore, we propose a density-based penalty parameter optimization of the error term:

    $\dfrac{C_{+}}{C_{-}} = \dfrac{\rho_{-}}{\rho_{+}}.$  (21)

In (21), $\rho_{+}$ and $\rho_{-}$ present the sample density measures of the positive instances and the negative instances, respectively. The larger the value of $\rho$ is, the smaller the sample density is, and, thus, a smaller $C$ can be assigned. According to Figure 5, the density measure of the corresponding class is decided by the distance between the remotest node and the nearest node from the SVM interface divided by the number of instances in that class.
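
One possible reading of this density measure, expressed in code (this is our interpretation of (21) and Figure 5, not code from the paper; the function names are ours):

    import numpy as np

    def class_density(distances):
        """Density measure of one class: (farthest - nearest distance to the interface) / count."""
        d = np.abs(np.asarray(distances, dtype=float))
        return (d.max() - d.min()) / len(d)

    def penalty_ratio(dist_pos, dist_neg):
        """C_+ / C_- according to our reading of eq. (21)."""
        return class_density(dist_neg) / class_density(dist_pos)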

5. Experiments

We chose data from the official website of LIBSVM, which hosts many classification, regression, and multilabel datasets stored in the LIBSVM format. Many of them are from the UCI, Statlog, StatLib, and other collections [24]. The data groups used in our experiments are listed in Table 1.

In order to evaluate the accuracy of our proposed algorithm, we optimized the C-SVM model based on the LIBSVM tools [14] using the linear kernel function $K(x_{i}, x_{j}) = x_{i}^{T}x_{j}$. The comparative tests are set as follows: the uniform $C$ for both classes, as in traditional C-SVM; the $C_{+}$ and $C_{-}$ that correspond to the ratio of the number of positive instances to the number of negative instances; and the $C_{+}$ and $C_{-}$ that correspond to our proposed density-based penalty parameter optimization.

Aiming at testing whether the proposed algorithm has a positive performance under all circumstances, we simply assigned $C$ the values of 0.5, 1, 10, 50, and 100 instead of doing the grid search. In our proposed optimization, (21) can provide only the proportion of $C_{+}$ to $C_{-}$, but not their exact values. Therefore, we used (22), which fixes the absolute values of $C_{+}$ and $C_{-}$ from the baseline $C$ while keeping the proportion given by (21).

For comparative test 2, the proportion of $C_{+}$ to $C_{-}$ was decided by the number of positive instances $n_{+}$ and the number of negative instances $n_{-}$:

    $\dfrac{C_{+}}{C_{-}} = \dfrac{n_{-}}{n_{+}}.$  (23)
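
For this baseline, the weight assignment follows directly from the class counts; in the sketch below, keeping $C_{-}$ at the baseline value is our assumption, not a choice stated in the paper:

    import numpy as np

    def proportion_weights(y):
        """C_+ / C_- = n_- / n_+ per eq. (23); C_- stays at the baseline C (an assumption)."""
        y = np.asarray(y)
        n_pos, n_neg = np.sum(y == 1), np.sum(y == -1)
        return {1: n_neg / n_pos, -1: 1.0}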

In our proposed algorithm, the SVM interface is unknown before the classification. In order to calculate the density of the corresponding class, we first implement a traditional C-SVM and confirm the position of the SVM interface. In this way, the densities of the positive and negative instances can be computed via (21), and then $C_{+}$ and $C_{-}$ can eventually be determined by (22).
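
Under our reading of this two-stage procedure, the pipeline could be assembled as follows (a sketch, not the authors' implementation; it reuses the toy data of the earlier sketches and the hypothetical penalty_ratio helper from Section 4, and keeping $C_{-}$ at the baseline is an assumption):

    from sklearn.svm import SVC

    def density_based_weights(X, y, C=1.0):
        """Stage 1: plain C-SVM locates the interface; stage 2: per-class weights from densities."""
        base = SVC(kernel='linear', C=C).fit(X, y)
        d = base.decision_function(X)                  # signed distances to the interface (up to ||w||)
        ratio = penalty_ratio(d[y == 1], d[y == -1])   # C_+ / C_- from our reading of eq. (21)
        return {1: ratio, -1: 1.0}

    clf = SVC(kernel='linear', C=1.0, class_weight=density_based_weights(X, y)).fit(X, y)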

We evaluated the accuracy of our proposed algorithm via precision, recall, and F-measure. The precision rate was the number of correctly classified instances divided by the total number of instances. Table 2 shows the experimental results of the precision rates of the different algorithms for different values of $C$, where a1a-1, a1a-2, and a1a-3 present the traditional C-SVM, the improved C-SVM based on the number proportion (23), and our proposed density-based C-SVM (22), respectively.

The recall rate indicates the number of correctly classified positive instances divided by the total number of positive instances in the testing set. Table 3 shows the experimental results of the recall rates provided by the different algorithms for different values of $C$.
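
A small helper for these two rates, following the definitions above (note that the precision here is the overall fraction of correctly classified instances, as defined in this paper):

    import numpy as np

    def precision_rate(y_true, y_pred):
        """Fraction of all instances that are classified correctly (the paper's definition)."""
        return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

    def recall_rate(y_true, y_pred):
        """Fraction of the true positive instances that are classified as positive."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.sum((y_true == 1) & (y_pred == 1)) / np.sum(y_true == 1))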

Table 1 indicates that the size of the testing set of w1a was 47,272, composed of 1,407 positive instances and 45,865 negative instances. For such a distribution, suppose we predict all unknown inputs to be negative instances; then all of the 45,865 negative instances are classified correctly, with a precision of 97.02%. Therefore, the recall rate is of great importance as a supplementary measure. In the a1a, a2a, a3a, and a4a datasets, the number of negative instances is about double that of the positive instances. Method 2 (number proportion-based optimization) sacrifices the precision rate within an acceptable range, but it improves the recall rate on a large scale. For Method 3 (our proposed density-based C-SVM), 12 of the 20 experimental groups had slightly decreased precision rates, while the other eight groups successfully enhanced it. All 20 experiments by Method 3 improved the recall rate, although not to the extent that Method 2 did.

In the w1a, w2a, w3a, and w4a datasets, the number of negative instances was many times greater than that of the positive instances. Our proposed method showed obvious advantages in both precision rate and recall rate. Traditional C-SVM has a high precision rate, but it performs poorly with respect to recall. Method 2 improved the recall performance and decreased the precision rate, similar to the findings of the previous experiments. Method 3 enhanced the precision rate beyond that of traditional C-SVM, and it simultaneously improved the recall rate over that of Method 2.

The F-measure is a comprehensive evaluation of both precision and recall. In (24), $\beta$ is the parameter that adjusts the weights between the precision rate and the recall rate. When we consider precision more important, the value of $\beta$ should be > 1. On the contrary, in some cases, such as alarming or warning, the recall rate is significant in determining all of the potential risks; thus, the value of $\beta$ should be < 1:

    $F_{\beta} = \dfrac{\left(1 + \beta^{2}\right) \cdot P \cdot R}{\beta^{2} \cdot R + P},$  (24)

where $P$ and $R$ denote the precision rate and the recall rate, respectively.
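
A direct transcription of (24), under the weighting convention just stated ($\beta$ > 1 emphasizes precision, which is the reverse of the more common $F_{\beta}$ convention, so this follows the paper's prose rather than the textbook formula):

    def f_measure(precision, recall, beta=1.0):
        """F-measure per eq. (24): beta > 1 weights precision more, beta < 1 weights recall more."""
        return (1 + beta ** 2) * precision * recall / (beta ** 2 * recall + precision)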

Table 4 provides the evaluation results by the F-measure. Figures 6 and 7 explicitly demonstrate the comparisons among M-1 (traditional C-SVM), M-2 (number proportion-based C-SVM optimization), and M-3 (density-based C-SVM optimization). Each statistical result was obtained by averaging over one data group for $C$ = 0.5, 1, 10, 50, and 100.

Figure 6 shows the results for datasets a1a, a2a, a3a, and a4a, in which the number of negative instances is several times greater than that of the positive instances. Both M-2 and M-3 generate better F-measure evaluations than M-1, the traditional C-SVM. Concerning the F-measure, M-2 performs even better, but, in doing so, systematic precision was sacrificed in order to achieve better recall. Our proposed M-3 minimizes the loss of systematic precision and evidently enhances the F-measure to a greater extent than M-1.

Figure 7 shows the results for datasets w1a, w2a, w3a, and w4a, in which the number of negative instances is far greater than that of the positive instances. M-3 had the best results for precision, recall, and F-measure. Therefore, for this kind of data distribution, our proposed density-based C-SVM optimization provides a remarkable advantage for the classification of data.

6. Conclusions

In this paper, we presented density-based penalty parameter optimization for the C-SVM algorithm. In traditional C-SVM, the penalty parameter $C$ of the error term is used to control the tolerance of the systematic outliers; a larger value of $C$ allows fewer outliers to exist in the opponent classification. A grid search is generally implemented to compute the value of $C$. In order to enhance the accuracy of the algorithm, LIBSVM sets different values of $C$ for the positive and negative slack error terms based on the number proportion of the positive and negative instances. The principle of number proportion-based C-SVM optimization is that the weight of each instance is decided by the possibility that this instance itself is a system outlier and by the extent to which it would change the position of the SVM interface. Motivated by this idea, our proposed density-based penalty parameter optimization is a more integrated consideration that includes the sizes of the positive and negative instance sets and takes the distribution of those instances into account. We implemented our experiments on standard classification datasets. The evaluation results indicated that number proportion-based C-SVM optimization normally achieves a better F-measure, but it does so by enhancing the systematic recall on a large scale while simultaneously decreasing the systematic precision. Compared with number proportion-based C-SVM optimization, our proposed density-based method improved the systematic recall while maintaining systematic precision relative to traditional C-SVM. Our proposed density-based method demonstrated outstanding performance on both precision and recall, especially for datasets in which the number of negative instances was far greater than the number of positive instances.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation under Grant 61371071, Beijing Natural Science Foundation under Grant 4132057, Beijing Science and Technology Program under Grant Z121100007612003, and the Academic Discipline and Postgraduate Education Project of the Beijing Municipal Commission of Education.