Abstract

Imbalanced data learning is one of the most active and important fields in machine learning research. Although existing class imbalance learning methods can make Support Vector Machines (SVMs) less sensitive to class imbalance, they still suffer from the disturbance of outliers and noise present in the datasets. A family of Fuzzy Smooth Support Vector Machines (FSSVMs) is proposed based on the Smooth Support Vector Machine (SSVM) of O. L. Mangasarian. The SSVM can easily be computed by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm or the Newton-Armijo algorithm. Two kinds of fuzzy memberships and three smooth functions can be chosen in the algorithms. The fuzzy memberships reflect the contribution of each sample to the optimal separating hyperplane, and the polynomial smooth functions make the optimization problem more accurate near the inflection point. These changes have a positive effect in the trials. The experimental results show that the FSSVMs achieve better accuracy in less time than the SSVMs and several other methods.

1. Introduction

Learning from imbalanced datasets is an important and ongoing issue in machine learning research. The classification problem with imbalanced training data corresponds to domains in which one class is represented by a large number of instances while the other is represented by only a few. There are many such problems in the real world [1-3]. Conventional classifiers trained on an imbalanced dataset tend to produce a model that is biased toward the majority class and performs poorly on the minority class. Methods for handling class imbalance can be broadly divided into two categories, namely, external methods and internal methods. External methods preprocess the training datasets in order to make them balanced, such as random undersampling [4], random oversampling [5], and the Synthetic Minority Oversampling Technique (SMOTE) [6], while internal methods modify the learning algorithms in order to reduce their sensitivity to class imbalance. In addition, a genetic algorithm based sampling method has been proposed in [7], and Z-SVM has been proposed in [8].

The general SVM considers all training examples uniformly and is therefore sensitive to the outliers and noise [9] that exist in most real-world datasets. A fuzzy membership technique [10] has been introduced into the SVM that assigns different fuzzy membership values (weights) to different examples; it reflects the importance of each sample in the algorithm and reduces the effect of outliers and noise. However, the Fuzzy Support Vector Machine (FSVM) can still be influenced by class imbalance. Considering these factors, we define an imbalance adjustment factor and two kinds of fuzzy membership functions in the models. In addition, three smooth functions are applied to the SSVM models; they change the differentiability of the objective and allow the model to be computed easily by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [11] or the Newton-Armijo algorithm [12].

The rest of this paper is organized as follows. Section 2 briefly reviews the SSVM learning theory and its smooth functions, and Section 3 defines the two fuzzy membership functions. In Section 4 we present the FSSVM models, and in Section 5 we describe the two algorithms. Section 6 presents the experimental results. Finally, Section 7 concludes the paper.

2. SSVM and Its Smooth Function

Given a training dataset $T = \{(x_i, y_i)\}_{i=1}^{n}$ of independent and identically distributed samples, where $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$, SSVM is a variant of the SVM learning algorithm which was originally proposed in [13]. Writing the data as a matrix $A \in \mathbb{R}^{n \times d}$, with $D = \operatorname{diag}(y_1, \ldots, y_n)$ and $e$ a vector of ones, the reformulation of the SVM can be expressed as follows:
$$\min_{(w,\gamma,\xi)} \frac{\nu}{2}\,\xi^{T}\xi + \frac{1}{2}\left(w^{T}w + \gamma^{2}\right) \quad \text{s.t.} \quad D(Aw - e\gamma) + \xi \geq e, \; \xi \geq 0. \tag{1}$$

Here $\xi$ is given by $\xi = (e - D(Aw - e\gamma))_+$, where the plus function $(\cdot)_+$ replaces negative components of a vector by zeros. Thus, we can replace $\xi$ in (1) by $(e - D(Aw - e\gamma))_+$ and convert the SVM problem (1) into an equivalent, unconstrained optimization problem:
$$\min_{(w,\gamma)} \frac{\nu}{2}\left\|\left(e - D(Aw - e\gamma)\right)_{+}\right\|_{2}^{2} + \frac{1}{2}\left(w^{T}w + \gamma^{2}\right). \tag{2}$$
This is a strongly convex minimization problem without any constraints and it has a unique solution. However, the objective function in (2) is not twice differentiable, which precludes the use of the fast Newton method. We therefore apply smoothing techniques and approximate the plus function $x_+$ by the sigmoid smooth function
$$p(x, k) = x + \frac{1}{k}\ln\left(1 + e^{-kx}\right), \tag{3}$$
where $k > 0$ is the smooth factor. In order to obtain more accurate smooth functions, some researchers [14] proposed several polynomial smooth functions, as follows:
$$q(x, k) = \begin{cases} x, & x \geq \frac{1}{k}, \\[2pt] \frac{k}{4}\left(x + \frac{1}{k}\right)^{2}, & -\frac{1}{k} < x < \frac{1}{k}, \\[2pt] 0, & x \leq -\frac{1}{k}, \end{cases} \tag{4}$$
$$h(x, k) = \begin{cases} x, & x \geq \frac{1}{k}, \\[2pt] -\frac{k^{3}}{16}\left(x + \frac{1}{k}\right)^{3}\left(x - \frac{3}{k}\right), & -\frac{1}{k} < x < \frac{1}{k}, \\[2pt] 0, & x \leq -\frac{1}{k}. \end{cases} \tag{5}$$
Obviously, $q(x, k)$ is a piecewise continuous function that is first-order differentiable in $x$, and $h(x, k)$ is a piecewise continuous function that is twice differentiable in $x$, while the sigmoid function $p(x, k)$ is arbitrarily differentiable. The smoothness and accuracy of these functions, however, differ.
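To make the behavior of these approximations concrete, the following Python sketch implements the plus function and the three smooth functions (3)-(5) as reconstructed above and reports their maximum deviation from $x_+$ (a minimal illustration; the function names are ours):

```python
import numpy as np

def plus(x):
    """The plus function (x)_+ = max(x, 0), applied componentwise."""
    return np.maximum(x, 0.0)

def p_sigmoid(x, k):
    """Sigmoid smooth function (3): p(x, k) = x + (1/k) ln(1 + exp(-k x))."""
    # np.logaddexp(0, -k*x) computes ln(1 + exp(-k*x)) stably.
    return x + np.logaddexp(0.0, -k * x) / k

def q_poly(x, k):
    """First-order differentiable polynomial smooth function (4)."""
    return np.where(x >= 1.0 / k, x,
           np.where(x <= -1.0 / k, 0.0, (k / 4.0) * (x + 1.0 / k) ** 2))

def h_poly(x, k):
    """Twice differentiable polynomial smooth function (5)."""
    inner = -(k ** 3 / 16.0) * (x + 1.0 / k) ** 3 * (x - 3.0 / k)
    return np.where(x >= 1.0 / k, x, np.where(x <= -1.0 / k, 0.0, inner))

# Maximum approximation errors, attained at the inflection point x = 0:
# p: ln(2)/k ~ 0.693/k,  q: 1/(4k) = 0.25/k,  h: 3/(16k) ~ 0.188/k.
k = 10.0
x = np.linspace(-0.5, 0.5, 1001)
for f in (p_sigmoid, q_poly, h_poly):
    print(f.__name__, np.max(np.abs(f(x, k) - plus(x))))
```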

Lemma 1. For a given $x \in \mathbb{R}$ and $k > 0$, the smooth functions satisfy
$$p(x,k) \geq x_{+}, \quad q(x,k) \geq x_{+}, \quad h(x,k) \geq x_{+},$$
with maximum approximation errors
$$p(x,k) - x_{+} \leq \frac{\ln 2}{k}, \quad q(x,k) - x_{+} \leq \frac{1}{4k}, \quad h(x,k) - x_{+} \leq \frac{3}{16k},$$
attained at the inflection point $x = 0$.

Lemma 2. For a given $x \in \mathbb{R}$ and $k > 0$, the above smooth functions and the plus function $x_{+}$ satisfy the inequality
$$x_{+} \leq h(x,k) \leq q(x,k) \leq p(x,k). \tag{6}$$

The proofs of Lemmas 1 and 2 can be found in [14]. From these properties we can see the advantages and disadvantages of each smooth function: the sigmoid function $p(x,k)$ is arbitrarily differentiable, but it is the least accurate at the inflection point; the function $q(x,k)$ is first-order differentiable and more accurate than the sigmoid function, but not as accurate as $h(x,k)$; the function $h(x,k)$ is twice differentiable and the most accurate at the inflection point among the three. Its twice differentiability permits the quadratically convergent Newton method, whose iterations converge quickly but may be computationally expensive. The convergence behavior of all three functions depends on the smooth factor $k$. In order to obtain better effectiveness, we apply these smooth functions together with the fuzzy memberships to the SSVM. A comparison of the graphs of the smooth functions can be seen in Figure 1, where $e$ in the sigmoid function is the base of the natural logarithm and $k$ is the smooth factor. The SSVM algorithms with a smooth function $f \in \{p, q, h\}$ can generally be written as
$$\min_{(w,\gamma)} \frac{\nu}{2}\left\|f\left(e - D(Aw - e\gamma), k\right)\right\|_{2}^{2} + \frac{1}{2}\left(w^{T}w + \gamma^{2}\right). \tag{7}$$
Formula (7) is an unconstrained optimization problem which can be solved by gradient descent algorithms, but it does not account for the disturbance of noise and outliers in the datasets. In order to improve the accuracy of the SSVMs without increasing the complexity of the algorithms too much, we introduce a kind of fuzzy membership into the SSVMs.
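As an illustration of (7), the following sketch minimizes the smoothed SSVM objective with a quasi-Newton solver on synthetic imbalanced data (a sketch only; it reuses the smooth functions from the previous listing, and the toy data and parameter values are our own choices):

```python
import numpy as np
from scipy.optimize import minimize

def ssvm_objective(theta, A, y, nu, k, smooth=p_sigmoid):
    """Unconstrained smoothed SSVM objective (7); theta = [w, gamma]."""
    w, gamma = theta[:-1], theta[-1]
    margins = 1.0 - y * (A @ w - gamma)      # e - D(Aw - e*gamma)
    s = smooth(margins, k)
    return 0.5 * nu * (s @ s) + 0.5 * (w @ w + gamma ** 2)

# Toy imbalanced problem: 90 majority vs. 10 minority points.
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(0.0, 1.0, (90, 2)), rng.normal(2.5, 1.0, (10, 2))])
y = np.hstack([-np.ones(90), np.ones(10)])
res = minimize(ssvm_objective, np.zeros(3), args=(A, y, 1.0, 10.0),
               method="BFGS")
w, gamma = res.x[:-1], res.x[-1]
pred = np.sign(A @ w - gamma)                # classifier: sign(w^T x - gamma)
```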

3. Fuzzy Membership for the Imbalanced Dataset

In order to deal with the problem of outliers and noise, we introduce a kind of fuzzy membership technique that accounts for the effect of noise and outliers in imbalanced datasets. The class costs $r^{+}$ and $r^{-}$ are assigned to reflect the imbalance. A positive-class example is given a membership value in the interval $(0, r^{+}]$, while a negative-class example is given a membership value in the interval $(0, r^{-}]$. We assign $r^{+} = 1$ and $r^{-} = r$ to encode the imbalance ratio, where $r$ is the minority-to-majority class ratio. According to this assignment of values, a positive-class example can take a membership value in the interval $(0, 1]$, and a negative-class example can take a value in the interval $(0, r]$, where $r < 1$.

At the same time, we define a membership function based on the distance from each example to the actual separating hyperplane, which is found by training a normal SVM model on the imbalanced dataset. Examples closer to the actual separating hyperplane are treated as more informative and assigned higher membership values. The membership function is defined as follows; a code sketch is given after this list.
(1) Train a normal SVM model with the original imbalanced dataset.
(2) Find the functional margin $d_i$ of each example $x_i$. The functional margin is proportional to the geometric margin of a training example with respect to the hyperplane:
$$d_i = y_i\left(w^{T}x_i + b\right). \tag{8}$$
(3) Define the linear-decaying and exponential-decaying functions as follows:
$$f_{\mathrm{lin}}(x_i) = 1 - \frac{d_i}{\max_{j} d_j + \Delta}, \qquad f_{\mathrm{exp}}(x_i) = \frac{2}{1 + \exp\left(\beta d_i\right)}, \quad \beta \in [0, 1]. \tag{9}$$
Here $\Delta$ is a small positive number. For the imbalanced dataset, we define the fuzzy membership of an example as $s_i = f(x_i)\, r^{+}$ if $y_i = +1$ and $s_i = f(x_i)\, r^{-}$ if $y_i = -1$, and we apply the two fuzzy memberships to the SSVM.
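The membership construction in steps (1)-(3) can be sketched as follows, using scikit-learn's SVC for the preprocessing SVM (a sketch under our assumptions: the clipping of the decay values and the default beta = 0.5 are our choices, and the positive class is assumed to be the minority class, as in the text):

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, decay="lin", beta=0.5, delta=1e-6):
    """Fuzzy membership s_i = f(x_i) * r^{+/-} as defined in Section 3."""
    # Step (1): train a normal SVM on the original imbalanced dataset.
    svm = SVC(kernel="linear").fit(X, y)
    # Step (2): functional margin d_i = y_i (w^T x_i + b) of each example.
    d = y * svm.decision_function(X)
    # Step (3): linear- or exponential-decaying membership, eq. (9).
    if decay == "lin":
        f = 1.0 - d / (d.max() + delta)
    else:
        f = 2.0 / (1.0 + np.exp(beta * d))
    f = np.clip(f, delta, 1.0)   # guard against negative margins (our choice)
    # Imbalance factor: r^+ = 1 for the (minority) positive class, r^- = r.
    r = np.sum(y == 1) / np.sum(y == -1)   # minority-to-majority ratio
    return np.where(y == 1, f, f * r)
```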

4. SSVM with the Fuzzy Membership (FSSVM)

After preprocessing the dataset, we obtain a dataset with fuzzy memberships as follows:
$$T' = \left\{(x_1, y_1, s_1), (x_2, y_2, s_2), \ldots, (x_n, y_n, s_n)\right\}. \tag{10}$$

Based on the SSVM classifiers of (7), the optimization problem of the FSSVM is given by the following model:
$$\min_{(w,\gamma,\xi)} \frac{\nu}{2}\left\|S\xi\right\|_{2}^{2} + \frac{1}{2}\left(w^{T}w + \gamma^{2}\right) \quad \text{s.t.} \quad D(Aw - e\gamma) + \xi \geq e, \; \xi \geq 0, \tag{11}$$
where $A$ is the data matrix, $\xi$ is the slack variable, and $S = \operatorname{diag}(s_1, \ldots, s_n)$ is the fuzzy membership matrix. At a solution of problem (11), $\xi$ is given by the following plus function:
$$\xi = \left(e - D(Aw - e\gamma)\right)_{+}. \tag{12}$$
According to the required smoothness and differentiability, we replace the plus function with one of the above smooth functions. At the same time, in order to find a better separation of classes, the data are first transformed into a higher-dimensional feature space by a mapping function $\phi$. As an important property of SVMs, it is not necessary to know the mapping function explicitly. By defining the kernel function $K(x_i, x_j) = \phi(x_i)^{T}\phi(x_j)$ in the feature space, we obtain the FSSVM models as the following:
$$\min_{(v,\gamma)} \frac{\nu}{2}\left\|S\, p\left(e - D\left(K(A, A^{T})Dv - e\gamma\right), k\right)\right\|_{2}^{2} + \frac{1}{2}\left(v^{T}v + \gamma^{2}\right), \tag{13}$$
$$\min_{(v,\gamma)} \frac{\nu}{2}\left\|S\, q\left(e - D\left(K(A, A^{T})Dv - e\gamma\right), k\right)\right\|_{2}^{2} + \frac{1}{2}\left(v^{T}v + \gamma^{2}\right), \tag{14}$$
$$\min_{(v,\gamma)} \frac{\nu}{2}\left\|S\, h\left(e - D\left(K(A, A^{T})Dv - e\gamma\right), k\right)\right\|_{2}^{2} + \frac{1}{2}\left(v^{T}v + \gamma^{2}\right), \tag{15}$$
where $K(A, A^{T})$ is a kernel map from $\mathbb{R}^{n \times d} \times \mathbb{R}^{d \times n}$ to $\mathbb{R}^{n \times n}$. We note that these problems, which are capable of generating highly nonlinear separating surfaces, still retain the strong convexity and differentiability properties for any arbitrary kernel. Hence we can apply the BFGS algorithm to solve problems (13)-(15) and the Newton-Armijo algorithm to solve (13) and (15). We call the resulting classifiers FSSVM$_p$, FSSVM$_q$, and FSSVM$_h$ in the following experiments. We turn our attention now to the algorithms.
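A direct transcription of the kernelized objectives (13)-(15) is sketched below with a Gaussian kernel (a sketch, not the paper's exact implementation: the kernel width sigma and the default choice of h_poly are ours, and p_sigmoid, q_poly, h_poly, and fuzzy_memberships refer to the earlier listings):

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian kernel K(A, B)_ij = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fssvm_objective(theta, K, y, s, nu, k, smooth):
    """Objectives (13)-(15): (nu/2)||S f(e - D(K D v - e g), k)||^2 + (v'v + g^2)/2."""
    v, gamma = theta[:-1], theta[-1]
    margins = 1.0 - y * (K @ (y * v) - gamma)   # e - D(K(A, A^T) D v - e*gamma)
    weighted = s * smooth(margins, k)           # S = diag(s) applied componentwise
    return 0.5 * nu * (weighted @ weighted) + 0.5 * (v @ v + gamma ** 2)

def train_fssvm(X, y, s, nu=1.0, k=10.0, sigma=1.0, smooth=h_poly):
    K = rbf_kernel(X, X, sigma)
    theta0 = np.zeros(X.shape[0] + 1)
    res = minimize(fssvm_objective, theta0, args=(K, y, s, nu, k, smooth),
                   method="BFGS")
    # Returns (v, gamma); a new point x is classified by
    # sign(K(x, A^T) D v - gamma).
    return res.x[:-1], res.x[-1]
```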

5. Algorithms

In this section, we introduce the BFGS algorithm and the Newton-Armijo algorithm for the above unconstrained optimization problems (13)-(15).

5.1. BFGS Algorithm

If the objective function is first-order differentiable, we can use the BFGS algorithm to compute the unconstrained optimization problem according to the following steps (see the sketch after this list).
(i) Set an initial point $x_0 \in \mathbb{R}^{n}$ and an initial symmetric positive definite matrix $B_0 = I$; choose a tolerance $\varepsilon > 0$ and constants $\sigma \in (0, \frac{1}{2})$ and $\rho \in (0, 1)$; let $j := 0$.
(ii) Compute $g_j = \nabla f(x_j)$. If $\|g_j\| \leq \varepsilon$, stop and take $x^{*} = x_j$; otherwise compute the descent direction $d_j$ from $B_j d_j = -g_j$.
(iii) Compute the iteration step with a line search. Let $\lambda_j = \rho^{m_j}$, where $m_j$ is the smallest nonnegative integer $m$ that makes the inequality $f(x_j + \rho^{m} d_j) \leq f(x_j) + \sigma \rho^{m} g_j^{T} d_j$ hold; let $x_{j+1} = x_j + \lambda_j d_j$.
(iv) Update $B_j$ to $B_{j+1}$: let $s_j = x_{j+1} - x_j$ and $y_j = g_{j+1} - g_j$. If $y_j^{T} s_j \leq 0$, let $B_{j+1} = B_j$; otherwise
$$B_{j+1} = B_j - \frac{B_j s_j s_j^{T} B_j}{s_j^{T} B_j s_j} + \frac{y_j y_j^{T}}{y_j^{T} s_j}.$$
(v) Let $j := j + 1$ and go to step (ii).
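For reference, steps (i)-(v) translate into the following compact implementation (a sketch; the constants sigma and rho and the small positive-definiteness threshold are representative choices, not the paper's exact settings):

```python
import numpy as np

def bfgs(f, grad, x0, eps=1e-6, sigma=0.4, rho=0.5, max_iter=200):
    """BFGS with Armijo backtracking, following steps (i)-(v) above."""
    x, B = x0.astype(float), np.eye(x0.size)   # B approximates the Hessian
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:           # step (ii): stopping test
            break
        d = np.linalg.solve(B, -g)             # descent direction: B d = -g
        lam = 1.0                              # step (iii): line search
        while f(x + lam * d) > f(x) + sigma * lam * (g @ d):
            lam *= rho
        x_new = x + lam * d
        g_new = grad(x_new)
        s, yk = x_new - x, g_new - g           # step (iv): BFGS update
        if yk @ s > 1e-12:                     # keep B positive definite
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(yk, yk) / (yk @ s)
        x, g = x_new, g_new                    # step (v): next iteration
    return x
```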

5.2. Newton-Armijo Algorithm

If the objective function is twice differentiable, we can use the Newton-Armijo algorithm to compute the unconstrained optimization problem according to the following steps (see the sketch after this list).
(i) Set an initial point $x^{0} \in \mathbb{R}^{n}$ and a tolerance $\varepsilon > 0$; let $i := 0$.
(ii) Compute $g^{i} = \nabla f(x^{i})$. If $\|g^{i}\| \leq \varepsilon$, stop and take $x^{*} = x^{i}$; otherwise go to (iii).
(iii) Compute the Hessian $\nabla^{2} f(x^{i})$ and the descent direction $d^{i}$ from $\nabla^{2} f(x^{i})\, d^{i} = -g^{i}$.
(iv) Armijo step size: choose a step size $\lambda_i \in \{1, \frac{1}{2}, \frac{1}{4}, \ldots\}$, the largest value such that $f(x^{i}) - f(x^{i} + \lambda_i d^{i}) \geq -\delta \lambda_i (g^{i})^{T} d^{i}$, where $\delta \in (0, \frac{1}{2})$.
(v) Set $x^{i+1} = x^{i} + \lambda_i d^{i}$ and $i := i + 1$; go to step (ii).
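Analogously, the Newton-Armijo steps (i)-(v) can be sketched as follows (delta is the Armijo constant; the iteration cap is ours, and the sketch assumes the Hessian is positive definite, as for the strongly convex objectives above):

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, eps=1e-6, delta=1e-4, max_iter=100):
    """Newton's method with Armijo step sizes, following steps (i)-(v) above."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:           # step (ii): stopping test
            break
        d = np.linalg.solve(hess(x), -g)       # step (iii): Newton direction
        lam = 1.0                              # step (iv): Armijo step size
        while f(x) - f(x + lam * d) < -delta * lam * (g @ d):
            lam *= 0.5
        x = x + lam * d                        # step (v)
    return x
```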

6. Numerical Experiment

6.1. The Parameters Selection

Before our experiments, three important parameters must be selected. The first is the smooth factor $k$. Some researchers [14] derived an upper bound on $k$ for each optimization problem in terms of the number of samples and the required accuracy. After obtaining this upper bound, we select the weight parameters and the kernel parameters by using 5-fold cross-validation. The whole training and testing procedure is repeated 5 times with different training and testing partitions; finally, the results on the testing partitions are averaged and reported. An illustration of 5-fold cross-validation can be found in Figure 2. The optimal parameter values are chosen from a fixed range by grid search, and $\Delta$ in the linearly decaying function is set to a small positive constant.
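The selection procedure can be sketched as follows, where train_and_score is a hypothetical callback that fits one FSSVM configuration on the training folds and returns its score on the held-out fold (the search grids are placeholders, not the paper's values):

```python
import numpy as np
from itertools import product
from sklearn.model_selection import StratifiedKFold

def select_parameters(X, y, train_and_score, nus, ks, sigmas, seed=0):
    """Grid search over (nu, k, sigma) with stratified 5-fold CV."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    best, best_score = None, -np.inf
    for nu, k, sigma in product(nus, ks, sigmas):
        scores = [train_and_score(X[tr], y[tr], X[te], y[te], nu, k, sigma)
                  for tr, te in cv.split(X, y)]
        if np.mean(scores) > best_score:
            best, best_score = (nu, k, sigma), np.mean(scores)
    return best
```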

6.2. Evaluation Criterion and Datasets

For the proposed FSSVMs, plain accuracy is not a suitable measure on highly imbalanced datasets. We therefore employ the geometric mean (G-mean) of the sensitivity $Se$ and specificity $Sp$ [15] to evaluate the performance of the algorithms in our experiments:
$$G\text{-mean} = \sqrt{Se \times Sp}, \qquad Se = \frac{TP}{TP + FN}, \qquad Sp = \frac{TN}{TN + FP},$$
where $Se$ and $Sp$ indicate the accuracy rates on the positive and negative classes, respectively. The impact of a change in $Se$ or $Sp$ on the G-mean depends on the magnitude of that value: the smaller the $Se$ or $Sp$ value, the bigger the change in the G-mean. This means that the more minority-class samples are misclassified, the greater the misclassification cost becomes.
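The measure itself reduces to a short helper; the following computes the G-mean from predicted labels, under the convention used throughout that the minority class is labeled +1:

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity Se and specificity Sp."""
    se = np.mean(y_pred[y_true == 1] == 1)     # Se = TP / (TP + FN)
    sp = np.mean(y_pred[y_true == -1] == -1)   # Sp = TN / (TN + FP)
    return np.sqrt(se * sp)
```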

We demonstrate the effectiveness of the selected FSSVMs on five benchmark real-world imbalanced datasets from the UCI machine learning repository [16]. These real-world datasets contain some outliers and noisy examples; their features can be found in Table 1.

6.3. Experimental Results

Because the differentiability of the smooth functions differs, the first-order differentiable FSSVMs and SSVMs are tested under the BFGS algorithm, while the twice differentiable ones are tested under the Newton-Armijo algorithm. Among the methods, there are three kinds of smooth functions and two kinds of decaying fuzzy membership functions to choose from. For brevity, we use the subscripts $lin$, $exp$, $p$, $q$, and $h$ to stand for the linear-decaying function, the exponential-decaying function, and the smooth functions $p(x,k)$, $q(x,k)$, and $h(x,k)$ in the FSSVMs. The results of the experiments can be found in Tables 2 and 3. From the tables, it is clear that the G-means of the FSSVMs are better than those of the SSVMs and the normal SVM for all five datasets. This shows that the two fuzzy memberships and the imbalance factor play an active role in the FSSVMs. On the other hand, the smooth functions turn the constrained optimization problems into unconstrained ones, which allows the BFGS algorithm and the Newton-Armijo algorithm to be used in the computational process. Because the features and the imbalance ratios of the datasets differ, the effectiveness of the tests differs as well: for datasets with a higher imbalance ratio, it is better to select the linear-decaying FSSVM and the BFGS algorithm, whose fuzzy membership function can effectively weaken the effect of the outliers and noise; for datasets with a lower imbalance ratio, it is better to choose the exponential-decaying FSSVM and the Newton-Armijo algorithm. It may seem that the smooth function increases the complexity of the optimization; in fact, it makes the Newton-Armijo algorithm converge faster. Notably, the running time (excluding the time of the preprocessing normal SVM) of the Newton-Armijo algorithm is much smaller than that of the other SVMs.

Table 4 compares some of the best FSSVMs with other methods on the five datasets. From the results, we can observe that most of the FSSVM methods obtain better classification results than undersampling, oversampling, SMOTE, ADASYN, Z-SVM, and normal SVM training. However, the optimal method differs across datasets, and the best combination of fuzzy membership function and smooth function needs to be selected for each.

7. Conclusions

Choosing a proper fuzzy membership function and a proper smooth function is quite important for solving classification problems with FSSVMs. A kind of FSSVM with two fuzzy membership functions and three smooth functions for nonlinear classification has been proposed in this paper, which can be computed with the BFGS algorithm or the Newton-Armijo algorithm. Experimental results confirm that choosing the fuzzy membership function and the smooth function according to the dataset features is effective: the proposed method reduces the disturbance of outliers and noise in imbalanced datasets better than some existing methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The project is supported by the Ningxia Natural Science Foundation (NZ13095).