Abstract

In medical datasets classification, support vector machine (SVM) is considered to be one of the most successful methods. However, most of the real-world medical datasets usually contain some outliers/noise and data often have class imbalance problems. In this paper, a fuzzy support machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM by extending manifold regularization and assigning two misclassification costs for two classes. The proposed FSVM-CIP can be used to handle the class imbalance problem in the presence of outliers/noise, and enhance the locality maximum margin. Five real-world medical datasets, breast, heart, hepatitis, BUPA liver, and pima diabetes, from the UCI medical database are employed to illustrate the method presented in this paper. Experimental results on these datasets show the outperformed or comparable effectiveness of FSVM-CIP.

1. Introduction

Computer techniques such as machine learning and pattern recognition have been widely adopted by modern medicine. One reason is that an enormous amount of data has to be gathered and analyzed which is very hard or even impossible without making use of computer techniques. The other reason is that computer techniques have led toward digital analysis of pathological diagnosis, automatic classification differentiating, and detecting diseases. In some cases, an early symptom of some diseases is lighter and gives no obvious pointer to a possible diagnosis; moreover, many symptoms look very similar to each other, though they are caused by different diseases. So it may be difficult even for experienced doctors to make correct diagnosis. Therefore, an automatic classification system can help doctor diagnose accurately, assess disorders remotely and evaluate the treatment process [1].

In recent years, researchers have proposed a lot of approaches for medicine classification, such as neural network, Bayesian network, and support vector machine (SVM). Among them SVM is considered to be one of the most successful ones [2]. For example, to improve time and accuracy in differentiating diffuse interstitial lung disease for computer-aided quantification, a hierarchical SVM is introduced which shows promise for various real-time and online image-based classification applications in clinical fields [3]. SVM as a classifier is used for liver disorders and its correct classification rate is highly successful compared to the other results attained [4]. A two-stage approach is proposed for medical datasets classification, in which the artificial bee colony algorithm is used for feature selection and SVM is used for classification [5].

The support vector machine (SVM) proposed by Vapnik [6, 7] is a novel approach for solving pattern recognition problems. SVM maps the sample points into a high-dimensional feature space to seek for an optimal separating hyperplane through maximizing the margin between two classes. In addition, SVM is a quadratic programming (QP) problem that assures that its solution is obtained once it is the global unique solution, and the sparsity of solution assures better generalization. However, most of the real-world medical datasets usually contain some outliers and noisy examples. The classical SVM is very sensitive to outliers/noise. To solve this problem, fuzzy support vector machine (FSVM) [8] is proposed, in which each sample is given a fuzzy membership that denotes the attitude of the corresponding point toward one class. The membership represents how important the sample is to the decision surface.

Nevertheless, many medical datasets are composed of “normal” samples with only a small percentage of “abnormal” ones, which leads to the so-called class imbalance problems. FSM does not take into consideration the class distribution and can be sensitive to the class imbalance problem. As a result, the hyperplane of FSVM can be skewed towards the minority class, and this skewness can degrade the performance of FSVM with respect to the minority class. To tackle this problem, Veropoulos et al. [9] have proposed a method called different error costs (DEC), where the SVM objective function has been modified to assign two different misclassification cost values. It is noticed that One-Class Classification [10, 11] is sometimes used in novelty detection, and it only uses the normal training data. However, in many real medical datasets, abnormal examples exist, although they are very few. Furthermore, in classification tasks, the scatter matrix can play an important role when incorporated with local intrinsic geometry structures of samples [12]. Some methods have been recently proposed to incorporate the structure of the data distribution into SVM. A linear manifold learning method named locality preserving projection (LPP) is proposed in [13, 14], which aims at preserving the local manifold structure of the samples space. Although LPP considers enhancing the local data compactness with each manifold, it does not separate manifolds with different class labels.

In this paper, we propose a new FSVM method for the class imbalance problem (FSVM-CIP) which can be used to address both the problem of class imbalance and outliers/noise. FSVM-CIP not only considers the fuzziness of each training sample but also extends manifold regularization and maximizes the localized relative margin. It takes the positive samples and negative samples into consideration with different misclassification costs according to their unbalanced distributions. We systematically evaluated the FSVM-CIP on five real-world medical datasets and compared its performance with four different SVM methods for classification. The results showed that the proposed method can improve the classification accuracy and handle the classification problems with outliers/noise and imbalanced datasets more effectively.

The rest of this paper is organized as follows. Section 2 briefly reviews the related works. Section 3 presents the details of FSVM-CIP in the linear case. Section 4 presents FSVM-CIP in the nonlinear case in detail. The experimental results on five medical datasets are reported in Section 5, and some concluding remarks are given in Section 6.

2.1. Fuzzy Support Vector Machines (FSVMs)

In traditional SVM, all the data points are considered with equal importance and assigned the same penal parameter in its objective function. However, in many real-world classification applications, some sample points, such as the outliers or noises, may not be exactly assigned to one of these two classes, and each sample point does not have the same meaning to the decision surface. To solve this problem, the theory of fuzzy support vector machine was originally proposed in [8]. Fuzzy membership to each sample point is introduced such that different sample points can make different contributions to the construction of decision surface.

Suppose the training samples are where is the -dimension sample point, represents its class label, and () is a fuzzy membership which satisfies with a sufficiently small constant . The quadratic optimization problem for classification is considered as follows: where is a normal vector of the separating hyperplane, is a bias term, and is a parameter which has to be determined beforehand to control the tradeoff between the classification margin and the cost of misclassification error. Since is the attitude of the corresponding point towards one class and the slack variables are a measure of error, then the term can be considered a measure of error with different weights. It is noted that the bigger the is, the more importantly the corresponding point is treated; the smaller the is, the less importantly the corresponding point is treated; thus, different input points can make different contributions to the learning of decision surface. Therefore, FSVM can find a more robust hyperplane by maximizing the margin by letting some misclassification of less important points.

In order to solve the FSM optimal problem, (2) is transformed into the following dual problem by introducing Lagrangian multipliers :

Compared with the standard SVM, the above statement only has a little difference, which is the upper bound of the values of . By solving this dual problem in (3) for optimal , and can be recovered in the same way as in the standard SVM.

2.2. Locality Preserving Projections (LPP)

Locality preserving projection (LPP) [13, 14] is a linear dimensionality reduction algorithm by feature extraction or projection. It builds an adjacency graph incorporating neighborhood information of the data set using the Laplacian graph and then computes a transformation matrix which maps the data points into a subspace. This linear transformation optimally preserves local neighborhood information in a certain sense. The representation map generated by this method can be viewed as a linear discrete approximation to a continuous map that naturally arises from the geometry of the manifold.

For a set (), let denote nearest neighbors of node , and let denote the adjacency graph of dataset . Here, the th node corresponds to the data point and nodes and are connected by an edge if node is among the nearest neighbors of node or if node is among the nearest neighbors of node ; that is, or . The adjacency graph can be weighed as follows: where is called the heart kernel function and is a constant. is the Euclidean distance in between point and point . LPP tries to find the transformation vector by minimizing the following objective function: where is a diagonal matrix whose entries are column sum of and normalizes each weight. is the Laplacian matrix. The transformation vector in the objective function in (5) is given by the minimum eigenvalue solution to the generalized eigenvalue problem. LPP preserves the intrinsic geometry and local structure of the data by minimizing the objective function.

3. FSVM for the Class Imbalance Problem in the Linear Case

In this section, we first define the local within-class preserving scatter matrix in the linear case. Secondly, the optimization problem formulation of FSVM-CIP in the linear case is given. Moreover, the fuzzy membership functions for linear FSVM-CIP are defined. Finally, the algorithm of linear FSVM-CIP is summarized.

3.1. The Local within-Class Preserving Scatter Matrix in the Linear Case

Following the idea of [15], we build the nearest within-class neighbor graph to model intrinsic geometry and local structure of the data. The graph preserves local neighborhood information in a certain sense and it can be viewed as a linear discrete approximation to a continuous map that naturally arises from the geometry of the manifold.

Considering the fact that we have a binary classification problem, one class denoted as contains sample points with and the other class denoted as contains sample points with . Set and , and the total number of sample points is .

Definition 1. For each data , suppose its nearest within-class neighbors set and an edge is put between and its neighbors. The corresponding weight matrix is where normalizes each weight.

Definition 2. The local within-class preserving scatter matrix where is an diagonal matrix. In this case, the obtained nearest within-class neighbor graph attempts to preserve the local structure of the data set and preserves locality of nearby points with same class label in the embedding space during the unfolding process of nonlinear structures [15]. In fact, a heavy penalty is applied to the objective function through the weight if the neighboring data and are mapped far apart. Hence, the minimization criterion is an attempt to ensure points and close to each other as well as and being close.

It is worthwhile to note that the local within-class scatter matrix is symmetric and positive semidefinite. looks similar to the within-class scatter matrix [16, 17] and the Laplacian matrix in LPP. However, reflects the intrinsic geometry and local structure of the data, and only considers the mean value of samples in different classes. carries the class label information and discriminating information but only considers the information of nearest neighbors for each data point in the input space, without considering the class labels.

3.2. FSVM-CIP in the Linear Case

To tackle the imbalance classification problem with noise and outliers, we integrate FSVM, the ideas of imbalance classification problem, and the local within-class preserving scatter. On one hand, as shown in Figure 1, the linear classifier presented by the hyperplane is and defines a field for majority-class examples () and another field for minority-class examples () which is used to weaken the skewness towards the minority class and enhance the locality maximum margin. On the other hand, by assigning a higher misclassification cost for the minority class examples than the majority class examples, the effect of class imbalance could be reduced. In addition, to minimize the amount of misclassifications, the local within-class scatter matrix is used to preserve intrinsic geometry and local structure of the data.

Due to this, we define the primal problem of FSVM-CIP as follows: where , denote the number of positive (normal class or majority class) and negative (abnormal class or minority class) training points, and . is a nonnegative number, and is the margin between the hyperplane and the minority class examples. is a nonnegative regulation constant which is the tradeoff between the local within-class scatter and the margin. Variables are positive penalty parameters, which tune penalty cost of the training error for positive and negative training data, respectively. are the slack variables, and , are fuzzy memberships for two-class examples.

Obviously, provides prior geometrical information into the penalty terms based on manifold regularization. Minimizing means that close data originally in the same class in the input space are likely to be close in the output place. Therefore, aims to preserve the local information of the manifold structure.

It is noted that, in FSVM-CIP, we assign different fuzzy membership values for training examples to reflect their different classes of importance. We also showed that it is similar to assign different misclassification costs for different training examples. In order to reduce the effect of class imbalance, we can assign higher membership values or lower parameter for the minority class examples, while we assign lower membership values or higher for the majority class. That is, our proposed method would not tend to skew the separating hyperplane towards the minority class examples as the minority class examples are now assigned with a higher misclassification cost. By means of setting and extending manifold regularization, the learned optimal separating hyperplane enhances the relative maximum margin and FSVM-CIP will be less sensitive to imbalanced class problems.

Then, we transform this problem into its corresponding dual problem as follows.

The primal Lagrangian is with Lagrangian multipliers , , and . The derivatives of with respect to the primal variables using the Karush-Kuhn-Tucker (KKT) conditions should vanish. Consider where is an -dimensional vector of ones, and . We have .

Substituting (10)–(14) into (9), we obtain the dual form of the optimization problem: where is a matrix with entry , and vectors .

Equation (15) is a typical convex quadratic programming problem which is easy to be numerically solved. Suppose can be used to solve the above optimization problem, and then the optimal weight vector is

Denote a training sample () called a support vector (SV) if the corresponding Lagrange multiplier . Denote the SV sets as and while and denote the number of SVs in and , respectively. According to KKT condition, (15) becomes equations for the input data in and , respectively, with slack variables and being 0. Thus, the optimal thresholds and can be calculated. However, from the numerical perspective, it is better to take the mean value of and resulting from all such data. Therefore, the optimal thresholds and are computed by the following formula:

As a result, the corresponding decision function of the linear FSVM-CIP will be

Note that, to deal with the small sample size problem, is regularized by adding a scale multiple of the identity matrix with before any inversion takes place. Hence, is always nonsingular, and the inverse of exists.

Following the terminology in [18], a training sample () is called a margin error (ME) if the corresponding slack variable . We give the following theorem for parameter selection later.

Theorem 3. Let and denote the number of MEs in the positive and negative classes; and denote the number of SVs in the positive and negative classes, respectively. Then one has where and denote the mean fuzzy membership of MEs in the positive and negative classes; and denote the mean fuzzy membership of SVs in the positive and negative classes, respectively.

A proof of the above theorem can be found in Appendix.

3.3. Fuzzy Membership Functions in the Linear Case

In FSVM, the fuzzy membership is used to reduce the effects of outliers or noises and different fuzzy membership functions have different influences on the fuzzy algorithm. Basically, the rule to assign proper membership values to data points can depend on the relative importance of date points to their own classes. In this paper, we consider two fuzzy membership functions given in [19].

Given the sequence of training points, denote the mean of positive class and negative class as and .

Definition 4. The is called the linear fuzzy membership and can be defined as where is a small positive value, which is used to avoid becoming zero. is the Euclidean distance.

Definition 5. The is called the exponential fuzzy membership and can be defined as where parameter determines the steepness of the decay.

3.4. Solution

Based on the above, we can state the approach of proposed FSVM-CIP in the linear case as Algorithm 1.

Input:
Training samples
Testing samples
Output:
The predicted labels of data
Procedure:
(1) Compute fuzzy membership using (22) or (23) for the data
(2) Construct data adjacency graph using nearest neighbors and compute the edge weights matrix with examples
(3) Construct local within-class preserving scatter matrix using (8)
(4) Choose parameters (6); and (8)
(5) Compute using (15) and using (17) with a QP Solver
(6) Using decision function (19) with samples , and output the final class labels

4. FSVM for the Class Imbalance Problem in the Nonlinear Case

In this section, we extend the local within-class preserving scatter matrix and FSVM-CIP into feature space. Moreover, the fuzzy membership functions in feature space are defined. Finally, the algorithm of kernel FSVM-CIP is summarized.

4.1. Kernel Extension

In order to handle nonlinear classification, the kernelization trick [20] is used to map the -dimensional date points into an arbitrary reproducing kernel Hilbert space (RKHS) [21] via a mapping function ; that is, . Then a linear hyperplane in feature space would correspond to a nonlinear hyperplane in the original space where , , and .

Let denote the date matrices in feature space , ; then the kernel function is a matrix with entry .

Here the kernel local within-class scatter matrix in feature space is where , are -order, -order identity matrixes, respectively. Based on the above notations, , are , matrixes, respectively; thus .

The weight matrixes and are the nonlinear version of and , respectively. and could be built by , and the nonlinear version of is where is a normalizer.

Thus, the kernel FSVM-CIP can be easily achieved by solving the following quadratic problem:

Like its linear counterpart, the solution to this optimization problem can be easily found using Lagrange multipliers. By using the representer theorem, can be given by . We obtain the dual form of the optimization problem: where and + . Vectors , and is a diagonal matrix.

Equation (27) is a typical convex quadratic programming problem which is easy to be numerically solved. Suppose can be used to solve the above optimization problem; then the optimal weight vector . Therefore, the optimal thresholds and are computed by the following formula:

Finally, a more robust decision function of kernel FSVM-CIP will be

Theorem 6. The matrix in (27) is symmetric and positive semidefinite.

A proof of the above theorem can be found in Appendix.

Next, we consider fuzzy membership functions in feature space.

Definition 7. The is called the linear fuzzy membership in feature space and can be defined as where is a small positive value. is the Euclidean distance.

Definition 8. The is called the exponential fuzzy membership in feature space and can be defined as where parameter determines the steepness of the decay. Consider

Thus, the distance can be given by Likewise, the can be given in a similar manner.

4.2. Solution

Based on the above, we can state the approach of kernel FSVM-CIP as Algorithm 2.

Input:
training samples
Testing samples
Output:
The predicted labels of data
Procedure:
(1) Choose a kernel function . Compute the Gram matrix .
(2) Compute fuzzy membership using (31) or (32) for the data
(3) Construct data adjacency graph using nearest neighbors and compute the edge weights matrix with examples
(4) Construct local within-class preserving scatter matrix using (24)
(5) Choose parameters (25); and (26)
(6) Compute using (27) and using (28) with a QP Solver
(7) Using decision function (30) with samples , and output the final class labels

5. Experiments and Discussions

To evaluate the performance of our proposed FSVM-CIP, in this section, FSVM-CIP is evaluated compared with other related representative methods, such as standard FSVM [8], SVDD [11], FSVM for class imbalance learning (FSVM-CIL) [22], and FSVM with minimum within-class scatter (WCS-FSVM) [23]. We implement FSVM-CIP using the linear fuzzy membership and the exponential fuzzy membership, respectively, which are represented as and . All the experiments are performed in Matlab (R2010a) on personal computer, whose configuration is as follows: CPU 2.99 GHz, 4.0 G RAM, and Microsoft Windows XP.

5.1. Data Preparation

In this section, we use five real-world medical datasets from the UCI repository of machine learning database [24], to demonstrate the classification performance of the method proposed in this paper. These five medical datasets are breast, heart, hepatitis, BUPA liver, and pima diabetes. It is highly likely that these real-world datasets contain some outliers and noisy examples in different amounts [22]. In each of them, the positive class consists of the data corresponding to the healthy, normal, or benign cases, while the negative class contains the data for diseased, abnormal, or malignant cases. Further details of these datasets are provided in Table 1. This contains the total number of positive data #pos, the total number of negative data #neg, the number of positive training examples , the number of negative training examples , the positive-to-negative imbalance ratio Ratio, and the data dimensionality .

5.2. Performance Measure and Experimental Settings

We used the geometric mean of sensitivity (sensitivity = proportion of the positives correctly recognized), specificity (specificity = proportion of the negatives correctly recognized), and accuracy (accuracy = proportion of correctly classified instances) for the classifier performance evaluation in experiments, as commonly used in medical datasets classification research [7].

Like the existing SVM and FSVM algorithms, the solution is sensitive to the setting of the parameters. In order to evaluate the performance, a strategy is that a set of the parameters is given first and then the best cross-validation mean rate among the set is used to estimate the generalized accuracy. We adopt this strategy in this paper. For FSVM-CIP, the parameter is searched in , while and are selected from . is selected from . The heat kernel parameter is searched in and the neighborhood parameter is searched in . In addition, when the linear fuzzy function is used, we set . When the exponential fuzzy function is used, the optimal value of is chosen from the range .

The regularization parameter for FSVM, SVDD, FSVM-CIL, and WCS-FSVM is selected from the set . In WCS-FSVM, is selected from . For FSVM-CIL, the fuzzy membership is based on the distance from the actual hyperplane and uses the exponential fuzzy membership . is chosen from the range .

For the kernel-based methods, we use a Gaussian RBF kernel, that is, , where is the spread of Gaussian kernel, and is searched in , where is the mean norm of the training data.

For parameter selection, we conduct fivefold cross-validation in a stratified manner so that each validation set has the same positive to negative ratio as in the training set. Finally, the experiment is repeated 10 times independently of each dataset.

5.3. Experimental Results

FSVM-CIP method test results developed for the breast, heart, hepatitis, BUPA liver, and pima diabetes datasets are given both in the linear case and nonlinear case. Tables 2, 3, 4, 5, and 6 display the comparison results with the other methods on these five databases, respectively.

The main observations from the performance comparisons include the following.

(1) We can see that, in many real-world applications, a linear classifier seems powerless. In terms of accuracy, kernel method can improve the classification performance for all five medical datasets.

(2) We can clearly observe that the FSVM-CIP outperforms other methods on almost datasets both in the linear case and nonlinear case, which gives higher accuracy. This fortifies the fact that the locality maximum margin and the local structure information presented by local within-class preserving scatter could improve classification performance; furthermore, the method of different misclassification costs based on the number of two classes is a sensitive learning solution to overcome the imbalance problem in SVMs.

(3) It is noted that, for all the datasets considered, the classification accuracy given by the setting is higher than the setting. Therefore, we can state that setting with the appropriate selection of value would be an effective choice applied to any medical dataset. In other words, when dealing with medical datasets classification, the performance of the exponential fuzzy membership is better than linear fuzzy membership in FSVM-CIP.

(4) For breast and heart datasets, the class imbalance is not obviously shaped; WCS-FSVM yielded standard FSVM, SVDD, and FSVM-CIL. We can say that the performance can indeed be improved when the structure of the data is taken into consideration. For the other three datasets, the class imbalance strikingly improved, the results given by standard FSVM and WCS-FSVM for datasets are biased towards the majority class represented as lower specificity and lower accuracy. These results justify the fact that these two methods are sensitive to the class imbalance problem. Meanwhile, SVDD and FSVM-CIL yielded standard FSVM and WCS-FSVM. BY assigning different misclassification costs for the minority class and majority class, the effect of class imbalance could be reduced.

5.4. Parameter Selection for Kernel

The parameter is an essential parameter in our proposed method which controls the tradeoff between the local within-class scatter and the margin. Figure 2 shows the impact of parameter on the classification accuracy of in kernel case with each value of selected from . It can be seen that the best accuracy is obtained for all the datasets and therefore is searched in a reasonable range.

Compared with standard FSVM, the additional neighbor parameter is employed in FSVM-CIP. To evaluate the influence of this parameter on the performance, the classification accuracy of kernel for five medical databases is recorded for each value of in . Figure 3 shows the results. It can be seen that the classification accuracy is not high when value is small and, by increasing , the classification accuracy increases; however, if continues to increase, the classification accuracy begins to drop severely down. It is because, when is too small, the number of nearest neighbors is sparse; when is too large, the number of nearest neighbors is excessive, so to preserve so much local relation may be inappropriate.

6. Conclusion

Computer tools have improved the medical practice implementation to a greater extent. Although computer tools cannot replace the doctors, they can make their work easier and more effective. In this paper, a new fuzzy support machine called FSVM-CIP, used for medical datasets classification, is proposed. The proposed method is based on local within-class preserving scatter and assigned two misclassification costs in the SVM objective function, which is for learning from imbalance datasets in the presence of outliers/noise and enhancing the locality maximum margin. Experiments were performed on several UCI medical datasets with a comparison of the proposed method with several other related methods such as standard FSVM, SVDD, FSVM-CIL, and WCS-FSVM. Obtained results show that the performance of the proposed method is highly successful compared to other results attained and seems very promising. Finally, we can recommend that which uses the exponential fuzzy membership would be an effective choice for medical datasets classification applications. In future work, we intend to perform investigations to large-scale classification problems.

Appendix

Proof of Theorem 3 in Section 3.2.

Proof. According to the dual form of the optimization problem (15), we can derive Likewise, according to the KKT conditions, with satisfy by (12). According to (11), all samples with satisfy . In view of (13), this implies that holds for every positive ME. Summing up over the positive MEs using (A.1), we have Furthermore, in view of (15), each SV in the positive class can control at most to the ; as a result, Combining (A.2) and (A.3), inequality (20) can hold true. Likewise, inequality (21) can be proven in a similar manner.

Proof of Theorem 6 in Section 4.1.

Proof. We know that , and is a Gram matrix, so is symmetric and positive semidefinite. The transpose of the matrix is So is a symmetric matrix and then is symmetric. Set , where For any nonzero vector , where . The local within-class scatter matrix is semidefinite, so the matrix is semidefinite. That is, the matrix is semidefinite, and then is semidefinite.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Contact (61070121).