The Scientific World Journal
Volume 2014 (2014), Article ID 872697, 6 pages
http://dx.doi.org/10.1155/2014/872697
Research Article

A Novel Support Vector Machine with Globality-Locality Preserving

Cheng-Long Ma and Yu-Bo Yuan

Institute of Metrology and Computational Science, China Jiliang University, Hangzhou, Zhejiang 310018, China

Received 25 April 2014; Accepted 27 May 2014; Published 17 June 2014

Academic Editor: Shan Zhao

Copyright © 2014 Cheng-Long Ma and Yu-Bo Yuan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Support vector machine (SVM) is regarded as a powerful method for pattern classification. However, the solution of the primal optimization model of SVM is susceptible to the class distribution, which may result in a nonrobust solution. In order to overcome this shortcoming, an improved model, support vector machine with globality-locality preserving (GLPSVM), is proposed. It introduces globality-locality preserving into the standard SVM so that the manifold structure of the data space is preserved. We conduct extensive experiments on UCI machine learning data sets. The results validate the effectiveness of the proposed model, especially on the Wine and Iris databases, where the recognition rate is above 97% and outperforms all the compared SVM-based algorithms.

1. Introduction

In the past decades, support vector machine (SVM) [1] has been regarded as a powerful tool for classification tasks. It separates different classes by hyperplanes, which are determined by optimal directions and support vectors, while the optimal directions are obtained by maximizing the margins between classes. SVM and its variants [2–5] have been successfully applied to many research areas such as face detection and recognition [6], speech recognition [7], text classification [8], and image retrieval [9].

It is well known that SVM is an optimization problem whose optimal solution can be found by solving a quadratic programming problem. Because the objective function is convex, the global minimum is guaranteed. However, the traditional SVM solution is susceptible to the class distribution, which means it is not robust to the data samples. To overcome this shortcoming, Zafeiriou et al. [10] proposed the minimum class variance support vector machine (MCVSVM), which was inspired by the optimization of Fisher’s discriminant ratio. By taking the manifold structure of the data space into consideration, Wang et al. [11] introduced within-class locality preserving into SVM and proposed the minimum class locality preserving variance support vector machine (MCLPV_SVM). Besides, since SVM deals with a subset of data points (the support vectors) rather than the entire data set, its solution is, to some extent, based on “local” characteristics of the data; therefore, Xiong and Cherkassky [12] incorporated global discriminant information into SVM and proposed SVM+LDA. Analogously, Khan et al. [13] presented a novel SVM+NDA (nonparametric discriminant analysis) model for classification, which fuses partially global information with local information. In particular, both SVM+LDA and SVM+NDA can cope with the small sample problem, which benefits from the construction of the models.

According to the above analysis, none of the mentioned methods takes the manifold structure of the data space into consideration except MCLPV_SVM, but MCLPV_SVM loses some discriminant information. In basic learning algorithms, the locality of the learning data set should be considered; in recent years, many excellent publications have shown the importance of locality, such as [14–16]. Locality is also important for finding clusters in high-dimensional data sets, such as [17, 18]. Recently, discriminant locality preserving projections (DLPP) [19–21] was proposed; it seeks the subspace that best discriminates different classes by maximizing the locality preserving between-class distances while minimizing the locality preserving within-class distances. So, DLPP not only preserves local structure but also encodes discriminant information. Inspired by DLPP, and considering that the mean sample can reflect the characteristics of the data structure and is center-invariant in each class [22], this paper proposes a novel learning algorithm, called support vector machine with globality-locality preserving (GLPSVM). It introduces globality and locality preserving ability into SVM. The proposed method preserves the intrinsic manifold structure of the data space, takes the class distribution into consideration, and obtains a robust solution.

In summary, this paper is organized as follows. Section 2 gives a brief review of SVM. Section 3 presents the proposed method, including its derivation, solution, and analysis. The experimental results are given in Section 4. Finally, conclusions are drawn in Section 5.

2. A Brief Review of SVM

Given a set of sample pairs $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^{d}$ is a sample point in $d$-dimensional space and $y_i \in \{+1, -1\}$ is the corresponding label, the direct way to separate these samples into two classes is to find a separating hyperplane.

For the linearly separable case, the SVM model is as follows:

$$\min_{w, b}\ \frac{1}{2}\|w\|^{2} \quad \text{s.t.}\ y_i\left(w^{T} x_i + b\right) \geq 1,\ i = 1, \ldots, N. \tag{1}$$

By transforming this optimization problem into its corresponding dual problem, the optimal discriminant vectors can be found through

$$w = \sum_{i=1}^{N} \alpha_i y_i x_i, \tag{2}$$

where $\alpha_i$ and $x_i$ are the dual variable and the data sample (called a support vector), respectively. The support vectors are crucial for classification since removing these points may change the solution of SVM. In SVM, the separable directions are decided by the optimal discriminant vectors obtained through (2). So, if we project the data into a feature space spanned by the optimal discriminant vectors, then these data will be separable in the feature space.
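
To make the preceding review concrete, the following minimal sketch (not part of the original paper) fits a linear SVM with a standard package and recovers the discriminant vector from the support vectors via (2); scikit-learn and the toy data are assumptions for illustration only.

# Minimal sketch (illustrative, not the authors' code): fit a linear SVM and
# recover w = sum_i alpha_i * y_i * x_i from the support vectors, as in (2).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])  # toy samples (hypothetical)
y = np.array([-1, -1, 1, 1])                                    # binary labels

clf = SVC(kernel="linear", C=1e6)   # a large C approximates the hard-margin case
clf.fit(X, y)

# dual_coef_ stores alpha_i * y_i for the support vectors only
w = (clf.dual_coef_ @ clf.support_vectors_).ravel()
print("support vector indices:", clf.support_)
print("w =", w, "b =", clf.intercept_[0])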

Usually, in real world applications, we need to deal with multiclass cases, such as face recognition [6] and text categorization [8]. In such cases, we need to extend SVM to multiclass SVM. The general approach is to code the classes according to a certain strategy, like one-against-all (OAA) or one-against-one (OAO) [23]. The OAA coding approach compares the data in a single class with the samples in all other classes to generate the decision boundary; in this case, $K$ decision boundaries are built for $K$ classes. The OAO strategy generates decision boundaries from all possible pairs of classes, which yields $K(K-1)/2$ decision boundaries. Comparatively, OAO can obtain more discriminant vectors than OAA but costs more computational time, as the small snippet below illustrates.
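
A small illustrative snippet (the class count below is hypothetical) makes the two classifier counts explicit.

# Illustrative only: number of binary decision boundaries for K classes.
K = 5                       # hypothetical number of classes
n_oaa = K                   # one-against-all: one boundary per class
n_oao = K * (K - 1) // 2    # one-against-one: one boundary per pair of classes
print(n_oaa, n_oao)         # -> 5 10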

3. The Proposed GLPSVM

In this section, we propose a novel support vector classifier which takes the class distribution into consideration, so that a robust solution can be expected. Firstly, we introduce the definition of globality-locality preserving.

3.1. Globality-Locality Preserving (GLP)

Discriminant locality preserving projections (DLPP) [19–21] is a powerful method for extracting the manifold structure of data samples. Given a set of samples $X = \{x_1, x_2, \ldots, x_N\}$, where $x_i \in \mathbb{R}^{d}$ is a sample point in $d$-dimensional space and $y_i \in \{+1, -1\}$ is the corresponding label, let $X_1$ and $X_2$ denote the two classes; all samples labeled $+1$ belong to $X_1$ and the others belong to $X_2$. Thus, we have $N = N_1 + N_2$, where $N_1$ is the number of samples in $X_1$ and $N_2$ is the number of samples in $X_2$. Suppose that $z_i$ is the low-dimensional feature projection of $x_i$; DLPP tries to maximize an objective function as follows:

$$J = \frac{\sum_{i, j} \left\| m_i - m_j \right\|^{2} B_{ij}}{\sum_{c} \sum_{i, j} \left\| z_i^{c} - z_j^{c} \right\|^{2} W_{ij}^{c}}, \tag{3}$$

where $m_i$ and $m_j$ represent the mean vectors of the projected samples in the $i$th and $j$th class, respectively, and $z_i^{c}$ denotes the projection of the $i$th sample of class $c$. $W_{ij}$ and $B_{ij}$ are elements of the within-class weight matrix $W$ and the between-class weight matrix $B$ defined as

$$W_{ij} = \begin{cases} \exp\left(-\left\|x_i - x_j\right\|^{2} / t_1\right), & x_i \in O(k_1, x_j) \text{ or } x_j \in O(k_1, x_i), \\ 0, & \text{otherwise}, \end{cases} \qquad B_{ij} = \begin{cases} \exp\left(-\left\|\bar{x}_i - \bar{x}_j\right\|^{2} / t_2\right), & \bar{x}_i \in O(k_2, \bar{x}_j) \text{ or } \bar{x}_j \in O(k_2, \bar{x}_i), \\ 0, & \text{otherwise}, \end{cases} \tag{4}$$

where $O(k_1, x_i)$ and $O(k_1, x_j)$ denote the local neighbors of $x_i$ and $x_j$, respectively, $k_1$ is the sample neighborhood, and $k_2$ is the mean sample neighborhood. The parameters $t_1$ and $t_2$ are empirically determined, and $\bar{x}_i$ is the mean vector of the samples in the $i$th class. Suppose $w$ is a mapping from the high-dimensional data space to the low-dimensional feature space; that is, $z_i = w^{T} x_i$. Then, the objective function (3) can be rewritten as follows:

$$J(w) = \frac{w^{T} \bar{X} H \bar{X}^{T} w}{w^{T} X L X^{T} w}, \tag{5}$$

where $L = D - W$ and $H = E - B$ are the Laplacian matrices [24, 25], $X = [x_1, x_2, \ldots, x_N]$, $\bar{X} = [\bar{x}_1, \bar{x}_2]$, $E$ is a diagonal matrix whose elements are the column (or row) sums of $B$, $W$ is a block diagonal matrix, that is, $W = \mathrm{diag}(W^{1}, W^{2})$, and $D$ is also a block diagonal matrix; each block of $D$ is a diagonal matrix whose elements are the column (or row) sums of the corresponding block of $W$.

Formula (5) is also called the locality preserving discriminant ratio (criterion); that is, DLPP seeks the feature subspace by maximizing this ratio, which means simultaneously maximizing the locality preserving between-class distance and minimizing the locality preserving within-class distance.
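
As a minimal sketch (not from the paper) of how the weight matrices in (4) and their Laplacians might be formed, the following assumes a k-nearest-neighbor rule with a heat kernel; the function names and defaults are illustrative choices.

# Sketch: heat-kernel weight matrix over k-nearest neighbors and its Laplacian
# L = D - W, following the definitions in (4)-(5). Names and defaults are illustrative.
import numpy as np

def knn_heat_weights(X, k=3, t=1.0):
    """X: (n, d) samples; returns a symmetric weight matrix W."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                  # k nearest neighbors (skip self)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    return np.maximum(W, W.T)                              # symmetrize: x_i in O(x_j) or x_j in O(x_i)

def laplacian(W):
    D = np.diag(W.sum(axis=1))                             # column (or row) sums on the diagonal
    return D - W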

On the other hand, Huang et al. [22] took another view of the weight matrices of DLPP; they believe that the mean sample is center-invariant within each class and can reflect the characteristics of the data structure, so to some extent it decides the accuracy in classification tasks. In this paper, we preserve the structure information of the mean samples, which can to a large extent make up for the loss of global information when only the local structure is preserved. In a word, we define the locality preserving matrix and the globality preserving matrix as follows:

(i) locality preserving matrix: $M_L = X L X^{T}$;
(ii) globality preserving matrix: $M_G = \bar{X} H \bar{X}^{T}$.
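
The following sketch (again illustrative, reusing the helpers from the previous sketch) assembles the two matrices under the per-class block-diagonal reading of $W$ and $D$ given above; scipy and the function name are assumptions.

# Sketch: locality preserving matrix M_L = X L X^T (block-diagonal over classes)
# and globality preserving matrix M_G built from the class means. Illustrative only.
import numpy as np
from scipy.linalg import block_diag

def glp_matrices(X, y, k1=3, k2=1, t1=1.0, t2=1.0):
    classes = np.unique(y)
    blocks, cols = [], []
    for c in classes:                          # within-class (locality) part
        Xc = X[y == c]
        blocks.append(laplacian(knn_heat_weights(Xc, k1, t1)))
        cols.append(Xc)
    L = block_diag(*blocks)
    Xs = np.vstack(cols)                       # samples reordered class by class
    M_L = Xs.T @ L @ Xs
    means = np.vstack([X[y == c].mean(axis=0) for c in classes])
    H = laplacian(knn_heat_weights(means, min(k2, len(classes) - 1), t2))
    M_G = means.T @ H @ means                  # between-class (globality) part
    return M_L, M_G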

3.2. Derivation of GLPSVM

Now, we give the proposed extension of SVM, called GLPSVM. For linearly separable data, GLPSVM can be described as follows:

$$\min_{w, b}\ \frac{1}{2} w^{T} w + \frac{\lambda}{2} w^{T} \left( M_L + M_G + \sigma I \right) w \quad \text{s.t.}\ y_i \left( w^{T} x_i + b \right) \geq 1,\ i = 1, \ldots, N, \tag{6}$$

where $\sigma I$ ($\sigma > 0$) represents the regularization matrix, which is added to cope with small sample problems. This model not only maximizes the margin of the separating hyperplane but also minimizes the scatter of the data along the discriminant directions, which benefits from taking both the local manifold structure of the data space and the global manifold structure of the mean sample space into consideration. Here, $\lambda$ is an empirically determined key parameter which controls the tradeoff between the two terms.
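
A one-line sketch of the matrix implied by (6) follows; the exact placement of $\sigma$ and $\lambda$ reflects the reading above and is an assumption.

# Sketch: the matrix S = I + lambda * (M_L + M_G + sigma * I) that replaces the
# identity in the classical SVM objective (cf. Section 3.5). Illustrative only.
import numpy as np

def glpsvm_matrix(M_L, M_G, lam=0.2, sigma=1e-3):
    d = M_L.shape[0]
    return np.eye(d) + lam * (M_L + M_G + sigma * np.eye(d))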

According to the model, we can see that the optimal discriminant directions are no longer the same as those of classical SVM, because we introduce the GLP matrices into the optimization model of SVM. The classification performance of the proposed method will be shown in Section 4.

3.3. Solution to the GLPSVM

Similar to SVM, the proposed model can be viewed as a quadratic optimization problem, and Lagrange’s method of undetermined multipliers is used to solve it. Suppose $\alpha_i \geq 0$ $(i = 1, \ldots, N)$ are the Lagrange multipliers, and let $S = I + \lambda \left( M_L + M_G + \sigma I \right)$; then the corresponding Lagrangian is

$$L(w, b, \alpha) = \frac{1}{2} w^{T} S w - \sum_{i=1}^{N} \alpha_i \left[ y_i \left( w^{T} x_i + b \right) - 1 \right]. \tag{7}$$

Taking derivatives with respect to $w$ and $b$, respectively, we obtain

$$\frac{\partial L}{\partial w} = S w - \sum_{i=1}^{N} \alpha_i y_i x_i = 0, \qquad \frac{\partial L}{\partial b} = -\sum_{i=1}^{N} \alpha_i y_i = 0. \tag{8}$$

Hence, we have the following dual problem:

$$\max_{\alpha}\ \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j x_i^{T} S^{-1} x_j \quad \text{s.t.}\ \sum_{i=1}^{N} \alpha_i y_i = 0,\ \alpha_i \geq 0,\ i = 1, \ldots, N. \tag{9}$$

Suppose $\alpha^{*}$ is the optimal solution of this dual problem. Then, the optimal discriminant vectors can be found as

$$w^{*} = S^{-1} \sum_{i=1}^{N} \alpha_i^{*} y_i x_i. \tag{10}$$
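
A hedged sketch of solving the dual (9) with a generic QP solver is given below: the only change from the standard SVM dual is the modified inner product $x_i^{T} S^{-1} x_j$, and $w^{*}$ is recovered via (10). cvxopt and the hard-margin constraints are assumptions for illustration, not the authors' implementation.

# Sketch (illustrative): hard-margin GLPSVM dual via cvxopt, then w* from (10).
import numpy as np
from cvxopt import matrix, solvers

def glpsvm_dual_hard(X, y, S):
    n = X.shape[0]
    S_inv = np.linalg.inv(S)
    K = X @ S_inv @ X.T                              # modified Gram matrix x_i^T S^{-1} x_j
    P = matrix(np.outer(y, y).astype(float) * K)
    q = matrix(-np.ones(n))
    G = matrix(-np.eye(n))                           # alpha_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.astype(float).reshape(1, -1))       # sum_i alpha_i y_i = 0
    b = matrix(0.0)
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])
    w = S_inv @ (X.T @ (alpha * y))                  # w* = S^{-1} sum_i alpha_i y_i x_i
    return alpha, w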

So, the corresponding decision surface is

$$f(x) = \operatorname{sign}\left( w^{*T} x + b^{*} \right) = \operatorname{sign}\left( \sum_{i=1}^{N} \alpha_i^{*} y_i x_i^{T} S^{-1} x + b^{*} \right). \tag{11}$$

Finally, the corresponding optimal bias can be calculated as

$$b^{*} = \frac{1}{N_{SV}} \sum_{i \in SV} \left( y_i - w^{*T} x_i \right), \tag{12}$$

where $N_{SV}$ is the number of support vectors.
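
A small follow-up sketch of (12), averaging over the support vectors ($\alpha_i > 0$); the tolerance is an illustrative choice.

# Sketch: bias b* averaged over the support vectors, as in (12). Illustrative only.
import numpy as np

def glpsvm_bias(X, y, alpha, w, tol=1e-6):
    sv = alpha > tol                         # indices of support vectors
    return float(np.mean(y[sv] - X[sv] @ w))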

As can be seen, in the linearly separable case, GLPSVM is required to produce a completely accurate decision hyperplane. However, in real world applications, the decision hyperplane no longer needs to be completely accurate, so we extend GLPSVM to the soft margin situation.

3.4. Soft Margin GLPSVM

Reference [13] proposed the soft margin method for SVM to cope with cases in which a completely accurate decision hyperplane is not required; that is, errors are tolerated within limits. Then, the soft margin GLPSVM can be described as follows:

$$\min_{w, b, \xi}\ \frac{1}{2} w^{T} w + \frac{\lambda}{2} w^{T} \left( M_L + M_G + \sigma I \right) w + C \sum_{i=1}^{N} \xi_i \quad \text{s.t.}\ y_i \left( w^{T} x_i + b \right) \geq 1 - \xi_i,\ \xi_i \geq 0,\ i = 1, \ldots, N, \tag{13}$$

where $C$ is a predefined positive real number, and larger values of $C$ correspond to a higher penalty assigned to errors. The $\xi_i$ are slack variables which reflect the degree of misclassification. Apparently, the soft margin GLPSVM is also a quadratic optimization problem, and we can solve it in the same way as the standard GLPSVM.
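
In the dual, the soft margin only turns the constraint $\alpha_i \geq 0$ into the box constraint $0 \leq \alpha_i \leq C$; the minimal sketch below shows the changed constraint blocks for the QP sketch of Section 3.3 (illustrative, cvxopt assumed).

# Sketch: constraint blocks enforcing 0 <= alpha_i <= C in the dual QP above.
import numpy as np
from cvxopt import matrix

def soft_margin_constraints(n, C):
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))       # -alpha <= 0 and alpha <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    return G, h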

3.5. Effective Algorithm for GLPSVM

Note that the objective function of classical SVM is $\frac{1}{2}\|w\|^{2}$; however, in this paper, we use $\frac{1}{2} w^{T} S w$ with $S = I + \lambda \left( M_L + M_G + \sigma I \right)$ in its place. In fact, if we define the following relational expressions:

$$\tilde{w} = S^{1/2} w, \qquad \tilde{x}_i = S^{-1/2} x_i, \tag{14}$$

then the GLPSVM model is equivalent to

$$\min_{\tilde{w}, b, \xi}\ \frac{1}{2} \|\tilde{w}\|^{2} + C \sum_{i=1}^{N} \xi_i \quad \text{s.t.}\ y_i \left( \tilde{w}^{T} \tilde{x}_i + b \right) \geq 1 - \xi_i,\ \xi_i \geq 0,\ i = 1, \ldots, N. \tag{15}$$

Then, we find that GLPSVM can be solved with a standard SVM software package, although the optimal discriminant vectors are different. Since we introduce globality-locality preserving into SVM, the optimal discriminant directions in GLPSVM can preserve the intrinsic manifold structure of the data in the low-dimensional feature space. Besides, the matrices $S^{1/2}$ and $S^{-1/2}$ can be calculated through the eigenvalue decomposition of the matrix $S$; interested readers can refer to [13] for more details.
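
The equivalence above suggests the following hedged sketch: compute $S^{-1/2}$ by eigenvalue decomposition, transform the samples, train an off-the-shelf linear SVM, and map the solution back. scikit-learn and the helper name are assumptions, not the authors' implementation.

# Sketch (illustrative): solve GLPSVM with a standard SVM package after the
# whitening-style transform x_tilde = S^{-1/2} x; map back as w = S^{-1/2} w_tilde.
import numpy as np
from sklearn.svm import SVC

def glpsvm_via_standard_svm(X, y, S, C=10.0):
    evals, evecs = np.linalg.eigh(S)                         # S is symmetric positive definite
    S_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    X_tilde = X @ S_inv_sqrt                                 # rows become S^{-1/2} x_i
    clf = SVC(kernel="linear", C=C).fit(X_tilde, y)
    w_tilde = (clf.dual_coef_ @ clf.support_vectors_).ravel()
    w = S_inv_sqrt @ w_tilde                                 # back to the original space
    return w, clf.intercept_[0]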

4. Performance Evaluation

4.1. The Parameters’ Influence on the Performance

In the proposed model, there are six parameters in total: the neighborhood parameters $k_1$ and $k_2$ and the heat kernel parameters $t_1$ and $t_2$ in the locality preserving matrix and the globality preserving matrix, respectively, the trade-off parameter $\lambda$, and the regularization parameter $\sigma$. In this section, to show the parameters’ influence on the performance, we conduct experiments on the binary Ionosphere database. We select 30% of the samples of each class for training, the remaining samples are used for testing, and all samples are normalized before the experiment.

For the parameter settings, the regularization parameter $\sigma$, the heat kernel parameter $t$ (with $t_1 = t_2 = t$), and the sample neighborhood parameter $k_1$ are each selected from predefined candidate sets, and the trade-off parameter $\lambda$ is varied over a predefined set of values.
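
A generic sketch of the tuning loop follows (the paper's exact candidate values are not reproduced; the function names and grids here are hypothetical placeholders).

# Sketch: exhaustive search over (sigma, t, k, lambda) by held-out accuracy.
# train_fn and score_fn are placeholders for model fitting and evaluation.
import itertools

def grid_search(train_fn, score_fn, sigmas, ts, ks, lams):
    best_acc, best_params = -1.0, None
    for params in itertools.product(sigmas, ts, ks, lams):
        acc = score_fn(train_fn(*params))
        if acc > best_acc:
            best_acc, best_params = acc, params
    return best_acc, best_params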

Firstly, we set the trade-off parameter $\lambda$ to 0.2 and use the same heat kernel parameter ($t_1 = t_2$) to see the effect of the parameters $\sigma$ and $k_1$. Table 1 shows the classification accuracy under different settings of these three parameters. We can find that the regularization parameter $\sigma$ plays an important role in classification performance. Besides, appropriate parameter selection can provide better classification results.

Table 1: The effects of the regularization parameter $\sigma$ and the sample neighborhood $k_1$ on classification performance.

Next, we will explore the effect of the trade-off parameter $\lambda$. Table 2 presents the classification accuracy of GLPSVM under different trade-off parameters $\lambda$ and different regularization parameters $\sigma$. Here, we give the highest accuracy together with its corresponding sample neighborhood parameter $k_1$. It can be seen that $\lambda$ also plays an important role in the classification results and that a small $\lambda$ may be more appropriate than a large one.

Table 2: The effects of the trade-off parameter $\lambda$ on classification performance (the remaining parameters are fixed).

4.2. Comparative Analysis

In this subsection, comparative experiments are conducted to test the ability of the proposed GLPSVM. We compare it with SVM, SVM+LDA, MCVSVM, and MCLPV_SVM on seven different databases selected from the well-known UCI database. The seven databases include three binary databases and four multiclass databases. In the multiclass tasks, one-against-one coding strategy is employed. The detailed information of these selected databases is shown in Table 3. For all these databases, 30% of samples in each class are randomly selected for training, the remaining samples are used for testing, and all the data samples are normalized before experiment.
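
A sketch of the evaluation protocol just described is given below (illustrative; the per-class 30% random split and simple min-max normalization are assumptions about the preprocessing details).

# Sketch: per-class 30%/70% split and min-max normalization, to be repeated
# over several random runs with accuracies averaged. Illustrative only.
import numpy as np

def per_class_split(y, train_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    train_mask = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_frac * len(idx))))
        train_mask[idx[:n_train]] = True
    return train_mask

def minmax_normalize(X):
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)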

Table 3: Database information for comparative analysis.

Table 4 gives the classification accuracy of SVM, SVM+LDA, MCVSVM, MCLPV_SVM, and GLPSVM under different regularization, neighborhood, and heat kernel parameters, where the highest average classification accuracy is presented. Here, the average classification accuracy is obtained by repeating the procedure 20 times. It can be seen from Table 4 that the proposed GLPSVM always has the highest accuracy, especially on the Wine and the Iris databases, where the accuracy is more than 97%, far higher than that of the other algorithms.

Table 4: Classification accuracy for comparative analysis.

5. Conclusions

In this paper, a new extension of SVM was proposed, called support vector machine with globality-locality preserving (GLPSVM). It takes the intrinsic manifold structure of the data space into consideration. Besides, the soft margin GLPSVM was also presented. The effective algorithm for GLPSVM showed that the model can be solved by transforming it into the standard SVM model and using a standard SVM software package, which greatly improves the implementation efficiency. Finally, experimental results on real world databases validated that the proposed method performs better than SVM, SVM+LDA, MCVSVM, and MCLPV_SVM.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research has been supported by the National Natural Science Foundation under Grant no. 61001200.

References

  1. V. Vapnik, Statistical Learning Theory, Wiley, New York, NY, USA, 1998.
  2. Y. B. Yuan and T. Huang, “A polynomial smooth support vector machine for classification,” in Advanced Data Mining and Applications, vol. 3584 of Lecture Notes in Artificial Intelligence, pp. 370–377, Springer, Berlin, Germany, 2005.
  3. Y. B. Yuan and C. Li, “A new smooth support vector machine,” in Computational Intelligence and Security, vol. 3801 of Lecture Notes in Artificial Intelligence, pp. 392–397, Springer, Berlin, Germany, 2005.
  4. Y. B. Yuan, “Canonical duality solution for alternating support vector machine,” Journal of Industrial and Management Optimization, vol. 8, no. 3, pp. 611–621, 2012.
  5. Y. B. Yuan, W. Fan, and D. Pu, “Spline function smooth support vector machine for classification,” Journal of Industrial and Management Optimization, vol. 3, no. 3, pp. 529–542, 2007.
  6. Y. Li, S. Gong, J. Sherrah, and H. Liddell, “Support vector machine based multi-view face detection and recognition,” Image and Vision Computing, vol. 22, no. 5, pp. 413–427, 2004.
  7. A. Ganapathiraju, J. E. Hamaker, and J. Picone, “Applications of support vector machines to speech recognition,” IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2348–2355, 2004.
  8. S. Tong and D. Koller, “Support vector machine active learning with applications to text classification,” The Journal of Machine Learning Research, vol. 2, pp. 45–66, 2002.
  9. S. Tong and E. Chang, “Support vector machine active learning for image retrieval,” in Proceedings of the 9th ACM International Conference on Multimedia, pp. 107–118, October 2001.
  10. S. Zafeiriou, A. Tefas, and I. Pitas, “Minimum class variance support vector machines,” IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2551–2564, 2007.
  11. X. Wang, F. L. Chung, and S. Wang, “On minimum class locality preserving variance support vector machine,” Pattern Recognition, vol. 43, no. 8, pp. 2753–2762, 2010.
  12. T. Xiong and V. Cherkassky, “A combined SVM and LDA approach for classification,” in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '05), pp. 1455–1459, August 2005.
  13. N. M. Khan, R. Ksantini, I. S. Ahmad, and B. Boufama, “A novel SVM+NDA model for classification with an application to face recognition,” Pattern Recognition, vol. 45, no. 1, pp. 66–79, 2012.
  14. L. Zhang, “Locally regressive projections,” International Journal of Software and Informatics, vol. 7, no. 3, pp. 435–451, 2013.
  15. L. Zhang, C. Chen, J. Bu, and Z. Chen, “Locally discriminative coclustering,” IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 6, pp. 1025–1035, 2012.
  16. L. Zhang, C. Chen, J. Bu, and Z. Chen, “Active learning based on locally linear reconstruction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 2026–2038, 2011.
  17. Q. Q. Gu, Z. Li, and J. Han, “Linear discriminant dimensionality reduction,” in Machine Learning and Knowledge Discovery in Databases, vol. 6911 of Lecture Notes in Computer Science, pp. 549–564, 2011.
  18. Q. Q. Gu and J. Han, “Clustered support vector machines,” in Proceedings of the 16th International Conference on Artificial Intelligence and Statistics, pp. 307–315, 2013.
  19. W. Yu, X. Teng, and C. Liu, “Face recognition using discriminant locality preserving projections,” Image and Vision Computing, vol. 24, no. 3, pp. 239–248, 2006.
  20. L. Yang, W. Gong, X. Gu, W. Li, and Y. Liang, “Null space discriminant locality preserving projections for face recognition,” Neurocomputing, vol. 71, no. 16–18, pp. 3644–3649, 2008.
  21. X. Gu, W. Gong, and L. Yang, “Regularized locality preserving discriminant analysis for face recognition,” Neurocomputing, vol. 74, no. 17, pp. 3036–3042, 2011.
  22. S. Huang, D. Yang, F. Yang et al., “Face recognition via globality-locality preserving projections,” 2013, http://arxiv.org/abs/1311.1279.
  23. C. W. Hsu and C. J. Lin, “A comparison of methods for multiclass support vector machines,” IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 415–425, 2002.
  24. X. F. He and P. Niyogi, “Locality preserving projections,” in Advances in Neural Information Processing Systems (NIPS), 2003.
  25. M. Belkin, P. Niyogi, and V. Sindhwani, “Manifold regularization: a geometric framework for learning from labeled and unlabeled examples,” Journal of Machine Learning Research, vol. 7, pp. 2399–2434, 2006.