Abstract

Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) allows a dataset to select the kernels that suit its distribution characteristics, rather than committing to a single prescribed kernel. It has been shown in the literature that MKL achieves higher recognition accuracy than SVM, but at the expense of time-consuming computations, which creates analytical and computational difficulties in solving MKL problems. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm based on the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce the dimension of the data while retaining its features under a global low-rank constraint. Furthermore, we extend the binary-class MKL to multiclass MKL via the pairwise (one-versus-one) strategy. Finally, the recognition accuracy and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm allocates kernel weights efficiently and substantially boosts the performance of MKL.

1. Introduction

The Support Vector Machine (SVM) is an important machine learning method [1]: it trains a linear learner in the feature space induced by a kernel function and relies on generalization theory to avoid overfitting. Recently, the Multiple Kernel Learning (MKL) method has received intensive attention because its recognition performance is often better than that of the classical SVM [2, 3]. However, optimizing the parameters of multiple kernels incurs a high computational cost, since it requires searching the entire feature space and solving large convex quadratic optimization problems. Hassan et al. [4] use a Genetic Algorithm (GA) to improve the search efficiency in MKL, but the effectiveness of the GA remains to be proved and its search direction is difficult to determine. Besides, with data volumes increasing exponentially in the real world, it is intractable to solve large-scale problems with conventional optimization methods. Therefore, many approaches have been put forward to improve MKL. For example, the Sequential Minimal Optimization (SMO) algorithm [5] is a typical decomposition approach that updates one or two Lagrange multipliers at each training step to obtain the solution iteratively, and some online algorithms [6, 7] refine predictors through an online-to-batch conversion scheme; it should be noted, however, that the convergence rate of such decomposition approaches is unstable. Another approach is to approximate the kernel matrix, for example, by Cholesky decomposition [8, 9], which reduces the computational cost but sacrifices recognition accuracy because of the information lost in the approximation.

Generally, if we let $n$ denote the sample size and $M$ the number of kernels, the complexities of solving the convex quadratic optimization problems in SVM and MKL grow polynomially in $n$, and for MKL also with $M$ [10]. It can be observed that the computing scale depends on the size of the training set rather than on the dimension of the kernel space [8]. In this big data era, it is imperative to find an approach that minimizes the computing scale while capturing the global data structure, so as to improve SVM or MKL. Low-Rank Representation (LRR) [11] has recently attracted great interest in many research fields, such as image processing [12, 13], computer vision [14], and data mining [15]. LRR, as a compressed sensing approach, seeks the lowest-rank linear combination of all training samples for reconstructing test samples under a global low-rank constraint. When the training samples are sufficiently complete, representing the data with low rank augments the similarities among intraclass samples and the differences among interclass samples. Meanwhile, if the data is corrupted, the rank of the coefficient matrix increases sharply, so the lowest-rank criterion also enforces noise correction. LRR thus integrates data clustering and noise correction into a unified framework, which can greatly improve recognition accuracy and robustness in the preprocessing stage. In this sense, SVM and MKL can become both more accurate and faster when combined with LRR.

In this paper, we combine LRR and MKL to develop a novel recognition approach, the Low-Rank MKL (LR-MKL) algorithm. The combined Low-Rank SVM (LR-SVM) is used alongside it as a reference. We conduct extensive experiments on public databases to show that the proposed LR-MKL algorithm achieves better performance than the original SVM and MKL.

The remainder of the paper is organized as follows. We start with a brief review of SVM in the next section. In Section 3 we describe some existing MKL algorithms and their structural frames. Section 4 is devoted to the efficient MKL algorithm using LRR, which we present and call LR-MKL. Experiments demonstrating the utility of the suggested algorithms on real data are presented in Section 5. Section 6 gives the conclusions.

2. Overview of SVM

Given the input space $X=\{x_1,\dots,x_n\}$ and the label vector $y=(y_1,\dots,y_n)^{T}$ with $y_i\in\{-1,+1\}$, where the samples are independent and identically distributed, the training set can be denoted as $\{(x_i,y_i)\}_{i=1}^{n}$ (containing $n$ samples). According to the theory of structural risk minimization [1], SVM finds the classification hyperplane with the maximum margin in the mapping space $\phi(\cdot)$. Hence, SVM training with a soft margin is the quadratic optimization problem
$$\min_{w,b,\xi}\ \frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{n}\xi_i\quad\text{s.t.}\ y_i\big(\langle w,\phi(x_i)\rangle+b\big)\ge 1-\xi_i,\ \xi_i\ge 0,\ i=1,\dots,n.\tag{1}$$
Here, $w$ is the weight coefficient vector, $C$ is the penalty factor, $\xi_i$ is the slack variable, and $b$ is the bias term of the classification hyperplane. The optimization problem can be transformed into its dual form by introducing the Lagrangian multipliers $\alpha_i$, and the data can be implicitly mapped to the feature space by the kernel function $k(x_i,x_j)=\langle\phi(x_i),\phi(x_j)\rangle$, so formula (1) changes into
$$\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j k(x_i,x_j)\quad\text{s.t.}\ \sum_{i=1}^{n}\alpha_i y_i=0,\ 0\le\alpha_i\le C.\tag{2}$$
Simplifying the objective function of formula (2) into vector form gives
$$\max_{\alpha}\ \mathbf{1}^{T}\alpha-\frac{1}{2}\alpha^{T}\big(K\circ yy^{T}\big)\alpha,\tag{3}$$
where $\alpha$ is the vector of Lagrangian multipliers, $y$ is the label vector, $\mathbf{1}$ is a vector of ones ($n$-dimensional), $K$ is the kernel matrix with $K_{ij}=k(x_i,x_j)$, and $\circ$ denotes the element-wise product. If the solution of the optimization problem is $\alpha^{*}$, the discriminant function can be represented as
$$f(x)=\operatorname{sgn}\Big(\sum_{i=1}^{n}\alpha_i^{*}y_i k(x_i,x)+b\Big).\tag{4}$$
The kernel functions commonly used in SVM are the linear kernel, the polynomial kernel, the radial basis function (RBF) kernel, and the sigmoid kernel, respectively denoted as
$$k(x_i,x_j)=x_i^{T}x_j,\quad k(x_i,x_j)=\big(x_i^{T}x_j+c\big)^{d},\quad k(x_i,x_j)=\exp\Big(-\frac{\|x_i-x_j\|^{2}}{2\sigma^{2}}\Big),\quad k(x_i,x_j)=\tanh\big(\kappa\,x_i^{T}x_j+\theta\big).\tag{5}$$
To obtain high recognition accuracy with a monokernel SVM, we need to know which kernel matches the distribution characteristics of the data. Nevertheless, it is impractical and wasteful of resources to try the different kernels one by one. In this sense, we need MKL to allocate the kernel weights automatically according to the data structure.
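To make the role of the dual problem (2) and the discriminant function (4) concrete, the following minimal sketch trains a monokernel SVM on a precomputed kernel matrix in Python, using scikit-learn's SVC, which wraps the LIBSVM solver used later in Section 5. The toy data, the helper name rbf_kernel, and the parameter values are illustrative assumptions, not part of the original experiments.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(A, B, sigma=1.0):
    """RBF kernel matrix between the rows of A and B (third kernel in formula (5))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

# toy data: two Gaussian blobs
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
y_train = np.array([-1] * 30 + [1] * 30)
X_test = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])

# a precomputed Gram matrix lets us plug any (single or combined) kernel into LIBSVM
K_train = rbf_kernel(X_train, X_train)
K_test = rbf_kernel(X_test, X_train)          # rows: test points, columns: training points

svm = SVC(C=10.0, kernel="precomputed")       # solves the dual problem (2)
svm.fit(K_train, y_train)
print(svm.predict(K_test))                    # evaluates the discriminant function (4)
```

Passing a precomputed Gram matrix is also what makes it straightforward to substitute a combined multiple kernel later.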

3. Multiple Kernel Learning (MKL) Algorithms

To improve the universal applicability of the SVM algorithm, MKL is applied instead of one specific kernel function:
$$k_{\eta}(x_i,x_j)=f_{\eta}\big(\{k_m(x_i,x_j)\}_{m=1}^{M}\big),$$
where $k_m$ is the $m$-th monokernel function. The multiple kernel $k_{\eta}$ is obtained by the combining function $f_{\eta}$ from the different $k_m$, and $\eta=(\eta_1,\dots,\eta_M)$ is the proportion (weight) parameter of the kernels. There are many different methods to assign the kernel weights.

Pavlidis et al. [16] propose a simple combination mode using an unweighted sum or product of heterogeneous kernels. The combining function of this Unweighted Multiple Kernel Learning (UMKL) method is
$$k_{\eta}(x_i,x_j)=\sum_{m=1}^{M}k_m(x_i,x_j)\quad\text{or}\quad k_{\eta}(x_i,x_j)=\prod_{m=1}^{M}k_m(x_i,x_j).$$
In follow-up studies, the distribution of the weights $\eta$ becomes a vital limiting factor of usability. Chapelle and Rakotomamonjy [17] report that the optimization problem can be solved by a projected gradient method in two alternating steps: first, solving a standard SVM with the given $\eta$; second, updating $\eta$ through the gradient function evaluated with the multipliers $\alpha$ calculated in the first step. Writing $K_m$ for the Gram matrix of kernel $k_m$, the kernel combining function, objective function, and gradient function of this Alternative Multiple Kernel Learning (AMKL) method are
$$k_{\eta}=\sum_{m=1}^{M}\eta_m k_m,\qquad J(\eta)=\max_{\alpha}\ \mathbf{1}^{T}\alpha-\frac{1}{2}\alpha^{T}\big(K_{\eta}\circ yy^{T}\big)\alpha,\qquad \frac{\partial J}{\partial\eta_m}=-\frac{1}{2}\alpha^{*T}\big(K_m\circ yy^{T}\big)\alpha^{*}.$$
The Generalized Multiple Kernel Learning (GMKL) method [18] also employs the gradient tool to approach the solution, but it treats the kernel weights through a regularization term $r(\eta)$ added to the objective, so that the objective and gradient functions are augmented by $r(\eta)$ and $\nabla_{\eta}r(\eta)$, and the combined kernel may take either a sum or a product form. There is another two-step alternating method using a gating model, called the Localized Multiple Kernel Learning (LMKL) method [19]. Its locally combined kernel is represented as
$$k_{\eta}(x_i,x_j)=\sum_{m=1}^{M}\eta_m(x_i)\,k_m(x_i,x_j)\,\eta_m(x_j),$$
where the gating functions $\eta_m(\cdot)$ act on a mapping of the feature space. To ensure nonnegativity, the kernels can be composed in a competitive or a cooperative mode by using the softmax form and the sigmoid form [25], respectively:
$$\eta_m(x)=\frac{\exp\big(\langle v_m,x\rangle+v_{m0}\big)}{\sum_{h=1}^{M}\exp\big(\langle v_h,x\rangle+v_{h0}\big)},\qquad \eta_m(x)=\frac{1}{1+\exp\big(-\langle v_m,x\rangle-v_{m0}\big)},$$
where $v_m$ and $v_{m0}$ denote the parameters of the gating model. On the other hand, Qiu and Lane [20] quantify the fitness between a kernel and the labels in a Heuristic Multiple Kernel Learning (HMKL) way by exploiting the relationship between the kernel matrix $K_m$ and the sample labels $y$. The relationship can be expressed by the kernel alignment
$$A\big(K_m,yy^{T}\big)=\frac{\langle K_m,yy^{T}\rangle_{F}}{n\sqrt{\langle K_m,K_m\rangle_{F}}},$$
where $\langle\cdot,\cdot\rangle_{F}$ is the Frobenius inner product. The kernel alignment is then used to weigh the proportions of the multiple kernels:
$$\eta_m=\frac{A\big(K_m,yy^{T}\big)}{\sum_{h=1}^{M}A\big(K_h,yy^{T}\big)}.$$
Then, a concentration bound is added to the kernel alignment by Cortes et al. [21], who center the kernels:
$$K_m^{c}=\Big(I-\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\Big)K_m\Big(I-\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\Big).$$
Accordingly, the multikernel weights of this Centering Multiple Kernel Learning (CMKL) method are obtained from the alignments of the centered kernels $K_m^{c}$ with the target matrix $yy^{T}$.
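As an illustration of the heuristic alignment-based weighting just described, the sketch below computes the alignment of each base Gram matrix with the label target and normalizes the alignments into kernel weights. This is a minimal interpretation of the HMKL scheme; the helper names and the clipping of negative alignments are our own assumptions.

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment A(K, yy^T) = <K, yy^T>_F / (n * sqrt(<K, K>_F))."""
    yyT = np.outer(y, y)
    return np.sum(K * yyT) / (len(y) * np.sqrt(np.sum(K * K)))

def heuristic_weights(kernels, y):
    """HMKL-style weights: each kernel's share is proportional to its alignment with the labels."""
    a = np.array([alignment(K, y) for K in kernels])
    a = np.clip(a, 0, None)            # guard against negative alignments (assumption)
    return a / a.sum()

def combine(kernels, eta):
    """Weighted-sum multiple kernel (the linear combining function)."""
    return sum(w * K for w, K in zip(eta, kernels))

# usage with base Gram matrices K1, K2 and labels y in {-1, +1}:
# eta = heuristic_weights([K1, K2], y); K = combine([K1, K2], eta)
```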

Later, Cortes et al. [22] studied a Polynomial Multiple Kernel Learning (PMKL) method, which utilizes a polynomial combination of the base kernels with higher degree, based on Kernel Ridge Regression (KRR) theory. However, the number of combination coefficients grows rapidly with the degree, which is too large to handle in practice, so the combination is simplified into a product form with nonnegative coefficients $\mu_m$; the special quadratic case can be expressed as
$$k_{\mu}(x_i,x_j)=\sum_{m=1}^{M}\sum_{h=1}^{M}\mu_m\mu_h\,k_m(x_i,x_j)\,k_h(x_i,x_j).$$
The related learning problem can be formulated as a min-max optimization over the weights $\mu$ and the dual variables, where $\mu$ is restricted to a positive, bounded, and convex set. Two bounded sets, defined by the $\ell_1$-norm and the $\ell_2$-norm, are the appropriate choices for constructing this set; their center and radius are model parameters, and the center is generally set to 0 or to a fixed prior weight vector.
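A short sketch of the product-form combination mentioned above, assuming (as one common reading of the quadratic special case) that the base kernels are multiplied element-wise; the function name and the example weights are hypothetical.

```python
import numpy as np

def quadratic_combined_kernel(kernels, mu):
    """Degree-2 product-form combination: K_mu = sum_{m,h} mu_m * mu_h * (K_m ∘ K_h),
    where ∘ is the element-wise product. Because the products are element-wise, this
    equals the element-wise square of sum_m mu_m * K_m, so no double loop is needed."""
    S = sum(w * K for w, K in zip(mu, kernels))
    return S * S   # element-wise square

# hypothetical usage with two base Gram matrices K1, K2 and weights mu = [0.7, 0.3]:
# K = quadratic_combined_kernel([K1, K2], [0.7, 0.3])
```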

Apart from the approaches described above, and inspired by the consistency between group Lasso and MKL [26], Xu et al. [23] and Kloft et al. [24] propose iterative MKL methods in a generalized $\ell_p$-norm form; they are collectively called the Arbitrary Norms Multiple Kernel Learning (ANMKL) method. On the basis of the duality condition linking each kernel's weight to the norm $\|w_m\|_2$ of the corresponding partial solution, the kernel weights are updated in closed form as
$$\eta_m=\frac{\|w_m\|_2^{2/(p+1)}}{\Big(\sum_{h=1}^{M}\|w_h\|_2^{2p/(p+1)}\Big)^{1/p}},\qquad m=1,\dots,M.$$
It can be seen from the formulas in this section that the operational complexity of MKL is mainly determined by the kernel matrices. So simplifying the feature space is an efficient way to improve the performance of MKL. Through the optimization of basis vectors, LRR can reduce the dimension while retaining the data features, which makes it ideal for improving MKL.
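The two-step alternating structure shared by several of the methods above can be sketched as follows: with the weights fixed, an SVM is trained on the weighted-sum kernel; with the dual solution fixed, the weights are refreshed by the closed-form $\ell_p$-norm update given above. The sketch uses scikit-learn's precomputed-kernel SVC and assumes binary labels in {-1, +1}; the initialization, iteration count, and function name are our assumptions rather than the exact procedure of [23, 24].

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=2.0, C=1.0, n_iter=20):
    """Alternating lp-norm MKL sketch: fix eta -> solve the SVM dual, fix alpha -> update eta."""
    M = len(kernels)
    eta = np.full(M, 1.0 / M)                              # uniform initialization (assumption)
    for _ in range(n_iter):
        K = sum(e * Km for e, Km in zip(eta, kernels))     # weighted-sum multiple kernel
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        sv, coef = svm.support_, svm.dual_coef_.ravel()    # coef = alpha_i * y_i on support vectors
        # ||w_m||_2 = eta_m * sqrt((alpha∘y)^T K_m (alpha∘y)), restricted to support vectors
        norms = np.array([eta[m] * np.sqrt(coef @ kernels[m][np.ix_(sv, sv)] @ coef)
                          for m in range(M)])
        eta = norms ** (2.0 / (p + 1))
        eta /= (np.sum(norms ** (2.0 * p / (p + 1)))) ** (1.0 / p)
    return eta
```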

4. MKL Using Low-Rank Representation

4.1. Low-Rank Representation (LRR)

Theoretical advances on LRR enable us to exploit the latent low-rank structure in data [27, 28]. LRR obtains the representation of all samples simultaneously under a global low-rank constraint, and the LRR procedure can be carried out in a relatively short time with guaranteed performance.

Let the input sample space $X=[x_1,\dots,x_n]$ be represented by a linear combination in the dictionary $D$:
$$X=DZ,$$
where $Z=[z_1,\dots,z_n]$ is the coefficient matrix and each $z_i$ is the representation coefficient vector of $x_i$. When the samples are sufficient, $X$ itself serves as the dictionary $D$. Considering the noise or tainted data encountered in practical applications, LRR aims at approximating $X$ into $XZ+E$ by minimizing the rank of the matrix $Z$ while reducing the norm of $E$, in which $Z$ is a low-rank matrix and $E$ is the associated sparse error. It can be generally formulated as
$$\min_{Z,E}\ \operatorname{rank}(Z)+\lambda\|E\|_{\ell}\quad\text{s.t.}\ X=XZ+E.\tag{23}$$
Here, $\lambda>0$ is used to balance the effect of the low-rank term and the error term. Since rank minimization is NP-hard, it is replaced by a convex surrogate, and the error norm is chosen accordingly; we choose the $\ell_{2,1}$-norm as the error term measurement here, defined as $\|E\|_{2,1}=\sum_{j=1}^{n}\sqrt{\sum_{i=1}^{d}E_{ij}^{2}}$, while the rank function relaxes into the nuclear norm $\|Z\|_{*}$ [29]. Consequently, the convex relaxation of formula (23) is
$$\min_{Z,E}\ \|Z\|_{*}+\lambda\|E\|_{2,1}\quad\text{s.t.}\ X=XZ+E.\tag{24}$$
The optimal solution can be obtained via the Augmented Lagrange Multipliers (ALM) method [11].
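A minimal sketch of solving formula (24) by inexact ALM, alternating singular value thresholding for the nuclear norm, a least-squares step for $Z$, and column-wise shrinkage for the $\ell_{2,1}$ error term. The default parameter values and the optional dictionary argument (used later when the training set serves as the dictionary for test samples) are our assumptions, not the authors' implementation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * (l2,1 norm)."""
    norms = np.linalg.norm(M, axis=0)
    return M * (np.maximum(norms - tau, 0.0) / (norms + 1e-12))

def lrr(X, D=None, lam=0.1, rho=1.1, mu=1e-2, mu_max=1e6, tol=1e-6, max_iter=500):
    """Inexact ALM for  min ||Z||_* + lam * ||E||_{2,1}  s.t.  X = D Z + E.
    Columns of X are samples; D is the dictionary (defaults to X itself, as in formula (24))."""
    if D is None:
        D = X
    d, n = X.shape
    k = D.shape[1]
    Z = np.zeros((k, n)); J = np.zeros((k, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((k, n))
    inv = np.linalg.inv(np.eye(k) + D.T @ D)               # factor used by the Z-step
    for _ in range(max_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                     # nuclear-norm step
        Z = inv @ (D.T @ (X - E) + J + (D.T @ Y1 - Y2) / mu)   # least-squares step
        E = l21_shrink(X - D @ Z + Y1 / mu, lam / mu)      # l2,1 error step
        r1, r2 = X - D @ Z - E, Z - J                      # residuals of the two constraints
        Y1 += mu * r1
        Y2 += mu * r2
        mu = min(rho * mu, mu_max)
        if max(np.abs(r1).max(), np.abs(r2).max()) < tol:
            break
    return Z, E
```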

4.2. Efficient SVM and MKL Using LRR

The kernel matrix strongly affects the computational efficiency and accuracy of SVM and MKL. Finding an appropriate variant of the kernel matrix that preserves both the initial labels and the geometric structure of the data is therefore a crucial task for recognition. Since LRR has solid theoretical guarantees, in the sequel we adopt LRR to transform the kernel, augmenting the similarities among intraclass samples and the differences among interclass samples. Moreover, a representation of all samples under a global low-rank constraint can be attained, which is conducive to capturing the global data structure [30]. Accordingly, LR-SVM and LR-MKL are the two techniques we propose for improving the performance of SVM and MKL.

Firstly, based on LRR theory, we improve the monokernel SVM and use it as a reference, so that the improvement brought by LRR can be shown directly. The specific procedure of the efficient LR-SVM is presented in Algorithm 1.

Algorithm 1 (efficient SVM using LRR (LR-SVM)).   
Input. This includes the whole training set $X_{\text{tr}}$ with its label vector $y$, the feature space of the testing set $X_{\text{te}}$, the parameters of SVM, and the parameter $\lambda$ of LRR.
Step 1. Normalize $X_{\text{tr}}$ and $X_{\text{te}}$.
Step 2. Perform the LRR procedure of formula (24) on the normalized $X_{\text{tr}}$ and $X_{\text{te}}$ to project them onto the coefficient feature spaces $Z_{\text{tr}}$ and $Z_{\text{te}}$, respectively.
Step 3. Plug $Z_{\text{tr}}$ and the label vector $y$ into SVM to train the classification model.
Step 4. Utilize the obtained classification model to classify the coefficient features $Z_{\text{te}}$ of the testing set, using the discriminant function of formula (4).
Output. Compare the actual label vector of the test set with the predicted label vector to obtain the recognition results.
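A compact sketch of Algorithm 1 under one reading of Step 2, namely that the training samples also serve as the LRR dictionary for the test samples, so that both coefficient representations live in the same space. It reuses the hypothetical lrr() helper sketched in Section 4.1 and scikit-learn's LIBSVM-backed SVC; the normalization scheme, kernel choice, and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def lr_svm(X_tr, y_tr, X_te, y_te, lam=0.1, C=10.0):
    """Algorithm 1 sketch: LRR coefficient features followed by a monokernel SVM."""
    # Step 1: normalize each sample to unit length; columns are samples (LRR convention)
    A_tr = (X_tr / np.linalg.norm(X_tr, axis=1, keepdims=True)).T
    A_te = (X_te / np.linalg.norm(X_te, axis=1, keepdims=True)).T
    # Step 2: LRR projection (formula (24)); the training set serves as the dictionary,
    # so training and test coefficient vectors have the same dimension
    Z_tr, _ = lrr(A_tr, lam=lam)
    Z_te, _ = lrr(A_te, D=A_tr, lam=lam)
    # Steps 3-4: train an SVM on the coefficient features and classify the test coefficients
    clf = SVC(C=C, kernel="rbf").fit(Z_tr.T, y_tr)
    y_pred = clf.predict(Z_te.T)
    # Output: recognition accuracy from comparing actual and predicted labels
    return accuracy_score(y_te, y_pred)
```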
It is well known that SVM can be unstable across different data structures, so MKL recognition has become the development trend. Next, we combine LRR with the MKL algorithms mentioned in Section 3 and change the binary classification model into a multiclass model by the pairwise (one-versus-one) strategy, in which a classifier is designed between every pair of categories, giving $c(c-1)/2$ classifiers ($c$ is the number of categories). Then we adopt the voting method and assign each sample to the category that obtains the most votes. All the combined algorithms can be summarized in one frame, given in Algorithm 2, which we refer to collectively as LR-MKL.

Algorithm 2 (efficient MKL using LRR (LR-MKL)).   
Input. This includes the whole training set $X_{\text{tr}}$ with its label vector $y$, the feature space of the testing set $X_{\text{te}}$, and the parameter $\lambda$ of LRR.
Steps 1 and 2. They are the same as in the LR-SVM algorithm.
Step 3. Plug $Z_{\text{tr}}$ and the label vector $y$ into MKL to train the $c(c-1)/2$ classifiers with the pairwise strategy.
Step 4. Utilize each of the binary MKL classifiers to classify the coefficient features $Z_{\text{te}}$ of the testing set.
Step 5. According to the prediction label vectors, vote for the category of each sample to obtain the multiclass labels.
Output. Compare the actual label vector of the test set with the predicted label vector to obtain the recognition results and the kernel weight vector $\eta$.
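The LR-MKL frame can be sketched along the same lines. Here the base kernels are built on the LRR coefficient features, the weights are allocated with an alignment heuristic extended to a multiclass target matrix (just one of the MKL schemes from Section 3), and scikit-learn's SVC performs the pairwise one-versus-one training and voting internally. The lrr() helper, the kernel bandwidths, and the multiclass target construction are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def rbf(A, B, sigma):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def lr_mkl(X_tr, y_tr, X_te, y_te, lam=0.1, C=10.0, sigmas=(0.5, 1.0, 2.0)):
    """Algorithm 2 sketch: LRR features, alignment-weighted multiple kernel, one-vs-one SVM."""
    # Steps 1-2: same as LR-SVM (normalize, then project onto the LRR coefficient space)
    A_tr = (X_tr / np.linalg.norm(X_tr, axis=1, keepdims=True)).T
    A_te = (X_te / np.linalg.norm(X_te, axis=1, keepdims=True)).T
    Z_tr, _ = lrr(A_tr, lam=lam)                  # lrr() is the hypothetical ALM sketch above
    Z_te, _ = lrr(A_te, D=A_tr, lam=lam)
    F_tr, F_te = Z_tr.T, Z_te.T
    # Step 3: base kernels on the coefficient features, weighted by a multiclass alignment proxy
    base_tr = [rbf(F_tr, F_tr, s) for s in sigmas]
    base_te = [rbf(F_te, F_tr, s) for s in sigmas]
    T = (y_tr[:, None] == y_tr[None, :]).astype(float) * 2 - 1    # same class -> +1, else -1
    a = np.array([np.sum(K * T) / (len(y_tr) * np.sqrt(np.sum(K * K))) for K in base_tr])
    eta = np.clip(a, 0, None)
    eta = eta / eta.sum()
    # Steps 4-5: SVC with a precomputed combined kernel trains the pairwise (one-versus-one)
    # classifiers and performs the voting internally
    K_tr = sum(w * K for w, K in zip(eta, base_tr))
    K_te = sum(w * K for w, K in zip(eta, base_te))
    y_pred = SVC(C=C, kernel="precomputed").fit(K_tr, y_tr).predict(K_te)
    return accuracy_score(y_te, y_pred), eta
```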

5. Experiments and Analysis

In this section, we conduct extensive experiments to examine the efficiency of the proposed LR-SVM and LR-MKL algorithms. The operating environment is MATLAB (R2013a) on an Intel Core i5 CPU at 2.53 GHz. The SVM toolbox used in this paper is LIBSVM [31], which is easy to apply and has been shown to be fast on large-scale databases.

The simulations are performed on diverse datasets to ensure a broadly valid assessment of the recognition effect. The test datasets range over frequently used face databases and standard test data from the UCI repository. In the simulations, all samples are normalized first.
(1) Yale face database (http://vision.ucsd.edu/content/yale-face-database): it contains 165 grayscale images of 15 individuals with different facial expressions or configurations, and each image is resized to a fixed resolution with 256 grey levels.
(2) ORL face database (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.htm): it contains 400 images of 40 distinct subjects taken at different times under varying lighting, facial expressions, and details. We resize them to a fixed resolution with 256 grey levels per pixel.
(3) LSVT Voice Rehabilitation dataset (http://archive.ics.uci.edu/ml/datasets/LSVT+Voice+Rehabilitation) [32]: it is composed of 126 speech signals from 14 people with 309 features, divided into two categories.
(4) Multiple Features Digit dataset (http://archive.ics.uci.edu/ml/datasets/Multiple+Features): it includes 2000 digitized handwritten numerals 0-9 with 649 features.

5.1. Experiments on LR-SVM

In order to demonstrate how the presented LR-SVM improves the recognition performance of SVM, we carry out numerous experiments on the Yale and ORL face databases. According to the different training sample rates (20%, 30%, 40%, 50%, 60%, 70%, and 80%), we implement seven groups of experiments on each database. To ensure a stable and reliable test, each group is given ten different random divisions, and we average them as the final results. Two kernel functions are used, with their parameters set in terms of $d$, the dimension of the feature space.
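The split-and-average protocol just described might be organized as in the following sketch, where classify stands for any of the classifiers in this paper (for example the lr_svm sketch above); the fixed random seed and the use of stratified splits are our assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

def evaluate(X, y, classify, rates=(0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8), n_splits=10):
    """Average accuracy of classify(X_tr, y_tr, X_te, y_te) over random splits per training rate."""
    results = {}
    for r in rates:
        splitter = StratifiedShuffleSplit(n_splits=n_splits, train_size=r, random_state=0)
        accs = [classify(X[tr], y[tr], X[te], y[te]) for tr, te in splitter.split(X, y)]
        results[r] = np.mean(accs)
    return results

# e.g. evaluate(X_yale, y_yale, lr_svm) with the LR-SVM sketch from Section 4.2
```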

The classification accuracy and run time on the Yale database using SVM and LR-SVM are shown in Figures 1 and 2, respectively. Similarly, the classification accuracy and run time on the ORL database are shown in Figures 3 and 4. The solid lines depict the results of SVM with different kernels, while the patterned lines in the corresponding colours depict those of LR-SVM. As can be seen from Figures 1 and 3, the proposed LR-SVM method consistently achieves an obvious improvement in classification accuracy over the original SVM method. In most cases, the classification accuracy increases with the training sample rate, showing that the more complete the training set, the better the classification accuracy; in reality, however, the training set cannot include so many samples. LR-SVM maintains high accuracy even at low training sample rates, which makes it suitable for real applications. Meanwhile, Figures 2 and 4 show that, through the LRR conversion, the run time can be reduced by more than an order of magnitude, which is welcome for the real-time requirements of data processing in the big data era.

5.2. Experiments on LR-MKL

In this section, we compare the performance of the MKL algorithms described in Section 3 and their corresponding LR-MKL algorithms. The multiple kernel is composed of the four base kernels of formula (5), with kernel proportion parameter vector $\eta$. The comparative algorithms are listed below:
(i) Unweighted MKL (UMKL) [16] and LR-UMKL: (+) indicates the sum form, and () indicates the product form
(ii) Alternative MKL (AMKL) [17] and LR-AMKL
(iii) Generalized MKL (GMKL) [18] and LR-GMKL
(iv) Localized MKL (LMKL) [19] and LR-LMKL: (sof) uses the softmax gating mode, and (sig) the sigmoid gating mode
(v) Heuristic MKL (HMKL) [20] and LR-HMKL
(vi) Centering MKL (CMKL) [21] and LR-CMKL
(vii) Polynomial MKL (PMKL) [22] and LR-PMKL: () adopts the bounded set with the $\ell_1$-norm, and () adopts the bounded set with the $\ell_2$-norm
(viii) Arbitrary Norm MKL (ANMKL) [23, 24] and LR-ANMKL: () iterates with one norm setting, and () with another
(ix) In addition, the most accurate of the four monokernel SVMs is selected as a reference item, referred to as SVM(best)
We conduct experiments on the test datasets Yale, ORL, LSVT Voice Rehabilitation (LSVT, for short), and Multiple Features Digit (Digit, for short). 60% of the samples of each dataset are drawn out randomly to train the classification model, and the remaining samples serve as the test set. From grid-search optimization of the kernel parameter and the penalty factor, we find that the classification accuracy does not vary much when they range within a certain interval, so there is no need to search the whole parameter space, which would inevitably increase the computational cost. The penalty factor $C$ is chosen by trying the values 0.01, 0.1, 1, 10, and 100, with the remaining parameters held fixed; we then assign to $C$ the value with the highest average accuracy on the cross-validation sets, as sketched below. Each algorithm is run 10 independent times, and we average the runs as the final results. The bold numbers represent the preferable recognition effect between the original algorithms and their LRR-combined counterparts. The numbers in italic font denote the algorithms whose recognition precision is inferior to SVM(best). The recognition performance of the algorithms is measured by classification accuracy and run time, as illustrated in Table 1.
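The penalty-factor selection described above can be sketched as a small grid search with cross-validation; the fold count, kernel choice, and function name are our assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_C(X_train, y_train, grid=(0.01, 0.1, 1, 10, 100), folds=5):
    """Pick the penalty factor C with the highest average cross-validation accuracy."""
    scores = [np.mean(cross_val_score(SVC(C=c, kernel="rbf"), X_train, y_train, cv=folds))
              for c in grid]
    return grid[int(np.argmax(scores))]
```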

In most cases, our proposed LR-MKL methods consistently achieve superior results to the original MKL, with higher classification accuracy and shorter operation time. This indicates that LRR can augment the similarities among intraclass samples and the differences among interclass samples while simplifying the kernel matrix. Note that UMKL() fails to achieve the ideal recognition effect in many cases, and is even less accurate than SVM(best); however, combining it with LRR improves its effect to a large extent. This illustrates that simply combining kernels without regard to the data structure is unreliable, and that LRR can offset part of the consequences of an irrational weight distribution. In general, PMKL, ANMKL, and their improved algorithms show the best recognition effects, and the improved algorithms in particular keep their accuracy above 90 percent throughout. In terms of run time, the real-time performance of MKL is clearly much worse than that of SVM, because MKL must allocate kernel weights, and this process can be very time consuming; among the MKL methods, LMKL is the worst and fails to satisfy the real-time requirement. Our combined LR-MKL reduces the run time severalfold, and even by more than one order of magnitude, so it can speed up high-precision MKL enough to satisfy the real-time requirement. In brief, the proposed LR-MKL boosts the performance of MKL to a great extent.

6. Conclusion

The complexity of solving the convex quadratic optimization problem in MKL grows rapidly with the size of the training set, so MKL is infeasible for large-scale problems because of its large computational cost. Our effort has therefore been directed at decreasing the dimension of the training set, noting that LRR can capture the global structure of the data in relatively few dimensions. We have first given a review of several existing MKL algorithms and, based on this point, proposed a novel combined LR-MKL, which largely improves the performance of MKL. A large number of experiments have been carried out on four real-world datasets to contrast the recognition effects of the various MKL and LR-MKL algorithms. It has been shown that in most cases the recognition effects of the MKL algorithms are better than SVM(best), except for UMKL(). Moreover, our proposed LR-MKL methods have consistently achieved superior results to the original MKL, and among them PMKL, ANMKL, and their improved algorithms show the best recognition effects.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 51208168), Hebei Province Natural Science Foundation (no. E2016202341), and Tianjin Natural Science Foundation (no. 13JCYBJC37700).