Computational Intelligence and Neuroscience
Volume 2017 (2017), Article ID 3678487, 9 pages
Research Article

Efficient Multiple Kernel Learning Algorithms Using Low-Rank Representation

1School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China
2Key Lab of Big Data Computation of Hebei Province, Tianjin 300401, China

Correspondence should be addressed to Kewen Xia; kwxia@hebut.edu.cn

Received 23 February 2017; Revised 20 June 2017; Accepted 5 July 2017; Published 22 August 2017

Academic Editor: Cheng-Jian Lin

Copyright © 2017 Wenjia Niu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Unlike the Support Vector Machine (SVM), which relies on a single preselected kernel, Multiple Kernel Learning (MKL) lets each dataset select a useful combination of kernels according to its distribution characteristics. The literature shows that MKL achieves higher recognition accuracy than SVM, but at the expense of time-consuming computations, which creates analytical and computational difficulties in solving MKL problems. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm based on Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimensionality while retaining data features under a global low-rank constraint. Furthermore, we extend the binary-class MKL to multiclass MKL using a pairwise strategy. Finally, the recognition accuracy and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm allocates kernel weights efficiently and substantially boosts the performance of MKL.
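To make the idea in the abstract concrete, the sketch below illustrates the general principle of combining several base kernels with nonnegative weights after replacing each Gram matrix by a low-rank approximation. This is a minimal illustration only: the truncated eigendecomposition, the fixed weights, and the toy kernels here are assumptions for exposition, not the paper's LRR-based LR-MKL algorithm, which learns the weights and uses a global low-rank constraint.

```python
import numpy as np

def rbf_kernel(X, gamma=0.05):
    """Gaussian (RBF) Gram matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def poly_kernel(X, degree=2, c=1.0):
    """Polynomial Gram matrix: K_ij = (<x_i, x_j> + c)^degree."""
    return (X @ X.T + c) ** degree

def low_rank_approx(K, rank):
    """Best rank-`rank` approximation of a symmetric PSD Gram matrix,
    obtained by keeping the top eigenpairs (a simple stand-in for the
    paper's LRR-based approximation)."""
    w, V = np.linalg.eigh(K)          # ascending eigenvalues
    idx = np.argsort(w)[::-1][:rank]  # indices of the largest ones
    return (V[:, idx] * w[idx]) @ V[:, idx].T

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))      # toy data: 50 samples, 5 features

kernels = [rbf_kernel(X), poly_kernel(X)]
approx = [low_rank_approx(K, rank=10) for K in kernels]

# Illustrative fixed kernel weights; in MKL these are learned jointly
# with the classifier.
weights = np.array([0.6, 0.4])
K_comb = sum(w * K for w, K in zip(weights, approx))
```

Because each approximate Gram matrix has rank at most 10, their weighted sum has rank at most 20 instead of the full 50, which is the source of the computational savings that low-rank kernel approximation aims for.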