Abstract and Applied Analysis
Volume 2012, Article ID 915920, 18 pages
Research Article

Error Bounds for ℓp-Norm Multiple Kernel Learning with Least Square Loss

Shao-Gao Lv1 and Jin-De Zhu2

1Statistics School, Southwestern University of Finance and Economics, Chengdu 611130, China
2The 2nd Geological Party of Bureau of Geology and Mineral Resources, Henan, Jiaozuo 450000, China

Received 24 April 2012; Accepted 10 July 2012

Academic Editor: Gerd Teschke

Copyright © 2012 Shao-Gao Lv and Jin-De Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The problem of learning a kernel function as a linear combination of multiple kernels has recently attracted considerable attention in machine learning. Specifically, by imposing an ℓp-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) has proved useful and effective both in theoretical analysis and in practical applications (Kloft et al., 2009, 2011). In this paper, we present a theoretical analysis of the approximation error and learning ability of ℓp-norm MKL. Our analysis yields explicit learning rates for ℓp-norm MKL and demonstrates notable advantages over traditional kernel-based learning algorithms in which the kernel is fixed.
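To make the setting concrete, the following is a minimal NumPy sketch (an illustration only, not the authors' algorithm or proofs): several candidate Gaussian kernels are combined with coefficients constrained to the unit ℓp sphere, and a least-square (kernel ridge) problem is solved on the combined kernel. The kernel widths, the regularization parameter, and the uniform choice of coefficients are all arbitrary assumptions for the demo.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma):
    """Gram matrix of the Gaussian kernel exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

# A small dictionary of candidate kernels (widths chosen arbitrarily).
gammas = [0.1, 1.0, 10.0]
Ks = [gaussian_kernel(X, X, g) for g in gammas]

# Combination coefficients constrained to the unit l_p sphere;
# here we simply take uniform weights rather than optimizing them.
p = 2.0
beta = np.ones(len(Ks))
beta /= np.linalg.norm(beta, ord=p)

# Combined kernel and least-square (kernel ridge) solution.
K = sum(b * Km for b, Km in zip(beta, Ks))
lam = 0.1  # regularization parameter, arbitrary for the demo
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
pred = K @ alpha  # fitted values on the training sample
```

In the full ℓp-norm MKL problem the coefficients `beta` are learned jointly with `alpha` by minimizing the regularized empirical risk over the ℓp ball, rather than fixed in advance as above.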