Abstract

A kernel-based greedy algorithm is presented to realize efficient sparse learning with data-dependent basis functions. An upper bound on the generalization error is obtained from a covering-number complexity measure of the hypothesis space. A careful analysis shows that the error has a satisfactory decay rate under mild conditions.

1. Introduction

Kernel methods have been extensively utilized in various learning tasks, and their generalization performance has been investigated from the viewpoint of approximation theory [1, 2]. Among these methods, a family can be viewed as coefficient-based regularized schemes in data-dependent hypothesis spaces; see, for example, [3–8]. For a given sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$, the solution of these kernel methods has the expression $f_{\mathbf{z}}(x) = \sum_{i=1}^{m} \alpha_i K(x, x_i)$, where $\alpha = (\alpha_1, \ldots, \alpha_m) \in \mathbb{R}^{m}$ and $K$ is a Mercer kernel. The aim of these coefficient-based algorithms is to search for a set of coefficients with good predictive performance.
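For illustration, a minimal Python sketch of such a coefficient-based kernel expansion is given below; the Gaussian kernel and the names gaussian_kernel and kernel_expansion are illustrative choices rather than the paper's own construction.

import numpy as np

def gaussian_kernel(x, t, sigma=1.0):
    # A standard Mercer kernel on a compact input space: K(x, t) = exp(-|x - t|^2 / (2 sigma^2)).
    return np.exp(-np.sum((np.asarray(x) - np.asarray(t)) ** 2) / (2.0 * sigma ** 2))

def kernel_expansion(alpha, X_train, x, kernel=gaussian_kernel):
    # Coefficient-based form f_z(x) = sum_i alpha_i * K(x, x_i).
    return sum(a * kernel(x, xi) for a, xi in zip(alpha, X_train))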

Inspired by the greedy approximation methods in [9–12], we propose a sparse greedy algorithm for regression. Greedy approximation has two advantages over regularization methods: first, the sparsity is controlled directly by the greedy approximation algorithm rather than by a regularization parameter; second, greedy approximation does not change the objective function, whereas regularized methods usually modify the objective function by including a sparse regularization term [13].

Before introducing the greedy algorithm, we recall some preliminary background on regression. Let the input space $X$ be a compact subset of $\mathbb{R}^{n}$ and let $Y \subseteq [-M, M]$ for some constant $M > 0$. In the regression model, the learner gets a sample set $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m} \subset X \times Y$, where the pairs $(x_i, y_i)$ are drawn randomly and independently from an unknown distribution $\rho$ on $X \times Y$. The goal of learning is to pick a function $f : X \to \mathbb{R}$ with the expected error
\[
\mathcal{E}(f) = \int_{X \times Y} \big( f(x) - y \big)^{2} \, d\rho
\]
as small as possible. Note that the regression function
\[
f_\rho(x) = \int_{Y} y \, d\rho(y \mid x), \quad x \in X,
\]
is the minimizer of $\mathcal{E}(f)$, where $\rho(\cdot \mid x)$ is the conditional probability measure at $x$ induced by $\rho$.

The empirical error of $f$ on the sample $\mathbf{z}$ is defined as
\[
\mathcal{E}_{\mathbf{z}}(f) = \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^{2}.
\]
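As a quick numerical companion to this definition, the empirical error is just the mean squared residual over the sample; a minimal sketch (the helper name empirical_error is hypothetical):

import numpy as np

def empirical_error(f, X, y):
    # E_z(f) = (1/m) * sum_i (f(x_i) - y_i)^2, the mean squared residual on the sample.
    predictions = np.array([f(xi) for xi in X])
    return float(np.mean((predictions - np.asarray(y)) ** 2))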

We call a symmetric, positive semidefinite, and continuous function $K : X \times X \to \mathbb{R}$ a Mercer kernel. The reproducing kernel Hilbert space (RKHS) $\mathcal{H}_K$ is defined to be the closure of the linear span of the set of functions $\{ K_x := K(x, \cdot) : x \in X \}$ with the inner product defined by $\langle K_x, K_t \rangle_K = K(x, t)$. For all $f \in \mathcal{H}_K$ and $x \in X$, the reproducing property is given by $f(x) = \langle f, K_x \rangle_K$. We can see that $\kappa := \sup_{x \in X} \sqrt{K(x, x)} < \infty$ because of the continuity of $K$ and the compactness of $X$.

Different from the coefficient-based regularized methods [3–6], we use the idea of sequential greedy approximation to realize sparse learning in this paper. Denote , where  and . The hypothesis space (depending on the sample $\mathbf{z}$) is defined as . For any hypothesis function space , we denote .

The definition of $f_\rho$ tells us that $|f_\rho(x)| \le M$ for every $x \in X$, so it is natural to restrict the approximating functions to the interval $[-M, M]$. The projection operator has been used in the error analysis of learning algorithms (see, e.g., [2, 14]).

Definition 1.1. The projection operator $\pi$ is defined on the space of measurable functions $f : X \to \mathbb{R}$ as
\[
\pi(f)(x) =
\begin{cases}
M, & \text{if } f(x) > M, \\
f(x), & \text{if } -M \le f(x) \le M, \\
-M, & \text{if } f(x) < -M.
\end{cases}
\]
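In effect, the projection operator truncates function values into the interval [-M, M]; a minimal sketch (the helper name project is hypothetical):

import numpy as np

def project(f, M):
    # pi(f)(x): clip the value f(x) into [-M, M], as in Definition 1.1.
    return lambda x: float(np.clip(f(x), -M, M))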

The kernel-based greedy algorithm can be summarized as below. Let  be a stopping time and let  be a positive constant. Set , and then, for , define . Different from the regularized algorithms in [6, 12, 14–18], the above learning algorithm tries to realize efficient learning by greedy approximation, and the study of its generalization performance can enrich the learning theory of kernel-based regression. In the remainder of this paper, we focus on establishing the convergence rate of the algorithm's output to the regression function under a suitable choice of the parameters. The theoretical result relies on weaker conditions than the previous error analysis for the kernel-based regularization framework in [4, 5].
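Since the displayed update rule is not reproduced above, the following Python sketch shows one standard instantiation of sequential greedy approximation for least squares over the convex hull of a data-dependent dictionary {±R K(x_i, ·)}; the dictionary, the Gaussian kernel, and the step size 2/(k+1) are assumptions made for illustration and need not coincide with the paper's exact scheme.

import numpy as np

def gaussian_kernel(x, t, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(t)) ** 2) / (2.0 * sigma ** 2))

def sequential_greedy_regression(X, y, R=1.0, n_steps=50, sigma=1.0):
    # Sketch: reduce the empirical error over conv{ +/- R * K(x_i, .) } by
    # convex-combination (Frank-Wolfe style) greedy updates.
    y = np.asarray(y, dtype=float)
    m = len(y)
    # Gram matrix: G[i, j] = K(x_i, x_j); column j holds the values of K(x_j, .) on the sample.
    G = np.array([[gaussian_kernel(xi, xj, sigma) for xj in X] for xi in X])
    coef = np.zeros(m)                        # f_k(x) = sum_j coef[j] * K(x_j, x)
    for k in range(1, n_steps + 1):
        residual = G @ coef - y               # proportional to the gradient of the empirical error at f_{k-1}
        scores = G @ residual                 # inner product of each K(x_j, .) with the residual on the sample
        j = int(np.argmax(np.abs(scores)))    # dictionary element that most decreases the linearized error
        sign = -np.sign(scores[j])            # choose the +R or -R copy of K(x_j, .)
        step = 2.0 / (k + 1.0)                # classical step size for greedy convex approximation
        coef *= (1.0 - step)                  # f_k = (1 - step) * f_{k-1} + step * sign * R * K(x_j, .)
        coef[j] += step * sign * R
    return coef, G

Predictions would then be formed as f(x) = sum_j coef[j] * K(x_j, x) and truncated by the projection operator before evaluating the generalization error.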

2. Main Result

Define a data-free basis function set

To investigate how well the algorithm's output approximates the regression function $f_\rho$, we introduce a data-independent function

Observe that the excess generalization error can be decomposed into three terms, which are called the sample error, the hypothesis error, and the approximation error, respectively.
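Although the displayed identity is not reproduced above, decompositions of this kind usually take the following form, written here only for orientation, with $f_{\mathbf{z}}$ denoting the output of the greedy algorithm and $f^{*}$ the data-independent function introduced above (both pieces of notation are assumed here):
\[
\mathcal{E}\big(\pi(f_{\mathbf{z}})\big) - \mathcal{E}(f_\rho)
= \Big[ \mathcal{E}\big(\pi(f_{\mathbf{z}})\big) - \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z}})\big) + \mathcal{E}_{\mathbf{z}}(f^{*}) - \mathcal{E}(f^{*}) \Big]
+ \Big[ \mathcal{E}_{\mathbf{z}}\big(\pi(f_{\mathbf{z}})\big) - \mathcal{E}_{\mathbf{z}}(f^{*}) \Big]
+ \Big[ \mathcal{E}(f^{*}) - \mathcal{E}(f_\rho) \Big],
\]
with the three bracketed terms corresponding to the sample error, the hypothesis error, and the approximation error, respectively.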

To estimate the sample error, we usually need a complexity measure of the hypothesis function space. For this reason, we introduce some definitions of covering numbers to measure the complexity.

Definition 2.1. Let $(\mathcal{S}, d)$ be a pseudometric space and denote a subset $S \subset \mathcal{S}$. For every $\epsilon > 0$, the covering number $\mathcal{N}(S, \epsilon, d)$ of $S$ with respect to $\epsilon$ is defined as the minimal number of balls of radius $\epsilon$ whose union covers $S$, that is,
\[
\mathcal{N}(S, \epsilon, d) = \min \Big\{ l \in \mathbb{N} : S \subset \bigcup_{j=1}^{l} B(s_j, \epsilon) \ \text{for some} \ \{s_j\}_{j=1}^{l} \subset \mathcal{S} \Big\},
\]
where $B(s_j, \epsilon) = \{ s \in \mathcal{S} : d(s, s_j) \le \epsilon \}$ is a ball in $\mathcal{S}$.

The empirical covering number with respect to the $\ell^{2}$-empirical metric is defined as below.

Definition 2.2. Let $\mathcal{F}$ be a set of functions on $X$, $\epsilon > 0$, and $k \in \mathbb{N}$. For $\mathbf{x} = (x_1, \ldots, x_k) \in X^{k}$, set $\mathcal{F}|_{\mathbf{x}} = \{ (f(x_1), \ldots, f(x_k)) : f \in \mathcal{F} \}$. The $\ell^{2}$-empirical covering number of $\mathcal{F}$ is defined by
\[
\mathcal{N}_{2}(\mathcal{F}, \epsilon) = \sup_{k \in \mathbb{N}} \sup_{\mathbf{x} \in X^{k}} \mathcal{N}\big( \mathcal{F}|_{\mathbf{x}}, \epsilon, d_{2, \mathbf{x}} \big),
\]
where the metric $d_{2, \mathbf{x}}$ is given by
\[
d_{2, \mathbf{x}}(f, g) = \Big( \frac{1}{k} \sum_{i=1}^{k} \big( f(x_i) - g(x_i) \big)^{2} \Big)^{1/2}, \quad f, g \in \mathcal{F}.
\]
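For intuition, an upper bound on the empirical covering number can be computed numerically by a greedy $\epsilon$-net construction; the sketch below (function names assumed) operates on a finite family of functions evaluated at the points $x_1, \ldots, x_k$.

import numpy as np

def empirical_l2_metric(fvals, gvals):
    # d_{2,x}(f, g) = sqrt( (1/k) * sum_i (f(x_i) - g(x_i))^2 )
    return float(np.sqrt(np.mean((fvals - gvals) ** 2)))

def greedy_cover_size(F_on_x, eps):
    # F_on_x has shape (n_funcs, k); row r stores (f_r(x_1), ..., f_r(x_k)).
    # Greedily pick centers until every row lies within eps of some chosen center;
    # the number of centers upper bounds the empirical covering number N_{2,x}(F, eps).
    remaining = list(range(F_on_x.shape[0]))
    centers = 0
    while remaining:
        c = remaining[0]
        centers += 1
        remaining = [r for r in remaining
                     if empirical_l2_metric(F_on_x[c], F_on_x[r]) > eps]
    return centers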

Denote by $B_R = \{ f \in \mathcal{H}_K : \|f\|_K \le R \}$ the ball of radius $R$ in $\mathcal{H}_K$, where $R > 0$. We need the following capacity assumption on $B_1$, which has been used in [5, 6, 18].

Assumption 2.3. There exist an exponent $s$, with $0 < s < 2$, and a constant $c_{s} > 0$ such that
\[
\log \mathcal{N}_{2}(B_{1}, \epsilon) \le c_{s}\, \epsilon^{-s}, \quad \forall \epsilon > 0. \tag{2.7}
\]

We now formulate the generalization error bound for the output of the kernel-based greedy algorithm. The result follows from Propositions 3.2–3.5 in the next section.

Theorem 2.4. Under Assumption 2.3, for any $0 < \delta < 1$, the following inequality holds with confidence $1 - \delta$:

From the result, we know there exists a constant independent of such that with confidence In particular, if for some fixed constant and , we have with decay rate . The learning rate is satisfactory as .

Here, the estimate of the hypothesis error is simple and does not need the strict conditions required in [3–5] for learning with data-dependent hypothesis spaces.

If additional conditions are imposed on the behavior of the approximation error, we can obtain explicit learning rates with a suitable parameter selection.

Corollary 2.5. Assume that the RKHS satisfies (2.7) and for some . Choose . For any and , one has with confidence . Here is a constant independent of .

Observe that the learning rate depends closely on how well the regression function can be approximated by the hypothesis space. This means that only when the target function can be well described by functions from the hypothesis space can the learning algorithm achieve good generalization performance. In fact, similar approximation assumptions are extensively studied in the error analysis of learning theory; see, for example, [1, 2, 4, 17].

From Corollary 2.5, even when the kernel is smooth enough that the capacity exponent in Assumption 2.3 can be arbitrarily small, one can see that the learning rate is still quite low. A future research direction is to improve this estimate by introducing new analysis techniques.

3. Proof of Theorem 2.4

In this section, we provide the proof of Theorem 2.4 based on upper bound estimates of the sample error and the hypothesis error. Denote . We can observe that the sample error can be split into two parts, which are estimated separately below.

Here, one part of the sample error can be bounded by applying the following one-sided Bernstein-type probability inequality; see, for example, [1, 2, 14].

Lemma 3.1. Let $\xi$ be a random variable on a probability space $Z$ with mean $E(\xi)$ and variance $\sigma^{2}(\xi) = \sigma^{2}$. If $|\xi(z) - E(\xi)| \le M_{\xi}$ for almost all $z \in Z$, then for all $\epsilon > 0$,
\[
\mathrm{Prob}_{\mathbf{z} \in Z^{m}} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \xi(z_{i}) - E(\xi) \ge \epsilon \Big\} \le \exp\Big\{ - \frac{m \epsilon^{2}}{2 \big( \sigma^{2} + \tfrac{1}{3} M_{\xi} \epsilon \big)} \Big\}.
\]
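A quick numerical sanity check of this inequality, with a bounded random variable (uniform on [0, 1]) chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
m, eps, trials = 200, 0.1, 20000
# xi ~ Uniform[0, 1]: mean 0.5, variance 1/12, and |xi - E(xi)| <= M_xi = 0.5.
mean, sigma2, M_xi = 0.5, 1.0 / 12.0, 0.5
samples = rng.uniform(0.0, 1.0, size=(trials, m))
deviations = samples.mean(axis=1) - mean
empirical_tail = float(np.mean(deviations >= eps))
bernstein_bound = float(np.exp(-m * eps ** 2 / (2.0 * (sigma2 + M_xi * eps / 3.0))))
print(empirical_tail, "<=", bernstein_bound)  # observed tail probability vs. the Bernstein bound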

Proposition 3.2. For any , with confidence , one has

Proof. Following the definition of , we have , where random variable .
From the definition of , we know and . Then and . Moreover,
Applying Lemma 3.1 with and , we get with confidence at least . By setting , we derive the solution Thus, with confidence , we have This completes the proof.

To establish a uniform upper bound for the remaining part of the sample error, we introduce a concentration inequality established in [18].

Lemma 3.3. Assume that there are constants and such that and for every . If for some and , then there exists a constant depending only on such that for any , with probability at least , there holds where

Proposition 3.4. Under Assumption 2.3, for any , one has with confidence at least

Proof. From the definition of , we have . Denote We can see that and . Since and , we have For , we have Then, from Assumption 2.3,
Applying Lemma 3.3 with and , for any and for all , holds with confidence . This completes the proof.

Different from the previous studies related to the regularized framework [3–5], we introduce an estimate of the hypothesis error based on Theorem 4.2 in [11] for sequential greedy approximation.

Proposition 3.5. For a fixed sample , one has

The desired result in Theorem 2.4 can be derived directly by combining Propositions 3.2–3.5.

Acknowledgments

This work was supported partially by the National Natural Science Foundation of China under Grant no. 11001092, the Humanities and Social Science Projects of the Ministry of Education of China (Program no. 11y3jc630197), and the Fundamental Research Funds for the Central Universities (Program nos. 2011PY130 and 2011QC022).