Abstract

This paper considers the ERM scheme for quantile regression. We conduct an error analysis for this learning algorithm by means of a variance-expectation bound when a noise condition is satisfied by the underlying probability measure. The learning rates are derived by applying concentration techniques involving the $\ell^2$-empirical covering numbers.

1. Introduction

In this paper, we study the empirical risk minimization (ERM) scheme for quantile regression. Let $X$ be a compact metric space (input space) and $Y \subseteq \mathbb{R}$ (output space). Let $\rho$ be a fixed but unknown probability distribution on $Z := X \times Y$ which describes the noise of sampling. Conditional quantile regression aims at producing functions to estimate quantile regression functions. With a prespecified quantile parameter $\tau \in (0, 1)$, a quantile regression function $f_{\tau,\rho}$ is defined by its value $f_{\tau,\rho}(x)$ to be a $\tau$-quantile of $\rho(\cdot \mid x)$, that is, a value $u \in Y$ satisfying
$$\rho\big(\{y \in Y : y \le u\} \mid x\big) \ge \tau \quad \text{and} \quad \rho\big(\{y \in Y : y \ge u\} \mid x\big) \ge 1 - \tau,$$
where $\rho(\cdot \mid x)$ is the conditional distribution of $\rho$ at $x \in X$.

We consider a learning algorithm generated by the ERM scheme associated with the pinball loss and a hypothesis space $\mathcal{H}$. The pinball loss $\psi_\tau : \mathbb{R} \to [0, \infty)$ is defined by
$$\psi_\tau(u) = \begin{cases} \tau u, & \text{if } u \ge 0, \\ (\tau - 1) u, & \text{if } u < 0. \end{cases}$$
The hypothesis space $\mathcal{H}$ is a compact subset of $C(X)$, so there exists some $M > 0$ such that $\|f\|_\infty \le M$ for any $f \in \mathcal{H}$. We assume without loss of generality that $\|f\|_\infty \le 1$ for any $f \in \mathcal{H}$.

The ERM scheme for quantile regression is defined with a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$ drawn independently from $\rho$ as follows:
$$f_\mathbf{z} = \arg\min_{f \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^{m} \psi_\tau\big(y_i - f(x_i)\big). \qquad (3)$$
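To make the scheme concrete, the following Python sketch implements (3) for a toy hypothesis class; the clipped linear parametrization of $\mathcal{H}$, the choice of optimizer, and all variable names are illustrative assumptions rather than part of the paper.

import numpy as np
from scipy.optimize import minimize

def pinball_loss(u, tau):
    # psi_tau(u) = tau*u for u >= 0 and (tau-1)*u for u < 0
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def erm_quantile(x, y, tau, dim):
    # ERM over a (hypothetical) class of linear functions clipped to [-1, 1],
    # mimicking the assumption ||f||_inf <= 1 for every f in H
    def empirical_risk(w):
        f_x = np.clip(x @ w, -1.0, 1.0)                 # f(x_i)
        return np.mean(pinball_loss(y - f_x, tau))      # (1/m) sum psi_tau(y_i - f(x_i))
    res = minimize(empirical_risk, np.zeros(dim), method="Nelder-Mead")
    return lambda x_new: np.clip(x_new @ res.x, -1.0, 1.0)

# Toy usage: estimate the conditional 0.75-quantile of y given x.
rng = np.random.default_rng(0)
m, tau = 200, 0.75
x = rng.uniform(-1, 1, size=(m, 1))
y = np.clip(0.5 * x[:, 0] + 0.2 * rng.standard_normal(m), -1, 1)
f_z = erm_quantile(x, y, tau, dim=1)
print(f_z(np.array([[0.5]])))

Any compact hypothesis class and any empirical risk minimizer could be substituted here; only the pinball-loss objective matters for the analysis that follows.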

A family of kernel-based learning algorithms for quantile regression has been widely studied in a large literature ([14] and references therein). These algorithms take the form of a regularized scheme in a reproducing kernel Hilbert space (RKHS, see [5] for details) $\mathcal{H}_K$ associated with a Mercer kernel $K$. Given a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$, the kernel-based regularized scheme for quantile regression is defined by
$$f_{\mathbf{z},\lambda} = \arg\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{m} \sum_{i=1}^{m} \psi_\tau\big(y_i - f(x_i)\big) + \lambda \|f\|_K^2 \Big\}, \quad \lambda > 0. \qquad (4)$$
In [1, 3, 4], error analysis for general $\mathcal{H}_K$ has been carried out. Learning with varying Gaussian kernels was studied in [2].

The ERM scheme (3) is very different from the kernel-based regularized scheme (4). The output function $f_\mathbf{z}$ produced by the ERM scheme has a uniform bound: under our assumption, $\|f_\mathbf{z}\|_\infty \le 1$. However, we cannot expect this for $f_{\mathbf{z},\lambda}$. It is easy to see, by choosing $f = 0$ in (4), that $\lambda \|f_{\mathbf{z},\lambda}\|_K^2 \le \frac{1}{m}\sum_{i=1}^{m} \psi_\tau(y_i)$, which only yields a bound of order $\|f_{\mathbf{z},\lambda}\|_\infty = O(1/\sqrt{\lambda})$. It happens often that $\|f_{\mathbf{z},\lambda}\|_\infty \to \infty$ as $\lambda \to 0$. The lack of a uniform bound for $f_{\mathbf{z},\lambda}$ has a serious negative impact on the learning rates. So in the literature on kernel-based regularized schemes for quantile regression, values of the output function are always projected onto the interval $[-1, 1]$, and the error analysis is conducted for the projected function, not $f_{\mathbf{z},\lambda}$ itself.

In this paper, we aim at establishing convergence and learning rates for the error $\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}}$ in the space $L^r_{\rho_X}$. Here $r$ depends on the pair $(p, q)$ which will be decided in Section 2, and $\rho_X$ is the marginal distribution of $\rho$ on $X$. In the rest of this paper, we assume $|y| \le 1$ almost surely, which in turn implies that the values of the target function $f_{\tau,\rho}$ lie in the same interval $[-1, 1]$.

2. Noise Condition and Main Results

There has been a large literature in learning theory (see [6] and references therein) devoted to least squares regression. It aims at learning the regression function $f_\rho(x) = \int_Y y \, d\rho(y \mid x)$. The identity $\mathcal{E}^{ls}(f) - \mathcal{E}^{ls}(f_\rho) = \|f - f_\rho\|_{L^2_{\rho_X}}^2$ for the generalization error $\mathcal{E}^{ls}(f) = \int_Z (f(x) - y)^2 \, d\rho$ leads to a variance-expectation bound of the form $\mathbb{E}[g^2] \le C\, \mathbb{E}[g]$, where $g = (f(x) - y)^2 - (f_\rho(x) - y)^2$ on $Z$. It plays an essential role in the error analysis of kernel-based regularized schemes.
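To recall why the identity yields such a bound, here is the standard two-line argument for least squares (a sketch under the boundedness assumptions $|f(x)| \le 1$, $|f_\rho(x)| \le 1$, and $|y| \le 1$; it is not part of the original text):
$$g = (f(x) - y)^2 - (f_\rho(x) - y)^2 = \big(f(x) - f_\rho(x)\big)\big(f(x) + f_\rho(x) - 2y\big),$$
$$\mathbb{E}[g^2] \le 16\, \mathbb{E}\big[(f(x) - f_\rho(x))^2\big] = 16\, \|f - f_\rho\|_{L^2_{\rho_X}}^2 = 16\, \big(\mathcal{E}^{ls}(f) - \mathcal{E}^{ls}(f_\rho)\big) = 16\, \mathbb{E}[g].$$
No such identity is available for the pinball loss, which is exactly why the noise condition of Definition 1 below is needed.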

However, this identity and the variance-expectation bound fail in the setting of quantile regression. The reason is that the pinball loss lacks strong convexity. If we impose a noise condition on the distribution $\rho$, namely that it has a $\tau$-quantile of $p$-average type $q$ (see Definition 1), we can still obtain a comparison relation which in turn enables us to derive a variance-expectation bound, stated below in Proposition 8, which was proved by Steinwart and Christmann [1].

Definition 1. Let $p \in (0, \infty]$ and $q \in [1, \infty)$. A distribution $\rho$ on $X \times Y$ is said to have a $\tau$-quantile of $p$-average type $q$ if for $\rho_X$-almost every $x \in X$, there exist a $\tau$-quantile $t^* = f_{\tau,\rho}(x)$ and constants $a_x \in (0, 2]$, $b_x > 0$ such that for each $s \in [0, a_x]$,
$$\rho\big(\{y : t^* - s < y < t^*\} \mid x\big) \ge b_x s^{q-1} \quad \text{and} \quad \rho\big(\{y : t^* < y < t^* + s\} \mid x\big) \ge b_x s^{q-1}, \qquad (5)$$
and that the function $\gamma$ on $X$ defined by $\gamma(x) = b_x a_x^{q-1}$ satisfies $\gamma^{-1} \in L^p_{\rho_X}$.
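As a simple illustration (an example added here, not from the original text): suppose that for $\rho_X$-almost every $x$ the conditional distribution $\rho(\cdot \mid x)$ has a density $h_x$ on $[-1, 1]$ satisfying $h_x \ge c > 0$ on an interval $(t^*_x - a, t^*_x + a) \subseteq [-1, 1]$ around the $\tau$-quantile $t^*_x$. Then for every $s \in [0, a]$,
$$\rho\big(\{y : t^*_x - s < y < t^*_x\} \mid x\big) \ge c\, s \quad \text{and} \quad \rho\big(\{y : t^*_x < y < t^*_x + s\} \mid x\big) \ge c\, s,$$
so $\rho$ has a $\tau$-quantile of $p$-average type $q = 2$ for every $p$ (including $p = \infty$), with $b_x = c$, $a_x = a$, and $\gamma(x) = c\, a$ bounded away from zero. This is the uniform-type situation referred to in Remark 5 below.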

We also need to measure the capacity of the hypothesis space for our learning rates. In this paper, we measure the capacity by empirical covering numbers.

Definition 2. Let $(\mathcal{M}, d)$ be a pseudometric space and $S$ be a subset of $\mathcal{M}$. For every $\epsilon > 0$, the covering number $\mathcal{N}(S, \epsilon, d)$ of $S$ with respect to $\epsilon$ and $d$ is defined as the minimal number of balls of radius $\epsilon$ whose union covers $S$, that is,
$$\mathcal{N}(S, \epsilon, d) = \min\Big\{ l \in \mathbb{N} : S \subseteq \bigcup_{j=1}^{l} B(t_j, \epsilon) \ \text{for some} \ \{t_j\}_{j=1}^{l} \subset \mathcal{M} \Big\},$$
where $B(t_j, \epsilon) = \{ t \in \mathcal{M} : d(t, t_j) \le \epsilon \}$ is a ball in $\mathcal{M}$.

Definition 3. Let $\mathcal{F}$ be a set of functions on $X$, $k \in \mathbb{N}$, and $\mathbf{x} = (x_i)_{i=1}^{k} \in X^{k}$. Set $\mathcal{F}|_{\mathbf{x}} = \{ (f(x_i))_{i=1}^{k} : f \in \mathcal{F} \} \subset \mathbb{R}^{k}$. The $\ell^2$-empirical covering number of $\mathcal{F}$ is defined by
$$\mathcal{N}_2(\mathcal{F}, \epsilon) = \sup_{k \in \mathbb{N}} \sup_{\mathbf{x} \in X^{k}} \mathcal{N}\big(\mathcal{F}|_{\mathbf{x}}, \epsilon, d_2\big), \quad \epsilon > 0.$$
Here $d_2$ is the normalized $\ell^2$-metric on the Euclidean space $\mathbb{R}^{k}$ given by
$$d_2(\mathbf{a}, \mathbf{b}) = \Big( \frac{1}{k} \sum_{i=1}^{k} |a_i - b_i|^2 \Big)^{1/2}, \quad \mathbf{a} = (a_i)_{i=1}^{k},\ \mathbf{b} = (b_i)_{i=1}^{k} \in \mathbb{R}^{k}.$$
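To illustrate Definition 3 computationally, the sketch below evaluates the normalized $\ell^2$-metric $d_2$ and bounds the covering number of $\mathcal{F}|_{\mathbf{x}}$ from above by a greedy cover; the toy function class and the sample are assumptions made purely for illustration, and a greedy cover only upper-bounds the minimal one.

import numpy as np

def d2(a, b):
    # normalized l2-metric on R^k: sqrt((1/k) * sum_i |a_i - b_i|^2)
    return np.sqrt(np.mean((a - b) ** 2))

def greedy_cover_size(points, eps):
    # size of a greedily built eps-cover of a finite set of vectors (rows of `points`);
    # this is an upper bound on the covering number N(points, eps, d2)
    centers = []
    for p in points:
        if not any(d2(p, c) <= eps for c in centers):
            centers.append(p)          # p is not covered yet; open a new ball around it
    return len(centers)

# Toy class F = {x -> sin(w x) : w in a grid}, restricted to k = 50 sample points.
x = np.linspace(0, 1, 50)
F_x = np.array([np.sin(w * x) for w in np.linspace(0.5, 5.0, 200)])
print(greedy_cover_size(F_x, eps=0.1))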

Assumption. Assume that the $\ell^2$-empirical covering number of the hypothesis space $\mathcal{H}$ is bounded, for some $s \in (0, 2)$ and $c_s > 0$, as
$$\log \mathcal{N}_2(\mathcal{H}, \epsilon) \le c_s \epsilon^{-s}, \quad \forall \epsilon > 0. \qquad (9)$$

Theorem 4. Assume that $\rho$ satisfies (5) with some $p \in (0, \infty]$ and $q \in [1, \infty)$. Denote $\theta = \min\{2/q,\, p/(p+1)\}$. Further assume that $f_{\tau,\rho}$ is uniquely defined. If $f_{\tau,\rho} \in \mathcal{H}$ and $\mathcal{H}$ satisfies (9) with $s \in (0, 2)$, then for any $0 < \delta < 1$, with confidence $1 - \delta$, one has
$$\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}} \le \widetilde{C} \Big( \log\frac{2}{\delta} \Big)^{1/q} m^{-\frac{2}{q(4 - 2\theta + s\theta)}},$$
where $r = \frac{pq}{p+1}$ and $\widetilde{C}$ is a constant independent of $m$ and $\delta$.

Remark 5. In the ERM scheme, we can choose $\mathcal{H}$ such that $f_{\tau,\rho} \in \mathcal{H}$, which in turn makes the approximation error $\mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho})$ appearing in (23) equal to zero. However, this is impossible for the kernel-based regularized scheme (4) because of the appearance of the penalty term $\lambda \|f\|_K^2$.
If $q = 2$, all conditional distributions behave around the quantile similarly to the uniform distribution. In this case $\theta = \min\{1, \frac{p}{p+1}\} = \frac{p}{p+1}$ and $r = \frac{2p}{p+1}$ for all $p \in (0, \infty]$. Hence the power index in Theorem 4 becomes $\frac{1}{4 - 2\theta + s\theta}$. Furthermore, when $p$ is large enough, the parameter $\theta$ tends to $1$ and the power index for the above learning rate arbitrarily approaches $\frac{1}{2+s}$, which shows that the learning rate power index for $\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}}$ is arbitrarily close to $\frac{1}{2+s}$, independent of $p$. In particular, $s$ can be arbitrarily small when $\mathcal{H}$ is smooth enough. In this case, the power index of the learning rates can be arbitrarily close to $\frac{1}{2}$, which is the optimal learning rate for least squares regression.
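As a quick check of this arithmetic (using the power index $\frac{2}{q(4 - 2\theta + s\theta)}$ from Theorem 4 as reconstructed above):
$$q = 2: \quad \theta = \min\Big\{ 1, \frac{p}{p+1} \Big\} = \frac{p}{p+1}, \qquad r = \frac{2p}{p+1},$$
$$\frac{2}{q(4 - 2\theta + s\theta)} = \frac{1}{4 - 2\theta + s\theta} \ \longrightarrow\ \frac{1}{2+s} \quad (p \to \infty,\ \theta \to 1), \qquad \frac{1}{2+s} \ \longrightarrow\ \frac{1}{2} \quad (s \to 0).$$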

Let us take some examples to demonstrate the above main result.

Example 6. Let $\mathcal{H}$ be the unit ball of the Sobolev space $H^k(X)$ with $k > d/2$, where $d$ is the dimension of the input space $X$. Observe that the $\ell^2$-empirical covering number is bounded above by the uniform covering number (with respect to the metric of $C(X)$) defined in Definition 2. Hence we have (see [6, 7])
$$\log \mathcal{N}_2(\mathcal{H}, \eta) \le \log \mathcal{N}\big(\mathcal{H}, \eta, \|\cdot\|_{C(X)}\big) \le c_{k,d}\, \eta^{-d/k},$$
where $c_{k,d}$ is a constant independent of $\eta$.
Under the same assumptions as in Theorem 4, by replacing $s$ with $d/k$ we get that for any $0 < \delta < 1$, with confidence $1 - \delta$,
$$\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}} \le \widetilde{C} \Big( \log\frac{2}{\delta} \Big)^{1/q} m^{-\frac{2}{q(4 - 2\theta + d\theta/k)}},$$
where $r = \frac{pq}{p+1}$ and $\widetilde{C}$ is a constant independent of $m$ and $\delta$.
We carry out the same discussion as in Remark 5 for the case $q = 2$ and $p$ large enough. Then the power index of the learning rates for $\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}}$ is arbitrarily close to $\frac{1}{2 + d/k} = \frac{k}{2k + d}$, independent of $p$. Furthermore, $k$ can be arbitrarily large if the Sobolev space is smooth enough. In this special case, the learning rate power index arbitrarily approaches $\frac{1}{2}$.

Example 7. Let $\mathcal{H}$ be the unit ball of the reproducing kernel Hilbert space generated by a Gaussian kernel (see [5]). Reference [7] provides an upper bound for the covering numbers of this ball whose constant depends only on the kernel width and the dimension $d$ of the input space; its right-hand side can in turn be bounded by $c_s \eta^{-s}$ for a suitable exponent $s$.
So from Theorem 4 we can get learning rates with the corresponding power index. If $q = 2$ and $p$ is large enough, the power index of the learning rates for $\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}}$ becomes very small when $d$ is large, so the resulting rate is slow in high dimensions. However, in most data sets the data are concentrated on a much lower dimensional manifold embedded in the high dimensional space. In this setting, an analysis that replaces $d$ by the intrinsic dimension of the manifold would be of great interest (see [8] and references therein).

3. Error Analysis

Define the noise-free error, called the generalization error, associated with the pinball loss as
$$\mathcal{E}(f) = \int_Z \psi_\tau\big(y - f(x)\big)\, d\rho.$$
Then the measurable function $f_{\tau,\rho}$ is a minimizer of $\mathcal{E}$. Obviously, $\mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) \ge 0$ for every measurable function $f$.
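For completeness, here is the standard verification (a sketch added here, not part of the original text) that any $\tau$-quantile minimizes the conditional risk, which explains why $f_{\tau,\rho}$ minimizes $\mathcal{E}$. For fixed $x$, set $Q_x(t) = \int_Y \psi_\tau(y - t)\, d\rho(y \mid x)$; this is a convex function of $t$ with one-sided derivatives
$$Q_x'^{-}(t) = -\tau\, \rho(\{y \ge t\} \mid x) + (1 - \tau)\, \rho(\{y < t\} \mid x), \qquad Q_x'^{+}(t) = -\tau\, \rho(\{y > t\} \mid x) + (1 - \tau)\, \rho(\{y \le t\} \mid x).$$
If $u$ is a $\tau$-quantile, then $\rho(\{y \le u\} \mid x) \ge \tau$ and $\rho(\{y \ge u\} \mid x) \ge 1 - \tau$ give $Q_x'^{-}(u) \le 0 \le Q_x'^{+}(u)$, so $u$ minimizes $Q_x$; integrating over $x$ with respect to $\rho_X$ shows that $f_{\tau,\rho}$ minimizes $\mathcal{E}$.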

We need the following results from [1] for our error analysis.

Proposition 8. Let $\psi_\tau$ be the pinball loss. Assume that $\rho$ satisfies (5) with some $p \in (0, \infty]$ and $q \in [1, \infty)$. Then for all measurable $f : X \to [-1, 1]$ one has
$$\|f - f_{\tau,\rho}\|_{L^r_{\rho_X}} \le c_{p,q} \big( \mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) \big)^{1/q}, \quad r = \frac{pq}{p+1}, \qquad (18)$$
where $c_{p,q}$ is a constant independent of $f$. Furthermore, with
$$\theta = \min\Big\{ \frac{2}{q},\ \frac{p}{p+1} \Big\}, \qquad (19)$$
one has
$$\mathbb{E}\big[ g^2 \big] \le C_\theta \big( \mathbb{E}[g] \big)^{\theta} \quad \text{for} \quad g = \psi_\tau\big(y - f(x)\big) - \psi_\tau\big(y - f_{\tau,\rho}(x)\big),\ f : X \to [-1, 1], \qquad (20)$$
where $C_\theta$ is a constant independent of $f$.
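To see what Proposition 8 gives in the simplest situation, consider $p = \infty$ and $q = 2$ (the uniform-type case of Remark 5, interpreting $p/(p+1)$ as $1$ when $p = \infty$); with $r$ and $\theta$ as reconstructed above,
$$r = \frac{pq}{p+1}\Big|_{p=\infty,\, q=2} = 2, \qquad \theta = \min\Big\{ \frac{2}{q},\ \frac{p}{p+1} \Big\}\Big|_{p=\infty,\, q=2} = 1,$$
so (18) controls the $L^2_{\rho_X}$-distance by the square root of the excess generalization error, and (20) becomes $\mathbb{E}[g^2] \le C_\theta\, \mathbb{E}[g]$, the same form as the variance-expectation bound for least squares regression.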

The above result implies that we can get convergence rates of $f_\mathbf{z}$ to $f_{\tau,\rho}$ in the space $L^r_{\rho_X}$ by bounding the excess generalization error $\mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho})$.

To bound $\mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho})$, we need a standard error decomposition procedure [6] and a concentration inequality.

3.1. Error Decomposition

Define the empirical error associated with the pinball loss as
$$\mathcal{E}_\mathbf{z}(f) = \frac{1}{m} \sum_{i=1}^{m} \psi_\tau\big(y_i - f(x_i)\big).$$
Define
$$f_\mathcal{H} = \arg\min_{f \in \mathcal{H}} \mathcal{E}(f). \qquad (22)$$

Lemma 9. Let $\psi_\tau$ be the pinball loss, $f_\mathbf{z}$ be defined by (3), and $f_\mathcal{H}$ by (22). Then
$$\mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) \le \Big\{ \big[ \mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) \big] - \big[ \mathcal{E}_\mathbf{z}(f_\mathbf{z}) - \mathcal{E}_\mathbf{z}(f_{\tau,\rho}) \big] \Big\} + \Big\{ \big[ \mathcal{E}_\mathbf{z}(f_\mathcal{H}) - \mathcal{E}_\mathbf{z}(f_{\tau,\rho}) \big] - \big[ \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big] \Big\} + \big\{ \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big\}. \qquad (23)$$

Proof. The excess generalization error can be written as
$$\mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) = \big[ \mathcal{E}(f_\mathbf{z}) - \mathcal{E}_\mathbf{z}(f_\mathbf{z}) \big] + \big[ \mathcal{E}_\mathbf{z}(f_\mathbf{z}) - \mathcal{E}_\mathbf{z}(f_\mathcal{H}) \big] + \big[ \mathcal{E}_\mathbf{z}(f_\mathcal{H}) - \mathcal{E}(f_\mathcal{H}) \big] + \big[ \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big].$$
The definition of $f_\mathbf{z}$ implies that $\mathcal{E}_\mathbf{z}(f_\mathbf{z}) - \mathcal{E}_\mathbf{z}(f_\mathcal{H}) \le 0$. Furthermore, by subtracting and adding $\mathcal{E}_\mathbf{z}(f_{\tau,\rho})$ in the first term and $\mathcal{E}(f_{\tau,\rho})$ in the third term, we see that Lemma 9 holds true.

We call the last term $\mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho})$ in (23) the approximation error. It has been studied in [9].

3.2. Concentration Inequality and Sample Error

Let us recall the one-sided Bernstein inequality as follows.

Lemma 10. Let $\xi$ be a random variable on a probability space $Z$ with variance $\sigma^2$ satisfying $|\xi - \mathbb{E}\xi| \le M_\xi$ for some constant $M_\xi$. Then for any $0 < \delta < 1$, with confidence $1 - \delta$, one has
$$\frac{1}{m} \sum_{i=1}^{m} \xi(z_i) - \mathbb{E}\xi \le \frac{2 M_\xi \log(1/\delta)}{3 m} + \sqrt{\frac{2 \sigma^2 \log(1/\delta)}{m}}. \qquad (25)$$

Proposition 11. Let $f_\mathcal{H}$ be defined by (22). Assume that $\rho$ satisfies the variance bound (20) with index $\theta$ indicated in (19). For any $0 < \delta < 1$, with confidence $1 - \delta$, the second term on the right-hand side of (23) can be bounded as
$$\big[ \mathcal{E}_\mathbf{z}(f_\mathcal{H}) - \mathcal{E}_\mathbf{z}(f_{\tau,\rho}) \big] - \big[ \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big] \le \frac{1}{2} \big( \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big) + \Big( \frac{2 C_\theta \log(1/\delta)}{m} \Big)^{\frac{1}{2-\theta}} + \frac{8 \log(1/\delta)}{3 m}. \qquad (28)$$

Proof. Let $\xi(z) = \psi_\tau\big(y - f_\mathcal{H}(x)\big) - \psi_\tau\big(y - f_{\tau,\rho}(x)\big)$ on $Z$, which satisfies $\|\xi\|_\infty \le 2$ and in turn $|\xi - \mathbb{E}\xi| \le 4$. The variance bound (20) implies that $\sigma^2 \le \mathbb{E}[\xi^2] \le C_\theta (\mathbb{E}\xi)^{\theta}$. Using (25) on the random variable $\xi$, we can get the desired bound (28) with the help of Young's inequality.
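The Young's inequality step referred to in the proof can be carried out as follows (one standard way; the exact constants in the original may differ). With conjugate exponents $2/\theta$ and $2/(2-\theta)$ and $t = \log(1/\delta)$,
$$\Big( \frac{2 C_\theta t}{m} \Big)^{1/2} \big( \mathbb{E}\xi \big)^{\theta/2} \le \frac{\theta}{2}\, \mathbb{E}\xi + \frac{2-\theta}{2} \Big( \frac{2 C_\theta t}{m} \Big)^{\frac{1}{2-\theta}} \le \frac{1}{2}\, \mathbb{E}\xi + \Big( \frac{2 C_\theta t}{m} \Big)^{\frac{1}{2-\theta}},$$
which, combined with (25), yields (28) since $\mathbb{E}\xi = \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho})$.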

Let us turn to estimating the first term on the right-hand side of (23), the sample error involving the function $f_\mathbf{z}$, which runs over a set of functions since $f_\mathbf{z}$ depends on the random sample $\mathbf{z}$ itself. To estimate it, we use a concentration inequality below involving empirical covering numbers [10-12].

Lemma 12. Let $\mathcal{G}$ be a class of measurable functions on $Z$. Assume that there are constants $B, c > 0$ and $\alpha \in [0, 1]$ such that $\|g\|_\infty \le B$ and $\mathbb{E}[g^2] \le c \big( \mathbb{E}[g] \big)^{\alpha}$ for every $g \in \mathcal{G}$. If for some $a > 0$ and $s \in (0, 2)$,
$$\log \mathcal{N}_2(\mathcal{G}, \epsilon) \le a \epsilon^{-s}, \quad \forall \epsilon > 0,$$
then there exists a constant $c_s'$ depending only on $s$ such that for any $t > 0$, with probability at least $1 - e^{-t}$, there holds
$$\mathbb{E}[g] - \frac{1}{m} \sum_{i=1}^{m} g(z_i) \le \frac{1}{2} \eta^{1-\alpha} \big( \mathbb{E}[g] \big)^{\alpha} + c_s' \eta + 2 \Big( \frac{c t}{m} \Big)^{\frac{1}{2-\alpha}} + \frac{18 B t}{m}, \quad \forall g \in \mathcal{G},$$
where
$$\eta := \max\Big\{ c^{\frac{2-s}{4 - 2\alpha + s\alpha}} \Big( \frac{a}{m} \Big)^{\frac{2}{4 - 2\alpha + s\alpha}},\ B^{\frac{2-s}{2+s}} \Big( \frac{a}{m} \Big)^{\frac{2}{2+s}} \Big\}.$$
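A small piece of arithmetic on the exponents in Lemma 12 (as stated above) is used repeatedly below: since $(2-s)(1-\alpha) \ge 0$,
$$4 - 2\alpha + s\alpha = (2 + s) + (2 - s)(1 - \alpha) \ge 2 + s \quad \Longrightarrow \quad \frac{2}{4 - 2\alpha + s\alpha} \le \frac{2}{2+s},$$
so both terms in the maximum defining $\eta$ are $O\big( m^{-\frac{2}{4 - 2\alpha + s\alpha}} \big)$ and the first one determines the rate.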

We apply Lemma 12 to the function set
$$\mathcal{G} = \big\{ \psi_\tau\big(y - f(x)\big) - \psi_\tau\big(y - f_{\tau,\rho}(x)\big) : f \in \mathcal{H} \big\}.$$

Proposition 13. Assume that $\rho$ satisfies the variance bound (20) with index $\theta$ indicated in (19). If $\mathcal{H}$ satisfies (9) with $s \in (0, 2)$, then for any $0 < \delta < 1$, with confidence $1 - \delta$, one has
$$\big[ \mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) \big] - \big[ \mathcal{E}_\mathbf{z}(f_\mathbf{z}) - \mathcal{E}_\mathbf{z}(f_{\tau,\rho}) \big] \le \frac{1}{2} \big( \mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) \big) + \widetilde{C}_3 \log\frac{2}{\delta}\, m^{-\frac{2}{4 - 2\theta + s\theta}},$$
where $\widetilde{C}_3$ is a constant independent of $m$ and $\delta$.

Proof. Take $g \in \mathcal{G}$ with the form $g(z) = \psi_\tau\big(y - f(x)\big) - \psi_\tau\big(y - f_{\tau,\rho}(x)\big)$, where $f \in \mathcal{H}$. Hence $\|g\|_\infty \le 2$ and, by (20), $\mathbb{E}[g^2] \le C_\theta \big( \mathbb{E}[g] \big)^{\theta}$.
The Lipschitz property of the pinball loss implies that for $g, \tilde{g} \in \mathcal{G}$ induced by $f, \tilde{f} \in \mathcal{H}$,
$$|g(z) - \tilde{g}(z)| = \big| \psi_\tau\big(y - f(x)\big) - \psi_\tau\big(y - \tilde{f}(x)\big) \big| \le |f(x) - \tilde{f}(x)|, \quad \forall z = (x, y) \in Z.$$
For $\mathbf{z} = (z_i)_{i=1}^{k} \in Z^{k}$ with $z_i = (x_i, y_i)$, we have
$$d_2\big( (g(z_i))_{i=1}^{k}, (\tilde{g}(z_i))_{i=1}^{k} \big) \le d_2\big( (f(x_i))_{i=1}^{k}, (\tilde{f}(x_i))_{i=1}^{k} \big),$$
where $\mathbf{x} = (x_i)_{i=1}^{k}$. It follows that $\mathcal{N}\big( \mathcal{G}|_{\mathbf{z}}, \epsilon, d_2 \big) \le \mathcal{N}\big( \mathcal{H}|_{\mathbf{x}}, \epsilon, d_2 \big)$. Hence
$$\log \mathcal{N}_2(\mathcal{G}, \epsilon) \le \log \mathcal{N}_2(\mathcal{H}, \epsilon) \le c_s \epsilon^{-s}, \quad \forall \epsilon > 0.$$
Applying Lemma 12 with $B = 2$, $c = C_\theta$, $a = c_s$, $\alpha = \theta$, and $t = \log(1/\delta)$, we know that for any $0 < \delta < 1$, with confidence $1 - \delta$, there holds, for all $f \in \mathcal{H}$,
$$\big[ \mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) \big] - \big[ \mathcal{E}_\mathbf{z}(f) - \mathcal{E}_\mathbf{z}(f_{\tau,\rho}) \big] \le \frac{1}{2} \eta^{1-\theta} \big( \mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) \big)^{\theta} + c_s' \eta + 2 \Big( \frac{C_\theta \log(1/\delta)}{m} \Big)^{\frac{1}{2-\theta}} + \frac{36 \log(1/\delta)}{m}.$$
Here
$$\eta = \max\Big\{ C_\theta^{\frac{2-s}{4 - 2\theta + s\theta}} \Big( \frac{c_s}{m} \Big)^{\frac{2}{4 - 2\theta + s\theta}},\ 2^{\frac{2-s}{2+s}} \Big( \frac{c_s}{m} \Big)^{\frac{2}{2+s}} \Big\}.$$
Note that $\frac{1}{2} \eta^{1-\theta} \big( \mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) \big)^{\theta} \le \frac{1}{2} \big( \mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) \big) + \frac{1}{2} \eta$ by Young's inequality, and that $\eta$, $\big( \log(1/\delta)/m \big)^{\frac{1}{2-\theta}}$, and $\log(1/\delta)/m$ are all bounded by constant multiples of $\log\frac{2}{\delta}\, m^{-\frac{2}{4 - 2\theta + s\theta}}$. Taking $f = f_\mathbf{z}$, our desired bound holds true.

Proposition 14. Assume that $\rho$ satisfies the variance bound (20) with index $\theta$ indicated in (19). If $\mathcal{H}$ satisfies (9) with $s \in (0, 2)$, then for any $0 < \delta < 1$, with confidence $1 - \delta$, there holds
$$\mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) \le \widetilde{C}_4 \Big( \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) + \log\frac{2}{\delta}\, m^{-\frac{2}{4 - 2\theta + s\theta}} \Big), \qquad (40)$$
where $\widetilde{C}_4$ is a constant independent of $m$ and $\delta$.

The above bound follows directly from Lemma 9 and Propositions 11 and 13 (each applied with confidence $1 - \delta/2$), together with the fact that $\frac{1}{2-\theta} \ge \frac{2}{4 - 2\theta + s\theta}$, so that all the remainder terms are bounded by constant multiples of $\log\frac{2}{\delta}\, m^{-\frac{2}{4 - 2\theta + s\theta}}$.

3.3. Bounding the Total Error

Now we are in a position to present our general result on error analysis for algorithm (3).

Theorem 15. Assume that $\rho$ satisfies (5) with some $p \in (0, \infty]$ and $q \in [1, \infty)$. Denote $\theta = \min\{2/q,\, p/(p+1)\}$. Further assume that $\mathcal{H}$ satisfies (9) with $s \in (0, 2)$ and that $f_{\tau,\rho}$ is uniquely defined. Then for any $0 < \delta < 1$, with confidence $1 - \delta$, one has
$$\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}} \le \widetilde{C}_1 \Big( \log\frac{2}{\delta} \Big)^{1/q} m^{-\frac{2}{q(4 - 2\theta + s\theta)}} + \widetilde{C}_2 \big( \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big)^{1/q},$$
where $r = \frac{pq}{p+1}$ and $\widetilde{C}_1, \widetilde{C}_2$ are constants independent of $m$ and $\delta$.

Proof. Combining (18), (19), and (40), with confidence $1 - \delta$ we have
$$\|f_\mathbf{z} - f_{\tau,\rho}\|_{L^r_{\rho_X}} \le c_{p,q} \big( \mathcal{E}(f_\mathbf{z}) - \mathcal{E}(f_{\tau,\rho}) \big)^{1/q} \le \widetilde{C}_1 \Big( \log\frac{2}{\delta} \Big)^{1/q} m^{-\frac{2}{q(4 - 2\theta + s\theta)}} + \widetilde{C}_2 \big( \mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) \big)^{1/q},$$
where $\widetilde{C}_1 = \widetilde{C}_2 = c_{p,q}\, \widetilde{C}_4^{1/q}$.

Proof of Theorem 4. The assumption $f_{\tau,\rho} \in \mathcal{H}$ implies that
$$\mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho}) = \min_{f \in \mathcal{H}} \mathcal{E}(f) - \mathcal{E}(f_{\tau,\rho}) = 0.$$
Therefore, our desired result comes directly from Theorem 15.

4. Further Discussions

In this paper, we studied the ERM algorithm (3) for quantile regression and provided convergence and learning rates. We showed some essential differences between the ERM scheme and the kernel-based regularized scheme for quantile regression. We also pointed out the main difficulty in dealing with quantile regression: the lack of strong convexity of the pinball loss. To overcome this difficulty, a noise condition on $\rho$ was imposed, which enabled us to obtain a variance-expectation bound similar to the one for least squares regression.

In our analysis we only consider the case where $f_{\tau,\rho} \in \mathcal{H}$ and the capacity condition (9) holds. The case $f_{\tau,\rho} \notin \mathcal{H}$ would be interesting for future work; the approximation error $\mathcal{E}(f_\mathcal{H}) - \mathcal{E}(f_{\tau,\rho})$ involved there can be estimated by the knowledge of interpolation spaces.

In our setting, the sample is drawn independently from the distribution $\rho$. However, in many practical problems the i.i.d. condition is somewhat demanding, so it would be interesting to investigate the ERM scheme for quantile regression with non-identical distributions [13, 14] or dependent sampling [15].

Acknowledgment

The work described in this paper is supported by the NSF of China under Grants 11001247 and 61170109.