Abstract

The penalized spline estimator is a useful smoothing method. To construct an estimator that balances goodness of fit and smoothness, the smoothing parameter must be selected appropriately. The purpose of this paper is to select the smoothing parameter using the asymptotic properties of penalized splines. The new smoothing parameter selection method is established by minimizing the asymptotic form of the MISE of the penalized spline estimator. The mathematical and numerical properties of the proposed method are studied. We first develop the new method in the univariate regression model and then extend it to additive models. A simulation study confirming the efficiency of the proposed method is also presented.

1. Introduction

Penalized spline methods are a well-known, efficient technique for nonparametric smoothing. Penalized splines were suggested by O’Sullivan [1] and Eilers and Marx [2]. O’Sullivan [1] used a cubic B-spline function with a penalty given by the integrated squared second derivative of the B-spline function. Eilers and Marx [2], on the other hand, use a cubic B-spline function with a difference penalty on the spline coefficients. Eilers and Marx’s estimator is computationally efficient compared to smoothing splines and O’Sullivan’s estimator since it removes the integration from the penalty. Hence this paper focuses on the penalized spline estimator of Eilers and Marx [2]. The penalized spline method is efficient for both univariate regression and multiple regression settings such as the additive model (see Marx and Eilers [3]). General properties, usages, and a description of the flexibility of penalized splines are given in Ruppert et al. [4].

When using penalized splines, the determination of the smoothing parameter is very important since it controls the trade-off between the goodness of fit and the smoothness of the fitted curve. The classical approach is a grid search, in which the smoothing parameter is selected by minimizing a criterion over a set of candidate values. Criteria for grid searches include cross-validation, generalized cross-validation, Mallows' Cp, and so forth. Although a grid search generally finds one optimal smoothing parameter, a poor curve may be obtained when none of the candidates is good. This tendency is especially striking in additive models, since the number of smoothing parameters equals the number of covariates. Several smoothing parameter selection methods using grid search criteria have been developed by many authors, such as Krivobokova [5], Reiss and Ogden [6], Wood [7], Wood [8], and Wood [9]. On the other hand, the mixed model representation of spline smoothing has also been studied (see Lin and Zhang [10], Wand [11], and Ruppert et al. [4]). In mixed models, a grid search is not necessary to obtain the final fitted curve. The smoothing parameter in the mixed model can be written as the ratio of the variance of the random coefficients to the error variance. By estimating these unknown variances with a maximum likelihood method or a restricted maximum likelihood method (REML), the final fitted curve is obtained as the estimated best linear unbiased predictor (EBLUP). Therefore the EBLUP does not require a grid search. However, the fitted curve tends to oversmooth theoretically, and numerical stability is not guaranteed if a cubic spline is used (see Section 3). The Bayesian approach to selecting the smoothing parameter has been studied by Fahrmeir et al. [12], Fahrmeir and Kneib [13], and Heinzl et al. [14]. Kauermann [15] compared several smoothing parameter selection methods.

In this paper, we propose a new method for determining the smoothing parameter using the asymptotic properties of penalized splines. For the remainder of this paper, we refer to this new method as the direct method. Before describing the outline of the direct method, we briefly review asymptotic studies of penalized splines. First, Hall and Opsomer [16] showed the consistency of the penalized spline estimator in a white noise representation. Subsequently, Li and Ruppert [17], Claeskens et al. [18], Kauermann et al. [19], and Wang et al. [20] developed the asymptotics for the penalized spline estimator in univariate regression. Yoshida and Naito [21] and Yoshida and Naito [22] studied the asymptotics for penalized splines in additive regression models and generalized additive models, respectively. Xiao et al. [23] suggested a new penalized spline estimator and developed its asymptotic properties in bivariate regression. Thus, the development of the asymptotic theory of penalized splines is relatively recent. In addition, smoothing parameter selection methods that use asymptotic properties have not yet been studied. This motivates us to try to establish such a method.

The direct method is conducted by minimizing the mean integrated squared error (MISE) of the penalized spline estimator. In general, the MISE of a nonparametric estimator decomposes into the integrated squared bias and the integrated variance of the estimator. The penalized spline estimator is no exception, and hence the direct method is stated using the expressions for the asymptotic bias and variance of the penalized spline estimator, which have been derived by Claeskens et al. [18], Kauermann et al. [19], and Yoshida and Naito [22]. From their results, we see that the leading asymptotic order of the variance of the penalized spline estimator depends only on the sample size and the number of knots, not on the smoothing parameter. However, the second term of the asymptotic variance does contain the smoothing parameter, and the variance becomes small as the smoothing parameter increases. On the other hand, the squared bias of the penalized spline estimator increases as the smoothing parameter increases. Therefore the minimizer of the MISE of the penalized spline estimator can be viewed as an optimal smoothing parameter. Since the MISE is asymptotically convex with respect to the smoothing parameter, its global minimum can be found. Such an approach has been well developed for bandwidth selection in kernel regression (see Ruppert et al. [24], Wand and Jones [25], etc.). The present paper first focuses on univariate regression, and we then extend the direct method to additive models. In both models, the mathematical and numerical properties of the direct method are studied. In additive models, we need to select as many smoothing parameters as there are explanatory variables, so the computational cost of a grid search becomes large. We expect the computational cost of the direct method to be dramatically smaller than that of the grid search.

The structure of this paper is as follows. In Section 2, we introduce the penalized spline estimator in a univariate regression model. Section 3 provides the direct method and related properties. Section 4 extends the direct method to the additive model. In Section 5, we confirm the performance of the direct method in a numerical study. We provide a discussion on the outlook and further studies in Section 6. The proofs of our theorems are provided in the appendix.

2. Penalized Spline Estimator

Consider the regression problem with observations, where is the response variable, is the explanatory variable, is the true regression function, and is the random error, assumed to be independently distributed with mean 0 and variance . Throughout the paper we assume the explanatory variable is not random, so that the expectation of can be expressed as . The support of the explanatory variable could be relaxed to the whole real line ; in order to simplify the placement of the knots in what follows, the support of the explanatory variable is assumed to be a compact interval. We aim to estimate via a nonparametric penalized spline method. We consider the knots and, for , let denote the th degree B-spline basis function associated with the above knots and the additional knots and . The B-spline function is a piecewise th degree polynomial on an interval . The details of B-spline basis functions are described in de Boor [26]. For simplicity, we write , since we do not specify the degree in what follows.

We use a linear combination of and unknown parameters to approximate the regression function and consider the B-spline regression problem, where . The purpose is to estimate the parameters included in rather than estimating directly. The penalized spline estimator of is defined as the minimizer of , where is the smoothing parameter and is the backward difference operator defined by and . Let be a matrix, where for and 0 otherwise. Using the notation and , (5) can then be expressed as . The minimum of (7) is attained at . The penalized spline estimator of for is then defined as , where .
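For illustration, the following is a minimal R sketch of the Eilers-Marx type fit described above: a B-spline design matrix on equidistant knots combined with an mth order difference penalty. The function name, the default numbers of knots, and the toy data are our illustrative choices, not part of the original formulation.

```r
## Minimal sketch of the penalized spline (P-spline) fit: B-spline basis on
## equidistant knots with an m-th order difference penalty. Helper names and
## defaults are illustrative only.
library(splines)

pspline_fit <- function(x, y, K = 40, degree = 3, m = 2, lambda = 1) {
  # K equidistant subintervals, i.e., K - 1 interior knots
  knots <- seq(min(x), max(x), length.out = K + 1)[-c(1, K + 1)]
  B <- bs(x, knots = knots, degree = degree, intercept = TRUE)  # B-spline design matrix
  D <- diff(diag(ncol(B)), differences = m)                     # m-th order difference matrix
  # penalized least squares solution: (B'B + lambda D'D)^{-1} B'y
  b_hat <- solve(crossprod(B) + lambda * crossprod(D), crossprod(B, y))
  list(coef = b_hat, fitted = as.vector(B %*% b_hat))
}

# toy usage
set.seed(1)
x <- sort(runif(200))
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
fit <- pspline_fit(x, y, lambda = 5)
```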

If the smoothing parameter is zero, reduces to the regression spline estimator, that is, the spline estimator obtained via the least squares method. The regression spline estimator yields an oscillatory fit if the number of knots is large. However, determining the number and the location of knots is a very difficult problem. The advantage of penalized spline smoothing is that a good smoothing parameter yields an estimator that achieves goodness of fit and smoothness simultaneously, without the number and location of knots having to be chosen precisely. In the present paper, we use equidistant knots and focus on the determination of the smoothing parameter. As an alternative knot placement, the quantiles of the data points are often used (see Ruppert [27]). However, it is known that the penalized spline estimator is hardly affected by the location of knots if their number is not too small. Therefore we do not discuss the location of knots further. We propose the direct method for determining the smoothing parameter in the next section.

3. Direct Determination of Smoothing Parameter

In this section, we present the direct method for determining the smoothing parameter without a grid search. The direct method is given theoretical justification by the asymptotic theory of the penalized spline estimator. To investigate the asymptotic properties of the penalized spline estimator, we assume that , , and .

For convenience we first give some notation. Let and . Let be a best approximation to the true function . This means that satisfies where is the indicator function of an interval , and is the th Bernoulli polynomial (see Zhou et al. [28]). It can be easily shown that as .

The penalized spline estimator can be written as . The first term on the right hand side of (12) is equal to the regression spline estimator, denoted by . The asymptotics for the regression spline estimator have been developed by Zhou et al. [28] and can be expressed as . From Theorem 2(a) of Claeskens et al. [18], we have , where is the covariance of and the second term on the right hand side of (12). The variance of the second term of (12) can be shown to be negligible (see the appendix). The following theorem leads to controlling the trade-off between the squared bias and the variance of the penalized spline estimator.

Theorem 1. The covariance in (15) is positive. Furthermore, as , and .

From the asymptotic forms of and and Theorem 1, we see that, for small , the bias of is small and the variance becomes large. On the other hand, a large makes the bias of increase and the variance decrease. From Theorem 1, the MISE of can be expressed as , where and are of negligible order compared, respectively, to the regression spline term and the penalized spline term given by the second term on the right hand side of (12). Actually we have and . The MISE of is asymptotically quadratic, and a global minimum exists. Let , and let be the minimizer of MISE(). We suggest the use of as the optimal smoothing parameter, where . However, MISE() and contain an unknown function and unknown parameters, and hence these must be estimated. We construct the estimator of by using consistent estimators of and . We could use the penalized spline estimator and its derivative as pilot estimators of and , but this would require another smoothing parameter , which would itself have to be chosen appropriately. Therefore we use the regression spline estimator as the pilot estimator of and . First we establish . Next we construct the pilot estimator with by using the th degree B-spline basis. Let . Using the fundamental property of B-spline functions, can be written asymptotically as . Hence the regression spline estimator can be constructed as , where and . Since the regression spline estimator tends to be oscillatory with a higher degree spline function, fewer knots are used to construct . The variance included in is estimated via . Using the above pilot estimators, is computed over some finite grid of points on .
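The R sketch below outlines the ingredients of this procedure: an unpenalized regression spline pilot of higher degree with few knots, a residual-based estimate of the error variance, and the minimization of the estimated asymptotic MISE over a finite grid. The helper names and default values are ours, the residual-based variance estimator shown is a standard stand-in for the paper's estimator, and mise_hat denotes a function of the smoothing parameter implementing the estimated MISE expression of this section, which is not reproduced in the code.

```r
## Sketch of the direct method's ingredients (illustrative names/defaults).
## mise_hat is assumed to be a function of lambda implementing the estimated
## asymptotic MISE built from the pilot quantities below.
library(splines)

pilot_step <- function(x, y, K_pilot = 10, degree_pilot = 5) {
  knots <- seq(min(x), max(x), length.out = K_pilot + 1)[-c(1, K_pilot + 1)]
  B <- bs(x, knots = knots, degree = degree_pilot, intercept = TRUE)
  fit <- lm.fit(as.matrix(B), y)                          # unpenalized regression spline pilot
  sigma2 <- sum(fit$residuals^2) / (length(y) - ncol(B))  # residual-based variance estimate
  list(coef = fit$coefficients, sigma2 = sigma2, knots = knots)
}

select_lambda <- function(mise_hat, grid = seq(0, 50, length.out = 500)) {
  grid[which.min(vapply(grid, mise_hat, numeric(1)))]     # finite-grid minimizer of the estimated MISE
}
```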

Consequently the final penalized spline estimator is defined as where

It is known that the optimal order of of the penalized spline estimator is the same as that of the regression spline estimator (see Kauermann et al. [19] and Zhou et al. [28]). Using this, we show the asymptotic property of in the following theorem.

Theorem 2. Let . Suppose that and . Then given in (21) exists, and as . Furthermore, leads to the optimal order, and the rate of convergence of the MISE of becomes .

The proof of Theorem 2 is given in the appendix. At the end of this section, we give a few remarks.

Remark 3. The asymptotic orders of the squared bias and the variance of the penalized spline estimator are and , respectively. Therefore, under and , the optimal rate of convergence of the MISE of the penalized spline estimator is . From Theorem 2, we see that the asymptotic order of yields this optimal rate of convergence.

Remark 4. O’Sullivan [1] used as the penalty term, where is the smoothing parameter. When equidistant knots are used, the penalty can be expressed as , where . The penalty proposed by Eilers and Marx [2] can be seen as a simplified version of , obtained by replacing with and , where is the identity matrix. Thus the order of the difference matrix controls the smoothness of the th degree B-spline function , and hence should be set such that to give a theoretical justification, although the penalized spline estimator can also be calculated for . In practice, is often used by many authors.
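As a small concrete illustration of the difference penalty, the second order difference matrix acting on five spline coefficients can be generated in R as follows (the size is purely illustrative).

```r
## The m-th order difference matrix of the penalty, here m = 2 acting on
## five B-spline coefficients (purely illustrative size).
D2 <- diff(diag(5), differences = 2)
D2
#      [,1] [,2] [,3] [,4] [,5]
# [1,]    1   -2    1    0    0
# [2,]    0    1   -2    1    0
# [3,]    0    0    1   -2    1
```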

Remark 5. The penalized spline regression is often considered in its mixed model representation (see Ruppert et al. [4]). In this framework, we use the th degree truncated spline model , where and the ’s are unknown parameters. Each is independently distributed as , where is an unknown variance parameter. The penalized spline fit of is defined as the estimated BLUP (see Robinson [29]). The smoothing parameter in the ordinary spline regression model corresponds to in the spline mixed model. Since and are estimated by the ML or REML method, we do not need to choose the smoothing parameter via a grid search. It is known that the estimated BLUP fit is linked to the penalized spline estimator (9) with (see Kauermann et al. [19]). Hence the estimated BLUP tends to underfit theoretically (see Remark 4).
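For readers who wish to reproduce the mixed-model route, the sketch below fits a P-spline whose smoothing level is selected by REML; the use of the mgcv package, the basis dimension, and the toy data are our assumptions, not the implementation used in the paper.

```r
## Hedged sketch of REML-based smoothing parameter selection via the
## mixed-model representation (mgcv is our choice of implementation).
library(mgcv)

set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
fit_reml <- gam(y ~ s(x, bs = "ps", k = 20), method = "REML")  # P-spline basis, REML smoothing
summary(fit_reml)
```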

Remark 6. From Lyapunov’s theorem, the asymptotic normality of the penalized spline estimator with can be derived under the same assumptions as Theorem 2 and some additional mild conditions. Although the proof is omitted, it is straightforward since is a consistent estimator of and the asymptotic order of satisfies Lyapunov’s condition.

4. Extension to Additive Models

We extend the direct method to regression models with multidimensional explanatory variables. In particular, we consider additive models in this section. For a dataset with a 1-dimensional response and a -dimensional explanatory variable , the additive model connects them via unknown regression functions and the mean parameter such that . We assume that is located on an interval and that the are normalized as to ensure the identifiability of . The intercept is then typically estimated via . Hence we replace with in (26) and set , redefining the additive model as , where each is centered. We aim to estimate via the penalized spline method. Let be the B-spline model, where is the th B-spline basis function with knots and the ’s are unknown parameters. We consider the B-spline additive model and estimate and . For , the penalized spline estimator of is defined as , where are the smoothing parameters and is the th order difference matrix of size for . Using , the penalized spline estimator of is defined as .

The asymptotics for have been studied by Yoshida and Naito [22], who derived the asymptotic bias and variance as , where , , , , and is the best approximation of . The above asymptotic bias and variance of are similar to those of the penalized spline estimator in univariate regression with . Furthermore, the asymptotic normality of has been shown by Yoshida and Naito [22]. From their paper, we find that and are asymptotically independent for . This gives some theoretical justification for selecting by minimizing the MISE of . Similarly to the discussion in Section 3, the minimizer of the MISE of can be obtained for , where . Since , , and are unknown, they must be estimated. The pilot estimators of , , and are constructed by the regression spline method. Using the pilot estimators , of , , and the estimator of , we construct the estimator of : , where is some finite grid of points on . For , we obtain , the penalized spline estimator of , where is the penalized spline estimator of using . From Theorem 2 and the proof of Theorem 3.4 of Yoshida and Naito [22], the asymptotic normality of using can be shown.

Remark 7. Since the true regression functions are normalized, the estimator should also be centered as

Remark 8. The penalized spline estimator of can be obtained using a backfitting algorithm (Hastie and Tibshirani [30]). The backfitting algorithm for the penalized splines in additive regression is detailed in Marx and Eilers [3] and Yoshida and Naito [21].
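A minimal backfitting sketch for the additive P-spline fit is given below; it reuses the pspline_fit helper sketched in Section 2, and the structure (cycling over covariates, smoothing partial residuals, and centering each component) follows the standard algorithm rather than any particular implementation in the cited papers.

```r
## Backfitting sketch for additive P-splines (illustrative; reuses the
## pspline_fit helper from the earlier sketch).
backfit_pspline <- function(X, y, lambda, K = 20, degree = 3, m = 2,
                            maxit = 50, tol = 1e-6) {
  n <- nrow(X); d <- ncol(X)
  alpha <- mean(y)                              # intercept estimate
  f <- matrix(0, n, d)                          # current component fits
  for (it in seq_len(maxit)) {
    f_old <- f
    for (j in seq_len(d)) {
      r_j <- y - alpha - rowSums(f[, -j, drop = FALSE])   # partial residuals
      fit_j <- pspline_fit(X[, j], r_j, K = K, degree = degree,
                           m = m, lambda = lambda[j])
      f[, j] <- fit_j$fitted - mean(fit_j$fitted)         # center each component
    }
    if (max(abs(f - f_old)) < tol) break        # stop when the fits stabilize
  }
  list(alpha = alpha, components = f)
}
```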

Remark 9. Although we focus on nonparametric additive regression in this paper, the direct method can also be applied to generalized additive models. However, we omit this discussion because the procedure is similar to that for the additive models discussed in this section.

Remark 10. The direct method is quite computationally efficient compared to the grid search method in additive models. In a grid search, we prepare candidate values of . Let be the set of all candidate grid values of for . Then the backfitting algorithm must be run times. If, for example, 20 candidate values were prepared for each of three covariates, a full grid search over all combinations would require 20^3 = 8000 runs of the backfitting algorithm. On the other hand, the direct method requires the backfitting algorithm to be run only twice: once for the pilot estimator and once for the final penalized spline estimator. Thus, compared with the conventional grid search method, the direct method can drastically reduce the computation time.

5. Numerical Study

In this section, we investigate the finite sample performance of the proposed direct method in a Monte Carlo simulation. Let us first consider the univariate regression model (1) for the data . We use three types of true regression functions , , and , labeled F1, F2, and F3, respectively. Here is the density function of the normal distribution. The explanatory variable and the error are independently generated from the uniform distribution on and from , respectively. We estimate each true regression function via the penalized spline method. We use the linear and cubic B-spline bases with equidistant knots and the second order difference penalty. In addition we set equidistant knots, and the smoothing parameter is determined by the direct method. The penalized spline estimators with the linear spline and the cubic spline are denoted by L-Direct and C-Direct, respectively. For comparison with L-Direct, the same studies are also implemented for the penalized spline estimator with a linear spline and the smoothing parameter selected via GCV and via the restricted maximum likelihood method (REML) in the mixed model representation, and for the local polynomial estimator with a normal kernel and plug-in bandwidth (see Ruppert et al. [24]). In GCV, we set the candidate values of as . The above three estimators are denoted by L-GCV, L-REML, and local linear, respectively. Furthermore we compare C-Direct with C-GCV and C-REML, which are the penalized spline estimators with the cubic spline and the smoothing parameter determined by GCV and REML. Let be the sample MISE of any estimator of , where is for the th replication and . We calculate the sample MISE of the penalized spline estimator with the direct method, GCV, and REML and of the local linear estimator. In this simulation, we use and . We have simulated and 200.
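For completeness, the GCV criterion used in this comparison can be computed as follows for a given smoothing parameter; this is the standard form for a linear smoother, with the basis construction mirroring the fitting sketch in Section 2 (names and defaults are illustrative).

```r
## Sketch of the GCV criterion for the penalized spline smoother:
## GCV(lambda) = n * RSS(lambda) / (n - tr(H(lambda)))^2.
library(splines)

gcv_pspline <- function(x, y, lambda, K = 40, degree = 3, m = 2) {
  knots <- seq(min(x), max(x), length.out = K + 1)[-c(1, K + 1)]
  B <- bs(x, knots = knots, degree = degree, intercept = TRUE)
  D <- diff(diag(ncol(B)), differences = m)
  H <- B %*% solve(crossprod(B) + lambda * crossprod(D), t(B))  # smoother (hat) matrix
  rss <- sum((y - H %*% y)^2)
  n <- length(y)
  n * rss / (n - sum(diag(H)))^2
}

# grid search over candidate values, as in the simulation setting
# lambda_gcv <- grid[which.min(sapply(grid, function(l) gcv_pspline(x, y, l)))]
```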

The sMISE of all estimators for each model and are given in Table 1. The penalized spline estimator using the direct method shows good performance in each setting. Compared with the other smoothing parameter selection methods, the direct method is somewhat better than GCV on the whole. However, for , C-GCV is better than C-Direct for F1 and F3, though the difference is very small. We see that the sMISE of C-Direct is smaller than that of local linear, whereas local linear behaves better than L-Direct in some cases. For F2, C-Direct has the smallest sMISE of all estimators for and 200. Although the performance clearly depends on the situation in which the data sets are generated, we believe that the proposed method is an efficient one.

Next, the difference between and is investigated empirically. Let be the obtained in the th replication for . We then calculate the sample MSE of for F1, F2, and F3 and and 200. To construct for F1, F2, and F3, we use the true and and an approximation of . Here the approximation of is the sample average of over replications with and 1000 replications.

Table 2 reports the sMSE of for each true function. For comparison, the sMSE of the smoothing parameters obtained via GCV and REML are also calculated. The sMSE of L-Direct and C-Direct are small, even for , for all true functions. Therefore the accuracy of appears to be guaranteed, which indicates that the pilot estimator constructed via the least squares method is adequate. The sMSE values for the direct method are smaller than those for GCV and REML. This result is not surprising, since GCV and REML do not target . Together with Table 1, however, it appears that the sample MSE of the smoothing parameter is reflected in the sample MISE of the estimator.

The proposed direct method was derived by minimizing the MISE. On the other hand, GCV and REML are obtained in the context of minimizing the prediction squared error (PSE) and the prediction error, respectively. Hence we compare the sample PSE of the penalized spline estimator with the direct method, GCV, and REML, and of the local linear estimator. Since the prediction error is almost the same as the MISE (see Section 4 of Ruppert et al. [4]), we omit the comparison of the prediction error. Let be the sample PSE for any estimator , where is independently generated from for all .

Table 3 reports the modified sPSE, , of all estimators for each model and . The modified sPSE is discussed in Remark 11. From the results, we can confirm that the direct method has good predictive performance. GCV can be regarded as an estimator of the sPSE; therefore, in some cases, the sPSE with GCV is smaller than that with the direct method. The cause appears to be the accuracy of the variance estimates (see Remark 11). However, the difference is very small.

We also examined the computational time (in seconds) taken to obtain for F1, , and . The fits with the direct method, GCV, and REML took 0.04, 1.22, and 0.34 seconds, respectively. Although the differences are small, the computational time of the direct method was shorter than that of GCV and REML.

Next we confirm the behavior of the penalized spline estimator with the direct method in the additive model. For the data , we assume the additive model with true functions , , and . The error is generated as in the first simulation. The design points are generated by , where the are generated independently from the uniform distribution on . In this simulation, we adopt and . We then corrected to satisfy for each . We construct the penalized spline estimator via the backfitting algorithm with the cubic spline and the second order difference penalty. The number of equidistant knots is , and the smoothing parameters are determined using the direct method. The pilot estimator used to construct is the regression spline estimator with a fifth degree spline and . We calculate the sample MISE of each for and 1000 Monte Carlo iterations; that is, we calculate the sample MISE of , , and . In order to compare with the direct method, we also conduct the same simulation with GCV and REML. In GCV, we set the candidate values of as for . Table 4 summarizes the sample MISE of , denoted by MISE, for the direct method, GCV, and REML. The penalized spline estimator with the direct method performs similarly to that with GCV in both the uncorrelated and correlated design cases. For , the behaviors of MISE1, MISE2, and MISE3 with the direct method are similar. On the other hand, for GCV, MISE1 is slightly larger than MISE2 and MISE3. The direct method leads to an efficient estimate for all covariates. On the whole, the direct method is better than REML. From the above, we believe that the direct method is preferable in practice.

To confirm the consistency of the direct method, the sample MSE of is calculated in the same manner as in the univariate regression case. For comparison, the sample MSE for GCV and REML are also obtained. Here the true is defined in the same way as in the univariate case. Table 5 shows the results for each , , and , and and 0.5. We see from the results that the behavior of the direct method is good. The sample MSE of the direct method is smaller than that of GCV and REML for all . A similar tendency can also be seen for the random design .

Table 6 shows the sPSE of for each smoothing parameter selection method. We see from the results that the efficiency of the direct method is also guaranteed in terms of prediction accuracy.

Finally, we show the computational time (in seconds) required to construct the penalized spline estimator with each method. The computational times with the direct method, GCV, and REML are 11.87 s, 126.43 s, and 43.5 s, respectively. We see that the direct method is more efficient than the other methods in terms of computation (see Remark 10). All computations in the simulation were done using the software R on a computer with a 3.40 GHz CPU and 24.0 GB of memory. Although these are only a few examples, the direct method can be seen as a good method for selecting the smoothing parameter.

Remark 11. We calculate as the criterion for assessing the prediction squared error. The ordinary PSE is defined as , where the ’s are test data. The second term of the PSE is similar to the MISE, so the sample PSE evaluates the accuracy of both the variance part and the MISE part. To see in detail the difference in the sample PSE between the direct method and the other methods, we calculated in this section.

6. Discussion

In this paper, we proposed a new direct method for determining the smoothing parameter of a penalized spline estimator in a regression problem. The direct method is based on minimizing the MISE of the penalized spline estimator. We studied the asymptotic properties of the direct method. The asymptotic normality of the penalized spline estimator using is theoretically guaranteed when a consistent estimator is used as the pilot estimator to obtain . In the numerical study for the additive model, the computational cost of the direct method was dramatically smaller than that of grid search methods such as GCV. Furthermore, we found that the performance of the direct method is better than, or at least similar to, that of other methods.

The direct method can be developed for other regression models, such as varying-coefficient models, Cox proportional hazards models, single-index models, and others, provided the asymptotic bias and variance of the penalized spline estimator can be derived. It is not limited to mean regression; it can also be applied to quantile regression. Indeed, Yoshida [31] has presented the asymptotic bias and variance of the penalized spline estimator in univariate quantile regression. Furthermore, improving the direct method for various situations and datasets is important. In particular, the development of a locally adaptive determination of is an interesting avenue for further research.

Appendix

We describe the technical details. For a matrix , . First, to prove Theorems 1 and 2, we introduce a fundamental property of penalized splines in the following lemma.

Lemma A.1. Let be a matrix. Suppose that , , and . Then and .

The proof of Lemma A.1 is given in Zhou et al. [28] and Claeskens et al. [18]. Repeated use of Lemma A.1 yields that the asymptotic order of the variance of the second term of (12) is . Indeed, the asymptotic order of the variance of the second term of (12) can be calculated as . When and , holds.

Proof of Theorem 1. We write . Since we have , the second term on the right hand side of (A.2) can be shown to be . From Lemma A.1, it is easy to derive that . Consequently we have , and this leads to Theorem 1.

Proof of Theorem 2. It is sufficient to prove that and . Since we have , by using Lemma A.1 and the fact that (see Yoshida [31]).
From the properties of B-spline functions, for a square matrix , there exists such that . In other words, the asymptotic orders of and are the same. Let . Then, since , we obtain . Furthermore, because , we have , and hence it can be shown that . Together with (A.7) and (A.11), is obtained. The rate of convergence of the penalized spline estimator with is then when , which is detailed in Yoshida and Naito [22].

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author would like to thank two anonymous referees for their careful reading and comments, which improved the paper.