Journal of Probability and Statistics

Research Article | Open Access

Volume 2009 | Article ID 487194 | 37 pages | https://doi.org/10.1155/2009/487194

Model and Variable Selection Procedures for Semiparametric Time Series Regression

Academic Editor: Junbin Gao
Received: 13 Mar 2009
Accepted: 26 Jun 2009
Published: 22 Sep 2009

Abstract

Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for the issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.

1. Introduction

Non- and semiparametric regression has become a rapidly developing field of statistics in recent years. Various types of nonlinear models, such as neural networks, kernel methods, spline methods, series estimation, and local linear estimation, have been applied in many fields. Non- and semiparametric methods, unlike parametric methods, make no or only mild assumptions about the trend or seasonal components and are therefore attractive when the data at hand do not meet the criteria for classical time series models. However, the price of this flexibility can be high: when multiple predictor variables are included in the regression equation, nonparametric regression faces the so-called curse of dimensionality.

A major problem associated with non- and semiparametric trend estimation is the selection of a smoothing parameter and the number of basis functions. Most of the literature on nonparametric regression with dependent errors focuses on the kernel estimator of the trend function (see, e.g., Altman [1], Hart [2], and Herrmann et al. [3]). These results have been extended to the case of long-memory errors by Hall and Hart [4], Ray and Tsay [5], and Beran and Feng [6]. Kernel methods are affected by the so-called boundary effect. A well-known estimator with automatic boundary correction is the local polynomial approach, which is asymptotically equivalent to certain kernel estimates. For detailed discussions of local polynomial fitting see, for example, Fan and Gijbels [7] and Fan and Yao [8].

For semiparametric models with serially correlated errors, Gao [9] proposed semiparametric least-square estimators (SLSEs) for the parametric component and studied their asymptotic properties. You and Chen [10] constructed a semiparametric generalized least-square estimator (SGLSE) with autoregressive errors. Aneiros-Pérez and Vilar-Fernández [11] constructed SLSEs with correlated errors.

As in parametric regression models, variable selection and the choice of the smoothing parameter and basis functions are important problems in non- and semiparametric models. It is common practice to include only important variables in the model to enhance predictability. The general approach to finding sensible parameters is to choose an optimal subset determined according to a model selection criterion. Several information criteria for evaluating models constructed by various estimation procedures have been proposed; see, for example, Konishi and Kitagawa [12]. The commonly used criteria are generalized cross-validation, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). Although best subset selection is practically useful, these selection procedures ignore the stochastic errors inherited from the stages of variable selection. Furthermore, best subset selection lacks stability; see, for example, Breiman [13]. Nonconcave penalized likelihood approaches for selecting significant variables in parametric regression models have been proposed by Fan and Li [14]. This methodology can be extended to semiparametric generalized regression models with dependent errors. One of the advantages of this procedure is the simultaneous selection of variables and estimation of unknown parameters.

The rest of this paper is organized as follows. In Section 2.1 we introduce our semiparametric regression model and explain classical partial ridge regression estimation. Rather than focusing on a kernel estimator of the trend function, we use basis functions to fit the trend component of the time series. In Section 2.2, we propose a penalized weighted least-square approach with information criteria for estimation and variable selection. The estimation algorithms are explained in Section 2.3. In Section 2.4, the GIC proposed by Konishi and Kitagawa [15], the BIC proposed by Hastie and Tibshirani [16], and the BIC proposed by Konishi et al. [17] are applied to the evaluation of models estimated by penalized weighted least-square. Section 2.5 contains the asymptotic results for the proposed estimators. In Section 3 the performance of these information criteria is evaluated by simulation studies. Section 4 contains the real data analysis. Section 5 concludes, and proofs of the theorems are given in the appendix.

2. Estimation Procedures

In this section, we present our semiparametric regression model and estimation procedures.

2.1. The Model and Penalized Estimation

We consider the semiparametric regression model

$$y_i = \boldsymbol{x}_i^{\top}\boldsymbol{\beta} + f(t_i) + \varepsilon_i, \qquad i = 1, \ldots, n, \tag{2.1}$$

where $y_i$ is the response variable and $\boldsymbol{x}_i$ is the covariate vector at time $t_i$, $f$ is an unspecified baseline (trend) function, $\boldsymbol{\beta}$ is a vector of unknown regression coefficients, and $\varepsilon_i$ is a Gaussian, zero-mean, covariance-stationary process.

We assume the following properties for the error terms and vectors of explanatory variables .

(A.1) It holds that is a linear process given by where and is an i.i.d. Gaussian random variable with and .

(A.2) The coefficients satisfy the conditions that for all , and .

We define .

The assumptions on covariate variables are as follows.

(B.1) Also and , have mean zero and variance 1.

The trend function is expressed as a linear combination of a set of underlying basis functions :

where is an -dimensional vector constructed from basis functions , and is an unknown parameter vector to be estimated. Examples of basis functions include B-splines, P-splines, and radial basis functions. A P-spline basis is given by

where are spline knots. This specification uses the so-called truncated power function basis. The choice of the number of knots and the knot locations are discussed by Yu and Ruppert [18].
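As a concrete illustration of the truncated power basis described above, the following sketch builds such a design matrix in Python; the polynomial degree, the number of knots, and the placement of knots at equally spaced sample quantiles are illustrative assumptions rather than the authors' choices.

```python
import numpy as np

def pspline_basis(t, degree=2, n_knots=6):
    """Truncated power basis: 1, t, ..., t^p, (t - kappa_1)_+^p, ..., (t - kappa_K)_+^p.
    Knots kappa_k are placed at equally spaced sample quantiles of t (an illustrative choice)."""
    t = np.asarray(t, dtype=float)
    knots = np.quantile(t, np.linspace(0.0, 1.0, n_knots + 2)[1:-1])  # interior knots
    poly = np.vander(t, degree + 1, increasing=True)                  # 1, t, ..., t^degree
    trunc = np.maximum(t[:, None] - knots[None, :], 0.0) ** degree    # truncated power terms
    return np.hstack([poly, trunc])                                   # n x (degree + 1 + n_knots)

# Example: basis for a trend observed at n = 100 equally spaced time points.
tt = np.linspace(0.0, 1.0, 100)
B = pspline_basis(tt)   # shape (100, 9)
```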

Radial basis function (RBF) networks emerged as a variant of artificial neural networks in the late 1980s. Nonlinear specifications based on RBFs have been widely used in cognitive science, engineering, biology, linguistics, and so on. If we consider RBF modeling, a basis function can take the form

where determines the location and determines the width of the basis function.
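The following minimal sketch constructs a Gaussian RBF design matrix of the form just described; placing the centers on an equally spaced grid and using a common width equal to the center spacing are illustrative defaults, not the rule used in the paper.

```python
import numpy as np

def rbf_basis(t, n_centers=8, width=None):
    """Gaussian radial basis functions phi_k(t) = exp(-(t - mu_k)^2 / (2 * s^2)),
    where mu_k sets the location and s sets the width of each basis function."""
    t = np.asarray(t, dtype=float)
    centers = np.linspace(t.min(), t.max(), n_centers)
    if width is None:
        width = centers[1] - centers[0]        # a simple default width (illustrative)
    return np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

tt = np.linspace(0.0, 1.0, 100)
Phi = rbf_basis(tt)     # 100 x 8 matrix of basis function evaluations
```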

After selecting appropriate basis functions, the semiparametric regression model (2.1) can be expressed as the linear model

where , , with . The penalized least-square estimator is then a minimizer of the function

where is the smoothing parameter controlling the tradeoff between the goodness-of-fit measured by weighted least-square and the roughness of the estimated function. Also is an appropriate positive semidefinite symmetric matrix. For example, if satisfies , we have the usual quadratic integral penalty (see, e.g., Green and Silverman [19]). By simple calculus, (2.7) is minimized when and satisfy the block matrix equation

This equation can be solved without any iteration (see, e.g., Green [20]). First, we find where is usually called the smoothing matrix. Substituting into (2.6), we obtain

where , , and is the identity matrix of order . Applying least-square to the linear model (2.9), we obtain the semiparametric ordinary least-square estimator (SOLSE) result:
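To fix ideas, here is a sketch of this profiling computation in Python. The variable names, the use of a Speckman-type profiled estimator, and the toy penalty matrix are illustrative assumptions; the paper's own formulas should be consulted for the exact form.

```python
import numpy as np

def solse(y, X, B, K, lam):
    """Profiled (partial ridge) computation: minimize ||y - X b - B w||^2 + lam * w' K w.
    B is the n x m basis matrix, K a positive semidefinite penalty matrix, lam the smoothing parameter."""
    S = B @ np.linalg.solve(B.T @ B + lam * K, B.T)     # smoothing matrix
    X_tilde = X - S @ X                                 # (I - S) X
    y_tilde = y - S @ y                                 # (I - S) y
    beta = np.linalg.solve(X_tilde.T @ X_tilde, X_tilde.T @ y_tilde)   # parametric part
    w = np.linalg.solve(B.T @ B + lam * K, B.T @ (y - X @ beta))       # basis coefficients
    return beta, w

# Toy usage with a polynomial basis and an identity penalty (illustrative choices).
rng = np.random.default_rng(0)
n = 100
t = np.linspace(0.0, 1.0, n)
X = rng.normal(size=(n, 2))
B = np.vander(t, 5, increasing=True)
y = X @ np.array([1.0, -0.5]) + np.sin(2 * np.pi * t) + rng.normal(scale=0.2, size=n)
beta_hat, w_hat = solse(y, X, B, np.eye(5), lam=1.0)
```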

Speckman [21] studied similar solutions for partial linear models with independent observations. Since the errors are serially correlated in model (2.1), is not asymptotically efficient. To obtain an asymptotically efficient estimator for , we use the prewhitening transformation. Note that the errors in (2.6) are invertible. Let , where is the lag operator and with . Applying to the model (2.6) and rewriting the corresponding equation, we obtain the new model:

where and . The regression errors in (2.12) are i.i.d. Because, in practice, the response variable is unknown, we use a reasonable approximation, based on the work of Xiao et al. [22] and Aneiros-Pérez and Vilar-Fernández [11].

Under the usual regularity conditions the coefficients decrease geometrically so, letting denote a truncation parameter, we may consider the truncated autoregression on :

where are i.i.d. random variables with . We make the following assumption about the truncation parameter.

(C.1) The truncation parameter satisfies for some .

The expansion rate of the truncation parameter given in (C.1) is also for convenience. Let be the transformation matrix such that . Then the model (2.12) can be expressed as

where

with . Here denotes the lag autocorrelation function of .
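As a rough illustration of this prewhitening step, the sketch below fits the truncated autoregression to a residual series by ordinary least squares and applies the resulting filter to a vector. The order, the variable names, and the loop-based filtering (rather than an explicit transformation matrix) are illustrative assumptions.

```python
import numpy as np

def fit_ar(resid, p):
    """OLS fit of the truncated autoregression resid_t = a_1 resid_{t-1} + ... + a_p resid_{t-p} + e_t."""
    y = resid[p:]
    Z = np.column_stack([resid[p - j:len(resid) - j] for j in range(1, p + 1)])
    return np.linalg.solve(Z.T @ Z, Z.T @ y)            # (a_1, ..., a_p)

def prewhiten(v, a):
    """Apply the filter (1 - a_1 L - ... - a_p L^p) to a series, dropping the first p observations."""
    v = np.asarray(v, dtype=float)
    p = len(a)
    out = v[p:].copy()
    for j, aj in enumerate(a, start=1):
        out -= aj * v[p - j:len(v) - j]
    return out

# Toy check: filtering an AR(1) series with estimated coefficients should leave roughly white noise.
rng = np.random.default_rng(1)
e = np.zeros(500)
for i in range(1, 500):
    e[i] = 0.7 * e[i - 1] + rng.normal()
a_hat = fit_ar(e, p=3)
white = prewhiten(e, a_hat)
```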

Now our estimation problem for the semiparametric time series regression model can be expressed as the minimization of the function

where and . Based on the work by Aneiros-Pérez and Vilar-Fernández [11], an estimator for is constructed as follows. We use the residuals to construct an estimate of using the ordinary least square method applied to the model

Define the estimate of , where and is the matrix of regressors with typical element . Then is obtained from by replacing with , with , and so forth. Applying least-square to the linear model, we obtain . Then

where and , with and . The following theorem shows that the loss in efficiency associated with the estimation of the autocorrelation structure is modest in large samples.

Theorem 2.1. Let the conditions of (A.1), (A.2), (B.1), and (C.1) hold, and assume that is nonsingular. Let denote the true value of , then where denotes convergence in distribution and . Assume that is nonsingular and let denote the true value of , then one has where .

2.2. Variable Selection and Penalized Least Squares

Variable and model selection are indispensable tools for statistical data analysis. However, they have rarely been studied in the semiparametric context. Fan and Li [23] studied penalized weighted least-square estimation with variable selection in semiparametric models for longitudinal data. In this section, we introduce the penalized weighted least-square approach. We propose an algorithm for calculating the penalized weighted least-square estimator of in Section 2.3. In Section 2.4 we present the information criteria for model selection.

From now on, we assume that the matrices and are standardized so that each column has mean 0 and variance 1. The first term in (2.7) can be regarded as a loss function of and , which we will denote by . Then expression (2.7) can be written as

The methodology in the previous section can be applied to the variable selection via penalized least-square. A form of penalized weighted least-square is

where are penalty functions and are regularization parameters, which control the model complexity. By minimizing (2.27) with a special construction of the penalty function given in what follows, some coefficients are estimated as 0, which deletes the corresponding variables, whereas others are not. Thus, the procedure selects variables and estimates coefficients simultaneously. The resulting estimate is called a penalized weighted least-square estimate.

Many penalty functions have been used for penalized least-square and penalized likelihood in various non- and semiparametric models. There are strong connections between the penalized weighted least-square and the variable selection. Denote by and the true parameters and the estimates, respectively. By taking the hard thresholding penalty function

we obtain the hard thresholding rule

The $L_2$ penalty results in a ridge regression, and the $L_1$ penalty yields a soft thresholding rule

This solution gives the best subset selection via stepwise deletion and addition. Tibshirani [24, 25] proposed the LASSO, which is the penalized least-square estimate with the $L_1$ penalty, in the general least-square and likelihood settings.
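For intuition, the closed-form thresholding rules mentioned above are easy to state for a single coefficient. The sketch below implements the hard thresholding rule and the soft thresholding (LASSO) rule; treating the input as the unpenalized estimate of each coefficient, as in an orthonormal design, is an illustrative simplification.

```python
import numpy as np

def hard_threshold(z, lam):
    """Hard thresholding rule: keep z unchanged when |z| > lam, otherwise set it to zero."""
    z = np.asarray(z, dtype=float)
    return z * (np.abs(z) > lam)

def soft_threshold(z, lam):
    """Soft thresholding (LASSO) rule: shrink |z| by lam and truncate at zero, keeping the sign."""
    z = np.asarray(z, dtype=float)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([-2.0, -0.3, 0.1, 0.8, 1.5])
print(hard_threshold(z, 0.5))   # approximately [-2.   0.   0.   0.8  1.5]
print(soft_threshold(z, 0.5))   # approximately [-1.5  0.   0.   0.3  1. ]
```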

2.3. An Estimation Algorithm

In this section we describe an algorithm for calculating the penalized least-square estimator of . The estimate of minimizes the penalized sum of squares given by (2.17). First we obtain in Step 1. In Step 2, we estimate by using obtained in Step 1. Then is obtained using (Step 3). Here the penalty parameters and the number of basis functions are chosen using information criteria that will be discussed in Section 2.4.

Step 1. First we obtain and by (2.10) and (2.11), respectively. Then we have the model

Step 2. An estimator for is constructed following the work of Aneiros-Pérez and Vilar-Fernández [11]. We use the residuals to construct an estimate of using the ordinary least square method applied to the model. The estimator is obtained from by replacing parameters with their estimates.

Step 3. Our SGLSE of is obtained by using the model where , , , and . Finding the solution of the penalized least-square problem (2.27) requires a local quadratic approximation, because the and hard thresholding penalties are irregular at the origin and may not have second derivatives at some points. We follow the methodology of Fan and Li [14]. Suppose that we are given an initial value that is close to the minimizer of (2.27). If is very close to 0, then we set ; otherwise, the penalty can be locally approximated by a quadratic function as when . Therefore, the minimization problem (2.27) can be reduced to a quadratic minimization problem, and the Newton-Raphson algorithm can be used. The right-hand side of (2.27) can be locally approximated by , where . The solution can be found by iteratively computing the block matrix equation: This gives the estimators , where , , and .
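A minimal sketch of the local quadratic approximation idea follows: near the current value, the penalty derivative divided by the absolute coefficient gives a ridge-type weight, so each iteration reduces to a weighted least-square solve, and coefficients that approach zero are removed. The use of the L1 penalty derivative, the penalty scaling by the sample size, and all names here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lqa_penalized_ls(X, y, lam, n_iter=50, tol=1e-8, eps=1e-6):
    """Local quadratic approximation (in the spirit of Fan and Li [14]) for penalized least squares.
    Each step solves (X'X + n * D) b = X'y with D = diag(p'_lam(|b_j|) / |b_j|);
    here p'_lam(.) = lam (the L1 penalty) is used purely for illustration."""
    n, d = X.shape
    beta = np.linalg.solve(X.T @ X + eps * np.eye(d), X.T @ y)   # ridge-stabilized initial value
    for _ in range(n_iter):
        absb = np.abs(beta)
        keep = absb > eps                          # coefficients near zero are deleted
        beta_new = np.zeros(d)
        if keep.any():
            Xk = X[:, keep]
            D = np.diag(lam / absb[keep])          # LQA curvature of the penalty
            beta_new[keep] = np.linalg.solve(Xk.T @ Xk + n * D, Xk.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta
```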

2.4. Information Criteria

Selecting suitable values for the penalty parameters and number of basis functions is crucial to obtaining good curve fitting and variable selection. The estimate of minimizes the penalized sum of squares given by (2.17). In this section, we express the model (2.15) as

where and . In many applications, the number of basis functions needs to be large to adequately capture the trend. To determine the number of basis functions, all models with are fitted, and the preferred model is the one that minimizes the model selection criterion.

The Schwarz BIC is given by

where is the least-square estimate of without a degree of freedom correction. Hastie and Tibshirani [16] used the trace of the smoother matrix as an approximation to the effective number of parameters. By replacing the number of parameters in BIC by , we formally obtain information criteria for the basis function Gaussian regression model in the form

where and

Here is defined by (2.44) in what follows.

We also consider the use of the BIC criterion to choose appropriate values for these unknown parameters. Let and be the numbers of zero components in and , respectively. Then the BIC criterion is

where is the matrix of second derivatives of the penalized likelihood defined by

Here is a diagonal matrix with th element and . The -dimensional vector has th element where is the element in the th row and th column of . Also is the matrix defined by

and and are the product of the and nonzero eigenvalues of and , respectively.

Konishi and Kitagawa [15] proposed the generalized information criterion (GIC) framework for the case where models are not estimated by maximum likelihood. Hence, we also consider the use of GIC for model evaluation. The GIC for the hard thresholding penalty function is given by

where is a matrix. Also is basically the product of the empirical influence function and the score function. It is defined by

The number of basis functions and the penalty parameters are determined by minimizing BIC, BIC, or GIC.
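To illustrate how the trace of the smoother matrix can stand in for the number of parameters in a Schwarz-type criterion, the following sketch scores ridge-type fits with different numbers of basis functions and keeps the minimizer; the polynomial basis, the fixed smoothing parameter, and the candidate range are illustrative stand-ins for the criteria defined above.

```python
import numpy as np

def bic_trace(y, fitted, df):
    """Schwarz-type BIC with the effective number of parameters df = tr(S)
    (in the spirit of Hastie and Tibshirani [16])."""
    n = len(y)
    sigma2 = np.mean((y - fitted) ** 2)   # residual variance, no degrees-of-freedom correction
    return n * np.log(sigma2) + np.log(n) * df

def select_n_basis(y, t, candidates=range(4, 13), lam=1.0):
    """Fit a ridge-type smoother for each candidate number of basis functions and return
    the candidate minimizing the criterion (an illustrative search, not the paper's full procedure)."""
    best_m, best_crit = None, np.inf
    u = (t - t.min()) / (t.max() - t.min())
    for m in candidates:
        B = np.vander(u, m, increasing=True)
        S = B @ np.linalg.solve(B.T @ B + lam * np.eye(m), B.T)
        crit = bic_trace(y, S @ y, np.trace(S))
        if crit < best_crit:
            best_m, best_crit = m, crit
    return best_m
```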

2.5. Sampling Properties

We now study the asymptotic properties of the estimate resulting from the penalized least-square function (2.27).

First we establish the convergence rate of the penalized profile least-square estimator. Assume that the penalty functions and are nonnegative and nondecreasing with . Let and denote the true values of and , respectively. Also let

Theorem 2.2. Under the conditions of Theorem 2.1, if , and tend to 0 as , then with probability tending to 1, there exist local minimizers and of such that and .

Theorem 2.2 demonstrates how the rate of convergence of the penalized least-square estimator of depends on for . To achieve the root-n convergence rate, we have to take small enough so that .

Next we establish the oracle property for the penalized least-square estimator. Let consist of all nonzero components of and let consist of all zero components. Let consist of all nonzero components of and let consist of all zero components. Let

Write

Further, let consist of the first components of and let consist of the last components of . Let consist of the first components of and let consist of the last components of .

Theorem 2.3. Assume that for and , one has , , and . Assume that the penalty functions and satisfy . If , then, under the conditions of Theorem 2.1, with probability tending to 1, the root-n consistent local minimizers and in Theorem 2.2 must satisfy the following: (1) (sparsity) ; (2) (asymptotic normality) . Here and consist of the first and rows and columns of and defined in Theorem 2.1, respectively.

3. Numerical Simulations

We now assess the performance of the semiparametric estimators proposed in the previous section via simulations. We generate simulation data from the model

where , and is a Gaussian AR(1) process with autoregressive coefficient . We use radial basis function network modeling to fit the trend component. We simulate the covariate vector from a normal distribution with mean 0 and . In each case, the autoregressive coefficient is set to , , or , and the sample size is set to 50, 100, or 200. Figure 1 depicts some examples of simulation data.
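The following sketch generates data of the kind described above: a linear part, a smooth trend, and Gaussian AR(1) errors. The particular trend function, coefficient vector, and error scale are illustrative placeholders, since the paper's exact settings are not given here.

```python
import numpy as np

def simulate(n=100, phi=0.5, beta=(1.0, 0.0, 0.5), seed=0):
    """Generate y_i = x_i' beta + f(t_i) + e_i with Gaussian AR(1) errors e_i.
    The trend f, the coefficient vector beta, and the noise scale are illustrative choices."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1) / n
    X = rng.normal(size=(n, len(beta)))            # covariates with mean 0 and variance 1
    f = np.sin(2.0 * np.pi * t)                    # a smooth baseline trend (placeholder)
    e = np.zeros(n)
    innov = rng.normal(size=n)
    for i in range(1, n):
        e[i] = phi * e[i - 1] + innov[i]           # Gaussian AR(1) errors
    y = X @ np.array(beta) + f + e
    return y, X, t

y, X, t = simulate(n=200, phi=0.9)                 # one replicate with strong autocorrelation
```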

We compare the effectiveness of our proposed procedure (PLS + HT) with an existing procedure (PLS). We also compare the performance of the information criteria BIC, GIC and BIC for evaluating the models. As discussed in Section 3, the proposed procedure (PLS + HT) excludes basis functions as well as explanatory variables.

First we assess the performance of the baseline estimate by the square root of average squared errors (RASE),

$$\mathrm{RASE} = \left\{\frac{1}{M}\sum_{k=1}^{M}\bigl(\hat{f}(t_k) - f(t_k)\bigr)^{2}\right\}^{1/2},$$

where $\{t_k,\ k = 1, \ldots, M\}$ are the grid points at which the baseline function is estimated. In our simulation, we use . Table 1 shows the means and standard deviations of RASE for based on 500 simulations. The RASE increases as the autoregressive coefficient increases but decreases as the sample size increases. From Table 1, we see that the proposed procedure (PLS + HT) works better than PLS and that models evaluated by BIC work better than those based on BIC or GIC.



PLS BIC 0.069 (0.013) 0.081(0.017) 0.114 (0.037) 0.232 (0.115)
GIC 0.047 (0.013) 0.062 (0.015) 0.106 (0.037) 0.229 (0.120)
BIC 0.042 (0.010) 0.060 (0.019) 0.103 (0.039) 0.226 (0.124)
PLS + HT BIC 0.061 (0.040) 0.070 (0.021) 0.101 (0.038) 0.226 (0.103)
GIC 0.053 (0.017) 0.068 (0.020) 0.101 (0.034) 0.218 (0.097)
BIC 0.046 (0.015) 0.060 (0.019) 0.093 (0.034) 0.214 (0.101)

PLS BIC 0.041(0.008) 0.052 (0.012) 0.080 (0.025) 0.172 (0.080)
GIC 0.034 (0.008) 0.044 (0.011) 0.074 (0.026) 0.170 (0.080)
BIC 0.036 (0.010) 0.044 (0.010) 0.070 (0.024) 0.163 (0.079)
PLS + HT BIC 0.042 (0.008) 0.051 (0.016) 0.080 (0.024) 0.172 (0.079)
GIC 0.040 (0.015) 0.048 (0.016) 0.073 (0.024) 0.168 (0.078)
BIC 0.037 (0.011) 0.041 (0.011) 0.068 (0.023) 0.158 (0.075)

PLS BIC 0.029 (0.005) 0.040 (0.016) 0.058 (0.018) 0.129 (0.056)
GIC 0.025 (0.008) 0.033 (0.010) 0.056 (0.018) 0.125 (0.057)
BIC 0.029 (0.006) 0.031 (0.007) 0.050 (0.015) 0.114 (0.052)
PLS + HT BIC 0.030 (0.005) 0.040 (0.016) 0.058 (0.019) 0.127 (0.053)
GIC 0.027 (0.009) 0.033 (0.011) 0.054 (0.015) 0.123 (0.054)
BIC 0.019 (0.008) 0.028 (0.009) 0.047 (0.018) 0.109 (0.048)

Then the performance of is assessed by the square root of average squared errors ():

The means and standard deviations of for based on 500 simulations are shown in Table 2. We can see that the proposed procedure (PLS + HT) works better than the existing procedure. There is almost no change in it as the autoregressive coefficient changes (unlike the procedure of You and Chen [10]), whereas it depends strongly on the information criterion; BIC works the best among the criteria. We can also confirm the consistency of the estimator, that is, decreases as the sample size increases.



PLS BIC 0.022 (0.007) 0.023 (0.007) 0.022 (0.007) 0.020 (0.007)
GIC 0.021 (0.006) 0.023 (0.007) 0.023 (0.010) 0.021 (0.007)
BIC 0.021(0.006) 0.022 (0.007) 0.022 (0.009) 0.020 (0.007)
PLS + HT BIC 0.011(0.005) 0.013 (0.007) 0.012 (0.007) 0.010 (0.005)
GIC 0.010 (0.004) 0.013 (0.007) 0.013 (0.009) 0.011(0.006)
BIC 0.010 (0.004) 0.011 (0.005) 0.011 (0.006) 0.010 (0.005)

PLS BIC 0.014 (0.004) 0.014 (0.004) 0.014 (0.005) 0.012 (0.004)
GIC 0.013 (0.004) 0.014 (0.004) 0.013 (0.004) 0.012 (0.004)
BIC 0.014 (0.004) 0.014 (0.004) 0.013 (0.004) 0.011 (0.004)
PLS + HT BIC 0.007 (0.003) 0.008 (0.004) 0.007 (0.004) 0.006 (0.003)
GIC 0.007 (0.003) 0.008 (0.004) 0.007 (0.003) 0.006 (0.003)
BIC 0.007 (0.003) 0.007 (0.003) 0.006 (0.003) 0.006 (0.003)

PLS BIC 0.009 (0.003) 0.009 (0.003) 0.009 (0.003) 0.007 (0.002)
GIC 0.009 (0.003) 0.009 (0.003) 0.008 (0.003) 0.007 (0.002)
BIC 0.009 (0.003) 0.009 (0.003) 0.008 (0.002) 0.007 (0.002)
PLS + HT BIC 0.004 (0.002) 0.005 (0.002) 0.005 (0.002) 0.005 (0.002)
GIC 0.005 (0.002) 0.005 (0.002) 0.005 (0.002) 0.004 (0.002)
BIC 0.005 (0.002) 0.005 (0.002) 0.004 (0.002) 0.005 (0.002)

The one-step-ahead prediction error (PE), which is defined as

is also investigated. Table 3 shows the means and standard errors of PE for based on 500 simulations. The PE increases as the autoregressive coefficient increases, but decreases as the sample size increases. From Table 3, we see that PLS + HT works better than the existing procedure, and there is almost no difference in the PE across the information criteria. The models evaluated by BIC perform well for large sample sizes.



PLS BIC 0.136 (0.115) 0.150 (0.116) 0.140 (0.120) 0.158 (0.117)
GIC 0.111 (0.088) 0.127 (0.097) 0.134 (0.098) 0.149 (0.122)
BIC 0.111 (0.088) 0.127 (0.097) 0.131 (0.095) 0.149 (0.122)
PLS + HT BIC 0.121 (0.096) 0.106 (0.086) 0.119 (0.092) 0.139 (0.112)
GIC 0.094 (0.071) 0.118 (0.093) 0.126 (0.094) 0.139 (0.112)
BIC 0.095 (0.071) 0.116 (0.092) 0.124 (0.093) 0.139 (0.112)

PLS BIC 0.101 (0.086) 0.105 (0.082) 0.130 (0.112) 0.145 (0.124)
GIC 0.090 (0.070) 0.101 (0.078) 0.105 (0.082) 0.137 (0.109)
BIC 0.091 (0.070) 0.096 (0.072) 0.105 (0.092) 0.137 (0.109)
PLS + HT BIC 0.097 (0.082) 0.096 (0.078) 0.098 (0.088) 0.140 (0.162)
GIC 0.084 (0.063) 0.091 (0.071) 0.103 (0.081) 0.130 (0.111)
BIC 0.084 (0.063) 0.091 (0.071) 0.103 (0.081) 0.130 (0.111)

PLS BIC 0.091 (0.070) 0.105 (0.081) 0.114 (0.087) 0.174 (0.129)
GIC 0.087 (0.068) 0.095 (0.072) 0.102 (0.077) 0.139 (0.114)
BIC 0.086 (0.068) 0.095 (0.072) 0.102 (0.077) 0.139 (0.114)
PLS + HT BIC 0.084 (0.066) 0.090 (0.069) 0.091 (0.068) 0.123 (0.096)
GIC 0.083 (0.063) 0.090 (0.069) 0.098 (0.076) 0.126 (0.100)
BIC 0.082 (0.063) 0.092 (0.070) 0.098 (0.076) 0.126 (0.100)

The means and standard deviations of the number and deviation of the basis functions are shown in Tables 4 and 5. The BIC gives a smaller number of basis functions than the other information criteria. The models evaluated by BIC also give smaller standard deviations of the number of basis functions. The models determined by BIC tend to choose larger deviations of the basis functions than those based on BIC and GIC. The number of basis functions increases gradually as the sample size or increases. From Table 4, it appears that the number of basis functions does not depend on the sample size . From Table 5, it also appears that the deviations of the basis functions do not depend on the sample size or .



BIC 7.87 (1.38) 8.85 (1.13) 8.76 (1.24) 8.82 (1.17)
GIC 8.06 (1.44) 8.75 (1.27) 8.84 (1.20) 8.84 (1.24)
BIC 6.02 (0.14) 6.15 (0.53) 6.17 (0.37) 6.21 (0.48)

BIC 7.98 (1.31) 8.83 (1.17) 8.71 (1.30) 8.71 (1.30)
GIC 8.01 (1.37) 8.91 (1.18) 8.67 (1.29) 8.95 (1.20)
BIC 6.20 (0.50) 6.22 (0.44) 6.31 (0.60) 6.35 (0.66)

BIC 7.93 (1.33) 8.18 (1.44) 8.25 (1.48) 8.20 (1.39)
GIC 8.11 (1.35) 8.11 (1.52) 8.39 (1.41) 8.55 (1.37)
BIC 6.15 (0.66) 6.22 (0.73) 6.46 (1.03) 6.93 (1.43)



BIC 0.10 (0.02) 0.10 (0.02) 0.10 (0.02) 0.10 (0.02)
GIC 0.11 (0.03) 0.10 (0.03) 0.10 (0.03) 0.10 (0.03)
BIC 0.14 (0.02) 0.18 (0.03) 0.16 (0.03) 0.16 (0.03)

BIC 0.10 (0.02) 0.09 (0.02) 0.09 (0.02) 0.09 (0.03)
GIC 0.11 (0.03) 0.09 (0.02) 0.10 (0.03) 0.09 (0.02)
BIC 0.15 (0.02) 0.15 (0.04) 0.15 (0.03) 0.13 (0.03)

BIC 0.10 (0.02) 0.11 (0.03) 0.11 (0.03) 0.10 (0.03)
GIC 0.11 (0.03) 0.12 (0.04) 0.11 (0.04) 0.10 (0.03)
BIC 0.15 (0.03) 0.17 (0.02) 0.16 (0.03) 0.14 (0.04)

We now compare the performance of our procedure with existing procedures in terms of the reduction of model complexity. Table 6 shows the means and standard deviations of the number of parameters excluded ( or ) by the proposed procedure. The results indicate that the proposed procedure reduces model complexity. From Table 6, it appears that the models determined by BIC tend to exclude fewer parameters and give smaller standard deviations for the number of parameters excluded. This is due to the selection of a smaller number of basis functions compared to the selection based on the other criteria (see Table 4). There is almost no dependence of the number of excluded parameters on . The models evaluated by BIC give a larger number of excluded parameters as the sample size increases. On the other hand, the models evaluated by BIC or GIC give a smaller number of excluded parameters as the sample size increases.



PLS + HT BIC 7.715 (0.915) 6.910 (1.087) 7.300 (1.364) 6.888 (1.343)
GIC 8.345 (1.568) 7.404 (1.850) 7.620 (1.715) 7.337 (1.598)
BIC 4.950 (0.419) 5.020 (0.502) 5.070 (0.492) 5.092 (0.421)

PLS + HT BIC 7.506 (0.784) 7.334 (1.251) 5.698 (0.772) 5.460 (0.700)
GIC 7.916 (1.239) 7.718 (1.435) 5.906 (0.919) 5.740 (0.866)
BIC 4.990 (0.184) 5.076 (0.332) 5.092 (0.316) 5.086 (0.327)

PLS + HT BIC 7.062 (0.723) 5.594 (0.744) 5.544 (0.736) 5.460 (0.702)
GIC 7.450 (1.116) 5.764 (0.847) 5.656 (0.864) 5.586 (0.802)
BIC 5.008 (0.109) 5.152 (0.359) 5.162 (0.385) 5.086 (0.356)

Table 7 shows the means and standard deviations of the number of basis functions excluded as by the proposed procedure. From Table 7 it appears that the models evaluated by BIC tend to exclude fewer basis functions than those based on GIC and BIC. Again this is due to the selection of a smaller number of basis functions (see Table 4). The models determined by BIC also give smaller standard deviations of the number of basis functions than the other criteria. There is almost no dependence of the number of basis functions on .



BIC 3.52 (2.29) 4.21 (2.23) 3.98 (1.60) 3.96 (1.49)
GIC 3.74 (2.15) 4.40 (1.90) 4.18 (1.51) 4.26 (1.46)
BIC 1.03 (0.22) 1.20 (0.60) 1.28 (0.54) 1.24 (0.49)

BIC 3.35 (2.19) 4.49 (2.04) 3.78 (1.58) 3.95 (1.60)
GIC 3.67 (2.15) 4.62 (1.84) 3.91 (1.53) 4.30 (1.60)
BIC 1.06 (0.31) 1.78 (0.96) 1.31 (0.60) 1.36 (0.66)

BIC 3.64 (2.13) 3.26 (1.71) 3.26 (1.71) 3.61 (1.60)
GIC 3.86 (2.02) 3.43 (1.81) 3.65 (1.69) 3.89 (1.76)
BIC 1.12 (0.34) 1.23 (0.75) 1.46 (1.03) 1.93 (1.44)

Table 8 shows the means and standard deviations of the number of basis functions excluded as by the proposed procedure. The number of coefficients whose true values are zero is five. From Table 8 we see that the proposed procedure gives values close to five. The models determined by BIC give results closer to five and smaller standard deviations of the number of basis functions than the other criteria. The number of basis functions approaches five as the sample size increases. The standard deviations of the number of basis functions excluded decrease as increases. These results indicate that the proposed procedure reduces model complexity.



BIC 4.14 (1.60) 4.15 (1.63) 4.69 (0.92) 4.79 (0.74)
GIC 4.28 (1.47) 4.35 (1.41) 4.70 (0.89) 4.72 (0.87)
BIC 4.97 (0.21) 4.95 (0.26) 4.97 (0.23) 4.99 (0.14)

BIC 4.15 (1.59) 4.17 (1.55) 4.72 (0.92) 4.77 (0.87)
GIC 4.22 (1.51) 4.29 (1.47) 4.77 (0.84) 4.65 (1.03)
BIC 4.98 (0.14) 4.95 (0.26) 5.00 (0.04) 5.00 (0.06)

BIC 4.14 (1.59) 4.78 (0.82) 4.78 (0.82) 4.72 (0.86)
GIC 4.16 (1.55) 4.68 (1.01) 4.75 (0.88) 4.66 (1.04)
BIC 4.99 (0.11) 4.99 (0.15) 5.00 (0.00) 5.00 (0.04)

Table 9 shows the percentage of times that various were estimated as being zero. As for the parameters , since these parameters were not estimated as zero in any simulation, we omit the corresponding results from Table 9. The results indicate that the proposed procedure excludes insignificant variables and selects significant variables. It can be seen that the proposed procedure gives a better performance as the sample size increases and that BIC is superior to the other criteria.



BIC 0.84 0.83 0.82 0.83 0.82
GIC 0.87 0.85 0.87 0.86 0.83
BIC 1.00 0.99 0.99 1.00 1.00

BIC 0.83 0.83 0.84 0.83 0.83
GIC 0.86 0.86 0.86 0.89 0.87
BIC 0.99 0.99 0.99 0.99 0.98

BIC 0.95 0.93 0.94 0.94 0.93
GIC 0.94 0.94 0.93 0.94 0.95
BIC 0.99 0.99 1.00 1.00 0.99

BIC 0.96 0.96 0.95 0.95 0.97
GIC 0.94 0.93 0.95 0.94 0.96
BIC 1.00 1.00 1.00 1.00 1.00

BIC 0.83 0.83 0.84 0.82 0.82
GIC 0.85 0.84 0.85 0.84 0.84
BIC 1.00 0.99 1.00 0.99 1.00

BIC 0.83 0.84 0.83 0.82 0.85
GIC 0.87 0.85 0.88 0.85 0.85
BIC 0.99 0.99 0.99 0.99 1.00

BIC 0.95 0.93 0.95 0.95 0.94
GIC 0.96 0.95 0.94 0.96 0.95
BIC 1.00 1.00 1.00 1.00 1.00

BIC 0.96 0.95 0.95 0.95 0.95
GIC 0.93 0.94 0.94 0.92 0.92
BIC 1.00 1.00 1.00 1.00 1.00

BIC 0.92 0.93 0.92 0.91 0.94
GIC 0.94 0.94 0.94 0.95 0.95
BIC 1.00 1.00 1.00 1.00 0.99

BIC 0.95 0.94 0.94 0.95 0.93
GIC 0.94 0.94 0.94 0.94 0.93
BIC 1.00 1.00 1.00 1.00 1.00

BIC 0.97 0.95 0.95 0.96 0.95
GIC 0.96 0.95 0.95 0.94 0.95
BIC 1.00 1.00 1.00 1.00 1.00

BIC 0.96 0.94 0.95 0.95 0.93
GIC 0.93 0.93 0.93 0.94 0.94
BIC 1.00 1.00 1.00 1.00 1.00

4. Real Data Analysis

In this section we present the results of analyzing real time series data using the proposed procedure. We use two data sets in this study: spirit consumption in the United Kingdom and data on the association between fertility and female employment in Japan.

4.1. The Spirit Consumption Data in the United Kingdom

We now illustrate our theory through an application to spirit consumption data for the United Kingdom from 1870 to 1938. The data-set can be found in Fuller [26, page 523]. In this data-set, the dependent variable is the logarithm of the annual per capita consumption of spirits. The explanatory variables and are the logarithms of per capita income and the price of spirits, respectively, and . Figure 2 shows that there is a change-point at the start of the First World War (1914). Therefore, we prepare a variable : from 1870 to 1914 and from 1915 to 1933. From this we derive another three explanatory variables: , , and . We consider the semiparametric model: