Abstract

We propose a class of single-index ARCH(p)-M models and investigate estimators of their parametric and nonparametric components. We first estimate the nonparametric component using the local linear smoothing technique and then construct an estimator of the parametric component by the profile quasi-maximum likelihood method. Under regularity conditions, the asymptotic properties of both estimators are established.

1. Introduction

As an extension of the autoregressive conditional heteroskedastic (ARCH) model, Engle et al. [1] introduced the conditional variance into the conditional mean return equation and proposed the generalized ARCH-in-mean (GARCH-M) model. In the past decades, the (G)ARCH-M model has been widely studied by many researchers; it is useful for describing the relationship between risk and return in finance. Consider a special ARCH-M conditional mean of the form $y_t = \lambda h_t + \varepsilon_t$, where $\{y_t\}$ denotes a return series, $\{\varepsilon_t\}$ denotes an error series, and $\{h_t\}$ denotes a conditional return volatility series. The above equality gives a straightforward linear relationship between risk and return: high risk leads to high return. The volatility coefficient $\lambda$ can be interpreted as the relative risk aversion parameter in Das and Sarkar [2] and as the price of volatility in Chou et al. [3]. Many empirical studies have been conducted based on the above conditional mean. However, some researchers found that $\lambda$ is time varying rather than constant. To capture the variation of the volatility coefficient, Chou et al. [3] proposed a time-varying parameter GARCH-M model. Zhang [4] introduced a functional coefficient GARCH-M model in which the volatility coefficient was treated as an unknown univariate function. Zhang's [4] model enables us to study the relationship between risk aversion and a certain variable. However, Chou et al. [3] suggested that the risk aversion could be simultaneously influenced by several macroeconomic variables, such as the inflation rate, the interest rate, and the consumer price index. Consequently, it is reasonable to regard the volatility coefficient as a function of several economic variables. In this paper, we will consider the following single-index ARCH(p)-M model:
$$y_t = g(\theta^\top Z_t)\,h_t + \varepsilon_t, \qquad \varepsilon_t = \eta_t h_t, \qquad h_t^2 = \alpha_0 + \alpha_1 y_{t-1}^2 + \cdots + \alpha_p y_{t-p}^2, \tag{1}$$
where $\theta = (\theta_1, \ldots, \theta_d)^\top$ and $\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_p)^\top$ are $d \times 1$ and $(p+1) \times 1$ vectors of unknown parameters, respectively. The risk aversion $g(\cdot)$ is an unknown smooth function of the index $\theta^\top Z_t$ formed from macroeconomic variables $Z_t$. $\{\eta_t\}$ is an independent and identically distributed sequence; $\eta_t$ is independent of $Z_s$ for any $s$ and is independent of $\mathcal{F}_{t-1}$. Here $h_t^2$ is the conditional variance and $\mathcal{F}_{t-1}$ denotes the sigma-field generated by the information available up to time $t-1$. The superscript $\top$ denotes the transpose of a matrix or a vector.

In (1), motivated by Xia and Li [5] and Xue and Pang [6], we assume a single-index form for the volatility coefficient to avoid the so-called curse of dimensionality. Namely, we treat the volatility coefficient as an unknown smooth function of a linear combination of certain explanatory variables. Following Ling [7] and Christensen et al. [8], we make a modification to the conditional variance $h_t^2$: we use the squared time-lagged observable return series in the conditional variance equation rather than the squared unobservable error series. Such a modification is helpful for model estimation in that $h_t^2$ is deterministic, given the observed returns, once the parameter $\alpha$ is known. For simplicity and estimability of the model, we consider an ARCH(p) process for the conditional variance $h_t^2$. When $h_t$ is observable, model (1) reduces to the single-index coefficient regression models introduced by Xia and Li [5]. Model (1) extends Zhang's [4] model by generalizing the univariate coefficient function to a multivariate one.
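To make the setup concrete, the following sketch simulates a path from a model of form (1). It is an illustration under explicit assumptions rather than part of the theory: the choice $g = \tanh$, the standard normal innovations and covariates, and all names (simulate, theta0, alpha0) are ours.

import numpy as np

# Minimal simulation of a single-index ARCH(p)-M path of form (1); the
# variance is built from squared lagged *observed* returns, as in the text.
rng = np.random.default_rng(0)

def simulate(n, theta, alpha, g, burn=200):
    p = len(alpha) - 1
    y = np.zeros(n + burn + p)
    Z = rng.standard_normal((n + burn + p, len(theta)))  # index covariates Z_t
    for t in range(p, n + burn + p):
        h = np.sqrt(alpha[0] + alpha[1:] @ (y[t - p:t][::-1] ** 2))  # ARCH(p)
        y[t] = g(Z[t] @ theta) * h + rng.standard_normal() * h       # eq. (1)
    return y[-n:], Z[-n:]

theta0 = np.array([0.8, 0.6])        # unit-norm index direction, theta_1 > 0
alpha0 = np.array([0.1, 0.3, 0.2])   # ARCH(2) coefficients (alpha_0, alpha_1, alpha_2)
y, Z = simulate(1000, theta0, alpha0, g=np.tanh)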

The paper is arranged as follows. In Section 2, we describe the estimation methods for the nonparametric and parametric components and study their asymptotic properties. The concluding remark is given in Section 3. All technical details are collected in the Appendix.

2. Methodology and Results

In this paper, we consider model (1). For the sake of model identifiability, the commonly used assumptions are that $E(\eta_t) = 0$ and $E(\eta_t^2) = 1$, $\|\theta\| = 1$, and $\theta_1 > 0$.

When we analyze the asymptotic properties of the estimator of $\theta$, we need to calculate derivatives with respect to $\theta$ at the true point $\theta_0$. However, $\|\theta\| = 1$ means that the true value of $\theta$ lies on the boundary of the unit sphere, so differentiability with respect to $\theta$ fails at the point $\theta_0$. Hence $\theta$ needs to be reparameterized. We apply the "remove-one-component" method, introduced by Yu and Ruppert [9], to the parameter $\theta$. By removing $\theta_1$, let $\phi = (\theta_2, \ldots, \theta_d)^\top$ be a $(d-1)$-dimensional parameter vector; then $\theta = \theta(\phi) = (\sqrt{1 - \|\phi\|^2}, \phi^\top)^\top$. The new parameter $\phi$ satisfies the constraint $\|\phi\| < 1$, and $\theta$ is differentiable with respect to $\phi$. The Jacobian matrix of $\theta$ with respect to $\phi$ is
$$J_\phi = \frac{\partial \theta}{\partial \phi^\top} = \begin{pmatrix} -\phi^\top / \sqrt{1 - \|\phi\|^2} \\ I_{d-1} \end{pmatrix},$$
where $I_{d-1}$ is a $(d-1) \times (d-1)$ dimensional unit matrix.
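As a minimal illustration of this reparameterization, take $d = 2$, so that $\phi$ is a scalar:
$$\theta(\phi) = \begin{pmatrix} \sqrt{1 - \phi^2} \\ \phi \end{pmatrix}, \qquad J_\phi = \begin{pmatrix} -\phi / \sqrt{1 - \phi^2} \\ 1 \end{pmatrix},$$
which is well defined for every $|\phi| < 1$ even though $\theta(\phi)$ itself stays on the unit circle.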

In this way, we collect the unknown parameters into $\zeta = (\phi^\top, \alpha^\top)^\top$ with parameter space $\mathcal{Z} = \{\zeta = (\phi^\top, \alpha^\top)^\top : \|\phi\| < 1,\ \alpha_0 > 0,\ \alpha_i \geq 0,\ 1 \leq i \leq p\}$. Let $\zeta_0 = (\phi_0^\top, \alpha_0^\top)^\top$, an inner point of $\mathcal{Z}$, denote the true parameter point.

2.1. Estimation Method

In this section, we give details of the model estimation. We first estimate the nonparametric component by the local linear smoothing technique and then construct an estimate of the parametric component by the profile quasi-maximum likelihood method.

For any fixed $u \in \mathcal{U}$, where $\mathcal{U}$ is a compact support of the density function of the index variable, we have Let When the true values of the parameters are known, say , we know . Using local linear smoothing, we obtain the estimates of $g(u)$ and $g'(u)$, say and . Then we define the estimate of the nonparametric component in (6), given by where Let $K(\cdot)$ denote a kernel density function and $b$ denote the bandwidth, satisfying $b \to 0$ as $n \to \infty$.
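As a concrete illustration of this first step (continuing the simulation sketch above), the code below computes the local linear estimates at a point by kernel-weighted least squares: near $u_0$ we approximate $y_t \approx [a + c(u_t - u_0)]h_t$ and minimize over $(a, c)$. The Gaussian kernel and all names are our own assumptions for the example.

import numpy as np

def kern(v):
    # Gaussian kernel (an illustrative choice)
    return np.exp(-0.5 * v ** 2) / np.sqrt(2.0 * np.pi)

def local_linear_g(u0, u, y, h, bw):
    # local linear fit of y_t ~ [a + c (u_t - u0)] h_t with kernel weights
    w = kern((u - u0) / bw)
    X = np.column_stack([h, h * (u - u0)])   # regressors scaled by volatility
    WX = w[:, None] * X
    a, c = np.linalg.solve(X.T @ WX, WX.T @ y)
    return a, c                              # (g_hat(u0), g_hat'(u0))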

Based on (7), we employ the quasi-maximum likelihood approach to estimate the parametric component. For each , let where and $w(\cdot)$ is a nonnegative weight function whose compact support is contained in $\mathcal{U}$. We can then get the estimate of the parametric component; that is,
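The sketch below assembles the two steps into a profile quasi-maximum likelihood criterion and minimizes it numerically, reusing local_linear_g from the previous sketch. The Gaussian quasi-likelihood, the crude absolute-value device keeping alpha nonnegative, and the Nelder-Mead call are illustrative assumptions, not the paper's prescription.

import numpy as np
from scipy.optimize import minimize

def theta_from_phi(phi):
    # remove-one-component map: theta_1 = sqrt(1 - ||phi||^2) >= 0
    return np.concatenate(([np.sqrt(max(1.0 - phi @ phi, 0.0))], phi))

def neg_profile_qml(zeta, y, Z, p, bw):
    d = Z.shape[1]
    phi, alpha = zeta[:d - 1], np.abs(zeta[d - 1:])
    theta = theta_from_phi(phi)
    h2 = np.array([alpha[0] + alpha[1:] @ (y[t - p:t][::-1] ** 2)
                   for t in range(p, len(y))])
    h, u = np.sqrt(h2), Z[p:] @ theta
    g_hat = np.array([local_linear_g(u0, u, y[p:], h, bw)[0] for u0 in u])
    resid = y[p:] - g_hat * h
    # Gaussian quasi-log-likelihood (up to constants), negated for minimization
    return np.sum(np.log(h2) + resid ** 2 / h2)

zeta_init = np.array([0.5, 0.2, 0.2, 0.2])   # (phi, alpha_0, alpha_1, alpha_2)
fit = minimize(neg_profile_qml, zeta_init, args=(y, Z, 2, 0.3),
               method="Nelder-Mead")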

Remark 1. (a) The weight function is substituted by , in that (b) From the expression of , we can get that , where denotes a zero matrix.

2.2. Asymptotic Properties

In order to obtain the asymptotic properties of the estimators of the parametric and nonparametric components, we first introduce the following regularity conditions.

(A1) The probability density function of the random variable is bounded and satisfies , where and is a compact support set of the probability density function of . and have two bounded and continuous derivatives.

(A2) The parameter space is a bounded metric space, satisfying .

(A3) For any , , , and are uniformly bounded and have continuous second-order derivatives satisfying a Lipschitz condition of order 1.

(A4) For any , , , and are uniformly bounded.

(A5) For each , , , is bounded away from infinity.

(A6) The process is stationary and ergodic. The process is $\alpha$-mixing (strongly mixing) with the mixing coefficients satisfying the condition for some positive constants , and . satisfies for some .

(A7) The function defined in (9) is continuous and has a unique minimum .

(A8) The kernel is a bounded and symmetric probability density function with a bounded support and derivative and satisfies

Remark 2. Condition (A1) means that the probability density function of the random variable is positive, which ensures that the denominators of the nonparametric estimators are bounded away from 0 with high probability. Conditions (A2) and (A7) have been adopted analogously by Yang [10]. Conditions (A3)–(A6) are essential for the asymptotic properties of the estimators; condition (A6) is a standard assumption in the time series literature. Condition (A8) is a commonly assumed smoothness condition for the consistency of kernel-based estimators.

In Section 2.1, we described the two-step estimation method and obtained the final estimate of the nonparametric component. We now establish the asymptotic results for the estimates of the parametric and nonparametric components, together with the related rates of convergence.

Theorem 3. If conditions (A1)–(A8) are satisfied, then as $n \to \infty$ and $b \to 0$ such that $nb \to \infty$, it holds that

Theorem 4. Under the conditions of Theorem 3, it holds that

Theorem 5. Under the conditions of Theorem 3, consistency of our estimates holds: $\hat{\zeta} \stackrel{P}{\longrightarrow} \zeta_0$ and $\hat{g}(u) \stackrel{P}{\longrightarrow} g(u)$.

Theorem 6. Under the conditions of Theorem 3, if and , then $\sqrt{n}\,(\hat{\zeta} - \zeta_0)$ has an asymptotic normal distribution with mean zero and covariance matrix , where and are given in (A.101) and (A.105) in the Appendix, respectively.
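Quasi-likelihood limit theory of this type typically yields a sandwich-form covariance; writing $\Sigma$ for the Hessian-type limit of Lemma A.11 and $\Omega$ for the variance of the normalized score, a sketch of the expected shape is
$$\sqrt{n}\,(\hat{\zeta} - \zeta_0) \stackrel{d}{\longrightarrow} N\bigl(0,\ \Sigma^{-1} \Omega \Sigma^{-1}\bigr),$$
with the precise matrices being those defined in (A.101) and (A.105).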

3. Concluding Remark

In this paper, we have considered a single-index ARCH(p)-M model and applied a profile likelihood approach to investigate estimators of the parametric and nonparametric components. We first estimated the nonparametric component using the local linear smoothing technique and then constructed an estimator of the parametric component by the profile quasi-maximum likelihood method. Under some regularity conditions, the asymptotic properties, namely consistency and asymptotic normality with the related rates of convergence, were derived.

Appendix

Throughout the Appendix, $C$ denotes a generic constant which may take different values at different places. Let , . Proofs of our theorems are based on the following lemmas. First of all, we introduce the following notation:

Lemma A.1. Under the conditions of Theorem 3, one has

Proof. Note that where .
By conditions (A1) and (A8), as , we have
Next, we show that By condition (A6) and the law of large numbers, we have .
Note For , we have For , where lies between and . Thus, Combining (A.8) with (A.10), we have , where Note that On the compact support set , it is easy to obtain , implying .
According to Lemma 1 and Theorem 1 of Andrews [11], together with Lemma 1 of Xue and Pang [6], we can obtain (A.5).
Combining (A.4) with (A.5), we complete the proof of this lemma.

Lemma A.2. Under the conditions of Theorem 3, one has

Proof. Note that Then, we have According to Lemma A.1, it is easy to obtain the result of this lemma.

Lemma A.3. Under the conditions of Theorem 3, one has

Proof. Note that where By conditions (A1) and (A3), as , we have
Next, we show that By condition (A6) and the law of large numbers, we have .
Note where Arguing as in the proof of Lemma A.1, we can obtain , implying .
According to Lemma 1 and Theorem 1 of Andrews [11], together with Lemma 1 of Xue and Pang [6], we can obtain (A.20).
Combining (A.19) with (A.20), we prove the first result of this lemma. By mimicking the above proof, we can show that the other result holds.

Lemma A.4. Under the conditions of Theorem 3, one has

Proof. The proof of Lemma A.4 is the same as that of Lemma A.2, so we omit it.

Lemma A.5. Under the conditions of Theorem 3, one has

Proof. By conditions (A1) and (A8), we know , .
According to Lemma A.2, for all , it follows uniformly that . Thus, for all , it follows uniformly that .

Proof of Theorem 3. Note that Then, we have With conditions (A1), (A3), and (A8), we can obtain the first result of this theorem by Lemmas A.4 and A.5. The other result can be proved similarly.

Proof of Theorem 4. Note that
Then, we have By condition (A3), applying Theorem 3, we can complete the proof of Theorem 4.

Proof of Theorem 5. By the law of large numbers, we have .
It is easy to see that , where lies between and .
Note that Then, we have .
We can derive that Similarly, we have
Following (A.29), we have where By condition (A5), we can obtain , implying .
According to Lemma 1 and Theorem 1 of Andrews [11], we can obtain
Note that ; we have Then, we can get that Note that and . Invoking Theorem 3, condition (A5), and the law of large numbers for and , we can show that With (A.35) and (A.38), we have According to Theorem 2.12 and Lemma 14.3 of Kosorok [12], we can obtain that $\hat{\zeta}$ converges in probability to the true parameter $\zeta_0$.
Next, we show the consistency of the nonparametric estimate. Indeed, we have

To prove Theorem 6, we make a Taylor expansion at the true parameter point $\zeta_0$, where the intermediate point lies between $\hat{\zeta}$ and $\zeta_0$.

Thus one has We need the following lemmas to deal with and , respectively.

Lemma A.6. Under the conditions of Theorem 3, it holds that where

Proof. Note that By the law of large numbers and the condition that the process is stationary and ergodic, we can obtain that converges in probability to 0. Then, according to Lemma 1 and Theorem 1 of Andrews [11], together with Lemma 1 of Xue and Pang [6], we can prove that the result of this lemma holds, as in the proof of Lemma A.1.

Lemma A.7. If conditions (A1)–(A8) are satisfied, then as $n \to \infty$ and $b \to 0$ such that $nb \to \infty$, it holds that where

Proof. Applying the fact that , we can get that
Note that By Lemmas A.1 and A.6, we have
Then, we can obtain that
Thus, it can be shown that By mimicking the above proof, we can show that the other result of this lemma holds.

Lemma A.8. Under the conditions of Theorem 3, it holds that where .

Proof. Here we only prove the first result of this lemma; by the same token, we can obtain the other result.
For any given point , we use the local approximation to approximate . The estimators and are obtained by solving the kernel estimating equations with respect to , :
The first equation of (A.54) is Taking derivatives with respect to on both sides, direct calculation leads to where and . Then, we have
The proof of Lemma A.8 proceeds in the following three steps.
Step 1. Analysis of the term .
First we analyze , which can be decomposed as follows:
Note that is the Nadaraya-Watson estimate for , so it can be shown that
Combining (A.59) and (A.60) with Lemma A.7, we have
Step 2. Analysis of the term .
By condition (A1) and Lemma A.1, it can be shown that
According to the symmetric assumption for the kernel function, we have
It is not difficult to get that which implies that
Step 3. Analysis of the term .
We will deal with , which can be rewritten as follows: where
In what follows, we prove that each term is of order uniformly.
By the law of large numbers, we have By condition (A3), we can get that According to Lemma 1 and Theorem 1 of Andrews [11], we can show that
Denote Furthermore, we can get that which implies .
For , the Abel inequality implies
For , we have
For , we have
For , we have
Consequently, we have shown that Together with (A.62), we can obtain that
For (A.58), we can get that Combining (A.61), (A.65), and (A.78) proves the first result.

Lemma A.9. Under the conditions of Theorem 3, it holds that where

Proof. Let , . Note that where
Combining Theorem 3 and Lemma A.8, we can obtain that Consequently, the desired result follows directly from (A.84).

Lemma A.10. Under the conditions of Theorem 3, if and , it holds that where

Proof. Note that where
It is easy to see that Invoking Theorem 4, Lemma A.9, and the law of large numbers for , together with the facts that and , we can prove that the last three terms on the right-hand side of the last equation are of order if and hold.
Thus, we can obtain that
To analyze , we will show the asymptotic representation for . By the fact that , and applying a similar argument to that used in Lemma A.7, we can obtain that where , , and .
Recalling definitions (6) and (7), we have
Finally, we have where is given in (A.86). The third term above is of order by applying the law of large numbers to and .

Lemma A.11. Under the conditions of Theorem 3, it holds that where is given in (A.101).

Proof. Note that where
Thus, we have Based on the fact that converges in probability to , we can obtain that Then, by the law of large numbers, we have Therefore, we get where

Proof of Theorem 6. We have shown that According to Lemmas A.10 and A.11, we can obtain the following asymptotic expansion: To analyze the asymptotic normality of , we only need to show the asymptotic normality of .
Recalling the definitions of and , we have and , which, together with the assumptions on , imply .
We have to verify that for some , , which can be obtained from condition (A.20). Then, by the central limit theorem for strongly mixing sequences (see, e.g., Bosq [13], Theorem 1.7), we can show that where
Finally, by Slutsky's theorem, we have .

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Yuan Li's work is financially supported by the National Natural Science Foundation of China (Grant no. 11271095) and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant no. 20124410110002).