Advances in Decision Sciences
Volume 2012, Article ID 515494, 11 pages
http://dx.doi.org/10.1155/2012/515494
Research Article

Asymptotic Optimality of Estimating Function Estimator for CHARN Model

Tomoyuki Amano

Faculty of Economics, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan

Received 15 February 2012; Accepted 9 April 2012

Academic Editor: Hiroshi Shiraishi

Copyright © 2012 Tomoyuki Amano. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The CHARN model is a famous and important model in finance; it includes many financial time series models and can be used to describe the return processes of assets. One of the most fundamental estimators for financial time series models is the conditional least squares (CL) estimator. However, it was recently shown that the optimal estimating function estimator (G estimator) is better than the CL estimator for some time series models in the sense of efficiency. In this paper, we examine the efficiencies of the CL and G estimators for the CHARN model and derive the condition under which the G estimator is asymptotically optimal.

1. Introduction

The conditional least squares (CL) estimator is one of the most fundamental estimators for financial time series models. It has two advantages: it can be calculated with ease, and it does not require knowledge of the innovation process (i.e., the error term). Hence this convenient estimator has been widely used for many financial time series models. However, Amano and Taniguchi [1] proved that it is not good in the sense of efficiency for the ARCH model, which is the most famous financial time series model.

The estimating function estimator was introduced by Godambe ([2, 3]) and Hansen [4]. Recently, Chandra and Taniguchi [5] constructed the optimal estimating function estimator (G estimator), based on Godambe's asymptotically optimal estimating function, for the parameters of the random coefficient autoregressive (RCA) model, which was introduced to describe occasional sharp spikes exhibited in many fields, and of the ARCH model. In Chandra and Taniguchi [5], it was shown by simulation that the G estimator is better than the CL estimator. Furthermore, Amano [6] applied the CL and G estimators to some important time series models (RCA, GARCH, and nonlinear AR models) and proved theoretically that the G estimator is better than the CL estimator in the sense of efficiency. Amano [6] also derived conditions under which the G estimator becomes asymptotically optimal; these conditions are natural and not overly strict.

However, in Amano [6], the G estimator was not applied to the conditional heteroscedastic autoregressive nonlinear (CHARN) model. The CHARN model was proposed by Härdle and Tsybakov [7] and Härdle et al. [8]; it includes many financial time series models and is widely used in finance. Kanai et al. [9] applied the G estimator to the CHARN model and proved its asymptotic normality. However, Kanai et al. [9] did not compare the efficiencies of the CL and G estimators, nor did they discuss the asymptotic optimality of the G estimator theoretically. Since the CHARN model is an important and rich model, which includes many financial time series models and can be used to describe the return processes of assets, further investigation of the CL and G estimators for this model is needed. Hence, in this paper, we compare the efficiencies of the CL and G estimators and investigate the asymptotic optimality of the G estimator for this model.

This paper is organized as follows. Section 2 gives the definitions of the CL and G estimators. In Section 3, the CL and G estimators are applied to the CHARN model, and the efficiencies of these estimators are compared. Furthermore, we derive the condition for asymptotic optimality of the G estimator. We also compare the mean squared errors of the CL and G estimators by simulation in Section 4. Proofs of the theorems are relegated to Section 5. Throughout this paper, the norm of a matrix is taken to be the sum of the absolute values of all of its entries.

2. Definitions of CL and G Estimators

One of the most fundamental estimators for the parameters of financial time series models is the conditional least squares (CL) estimator introduced by Tjøstheim [10], and it has been widely used in finance. The CL estimator for a time series model is obtained by minimizing a penalty function, namely the sum of squared deviations of each observation from its conditional expectation given the sigma-algebra generated by the past observations up to an appropriate finite lag (e.g., if the process follows a pth-order nonlinear autoregressive model, we can take the lag to be p). The CL estimator generally has a simple expression. However, it is not asymptotically optimal in general (see Amano and Taniguchi [1]).
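
For concreteness, a standard form of the CL criterion, in the spirit of Tjøstheim [10], is sketched below in an assumed notation ($X_t$ for the observed process, $\mathcal{F}_{t-1}$ for the sigma-algebra generated by the past observations, and $\hat{\theta}_{CL}$ for the CL estimator):

$$
Q_n(\theta) = \sum_{t=p+1}^{n} \bigl\{ X_t - E_{\theta}\!\left[ X_t \mid \mathcal{F}_{t-1} \right] \bigr\}^{2},
\qquad
\hat{\theta}_{CL} = \arg\min_{\theta} Q_n(\theta).
$$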

Hence, Chandra and Taniguchi [5] constructed the G estimator, based on Godambe's asymptotically optimal estimating function, for the RCA and ARCH models. For the definition of the G estimator, we prepare the following estimating function. Let the observations be generated by a stochastic process depending on a finite-dimensional parameter. The estimating function is then a sum, over time, of martingale differences of the process, each multiplied by a weight vector of the same dimension as the parameter which is measurable with respect to the sigma-field generated by the past observations. The estimating function estimator for the parameter is defined as a solution of the corresponding estimating equation.
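
A sketch of a Godambe-type estimating function and the associated estimator, in an assumed notation (with $h_t$ the martingale difference, $a_{t-1}$ the weight, and $G_n$ the estimating function), is

$$
G_n(\theta) = \sum_{t=1}^{n} a_{t-1}(\theta)\, h_t(\theta),
\qquad
h_t(\theta) = X_t - E_{\theta}\!\left[ X_t \mid \mathcal{F}_{t-1} \right],
$$

where $a_{t-1}(\theta)$ is an $\mathcal{F}_{t-1}$-measurable vector of the same dimension as $\theta$; the estimating function estimator $\hat{\theta}_{G}$ is then defined as a solution of $G_n(\hat{\theta}_{G}) = 0$.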

Chandra and Taniguchi [5] derived the asymptotic variance (2.4) of this estimating function estimator and gave the following lemma by extending the result of Godambe [3].

Lemma 2.1. The asymptotic variance (2.4) is minimized when the weight vectors are chosen as the conditional expectations of the derivatives of the martingale differences divided by their conditional variances.
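
In the assumed notation of the sketch above, Godambe's optimal weight is usually written as

$$
a_{t-1}^{*}(\theta)
= E\!\left[ \frac{\partial h_t(\theta)}{\partial \theta} \,\Big|\, \mathcal{F}_{t-1} \right]
\Big/ \, E\!\left[ h_t(\theta)^{2} \,\Big|\, \mathcal{F}_{t-1} \right],
$$

and the optimal estimating function is $G_n^{*}(\theta) = \sum_{t} a_{t-1}^{*}(\theta)\, h_t(\theta)$, on which the G estimator is based.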

Based on the optimal estimating function in Lemma 2.1, Chandra and Taniguchi [5] constructed the G estimator for the parameters of the RCA and ARCH models and showed by simulation that the G estimator is better than the CL estimator. Furthermore, Amano [6] applied the G estimator to some important financial time series models (RCA, GARCH, and nonlinear AR models) and showed theoretically that the G estimator is better than the CL estimator in the sense of efficiency. Amano [6] also derived conditions under which the G estimator becomes asymptotically optimal. However, in Amano [6], the CL and G estimators were not applied to the CHARN model, which includes many important financial time series models. Hence, in the next section, we apply the CL and G estimators to this model and prove that the G estimator is better than the CL estimator in the sense of efficiency for this model. Furthermore, conditions for the asymptotic optimality of the G estimator are also derived.

3. CL and G Estimators for CHARN Model

In this section, we discuss the asymptotics of the CL and G estimators for the CHARN model.

The CHARN model of order p is defined by (3.1): each observation equals a conditional mean function of the previous p observations plus a conditional scale function of the same lagged observations multiplied by an innovation, where both functions are measurable and the innovations form an i.i.d. sequence with mean zero, independent of the initial observations. The parameter vector is assumed to lie in an open set, and its true value is denoted with a subscript zero.
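
A sketch of the standard CHARN specification, in the spirit of Härdle and Tsybakov [7] and in an assumed notation (with $F_{\theta}$ the conditional mean function, $H_{\theta}$ the conditional scale function, and $u_t$ the innovation), is

$$
X_t = F_{\theta}\!\left( X_{t-1}, \ldots, X_{t-p} \right)
    + H_{\theta}\!\left( X_{t-1}, \ldots, X_{t-p} \right) u_t,
$$

where $F_{\theta}$ and $H_{\theta} > 0$ are measurable functions and $\{u_t\}$ is an i.i.d. sequence with mean zero, independent of the initial values.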

First we estimate the true parameter of (3.1) by use of the CL estimator, which is obtained by minimizing the penalty function given by the sum of squared deviations of the observations from the conditional mean function evaluated at the lagged observations.

For the asymptotics of the CL estimator, we impose the following assumptions.

Assumption 3.1. (i) The innovation has a probability density function that is positive almost everywhere. (ii) There exist constants under which the conditional mean and scale functions satisfy a growth condition in the lagged observations outside a bounded region. (iii) The scale function is continuous and symmetric and is bounded below by a positive constant.

Assumption 3.1 makes the process strictly stationary and ergodic (see [11]). We further impose the following.

Assumption 3.2. The following condition holds for all parameter values.

Assumption 3.3. (i) The conditional mean and scale functions are almost surely twice continuously differentiable in the parameter, and there exist square-integrable functions dominating their first- and second-order derivatives for all parameter values. (ii) The innovation satisfies a further moment condition. (iii) The probability density function of the innovation has a continuous derivative on the real line satisfying a further integrability condition.

From Tjøstheim [10], the following lemma holds.

Lemma 3.4. Under Assumptions 3.1, 3.2, and 3.3, the CL estimator is asymptotically normal, with an asymptotic variance of sandwich form.
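
A sketch of the standard sandwich form that such a result takes under the assumed CHARN notation above, writing $\sigma^{2} = E[u_t^{2}]$ and using assumed matrix symbols $U$ and $R$, is

$$
\sqrt{n}\left( \hat{\theta}_{CL} - \theta_0 \right) \xrightarrow{d} N\!\left( 0,\; U^{-1} R\, U^{-1} \right),
\qquad
U = E\!\left[ \frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right],
\qquad
R = \sigma^{2}\, E\!\left[ H_{\theta_0}^{2}\, \frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right],
$$

where the functions and their derivatives are evaluated at the lagged observations.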

Next, we apply the G estimator to the CHARN model. From Lemma 2.1, the G estimator is obtained by solving the estimating equation with Godambe's optimal weights.
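
Under the assumed notation, with $h_t(\theta) = X_t - F_{\theta}(X_{t-1}, \ldots, X_{t-p})$ and hence $E[\, h_t(\theta)^{2} \mid \mathcal{F}_{t-1} \,] = \sigma^{2} H_{\theta}^{2}(X_{t-1}, \ldots, X_{t-p})$, the optimal estimating equation of Lemma 2.1 takes, up to a constant factor, the schematic form

$$
\sum_{t} \frac{1}{H_{\theta}^{2}(X_{t-1}, \ldots, X_{t-p})}\,
\frac{\partial F_{\theta}}{\partial \theta}(X_{t-1}, \ldots, X_{t-p})
\bigl\{ X_t - F_{\theta}(X_{t-1}, \ldots, X_{t-p}) \bigr\} = 0,
$$

whose solution defines the G estimator for the CHARN model.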

For the asymptotics of the G estimator, we impose the following assumptions.

Assumption 3.5. (i) A further condition holds for all parameter values. (ii) The weight function in the optimal estimating function is almost surely twice continuously differentiable in the parameter, and there exist square-integrable functions dominating its derivatives for all parameter values. (iii) A matrix appearing in the asymptotic variance of the G estimator is positive definite and satisfies a further condition. (iv) For every parameter value in a neighborhood of the true value, there exist integrable functions dominating the relevant quantities appearing in the estimating equation and its derivatives.
From Kanai et al. [9], the following lemma holds.

Lemma 3.6. Under Assumptions 3.1, 3.2, 3.3, and 3.5, the G estimator is asymptotically normal.
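
A sketch of the form this statement typically takes under the assumed notation (with $\sigma^{2} = E[u_t^{2}]$) is

$$
\sqrt{n}\left( \hat{\theta}_{G} - \theta_0 \right) \xrightarrow{d}
N\!\left( 0,\; \sigma^{2} \left( E\!\left[ \frac{1}{H_{\theta_0}^{2}}\,
\frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right] \right)^{-1} \right),
$$

so that the asymptotic variance of the G estimator is the inverse of a scale-weighted information matrix of the conditional mean function.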

Finally, we compare the efficiencies of the CL and G estimators. We give the following theorem.

Theorem 3.7. Under Assumptions 3.1, 3.2, 3.3, and 3.5, the asymptotic variance of the G estimator is dominated by that of the CL estimator in the positive semidefinite order, and equality holds if and only if the conditional scale function is constant or a related degeneracy condition holds (here, for matrices A and B, A > B means that A − B is positive definite).

This theorem is proved by use of the Kholevo inequality (see Kholevo [12]). From this theorem, we can see that the asymptotic variance of the G estimator is smaller than that of the CL estimator, and the condition under which these asymptotic variances coincide is strict. Therefore, the G estimator is better than the CL estimator in the sense of efficiency. Hence, we evaluate the condition under which the G estimator is asymptotically optimal, based on local asymptotic normality (LAN). LAN is the concept of local asymptotic normality of the likelihood ratio of general statistical models, which was established by Le Cam [13]. Once LAN is established, the asymptotic optimality of estimators and tests can be described in terms of the LAN property. In particular, the Fisher information matrix Γ is described in terms of LAN, and the asymptotic variance of an estimator has the lower bound Γ−1. Now, we prepare the following lemma, which is due to Kato et al. [14].
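
Schematically, the LAN property and the resulting bound can be sketched as follows (a standard statement in assumed notation, with central sequence $\Delta_n$ and local parameter $h$):

$$
\log \frac{d P^{(n)}_{\theta_0 + h/\sqrt{n}}}{d P^{(n)}_{\theta_0}}
= h' \Delta_n - \tfrac{1}{2}\, h' \Gamma\, h + o_{P}(1),
\qquad
\Delta_n \xrightarrow{d} N(0, \Gamma),
$$

and the asymptotic covariance matrix of $\sqrt{n}(\hat{\theta}_n - \theta_0)$, for any regular estimator $\hat{\theta}_n$, is bounded below by $\Gamma^{-1}$ in the positive semidefinite order.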

Lemma 3.8. Under Assumptions 3.1, 3.2, and 3.3, the CHARN model has the LAN property, and its Fisher information matrix Γ is given in terms of the innovation density and the derivatives of the conditional mean and scale functions.
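
For orientation, a sketch of the typical form of the Fisher information matrix of a location-scale type model with symmetric innovation density $p$ (assumed notation; the cross terms vanish by the symmetry of $p$ imposed in Assumption 3.1) is

$$
\Gamma
= \int \left( \frac{p'(z)}{p(z)} \right)^{2} p(z)\, dz\;
E\!\left[ \frac{1}{H_{\theta_0}^{2}} \frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right]
+ \int \left( 1 + z\, \frac{p'(z)}{p(z)} \right)^{2} p(z)\, dz\;
E\!\left[ \frac{1}{H_{\theta_0}^{2}} \frac{\partial H_{\theta_0}}{\partial \theta} \frac{\partial H_{\theta_0}}{\partial \theta'} \right].
$$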

From this lemma, the asymptotic variance of any regular estimator, and in particular that of the G estimator, has the lower bound Γ−1.

The next theorem gives the condition under which the asymptotic variance of the G estimator equals Γ−1, that is, the G estimator becomes asymptotically optimal.

Theorem 3.9. Under Assumptions 3.1, 3.2, 3.3, and 3.5, if a further condition on the model holds and the innovation is Gaussian, then the G estimator is asymptotically optimal, that is, its asymptotic variance attains the lower bound Γ−1.
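
As an illustration of why a condition of this type yields optimality (a sketch under the assumption, which need not be the paper's exact condition, that the derivative of the scale function with respect to the parameter vanishes at the true value): the second term of the sketched information matrix above then drops out, and for Gaussian innovations with variance $\sigma^{2}$ the innovation Fisher information $\int (p'/p)^{2} p\, dz$ equals $1/\sigma^{2}$, so that

$$
\Gamma^{-1}
= \sigma^{2} \left( E\!\left[ \frac{1}{H_{\theta_0}^{2}} \frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right] \right)^{-1},
$$

which coincides with the asymptotic variance of the G estimator sketched after Lemma 3.6.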

Finally, we give the following example which satisfies the assumptions in Theorems 3.7 and 3.9.

Example 3.10. The CHARN model includes the following nonlinear AR model: the observation is a measurable function of the previous observations plus an innovation, where the innovations form a sequence of i.i.d. random variables with mean zero, independent of the initial observations, and we assume Assumptions 3.1, 3.2, 3.3, and 3.5 (an explicit choice of the regression function satisfying these assumptions can be given). In Amano [6], it was shown that the asymptotic variance of the CL estimator attains that of the G estimator for this model. Amano [6] also showed that, under the condition that the innovation is Gaussian, the G estimator is asymptotically optimal.
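
In the assumed notation above, this example corresponds to taking the conditional scale function identically equal to one, so that the CHARN model reduces to the homoscedastic nonlinear AR(p) model

$$
X_t = F_{\theta}\!\left( X_{t-1}, \ldots, X_{t-p} \right) + u_t.
$$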

4. Numerical Studies

In this section, we evaluate the accuracies of the CL and G estimators for the parameter of the CHARN model by simulation. Throughout this section, we assume the model (4.1), with i.i.d. innovations. Mean squared errors (MSEs) of the CL and G estimators for the parameter are reported in Table 1. The simulations are based on 1000 realizations, and we set the parameter value to values including 0.2 and 0.3, and the length of observations to 100, 200, and 300.

Table 1: MSEs of the CL and G estimators for the parameter in (4.1).

From Table 1, we can see that the MSE of the G estimator is smaller than that of the CL estimator. Furthermore, it is seen that the MSEs of both estimators decrease as the length of observations increases.
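
Purely as an illustration of how a comparison of this kind can be carried out, the following is a minimal Python sketch for a hypothetical CHARN(1) specification X_t = theta * X_{t-1} + sqrt(c0 + c1 * X_{t-1}^2) * u_t with known constants c0, c1 and standard normal innovations; the model, parameter values, and function names below are assumptions made for illustration and are not taken from the paper's simulation design. In this special case the CL estimator is least squares of X_t on X_{t-1}, and the G estimator with Godambe's optimal weights reduces to weighted least squares with weights 1/(c0 + c1 * X_{t-1}^2).

import numpy as np

def simulate_charn1(theta, c0, c1, n, burn=200, rng=None):
    """Simulate X_t = theta*X_{t-1} + sqrt(c0 + c1*X_{t-1}^2)*u_t with u_t ~ N(0,1)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        scale = np.sqrt(c0 + c1 * x[t - 1] ** 2)
        x[t] = theta * x[t - 1] + scale * rng.standard_normal()
    return x[burn:]

def cl_estimator(x):
    """Conditional least squares: minimize sum (X_t - theta*X_{t-1})^2."""
    y, lag = x[1:], x[:-1]
    return np.sum(lag * y) / np.sum(lag ** 2)

def g_estimator(x, c0, c1):
    """Godambe-optimal estimating equation: sum a_{t-1}*(X_t - theta*X_{t-1}) = 0
    with a_{t-1} = X_{t-1}/(c0 + c1*X_{t-1}^2), i.e. weighted least squares
    with weights 1/(c0 + c1*X_{t-1}^2)."""
    y, lag = x[1:], x[:-1]
    w = 1.0 / (c0 + c1 * lag ** 2)
    return np.sum(w * lag * y) / np.sum(w * lag ** 2)

def mse_comparison(theta=0.1, c0=1.0, c1=0.5, n=100, reps=1000, seed=0):
    """Monte Carlo MSEs of the CL-type and G-type estimators."""
    rng = np.random.default_rng(seed)
    cl_err, g_err = [], []
    for _ in range(reps):
        x = simulate_charn1(theta, c0, c1, n, rng=rng)
        cl_err.append((cl_estimator(x) - theta) ** 2)
        g_err.append((g_estimator(x, c0, c1) - theta) ** 2)
    return np.mean(cl_err), np.mean(g_err)

if __name__ == "__main__":
    for n in (100, 200, 300):
        mse_cl, mse_g = mse_comparison(theta=0.1, n=n)
        print(f"n={n}: MSE(CL)={mse_cl:.5f}  MSE(G)={mse_g:.5f}")

Running such a sketch for increasing sample sizes is expected to show the qualitative pattern of Table 1: both MSEs decrease as the sample size grows, with the G-type estimator having the smaller MSE.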

5. Proofs

This section provides the proofs of the theorems. First, we prepare the following lemma, which is used to compare the asymptotic variances of the CL and G estimators (see Kholevo [12]).

Lemma 5.1. Let A and B be random matrices of suitable dimensions, and let u be a random variable that is positive everywhere. If the weighted second-moment matrix of B exists (and is nonsingular), then a weighted matrix Cauchy-Schwarz inequality holds, bounding the u-weighted second-moment matrix of A from below by a sandwich of the cross-moment matrices of A and B.
The equality holds if and only if there exists a constant matrix C through which A is, almost surely, proportional to B up to the weight u.
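
A sketch of one common formulation of this Kholevo-type (weighted matrix Cauchy-Schwarz) inequality, in assumed notation with random matrices $A$, $B$ and a positive random weight $u$, is

$$
E\!\left[ \frac{A A'}{u} \right] \;\succeq\;
E\!\left[ A B' \right] \left( E\!\left[ u\, B B' \right] \right)^{-1} E\!\left[ B A' \right],
$$

with equality if and only if $A = C\, u\, B$ almost surely for some constant matrix $C$; this follows from the ordinary matrix Cauchy-Schwarz inequality applied to $A/\sqrt{u}$ and $\sqrt{u}\, B$.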

Now we proceed to prove Theorem 3.7.

Proof of Theorem 3.7. With a suitable choice of the two random matrices and the positive weight in Lemma 5.1, made in terms of the quantities defining the asymptotic variances in Lemmas 3.4 and 3.6, the matrices appearing in those asymptotic variances can be written in the form to which the lemma applies.
Hence, from Lemma 5.1, the stated inequality between the asymptotic variances of the CL and G estimators follows, and the characterization of equality follows from the equality condition of the lemma.
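
A sketch of how Lemma 5.1 yields the comparison (the specific choices below are illustrative and consistent with the asymptotic variances sketched in Section 3, not necessarily the paper's exact ones): taking both random matrices equal to the gradient of the conditional mean function and the weight equal to $\sigma^{2} H_{\theta_0}^{2}$ gives

$$
E\!\left[ \frac{1}{\sigma^{2} H_{\theta_0}^{2}}\,
\frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right]
\;\succeq\;
E\!\left[ \frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right]
\left( E\!\left[ \sigma^{2} H_{\theta_0}^{2}\,
\frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right] \right)^{-1}
E\!\left[ \frac{\partial F_{\theta_0}}{\partial \theta} \frac{\partial F_{\theta_0}}{\partial \theta'} \right],
$$

and inverting both sides shows that the asymptotic variance of the G estimator is dominated by that of the CL estimator; the equality condition of Lemma 5.1 then forces the scale function to be almost surely constant (up to degenerate cases).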

Proof of Theorem 3.9. The Fisher information matrix of the CHARN model based on LAN can be represented as in (5.6). From (5.6), under the condition of the theorem, the Fisher information reduces to a simpler form.
Next, we show that, under the Gaussianity of the innovation, this lower bound is attained by the G estimator. From the Schwarz inequality, the required inequality is obtained, and the equality holds if and only if there exists some constant such that the score of the innovation density is a linear function of its argument.
Equation (5.9) then shows that, for some constant, the innovation density is proportional to the exponential of a quadratic; hence that constant is negative, and the density becomes the density function of the normal distribution.
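
A sketch of these final steps, with assumed symbols $p$ for the innovation density and $c$, $C$ for the constants: integration by parts gives $\int z\, p'(z)\, dz = -1$, so by the Schwarz inequality

$$
1 = \left( \int z\, \frac{p'(z)}{p(z)}\, p(z)\, dz \right)^{2}
\le \int z^{2} p(z)\, dz \int \left( \frac{p'(z)}{p(z)} \right)^{2} p(z)\, dz,
$$

with equality if and only if the score is linear, $p'(z)/p(z) = c\, z$ for some constant $c$; in that case $p(z) = C \exp( c z^{2} / 2 )$, integrability forces $c < 0$, and $p$ is the density of a normal distribution.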

Acknowledgment

The author would like to thank the referees for their comments, which improved the original version of this paper.

References

  1. T. Amano and M. Taniguchi, “Asymptotic efficiency of conditional least squares estimators for ARCH models,” Statistics & Probability Letters, vol. 78, no. 2, pp. 179–185, 2008.
  2. V. P. Godambe, “An optimum property of regular maximum likelihood estimation,” Annals of Mathematical Statistics, vol. 31, pp. 1208–1211, 1960.
  3. V. P. Godambe, “The foundations of finite sample estimation in stochastic processes,” Biometrika, vol. 72, no. 2, pp. 419–428, 1985.
  4. L. P. Hansen, “Large sample properties of generalized method of moments estimators,” Econometrica, vol. 50, no. 4, pp. 1029–1054, 1982.
  5. S. A. Chandra and M. Taniguchi, “Estimating functions for nonlinear time series models,” Annals of the Institute of Statistical Mathematics, vol. 53, no. 1, pp. 125–141, 2001.
  6. T. Amano, “Asymptotic efficiency of estimating function estimators for nonlinear time series models,” Journal of the Japan Statistical Society, vol. 39, no. 2, pp. 209–231, 2009.
  7. W. Härdle and A. Tsybakov, “Local polynomial estimators of the volatility function in nonparametric autoregression,” Journal of Econometrics, vol. 81, no. 1, pp. 223–242, 1997.
  8. W. Härdle, A. Tsybakov, and L. Yang, “Nonparametric vector autoregression,” Journal of Statistical Planning and Inference, vol. 68, no. 2, pp. 221–245, 1998.
  9. H. Kanai, H. Ogata, and M. Taniguchi, “Estimating function approach for CHARN models,” Metron, vol. 68, pp. 1–21, 2010.
  10. D. Tjøstheim, “Estimation in nonlinear time series models,” Stochastic Processes and Their Applications, vol. 21, no. 2, pp. 251–273, 1986.
  11. Z. Lu and Z. Jiang, “L1 geometric ergodicity of a multivariate nonlinear AR model with an ARCH term,” Statistics & Probability Letters, vol. 51, no. 2, pp. 121–130, 2001.
  12. A. S. Kholevo, “On estimates of regression coefficients,” Theory of Probability and Its Applications, vol. 14, pp. 79–104, 1969.
  13. L. Le Cam, “Locally asymptotically normal families of distributions. Certain approximations to families of distributions and their use in the theory of estimation and testing hypotheses,” University of California Publications in Statistics, vol. 3, pp. 37–98, 1960.
  14. H. Kato, M. Taniguchi, and M. Honda, “Statistical analysis for multiplicatively modulated nonlinear autoregressive model and its applications to electrophysiological signal analysis in humans,” IEEE Transactions on Signal Processing, vol. 54, pp. 3414–3425, 2006.