Research Article | Open Access
Y. Liang, A. Thavaneswaran, B. Abraham, "Joint Estimation Using Quadratic Estimating Function", Journal of Probability and Statistics, vol. 2011, Article ID 372512, 14 pages, 2011. https://doi.org/10.1155/2011/372512
Joint Estimation Using Quadratic Estimating Function
A class of martingale estimating functions is convenient and plays an important role for inference for nonlinear time series models. However, when the information about the first four conditional moments of the observed process becomes available, the quadratic estimating functions are more informative. In this paper, a general framework for joint estimation of conditional mean and variance parameters in time series models using quadratic estimating functions is developed. Superiority of the approach is demonstrated by comparing the information associated with the optimal quadratic estimating function with the information associated with other estimating functions. The method is used to study the optimal quadratic estimating functions of the parameters of autoregressive conditional duration (ACD) models, random coefficient autoregressive (RCA) models, doubly stochastic models and regression models with ARCH errors. Closed-form expressions for the information gain are also discussed in some detail.
Godambe was the first to study inference for discrete time stochastic processes using the estimating function method. Thavaneswaran and Abraham studied nonlinear time series estimation problems using linear estimating functions. Naik-Nimbalkar and Rajarshi and Thavaneswaran and Heyde studied filtering and prediction problems using linear estimating functions in the Bayesian context. Chandra and Taniguchi, Merkouris, and Ghahramani and Thavaneswaran, among others, have studied estimation problems using estimating functions. In this paper, we study linear and quadratic martingale estimating functions and show that the quadratic estimating functions are more informative when the conditional mean and variance of the observed process depend on the same parameter of interest.
This paper is organized as follows. The rest of Section 1 presents the basics of estimating functions and information associated with estimating functions. Section 2 presents the general model for the multiparameter case and the form of the optimal quadratic estimating function. In Section 3, the theory is applied to four different models.
Suppose that is a realization of a discrete time stochastic process, and its distribution depends on a vector parameter belonging to an open subset of the -dimensional Euclidean space. Let denote the underlying probability space, and let be the -field generated by . Let , be specified -dimensional vectors that are martingales. We consider the class of zero mean and square integrable -dimensional martingale estimating functions of the form where are matrices depending on , . The estimating functions are further assumed to be almost surely differentiable with respect to the components of and such that and are nonsingular for all and for each . The expectations are always taken with respect to . Estimators of can be obtained by solving the estimating equation . Furthermore, the matrix is assumed to be positive definite for all . Then, in the class of all zero mean and square integrable martingale estimating functions , the optimal estimating function which maximizes, in the partial order of nonnegative definite matrices, the information matrix is given by and the corresponding optimal information reduces to .
The function is also called the “quasi-score” and has properties similar to those of a score function in the sense that and . This is a more general result in the sense that for its validity, we do not need to assume that the true underlying distribution belongs to the exponential family of distributions. The maximum correlation between the optimal estimating function and the true unknown score justifies the terminology “quasi-score” for . Moreover, it follows from Lindsay [8, page 916] that if we solve an unbiased estimating equation to get an estimator, then the asymptotic variance of the resulting estimator is the inverse of the information . Hence, the estimator obtained from a more informative estimating equation is asymptotically more efficient.
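The machinery above can be made concrete with a minimal numerical sketch (not from the paper; the AR(1) model and all parameter values are illustrative). For an AR(1) process, the linear martingale estimating function with weight equal to the lagged observation has the conditional least squares estimator as its root:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(1) model: y_t = theta * y_{t-1} + e_t.
theta_true = 0.5
n = 5000
y = np.zeros(n)
for t in range(1, n):
    y[t] = theta_true * y[t - 1] + rng.normal()

# Linear martingale estimating function g(theta) = sum a_{t-1} * h_t,
# with martingale difference h_t = y_t - theta * y_{t-1} and weight a_{t-1} = y_{t-1}.
def g(theta):
    h = y[1:] - theta * y[:-1]
    return np.sum(y[:-1] * h)

# Solving the estimating equation g(theta) = 0 gives the conditional
# least squares estimator in closed form.
theta_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
print(theta_hat)  # close to theta_true = 0.5
```

The root of the estimating equation is available in closed form here; in general, estimating equations are solved numerically.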
2. General Model and Method
Consider a discrete time stochastic process with conditional moments That is, we assume that the skewness and the excess kurtosis of the standardized variable do not contain any additional parameters. In order to estimate the parameter based on the observations , we consider two classes of martingale differences and such that
The optimal estimating functions based on the martingale differences and are and , respectively, and the associated informations are and . Crowder studied the optimal quadratic estimating function for independent observations. For the discrete time stochastic process , the following theorem establishes the optimality of the quadratic estimating function in the multiparameter case.
Theorem 2.1. For the general model in (2.1), in the class of all quadratic estimating functions of the form , (a)the optimal estimating function is given by , where(b) the information is given by (c) the gain in information is given by (d) the gain in information is given by
Proof. We choose two orthogonal martingale differences and , where the conditional variance of is given by . That is, and are uncorrelated with conditional variance and , respectively. Moreover, the optimal martingale estimating function and associated information based on the martingale differences are Then, the quadratic estimating function based on and becomes and satisfies the sufficient condition for optimality where is a constant matrix. Hence, is optimal in the class , and part (a) follows. Since and are orthogonal, the information and part (b) follow. Hence, for each component , , neither nor is fully informative, that is, and .
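The orthogonalization step in the proof can be illustrated numerically (a minimal sketch with assumed notation: h1 is the mean-based martingale difference, h2 the variance-based one, and psi the orthogonalized second component; the Gaussian data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two martingale differences for an iid sample: h1 = y - mu and
# h2 = (y - mu)**2 - sigma**2. The proof replaces h2 by
# psi = h2 - (Cov(h1, h2) / Var(h1)) * h1, which is orthogonal to h1.
mu, sigma = 1.0, 2.0
y = rng.normal(mu, sigma, size=200_000)

h1 = y - mu
h2 = (y - mu) ** 2 - sigma ** 2

# For skewness gamma, Cov(h1, h2) = gamma * sigma**3; under normality
# gamma = 0, so the estimated regression coefficient b is near zero.
b = np.cov(h1, h2)[0, 1] / np.var(h1)
psi = h2 - b * h1

print(np.corrcoef(h1, psi)[0, 1])   # approximately 0 by construction
print(np.var(psi))                  # approximately 2*sigma**4 under normality
```

Because h1 and psi are uncorrelated, the information of the combined quadratic estimating function is the sum of the informations of its two components, which is the content of part (b).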
Corollary 2.2. When the conditional skewness and kurtosis are constants, the optimal quadratic estimating function and associated information, based on the martingale differences and , are given by
3.1. Autoregressive Conditional Duration (ACD) Models
There is growing interest in the analysis of intraday financial data such as transaction and quote data. Such data have increasingly been made available by many stock exchanges. Unlike closing prices, which are measured daily, monthly, or yearly, intraday (high-frequency) data tend to be irregularly spaced. Furthermore, the durations between events are themselves random variables. The autoregressive conditional duration (ACD) process of Engle and Russell was proposed to model such durations and to study the dynamic structure of the adjusted durations , with , where is the time of the th transaction. The crucial assumption underlying the ACD model is that the time dependence is described by a function , where is the conditional expectation of the adjusted duration between the th and the th trades. The basic ACD model is defined as where are iid nonnegative random variables with density function and unit mean, and is the information available at the th trade. We also assume that is independent of . Clearly, the types of ACD models vary with the distribution of and the specification of . In this paper, we discuss a specific class of models, known as the ACD(p, q) model, given by where , , , and . We assume that the 's are iid nonnegative random variables with mean , variance , skewness , and excess kurtosis . In order to estimate the parameter vector , we use the estimating function approach. For this model, the conditional moments are , , , and . Let and be sequences of martingale differences such that , , and . The optimal estimating function and associated information based on are given by and . The optimal estimating function and associated information based on are given by and . Then, by Corollary 2.2, the optimal quadratic estimating function and associated information are given by the information gain in using over is and the information gain in using over is which are both nonnegative definite.
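A simulation of the lowest-order case makes the martingale-difference structure concrete (a hedged sketch: the ACD(1,1) recursion and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ACD(1,1): x_i = psi_i * eps_i with
# psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
# and iid unit-mean exponential innovations eps_i.
omega, alpha, beta = 0.1, 0.2, 0.7
n = 100_000
x = np.empty(n)
psi = np.empty(n)
psi[0] = omega / (1 - alpha - beta)   # unconditional mean duration
x[0] = psi[0] * rng.exponential(1.0)
for i in range(1, n):
    psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    x[i] = psi[i] * rng.exponential(1.0)

# Martingale differences feeding the two estimating functions:
# m_i = x_i - psi_i (mean part) and, since Var(eps) = 1 for the
# exponential case, s_i = m_i**2 - psi_i**2 (variance part).
m = x - psi
s = m ** 2 - psi ** 2
print(m.mean(), s.mean())  # both sample means near zero
```

Both sequences average to approximately zero, consistent with their martingale-difference property; the optimal quadratic estimating function combines them with the conditional weights of Corollary 2.2.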
When follows an exponential distribution, , , , and . Then, , , and , and hence .
3.2. Random Coefficient Autoregressive Models
In this section, we investigate the properties of quadratic estimating functions for the random coefficient autoregressive (RCA) time series models first introduced by Nicholls and Quinn.
Consider the RCA model where and are uncorrelated zero-mean processes with unknown variances and , respectively. Further, we denote the skewness and excess kurtosis of by and , which are known, and those of by and , respectively. In the model (3.6), both parameters and need to be estimated; let , and we discuss the joint estimation of and . In this model, the conditional mean is and the conditional variance is . The parameter appears simultaneously in the mean and variance. Let and be such that , , and . Then the conditional skewness is , and the conditional excess kurtosis is .
Since and , by applying Theorem 2.1, the optimal quadratic estimating function for and based on the martingale differences and is given by , where Hence, the component quadratic estimating function for is and the component quadratic estimating function for is Moreover, the information matrix of the optimal quadratic estimating function for and is given by where
In view of the parameter only, the conditional least squares (CLS) estimating function and the associated information are given directly by and . The optimal martingale estimating function and the associated information based on are given by and . Moreover, the inequality implies that . Hence, the optimal estimating function is more informative than the conditional least squares one. The optimal quadratic estimating function based on the martingale differences and is given by (3.8) and (3.11), respectively. It is clear that the information of exceeds that of . Therefore, we conclude that for the RCA model, , and hence the estimate obtained by solving the optimal quadratic estimating equation is more efficient than both the CLS estimate and the estimate obtained by solving the optimal linear estimating equation.
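The CLS-versus-optimal comparison for the mean parameter can be sketched numerically (illustrative RCA(1) model with the variance parameters treated as known; values are assumptions, not from the paper). The optimal linear estimating function simply reweights the CLS score by the inverse conditional variance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative RCA(1): y_t = (theta + b_t) * y_{t-1} + e_t, with
# b_t ~ N(0, sig2_b) and e_t ~ N(0, sig2_e) independent.
theta, sig2_b, sig2_e = 0.3, 0.25, 1.0
n = 50_000
y = np.zeros(n)
for t in range(1, n):
    y[t] = (theta + rng.normal(0, np.sqrt(sig2_b))) * y[t - 1] \
           + rng.normal(0, np.sqrt(sig2_e))

y0, y1 = y[:-1], y[1:]
cond_var = sig2_b * y0 ** 2 + sig2_e   # conditional variance of y_t

# CLS root: sum y0 * (y1 - theta * y0) = 0, i.e. unit weights.
theta_cls = np.sum(y0 * y1) / np.sum(y0 ** 2)
# Optimal linear estimating function weights by 1 / cond_var.
theta_opt = np.sum(y0 * y1 / cond_var) / np.sum(y0 ** 2 / cond_var)

print(theta_cls, theta_opt)  # both near theta = 0.3
```

Both roots are consistent; the information inequality above says the reweighted root has the smaller asymptotic variance, and the quadratic estimating function improves on both when the skewness and kurtosis information is used.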
3.3. Doubly Stochastic Time Series Model
The random coefficient autoregressive models discussed in the previous section are special cases of what Tjøstheim refers to as doubly stochastic time series models. In the nonlinear case, these models are given by where of (3.6) is replaced by a more general stochastic sequence and is replaced by a function of the past, . Suppose that is a moving average sequence of the form where consists of square integrable independent random variables with mean zero and variance . We further assume that and are independent; then depends on the posterior mean and variance of . Under the normality assumption on and , and with the initial condition , and satisfy the following Kalman-like recursive algorithms (see [13, page 439]): where and . Hence, the conditional mean and variance of are given by which can be computed recursively.
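A Kalman-type recursion of this flavor can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: the coefficient follows a random walk rather than a moving average, the regressor is exogenous, and all parameter values are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified doubly stochastic model: y_t = theta_t * x_t + e_t, with a
# random-walk coefficient theta_t = theta_{t-1} + u_t (illustrative only).
sig2_e, sig2_u = 0.5, 0.01
n = 2000
theta = 0.5 + np.cumsum(rng.normal(0, np.sqrt(sig2_u), n))
x = rng.normal(0, 1, n)
y = theta * x + rng.normal(0, np.sqrt(sig2_e), n)

m, p = 0.0, 1.0   # prior mean and variance of theta_0
for t in range(n):
    p_pred = p + sig2_u                                  # predict step
    k = p_pred * x[t] / (x[t] ** 2 * p_pred + sig2_e)    # gain
    m += k * (y[t] - x[t] * m)                           # posterior mean
    p = p_pred - k * x[t] * p_pred                       # posterior variance

# One-step-ahead conditional mean and variance of the next observation,
# computed recursively from the filtered posterior moments.
x_new = 1.0
mu = m * x_new
var = x_new ** 2 * (p + sig2_u) + sig2_e
print(mu, var)
```

The filtered posterior mean and variance play the role of the recursively computed conditional mean and variance that feed the martingale differences in the next paragraph.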
Let and ; then and are sequences of martingale differences. We can derive that , , and . The optimal estimating function and associated information based on are given by Then, the inequality implies that ; that is, the optimal linear estimating function is more informative than the conditional least squares estimating function .
The optimal estimating function and the associated information based on are given by Hence, by Theorem 2.1, the optimal quadratic estimating function is given by and the associated information is given by It is clear that the information of exceeds that of both and , and hence the estimate obtained by solving the optimal quadratic estimating equation is more efficient than both the CLS estimate and the estimate obtained by solving the optimal linear estimating equation. Moreover, the relations can be applied to calculate the estimating functions and associated information recursively.
3.4. Regression Model with ARCH Errors
Consider a regression model with ARCH () errors of the form such that , and . In this model, the conditional mean is , the conditional variance is , and the conditional skewness and excess kurtosis are assumed to be constants and , respectively. It follows from Theorem 2.1 that the optimal component quadratic estimating function for the parameter vector is Moreover, the information matrix for is given by where
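The effect of accounting for the ARCH conditional variance can be sketched for the simplest case (an illustrative ARCH(1) error process with the conditional variances treated as known; model and values are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative regression with ARCH(1) errors: y_t = beta * x_t + e_t,
# e_t = sqrt(h_t) * z_t, h_t = a0 + a1 * e_{t-1}**2, z_t iid N(0, 1).
beta, a0, a1 = 2.0, 0.5, 0.3
n = 50_000
x = rng.normal(0, 1, n)
e = np.zeros(n)
h = np.full(n, a0 / (1 - a1))   # start at the unconditional variance
for t in range(1, n):
    h[t] = a0 + a1 * e[t - 1] ** 2
    e[t] = np.sqrt(h[t]) * rng.normal()
y = beta * x + e

# OLS ignores the conditional heteroscedasticity; the optimal linear
# estimating function weights each observation by 1 / h_t (here h_t is
# taken as known purely for illustration).
beta_ols = np.sum(x * y) / np.sum(x ** 2)
beta_opt = np.sum(x * y / h) / np.sum(x ** 2 / h)
print(beta_ols, beta_opt)  # both near beta = 2.0
```

Both estimators are consistent, but the inverse-variance weighting attains the larger information; the full quadratic estimating function additionally exploits the variance-based martingale difference to estimate the ARCH parameters jointly.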
It is of interest to note that when are conditionally Gaussian such that and , the optimal quadratic estimating functions for and , based on the estimating functions and , are given, respectively, by Moreover, the information matrix for in (3.28) has
In this paper, we use appropriate martingale differences and derive the general form of the optimal quadratic estimating function for the multiparameter case with dependent observations. We also show that the optimal quadratic estimating function is more informative than the estimating function used in Thavaneswaran and Abraham . Following Lindsay , we conclude that the resulting estimates are more efficient in general. Examples based on ACD models, RCA models, doubly stochastic models, and the regression model with ARCH errors are also discussed in some detail. For RCA models and doubly stochastic models, we have shown the superiority of the approach over the CLS method.
- V. P. Godambe, “The foundations of finite sample estimation in stochastic processes,” Biometrika, vol. 72, no. 2, pp. 419–428, 1985.
- A. Thavaneswaran and B. Abraham, “Estimation for nonlinear time series models using estimating equations,” Journal of Time Series Analysis, vol. 9, no. 1, pp. 99–108, 1988.
- U. V. Naik-Nimbalkar and M. B. Rajarshi, “Filtering and smoothing via estimating functions,” Journal of the American Statistical Association, vol. 90, no. 429, pp. 301–306, 1995.
- A. Thavaneswaran and C. C. Heyde, “Prediction via estimating functions,” Journal of Statistical Planning and Inference, vol. 77, no. 1, pp. 89–101, 1999.
- S. A. Chandra and M. Taniguchi, “Estimating functions for nonlinear time series models. Nonlinear non-Gaussian models and related filtering methods,” Annals of the Institute of Statistical Mathematics, vol. 53, no. 1, pp. 125–141, 2001.
- T. Merkouris, “Transform martingale estimating functions,” The Annals of Statistics, vol. 35, no. 5, pp. 1975–2000, 2007.
- M. Ghahramani and A. Thavaneswaran, “Combining estimating functions for volatility,” Journal of Statistical Planning and Inference, vol. 139, no. 4, pp. 1449–1461, 2009.
- B. G. Lindsay, “Using empirical partially Bayes inference for increased efficiency,” The Annals of Statistics, vol. 13, no. 3, pp. 914–931, 1985.
- M. Crowder, “On linear and quadratic estimating functions,” Biometrika, vol. 74, no. 3, pp. 591–597, 1987.
- R. F. Engle and J. R. Russell, “Autoregressive conditional duration: a new model for irregularly spaced transaction data,” Econometrica, vol. 66, no. 5, pp. 1127–1162, 1998.
- D. F. Nicholls and B. G. Quinn, “The estimation of random coefficient autoregressive models. I,” Journal of Time Series Analysis, vol. 1, no. 1, pp. 37–46, 1980.
- D. Tjøstheim, “Some doubly stochastic time series models,” Journal of Time Series Analysis, vol. 7, no. 1, pp. 51–72, 1986.
- A. N. Shiryayev, Probability, vol. 95 of Graduate Texts in Mathematics, Springer, New York, NY, USA, 1984.
Copyright © 2011 Y. Liang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.