Abstract

Martingale estimating functions provide a convenient framework for inference in nonlinear time series models. When information about the first four conditional moments of the observed process is available, however, quadratic estimating functions are more informative. In this paper, a general framework is developed for the joint estimation of conditional mean and variance parameters in time series models using quadratic estimating functions. The superiority of the approach is demonstrated by comparing the information associated with the optimal quadratic estimating function to that of other estimating functions. The method is then used to derive the optimal quadratic estimating functions for the parameters of autoregressive conditional duration (ACD) models, random coefficient autoregressive (RCA) models, doubly stochastic models, and regression models with ARCH errors. Closed-form expressions for the information gain are also discussed in some detail.

1. Introduction

Godambe [1] was the first to study inference for discrete-time stochastic processes via the estimating function method. Thavaneswaran and Abraham [2] studied nonlinear time series estimation problems using linear estimating functions. Naik-Nimbalkar and Rajarshi [3] and Thavaneswaran and Heyde [4] studied filtering and prediction problems using linear estimating functions in a Bayesian context. Chandra and Taniguchi [5], Merkouris [6], and Ghahramani and Thavaneswaran [7], among others, have studied estimation problems using estimating functions. In this paper, we study linear and quadratic martingale estimating functions and show that the quadratic estimating functions are more informative when the conditional mean and variance of the observed process depend on the same parameter of interest.

This paper is organized as follows. The rest of Section 1 presents the basics of estimating functions and information associated with estimating functions. Section 2 presents the general model for the multiparameter case and the form of the optimal quadratic estimating function. In Section 3, the theory is applied to four different models.

Suppose that $\{y_t,\ t=1,\ldots,n\}$ is a realization of a discrete-time stochastic process whose distribution depends on a vector parameter $\boldsymbol{\theta}$ belonging to an open subset $\Theta$ of $p$-dimensional Euclidean space. Let $(\Omega,\mathcal{F},P_{\boldsymbol{\theta}})$ denote the underlying probability space, and let $\mathcal{F}_t^y$ be the $\sigma$-field generated by $\{y_1,\ldots,y_t\}$, $t\ge 1$. Let $\mathbf{h}_t=\mathbf{h}_t(y_1,\ldots,y_t,\boldsymbol{\theta})$, $1\le t\le n$, be specified $q$-dimensional vectors that are martingales. We consider the class $\mathcal{M}$ of zero-mean, square-integrable $p$-dimensional martingale estimating functions of the form
$$\mathcal{M}=\left\{\mathbf{g}_n(\boldsymbol{\theta}):\ \mathbf{g}_n(\boldsymbol{\theta})=\sum_{t=1}^n\mathbf{a}_{t-1}\mathbf{h}_t\right\},\tag{1.1}$$
where the $\mathbf{a}_{t-1}$ are $p\times q$ matrices depending on $y_1,\ldots,y_{t-1}$, $1\le t\le n$. The estimating functions $\mathbf{g}_n(\boldsymbol{\theta})$ are further assumed to be almost surely differentiable with respect to the components of $\boldsymbol{\theta}$ and such that $E[\partial\mathbf{g}_n(\boldsymbol{\theta})/\partial\boldsymbol{\theta}\mid\mathcal{F}_{n-1}^y]$ and $E[\mathbf{g}_n(\boldsymbol{\theta})\mathbf{g}_n(\boldsymbol{\theta})'\mid\mathcal{F}_{n-1}^y]$ are nonsingular for all $\boldsymbol{\theta}\in\Theta$ and for each $n\ge 1$; all expectations are taken with respect to $P_{\boldsymbol{\theta}}$. Estimators of $\boldsymbol{\theta}$ are obtained by solving the estimating equation $\mathbf{g}_n(\boldsymbol{\theta})=\mathbf{0}$. Furthermore, the $p\times p$ matrix $E[\mathbf{g}_n(\boldsymbol{\theta})\mathbf{g}_n(\boldsymbol{\theta})'\mid\mathcal{F}_{n-1}^y]$ is assumed to be positive definite for all $\boldsymbol{\theta}\in\Theta$. Then, in the class $\mathcal{M}$ of all zero-mean and square-integrable martingale estimating functions, the optimal estimating function $\mathbf{g}_n^*(\boldsymbol{\theta})$, which maximizes, in the partial order of nonnegative definite matrices, the information matrix
$$\mathbf{I}_{\mathbf{g}_n}(\boldsymbol{\theta})=E\left[\frac{\partial\mathbf{g}_n(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\,\Big|\,\mathcal{F}_{n-1}^y\right]'E\left[\mathbf{g}_n(\boldsymbol{\theta})\mathbf{g}_n(\boldsymbol{\theta})'\,\Big|\,\mathcal{F}_{n-1}^y\right]^{-1}E\left[\frac{\partial\mathbf{g}_n(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\,\Big|\,\mathcal{F}_{n-1}^y\right],\tag{1.2}$$
is given by
$$\mathbf{g}_n^*(\boldsymbol{\theta})=\sum_{t=1}^n\mathbf{a}_{t-1}^*\mathbf{h}_t=\sum_{t=1}^nE\left[\frac{\partial\mathbf{h}_t}{\partial\boldsymbol{\theta}}\,\Big|\,\mathcal{F}_{t-1}^y\right]'E\left[\mathbf{h}_t\mathbf{h}_t'\,\Big|\,\mathcal{F}_{t-1}^y\right]^{-1}\mathbf{h}_t,\tag{1.3}$$
and the corresponding optimal information reduces to $E[\mathbf{g}_n^*(\boldsymbol{\theta})\mathbf{g}_n^*(\boldsymbol{\theta})'\mid\mathcal{F}_{n-1}^y]$.

The function $\mathbf{g}_n^*(\boldsymbol{\theta})$ is also called the "quasi-score" and has properties similar to those of a score function, in the sense that $E[\mathbf{g}_n^*(\boldsymbol{\theta})]=\mathbf{0}$ and $E[\mathbf{g}_n^*(\boldsymbol{\theta})\mathbf{g}_n^*(\boldsymbol{\theta})']=-E[\partial\mathbf{g}_n^*(\boldsymbol{\theta})/\partial\boldsymbol{\theta}]$. This result is more general in the sense that its validity does not require the true underlying distribution to belong to the exponential family. The maximal correlation between the optimal estimating function and the true, unknown score justifies the terminology "quasi-score" for $\mathbf{g}_n^*(\boldsymbol{\theta})$. Moreover, it follows from Lindsay [8, page 916] that if we solve an unbiased estimating equation $\mathbf{g}_n(\boldsymbol{\theta})=\mathbf{0}$ to obtain an estimator, then the asymptotic variance of the resulting estimator is the inverse of the information $\mathbf{I}_{\mathbf{g}_n}$. Hence, the estimator obtained from a more informative estimating equation is asymptotically more efficient.
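To make the setup concrete, consider the simplest scalar case $\mathbf{h}_t=y_t-\theta$ with $\mathbf{a}_{t-1}=1$: the estimating equation $g_n(\theta)=0$ then recovers the sample mean. A minimal numerical sketch (ours, not from the original paper; all names are illustrative) solves such an equation with a standard root-finder:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=500)

# Estimating function g_n(theta) = sum_t (y_t - theta); its root is the sample mean.
def g(theta, y):
    return np.sum(y - theta)

theta_hat = brentq(lambda th: g(th, y), y.min(), y.max())
print(theta_hat, y.mean())  # the two values agree
```

For less tractable models, the same root-finding step is applied to the optimal estimating functions derived below.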

2. General Model and Method

Consider a discrete-time stochastic process $\{y_t,\ t=1,2,\ldots\}$ with conditional moments
$$\mu_t(\boldsymbol{\theta})=E[y_t\mid\mathcal{F}_{t-1}^y],\qquad \sigma_t^2(\boldsymbol{\theta})=\operatorname{Var}(y_t\mid\mathcal{F}_{t-1}^y),$$
$$\gamma_t(\boldsymbol{\theta})=\frac{1}{\sigma_t^3(\boldsymbol{\theta})}E\left[(y_t-\mu_t(\boldsymbol{\theta}))^3\mid\mathcal{F}_{t-1}^y\right],\qquad \kappa_t(\boldsymbol{\theta})=\frac{1}{\sigma_t^4(\boldsymbol{\theta})}E\left[(y_t-\mu_t(\boldsymbol{\theta}))^4\mid\mathcal{F}_{t-1}^y\right]-3.\tag{2.1}$$
That is, we assume that the skewness and the excess kurtosis of the standardized variable $y_t$ do not contain any additional parameters. In order to estimate the parameter $\boldsymbol{\theta}$ based on the observations $y_1,\ldots,y_n$, we consider the two classes of martingale differences $\{m_t(\boldsymbol{\theta})=y_t-\mu_t(\boldsymbol{\theta}),\ t=1,\ldots,n\}$ and $\{s_t(\boldsymbol{\theta})=m_t^2(\boldsymbol{\theta})-\sigma_t^2(\boldsymbol{\theta}),\ t=1,\ldots,n\}$, whose conditional variances and covariance are
$$\langle m\rangle_t=E[m_t^2\mid\mathcal{F}_{t-1}^y]=E[(y_t-\mu_t)^2\mid\mathcal{F}_{t-1}^y]=\sigma_t^2,$$
$$\langle s\rangle_t=E[s_t^2\mid\mathcal{F}_{t-1}^y]=E\left[(y_t-\mu_t)^4+\sigma_t^4-2\sigma_t^2(y_t-\mu_t)^2\mid\mathcal{F}_{t-1}^y\right]=\sigma_t^4(\kappa_t+2),$$
$$\langle m,s\rangle_t=E[m_ts_t\mid\mathcal{F}_{t-1}^y]=E\left[(y_t-\mu_t)^3-\sigma_t^2(y_t-\mu_t)\mid\mathcal{F}_{t-1}^y\right]=\sigma_t^3\gamma_t.\tag{2.2}$$

The optimal estimating functions based on the martingale differences $m_t$ and $s_t$ are $\mathbf{g}_M^*(\boldsymbol{\theta})=-\sum_{t=1}^n(\partial\mu_t/\partial\boldsymbol{\theta})(m_t/\langle m\rangle_t)$ and $\mathbf{g}_S^*(\boldsymbol{\theta})=-\sum_{t=1}^n(\partial\sigma_t^2/\partial\boldsymbol{\theta})(s_t/\langle s\rangle_t)$, respectively. The information matrices associated with $\mathbf{g}_M^*(\boldsymbol{\theta})$ and $\mathbf{g}_S^*(\boldsymbol{\theta})$ are $\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})=\sum_{t=1}^n(\partial\mu_t/\partial\boldsymbol{\theta})(\partial\mu_t/\partial\boldsymbol{\theta})'(1/\langle m\rangle_t)$ and $\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})=\sum_{t=1}^n(\partial\sigma_t^2/\partial\boldsymbol{\theta})(\partial\sigma_t^2/\partial\boldsymbol{\theta})'(1/\langle s\rangle_t)$, respectively. Crowder [9] studied the optimal quadratic estimating function for independent observations. For the discrete-time stochastic process $\{y_t\}$, the following theorem establishes the optimality of the quadratic estimating function in the multiparameter case.

Theorem 2.1. For the general model in (2.1), in the class of all quadratic estimating functions of the form $\mathcal{G}_Q=\{\mathbf{g}_Q(\boldsymbol{\theta}):\ \mathbf{g}_Q(\boldsymbol{\theta})=\sum_{t=1}^n(\mathbf{a}_{t-1}m_t+\mathbf{b}_{t-1}s_t)\}$:

(a) the optimal estimating function is given by $\mathbf{g}_Q^*(\boldsymbol{\theta})=\sum_{t=1}^n(\mathbf{a}_{t-1}^*m_t+\mathbf{b}_{t-1}^*s_t)$, where
$$\mathbf{a}_{t-1}^*=\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left(-\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{1}{\langle m\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right),\qquad
\mathbf{b}_{t-1}^*=\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}-\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{1}{\langle s\rangle_t}\right);\tag{2.3}$$

(b) the information $\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})$ is given by
$$\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\frac{1}{\langle m\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}\frac{1}{\langle s\rangle_t}-\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\right)\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right];\tag{2.4}$$

(c) the gain in information $\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})-\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})$ is given by
$$\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t^2\langle s\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}\frac{1}{\langle s\rangle_t}-\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\right)\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right];\tag{2.5}$$

(d) the gain in information $\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})-\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})$ is given by
$$\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\frac{1}{\langle m\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t^2}-\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\right)\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right].\tag{2.6}$$

Proof. We choose the two orthogonal martingale differences $m_t$ and $\psi_t=s_t-(\langle m,s\rangle_t/\langle m\rangle_t)m_t=s_t-\sigma_t\gamma_tm_t$, where the conditional variance of $\psi_t$ is $\langle\psi\rangle_t=(\langle m\rangle_t\langle s\rangle_t-\langle m,s\rangle_t^2)/\langle m\rangle_t=\sigma_t^4(\kappa_t+2-\gamma_t^2)$. That is, $m_t$ and $\psi_t$ are uncorrelated with conditional variances $\langle m\rangle_t$ and $\langle\psi\rangle_t$, respectively. Moreover, the optimal martingale estimating function and associated information based on the martingale differences $\psi_t$ are
$$\mathbf{g}_\Psi^*(\boldsymbol{\theta})=\sum_{t=1}^n\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t}-\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\right)\frac{\psi_t}{\langle\psi\rangle_t}=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\left(-\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t^2\langle s\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right)m_t+\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}-\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{1}{\langle s\rangle_t}\right)s_t\right],$$
$$\mathbf{I}_{\mathbf{g}_\Psi^*}(\boldsymbol{\theta})=\sum_{t=1}^n\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t}-\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\right)\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t}-\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\right)'\frac{1}{\langle\psi\rangle_t}=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t^2\langle s\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}\frac{1}{\langle s\rangle_t}-\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\right)\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right].\tag{2.7}$$
Then, the quadratic estimating function based on $m_t$ and $\psi_t$ becomes
$$\mathbf{g}_Q^*(\boldsymbol{\theta})=\mathbf{g}_M^*(\boldsymbol{\theta})+\mathbf{g}_\Psi^*(\boldsymbol{\theta})=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\left(-\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{1}{\langle m\rangle_t}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right)m_t+\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}-\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{1}{\langle s\rangle_t}\right)s_t\right]\tag{2.8}$$
and satisfies the sufficient condition for optimality
$$E\left[\frac{\partial\mathbf{g}_Q(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\,\Big|\,\mathcal{F}_{n-1}^y\right]=\operatorname{Cov}\left(\mathbf{g}_Q(\boldsymbol{\theta}),\mathbf{g}_Q^*(\boldsymbol{\theta})\mid\mathcal{F}_{n-1}^y\right)K,\qquad \forall\,\mathbf{g}_Q(\boldsymbol{\theta})\in\mathcal{G}_Q,\tag{2.9}$$
where $K$ is a constant matrix. Hence, $\mathbf{g}_Q^*(\boldsymbol{\theta})$ is optimal in the class $\mathcal{G}_Q$, and part (a) follows. Since $m_t$ and $\psi_t$ are orthogonal, the information $\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})+\mathbf{I}_{\mathbf{g}_\Psi^*}(\boldsymbol{\theta})$, and part (b) follows; parts (c) and (d) follow by subtraction. Hence, for each component $\theta_i$, $i=1,\ldots,p$, neither $g_M^*(\theta_i)$ nor $g_S^*(\theta_i)$ is fully informative; that is, $I_{g_Q^*}(\theta_i)\ge I_{g_M^*}(\theta_i)$ and $I_{g_Q^*}(\theta_i)\ge I_{g_S^*}(\theta_i)$.

Corollary 2.2. When the conditional skewness $\gamma_t$ and excess kurtosis $\kappa_t$ are constants $\gamma$ and $\kappa$, the optimal quadratic estimating function and associated information, based on the martingale differences $m_t=y_t-\mu_t$ and $s_t=m_t^2-\sigma_t^2$, are given by
$$\mathbf{g}_Q^*(\boldsymbol{\theta})=\left(1-\frac{\gamma^2}{\kappa+2}\right)^{-1}\sum_{t=1}^n\frac{1}{\sigma_t^3}\left[\left(-\sigma_t\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}+\frac{\gamma}{\kappa+2}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\right)m_t+\frac{1}{\kappa+2}\left(\gamma\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}-\frac{1}{\sigma_t}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\right)s_t\right],$$
$$\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\left(1-\frac{\gamma^2}{\kappa+2}\right)^{-1}\left[\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})+\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})-\frac{\gamma}{\kappa+2}\sum_{t=1}^n\frac{1}{\sigma_t^3}\left(\frac{\partial\mu_t}{\partial\boldsymbol{\theta}}\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}'}+\frac{\partial\sigma_t^2}{\partial\boldsymbol{\theta}}\frac{\partial\mu_t}{\partial\boldsymbol{\theta}'}\right)\right].\tag{2.10}$$
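Corollary 2.2 translates directly into a computational routine: given the conditional mean and variance series, their derivatives, and the constants $\gamma$ and $\kappa$, the quadratic estimating function is a weighted sum of the $m_t$ and $s_t$. The following Python sketch is our illustration (the callable-based signature is an assumption, and a scalar parameter is used for simplicity):

```python
import numpy as np

def quadratic_ef(theta, y, mu, dmu, sigma2, dsigma2, gamma, kappa):
    """Optimal quadratic estimating function of Corollary 2.2 (scalar theta).

    mu(theta, y), sigma2(theta, y)   : conditional mean/variance series
    dmu(theta, y), dsigma2(theta, y) : their derivatives with respect to theta
    gamma, kappa                     : constant conditional skewness / excess kurtosis
    """
    mu_t, s2_t = mu(theta, y), sigma2(theta, y)
    dmu_t, ds2_t = dmu(theta, y), dsigma2(theta, y)
    sig = np.sqrt(s2_t)
    m = y - mu_t                        # martingale difference m_t
    s = m ** 2 - s2_t                   # martingale difference s_t
    c = 1.0 / (1.0 - gamma ** 2 / (kappa + 2.0))
    a = c * (-sig * dmu_t + gamma / (kappa + 2.0) * ds2_t) / sig ** 3   # a*_{t-1}
    b = c * (gamma * dmu_t - ds2_t / sig) / ((kappa + 2.0) * sig ** 3)  # b*_{t-1}
    return np.sum(a * m + b * s)
```

Solving `quadratic_ef(theta, ...) = 0` with, for example, `scipy.optimize.brentq` then yields the estimator.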

3. Applications

3.1. Autoregressive Conditional Duration (ACD) Models

There is growing interest in the analysis of intraday financial data such as transaction and quote data, which have increasingly been made available by many stock exchanges. Unlike closing prices, which are measured daily, monthly, or yearly, intraday or high-frequency data tend to be irregularly spaced, and the durations between events are themselves random variables. The autoregressive conditional duration (ACD) process of Engle and Russell [10] was proposed to model such durations and to study the dynamic structure of the adjusted durations $x_i=t_i-t_{i-1}$, where $t_i$ is the time of the $i$th transaction. The crucial assumption underlying the ACD model is that the time dependence is described by a function $\psi_i$, the conditional expectation of the adjusted duration between the $(i-1)$th and the $i$th trades. The basic ACD model is defined as
$$x_i=\psi_i\varepsilon_i,\qquad \psi_i=E\left[x_i\mid\mathcal{F}_{t_{i-1}}^x\right],\tag{3.1}$$
where the $\varepsilon_i$ are i.i.d. nonnegative random variables with density function $f(\cdot)$ and unit mean, and $\mathcal{F}_{t_{i-1}}^x$ is the information available at the $(i-1)$th trade; $\varepsilon_i$ is assumed independent of $\mathcal{F}_{t_{i-1}}^x$. Different distributions of $\varepsilon_i$ and specifications of $\psi_i$ produce different types of ACD models. In this paper, we discuss the specific class known as the ACD($p$, $q$) model, given by
$$x_t=\psi_t\varepsilon_t,\qquad \psi_t=\omega+\sum_{j=1}^pa_jx_{t-j}+\sum_{j=1}^qb_j\psi_{t-j},\tag{3.2}$$
where $\omega>0$, $a_j>0$, $b_j>0$, and $\sum_{j=1}^{\max(p,q)}(a_j+b_j)<1$. We assume that the $\varepsilon_t$ are i.i.d. nonnegative random variables with mean $\mu_\varepsilon$, variance $\sigma_\varepsilon^2$, skewness $\gamma_\varepsilon$, and excess kurtosis $\kappa_\varepsilon$. In order to estimate the parameter vector $\boldsymbol{\theta}=(\omega,a_1,\ldots,a_p,b_1,\ldots,b_q)'$, we use the estimating function approach. For this model, the conditional moments are $\mu_t=\mu_\varepsilon\psi_t$, $\sigma_t^2=\sigma_\varepsilon^2\psi_t^2$, $\gamma_t=\gamma_\varepsilon$, and $\kappa_t=\kappa_\varepsilon$. Let $m_t=x_t-\mu_t$ and $s_t=m_t^2-\sigma_t^2$ be the sequences of martingale differences, so that $\langle m\rangle_t=\sigma_\varepsilon^2\psi_t^2$, $\langle s\rangle_t=\sigma_\varepsilon^4(\kappa_\varepsilon+2)\psi_t^4$, and $\langle m,s\rangle_t=\sigma_\varepsilon^3\gamma_\varepsilon\psi_t^3$. The optimal estimating function and associated information based on $m_t$ are $\mathbf{g}_M^*(\boldsymbol{\theta})=-(\mu_\varepsilon/\sigma_\varepsilon^2)\sum_{t=1}^n(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})m_t$ and $\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})=(\mu_\varepsilon^2/\sigma_\varepsilon^2)\sum_{t=1}^n(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})(\partial\psi_t/\partial\boldsymbol{\theta})'$. The optimal estimating function and associated information based on $s_t$ are $\mathbf{g}_S^*(\boldsymbol{\theta})=-\left(2/\left(\sigma_\varepsilon^2(\kappa_\varepsilon+2)\right)\right)\sum_{t=1}^n(1/\psi_t^3)(\partial\psi_t/\partial\boldsymbol{\theta})s_t$ and $\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})=(4/(\kappa_\varepsilon+2))\sum_{t=1}^n(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})(\partial\psi_t/\partial\boldsymbol{\theta})'$. Then, by Corollary 2.2, the optimal quadratic estimating function and associated information are given by
$$\mathbf{g}_Q^*(\boldsymbol{\theta})=\frac{1}{\sigma_\varepsilon^2\left(\kappa_\varepsilon+2-\gamma_\varepsilon^2\right)}\sum_{t=1}^n\left[\frac{2\sigma_\varepsilon\gamma_\varepsilon-\mu_\varepsilon(\kappa_\varepsilon+2)}{\psi_t^2}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}}m_t+\frac{\mu_\varepsilon\gamma_\varepsilon-2\sigma_\varepsilon}{\sigma_\varepsilon\psi_t^3}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}}s_t\right],$$
$$\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\left(1-\frac{\gamma_\varepsilon^2}{\kappa_\varepsilon+2}\right)^{-1}\left[\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})+\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})-\frac{4\mu_\varepsilon\gamma_\varepsilon}{\sigma_\varepsilon(\kappa_\varepsilon+2)}\sum_{t=1}^n\frac{1}{\psi_t^2}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}'}\right]=\frac{4\sigma_\varepsilon^2+\mu_\varepsilon^2(\kappa_\varepsilon+2)-4\mu_\varepsilon\sigma_\varepsilon\gamma_\varepsilon}{\sigma_\varepsilon^2\left(\kappa_\varepsilon+2-\gamma_\varepsilon^2\right)}\sum_{t=1}^n\frac{1}{\psi_t^2}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}'},\tag{3.3}$$
the information gain in using $\mathbf{g}_Q^*(\boldsymbol{\theta})$ over $\mathbf{g}_M^*(\boldsymbol{\theta})$ is
$$\frac{\left(2\sigma_\varepsilon-\mu_\varepsilon\gamma_\varepsilon\right)^2}{\sigma_\varepsilon^2\left(\kappa_\varepsilon+2-\gamma_\varepsilon^2\right)}\sum_{t=1}^n\frac{1}{\psi_t^2}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}'},\tag{3.4}$$
and the information gain in using $\mathbf{g}_Q^*(\boldsymbol{\theta})$ over $\mathbf{g}_S^*(\boldsymbol{\theta})$ is
$$\frac{\left(\mu_\varepsilon(\kappa_\varepsilon+2)-2\sigma_\varepsilon\gamma_\varepsilon\right)^2}{\sigma_\varepsilon^2\left(\kappa_\varepsilon+2-\gamma_\varepsilon^2\right)(\kappa_\varepsilon+2)}\sum_{t=1}^n\frac{1}{\psi_t^2}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}}\frac{\partial\psi_t}{\partial\boldsymbol{\theta}'},\tag{3.5}$$
both of which are nonnegative definite.
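As a numerical illustration (our sketch, with made-up parameter values), the scalar factors multiplying $\sum_{t}(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})(\partial\psi_t/\partial\boldsymbol{\theta})'$ in (3.3)-(3.5) can be computed directly from the innovation moments, alongside a simulated ACD(1,1) path:

```python
import numpy as np

rng = np.random.default_rng(1)
omega, a, b, n = 0.2, 0.3, 0.5, 1000        # illustrative ACD(1,1) parameters

# Simulate ACD(1,1) with unit-mean exponential innovations
x, psi = np.empty(n), np.empty(n)
psi[0] = omega / (1 - a - b)                # start at the unconditional mean
x[0] = psi[0] * rng.exponential()
for t in range(1, n):
    psi[t] = omega + a * x[t - 1] + b * psi[t - 1]
    x[t] = psi[t] * rng.exponential()

# Exponential innovations: mu=1, sigma=1, gamma=2, kappa=6 (excess kurtosis)
mu_e, sig_e, gam_e, kap_e = 1.0, 1.0, 2.0, 6.0

# Scalar coefficients in (3.3)-(3.5)
c_Q = (4 * sig_e**2 + mu_e**2 * (kap_e + 2) - 4 * mu_e * sig_e * gam_e) / \
      (sig_e**2 * (kap_e + 2 - gam_e**2))
gain_over_M = (2 * sig_e - mu_e * gam_e)**2 / (sig_e**2 * (kap_e + 2 - gam_e**2))
gain_over_S = (mu_e * (kap_e + 2) - 2 * sig_e * gam_e)**2 / \
              (sig_e**2 * (kap_e + 2 - gam_e**2) * (kap_e + 2))
print(c_Q, gain_over_M, gain_over_S)        # 1.0, 0.0, 0.5 for the exponential case

# omega-component of the common sum, via dpsi_t/domega = 1 + b * dpsi_{t-1}/domega
dpsi = np.empty(n)
dpsi[0] = 1.0 / (1 - b)
for t in range(1, n):
    dpsi[t] = 1.0 + b * dpsi[t - 1]
S = np.sum((dpsi / psi) ** 2)
print(c_Q * S, gain_over_M * S, gain_over_S * S)   # I_gQ and the gains (3.4)-(3.5)
```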

When $\varepsilon_t$ follows an exponential distribution with rate $\lambda$, we have $\mu_\varepsilon=1/\lambda$, $\sigma_\varepsilon^2=1/\lambda^2$, $\gamma_\varepsilon=2$, and $\kappa_\varepsilon=6$ (the exponential distribution has skewness $2$ and excess kurtosis $6$). Then $\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})=\sum_{t=1}^n(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})(\partial\psi_t/\partial\boldsymbol{\theta})'$, $\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})=(1/2)\sum_{t=1}^n(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})(\partial\psi_t/\partial\boldsymbol{\theta})'$, and $\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\sum_{t=1}^n(1/\psi_t^2)(\partial\psi_t/\partial\boldsymbol{\theta})(\partial\psi_t/\partial\boldsymbol{\theta})'$, and hence $\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\mathbf{I}_{\mathbf{g}_M^*}(\boldsymbol{\theta})>\mathbf{I}_{\mathbf{g}_S^*}(\boldsymbol{\theta})$.
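These values can be verified by direct substitution into (3.3): with $\mu_\varepsilon=1/\lambda$, $\sigma_\varepsilon=1/\lambda$, $\gamma_\varepsilon=2$, and $\kappa_\varepsilon=6$,
$$\frac{4\sigma_\varepsilon^2+\mu_\varepsilon^2(\kappa_\varepsilon+2)-4\mu_\varepsilon\sigma_\varepsilon\gamma_\varepsilon}{\sigma_\varepsilon^2\left(\kappa_\varepsilon+2-\gamma_\varepsilon^2\right)}=\frac{4/\lambda^2+8/\lambda^2-8/\lambda^2}{(1/\lambda^2)(8-4)}=1,\qquad \frac{4}{\kappa_\varepsilon+2}=\frac{4}{8}=\frac{1}{2},$$
which gives $\mathbf{I}_{\mathbf{g}_Q^*}=\mathbf{I}_{\mathbf{g}_M^*}$ and $\mathbf{I}_{\mathbf{g}_S^*}=(1/2)\,\mathbf{I}_{\mathbf{g}_M^*}$; note also that the gain (3.4) vanishes because $2\sigma_\varepsilon-\mu_\varepsilon\gamma_\varepsilon=0$ in this case.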

3.2. Random Coefficient Autoregressive Models

In this section, we investigate the properties of quadratic estimating functions for random coefficient autoregressive (RCA) time series models, which were first introduced by Nicholls and Quinn [11].

Consider the RCA model
$$y_t=(\theta+b_t)y_{t-1}+\varepsilon_t,\tag{3.6}$$
where $\{b_t\}$ and $\{\varepsilon_t\}$ are uncorrelated zero-mean processes with unknown variance $\sigma_b^2$ and variance $\sigma_\varepsilon^2=\sigma_\varepsilon^2(\theta)$ depending on the unknown parameter $\theta$, respectively. Further, we denote the skewness and excess kurtosis of $\{b_t\}$ by $\gamma_b$ and $\kappa_b$, which are assumed known, and those of $\{\varepsilon_t\}$ by $\gamma_\varepsilon(\theta)$ and $\kappa_\varepsilon(\theta)$. In model (3.6), both the parameter $\theta$ and $\beta=\sigma_b^2$ need to be estimated; letting $\boldsymbol{\theta}=(\theta,\beta)'$, we discuss the joint estimation of $\theta$ and $\beta$. In this model, the conditional mean is $\mu_t=\theta y_{t-1}$ and the conditional variance is $\sigma_t^2=\beta y_{t-1}^2+\sigma_\varepsilon^2(\theta)$, so the parameter $\theta$ appears simultaneously in the mean and the variance. Let $m_t=y_t-\mu_t$ and $s_t=m_t^2-\sigma_t^2$, so that $\langle m\rangle_t=y_{t-1}^2\sigma_b^2+\sigma_\varepsilon^2$, $\langle s\rangle_t=y_{t-1}^4\sigma_b^4(\kappa_b+2)+\sigma_\varepsilon^4(\kappa_\varepsilon+2)+4y_{t-1}^2\sigma_b^2\sigma_\varepsilon^2$, and $\langle m,s\rangle_t=y_{t-1}^3\sigma_b^3\gamma_b+\sigma_\varepsilon^3\gamma_\varepsilon$. Then the conditional skewness is $\gamma_t=\langle m,s\rangle_t/\sigma_t^3$, and the conditional excess kurtosis is $\kappa_t=\langle s\rangle_t/\sigma_t^4-2$.

Since $\partial\mu_t/\partial\boldsymbol{\theta}=(y_{t-1},0)'$ and $\partial\sigma_t^2/\partial\boldsymbol{\theta}=(\partial\sigma_\varepsilon^2/\partial\theta,\ y_{t-1}^2)'$, by applying Theorem 2.1, the optimal quadratic estimating function for $\theta$ and $\beta$ based on the martingale differences $m_t$ and $s_t$ is given by $\mathbf{g}_Q^*(\boldsymbol{\theta})=\sum_{t=1}^n(\mathbf{a}_{t-1}^*m_t+\mathbf{b}_{t-1}^*s_t)$, where
$$\mathbf{a}_{t-1}^*=\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\begin{pmatrix}-\dfrac{y_{t-1}}{\langle m\rangle_t}+\dfrac{\partial\sigma_\varepsilon^2}{\partial\theta}\dfrac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\\[2mm] y_{t-1}^2\dfrac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\end{pmatrix},\qquad
\mathbf{b}_{t-1}^*=\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\begin{pmatrix}y_{t-1}\dfrac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}-\dfrac{\partial\sigma_\varepsilon^2}{\partial\theta}\dfrac{1}{\langle s\rangle_t}\\[2mm] -\dfrac{y_{t-1}^2}{\langle s\rangle_t}\end{pmatrix}.\tag{3.7}$$
Hence, the component quadratic estimating function for $\theta$ is
$$g_Q^*(\theta)=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\left(-\frac{y_{t-1}}{\langle m\rangle_t}+\frac{\partial\sigma_\varepsilon^2}{\partial\theta}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right)m_t+\left(y_{t-1}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}-\frac{\partial\sigma_\varepsilon^2}{\partial\theta}\frac{1}{\langle s\rangle_t}\right)s_t\right],\tag{3.8}$$
and the component quadratic estimating function for $\beta$ is
$$g_Q^*(\beta)=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[y_{t-1}^2\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}m_t-\frac{y_{t-1}^2}{\langle s\rangle_t}s_t\right].\tag{3.9}$$
Moreover, the information matrix of the optimal quadratic estimating function for $\theta$ and $\beta$ is given by
$$\mathbf{I}_{\mathbf{g}_Q^*}(\boldsymbol{\theta})=\begin{pmatrix}I_{\theta\theta}&I_{\theta\beta}\\ I_{\beta\theta}&I_{\beta\beta}\end{pmatrix},\tag{3.10}$$
where
$$I_{\theta\theta}=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\frac{y_{t-1}^2}{\langle m\rangle_t}+\left(\frac{\partial\sigma_\varepsilon^2}{\partial\theta}\right)^2\frac{1}{\langle s\rangle_t}-2\frac{\partial\sigma_\varepsilon^2}{\partial\theta}y_{t-1}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right],\tag{3.11}$$
$$I_{\theta\beta}=I_{\beta\theta}=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\left[\frac{\partial\sigma_\varepsilon^2}{\partial\theta}\frac{1}{\langle s\rangle_t}-y_{t-1}\frac{\langle m,s\rangle_t}{\langle m\rangle_t\langle s\rangle_t}\right]y_{t-1}^2,\tag{3.12}$$
$$I_{\beta\beta}=\sum_{t=1}^n\left(1-\frac{\langle m,s\rangle_t^2}{\langle m\rangle_t\langle s\rangle_t}\right)^{-1}\frac{y_{t-1}^4}{\langle s\rangle_t}.\tag{3.13}$$

Considering the parameter $\theta$ alone, the conditional least squares (CLS) estimating function and associated information are $g_{\mathrm{CLS}}(\theta)=\sum_{t=1}^ny_{t-1}m_t$ and $I_{\mathrm{CLS}}(\theta)=\left(\sum_{t=1}^ny_{t-1}^2\right)^2\big/\sum_{t=1}^ny_{t-1}^2\langle m\rangle_t$. The optimal martingale estimating function and associated information based on $m_t$ are $g_M^*(\theta)=-\sum_{t=1}^ny_{t-1}m_t/\langle m\rangle_t$ and $I_{g_M^*}(\theta)=\sum_{t=1}^ny_{t-1}^2/\langle m\rangle_t$. Moreover, the Cauchy-Schwarz inequality
$$\left(\sum_{t=1}^ny_{t-1}^2\langle m\rangle_t\right)\left(\sum_{t=1}^n\frac{y_{t-1}^2}{\langle m\rangle_t}\right)\ge\left(\sum_{t=1}^ny_{t-1}^2\right)^2\tag{3.14}$$
implies that $I_{\mathrm{CLS}}(\theta)\le I_{g_M^*}(\theta)$; hence the optimal estimating function is more informative than the conditional least squares one. The optimal quadratic estimating function based on the martingale differences $m_t$ and $s_t$ and its information are given by (3.8) and (3.11), respectively, and by Theorem 2.1(c) the information of $g_Q^*(\theta)$ is at least that of $g_M^*(\theta)$. Therefore, for the RCA model, $I_{\mathrm{CLS}}(\theta)\le I_{g_M^*}(\theta)\le I_{g_Q^*}(\theta)$, and hence the estimate obtained by solving the optimal quadratic estimating equation is more efficient than both the CLS estimate and the estimate obtained by solving the optimal linear estimating equation.
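A small simulation (our sketch; the parameter values are arbitrary) illustrates the inequality $I_{\mathrm{CLS}}(\theta)\le I_{g_M^*}(\theta)$ for an RCA(1) path:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sig_b2, sig_e2, n = 0.5, 0.1, 1.0, 2000   # illustrative RCA(1) parameters

# Simulate y_t = (theta + b_t) y_{t-1} + eps_t with Gaussian b_t and eps_t
y = np.zeros(n)
for t in range(1, n):
    y[t] = (theta + rng.normal(0.0, np.sqrt(sig_b2))) * y[t - 1] \
           + rng.normal(0.0, np.sqrt(sig_e2))

ylag2 = y[:-1] ** 2
m_var = ylag2 * sig_b2 + sig_e2                  # <m>_t = y_{t-1}^2 sigma_b^2 + sigma_e^2

I_cls = np.sum(ylag2) ** 2 / np.sum(ylag2 * m_var)   # conditional least squares
I_gm = np.sum(ylag2 / m_var)                         # optimal linear EF
print(I_cls <= I_gm)                                 # True, by Cauchy-Schwarz (3.14)
```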

3.3. Doubly Stochastic Time Series Model

The random coefficient autoregressive models discussed in the previous section are special cases of what Tjøstheim [12] refers to as doubly stochastic time series models. In the nonlinear case, these models are given by
$$y_t=\theta_tf\left(t,\mathcal{F}_{t-1}^y\right)+\varepsilon_t,\tag{3.15}$$
where $\{\theta+b_t\}$ of (3.6) is replaced by a more general stochastic sequence $\{\theta_t\}$ and $y_{t-1}$ is replaced by a function of the past, $f(t,\mathcal{F}_{t-1}^y)$. Suppose that $\{\theta_t\}$ is a moving average sequence of the form
$$\theta_t=\theta+a_t+a_{t-1},\tag{3.16}$$
where $\{a_t\}$ consists of square-integrable independent random variables with mean zero and variance $\sigma_a^2$. We further assume that $\{\varepsilon_t\}$ and $\{a_t\}$ are independent; then $E[y_t\mid\mathcal{F}_{t-1}^y]$ depends on the posterior mean $u_t=E[a_t\mid\mathcal{F}_t^y]$ and variance $v_t=E[(a_t-u_t)^2\mid\mathcal{F}_t^y]$ of $a_t$. Under the normality assumption on $\{\varepsilon_t\}$ and $\{a_t\}$, and the initial condition $y_0=0$, $u_t$ and $v_t$ satisfy the following Kalman-like recursive algorithms (see [13, page 439]):
$$u_t(\theta)=\frac{\sigma_a^2f\left(t,\mathcal{F}_{t-1}^y\right)\left[y_t-\left(\theta+u_{t-1}\right)f\left(t,\mathcal{F}_{t-1}^y\right)\right]}{\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}\right)},\qquad v_t(\theta)=\sigma_a^2-\frac{\sigma_a^4f^2\left(t,\mathcal{F}_{t-1}^y\right)}{\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}\right)},\tag{3.17}$$
where $u_0=0$ and $v_0=\sigma_a^2$. Hence, the conditional mean and variance of $y_t$ are given by
$$\mu_t(\theta)=\left(\theta+u_{t-1}(\theta)\right)f\left(t,\mathcal{F}_{t-1}^y\right),\qquad \sigma_t^2(\theta)=\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right),\tag{3.18}$$
which can be computed recursively.
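The recursions (3.17) are straightforward to implement. The sketch below is ours (the signature is illustrative), with $f$ passed in as a function of $t$ and the previous observation, a simple special case of $f(t,\mathcal{F}_{t-1}^y)$:

```python
import numpy as np

def posterior_moments(y, theta, sig_a2, sig_e2, f):
    """Kalman-like recursions (3.17) for u_t and v_t; f(t, y_prev) is the
    user-supplied regression function of the past."""
    n = len(y)
    u, v = np.zeros(n + 1), np.zeros(n + 1)
    v[0] = sig_a2                       # u_0 = 0, v_0 = sigma_a^2
    yprev = 0.0                         # initial condition y_0 = 0
    for t in range(1, n + 1):
        ft = f(t, yprev)                # f(t, .) evaluated from the past
        d = sig_e2 + ft ** 2 * (sig_a2 + v[t - 1])
        u[t] = sig_a2 * ft * (y[t - 1] - (theta + u[t - 1]) * ft) / d
        v[t] = sig_a2 - sig_a2 ** 2 * ft ** 2 / d
        yprev = y[t - 1]
    return u, v
```

For example, `f = lambda t, yp: yp` recovers an RCA-type specification.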

Let $m_t=y_t-\mu_t$ and $s_t=m_t^2-\sigma_t^2$; then $\{m_t\}$ and $\{s_t\}$ are sequences of martingale differences. We can derive that $\langle m,s\rangle_t=0$, $\langle m\rangle_t=\sigma_e^2(\theta)+f^2(t,\mathcal{F}_{t-1}^y)(\sigma_a^2+v_{t-1}(\theta))$, and $\langle s\rangle_t=2\sigma_e^4(\theta)+4f^2(t,\mathcal{F}_{t-1}^y)\sigma_e^2(\theta)(\sigma_a^2+v_{t-1}(\theta))+2f^4(t,\mathcal{F}_{t-1}^y)(\sigma_a^2+v_{t-1}(\theta))^2=2\langle m\rangle_t^2$. The optimal estimating function and associated information based on $m_t$ are given by
$$g_M^*(\theta)=-\sum_{t=1}^nf\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)\frac{m_t}{\langle m\rangle_t},\qquad I_{g_M^*}(\theta)=\sum_{t=1}^nf^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)^2\frac{1}{\langle m\rangle_t}.\tag{3.19}$$
Then, the Cauchy-Schwarz inequality
$$\left(\sum_{t=1}^nf^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)^2\langle m\rangle_t\right)\left(\sum_{t=1}^nf^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)^2\frac{1}{\langle m\rangle_t}\right)\ge\left(\sum_{t=1}^nf^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)^2\right)^2\tag{3.20}$$
implies that
$$I_{\mathrm{CLS}}(\theta)=\frac{\left(\sum_{t=1}^nf^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\partial u_{t-1}(\theta)/\partial\theta\right)^2\right)^2}{\sum_{t=1}^nf^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\partial u_{t-1}(\theta)/\partial\theta\right)^2\langle m\rangle_t}\le I_{g_M^*}(\theta);\tag{3.21}$$
that is, the optimal linear estimating function $g_M^*(\theta)$ is more informative than the conditional least squares estimating function $g_{\mathrm{CLS}}(\theta)$.

The optimal estimating function and the associated information based on $s_t$ are given by
$$g_S^*(\theta)=-\sum_{t=1}^n\left(\frac{\partial\sigma_e^2(\theta)}{\partial\theta}+f^2\left(t,\mathcal{F}_{t-1}^y\right)\frac{\partial v_{t-1}(\theta)}{\partial\theta}\right)\frac{s_t}{\langle s\rangle_t},\qquad I_{g_S^*}(\theta)=\sum_{t=1}^n\left(\frac{\partial\sigma_e^2(\theta)}{\partial\theta}+f^2\left(t,\mathcal{F}_{t-1}^y\right)\frac{\partial v_{t-1}(\theta)}{\partial\theta}\right)^2\frac{1}{\langle s\rangle_t}.\tag{3.22}$$
Hence, by Theorem 2.1 (with $\langle m,s\rangle_t=0$), the optimal quadratic estimating function is given by
$$g_Q^*(\theta)=-\sum_{t=1}^n\frac{1}{\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)}\left[f\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)m_t+\frac{\partial\sigma_e^2(\theta)/\partial\theta+f^2\left(t,\mathcal{F}_{t-1}^y\right)\partial v_{t-1}(\theta)/\partial\theta}{2\left(\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)\right)}s_t\right],\tag{3.23}$$
and the associated information, $I_{g_Q^*}(\theta)=I_{g_M^*}(\theta)+I_{g_S^*}(\theta)$, is given by
$$I_{g_Q^*}(\theta)=\sum_{t=1}^n\frac{1}{\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)}\left[f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\frac{\partial u_{t-1}(\theta)}{\partial\theta}\right)^2+\frac{\left(\partial\sigma_e^2(\theta)/\partial\theta+f^2\left(t,\mathcal{F}_{t-1}^y\right)\partial v_{t-1}(\theta)/\partial\theta\right)^2}{2\left(\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)\right)}\right].\tag{3.24}$$
The information of $g_Q^*$ dominates that of $g_M^*$ and of $g_S^*$, and hence the estimate obtained by solving the optimal quadratic estimating equation is more efficient than the CLS estimate and the estimate obtained by solving the optimal linear estimating equation. Moreover, the relations
$$\frac{\partial u_t(\theta)}{\partial\theta}=-\frac{\sigma_a^2f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(1+\partial u_{t-1}(\theta)/\partial\theta\right)}{\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)}-\frac{\sigma_a^2f\left(t,\mathcal{F}_{t-1}^y\right)\left[y_t-\left(\theta+u_{t-1}(\theta)\right)f\left(t,\mathcal{F}_{t-1}^y\right)\right]\left[\partial\sigma_e^2(\theta)/\partial\theta+f^2\left(t,\mathcal{F}_{t-1}^y\right)\partial v_{t-1}(\theta)/\partial\theta\right]}{\left[\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)\right]^2},$$
$$\frac{\partial v_t(\theta)}{\partial\theta}=\frac{\sigma_a^4f^2\left(t,\mathcal{F}_{t-1}^y\right)\left[\partial\sigma_e^2(\theta)/\partial\theta+f^2\left(t,\mathcal{F}_{t-1}^y\right)\partial v_{t-1}(\theta)/\partial\theta\right]}{\left[\sigma_e^2(\theta)+f^2\left(t,\mathcal{F}_{t-1}^y\right)\left(\sigma_a^2+v_{t-1}(\theta)\right)\right]^2}\tag{3.25}$$
can be applied to calculate the estimating functions and associated information recursively.
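A compact way to use (3.17), (3.24), and (3.25) together is a single pass that carries $u_t$, $v_t$, and their $\theta$-derivatives while accumulating the information. A sketch under the same assumptions as above (scalar $\theta$, $f$ depending on the previous observation, and $\sigma_e^2$ and its derivative supplied as numbers):

```python
def ef_recursions(y, theta, sig_a2, sig_e2, dsig_e2, f):
    """Run (3.17) and (3.25) jointly and accumulate I_{g_Q}(theta) of (3.24)."""
    u, v, du, dv = 0.0, sig_a2, 0.0, 0.0
    info, yprev = 0.0, 0.0                      # initial condition y_0 = 0
    for t in range(1, len(y) + 1):
        ft = f(t, yprev)
        d = sig_e2 + ft ** 2 * (sig_a2 + v)     # <m>_t
        num = dsig_e2 + ft ** 2 * dv            # derivative term in (3.22)-(3.25)
        resid = y[t - 1] - (theta + u) * ft
        # summand of (3.24), using the time-(t-1) derivatives
        info += (ft ** 2 * (1.0 + du) ** 2 + num ** 2 / (2.0 * d)) / d
        # update the derivatives (3.25), then the moments (3.17)
        du_new = (-sig_a2 * ft ** 2 * (1.0 + du) / d
                  - sig_a2 * ft * resid * num / d ** 2)
        dv_new = sig_a2 ** 2 * ft ** 2 * num / d ** 2
        u = sig_a2 * ft * resid / d
        v = sig_a2 - sig_a2 ** 2 * ft ** 2 / d
        du, dv = du_new, dv_new
        yprev = y[t - 1]
    return info
```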

3.4. Regression Model with ARCH Errors

Consider a regression model with ARCH($s$) errors $\varepsilon_t$ of the form
$$y_t=\mathbf{x}_t'\boldsymbol{\beta}+\varepsilon_t,\tag{3.26}$$
such that $E[\varepsilon_t\mid\mathcal{F}_{t-1}^y]=0$ and $\operatorname{Var}(\varepsilon_t\mid\mathcal{F}_{t-1}^y)=h_t=\alpha_0+\alpha_1\varepsilon_{t-1}^2+\cdots+\alpha_s\varepsilon_{t-s}^2$. In this model, the conditional mean is $\mu_t=\mathbf{x}_t'\boldsymbol{\beta}$, the conditional variance is $\sigma_t^2=h_t$, and the conditional skewness and excess kurtosis are assumed to be constants $\gamma$ and $\kappa$, respectively. It follows from Theorem 2.1 that the optimal component quadratic estimating functions for the parameter vector $\boldsymbol{\theta}=(\beta_1,\ldots,\beta_r,\alpha_0,\ldots,\alpha_s)'=(\boldsymbol{\beta}',\boldsymbol{\alpha}')'$ are
$$\mathbf{g}_Q^*(\boldsymbol{\beta})=\frac{1}{\kappa+2-\gamma^2}\sum_{t=1}^n\frac{1}{h_t^2}\left[-\left(h_t(\kappa+2)\,\mathbf{x}_t+2h_t^{1/2}\gamma\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)m_t+\left(h_t^{1/2}\gamma\,\mathbf{x}_t+2\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)s_t\right],$$
$$\mathbf{g}_Q^*(\boldsymbol{\alpha})=\frac{1}{\kappa+2-\gamma^2}\sum_{t=1}^n\frac{1}{h_t^2}\left[h_t^{1/2}\gamma\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right)'m_t-\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right)'s_t\right],\tag{3.27}$$
where $m_t=y_t-\mathbf{x}_t'\boldsymbol{\beta}$ and $s_t=m_t^2-h_t$. Moreover, the information matrix for $\boldsymbol{\theta}=(\boldsymbol{\beta}',\boldsymbol{\alpha}')'$ is given by
$$\mathbf{I}=\left(1-\frac{\gamma^2}{\kappa+2}\right)^{-1}\begin{pmatrix}\mathbf{I}_{\boldsymbol{\beta\beta}}&\mathbf{I}_{\boldsymbol{\beta\alpha}}\\ \mathbf{I}_{\boldsymbol{\alpha\beta}}&\mathbf{I}_{\boldsymbol{\alpha\alpha}}\end{pmatrix},\tag{3.28}$$
where
$$\mathbf{I}_{\boldsymbol{\beta\beta}}=\sum_{t=1}^n\left[\frac{\mathbf{x}_t\mathbf{x}_t'}{h_t}+\frac{4}{h_t^2(\kappa+2)}\left(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)\left(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)'+\frac{2\gamma}{h_t^{3/2}(\kappa+2)}\left(\mathbf{x}_t\Big(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\Big)'+\Big(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\Big)\mathbf{x}_t'\right)\right],$$
$$\mathbf{I}_{\boldsymbol{\beta\alpha}}=\mathbf{I}_{\boldsymbol{\alpha\beta}}'=-\sum_{t=1}^n\frac{1}{h_t^2(\kappa+2)}\left(h_t^{1/2}\gamma\,\mathbf{x}_t+2\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right),$$
$$\mathbf{I}_{\boldsymbol{\alpha\alpha}}=\sum_{t=1}^n\frac{1}{h_t^2(\kappa+2)}\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right)'\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right).\tag{3.29}$$

It is of interest to note that when the $\{\varepsilon_t\}$ are conditionally Gaussian, so that $\gamma=0$ and $\kappa=0$, we have
$$E\left[\sum_{t=1}^n\frac{1}{h_t^2}\left(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right)\right]=\mathbf{0},\tag{3.30}$$
and the optimal quadratic estimating functions for $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$, based on the martingale differences $m_t=y_t-\mathbf{x}_t'\boldsymbol{\beta}$ and $s_t=m_t^2-h_t$, are, respectively, given by
$$\mathbf{g}_Q^*(\boldsymbol{\beta})=-\sum_{t=1}^n\frac{\mathbf{x}_t}{h_t}m_t+\sum_{t=1}^n\frac{1}{h_t^2}\left(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)s_t,\qquad \mathbf{g}_Q^*(\boldsymbol{\alpha})=-\frac{1}{2}\sum_{t=1}^n\frac{1}{h_t^2}\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right)'s_t.\tag{3.31}$$
Moreover, the information matrix for $\boldsymbol{\theta}=(\boldsymbol{\beta}',\boldsymbol{\alpha}')'$ in (3.28) has $\mathbf{I}_{\boldsymbol{\beta\alpha}}=\mathbf{I}_{\boldsymbol{\alpha\beta}}=\mathbf{0}$,
$$\mathbf{I}_{\boldsymbol{\beta\beta}}=\sum_{t=1}^n\frac{1}{h_t^2}\left[h_t\,\mathbf{x}_t\mathbf{x}_t'+2\left(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)\left(\sum_{j=1}^s\alpha_j\mathbf{x}_{t-j}\varepsilon_{t-j}\right)'\right],\qquad \mathbf{I}_{\boldsymbol{\alpha\alpha}}=\sum_{t=1}^n\frac{1}{2h_t^2}\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right)'\left(1,\varepsilon_{t-1}^2,\ldots,\varepsilon_{t-s}^2\right).\tag{3.32}$$
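In the conditionally Gaussian case, (3.31) is easy to code. The sketch below is our illustration (the presample convention for $\varepsilon_t^2$ is an assumption); the joint root of the two returned components gives the parameter estimates:

```python
import numpy as np

def gaussian_quadratic_ef(beta, alpha, y, X):
    """Gaussian-case estimating functions (3.31) for a regression with ARCH(s)
    errors. X has rows x_t', alpha = (alpha_0, ..., alpha_s). The first s
    residuals are treated as presample values and not scored."""
    beta, alpha = np.asarray(beta, float), np.asarray(alpha, float)
    s = len(alpha) - 1
    eps = y - X @ beta                                 # residuals eps_t
    g_beta, g_alpha = np.zeros_like(beta), np.zeros_like(alpha)
    for t in range(s, len(y)):
        z = np.concatenate(([1.0], eps[t - s:t][::-1] ** 2))  # (1, eps_{t-1}^2, ..., eps_{t-s}^2)
        h = alpha @ z                                  # conditional variance h_t
        m = eps[t]                                     # m_t = y_t - x_t' beta
        s_t = m ** 2 - h                               # s_t = m_t^2 - h_t
        # dh_t/dbeta = -2 * sum_j alpha_j x_{t-j} eps_{t-j}
        dh_dbeta = -2.0 * sum(alpha[j] * X[t - j] * eps[t - j] for j in range(1, s + 1))
        g_beta += -X[t] * m / h - 0.5 * dh_dbeta * s_t / h ** 2
        g_alpha += -0.5 * z * s_t / h ** 2
    return g_beta, g_alpha
```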

4. Conclusions

In this paper, we use appropriate martingale differences and derive the general form of the optimal quadratic estimating function for the multiparameter case with dependent observations. We also show that the optimal quadratic estimating function is more informative than the estimating function used in Thavaneswaran and Abraham [2]. Following Lindsay [8], we conclude that the resulting estimates are more efficient in general. Examples based on ACD models, RCA models, doubly stochastic models, and the regression model with ARCH errors are also discussed in some detail. For RCA models and doubly stochastic models, we have shown the superiority of the approach over the CLS method.