Research Article  Open Access
Lema Logamou Seknewna, Peter Mwita Nyamuhanga, Benjamin Kyalo Muema, "Smoothed Conditional Scale Function Estimation in AR(1)-ARCH(1) Processes", Journal of Probability and Statistics, vol. 2018, Article ID 4816716, 13 pages, 2018. https://doi.org/10.1155/2018/4816716
Smoothed Conditional Scale Function Estimation in AR(1)-ARCH(1) Processes
Abstract
The estimation of the smoothed conditional scale function for time series with conditional heteroscedastic innovations was carried out by imitating kernel smoothing in the nonparametric QAR-QARCH scheme. The estimation was based on the quantile regression methodology proposed by Koenker and Bassett. The asymptotic properties of the conditional scale function estimator for this type of process were proved and its consistency was shown.
1. Introduction
Consider a Quantile Autoregressive model,
$$Y_t = q_\tau(Y_{t-1}) + \varepsilon_t, \qquad (1)$$
where $q_\tau$ is the Conditional Quantile Function of $Y_t$ given $Y_{t-1}$ and the innovation $\varepsilon_t$ is assumed to be independent and identically distributed with zero $\tau$-quantile and constant scale function; see [1]. A kernel estimator of $q_\tau$ has been determined and its consistency is shown in [2]. A bootstrap kernel estimator of $q_\tau$ was determined and shown to be consistent [3]. This research will extend [3] by assuming that the innovations follow a Quantile Autoregressive Conditional Heteroscedastic process, similar to the Autoregressive-Quantile Autoregressive Conditional Heteroscedastic process proposed in [1]:
$$Y_t = q_\tau(Y_{t-1}) + S_\tau(Y_{t-1})\,\varepsilon_t, \qquad (2)$$
where $q_\tau(\cdot)$ is the conditional quantile function of $Y_t$ given $Y_{t-1}$; $S_\tau(\cdot)$ is a conditional scale function at level $\tau$, and $\varepsilon_t$ is an independent and identically distributed (i.i.d.) error with zero $\tau$-quantile and unit scale. The function $S_\tau$ can be expressed as
$$S_\tau(x) = c_\tau\,\sigma(x), \qquad (3)$$
where $\sigma(\cdot)$ is the so-called volatility found in [4, 5], which are papers of reference on Engle's ARCH models among many others, and $c_\tau$ is a positive constant depending on $\tau$ [see [6]]. An example of this kind of function is the Autoregressive-Generalized Autoregressive Conditional Heteroscedastic AR(1)-GARCH(1,1) model,
$$Y_t = \phi Y_{t-1} + u_t, \quad u_t = \sigma_t \eta_t, \quad \sigma_t^2 = \omega + \alpha u_{t-1}^2 + \beta \sigma_{t-1}^2, \qquad (4)$$
where $|\phi| < 1$, $\omega > 0$, $\alpha \ge 0$, $\beta \ge 0$, $\alpha + \beta < 1$, and the $\eta_t$ are i.i.d. with mean 0 and variance 1. Note that the conditional mean part may also be an ARMA (see [7]). The specifications for model (4) are given in Section 4.2.
Considering other financial time series models, model (1) can be seen as a robust generalization of AR-ARCH models, introduced in [7], and their nonparametric generalizations reviewed in [8]. For instance, consider a financial time series model of AR($p$)-ARCH($p$) type,
$$Y_t = m(Y_{t-1}, \dots, Y_{t-p}) + \sigma(Y_{t-1}, \dots, Y_{t-p})\,\eta_t, \qquad (5)$$
where the $\eta_t$ are i.i.d. and $m$ and $\sigma^2$ are arbitrary functions representing, respectively, the conditional mean and conditional variance of the process.
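As an illustration, model (4) can be simulated directly. The following sketch generates an AR(1) series with GARCH(1,1) innovations; the parameter values and function name are illustrative, not taken from the paper, and the stationarity constraints above are assumed.

```python
import numpy as np

def simulate_ar1_garch11(n, phi=0.5, omega=0.1, alpha=0.2, beta=0.7, seed=0):
    """Simulate an AR(1) process whose innovations follow a GARCH(1,1).

    Assumes |phi| < 1, omega > 0, alpha, beta >= 0, and alpha + beta < 1.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    u = np.zeros(n)
    # start sigma^2 at its unconditional level omega / (1 - alpha - beta)
    sigma2 = np.full(n, omega / (1.0 - alpha - beta))
    for t in range(1, n):
        sigma2[t] = omega + alpha * u[t - 1] ** 2 + beta * sigma2[t - 1]
        u[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
        y[t] = phi * y[t - 1] + u[t]
    return y, sigma2
```

The returned `sigma2` is the conditional variance path, which plays the role of the squared volatility $\sigma^2(\cdot)$ in (3).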
The focus of this paper is to determine a smoothed estimator of the conditional scale function (CSF) and its asymptotic properties. This study is essential since volatility is inherent in many areas, for example, hydrology, finance, and weather. The volatility needs to be estimated robustly even when the moments of the distribution do not exist.
A partitioned stationary mixed time series $\{(X_t, Y_t)\}$ is considered, where the $Y_t$ are real-valued and the covariates $X_t$ are $\mathbb{R}^d$-valued. For some $\tau \in (0, 1)$, the conditional $\tau$-quantile of $Y_t$ given the past, assumed to be determined by $X_t$, is estimated. For simplicity, we assume that $d = 1$ throughout the rest of the discussion.
We derive a smoothed nonparametric estimator of the conditional scale function and show its consistency using a standard estimate of Nadaraya [9]-Watson [10] type. This estimate is obtained from the estimate of the conditional scale function in [11], which is a type of estimator that has the disadvantages of not being adaptive and of having some boundary effects, but these can be fixed by well-known techniques ([12]). It is, however, an estimator constrained to lie in $[0, 1]$ and a monotonically increasing function. This is very important to our estimation of the conditional distribution function and its inverse.
2. Methods and Estimations
Let $g$ and $f$ denote the probability density function (pdf) of $X_t$ and the joint pdf of $(X_t, Y_t)$, respectively. The dependence between the exogenous and the endogenous variables is described by the following conditional probability density function (CPDF):
$$f(y \mid x) = \frac{f(x, y)}{g(x)},$$
and the conditional cumulative distribution function (CCDF)
$$F(y \mid x) = \int_{-\infty}^{y} f(u \mid x)\,du.$$
The estimation of the conditional scale function is derived through the CCDF. However, the following assumptions and definitions (these assumptions are commonly used for kernel density estimation (KDE), bias reduction [13], asymptotic properties, and normality proofs) are necessary (see Table 1).
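The ratio defining the CPDF can be estimated by kernel density estimates of the joint and marginal densities. A minimal sketch with product Gaussian kernels follows; the kernel choice, bandwidths, and function name are illustrative assumptions.

```python
import numpy as np

def conditional_pdf(x0, y0, X, Y, hx, hy):
    """Kernel estimate of f(y0 | x0) = f_hat(x0, y0) / g_hat(x0),
    using product Gaussian kernels with bandwidths hx and hy."""
    kx = np.exp(-0.5 * ((x0 - X) / hx) ** 2) / (hx * np.sqrt(2 * np.pi))
    ky = np.exp(-0.5 * ((y0 - Y) / hy) ** 2) / (hy * np.sqrt(2 * np.pi))
    f_joint = np.mean(kx * ky)   # joint density estimate at (x0, y0)
    g_marg = np.mean(kx)         # marginal density estimate at x0
    return f_joint / g_marg
```

For independent standard normal $X$ and $Y$, the estimate at $(0, 0)$ should be close to $\varphi(0) \approx 0.399$, the standard normal density at zero.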

Assumption 1. (i) The marginal pdf $g(x)$ and the joint pdf $f(x, y)$ exist. (ii) For fixed $y$, $g(\cdot)$ and $f(\cdot, y)$ are continuous in the neighborhood of the point $x$ where the estimator is to be evaluated. (iii) The derivatives $g^{(i)}$ and $f^{(i)}(\cdot, y)$ for $i = 1, 2$ exist. (iv) $\rho_\tau$ is a convex function in its argument for fixed $\tau$. (v) The conditional density $f(y \mid x)$ exists and is continuous in the neighborhood of $x$. (vi) $F(y \mid x)$ is strictly increasing in $y$.
Assumption 2. The kernel function $K$ is (i) symmetrical: $K(u) = K(-u)$, with $\int u K(u)\,du = 0$; (ii) nonnegative and bounded: $K(u) \ge 0$ and $K(u) \le C < \infty$ for all $u$; (iii) Lipschitz: there exists $L > 0$ such that $|K(u) - K(v)| \le L|u - v|$ for all $u, v$; (iv) a pdf: $\int K(u)\,du = 1$, with $\int u^2 K(u)\,du < \infty$.
Assumption 3. The process $\{(X_t, Y_t)\}$ is strong mixing, with mixing coefficients $\alpha(n) \to 0$ as $n \to \infty$; see [14, Theorem 1.7].
Assumption 4. The sequence of smoothing parameters $\{h_n\}$ is such that $h_n \to 0$ as $n \to \infty$ and $nh_n \to \infty$.
Definition 5 (strong mixing). Let $\{Z_t\}$ be a stationary time series endowed with the $\sigma$-algebras $\mathcal{F}_{-\infty}^{t} = \sigma(Z_s,\, s \le t)$ and $\mathcal{F}_{t+k}^{\infty} = \sigma(Z_s,\, s \ge t+k)$. Define $\alpha(k)$ as
$$\alpha(k) = \sup_{A \in \mathcal{F}_{-\infty}^{t},\; B \in \mathcal{F}_{t+k}^{\infty}} \big|P(A \cap B) - P(A)P(B)\big|.$$
If $\alpha(k) \to 0$ as $k \to \infty$, then the process is strong mixing.
The results in this section are about the case when the autoregressive part of model (4) vanishes, that is, $q_\tau(x) = 0$ for any $x$. We therefore consider the model
$$Y_t = S_\tau(X_t)\,\varepsilon_t, \quad \text{with } X_t = Y_{t-1}.$$
Define the check-function as
$$\rho_\tau(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big). \qquad (10)$$
Here, $\mathbf{1}\{\cdot\}$ is the indicator function. Therefore, $\rho_\tau$ is a piecewise linear function, monotone on each side of zero. For any real random variable with distribution function $F$ and a real value $\theta$, $\rho_\tau(\cdot - \theta)$ is the asymmetric absolute value function whose amount of asymmetry depends on $\tau$; see [15]. In the case where $F$ is symmetric and $\tau = 0.5$, $\rho_\tau$ is proportional to the absolute value function and the conditional scale is the conditional median absolute deviation (CMAD) of $Y_t$. When $m$ becomes 0 in model (5), we have a purely heteroscedastic ARCH model, introduced in [16], and, for $\tau = 0.5$, the volatility, in this particular case, can be seen as a conditional scale function at that level.
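Numerically, the check-function and the fact that its expected loss is minimized at the $\tau$-quantile can be verified with a short sketch; the grid search is purely for illustration.

```python
import numpy as np

def rho(u, tau):
    """Koenker-Bassett check-function: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

# The tau-quantile of a sample minimizes the average check loss.
rng = np.random.default_rng(1)
sample = rng.standard_normal(10_000)
tau = 0.75
grid = np.linspace(-3.0, 3.0, 2001)
losses = np.array([rho(sample - s, tau).mean() for s in grid])
minimizer = grid[np.argmin(losses)]
```

The value `minimizer` should be close to `np.quantile(sample, 0.75)`, which for a standard normal sample is near 0.674.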
The check-function in (10) is Lipschitz continuous by the following theorem.
Theorem 6. Let $\rho_\tau$ be defined as in (10) and $\tau \in (0, 1)$. Then $\rho_\tau$ satisfies the Lipschitz continuity condition
$$|\rho_\tau(u) - \rho_\tau(v)| \le C_\tau\,|u - v|,$$
with the Lipschitz constant $C_\tau = \max(\tau, 1 - \tau)$, for all $u, v \in \mathbb{R}$.
Proof of Theorem 6. See the proof of Lemma 3.1 in [1, p. 74-75].
By the next theorem we show clearly why the errors in model (2) are assumed to have zero quantile and unit scale.
Theorem 7. Consider model (5) and the so-called check-function in (10); then, for $\tau \in (0, 1)$,
$$\varepsilon_t = \frac{Y_t - Q_\tau(Y_t \mid X_t)}{S_\tau(X_t)}$$
is zero quantile and unit scale. And the following equations are verifiable:
$$Q_\tau(\varepsilon_t \mid X_t) = 0, \qquad S_\tau^{\varepsilon}(X_t) = 1, \qquad (13)$$
where $S_\tau^{\varepsilon}$ denotes the conditional scale function of $\varepsilon_t$.
Proof of Theorem 7. The quantile operator is $Q_\tau(\cdot \mid \cdot)$, with well-defined properties in [1, p. 9-10]. From model (5), the conditional quantile of $Y_t$ is
$$Q_\tau(Y_t \mid X_t) = m(X_t) + \sigma(X_t)\,Q_\tau(\eta), \qquad (16)$$
where $Q_\tau(\eta)$ is the $\tau$-quantile of $\eta_t$. Then, using model (5) and (16), we get
$$Y_t - Q_\tau(Y_t \mid X_t) = \sigma(X_t)\big(\eta_t - Q_\tau(\eta)\big),$$
and the $\tau$-quantile of this difference is
$$Q_\tau\big(Y_t - Q_\tau(Y_t \mid X_t) \mid X_t\big) = \sigma(X_t)\,Q_\tau\big(\eta_t - Q_\tau(\eta)\big) = 0,$$
since the centered error $\eta_t - Q_\tau(\eta)$ has zero $\tau$-quantile. The quotient
$$\frac{Y_t - Q_\tau(Y_t \mid X_t)}{c_\tau\,\sigma(X_t)}$$
is zero quantile and unit scale and can be seen as model (2) if $q_\tau(X_t) = m(X_t) + \sigma(X_t)\,Q_\tau(\eta)$, $S_\tau(X_t) = c_\tau\,\sigma(X_t)$, and $\varepsilon_t = \big(\eta_t - Q_\tau(\eta)\big)/c_\tau$.
Now, assuming that $\varepsilon_t$ (independent of $X_t$) in model (2) has zero $\tau$-quantile, it is equivalent to write
$$Q_\tau(\varepsilon_t \mid X_t) = Q_\tau(\varepsilon_t) = 0.$$
This proves (13) for the zero-quantile part. Also, $\varepsilon_t$ has unit scale, which means that its conditional scale function is identically equal to 1.
Assuming $q_\tau = 0$, the estimator, $\hat{S}_\tau$, of the conditional scale function, $S_\tau$, is obtained through the minimization of the objective function
$$G(s \mid x) = E\big[\rho_\tau(Y_t - s) \mid X_t = x\big]. \qquad (23)$$
Thus, the conditional scale function may be obtained by minimizing $G$ with respect to $s$; that is,
$$S_\tau(x) = \arg\min_{s} E\big[\rho_\tau(Y_t - s) \mid X_t = x\big]. \qquad (24)$$
The kernel estimator of (24) at $x$ is given by
$$\hat{S}_\tau(x) = \arg\min_{s} \sum_{t=1}^{n} W_t(x)\,\rho_\tau(Y_t - s).$$
We can express the estimate of $G$ in the random design as it was developed in [17]. Let $\rho_\tau(Y - s)$ be a nonnegative function of $Y$ and $(X, Y)$ a random vector in $\mathbb{R}^2$. In the random design, the conditional expectation (23) can be rewritten as follows:
$$G(s \mid x) = \int \rho_\tau(y - s)\,f(y \mid x)\,dy = \frac{\int \rho_\tau(y - s)\,f(x, y)\,dy}{g(x)},$$
where $f(y \mid x)$ represents the conditional pdf of $Y$ given $X = x$, $f(x, y)$ is the joint pdf of the two random variables $X$ and $Y$, and $g(x)$ is the pdf of $X$. Using [9, 10] with $K_h(u) = h^{-1}K(u/h)$, a 1-dimensional rescaled kernel with bandwidth $h$, we have the following estimates of $f(x, y)$ and $g(x)$ [18]:
$$\hat{f}(x, y) = \frac{1}{n}\sum_{t=1}^{n} K_h(x - X_t)\,K_h(y - Y_t), \qquad (27)$$
$$\hat{g}(x) = \frac{1}{n}\sum_{t=1}^{n} K_h(x - X_t). \qquad (28)$$
From the estimations above, $\hat{G}(s \mid x)$, the estimate of $G(s \mid x)$, is
$$\hat{G}(s \mid x) = \frac{\int \rho_\tau(y - s)\,\hat{f}(x, y)\,dy}{\hat{g}(x)},$$
and considering the regularity conditions of $K$ in Assumption 2 and also the fact that $\int K(u)\,du = 1$, we have
$$\hat{G}(s \mid x) = \frac{1}{n\,\hat{g}(x)}\sum_{t=1}^{n} K_h(x - X_t)\,\rho_\tau(Y_t - s),$$
where $\hat{g}(x)$ is the estimate of the marginal pdf of $X$ at point $x$, and $\hat{G}$ can be rewritten as
$$\hat{G}(s \mid x) = \sum_{t=1}^{n} W_t(x)\,\rho_\tau(Y_t - s), \qquad (30)$$
and the derivative of $\hat{G}$ with respect to $s$ is
$$\frac{\partial \hat{G}(s \mid x)}{\partial s} = -\sum_{t=1}^{n} W_t(x)\big(\tau - \mathbf{1}\{Y_t < s\}\big).$$
The minimizer of (30) is obtained from $\partial \hat{G}(s \mid x)/\partial s = 0$. This leads to the following equation:
$$\sum_{t=1}^{n} W_t(x)\,\mathbf{1}\{Y_t \le s\} = \tau, \qquad (33)$$
where
$$W_t(x) = \frac{K_h(x - X_t)}{\sum_{j=1}^{n} K_h(x - X_j)}$$
for all $t = 1, \dots, n$. Note that the weights are built from the kernel estimates in (27) and (28). The left part of (33) is an (unsmoothed) conditional cumulative distribution function (CCDF),
$$\hat{F}(s \mid x) = \sum_{t=1}^{n} W_t(x)\,\mathbf{1}\{Y_t \le s\}, \qquad (35)$$
that needs to be estimated, and our estimator is therefore
$$\hat{S}_\tau(x) = \inf\{y : \hat{F}(y \mid x) \ge \tau\}, \qquad (36)$$
which is equivalent to $\hat{F}^{-1}(\tau \mid x)$.
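A direct implementation of this inversion, NW weights in $x$ applied to indicator steps in $y$ followed by the generalized inverse at level $\tau$, can be sketched as follows; the Gaussian kernel, the bandwidth, and the function names are illustrative choices.

```python
import numpy as np

def nw_ccdf(x0, X, Y, h):
    """Unsmoothed NW estimate of F(. | x0): weights in x, step function in y."""
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)   # Gaussian kernel; constants cancel
    w /= w.sum()                             # weights W_t(x0) sum to 1
    order = np.argsort(Y)
    return Y[order], np.cumsum(w[order])     # y grid and F_hat(y | x0)

def scale_estimate(x0, X, Y, h, tau):
    """S_hat_tau(x0) = inf{y : F_hat(y | x0) >= tau} (generalized inverse)."""
    ygrid, F = nw_ccdf(x0, X, Y, h)
    return ygrid[np.searchsorted(F, tau)]
```

Sorting once and taking a cumulative sum of the weights gives the whole step function in $O(n \log n)$, after which the inversion is a single binary search.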
An algorithm for estimating $\hat{F}$ is proposed in the following section. This estimator suffers from the problem of boundary effects, as can be seen in Figure 2, due to outliers. We obtain unsmoothed curves of the CCDF because the smoothness is only in the $x$ direction. A method is proposed by [19] to smooth it in the $y$ direction. The form of the Smoothed Conditional Distribution Estimator is
$$\tilde{F}(y \mid x) = \sum_{t=1}^{n} W_t(x)\,\Omega\!\left(\frac{y - Y_t}{h_2}\right),$$
where $\Omega(u) = \int_{-\infty}^{u} K(v)\,dv$ is an integrated kernel with the smoothing parameter $h_2$ in the $y$ direction. This estimate is smooth, unlike the NW estimate, which is a jump function in $y$. To deal with boundary effects, one may think of the Weighted Nadaraya-Watson (WNW) estimate of the CDF discussed in [12, 20], [21, p. 3-18], among others. The WNW estimator's expression is
$$\hat{F}_{WNW}(y \mid x) = \frac{\sum_{t=1}^{n} p_t(x, \lambda)\,K_h(x - X_t)\,\mathbf{1}\{Y_t \le y\}}{\sum_{t=1}^{n} p_t(x, \lambda)\,K_h(x - X_t)},$$
with conditions $p_t(x, \lambda) \ge 0$ and $\sum_{t=1}^{n} p_t(x, \lambda) = 1$, and $\lambda$ is determined using the Newton-Raphson iteration. Smoothing the CDF does not smooth the estimator in (36).
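With a Gaussian kernel, the integrated kernel $\Omega$ is the standard normal CDF, so the doubly smoothed estimate replaces the indicator $\mathbf{1}\{Y_t \le y\}$ by $\Phi((y - Y_t)/h_2)$. A sketch, with bandwidths and names chosen for illustration:

```python
import math
import numpy as np

def smoothed_ccdf(x0, y, X, Y, hx, hy):
    """Doubly smoothed CCDF estimate: NW weights in x, and the indicator
    1{Y_t <= y} replaced by the integrated Gaussian kernel Phi((y - Y_t)/hy)."""
    w = np.exp(-0.5 * ((x0 - X) / hx) ** 2)
    w /= w.sum()
    z = (y - Y) / hy
    # standard normal CDF via the error function
    Phi = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    return float((w * Phi).sum())
```

The result is a smooth, nondecreasing function of $y$ taking values in $[0, 1]$, unlike the step function produced by raw indicators.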
2.1. Algorithm
This algorithm estimates the empirical CCDF, $\hat{F}(y \mid x)$, and its inverse $\hat{F}^{-1}(\tau \mid x)$. Starting with the estimation of the former, the denominator is easy to compute as the estimator of the probability density function of $X$ at the vector of evaluation points $(x_1, \dots, x_m)$.
(1) Obtain the evaluation points $x_i$, $i = 1, \dots, m$.
(2) Check if each $y_j$ is less than or equal to each observation of the whole sequence $Y_1, \dots, Y_n$. The result determines the indicators $\mathbf{1}\{Y_t \le y_j\}$, which can be expressed as a matrix $B$ of order $n \times m$.
(3) Construct $X$ from the sequence of i.i.d. random variables with observations $X_1, \dots, X_n$; $n$ is the number of observations from which the probability density function (pdf) of $X$ is to be estimated.
(4) Determine the matrix of kernels, which is $\mathbf{K} = \big[K_h(x_i - X_t)\big]_{i=1,\dots,m;\; t=1,\dots,n}$. The row sums of $\mathbf{K}$ over $t$ give the estimator of the pdf of $X$ at $x_i$, $i = 1, \dots, m$, up to the factor $1/(nh)$. We obtain the matrix of weights $\mathbf{W}$ by the ratio of $\mathbf{K}$ and its row sums (elementwise), each row sum being repeated across a row by a vector of ones. Note that the row sums of $\mathbf{W}$ are 1. Let $B$ be the matrix from step (2). The estimator of the Conditional Cumulative Distribution Function (CCDF) is then
$$\hat{F}(y_j \mid x_i) = (\mathbf{W} B)_{ij}.$$
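The steps above reduce to two matrix operations. A sketch follows, where the names `K`, `W`, and `B` mirror the step numbering and the shapes are $m$ grid points by $n$ observations; the Gaussian kernel is an illustrative choice.

```python
import numpy as np

def ccdf_matrix(xgrid, ygrid, X, Y, h):
    """F_hat(y_j | x_i) for all grid points at once.

    K[i, t] : kernel K_h(x_i - X_t)                   (step 4)
    W[i, t] : row-normalized weights, rows sum to 1   (step 4)
    B[t, j] : indicator 1{Y_t <= y_j}                 (step 2)
    The CCDF estimate on the grid is the product W @ B.
    """
    K = np.exp(-0.5 * ((xgrid[:, None] - X[None, :]) / h) ** 2)
    W = K / K.sum(axis=1, keepdims=True)
    B = (Y[:, None] <= ygrid[None, :]).astype(float)
    return W @ B
```

Each row of the result is a nondecreasing step function of $y$ with values in $[0, 1]$, which can then be inverted columnwise at level $\tau$.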
2.2. Nadaraya-Watson Smoothing Method
We can make $\hat{S}_\tau$ smooth by using NW regression (one can also use LOWESS (LOcally WEighted Scatterplot Smoother) regression, introduced by [22], to smooth the estimator in (36), and it solves the problem of boundary effects). This will provide a smoothed curve at each level $\tau$. We write the regression equation as
$$\hat{S}_\tau(x_j) = m(x_j) + e_j, \quad j = 1, \dots, m, \qquad (42)$$
where $m$ is the regression function and the errors satisfy $E(e_j) = 0$, $\mathrm{Var}(e_j) = \sigma_e^2$, and $\mathrm{Cov}(e_j, e_k) = 0$ for $j \ne k$. Note that $m$ can be derived using the joint pdf as
$$m(x) = \frac{\int s\,f(x, s)\,ds}{g(x)},$$
where $f$ and $g$ are estimated as in (28).
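The regression above amounts to a second-stage NW pass over the pointwise estimates. A sketch, with the bandwidth `h2` and the function name as illustrative assumptions:

```python
import numpy as np

def nw_smooth(xgrid, values, h2):
    """Nadaraya-Watson smoothing of pointwise estimates values[j] = S_hat(x_j):
    m_hat(x_i) = sum_j K_h2(x_i - x_j) values[j] / sum_j K_h2(x_i - x_j)."""
    K = np.exp(-0.5 * ((xgrid[:, None] - xgrid[None, :]) / h2) ** 2)
    W = K / K.sum(axis=1, keepdims=True)
    return W @ values
```

Away from the boundaries, averaging the noisy pointwise estimates over neighboring grid points reduces their variance, which is the motivation given in the text.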
We can perform some transformations on (42) in order to show that the smoothed estimator is actually better than the unsmoothed one. By Assumption 1 (iv) and the fact that $\rho_\tau$ is convex, we have
$$\rho_\tau\big(E[\hat{S}_\tau(x_j) \mid x]\big) \le E\big[\rho_\tau(\hat{S}_\tau(x_j)) \mid x\big].$$
We have used Jensen's theorem for conditional expectation, found in [23] and stated as follows.
Theorem 8 (Jensen's inequality). For any convex function $\varphi$ and integrable random variable $X$,
$$\varphi\big(E[X]\big) \le E\big[\varphi(X)\big].$$
Proof of Theorem 8. Suppose that $\varphi$ is differentiable. The function $\varphi$ is convex if
$$\varphi(y) \ge \varphi(x) + \varphi'(x)(y - x) \quad \text{for all } x, y.$$
Let $x = E[X]$ and $y = X$. The inequality is true for all $X$, and taking the expectation on both sides proves the theorem.
This inequality is applicable when $\rho_\tau$ is a conditional convex function and when the expectation is a conditional expectation. The smoothed estimator is also an element of the set to which the unsmoothed estimator belongs. This means that the expected check loss of the smoothed estimator does not exceed that of the unsmoothed one. The estimator is empirically given by
$$\tilde{S}_\tau(x) = \frac{\sum_{j=1}^{m} K_{h_2}(x - x_j)\,\hat{S}_\tau(x_j)}{\sum_{j=1}^{m} K_{h_2}(x - x_j)}. \qquad (46)$$
2.2.1. Asymptotic Properties
To show the asymptotic properties of our estimator, we compute its expectation and variance. Assuming the data are i.i.d., the expectation of the numerator of (46) is given by
$$E\big[K_{h_2}(x - x_j)\,\hat{S}_\tau(x_j)\big] = \int K_{h_2}(x - u)\,S_\tau(u)\,g(u)\,du = \int K(v)\,S_\tau(x - h_2 v)\,g(x - h_2 v)\,dv.$$
We assume that the first and the second derivatives of $S_\tau$ and $g$ at point $x$ exist. That is, by Taylor's expansions of $S_\tau$ and $g$, given by
$$S_\tau(x - h_2 v) = S_\tau(x) - h_2 v\,S_\tau'(x) + \frac{h_2^2 v^2}{2}\,S_\tau''(x) + o(h_2^2),$$
$$g(x - h_2 v) = g(x) - h_2 v\,g'(x) + \frac{h_2^2 v^2}{2}\,g''(x) + o(h_2^2),$$
we get
$$E\big[K_{h_2}(x - x_j)\,\hat{S}_\tau(x_j)\big] = S_\tau(x)\,g(x) + \frac{h_2^2}{2}\,\mu_2(K)\,\big(S_\tau\,g\big)''(x) + o(h_2^2),$$
where $\mu_2(K) = \int v^2 K(v)\,dv$. Similarly, the expectation of the denominator is
$$E\big[K_{h_2}(x - x_j)\big] = g(x) + \frac{h_2^2}{2}\,\mu_2(K)\,g''(x) + o(h_2^2).$$
For $h_2$ small enough, the ratio of the two expectations tends to $S_\tau(x)$. Thus, the estimator is asymptotically unbiased. The variance of the numerator, say $V_1$, is of order $O\big(1/(m h_2)\big)$. Note that $V_1 \to 0$ as $m h_2 \to \infty$. Similarly, the variance of the denominator, $V_2$, is $O\big(1/(m h_2)\big)$.
The covariance of the numerator and the denominator of the estimator in (46) is given by
$$C_{12} = \mathrm{Cov}\Big(K_{h_2}(x - x_j)\,\hat{S}_\tau(x_j),\; K_{h_2}(x - x_j)\Big).$$
The variance of the estimator in (46) is the variance of a ratio of correlated variables that can be calculated using the approximation found in [24]:
$$\mathrm{Var}\!\left(\frac{N}{D}\right) \approx \frac{(E[N])^2}{(E[D])^2}\left(\frac{\mathrm{Var}(N)}{(E[N])^2} - \frac{2\,\mathrm{Cov}(N, D)}{E[N]\,E[D]} + \frac{\mathrm{Var}(D)}{(E[D])^2}\right). \qquad (54)$$
If Assumption 3 for strong mixing processes holds, then from the Central Limit Theorem (CLT) the smoothed estimator is asymptotically normal.
2.3. Asymptotic Normality of QARCH
The CCDF in (35) can be written in the form of a ratio of arithmetic means of random variables:
$$\hat{F}(y \mid x) = \frac{\bar{N}_n}{\bar{D}_n}, \quad \bar{N}_n = \frac{1}{n}\sum_{t=1}^{n} K_h(x - X_t)\,\mathbf{1}\{Y_t \le y\}, \quad \bar{D}_n = \frac{1}{n}\sum_{t=1}^{n} K_h(x - X_t),$$
and the approximation of the expectation of $\hat{F}(y \mid x)$ is
$$E\big[\hat{F}(y \mid x)\big] \approx \frac{E[\bar{N}_n]}{E[\bar{D}_n]}$$
[see [24]]. Using the i.i.d. assumption over the data, the numerator is
$$E[\bar{N}_n] = E\big[K_h(x - X_t)\,\mathbf{1}\{Y_t \le y\}\big] = \int K(v)\,F(y \mid x - hv)\,g(x - hv)\,dv.$$
We have used the change of variables $v = (x - u)/h$, the definition of the conditional density function turned into $f(u, y) = f(y \mid u)\,g(u)$, and Fubini's theorem for multiple integrals. Taylor series expansions of $F(y \mid \cdot)$ and $g$ yield
$$E[\bar{N}_n] = F(y \mid x)\,g(x) + \frac{h^2}{2}\,\mu_2(K)\,\big(F(y \mid \cdot)\,g\big)''(x) + o(h^2),$$
and, for the denominator, we have
$$E[\bar{D}_n] = g(x) + \frac{h^2}{2}\,\mu_2(K)\,g''(x) + o(h^2).$$
Thus,
$$E\big[\hat{F}(y \mid x)\big] = F(y \mid x) + O(h^2).$$
From the assumption that $h \to 0$, the denominator is approximated to $g(x)$. Hence, the estimator is asymptotically unbiased. Some authors assumed that, in this case, the first derivative of the true pdf of $X$ at point $x$ can be zero [19], as the one for the fixed design, and, therefore, the bias can be given by
$$\mathrm{Bias}\big[\hat{F}(y \mid x)\big] \approx \frac{h^2}{2}\,\mu_2(K)\,\frac{\partial^2 F(y \mid x)}{\partial x^2}.$$
Using the same approximation as in (54), the variance of $\hat{F}(y \mid x)$ is
$$\mathrm{Var}\big[\hat{F}(y \mid x)\big] \approx \frac{R(K)}{n h\,g(x)}\,F(y \mid x)\big(1 - F(y \mid x)\big), \quad R(K) = \int K^2(v)\,dv,$$
and, by the Central Limit Theorem, using Assumption 3, for $nh \to \infty$,
$$\sqrt{nh}\,\Big(\hat{F}(y \mid x) - F(y \mid x)\Big) \xrightarrow{d} N\!\left(0,\; \frac{R(K)\,F(y \mid x)\big(1 - F(y \mid x)\big)}{g(x)}\right). \qquad (67)$$
Notice that the expectation of the smoothed estimator is, to first order, the same as that of $\hat{F}$, and its variance is of the same order. To show the asymptotic normality of $\hat{S}_\tau$, we use the following theorem.
Theorem 9 (delta method). Suppose $T_n$ has the asymptotic normal distribution as in (67); that is, $\sqrt{nh}\,(T_n - \theta) \xrightarrow{d} N(0, \sigma^2)$. Suppose $\phi$ is a continuous function that has a derivative $\phi'(\theta) \ne 0$ at $\theta$. Then
$$\sqrt{nh}\,\big(\phi(T_n) - \phi(\theta)\big) \xrightarrow{d} N\big(0,\; \sigma^2\,[\phi'(\theta)]^2\big).$$
Proof of Theorem 9. The first-order Taylor expansion of $\phi$ about the point $\theta$, evaluated at the random variable $T_n$, is
$$\phi(T_n) = \phi(\theta) + \phi'(\theta)(T_n - \theta) + o_p\big(|T_n - \theta|\big),$$
and subtracting $\phi(\theta)$ from both sides and multiplying by $\sqrt{nh}$, we get
$$\sqrt{nh}\,\big(\phi(T_n) - \phi(\theta)\big) = \phi'(\theta)\,\sqrt{nh}\,(T_n - \theta) + o_p(1) \xrightarrow{d} N\big(0,\; \sigma^2\,[\phi'(\theta)]^2\big).$$
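Theorem 9 can be checked by simulation. A sketch with $T_n$ the sample mean of normal draws and $\phi(x) = x^2$; the distribution, sample sizes, and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of the delta method with T_n the mean of n draws from
# N(mu, sigma^2) and phi(x) = x**2, so phi'(mu) = 2 * mu.  Then
# sqrt(n) * (phi(T_n) - phi(mu)) should be close to N(0, (sigma * 2 * mu)**2).
rng = np.random.default_rng(7)
mu, sigma, n, reps = 2.0, 1.0, 400, 20_000
Tn = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
Z = np.sqrt(n) * (Tn ** 2 - mu ** 2)
theoretical_sd = sigma * 2 * mu
```

The empirical standard deviation of `Z` should match `theoretical_sd` up to Monte Carlo error, and its mean should be near zero.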