Journal of Applied Mathematics

Volume 2019, Article ID 8479086, 5 pages

https://doi.org/10.1155/2019/8479086

## Parameter Estimation for p-Order Random Coefficient Autoregressive (RCA) Models Based on Kalman Filter

LaMSD, Faculty of Sciences, Mohammed Premier University, Oujda, Morocco

Correspondence should be addressed to Mohammed Benmoumen; med.benmoumen@gmail.com

Received 13 February 2019; Accepted 18 April 2019; Published 13 May 2019

Academic Editor: Qiankun Song

Copyright © 2019 Mohammed Benmoumen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this paper we elaborate an algorithm to estimate the parameters of p-order Random Coefficient Autoregressive models (RCA(p)). This algorithm combines the quasi-maximum likelihood method, the Kalman filter, and the simulated annealing method. In order to generalize the results found for RCA(1), we have integrated a subalgorithm which calculates the theoretical autocorrelations. Simulation results demonstrate that the algorithm is viable and promising.

#### 1. Introduction

Random Coefficient Autoregressive (RCA) processes have been widely studied in the literature for modeling time series exhibiting nonlinear behavior. The process was introduced by Andel (1976), who also studied its properties and obtained conditions for the existence of a singly infinite second-order stationary process satisfying

$$X_t = \sum_{i=1}^{p} (\phi_i + b_{i,t})\, X_{t-i} + \varepsilon_t, \qquad (1)$$

where the $\phi_i$'s are fixed coefficients, $(b_t)_t = \big((b_{1,t}, \ldots, b_{p,t})'\big)_t$ is a sequence of i.i.d. random vectors with mean zero and constant covariance matrix $\Omega$, $(\varepsilon_t)_t$ is a white noise process with mean zero and variance $\sigma_\varepsilon^2$, and $(b_t)_t$ and $(\varepsilon_t)_t$ are assumed to be mutually independent.
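For concreteness, model (1) can be simulated in a few lines. The following sketch is ours, not the paper's code; the parameter values are illustrative, and the random coefficients are taken independent Gaussian (a diagonal-$\Omega$ special case):

```python
import numpy as np

def simulate_rca(phi, sigma_b, sigma_eps, n, burn=200, seed=0):
    """Simulate a path of the RCA(p) model (1):
        X_t = sum_i (phi_i + b_{i,t}) X_{t-i} + eps_t.
    Illustrative sketch: b_{i,t} are independent Gaussians (diagonal Omega).
    """
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    sigma_b = np.asarray(sigma_b, dtype=float)
    p = len(phi)
    x = np.zeros(n + burn + p)
    for t in range(p, len(x)):
        b = rng.normal(0.0, sigma_b)          # random coefficients b_{.,t}
        eps = rng.normal(0.0, sigma_eps)      # white-noise innovation eps_t
        past = x[t - p:t][::-1]               # (x_{t-1}, ..., x_{t-p})
        x[t] = (phi + b) @ past + eps
    return x[-n:]                             # discard the burn-in period

path = simulate_rca(phi=[0.4, 0.2], sigma_b=[0.1, 0.1], sigma_eps=1.0, n=500)
```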

Multiple studies emerged after Conlisk (1974, 1976) derived conditions for the stability of RCA models: Robinson (1978) considered statistical inference for the RCA model, and Nicholls and Quinn [1] extended the results of Andel to the multivariate RCA model.

Many authors have investigated the estimation of parameters in the random coefficient autoregressive model. Among them, we cite Nicholls and Quinn, who obtained the least squares and the maximum likelihood estimates. Under certain assumptions, they established the strong consistency as well as the asymptotic normality of both estimates. For a detailed early study, we refer to Nicholls and Quinn [1]. On their side, Thavaneswaran and Abraham [2] applied Godambe's theorem (1985) to obtain optimal estimates for RCA models.

More recently, several authors have taken an interest in this model. Aue and Horvath (2011) proposed a unified quasi-likelihood procedure for the estimation of the unknown parameters of RCA(1) models that works for both stationary and nonstationary processes. They also established the weak consistency and the asymptotic normality of this procedure.

Liang et al. (2012) described moment properties of RCA processes and the corresponding squared processes; they also gave a joint prediction study of the mean and volatility.

A new algorithm was proposed by Allal and Benmoumen [3] to estimate first-order RCA parameters. This algorithm combines the quasi-maximum likelihood method, the Kalman filter, and Powell's method. Our contribution aims to extend this method to higher-order RCA processes.

As we shall see, the proposed algorithm requires initial values. To provide these values, we must calculate the theoretical autocorrelations of p-order RCA models. To avoid tedious computation, we implement an algorithm (detailed in Section 4) which calculates the autocorrelations numerically.

This paper is organized as follows. In Section 2, we present the definition and some basic properties of RCA models. In Section 3, we recall Kalman's algorithm and apply it to calculate the likelihood function of RCA models. In Section 4, we describe our estimating algorithm. The performance of this algorithm is examined by Monte Carlo simulations and compared to the quasi-maximum likelihood method in Section 5. Finally, we end with a conclusion.

#### 2. Stationarity and Moment Properties

Let $(X_t)_t$ be an RCA(p) model satisfying (1). In order to derive stationarity conditions for the process $(X_t)_t$, Nicholls and Quinn proposed the following vectorial representation.

Define the random vector $Z_t = (X_t, X_{t-1}, \ldots, X_{t-p+1})'$; equation (1) may then be rewritten in terms of $Z_t$ as

$$Z_t = (M + B_t)\, Z_{t-1} + E_t,$$

where the matrix $M$ is given by

$$M = \begin{pmatrix} \phi_1 & \phi_2 & \cdots & \phi_{p-1} & \phi_p \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix},$$

$B_t$ is the $p \times p$ matrix whose first row is $b_t' = (b_{1,t}, \ldots, b_{p,t})$ and whose remaining entries are zero, and $E_t = (\varepsilon_t, 0, \ldots, 0)'$.

Theorem 1. *If $\mathcal{F}_t$ is the $\sigma$-field generated by $\{(b_s, \varepsilon_s),\ s \le t\}$, then there exists a unique $\mathcal{F}_t$-measurable second-order stationary solution to (1) if and only if the matrix $M \otimes M + \mathbb{E}(B_t \otimes B_t)$ has all its eigenvalues within the unit circle (see Andel (1976) or Nicholls and Quinn [1]).*
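The eigenvalue condition of Theorem 1 is straightforward to check numerically. A minimal sketch (function names are ours), using the companion matrix $M$ and the structure of $B_t$ from the vectorial representation:

```python
import numpy as np

def companion(phi):
    """Companion matrix M of the vectorial representation (first row = phi)."""
    p = len(phi)
    M = np.zeros((p, p))
    M[0, :] = phi
    if p > 1:
        M[1:, :-1] = np.eye(p - 1)
    return M

def second_order_stationary(phi, omega):
    """Check Theorem 1's eigenvalue condition numerically.
    omega is the p x p covariance matrix Omega of the random coefficients b_t.
    Since B_t has b_t' in its first row and zeros elsewhere, E(B_t (x) B_t)
    is zero except for its first row, which holds Omega flattened row-major.
    """
    p = len(phi)
    M = companion(phi)
    EBB = np.zeros((p * p, p * p))
    EBB[0, :] = np.asarray(omega, dtype=float).reshape(-1)
    A = np.kron(M, M) + EBB
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)
```

For RCA(1) the condition reduces to the familiar $\phi^2 + \sigma_b^2 < 1$.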

In the next theorem, Liang et al. [4] give the autocorrelation structure and the marginal variance of RCA(p) models.

Theorem 2. *Consider the stationary process $(X_t)_t$ in (1).*

*(i) The processes $(X_t)_t$ have the same autocorrelation structure as the AR(p) processes; that is,*
$$\rho_k = \sum_{i=1}^{p} \phi_i\, \rho_{k-i}, \qquad k \ge 1.$$

*(ii) The marginal variance of a stationary process $(X_t)_t$ is given by*
$$\gamma_0 = \operatorname{Var}(X_t) = \frac{\sigma_\varepsilon^2}{\,1 - \sum_{i=1}^{p} \phi_i \rho_i - \sum_{i=1}^{p} \sum_{j=1}^{p} \Omega_{ij}\, \rho_{|i-j|}\,}.$$

In practice, this theorem enables us to deduce the starting conditions for the Kalman filter. Hence, we develop in Section 4 a recursive subalgorithm, called variance, to calculate the autocovariance function of RCA models.

#### 3. Quasi-Maximum Likelihood and Kalman Filter

Assuming the joint normality of $(b_t)_t$ and $(\varepsilon_t)_t$, the conditional log-likelihood is given by

$$L_n(\theta) = \sum_{t=1}^{n} \log f\!\left(x_t \mid x_{t-1}, \ldots, x_1; \theta\right),$$

where $\theta$ is the vector of unknown parameters, $(x_1, \ldots, x_n)$ is a sample of $n$ observations, and $f(x_t \mid x_{t-1}, \ldots, x_1; \theta)$ denotes the normal density function of $X_t$ given its past, with mean $\hat{x}_{t|t-1}$ and variance $F_t$. The quasi-maximum log-likelihood may thus be put in the following form:

$$L_n(\theta) = -\frac{n}{2} \log 2\pi - \frac{1}{2} \sum_{t=1}^{n} \left( \log F_t + \frac{(x_t - \hat{x}_{t|t-1})^2}{F_t} \right).$$

Given this formula, the quasi-maximum likelihood can be calculated using the Kalman filter; see Hamilton [5].

In order to apply the Kalman filter, we consider the appropriate state-space representation of the model in (1):

$$\begin{aligned} X_t &= H \alpha_t, \\ \alpha_t &= (M + B_t)\, \alpha_{t-1} + E_t, \end{aligned}$$

where $\alpha_t = (X_t, X_{t-1}, \ldots, X_{t-p+1})'$ is the state vector, $H = (1, 0, \ldots, 0)$, and $M$, $B_t$, and $E_t$ are as defined in Section 2.

We should point out that the previous representation was proposed by Benmoumen in his M.Sc. thesis [6].

Now, we describe the Kalman filter with the aim of building the log-likelihood function. The Kalman filter is a recursive algorithm derived by Kalman [7] to provide an optimal forecast $\hat{\alpha}_{t|t-1}$ of the state $\alpha_t$ given the observations up to time $t-1$, together with the mean square error $P_{t|t-1}$ of this forecast. Given starting values $\hat{\alpha}_{1|0}$ and $P_{1|0}$, which are derived from Theorem 2, the recursive procedure is as follows:

(i) Calculate the forecast $\hat{x}_{t|t-1}$ of the observation $x_t$ and the error $e_t = x_t - \hat{x}_{t|t-1}$ of this forecast.
(ii) Update the state vector to $\hat{\alpha}_{t|t}$ and compute the MSE $P_{t|t}$ of this updated projection.
(iii) Calculate the forecast $\hat{\alpha}_{t+1|t}$ and the MSE $P_{t+1|t}$ of this forecast.
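Since every component of the state vector is an observed value of the series, the filter's one-step innovations admit the closed form $e_t = x_t - \sum_i \phi_i x_{t-i}$ with variance $F_t = z_t' \Omega z_t + \sigma_\varepsilon^2$, $z_t = (x_{t-1}, \ldots, x_{t-p})'$. A sketch of the resulting quasi-log-likelihood (ours, not the paper's code; the first $p$ observations are conditioned on):

```python
import numpy as np

def rca_quasi_loglik(x, phi, omega, sigma_eps2):
    """Gaussian quasi-log-likelihood of an RCA(p) sample via the
    prediction-error decomposition produced by the filter.
    One-step forecast mean: sum_i phi_i x_{t-i}
    One-step forecast var : z' Omega z + sigma_eps2, z = (x_{t-1},...,x_{t-p})'
    """
    x = np.asarray(x, dtype=float)
    phi = np.asarray(phi, dtype=float)
    omega = np.asarray(omega, dtype=float)
    p = len(phi)
    ll = 0.0
    for t in range(p, len(x)):
        z = x[t - p:t][::-1]              # (x_{t-1}, ..., x_{t-p})
        e = x[t] - phi @ z                # innovation e_t
        F = z @ omega @ z + sigma_eps2    # innovation variance F_t
        ll += -0.5 * (np.log(2.0 * np.pi * F) + e * e / F)
    return ll
```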

Thus, we can construct the log-likelihood function using the Kalman filter so as to obtain the maximum likelihood estimators. In order to avoid the fastidious computation of the partial derivatives of $L_n(\theta)$, we use the simulated annealing method (see Corana et al. [8]), which is a global optimization algorithm.

#### 4. Estimating Algorithm for p-Order Random Coefficient Autoregressive Model Parameters

The algorithm proposed here is a generalization of a procedure developed by Allal and Benmoumen [3] (2013) to estimate first-order RCA parameters.

Recently, the same idea has been developed for parameter estimation in GARCH(1,1), ARCH(1), and ARCH(p) models by Benmoumen et al. (2011, 2014, and 2015).

Before describing our algorithm MLKF (quasi-maximum likelihood and Kalman filter estimation), it is worthwhile to provide a subalgorithm which tests whether the parameters fulfill the conditions of stationarity; we denote it Test. The second subalgorithm, which we must also provide, concerns the computation of $L_n(\theta)$ by the Kalman filter; we denote it KF. These two subalgorithms are implemented in our global estimating algorithm.

Herein, we are interested in minimizing $-L_n(\theta)$.

Subalgorithm Test($\theta$)
- If the eigenvalues of the matrix $M \otimes M + \mathbb{E}(B_t \otimes B_t)$ all have modulus less than unity, then go to the next step;
- else, take the last accepted point as the starting point.

End Subalgorithm

Subalgorithm variance($\theta$)
- Solve the equations $\rho_k = \sum_{i=1}^{p} \phi_i\, \rho_{|k-i|}$, $k = 1, \ldots, p$, where $\rho_0 = 1$;
- calculate $\gamma_0 = \sigma_\varepsilon^2 \big/ \big(1 - \sum_{i} \phi_i \rho_i - \sum_{i,j} \Omega_{ij}\, \rho_{|i-j|}\big)$ and $\gamma_k = \rho_k\, \gamma_0$.

End Subalgorithm
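Assuming the Yule-Walker-type form given in Theorem 2, the variance subalgorithm can be sketched numerically as follows (function and argument names are ours):

```python
import numpy as np

def variance(phi, omega, sigma_eps2, kmax=10):
    """Numerical sketch of the 'variance' subalgorithm.
    Solves rho_k = sum_i phi_i rho_{|k-i|} (k = 1..p, rho_0 = 1), computes
    gamma_0 = sigma_eps2 / (1 - sum_i phi_i rho_i - sum_ij Omega_ij rho_|i-j|),
    and extends rho_k by the AR(p) recursion for k > p.
    """
    phi = np.asarray(phi, dtype=float)
    omega = np.asarray(omega, dtype=float)
    p = len(phi)
    A = np.eye(p)
    b = np.zeros(p)
    for k in range(1, p + 1):
        for i in range(1, p + 1):
            m = abs(k - i)
            if m == 0:
                b[k - 1] += phi[i - 1]        # rho_0 = 1 moves to the RHS
            else:
                A[k - 1, m - 1] -= phi[i - 1]
    rho = np.concatenate(([1.0], np.linalg.solve(A, b)))   # rho_0 .. rho_p
    denom = 1.0 - phi @ rho[1:p + 1]
    denom -= sum(omega[i, j] * rho[abs(i - j)]
                 for i in range(p) for j in range(p))
    gamma0 = sigma_eps2 / denom
    for k in range(p + 1, kmax + 1):          # extend by the recursion
        rho = np.append(rho, phi @ rho[k - p:k][::-1])
    return gamma0, rho
```

For RCA(1) this reproduces the known result $\gamma_0 = \sigma_\varepsilon^2 / (1 - \phi^2 - \sigma_b^2)$.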

Subalgorithm KF($\theta$)
Given the starting conditions $\hat{\alpha}_{1|0}$ and $P_{1|0}$:
- for $t = 1$ to $n$ do: compute the innovation $e_t = x_t - H \hat{\alpha}_{t|t-1}$ and its variance $F_t = H P_{t|t-1} H'$; update $\hat{\alpha}_{t|t} = \hat{\alpha}_{t|t-1} + P_{t|t-1} H' F_t^{-1} e_t$ and $P_{t|t} = P_{t|t-1} - P_{t|t-1} H' F_t^{-1} H P_{t|t-1}$; forecast $\hat{\alpha}_{t+1|t} = M \hat{\alpha}_{t|t}$ and $P_{t+1|t} = M P_{t|t} M' + Q_{t+1}$, where $Q_{t+1}$ denotes the state-noise covariance induced by $B_{t+1}$ and $\varepsilon_{t+1}$; end for
- compute $L_n(\theta) = -\frac{n}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{n} \big( \log F_t + e_t^2 / F_t \big)$.

End Subalgorithm

Indeed, as we shall see, our algorithm is an iterative process requiring initial estimates of the parameters to commence the iterations. The consistent least squares estimates are suitable for this purpose.

MLKF Algorithm
- Step 1: Initialize the vector of parameters $\theta^{(0)}$, the step vector $s$, and the temperature $T_0$.
- Step 2: Starting from the point $\theta$, generate a random point $\theta'$ along the direction $h$: $\theta' = \theta + r\, s_h\, e_h$, where $r$ is a random number generated in the range $[-1, 1]$ by a pseudorandom generator, $e_h$ is the vector of the $h$th coordinate direction, and $s_h$ is the component of the step vector along the same direction.
- Step 3: Call subalgorithm Test($\theta'$).
- Step 4: Call subalgorithm KF($\theta'$) and compute $\Delta = -L_n(\theta') + L_n(\theta)$. If $\Delta \le 0$, accept the new point; else accept or reject it with acceptance probability $p = \exp(-\Delta/T)$: generate a uniformly distributed random number $u$ in the range $[0, 1]$; if $u \le p$, the point is accepted, otherwise it is rejected.
- Step 5: Steps 2 to 4 are repeated for each coordinate direction $h = 1, \ldots, n$ ($n$ is the dimension of the parameter vector).
- Step 6: Steps 2 to 5 are repeated $N_s$ times ($N_s$ is the number of step variations) and the step vector is adjusted.
- Step 7: Steps 2 to 6 are repeated $N_T$ times ($N_T$ is the number of temperature reductions) and the temperature is reduced following the rule $T_{k+1} = r_T\, T_k$ with $0 < r_T < 1$.
- Step 8: Steps 2 to 7 are repeated until a termination criterion is satisfied.

End Algorithm
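The annealing loop of Steps 2 to 7 can be sketched as follows. This is a bare-bones illustration, not the paper's implementation: the control parameters are illustrative rather than the paper's tuning, and the step-vector adjustment of Step 6 is omitted for brevity.

```python
import numpy as np

def anneal(f, theta0, step, t0=1.0, r_t=0.85, n_t=50, n_s=20, seed=0):
    """Minimal Corana-style simulated annealing for minimizing f."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    step = np.asarray(step, dtype=float)
    f_cur = f(theta)
    best, f_best = theta.copy(), f_cur
    T = t0
    for _ in range(n_t):                  # Step 7: temperature reductions
        for _ in range(n_s):              # Step 6: repeated sweeps
            for h in range(len(theta)):   # Step 5: one move per coordinate
                cand = theta.copy()
                cand[h] += rng.uniform(-1.0, 1.0) * step[h]   # Step 2
                f_new = f(cand)
                # Step 4: Metropolis rule -- accept downhill moves always,
                # uphill moves with probability exp(-Delta / T)
                delta = f_new - f_cur
                if delta <= 0.0 or rng.uniform() < np.exp(-delta / T):
                    theta, f_cur = cand, f_new
                    if f_cur < f_best:
                        best, f_best = theta.copy(), f_cur
        T *= r_t                          # cooling rule T_{k+1} = r_T T_k
    return best, f_best

# Toy usage on a smooth quadratic (in MLKF, f would be -L_n from subalgorithm KF)
best, f_best = anneal(lambda th: (th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2,
                      theta0=[0.0, 0.0], step=[1.0, 1.0])
```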

#### 5. Simulations

To examine the performance of our algorithm, we have carried out a series of simulation experiments. In this study, we consider two examples of RCA models, each specified by its autoregressive coefficients, the covariance matrix of the random coefficients, and the white-noise variance.

For the models mentioned earlier, we generated 1000 replications for several sample sizes.
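The mean and MSE reported for each estimator are computed across the replications; a trivial helper (ours, not the paper's code) makes the computation explicit:

```python
import numpy as np

def mc_summary(estimates, true_value):
    """Mean and mean squared error of replicated estimates; rows are
    replications, columns are parameters."""
    est = np.asarray(estimates, dtype=float)
    mean = est.mean(axis=0)
    mse = ((est - np.asarray(true_value, dtype=float)) ** 2).mean(axis=0)
    return mean, mse
```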

The results of this experiment are displayed in Tables 1–4, where for each estimator we give the mean and the MSE; we use the notation QMLE for the quasi-maximum likelihood estimators and MLKF for the estimation by our algorithm.