Journal of Applied Mathematics
Volume 2019, Article ID 8479086, 5 pages
Research Article

Parameter Estimation for p-Order Random Coefficient Autoregressive (RCA) Models Based on Kalman Filter

LaMSD, Faculty of Sciences, Mohammed Premier University, Oujda, Morocco

Correspondence should be addressed to Mohammed Benmoumen; med.benmoumen@gmail.com

Received 13 February 2019; Accepted 18 April 2019; Published 13 May 2019

Academic Editor: Qiankun Song

Copyright © 2019 Mohammed Benmoumen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In this paper we elaborate an algorithm to estimate the parameters of p-order Random Coefficient Autoregressive models (RCA(p)). This algorithm combines the quasi-maximum likelihood method, the Kalman filter, and the simulated annealing method. To generalize the results found for RCA(1), we have integrated a subalgorithm which calculates the theoretical autocorrelations. Simulation results demonstrate that the algorithm is viable and promising.

1. Introduction

Random Coefficient Autoregressive (RCA) processes have been widely studied in the literature for modeling time series exhibiting nonlinear behavior. The process was introduced by Andel (1976), who also studied its properties. He obtained conditions for the existence of a singly infinite second-order stationary process satisfying

$$X_t = \sum_{i=1}^{p} (\phi_i + b_{i,t}) X_{t-i} + \varepsilon_t, \qquad (1)$$

where the $\phi_i$'s are fixed coefficients, $(b_t) = ((b_{1,t}, \dots, b_{p,t})')$ is a sequence of i.i.d. random vectors with mean zero and constant covariance matrix $C_b$, $(\varepsilon_t)$ is a white noise process with mean zero and variance $\sigma_\varepsilon^2$, and $(b_t)$ and $(\varepsilon_t)$ are assumed to be mutually independent.
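For illustration, the model in (1) can be simulated directly from this recursion. A minimal Python sketch, assuming independent normal random coefficients (i.e., a diagonal $C_b$); the function name and defaults are ours:

```python
import numpy as np

def simulate_rca(phi, sigma_b, sigma_eps, n, burn_in=200, seed=0):
    """Simulate an RCA(p) path X_t = sum_i (phi_i + b_{i,t}) X_{t-i} + eps_t.

    Assumes the random coefficients b_{i,t} are independent N(0, sigma_b[i]^2),
    i.e. a diagonal covariance matrix C_b (an illustrative simplification).
    """
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    rng = np.random.default_rng(seed)
    x = np.zeros(n + burn_in + p)
    for t in range(p, len(x)):
        b = rng.normal(0.0, sigma_b)      # random perturbations of the coefficients
        eps = rng.normal(0.0, sigma_eps)  # white-noise innovation
        x[t] = (phi + b) @ x[t - p:t][::-1] + eps
    return x[-n:]                         # drop the burn-in
```

A burn-in period is used so the returned segment is approximately drawn from the stationary distribution.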

Multiple studies emerged after Conlisk (1974, 1976) derived conditions for the stability of RCA models: Robinson (1978) considered statistical inference for the RCA model, and Nicholls and Quinn [1] extended the results of Andel to the multivariate RCA model.

Many authors have investigated the estimation of parameters in random coefficient autoregressive models. Among them, we cite Nicholls and Quinn, who obtained the least squares and the maximum likelihood estimates. Under certain assumptions, they established the strong consistency as well as the asymptotic normality of both estimates. For a detailed early study, we refer to Nicholls and Quinn [1]. For their part, Thavaneswaran and Abraham [2] applied Godambe's theorem (1985) to obtain optimal estimates for such models.

Recently, several authors have taken interest in this model. Aue and Horvath (2011) proposed a unified quasi-likelihood procedure for the estimation of the unknown parameters of RCA(1) models that works for both stationary and nonstationary processes. They also established the weak consistency and the asymptotic normality of this procedure.

Lian and Thavaneswaran [4] described moment properties of RCA processes and the corresponding squared processes; they also gave a joint prediction study of the mean and volatility.

A new algorithm was proposed by Allal and Benmoumen [3] to estimate first-order RCA parameters. This algorithm combines the quasi-maximum likelihood method, the Kalman filter, and Powell's method. Our contribution aims to extend this method to higher-order RCA processes.

As we shall see, the proposed algorithm requires initial values. To provide these values we must calculate the theoretical autocorrelations of p-order RCA models. To avoid tedious computation we implement an algorithm (detailed in Section 4) that calculates the autocorrelations numerically.

This paper is organized as follows. In Section 2, we present the definition and some basic properties of RCA models. In Section 3 we recall Kalman's algorithm and apply it to calculate the likelihood function for RCA models. In Section 4, we describe our estimating algorithm. The performance of this algorithm is examined by Monte Carlo simulations and compared to the quasi-maximum likelihood method in Section 5. Finally, we close with a conclusion.

2. Stationarity and Moment Properties

Let $(X_t)$ be an RCA(p) process as in (1). In order to derive stationarity conditions for such processes, Nicholls and Quinn proposed the following vectorial representation.

Define the random vector $\mathbf{X}_t = (X_t, X_{t-1}, \dots, X_{t-p+1})'$; equation (1) may be rewritten in terms of $\mathbf{X}_t$ as

$$\mathbf{X}_t = (\Phi + B_t)\mathbf{X}_{t-1} + \mathbf{e}_t,$$

where the companion matrix $\Phi$ has first row $(\phi_1, \dots, \phi_p)$, the identity matrix $I_{p-1}$ below it, and zeros elsewhere; $B_t$ has first row $b_t' = (b_{1,t}, \dots, b_{p,t})$ and zeros elsewhere; and $\mathbf{e}_t = (\varepsilon_t, 0, \dots, 0)'$.

Theorem 1. Let $\mathcal{F}_t$ be the $\sigma$-field generated by $\{\varepsilon_s, b_s : s \le t\}$. Then there exists a unique $\mathcal{F}_t$-measurable second-order stationary solution to (1) if and only if the matrix $\Phi \otimes \Phi + \mathbb{E}(B_t \otimes B_t)$ has all its eigenvalues within the unit circle (see Andel (1976) or Nicholls and Quinn [1]).

In the next theorem, Lian and Thavaneswaran [4] give the autocorrelation structure and the marginal variance of RCA(p) models.

Theorem 2. Consider the stationary process $(X_t)$ in (1). (i) The RCA(p) processes have the same autocorrelation structure as the AR(p) processes:

$$\rho(k) = \sum_{i=1}^{p} \phi_i \rho(k-i), \qquad k \ge 1.$$

(ii) The marginal variance of a stationary RCA(p) process is given by

$$\gamma(0) = \frac{\sigma_\varepsilon^2}{1 - \sum_{i=1}^{p} \phi_i \rho(i) - \operatorname{tr}(C_b R)},$$

where $R = (\rho(i-j))_{1 \le i,j \le p}$ is the autocorrelation matrix.

In practice, this theorem enables us to deduce the starting conditions for the Kalman filter. Hence, in Section 4 we develop a recursive subalgorithm, called variance, to calculate the autocovariance function of RCA models.
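Since the stationary RCA(p) process shares the AR(p) autocorrelation structure, the autocorrelations $\rho(1), \dots, \rho(p)$ can be obtained by solving the Yule-Walker equations numerically. A sketch of that computation (function name ours):

```python
import numpy as np

def yule_walker_autocorr(phi):
    """Solve rho(k) = sum_i phi_i * rho(|k-i|) for k = 1..p numerically,
    using rho(0) = 1 and the symmetry rho(-k) = rho(k)."""
    p = len(phi)
    A = np.eye(p)          # coefficient of the unknown rho(k) in equation k
    b = np.zeros(p)
    for k in range(1, p + 1):
        for i in range(1, p + 1):
            lag = abs(k - i)
            if lag == 0:
                b[k - 1] += phi[i - 1]           # phi_i * rho(0) goes to the RHS
            else:
                A[k - 1, lag - 1] -= phi[i - 1]  # unknown rho(lag) stays on the LHS
    return np.linalg.solve(A, b)
```

For example, for $p = 2$ this reproduces the classical relations $\rho(1) = \phi_1/(1-\phi_2)$ and $\rho(2) = \phi_1\rho(1) + \phi_2$.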

3. Quasi-Maximum Likelihood and Kalman Filter

Assuming the joint normality of $(b_t)$ and $(\varepsilon_t)$, the conditional log-likelihood is given by

$$L_n(\theta) = \sum_{t=1}^{n} \log f(x_t \mid x_{t-1}, \dots, x_1; \theta),$$

where $\theta$ is the vector of unknown parameters, $(x_1, \dots, x_n)$ is a sample of $n$ observations, and $f(x_t \mid x_{t-1}, \dots, x_1; \theta)$ denotes the normal density function of $X_t$ given the past, with mean $\hat{x}_{t|t-1}$ and variance $F_t$. The quasi-maximum log-likelihood may be put in the following form:

$$L_n(\theta) = -\frac{n}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{n}\left[\log F_t + \frac{(x_t - \hat{x}_{t|t-1})^2}{F_t}\right].$$

Given this formula the quasi-maximum likelihood can be calculated using the Kalman filter; see Hamilton [5].

In order to apply the Kalman filter, we consider the appropriate state-space representation for the model in (1):

$$X_t = Z' \alpha_t, \qquad \alpha_t = \Phi \alpha_{t-1} + \eta_t,$$

where $\alpha_t = (X_t, X_{t-1}, \dots, X_{t-p+1})'$ is the state vector, $Z = (1, 0, \dots, 0)'$, $\Phi$ is the companion matrix defined in Section 2, and $\eta_t = B_t \alpha_{t-1} + \mathbf{e}_t$ collects the random-coefficient and innovation terms.
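For concreteness, the fixed matrices of this state-space representation can be assembled numerically; a sketch (the function name is ours):

```python
import numpy as np

def companion_matrices(phi):
    """Build the companion matrix Phi of the state equation
    alpha_t = (Phi + B_t) alpha_{t-1} + e_t, together with the selection
    vector Z = (1, 0, ..., 0)' used by the observation equation."""
    p = len(phi)
    Phi = np.zeros((p, p))
    Phi[0, :] = phi               # first row holds the AR coefficients
    Phi[1:, :-1] = np.eye(p - 1)  # shifted identity below the first row
    Z = np.zeros(p)
    Z[0] = 1.0
    return Phi, Z
```

The eigenvalues of `Phi` are also what the stationarity check of Section 4 inspects.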

We should point out that the previous representation was proposed by Benmoumen in his doctoral thesis [6].

Now, we describe the Kalman filter with the aim of building the log-likelihood function. The Kalman filter is a recursive algorithm derived by Kalman [7] to provide an optimal forecast $\hat{\alpha}_{t|t-1}$ of $\alpha_t$ given the observations up to time $t-1$, together with its mean square error matrix $P_{t|t-1}$. Given starting values $\hat{\alpha}_{1|0}$ and $P_{1|0}$, which are derived from Theorem 2, the recursive procedure is as follows: (i) calculate the forecast $\hat{x}_{t|t-1}$ of the observation $X_t$ and the variance $F_t$ of the forecast error; (ii) update the state vector to $\hat{\alpha}_{t|t}$ and compute the MSE $P_{t|t}$ of this updated projection; (iii) calculate the forecast $\hat{\alpha}_{t+1|t}$ and its MSE $P_{t+1|t}$.

Thus, we can construct the log-likelihood function using the Kalman filter and obtain the maximum likelihood estimators. In order to avoid the fastidious computation of the partial derivatives of $L_n(\theta)$, we use the simulated annealing method (see Corana et al. [8]), which is a global optimization algorithm.

4. Estimating Algorithm for p-Order Random Coefficient Autoregressive Model Parameters

The algorithm proposed here is a generalization of a procedure developed by Allal and Benmoumen [3] to estimate first-order RCA parameters.

Recently, the same idea has been developed for parameter estimation in GARCH(1,1), ARCH(1), and ARCH(p) models by Benmoumen et al. (2011, 2014, and 2015).

Before describing our algorithm MLKF (quasi-maximum likelihood and Kalman filter estimation), it is worthwhile to provide a subalgorithm which tests whether the parameters fulfill the conditions of stationarity; we denote it by Test. The second subalgorithm, which we must provide, concerns the computation of $L_n(\theta)$ by the Kalman filter; we denote it by KF. These two subalgorithms will be implemented in our global estimating algorithm.

Herein, we are interested in minimizing $-L_n(\theta)$.

Subalgorithm Test($\theta$)
  if the eigenvalues of the matrix $\Phi \otimes \Phi + \mathbb{E}(B_t \otimes B_t)$ have modulus less than unity then
    go to the next step
  else
    take the last accepted point as starting point
  end if
End Subalgorithm

Subalgorithm variance($\theta$)
  Solve the Yule-Walker equations $\rho(k) = \sum_{i=1}^{p} \phi_i \rho(k-i)$, $k = 1, \dots, p$, with $\rho(0) = 1$ and $\rho(-k) = \rho(k)$.
  Calculate the marginal variance $\gamma(0) = \sigma_\varepsilon^2 / \bigl(1 - \sum_{i=1}^{p} \phi_i \rho(i) - \operatorname{tr}(C_b R)\bigr)$ and the autocovariances $\gamma(k) = \gamma(0)\rho(k)$.
End Subalgorithm

Subalgorithm KF($\theta$)
  Given the starting conditions $\hat{\alpha}_{1|0}$ and $P_{1|0}$ (from subalgorithm variance):
  for $t = 1$ to $n$ do
    $\hat{x}_{t|t-1} = Z'\hat{\alpha}_{t|t-1}$, $F_t = Z' P_{t|t-1} Z$, $v_t = x_t - \hat{x}_{t|t-1}$
    $\hat{\alpha}_{t|t} = \hat{\alpha}_{t|t-1} + P_{t|t-1} Z F_t^{-1} v_t$, $P_{t|t} = P_{t|t-1} - P_{t|t-1} Z F_t^{-1} Z' P_{t|t-1}$
    $\hat{\alpha}_{t+1|t} = \Phi \hat{\alpha}_{t|t}$, $P_{t+1|t} = \Phi P_{t|t} \Phi' + Q_t$
  end for
  $L_n(\theta) = -\frac{n}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{n}\left[\log F_t + v_t^2 / F_t\right]$
End Subalgorithm
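The KF subalgorithm amounts to a Kalman filter pass that accumulates the Gaussian quasi-log-likelihood. The sketch below holds the state-noise covariance `Q` fixed for simplicity, whereas the RCA(p) case lets that variance depend on the filtered state through the random-coefficient variances; the function name is ours:

```python
import numpy as np

def kalman_loglik(x, Phi, Z, Q, a0, P0):
    """Gaussian log-likelihood via the Kalman filter for the linear model
    X_t = Z' a_t,  a_t = Phi a_{t-1} + eta_t,  Var(eta_t) = Q.
    a0 and P0 are the initial state mean and covariance."""
    a, P = np.asarray(a0, float).copy(), np.asarray(P0, float).copy()
    loglik = 0.0
    for xt in x:
        # forecast the state, then the observation
        a_pred = Phi @ a
        P_pred = Phi @ P @ Phi.T + Q
        f = Z @ a_pred            # one-step forecast of X_t
        F = Z @ P_pred @ Z        # variance of the forecast error
        v = xt - f                # forecast error
        loglik += -0.5 * (np.log(2.0 * np.pi * F) + v * v / F)
        # update the state estimate and its MSE
        K = P_pred @ Z / F
        a = a_pred + K * v
        P = P_pred - np.outer(K, Z @ P_pred)
    return loglik
```

Maximizing this quantity over the parameters (equivalently, minimizing its negative) is the optimization task handed to simulated annealing.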

Indeed, as we shall see later, our algorithm is an iterative process requiring initial estimates of the parameters to commence the iterations. The consistent least squares estimates are suitable for this purpose.

MLKF Algorithm
Step 1: Initialize the parameter vector $\theta^{(0)}$, the step vector $s$, and the temperature $T$.
Step 2: Starting from the point $\theta$, generate a random point $\theta'$ along the direction $h$: $\theta' = \theta + r s_h e_h$, where $r$ is a random number generated in the range $[-1, 1]$ by a pseudorandom generator, $e_h$ is the vector of the $h$th coordinate direction, and $s_h$ is the component of the step vector along the same direction.
Step 3: Call subalgorithm Test($\theta'$).
Step 4: Call subalgorithm KF($\theta'$) and compute $-L_n(\theta')$. If $-L_n(\theta') \le -L_n(\theta)$, accept the new point. Else accept or reject the new point with acceptance probability $\exp\{[L_n(\theta') - L_n(\theta)]/T\}$: generate a uniformly distributed random number $u$ in the range $[0, 1]$; if $u$ is below the acceptance probability, the point is accepted; otherwise it is rejected.
Step 5: Steps 2 to 4 are repeated for each coordinate direction $h = 1, \dots, d$ ($d$ is the dimension of the parameter vector).
Step 6: Steps 2 to 5 are repeated $N_s$ times ($N_s$ is the number of step variations), and the step vector is adjusted.
Step 7: Steps 2 to 6 are repeated $N_T$ times ($N_T$ is the number of temperature reductions), and the temperature is reduced following the rule $T \leftarrow r_T T$ with $0 < r_T < 1$.
Step 8: Steps 2 to 7 are repeated until a termination criterion is satisfied.
End Algorithm
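The MLKF optimization loop follows the simulated annealing scheme of Corana et al. [8]: random coordinate moves, Metropolis acceptance, step adjustment, and geometric cooling. A minimal sketch of such an optimizer; the parameter names and the crude step-adjustment rule are simplified assumptions of ours, not the paper's exact tuning:

```python
import numpy as np

def simulated_annealing(f, theta0, step0, T0, r=0.85, n_temp=50, n_step=20, seed=0):
    """Minimize f by coordinate-wise simulated annealing (a sketch)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    step = np.asarray(step0, dtype=float)
    f_cur = f(theta)
    best, f_best = theta.copy(), f_cur
    T = T0
    for _ in range(n_temp):                  # temperature reductions
        for _ in range(n_step):              # step variations
            accepted = np.zeros(len(theta))
            for h in range(len(theta)):      # one trial move per coordinate
                cand = theta.copy()
                cand[h] += rng.uniform(-1.0, 1.0) * step[h]
                f_cand = f(cand)
                # Metropolis rule: always accept improvements, sometimes worse
                if f_cand <= f_cur or rng.uniform() < np.exp(-(f_cand - f_cur) / T):
                    theta, f_cur = cand, f_cand
                    accepted[h] = 1.0
                    if f_cur < f_best:
                        best, f_best = theta.copy(), f_cur
            # crude step adjustment: grow where moves are accepted, shrink otherwise
            step *= np.where(accepted > 0, 1.2, 0.8)
        T *= r                               # geometric cooling schedule
    return best, f_best
```

In the MLKF setting, `f` would be the negative quasi-log-likelihood returned by the KF subalgorithm, evaluated only at points passing the Test subalgorithm.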

5. Simulations

To examine the performance of our algorithm, we carried out a series of simulation experiments. In this study, we consider two examples of RCA models.

For the models mentioned earlier, we generated 1000 replications for each of the sample sizes considered.

The results of this experiment are displayed in Tables 1–4, where for each estimator we give the mean and the MSE. We use the notation QMLE for the quasi-maximum likelihood estimators and MLKF for the estimation by our algorithm.

Table 1: Mean and MSE of estimated parameters for Example 1; .
Table 2: Mean and MSE of estimated parameters for Example 1; .
Table 3: Mean and MSE of estimated parameters for Example 2; .
Table 4: Mean and MSE of estimated parameters for Example 2; .

Remark 3. (i) The least squares estimate of $\phi = (\phi_1, \dots, \phi_p)'$ is given by $\hat{\phi} = \bigl(\sum_t \mathbf{Y}_{t-1}\mathbf{Y}_{t-1}'\bigr)^{-1} \sum_t \mathbf{Y}_{t-1} X_t$, where $\mathbf{Y}_{t-1} = (X_{t-1}, \dots, X_{t-p})'$. (ii) The maximum likelihood estimate of $\theta$ is defined by minimizing $-L_n(\theta)$. Under certain assumptions, both of the estimates, the least squares and the maximum likelihood, are strongly consistent and obey a central limit theorem: a second-moment condition is required for the strong consistency of the least squares estimates, and a fourth-moment condition for the asymptotic normality of these estimators; analogous moment conditions are required in order that a central limit theorem hold for the maximum likelihood estimators. For more details see Nicholls and Quinn [1].
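The least squares estimate used to initialize the MLKF iterations is an ordinary regression of $X_t$ on its $p$ lags. A sketch (function name ours):

```python
import numpy as np

def least_squares_phi(x, p):
    """Least squares estimate of phi for an RCA(p)/AR(p)-type series:
    regress X_t on (X_{t-1}, ..., X_{t-p})."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # design matrix of lagged values Y_{t-1} = (X_{t-1}, ..., X_{t-p})'
    Y = np.column_stack([x[p - i: n - i] for i in range(1, p + 1)])
    target = x[p:]
    phi_hat, *_ = np.linalg.lstsq(Y, target, rcond=None)
    return phi_hat
```

Its consistency (Remark 3) is what makes it a safe starting point for the annealing search.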

5.1. Comparison with Quasi-Maximum Likelihood Estimators

In this series of simulations we compare our algorithm (MLKF) with the quasi-maximum likelihood (QMLE). In each method we use the simulated annealing algorithm for optimization and the least squares estimators for initialization.

As was to be expected, the MLKF estimation procedure performed better: its sample mean square errors (MSE) are generally smaller than those of the quasi-maximum likelihood estimators (QMLE). Hence, we can conclude that the performance of our estimation procedure is promising.

6. Conclusion

In this paper, we constructed an algorithm for calculating the covariance matrix of RCA(p) models and generalized a procedure developed by Allal and Benmoumen [3] to estimate first-order RCA parameters.

The log-likelihood function is constructed using the Kalman filter and is numerically maximized by the simulated annealing method. The results of our simulation study show that our estimation approach succeeds and performs better than its competitor.

Data Availability

Only computer-generated data have been used, so all researchers can reproduce our results by applying our algorithms to computer-simulated data.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


References
  1. D. F. Nicholls and B. G. Quinn, Random Coefficient Autoregressive Models: An Introduction, Springer-Verlag, New York, NY, USA, 1982.
  2. A. Thavaneswaran and B. Abraham, “Estimation for nonlinear time series models using estimating equations,” Journal of Time Series Analysis, vol. 9, no. 1, pp. 99–108, 1988.
  3. J. Allal and M. Benmoumen, “Parameter estimation for first-order Random Coefficient Autoregressive (RCA) models based on Kalman filter,” Communications in Statistics—Simulation and Computation, vol. 42, no. 8, pp. 1750–1762, 2013.
  4. Y. Lian and A. Thavaneswaran, “RCA models: joint prediction of mean and volatility,” Statistics & Probability Letters, vol. 83, no. 2, pp. 527–533, 2013.
  5. J. D. Hamilton, Time Series Analysis, Princeton University Press, Princeton, New Jersey, 1994.
  6. M. Benmoumen, Algorithmes pour l’inférence statistique dans les modèles RCA et GARCH basée sur le filtre de Kalman, thèse de Doctorat, Université Mohammed Premier, 2012.
  7. R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME—Journal of Basic Engineering, vol. 82, pp. 35–45, 1960.
  8. A. Corana, M. Marchesi, C. Martini, and S. Ridella, “Minimizing multimodal functions of continuous variables with the ‘simulated annealing’ algorithm,” ACM Transactions on Mathematical Software, vol. 13, no. 3, pp. 262–280, 1987.