Research Article | Open Access

Volume 2019 |Article ID 6593821 | https://doi.org/10.1155/2019/6593821

Li Bi, Feilong Lu, Kai Yang, Dehui Wang, "Locally Most Powerful Test for the Random Coefficient Autoregressive Model", Mathematical Problems in Engineering, vol. 2019, Article ID 6593821, 11 pages, 2019. https://doi.org/10.1155/2019/6593821

# Locally Most Powerful Test for the Random Coefficient Autoregressive Model

Accepted: 17 Jun 2019
Published: 27 Jun 2019

#### Abstract

In this article, we study the problem of testing the constancy of the coefficient in a class of stationary first-order random coefficient autoregressive (RCAR(1)) models. We construct a new test statistic based on the locally most powerful-type (LMP) test. Under the null hypothesis, we derive the limiting distribution of the proposed test statistic. In the simulations, we compare the power of the LMP test and the empirical likelihood (EL) test and find that the accuracy of the LMP test is 6.7%, 28.8%, and 26.1% higher than that of the EL test under normal, Student's t, and symmetric contamination errors, respectively. A real-life data set is analyzed to illustrate the practical effectiveness of our test.

#### 1. Introduction

In time series analysis, autoregressive and linear processes are widely used due to their mathematical tractability. In fact, the autoregressive model has two prominent advantages: its estimation procedures are well established, and the existence of stationary solutions is easily derived. Therefore, this model has been investigated in the fields of signal detection and classification, psychometry, and biomedical engineering; see, for example, Hoque [1], Ogawa et al. [2], Subasi et al. [3], and Maleki et al. [4]. The first-order autoregressive model (AR(1)) is defined as
$$X_t = \phi X_{t-1} + \varepsilon_t, \qquad (1)$$
where $\{\varepsilon_t\}$ is an independent and identically distributed (i.i.d.) random error sequence with probability density function $f(\cdot)$, $E(\varepsilon_t) = 0$, $\operatorname{Var}(\varepsilon_t) = \sigma^2$, and $\varepsilon_t$ is independent of $X_s$ for all $s < t$. However, it has been found that a variety of data sets cannot be modelled precisely by assuming linearity. Financial data, for instance, present heteroscedasticity, and biological data suffer from random perturbations. To address this problem, Conlisk [5] considered a random coefficient autoregressive model defined as follows:
$$X_t = (\phi + B_t) X_{t-1} + \varepsilon_t, \qquad (2)$$
where $\{B_t\}$ is a sequence of i.i.d. random variables with $E(B_t) = 0$ and $\operatorname{Var}(B_t) = \sigma_B^2$, and $\{\varepsilon_t\}$ is an i.i.d. random error sequence with mean zero and variance $\sigma^2$. The random variable $B_t$ is assumed to be independent of $\varepsilon_t$, and both are independent of $X_{t-1}$. Note that when $\sigma_B^2 = 0$, the RCAR(1) model reduces to the standard AR(1) model.
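To make the two models concrete, the sketch below simulates an AR(1) path and an RCAR(1) path side by side. Normal distributions for $B_t$ and $\varepsilon_t$ are an illustrative assumption (the model itself only requires mean zero and finite variance); setting the coefficient variance to zero recovers the ordinary AR(1) recursion.

```python
import numpy as np

def simulate_rcar1(n, phi, sigma_b, sigma_eps, x0=0.0, rng=None):
    """Simulate an RCAR(1) path X_t = (phi + B_t) X_{t-1} + eps_t.

    Here B_t ~ N(0, sigma_b^2) and eps_t ~ N(0, sigma_eps^2), i.i.d. and
    mutually independent (an illustrative choice of distributions).
    sigma_b = 0 recovers the standard AR(1) model.
    """
    rng = np.random.default_rng(rng)
    x = np.empty(n)
    prev = x0
    for t in range(n):
        b = rng.normal(0.0, sigma_b) if sigma_b > 0 else 0.0
        eps = rng.normal(0.0, sigma_eps)
        prev = (phi + b) * prev + eps
        x[t] = prev
    return x

# sigma_b = 0: ordinary AR(1); sigma_b > 0: random coefficient
ar1 = simulate_rcar1(500, phi=0.5, sigma_b=0.0, sigma_eps=1.0, rng=1)
rcar1 = simulate_rcar1(500, phi=0.5, sigma_b=0.3, sigma_eps=1.0, rng=1)
```

A stationary RCAR(1) path has a larger marginal variance than the AR(1) path with the same $\phi$ and $\sigma$, since the random coefficient injects extra variability.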

RCAR(1) models have been widely applied in the mathematical literature due to their strong practical value. For instance, Nicholls and Quinn [6–8] derived the least squares (LS) estimates of the model parameters and showed that the LS estimates are strongly consistent and obey the central limit theorem. Brandt [9] restated the necessary and sufficient conditions for the existence of a strictly stationary and ergodic solution of the RCAR(1) model. Wang and Ghosh [10] considered a Bayesian method to estimate the RCAR model parameters and explored the frequentist properties of the Bayes estimator. Wang et al. [11] obtained the asymptotic properties of the maximum likelihood estimators of the parameters in the RCAR(1) model under the unit root assumption.

More recently, the problem of testing the traditional AR(1) model against the RCAR(1) model has become an essential issue in this setting. This detection problem was first investigated by Nicholls and Quinn [8], who derived a Gaussian Lagrange multiplier test. However, this likelihood ratio approach has several weaknesses: since the true value of the parameter under the null hypothesis lies on the boundary of the parameter space, the asymptotics are not easily established. Ramanathan and Rajarshi [12] considered a non-Gaussian test to handle the change problem for the parameters in the RCAR(1) case. Their signed rank test, however, requires a symmetry assumption on the innovation density, which is highly unnatural in this context. Still for the RCAR(1) model, Lee [13] proposed the cumulative sum test for parameter change. However, it detects a change only in a function of the parameters rather than in the parameters themselves. Recently, Moreno and Romo [14] studied a robust unit root test with autoregressive errors; their unit root test shows some size distortions and requires many estimates in its construction.

To deal with these drawbacks, we construct a locally most powerful-type (LMP) test for testing the constancy of the parameters in the RCAR(1) model. The LMP test has been investigated by several authors: Rohatgi et al. [15], Chikkagoudar and Biradar [16], and Manik et al. [17] considered LMP tests for parameter constancy. The test has the merit that its calculation is less cumbersome and it produces stable sizes, especially when the parameter is near the true value. In contrast, the traditional tests mentioned above show some size distortions and require many estimates in the construction of the tests; see Ramanathan and Rajarshi [12]. In general, our test conveniently discards correlation effects and enhances testing performance. In this paper, we illustrate how our proposed method can be implemented for finite samples under normal, Student's t, and symmetric contamination errors (see Huber [18] and Hettmansperger [19]). Through numerical simulation studies, we see that our test is stronger in terms of maintaining the empirical level and power than the empirical likelihood (EL) test suggested by Zhao et al. [20].

The outline of this paper is as follows. In Section 2, we introduce our test statistic and derive its limiting distribution under the null hypothesis. Numerical simulations evaluating the empirical size and power of our test are discussed in Section 3. Section 4 presents a real-life data example to illustrate the superiority of our method. We provide brief concluding remarks in Section 5. The Appendix provides the proofs of the main results.

#### 2. Methodology and Main Results

In this section, we construct a test statistic to test whether the autoregressive coefficient is constant. To achieve this, we set up the null and alternative hypotheses
$$H_0: \sigma_B^2 = 0 \quad \text{versus} \quad H_1: \sigma_B^2 > 0.$$
In what follows, we give our main results. Suppose that the time series data are generated from (2). Let $\{B_t\}$ be an i.i.d. sequence of random variables with a common probability distribution, and let $\{\varepsilon_t\}$ be an i.i.d. sequence of random variables with density function $f(\cdot)$. In this article, we regard the initial value $X_0$ as a given number; alternatively, if $X_0$ is a random variable, we consider only inferences conditional on $X_0$ being fixed. Note that $X_0$, $\{B_t\}$, and $\{\varepsilon_t\}$ are independent.

Next, we assume the following conditions to establish the asymptotic properties of the test statistic:

(C1)

(C2) The distribution of is such that

(C3) The third and mixed derivatives of with respect to and are uniformly bounded on .

(C4) Differentiation thrice with respect to of under the integration is bounded.

It is easy to derive that $\{X_t\}$ is a Markov chain on $\mathbb{R}$ with the transition probabilities given above. The Markov property follows from (2) and the fact that $\{(B_t, \varepsilon_t)\}$ is an i.i.d. sequence. The conditional density is obtained by noting that, conditional on $X_{t-1}$, the variable $X_t = (\phi + B_t) X_{t-1} + \varepsilon_t$ is a sum of independent components with known densities; hence the conditional probability density function of $X_t$ given $X_{t-1}$ follows, and we obtain the result in the above equation.

Therefore, we can write down the likelihood function for the RCAR(1) model. Furthermore, following the spirit of the LMP approach introduced by Manik et al. [17], we obtain our test statistic; the detailed derivation can be found in Appendix A.
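The paper's likelihood is written for general error densities; as a concrete special case, if one assumes normal $B_t$ and $\varepsilon_t$ (the N1 setting used later in the simulations), then conditional on $X_{t-1} = x$ the variable $X_t$ is $N(\phi x,\ \sigma^2 + \sigma_B^2 x^2)$, and the conditional log-likelihood can be sketched as follows. This is an illustrative sketch under the normality assumption, not the paper's general formula.

```python
import numpy as np

def rcar1_loglik(x, phi, sigma2, sigma_b2):
    """Gaussian conditional log-likelihood of an RCAR(1) series.

    Assumes (illustratively) normal B_t and eps_t, so that, conditional on
    X_{t-1} = v, X_t ~ N(phi * v, sigma2 + sigma_b2 * v**2).
    sigma_b2 = 0 gives the ordinary Gaussian AR(1) likelihood (the H0 case).
    """
    pred = phi * x[:-1]                      # conditional means
    var = sigma2 + sigma_b2 * x[:-1] ** 2    # conditional variances
    resid = x[1:] - pred
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# Illustration on a simulated AR(1) path (phi = 0.5, unit-variance errors)
rng = np.random.default_rng(3)
e = rng.normal(size=1000)
x = np.empty(1000)
prev = 0.0
for t in range(1000):
    prev = 0.5 * prev + e[t]
    x[t] = prev
ll_true = rcar1_loglik(x, 0.5, 1.0, 0.0)   # likelihood at the true parameters
ll_wrong = rcar1_loglik(x, 0.9, 1.0, 0.0)  # likelihood at a wrong phi
```

At the true parameters the log-likelihood exceeds the value at a misspecified coefficient, which is what maximum likelihood estimation exploits.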

Let $\theta$ denote the vector of all unknown parameters of the RCAR(1) process. Hereafter, we use $\hat{\theta}$ to denote the maximum likelihood estimator (MLE) of $\theta$. The actual test statistic is obtained by replacing $\theta$ with its MLE $\hat{\theta}$. It is easily derived that the underlying score process is a zero-mean martingale.

Before we state our main results, the following assumptions will be made:

(A.1) , for some

The asymptotic distribution of the test statistic is given in the following theorem.

Theorem 1. Let $\{X_t\}$ be a sequence of strictly stationary, ergodic, and measurable solutions to equation (2), and suppose the conditions stated above hold. Then, under $H_0$, the suitably standardized test statistic is asymptotically normal.

(The expressions for the quantities appearing in the limit are derived in Appendix B.)

#### 3. Simulation

In this section, we carry out simulation studies to compare the performance of the locally most powerful-type test and the empirical likelihood test in terms of empirical size and power. The empirical sizes and powers for the two tests in Tables 1–6 are based on 1000 repetitions, computed with the R software. Within each study, we fix the initial values and set the significance level at 0.05. Throughout the simulations, LMP denotes the locally most powerful-type test based on our algorithm, and EL denotes the empirical likelihood method.

Table 1: Empirical sizes for model N1 under the eight parameter settings described in Section 3.1.

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.095 | 0.105 | 0.109 | 0.101 | 0.116 | 0.103 | 0.089 | 0.098 |
| 100 | EL | 0.008 | 0.004 | 0.002 | 0.003 | 0.003 | 0.002 | 0.003 | 0.002 |
| 200 | LMP | 0.089 | 0.082 | 0.085 | 0.086 | 0.085 | 0.082 | 0.083 | 0.087 |
| 200 | EL | 0.006 | 0.007 | 0.004 | 0.005 | 0.004 | 0.004 | 0.003 | 0.003 |
| 300 | LMP | 0.074 | 0.077 | 0.082 | 0.069 | 0.076 | 0.071 | 0.076 | 0.074 |
| 300 | EL | 0.006 | 0.009 | 0.007 | 0.006 | 0.005 | 0.007 | 0.008 | 0.009 |
| 400 | LMP | 0.063 | 0.070 | 0.075 | 0.068 | 0.067 | 0.063 | 0.070 | 0.072 |
| 400 | EL | 0.007 | 0.009 | 0.006 | 0.004 | 0.007 | 0.008 | 0.005 | 0.004 |
| 500 | LMP | 0.066 | 0.060 | 0.077 | 0.060 | 0.065 | 0.062 | 0.060 | 0.055 |
| 500 | EL | 0.009 | 0.005 | 0.006 | 0.005 | 0.006 | 0.009 | 0.014 | 0.008 |
| 600 | LMP | 0.062 | 0.057 | 0.067 | 0.053 | 0.065 | 0.060 | 0.059 | 0.068 |
| 600 | EL | 0.009 | 0.005 | 0.011 | 0.007 | 0.009 | 0.007 | 0.008 | 0.007 |
| 700 | LMP | 0.064 | 0.059 | 0.068 | 0.056 | 0.057 | 0.062 | 0.058 | 0.063 |
| 700 | EL | 0.005 | 0.010 | 0.008 | 0.007 | 0.007 | 0.008 | 0.005 | 0.003 |
| 800 | LMP | 0.060 | 0.061 | 0.060 | 0.056 | 0.058 | 0.055 | 0.055 | 0.063 |
| 800 | EL | 0.009 | 0.007 | 0.007 | 0.010 | 0.011 | 0.012 | 0.008 | 0.009 |
| 900 | LMP | 0.055 | 0.062 | 0.054 | 0.048 | 0.053 | 0.053 | 0.062 | 0.062 |
| 900 | EL | 0.015 | 0.015 | 0.008 | 0.010 | 0.011 | 0.007 | 0.010 | 0.009 |
| 1000 | LMP | 0.063 | 0.051 | 0.057 | 0.052 | 0.058 | 0.062 | 0.057 | 0.054 |
| 1000 | EL | 0.019 | 0.009 | 0.012 | 0.009 | 0.011 | 0.012 | 0.007 | 0.009 |
| 5000 | LMP | 0.051 | 0.051 | 0.051 | 0.053 | 0.050 | 0.048 | 0.049 | 0.053 |
| 5000 | EL | 0.020 | 0.022 | 0.018 | 0.020 | 0.016 | 0.021 | 0.024 | 0.022 |
Table 2: Empirical sizes for model N2 under the eight parameter settings described in Section 3.1.

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.060 | 0.065 | 0.066 | 0.061 | 0.065 | 0.068 | 0.075 | 0.067 |
| 100 | EL | 0.003 | 0.003 | 0.002 | 0.003 | 0.004 | 0.003 | 0.004 | 0.001 |
| 200 | LMP | 0.059 | 0.061 | 0.065 | 0.059 | 0.066 | 0.063 | 0.068 | 0.061 |
| 200 | EL | 0.002 | 0.004 | 0.002 | 0.002 | 0.004 | 0.004 | 0.002 | 0.004 |
| 300 | LMP | 0.058 | 0.057 | 0.059 | 0.055 | 0.063 | 0.058 | 0.063 | 0.055 |
| 300 | EL | 0.005 | 0.004 | 0.003 | 0.009 | 0.006 | 0.006 | 0.004 | 0.002 |
| 400 | LMP | 0.058 | 0.055 | 0.058 | 0.055 | 0.062 | 0.054 | 0.061 | 0.058 |
| 400 | EL | 0.004 | 0.003 | 0.005 | 0.003 | 0.004 | 0.003 | 0.005 | 0.006 |
| 500 | LMP | 0.056 | 0.055 | 0.056 | 0.052 | 0.057 | 0.059 | 0.055 | 0.054 |
| 500 | EL | 0.006 | 0.004 | 0.005 | 0.005 | 0.007 | 0.005 | 0.006 | 0.007 |
| 600 | LMP | 0.060 | 0.057 | 0.058 | 0.060 | 0.062 | 0.059 | 0.056 | 0.053 |
| 600 | EL | 0.003 | 0.005 | 0.005 | 0.005 | 0.008 | 0.007 | 0.009 | 0.006 |
| 700 | LMP | 0.055 | 0.056 | 0.058 | 0.057 | 0.060 | 0.060 | 0.056 | 0.058 |
| 700 | EL | 0.006 | 0.005 | 0.005 | 0.008 | 0.006 | 0.006 | 0.004 | 0.008 |
| 800 | LMP | 0.050 | 0.054 | 0.054 | 0.052 | 0.056 | 0.054 | 0.054 | 0.054 |
| 800 | EL | 0.005 | 0.007 | 0.008 | 0.006 | 0.006 | 0.008 | 0.008 | 0.008 |
| 900 | LMP | 0.051 | 0.052 | 0.058 | 0.056 | 0.056 | 0.060 | 0.049 | 0.051 |
| 900 | EL | 0.005 | 0.006 | 0.006 | 0.009 | 0.006 | 0.008 | 0.010 | 0.005 |
| 1000 | LMP | 0.052 | 0.056 | 0.057 | 0.055 | 0.050 | 0.053 | 0.050 | 0.054 |
| 1000 | EL | 0.007 | 0.008 | 0.005 | 0.006 | 0.012 | 0.009 | 0.009 | 0.007 |
| 5000 | LMP | 0.052 | 0.050 | 0.050 | 0.050 | 0.051 | 0.049 | 0.054 | 0.052 |
| 5000 | EL | 0.015 | 0.014 | 0.012 | 0.016 | 0.017 | 0.014 | 0.012 | 0.013 |
Table 3: Empirical sizes for model N3 under the nine parameter settings described in Section 3.1. The two panels correspond to the two contamination settings.

Panel (a):

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.064 | 0.068 | 0.065 | 0.072 | 0.053 | 0.078 | 0.071 | 0.053 | 0.047 |
| 100 | EL | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 |
| 300 | LMP | 0.045 | 0.066 | 0.053 | 0.038 | 0.054 | 0.081 | 0.051 | 0.043 | 0.040 |
| 300 | EL | 0.000 | 0.001 | 0.000 | 0.001 | 0.001 | 0.002 | 0.000 | 0.001 | 0.000 |
| 500 | LMP | 0.058 | 0.057 | 0.057 | 0.049 | 0.050 | 0.060 | 0.043 | 0.049 | 0.044 |
| 500 | EL | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 |
| 1000 | LMP | 0.050 | 0.059 | 0.052 | 0.051 | 0.045 | 0.054 | 0.036 | 0.043 | 0.036 |
| 1000 | EL | 0.002 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.003 | 0.001 | 0.001 |
| 5000 | LMP | 0.053 | 0.050 | 0.054 | 0.050 | 0.048 | 0.050 | 0.047 | 0.050 | 0.046 |
| 5000 | EL | 0.003 | 0.004 | 0.003 | 0.007 | 0.007 | 0.006 | 0.008 | 0.006 | 0.011 |

Panel (b):

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.078 | 0.081 | 0.089 | 0.069 | 0.089 | 0.095 | 0.083 | 0.091 | 0.086 |
| 100 | EL | 0.000 | 0.002 | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.001 |
| 300 | LMP | 0.080 | 0.070 | 0.074 | 0.070 | 0.070 | 0.056 | 0.063 | 0.057 | 0.055 |
| 300 | EL | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 | 0.002 | 0.000 | 0.001 |
| 500 | LMP | 0.072 | 0.081 | 0.068 | 0.067 | 0.058 | 0.059 | 0.060 | 0.046 | 0.053 |
| 500 | EL | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 | 0.001 | 0.000 | 0.004 |
| 1000 | LMP | 0.064 | 0.061 | 0.067 | 0.067 | 0.055 | 0.055 | 0.052 | 0.051 | 0.045 |
| 1000 | EL | 0.000 | 0.000 | 0.000 | 0.001 | 0.002 | 0.003 | 0.003 | 0.000 | 0.001 |
| 5000 | LMP | 0.055 | 0.053 | 0.050 | 0.050 | 0.055 | 0.046 | 0.050 | 0.047 | 0.050 |
| 5000 | EL | 0.004 | 0.002 | 0.003 | 0.002 | 0.002 | 0.006 | 0.003 | 0.003 | 0.009 |
Table 4: Empirical powers for model A1 under the eight alternative settings described in Section 3.2.

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.165 | 0.102 | 0.171 | 0.110 | 0.171 | 0.107 | 0.174 | 0.090 |
| 100 | EL | 0.055 | 0.024 | 0.040 | 0.023 | 0.050 | 0.022 | 0.041 | 0.025 |
| 200 | LMP | 0.318 | 0.143 | 0.294 | 0.147 | 0.320 | 0.140 | 0.311 | 0.160 |
| 200 | EL | 0.191 | 0.070 | 0.156 | 0.073 | 0.202 | 0.049 | 0.155 | 0.067 |
| 300 | LMP | 0.513 | 0.209 | 0.494 | 0.253 | 0.511 | 0.175 | 0.449 | 0.241 |
| 300 | EL | 0.405 | 0.144 | 0.332 | 0.173 | 0.397 | 0.124 | 0.294 | 0.168 |
| 400 | LMP | 0.674 | 0.232 | 0.561 | 0.330 | 0.663 | 0.264 | 0.590 | 0.338 |
| 400 | EL | 0.578 | 0.176 | 0.456 | 0.272 | 0.559 | 0.189 | 0.450 | 0.260 |
| 500 | LMP | 0.791 | 0.377 | 0.720 | 0.427 | 0.770 | 0.374 | 0.725 | 0.438 |
| 500 | EL | 0.722 | 0.298 | 0.629 | 0.366 | 0.705 | 0.303 | 0.625 | 0.374 |
| 600 | LMP | 0.842 | 0.415 | 0.792 | 0.521 | 0.832 | 0.424 | 0.790 | 0.508 |
| 600 | EL | 0.798 | 0.352 | 0.717 | 0.460 | 0.793 | 0.365 | 0.712 | 0.458 |
| 700 | LMP | 0.912 | 0.483 | 0.862 | 0.598 | 0.919 | 0.478 | 0.855 | 0.598 |
| 700 | EL | 0.882 | 0.429 | 0.809 | 0.541 | 0.885 | 0.423 | 0.794 | 0.536 |
| 800 | LMP | 0.964 | 0.560 | 0.915 | 0.645 | 0.947 | 0.568 | 0.918 | 0.643 |
| 800 | EL | 0.938 | 0.512 | 0.877 | 0.604 | 0.928 | 0.516 | 0.872 | 0.599 |
| 900 | LMP | 0.967 | 0.633 | 0.936 | 0.727 | 0.971 | 0.623 | 0.955 | 0.703 |
| 900 | EL | 0.952 | 0.586 | 0.910 | 0.696 | 0.959 | 0.586 | 0.933 | 0.687 |
| 1000 | LMP | 0.988 | 0.712 | 0.963 | 0.799 | 0.978 | 0.702 | 0.962 | 0.776 |
| 1000 | EL | 0.981 | 0.678 | 0.942 | 0.774 | 0.969 | 0.676 | 0.938 | 0.749 |
| 5000 | LMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5000 | EL | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
Table 5: Empirical powers for model A2 under the eight alternative settings described in Section 3.2.

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.289 | 0.110 | 0.220 | 0.149 | 0.241 | 0.096 | 0.183 | 0.131 |
| 100 | EL | 0.045 | 0.016 | 0.027 | 0.019 | 0.047 | 0.018 | 0.027 | 0.029 |
| 200 | LMP | 0.596 | 0.252 | 0.519 | 0.310 | 0.525 | 0.199 | 0.452 | 0.272 |
| 200 | EL | 0.161 | 0.052 | 0.118 | 0.055 | 0.186 | 0.057 | 0.146 | 0.063 |
| 300 | LMP | 0.790 | 0.400 | 0.722 | 0.475 | 0.762 | 0.351 | 0.686 | 0.429 |
| 300 | EL | 0.336 | 0.111 | 0.276 | 0.124 | 0.373 | 0.124 | 0.293 | 0.152 |
| 400 | LMP | 0.899 | 0.538 | 0.861 | 0.644 | 0.862 | 0.462 | 0.815 | 0.564 |
| 400 | EL | 0.486 | 0.147 | 0.399 | 0.196 | 0.520 | 0.173 | 0.440 | 0.223 |
| 500 | LMP | 0.964 | 0.624 | 0.933 | 0.757 | 0.957 | 0.596 | 0.906 | 0.699 |
| 500 | EL | 0.622 | 0.218 | 0.529 | 0.285 | 0.695 | 0.262 | 0.590 | 0.345 |
| 600 | LMP | 0.979 | 0.701 | 0.951 | 0.819 | 0.965 | 0.657 | 0.947 | 0.754 |
| 600 | EL | 0.708 | 0.237 | 0.606 | 0.351 | 0.767 | 0.301 | 0.650 | 0.395 |
| 700 | LMP | 0.995 | 0.791 | 0.974 | 0.862 | 0.983 | 0.727 | 0.978 | 0.824 |
| 700 | EL | 0.814 | 0.351 | 0.669 | 0.404 | 0.839 | 0.389 | 0.773 | 0.483 |
| 800 | LMP | 0.996 | 0.837 | 0.990 | 0.927 | 0.994 | 0.809 | 0.984 | 0.882 |
| 800 | EL | 0.845 | 0.404 | 0.767 | 0.512 | 0.895 | 0.488 | 0.839 | 0.585 |
| 900 | LMP | 0.999 | 0.874 | 0.995 | 0.934 | 0.996 | 0.831 | 0.993 | 0.924 |
| 900 | EL | 0.879 | 0.430 | 0.801 | 0.548 | 0.929 | 0.514 | 0.877 | 0.622 |
| 1000 | LMP | 1.000 | 0.936 | 1.000 | 0.971 | 1.000 | 0.889 | 0.996 | 0.951 |
| 1000 | EL | 0.913 | 0.515 | 0.887 | 0.626 | 0.946 | 0.601 | 0.930 | 0.706 |
| 5000 | LMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5000 | EL | 0.992 | 0.967 | 0.994 | 0.985 | 1.000 | 1.000 | 1.000 | 0.999 |
Table 6: Empirical powers for model A3 under the nine alternative settings described in Section 3.2. The two panels correspond to the two contamination settings.

Panel (a):

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.141 | 0.054 | 0.034 | 0.120 | 0.073 | 0.038 | 0.084 | 0.082 | 0.046 |
| 100 | EL | 0.039 | 0.009 | 0.006 | 0.026 | 0.012 | 0.008 | 0.018 | 0.012 | 0.009 |
| 300 | LMP | 0.775 | 0.388 | 0.183 | 0.628 | 0.493 | 0.299 | 0.475 | 0.448 | 0.331 |
| 300 | EL | 0.325 | 0.102 | 0.030 | 0.231 | 0.097 | 0.052 | 0.143 | 0.114 | 0.061 |
| 500 | LMP | 0.959 | 0.675 | 0.381 | 0.880 | 0.791 | 0.594 | 0.754 | 0.729 | 0.583 |
| 500 | EL | 0.656 | 0.236 | 0.070 | 0.500 | 0.268 | 0.143 | 0.347 | 0.222 | 0.131 |
| 1000 | LMP | 1.000 | 0.967 | 0.753 | 0.996 | 0.992 | 0.934 | 0.982 | 0.982 | 0.939 |
| 1000 | EL | 0.945 | 0.599 | 0.255 | 0.891 | 0.673 | 0.399 | 0.720 | 0.609 | 0.402 |
| 5000 | LMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5000 | EL | 1.000 | 1.000 | 0.981 | 1.000 | 1.000 | 0.997 | 0.999 | 1.000 | 0.997 |

Panel (b):

| n | Test | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | LMP | 0.118 | 0.043 | 0.035 | 0.071 | 0.041 | 0.025 | 0.049 | 0.052 | 0.037 |
| 100 | EL | 0.042 | 0.013 | 0.015 | 0.032 | 0.021 | 0.014 | 0.020 | 0.013 | 0.011 |
| 300 | LMP | 0.673 | 0.278 | 0.101 | 0.545 | 0.372 | 0.190 | 0.388 | 0.313 | 0.220 |
| 300 | EL | 0.301 | 0.077 | 0.021 | 0.199 | 0.101 | 0.043 | 0.128 | 0.093 | 0.055 |
| 500 | LMP | 0.914 | 0.558 | 0.277 | 0.868 | 0.681 | 0.423 | 0.700 | 0.622 | 0.467 |
| 500 | EL | 0.571 | 0.183 | 0.054 | 0.486 | 0.239 | 0.128 | 0.353 | 0.212 | 0.113 |
| 1000 | LMP | 1.000 | 0.929 | 0.627 | 0.995 | 0.965 | 0.849 | 0.970 | 0.954 | 0.865 |
| 1000 | EL | 0.927 | 0.503 | 0.204 | 0.852 | 0.637 | 0.375 | 0.702 | 0.548 | 0.368 |
| 5000 | LMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5000 | EL | 1.000 | 1.000 | 0.963 | 1.000 | 1.000 | 0.996 | 1.000 | 1.000 | 0.994 |
##### 3.1. Empirical Size

To calculate empirical sizes, we focus on three versions of model (1), defined as follows:

(N1) $X_t = \phi X_{t-1} + \varepsilon_t$, with normal errors $\varepsilon_t \sim N(0, \sigma^2)$.

(N2) $X_t = \phi X_{t-1} + \varepsilon_t$, with Student's t errors $\varepsilon_t \sim t(\nu)$.

(N3) $X_t = \phi X_{t-1} + \varepsilon_t$, with $\varepsilon_t$ drawn from a symmetric contamination distribution. The distribution function of the $\varepsilon$-contamination is $F(x) = (1 - \varepsilon)\Phi(x) + \varepsilon\Phi(x/\tau)$, where $\varepsilon$ is a fixed constant satisfying $0 < \varepsilon < 1$, $\tau > 1$ is the contamination scale, and $\Phi$ is the distribution function of a standard normal random variable.
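For concreteness, a draw from such an $\varepsilon$-contaminated normal can be sketched as below. The values $\varepsilon = 0.1$ and $\tau = 3$ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def rcontam(n, eps=0.1, tau=3.0, rng=None):
    """Draw n values from the symmetric contamination distribution
    F(x) = (1 - eps) * Phi(x) + eps * Phi(x / tau):
    a standard normal contaminated, with probability eps, by a
    N(0, tau^2) component.  (eps = 0.1 and tau = 3 are illustrative.)
    """
    rng = np.random.default_rng(rng)
    # With probability eps, inflate the scale from 1 to tau.
    scale = np.where(rng.random(n) < eps, tau, 1.0)
    return rng.normal(0.0, 1.0, n) * scale

sample = rcontam(200_000, eps=0.1, tau=3.0, rng=0)
# Theoretical variance of the mixture: (1 - eps) * 1 + eps * tau**2 = 1.8
```

The heavy tail comes from the small fraction of draws with inflated scale, which is what makes this a classical robustness benchmark.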

Let us now describe how the simulated results are obtained. First, we use model N1 to generate data with $\sigma$ = 0.5, 1 and $\phi$ = 0.3, 0.5, 0.7, 0.9. Second, we generate data from model N2 with degrees of freedom 8, 10 and $\phi$ = 0.3, 0.5, 0.7, 0.9. For models N1 and N2, we take sample sizes 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, and 5000. Third, we simulate samples from model N3 with $\phi$ = 0.3, 0.5, 0.7, 0.9; for model N3, we use sample sizes 100, 300, 500, 1000, and 5000. The empirical levels for the three models are presented in Tables 1–3, respectively. As seen from Tables 1–3, the LMP test gives similar results under normal, Student's t, and symmetric contamination errors. In addition, the LMP test produces sizes close to the nominal significance level 0.05 quite satisfactorily, especially for larger sample sizes. The EL method, however, has much lower levels at every sample size, so one may make wrong decisions in tests based on it. Looking at the results for the three models, we conclude that our method performs better than the EL test in terms of the empirical level.
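The size-calculation procedure above can be sketched as a Monte Carlo loop: simulate data under $H_0$, apply the test at the 5% level, and record the rejection frequency. The statistic used below is a toy placeholder that is standard normal under $H_0$, not the paper's LMP statistic (whose exact form is derived in Appendix A); the sketch only illustrates the empirical-size computation itself.

```python
import numpy as np

def empirical_size(test_stat, n, reps=1000, rng=None):
    """Estimate a test's empirical size by Monte Carlo: simulate `reps`
    AR(1) samples under H0 (constant coefficient) and record how often
    |statistic| exceeds the two-sided 5% normal critical value."""
    rng = np.random.default_rng(rng)
    crit = 1.959963984540054  # z_{0.975}
    rejections = 0
    for _ in range(reps):
        # H0 data: AR(1) with phi = 0.5 and N(0, 1) errors (illustrative)
        eps = rng.normal(size=n)
        x = np.empty(n)
        prev = 0.0
        for t in range(n):
            prev = 0.5 * prev + eps[t]
            x[t] = prev
        if abs(test_stat(x)) > crit:
            rejections += 1
    return rejections / reps

def toy_stat(x):
    """Placeholder statistic: standardized mean of the residuals at the
    true phi, which is asymptotically N(0, 1) under H0."""
    e = x[1:] - 0.5 * x[:-1]
    return np.sqrt(len(e)) * e.mean()

size = empirical_size(toy_stat, n=100, reps=1000, rng=42)
```

With a well-calibrated statistic, `size` should be near the nominal 0.05, which is exactly the pattern reported for the LMP test in Tables 1–3.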

##### 3.2. Empirical Power

In order to investigate the empirical powers, we consider the alternatives under three versions of model (2):

(A1) $X_t = (\phi + B_t) X_{t-1} + \varepsilon_t$, with normal errors $\varepsilon_t \sim N(0, \sigma^2)$.

(A2) $X_t = (\phi + B_t) X_{t-1} + \varepsilon_t$, with Student's t errors $\varepsilon_t \sim t(\nu)$.

(A3) $X_t = (\phi + B_t) X_{t-1} + \varepsilon_t$, with $\varepsilon_t$ drawn from a symmetric contamination distribution. The distribution function of the $\varepsilon$-contamination is $F(x) = (1 - \varepsilon)\Phi(x) + \varepsilon\Phi(x/\tau)$, where $\varepsilon$ is a fixed constant satisfying $0 < \varepsilon < 1$, $\tau > 1$ is the contamination scale, and $\Phi$ is the distribution function of a standard normal random variable.

To calculate the empirical powers of the two tests, we first generate samples from model A1 with $\sigma$ = 0.5, 1 and various values of the remaining parameters. In the second setting, we simulate samples from model A2 with degrees of freedom 8, 10. For models A1 and A2, we take sample sizes 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, and 5000. In the last case, we generate data from model A3, for which we employ sample sizes 100, 300, 500, 1000, and 5000. The empirical powers for the three models are summarized in Tables 4–6. In all three cases, the powers of the two tests increase with the sample size, and our test produces relatively better powers than the EL test. Furthermore, the powers of both tests are close to 1 at sample size 5000. Overall, from these simulations, we conclude that the accuracy of the LMP test is 6.7%, 28.8%, and 26.1% higher than that of the EL test under normal, Student's t, and symmetric contamination errors, respectively. As anticipated, our findings show that the LMP test is a functional tool for detecting a parameter change in the RCAR(1) model. We therefore recommend the LMP test for practical use, because its computation is not difficult and its overall performance is better than that of the EL test under normal, Student's t, and symmetric contamination errors.

#### 4. A Real Life Data Analysis

In this section, we illustrate how the LMP method can be applied in practice. The data consist of 78 monthly observations of the annual growth rate of the import bill in Australia, starting in December 2010 and ending in May 2017. The data are available online at the CEInet Statistics Database site http://db.cei.cn/page/Default.aspx. The mean and variance of the data are 0.2069 and 15.3347, respectively. We denote the series by $\{X_t\}$; Figure 1 shows the sample path plot of the series.

For such a process, the mean function of $\{X_t\}$ is constant, and we may assume that the process mean has been subtracted out to produce a process with zero mean. The sample path, autocorrelation function (ACF), and partial autocorrelation function (PACF) of the series are given in Figures 1 and 2, respectively. From Figure 1, we see that the series may come from a stationary autoregressive time series process. From Figure 2, we conclude that the series comes from a first-order autoregressive process. Moreover, we check the normality of the residuals using the normal Q-Q plot introduced by Thode [21]. From Figure 3, we notice that the scatter points on the normal Q-Q plot are close to the reference line, so the residuals can essentially be regarded as normally distributed. Hence, model N1 would be suitable for these data.
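The ACF/PACF diagnostics behind Figure 2 can be reproduced without specialized packages; below is a minimal NumPy sketch of the sample ACF and the Durbin-Levinson recursion for the PACF (in practice one would typically use a statistics package such as statsmodels instead). The AR(1) series used in the usage example is simulated, not the paper's data.

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation function up to lag `nlags` (lag 0 = 1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, nlags + 1)])

def sample_pacf(x, nlags):
    """Sample partial autocorrelations via the Durbin-Levinson recursion."""
    rho = sample_acf(x, nlags)
    pacf = np.zeros(nlags + 1)
    pacf[0] = 1.0
    phi_prev = np.array([rho[1]])  # AR coefficients of the order-1 fit
    pacf[1] = rho[1]
    for k in range(2, nlags + 1):
        num = rho[k] - np.dot(phi_prev, rho[k - 1:0:-1])
        den = 1.0 - np.dot(phi_prev, rho[1:k])
        a = num / den                      # phi_{k,k}: the lag-k PACF value
        phi_prev = np.concatenate([phi_prev - a * phi_prev[::-1], [a]])
        pacf[k] = a
    return pacf

# Usage on a simulated AR(1) series with phi = 0.6
rng = np.random.default_rng(7)
e = rng.normal(size=2000)
y = np.empty(2000)
prev = 0.0
for t in range(2000):
    prev = 0.6 * prev + e[t]
    y[t] = prev
pac = sample_pacf(y, 5)  # pac[1] near 0.6; pac[k] near 0 for k >= 2
```

A PACF that cuts off after lag 1 is the classical signature of a first-order autoregressive process, which is the conclusion drawn from Figure 2.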

From the time series plot in Figure 1, the constancy of the coefficient may be suspected; therefore, we are interested in testing the hypothesis of constancy of the coefficient parameter. We carry out the test of $H_0: \sigma_B^2 = 0$ against $H_1: \sigma_B^2 > 0$. The value of the test statistic in (8) turned out to be 2.1151, with p value 0.0344, which indicates rejection of the null hypothesis at the 5% level of significance. Thus, in this case, the RCAR(1) model is much more appropriate than the AR(1) model.
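Since Theorem 1 gives asymptotic normality of the standardized statistic, the reported p value follows directly from the standard normal tail; a minimal check:

```python
from math import erf, sqrt

def normal_two_sided_p(z):
    """Two-sided p value for an asymptotically N(0, 1) test statistic."""
    phi = 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

p = normal_two_sided_p(2.1151)  # about 0.0344, matching the reported value
```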

#### 5. Conclusions

In this article, we propose a locally most powerful-type test for testing the constancy of the random coefficient parameter in an autoregressive model and derive its limiting null distribution under regularity conditions. It is clear from applications that the coefficient need not remain constant over time; it is therefore essential to conduct such a test whenever random variation is suspected. Through our simulation study and a real-data analysis, we demonstrate that our test works well and performs better than its competitor. Finally, we anticipate that our locally most powerful-type test can be extended to other types of time series models.

#### A. Establishing the Test Statistic

In this appendix, we present the detailed steps used to obtain the test statistic.

Proof. We first write the likelihood function. A Taylor series expansion around the mean of the random coefficient then gives the displayed expansion. Obviously, under the null hypothesis the distribution of the random coefficient is degenerate. Furthermore, by taking the derivative of the log-likelihood with respect to $\sigma_B^2$ at $\sigma_B^2 = 0$, we obtain the following equation. Thus, the LMP test statistic has the stated form. This completes the proof.

#### B. Proof of Theorem 1

Proof of Theorem 1. We first write down the log-likelihood function under the null hypothesis. According to the maximum likelihood principle, the maximum likelihood estimate is obtained by maximizing the log-likelihood function with respect to the parameter vector. Applying a Taylor series expansion gives the displayed expansion, where the intermediate point lies on the line segment between the MLE and the true value. Under the null hypothesis, expanding around the true value in the same way yields the analogous expression, and the remainder term is asymptotically negligible under the stated conditions. Therefore, the first term of (B.5) converges as indicated. Next, we obtain the elements of the Fisher information matrix, so that (B.5) is asymptotically equivalent to the displayed expression. We now present the following lemma, due to Billingsley [22], to establish the asymptotic normality of the test statistic.
Lemma B.1. Under assumption of , we have as ,
(i) ; (ii) , where .
Proof. By the ergodic theorem, the first conclusion holds.
Now, define the corresponding sigma field. It is easy to see that the summands are integrable, and so the partial-sum process is a martingale. We have shown uniform integrability; hence, by Theorem 1.1 of Billingsley [22], the required convergence holds as the sample size tends to infinity. Then, applying the martingale central limit theorem of Hall and Heyde [23], we obtain the asserted limit. The proof of Lemma B.1 is thus completed. □
Similarly, we can verify that the corresponding process is a martingale. By ergodicity and stationarity, we obtain the stated convergence as the sample size tends to infinity.
In the same way, for any fixed vector, we have the corresponding expansion. By the Cramér-Wold device, we obtain the joint asymptotic normality, and we can thus draw the conclusion stated in the theorem. The proof of the theorem is completed.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work is supported by National Natural Science Foundation of China (Nos. 11871028, 11731015, 11571051, and 11501241), Natural Science Foundation of Jilin Province (Nos. 20180101216JC, 20170101057JC, and 20150520053JH), Program for Changbaishan Scholars of Jilin Province (2015010), and Science and Technology Program of Jilin Educational Department during the “13th Five-Year” Plan Period (No. 2016316).

#### References

1. A. Hoque, "Finite sample analysis of the first order autoregressive model," Calcutta Statistical Association Bulletin, vol. 34, no. 1-2, pp. 51–63, 1985.
2. T. Ogawa, H. Sonoda, S. Ishiwa, and Y. Shigeta, "An application of autoregressive model to pattern discrimination of brain electrical activity mapping," Brain Topography, vol. 6, no. 1, pp. 3–11, 1992.
3. A. Subasi, A. Alkan, E. Koklukaya, and M. K. Kiymik, "Wavelet neural network classification of EEG signals by using AR model with MLE preprocessing," Neural Networks, vol. 18, pp. 985–997, 2005.
4. M. Maleki and A. R. Nematollahi, "Autoregressive models with mixture of scale mixtures of Gaussian innovations," Iranian Journal of Science and Technology, Transactions A: Science, vol. 41, no. 4, pp. 1099–1107, 2017.
5. J. Conlisk, "Stability in a random coefficient model," International Economic Review, vol. 15, no. 2, pp. 529–533, 1974.
6. D. F. Nicholls and B. G. Quinn, "The estimation of random coefficient autoregressive models I," Journal of Time Series Analysis, vol. 1, no. 1, pp. 37–46, 1980.
7. D. F. Nicholls and B. G. Quinn, "The estimation of random coefficient autoregressive models II," Journal of Time Series Analysis, vol. 2, no. 3, pp. 185–203, 1981.
8. D. F. Nicholls and B. G. Quinn, Random Coefficient Autoregressive Models: An Introduction, Lecture Notes in Statistics, Springer-Verlag, New York, NY, USA, 1982.
9. A. Brandt, "The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients," Advances in Applied Probability, vol. 18, no. 1, pp. 211–220, 1986.
10. D. Wang and S. K. Ghosh, "Bayesian estimation and unit root tests for random coefficient autoregressive models," Model Assisted Statistics and Applications, vol. 3, no. 4, pp. 281–295, 2008.
11. D. Wang, S. K. Ghosh, and S. G. Pantula, "Maximum likelihood estimation and unit root test for first order random coefficient autoregressive models," Journal of Statistical Theory and Practice, vol. 4, no. 2, pp. 261–278, 2010.
12. T. V. Ramanathan and M. B. Rajarshi, "Rank tests for testing the randomness of autoregressive coefficients," Statistics & Probability Letters, vol. 21, no. 2, pp. 115–120, 1994.
13. S. Lee, J. Ha, O. Na, and S. Na, "The cusum test for parameter change in time series models," Scandinavian Journal of Statistics, vol. 30, no. 4, pp. 781–796, 2003.
14. M. Moreno and J. Romo, "Robust unit root tests with autoregressive errors," Communications in Statistics - Theory and Methods, vol. 45, no. 20, pp. 5997–6021, 2016.
15. V. K. Rohatgi, A. K. Md. Ehsanes Saleh, R. Ahluwalia, and P. Ji, "Null distribution of locally most powerful tests for the two sample problem when the combined sample is type II censored," Communications in Statistics - Theory and Methods, vol. 19, pp. 2337–2355, 1992.
16. M. S. Chikkagoudar and B. S. Biradar, "Locally most powerful rank tests for comparison of two failure rates based on multiple type-II censored data," Communications in Statistics - Theory and Methods, vol. 41, no. 23, pp. 4315–4331, 2012.
17. A. Manik, N. Balakrishna, and T. V. Ramanathan, "Testing the constancy of the thinning parameter in a random coefficient integer autoregressive model," Statistical Papers, pp. 1–25, 2017.
18. P. J. Huber, Robust Statistics, John Wiley & Sons, New York, NY, USA, 1981.
19. T. P. Hettmansperger, Statistical Inference Based on Ranks, John Wiley & Sons, New York, NY, USA, 1984.
20. Z.-W. Zhao, D.-H. Wang, and C.-X. Peng, "Coefficient constancy test in generalized random coefficient autoregressive model," Applied Mathematics and Computation, vol. 219, no. 20, pp. 10283–10292, 2013.
21. H. C. Thode, Testing for Normality, CRC Press, New York, NY, USA, 2002.
22. P. Billingsley, "The Lindeberg-Lévy theorem for martingales," Proceedings of the American Mathematical Society, vol. 12, no. 1, pp. 788–792, 1961.
23. P. Hall and C. C. Heyde, Martingale Limit Theory and Its Application, Academic Press, New York, NY, USA, 1980.

Copyright © 2019 Li Bi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.