Research Article | Open Access

Lei Song, Hongchang Hu, Xiaosheng Cheng, "Hypothesis Testing in Generalized Linear Models with Functional Coefficient Autoregressive Processes", *Mathematical Problems in Engineering*, vol. 2012, Article ID 862398, 19 pages, 2012. https://doi.org/10.1155/2012/862398

# Hypothesis Testing in Generalized Linear Models with Functional Coefficient Autoregressive Processes

**Academic Editor:** Ming Li

#### Abstract

This paper studies hypothesis testing in generalized linear models with functional coefficient autoregressive (FCA) processes. Quasi-maximum likelihood (QML) estimators are given, which extend the estimators of Hu (2010) and Maller (2003). Asymptotic chi-square distributions of pseudo-likelihood ratio (LR) statistics are investigated.

#### 1. Introduction

Consider the following generalized linear model: where is a -dimensional unknown parameter, are functional coefficient autoregressive processes given by where are independent and identically distributed (i.i.d.) random errors with zero mean and finite variance , is a one-dimensional unknown parameter, and is a real-valued function defined on a compact set that contains the true value as an interior point and is a subset of . The values of and are unknown, and is a known, continuously differentiable function.

Model (1.1) includes many special cases, such as an ordinary regression model (when ; see [1–7]), an ordinary generalized regression model (when ; see [8–13]), a linear regression model with constant coefficient autoregressive processes (when , ; see [14–16]), time-dependent and functional coefficient autoregressive processes (when ; see [17]), constant coefficient autoregressive processes (when , ; see [18–20]), time-dependent or time-varying autoregressive processes (when ; see [21–23]), and a linear regression model with functional coefficient autoregressive processes (when ; see [24]). Many authors have discussed these special cases of models (1.1) and (1.2) (see [1–24]), but model (1.1) with errors (1.2) has received little attention; it is the subject of this paper. The organization of the paper is as follows. In Section 2, the estimators are obtained by the quasi-maximum likelihood method. In Section 3, the main results are stated. The proofs of the main results are presented in Section 4, with conclusions and some open problems in Section 5.
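To make the model concrete, the following is a minimal simulation sketch. Since the displayed equations are not reproduced here, the specific forms used below — a regression function `g = exp`, an FCA coefficient function `f(u, theta) = theta * sin(pi * u)` evaluated at `u = t/n`, and all parameter values — are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glm_fca(n, beta, theta, sigma=1.0):
    """Simulate a generalized linear model with FCA errors, assuming
    the concrete (illustrative) form
        y_t = g(x_t' beta) + eps_t,
        eps_t = f(t/n, theta) * eps_{t-1} + e_t,
    where the e_t are i.i.d. with mean zero and variance sigma^2."""
    d = len(beta)
    g = np.exp                                  # assumed regression function
    f = lambda u, th: th * np.sin(np.pi * u)    # assumed coefficient function, |f| < 1
    x = rng.normal(size=(n, d))
    e = rng.normal(scale=sigma, size=n)         # i.i.d. innovations
    eps = np.empty(n)
    eps[0] = e[0]
    for t in range(1, n):
        eps[t] = f(t / n, theta) * eps[t - 1] + e[t]
    y = g(x @ beta) + eps
    return x, y

x, y = simulate_glm_fca(500, beta=np.array([0.5, -0.3]), theta=0.4)
print(x.shape, y.shape)
```

With `theta = 0` the errors reduce to i.i.d. noise and the sketch collapses to an ordinary generalized regression model, mirroring the special cases listed above.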

#### 2. The Quasi-Maximum Likelihood Estimate

Write the “true” model as where . Define , and by (2.2) we have . Thus is measurable with respect to the σ-field generated by , and

Assume at first that the are i.i.d. ; then the log-likelihood of conditional on is given by At this stage we drop the normality assumption but still maximize (2.5) to obtain QML estimators, denoted by . The estimating equations for the unknown parameters in (2.5) may be written as Thus satisfy the following estimating equations: where
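The QML procedure above can be sketched numerically: form the residuals from the regression part, form the innovations from the FCA part, and maximize the conditional Gaussian log-likelihood (equivalently, minimize its negative) with the variance profiled out. The concrete forms of `g` and `f` and all starting values below are illustrative assumptions; the optimizer is a generic simplex method rather than the paper's estimating equations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Illustrative assumed forms (not the paper's exact specification):
g = np.exp
f = lambda u, th: th * np.sin(np.pi * u)

def neg_quasi_loglik(params, x, y):
    """Minus the conditional Gaussian quasi-log-likelihood, with the
    innovation variance profiled out: (n/2) * log(mean squared innovation)."""
    n, d = x.shape
    beta, theta = params[:d], params[d]
    eps = y - g(x @ beta)                      # regression residuals eps_t
    t = np.arange(1, n)
    e = eps[1:] - f(t / n, theta) * eps[:-1]   # FCA innovations e_t
    return 0.5 * n * np.log(np.mean(e ** 2))

def qml_fit(x, y):
    """QML estimators of (beta, theta) by direct numerical maximization."""
    d = x.shape[1]
    res = minimize(neg_quasi_loglik, x0=np.zeros(d + 1), args=(x, y),
                   method="Nelder-Mead")
    return res.x[:d], res.x[d], res.fun

# simulate data from the assumed model, then fit
n, beta0, theta0 = 800, np.array([0.5, -0.3]), 0.4
xs = rng.normal(size=(n, 2))
eps = np.zeros(n); e = rng.normal(size=n); eps[0] = e[0]
for t in range(1, n):
    eps[t] = f(t / n, theta0) * eps[t - 1] + e[t]
ys = g(xs @ beta0) + eps
beta_hat, theta_hat, _ = qml_fit(xs, ys)
print(beta_hat, theta_hat)
```

Setting the gradient of this objective to zero recovers estimating equations of the same shape as (2.6)–(2.7), which is why direct minimization is a reasonable stand-in for solving them.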

*Remark 2.1.* If , then the above equations become the same as Hu’s (see [24]). If , , then they become the same as Maller’s (see [15]). Thus our estimators extend the QML estimators of Hu [24] and Maller [15].

For ease of exposition, we introduce the following notation, which will be used later in the paper. Let the vector . Define
By (2.7), we have
where the * indicates that the elements are filled in by symmetry,
Because and are mutually independent, we have
where
By (2.7) and (2.8), we have

#### 3. Statement of Main Results

In this section, pseudo-likelihood ratio (LR) statistics for various hypothesis tests of interest are derived. We consider the following hypotheses: When the parameter space is restricted by a hypothesis , let be the corresponding QML estimators of , and let be minus twice the log-likelihood evaluated at the fitted parameters. Also let be the “deviance” statistic for testing against . From (2.5) and (2.8), and similarly
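The deviance test described above can be sketched as follows: fit the model unrestricted and under the null, take the difference of the two values of minus twice the log-likelihood, and compare it with a chi-square quantile whose degrees of freedom equal the number of restrictions. The forms of `g` and `f`, the null value of the parameter, and the data-generating settings are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Illustrative assumed forms (not the paper's exact specification):
g = np.exp
f = lambda u, th: th * np.sin(np.pi * u)

def minus2_loglik(params, x, y, theta_fixed=None):
    """-2 * conditional quasi-log-likelihood (up to an additive constant),
    variance profiled out. If theta_fixed is given, the fit is restricted
    by the null hypothesis H0: theta = theta_fixed."""
    n, d = x.shape
    beta = params[:d]
    theta = theta_fixed if theta_fixed is not None else params[d]
    eps = y - g(x @ beta)
    t = np.arange(1, n)
    e = eps[1:] - f(t / n, theta) * eps[:-1]
    return n * np.log(np.mean(e ** 2))

def deviance_test(x, y, theta0, alpha=0.05):
    """Deviance d_n = L(restricted) - L(unrestricted), compared with a
    chi-square critical value (one restriction on theta)."""
    d = x.shape[1]
    full = minimize(minus2_loglik, np.zeros(d + 1), args=(x, y),
                    method="Nelder-Mead").fun
    restr = minimize(minus2_loglik, np.zeros(d), args=(x, y, theta0),
                     method="Nelder-Mead").fun
    d_n = restr - full
    crit = chi2.ppf(1 - alpha, df=1)
    return d_n, d_n > crit

# data generated under the null H0: theta = 0.4
n, beta_true, theta_true = 600, np.array([0.5, -0.3]), 0.4
xs = rng.normal(size=(n, 2))
eps = np.zeros(n); e = rng.normal(size=n); eps[0] = e[0]
for t in range(1, n):
    eps[t] = f(t / n, theta_true) * eps[t - 1] + e[t]
ys = g(xs @ beta_true) + eps
d_n, reject = deviance_test(xs, ys, theta0=0.4)
print(d_n, reject)
```

Under the null, Theorem 3.1 says statistics of this type are asymptotically chi-square distributed, which is what justifies the `chi2.ppf` critical value in the sketch.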

In order to obtain our results, we impose the following sufficient conditions.
(A1) is positive definite for sufficiently large , and where and denotes the maximum in absolute value of the eigenvalues of a symmetric matrix.
(A2) There is a constant such that
(A3) and exist and are bounded, is twice continuously differentiable, , .

Theorem 3.1. *Assume (2.1), (2.2), and (A1)–(A3).*

*(1) Suppose and is a continuous function, and holds. Then*

*(2) Suppose , and holds. Then*

*(3) Suppose , and holds. Then*

#### 4. Proof of Theorem

To prove Theorem 3.1, we first introduce the following lemmas.

Lemma 4.1. *Suppose that (A1)–(A3) hold. Then, for all ,*
*where*

*Proof.* The proof is similar to that of Lemma 4.1 in Hu [24] and is therefore omitted.

Lemma 4.2. *Suppose that (A1)–(A3) hold. Then , and*
*where lie on the line segment between and .*

*Proof.* Arguing as in the proof of Theorem 3.1 in Hu [24], we easily obtain , and . Since (4.4) follows similarly, its proof is omitted.

*Proof of Theorem 3.1.* Note that and are nonsingular. By Taylor’s expansion, we have
where for some . Since , also . By (4.1), we have
Thus is a symmetric matrix with . By (4.5) and (4.6), we have
Let denote and , respectively. By (4.7), we have
Note that
By (2.15), (4.2) and (4.8), we get
Note that
By (2.1), (2.11) and (4.12), we have
By (4.13) and (2.10), we have
By (4.13), we have
By (4.15), we have
By (4.14) and (4.16), we have
By (4.15), we have
Thus, by (4.17) and (4.18), we have
Since , we have
Thus, by (4.17), (4.20), and the mean value theorem, we have
where for some .

It is easy to see that
By Lemma 4.2 and (4.22), we have
Hence, by (4.11), we have
By (4.24), we have
By Lemma 4.2, we have
Now, we prove (3.8). By (4.12), we have
Note that
From (4.28), we have
By (2.8) and (2.10), we have
From (4.30), we obtain that
By (4.29), (4.31) and Lemma 4.2, we have
By (3.3)–(3.5), we have
Under , by (4.26), (4.32), and (4.33), we have
It is easily proven that
Thus, by (4.33)–(4.35), we finish the proof of (3.8).

Next we prove (3.9). Under , , and , we have
Hence
By (2.8), (2.10), we have
From (4.38), we obtain
Thus, by (4.37), (4.39) and Lemma 4.2, we have
By (3.3)–(3.5), we have
Under , by (4.26), (4.40), and (4.41), we obtain
Thus, by (4.35) and (4.42), (3.9) holds.

Finally, we prove (3.10). Under , we have
Thus
By (2.8) and (2.10), we have
From (4.45), we obtain
By (4.44), (4.46) and Lemma 4.2, we have