Abstract

We introduce an unbiased two-parameter estimator based on prior information and on the two-parameter estimator proposed by Özkale and Kaçıranlar (2007). We then discuss its properties; our results show that the new estimator dominates the ordinary least squares estimator, and we derive conditions under which it compares favorably with the two-parameter estimator and with the almost unbiased two-parameter estimator proposed by Wu and Yang (2013). Finally, we give a simulation study to illustrate the theoretical results.

1. Introduction

Consider the following linear regression model: $y = X\beta + \varepsilon$, (1) where $y$ is an $n \times 1$ vector of observations on the dependent variable, $X$ is an $n \times p$ known design matrix of rank $p$, $\beta$ is a $p \times 1$ vector of unknown regression coefficients, and $\varepsilon$ is an $n \times 1$ vector of disturbances with $E(\varepsilon) = 0$ and variance-covariance matrix $\operatorname{Cov}(\varepsilon) = \sigma^2 I_n$.

As we all know, the ordinary least squares (OLS) estimator was regarded as the best estimator for a long time. However, when multicollinearity occurs, the OLS estimator is no longer a good estimator. To treat this problem, many approaches have been presented. One method is to consider a biased estimator; see, for example, Hoerl and Kennard [1], Swindel [2], Farebrother [3], Liu [4], Sakallıoğlu and Akdeniz [5], Özkale and Kaçıranlar [6, 7], Yang and Chang [8], and Wu and Yang [9, 10]. Although these biased estimators can treat multicollinearity, they can carry substantial bias. In order to reduce the bias, Crouse et al. [11] and Sakallıoğlu and Akdeniz [5] proposed, based on the ridge estimator and the Liu estimator, the unbiased ridge estimator and the unbiased Liu estimator with prior information, respectively. These two estimators not only can deal with multicollinearity but also have no bias.

In this paper, we will introduce an unbiased two-parameter estimator with prior information and show some properties of the new estimator.

The remainder of this paper is organized as follows. In Section 2, we present the unbiased two-parameter estimator and compare it with the OLS estimator, the two-parameter estimator proposed by Özkale and Kaçıranlar [7], and the almost unbiased two-parameter estimator proposed by Wu and Yang [9] under the MMSE criterion. The estimators of the parameters $k$ and $d$ are proposed in Section 3. A simulation study illustrating the theoretical results is given in Section 4, and some concluding remarks are given in Section 5.

2. Analysis of Unbiased Two-Parameter Estimator with Prior Information

In this section, we again consider the linear regression model (1). Throughout, let $S = X'X$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p > 0$, so that the OLS estimator is $\hat{\beta} = S^{-1}X'y$ with $\operatorname{Cov}(\hat{\beta}) = \sigma^2 S^{-1}$.

Crouse et al. [11] presented the unbiased ridge estimator based on the ridge estimator and the prior information $J$, which is defined as follows: $\hat{\beta}_{UR}(k,J) = (S + kI)^{-1}(X'y + kJ)$, $k > 0$, (2) with $J$ being uncorrelated with $\hat{\beta}$ and $J \sim (\beta, (\sigma^2/k)I)$. In (2), the prior information $J$ is a random vector with the specified mean $\beta$ and covariance matrix $(\sigma^2/k)I$.

The two-parameter estimator proposed by Özkale and Kaçıranlar [7] is defined as follows: $\hat{\beta}(k,d) = (S + kI)^{-1}(X'y + kd\hat{\beta}) = (S + kI)^{-1}(S + kdI)\hat{\beta}$, (3) where $\hat{\beta} = S^{-1}X'y$ is the OLS estimator, $S = X'X$, and $k > 0$, $0 < d < 1$.
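The estimators above can be computed directly. The following sketch (Python with NumPy; the function names are ours, not from the paper) implements the OLS and two-parameter estimators. Note that the TP estimator reduces to OLS when $k = 0$ or $d = 1$.

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares: beta_hat = (X'X)^{-1} X'y
    return np.linalg.solve(X.T @ X, X.T @ y)

def two_parameter(X, y, k, d):
    # Ozkale-Kaciranlar two-parameter estimator:
    #   beta(k, d) = (S + kI)^{-1} (X'y + k d beta_hat),  S = X'X
    S = X.T @ X
    p = S.shape[0]
    beta_hat = np.linalg.solve(S, X.T @ y)
    return np.linalg.solve(S + k * np.eye(p), X.T @ y + k * d * beta_hat)
```

Solving the linear system $(S + kI)\beta = X'y + kd\hat{\beta}$ avoids forming an explicit inverse, which is the numerically preferable route.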

Based on the two-parameter estimator, Wu and Yang [9] proposed an almost unbiased two-parameter estimator: $\hat{\beta}_{\mathrm{AUTP}}(k,d) = [I - k^2(1-d)^2(S+kI)^{-2}]\hat{\beta}$. (4) Now we study the following convex estimator: $\hat{\beta}_C = C\hat{\beta} + (I - C)J$, (5) with $C$ a $p \times p$ matrix, $I$ the $p \times p$ identity matrix, and the prior information $J$ uncorrelated with $\hat{\beta}$ and such that $E(J) = \beta$ and $\operatorname{Cov}(J) = \frac{\sigma^2}{k(1-d)}S^{-1}(S+kdI)$. Then we can compute the mean squared error (MSE) of $\hat{\beta}_C$: $\mathrm{MSE}(\hat{\beta}_C) = \sigma^2\operatorname{tr}(CS^{-1}C') + \frac{\sigma^2}{k(1-d)}\operatorname{tr}[(I-C)S^{-1}(S+kdI)(I-C)']$. (6) Now we find a matrix $C$ such that $\mathrm{MSE}(\hat{\beta}_C)$ reaches a minimum. Solving $\frac{\partial\,\mathrm{MSE}(\hat{\beta}_C)}{\partial C} = 2\sigma^2 CS^{-1} - \frac{2\sigma^2}{k(1-d)}(I-C)S^{-1}(S+kdI) = 0$, (7) we obtain $C = (S+kI)^{-1}(S+kdI)$. Accordingly, we get $\hat{\beta}_C = (S+kI)^{-1}(S+kdI)\hat{\beta} + k(1-d)(S+kI)^{-1}J$.
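The minimization can be checked numerically. The sketch below assumes the prior covariance $\operatorname{Cov}(J) = \frac{\sigma^2}{k(1-d)}S^{-1}(S+kdI)$ (an assumption of this sketch) and verifies that the matrix solving the first-order condition of the MSE coincides with $(S+kI)^{-1}(S+kdI)$; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 3))
S = X.T @ X
S_inv = np.linalg.inv(S)
sigma2, k, d = 1.5, 2.0, 0.3
I = np.eye(3)

# Assumed prior covariance Cov(J) (a reconstruction; see the text):
V = sigma2 / (k * (1.0 - d)) * S_inv @ (S + k * d * I)

# First-order condition of MSE(C beta_hat + (I - C) J) over C:
#   sigma^2 C S^{-1} = (I - C) V,  i.e.  C (sigma^2 S^{-1} + V) = V
C_opt = V @ np.linalg.inv(sigma2 * S_inv + V)

# The claimed minimizer from the text:
C_claim = np.linalg.inv(S + k * I) @ (S + k * d * I)
```

Because all the matrices involved are polynomials in $S$, they commute, which is why the closed form above solves the stationarity equation exactly.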

Now we can define the following estimator: $\hat{\beta}(k,d,J) = (S+kI)^{-1}(S+kdI)\hat{\beta} + k(1-d)(S+kI)^{-1}J = (S+kI)^{-1}(X'y + kd\hat{\beta} + k(1-d)J)$. (8) Hence, for the optimal value of $C$ under the minimum MSE, the optimal convex estimator is an unbiased estimator of $\beta$.

For (8), since $E(J) = \beta$ and $E(\hat{\beta}) = \beta$, we get $E(\hat{\beta}(k,d,J)) = (S+kI)^{-1}(S + kdI + k(1-d)I)\beta = \beta$. Then $E(\hat{\beta}(k,d,J)) = \beta$ for all $k > 0$ and $0 < d < 1$.

From (8), it is easy to see that $\hat{\beta}(k,d,J)$ is an unbiased estimator of $\beta$; we call it the UTP estimator.
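A quick Monte Carlo check of the unbiasedness of (8): the sketch below (our notation; `utp` is an illustrative name) averages the estimator over repeated samples of $y$ and of the prior $J$ and recovers $\beta$. Only $E(J) = \beta$ matters for the check, so $J$ is drawn with a simple spherical covariance.

```python
import numpy as np

def utp(X, y, k, d, J):
    # Unbiased two-parameter (UTP) estimator of (8):
    #   beta(k, d, J) = (S + kI)^{-1} (X'y + k d beta_hat + k (1 - d) J)
    S = X.T @ X
    p = S.shape[0]
    beta_hat = np.linalg.solve(S, X.T @ y)
    return np.linalg.solve(S + k * np.eye(p),
                           X.T @ y + k * d * beta_hat + k * (1.0 - d) * J)
```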

In the following subsections we compare the new estimator with the OLS estimator, the TP estimator, and the AUTP estimator under the matrix mean squared error criterion. First, we give the definition of the matrix mean squared error (MMSE).

The matrix mean squared error (MMSE) is defined as follows: $\mathrm{MMSE}(\tilde{\beta}) = E[(\tilde{\beta}-\beta)(\tilde{\beta}-\beta)'] = D(\tilde{\beta}) + b(\tilde{\beta})b(\tilde{\beta})'$, where $\tilde{\beta}$ is an estimator of $\beta$ and $D(\tilde{\beta})$ and $b(\tilde{\beta})$ denote the dispersion matrix and bias vector of $\tilde{\beta}$, respectively.

The scalar mean squared error is defined as $\mathrm{MSE}(\tilde{\beta}) = \operatorname{tr}[\mathrm{MMSE}(\tilde{\beta})]$.
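In code, the MMSE matrix and the scalar MSE follow directly from a dispersion matrix and a bias vector (a minimal helper sketch; the names are ours):

```python
import numpy as np

def mmse(cov, bias):
    # Matrix mean squared error: MMSE = D(beta_tilde) + b b'
    bias = np.asarray(bias).reshape(-1, 1)
    return cov + bias @ bias.T

def mse(cov, bias):
    # Scalar MSE is the trace of the MMSE matrix
    return np.trace(mmse(cov, bias))
```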

Lemma 1. Let $\tilde{\beta}_1$ and $\tilde{\beta}_2$ be two estimators of $\beta$. Then $\tilde{\beta}_2$ is called MMSE superior to $\tilde{\beta}_1$ if $\mathrm{MMSE}(\tilde{\beta}_1) - \mathrm{MMSE}(\tilde{\beta}_2) \ge 0$, that is, if the difference is a nonnegative definite matrix.

Lemma 2 (see [12]). Let $M$ be a positive definite matrix, namely $M > 0$, and let $b$ be some vector; then $M - bb' \ge 0$ if and only if $b'M^{-1}b \le 1$.

Lemma 3 (see [13]). Suppose that $A$ is an $n \times n$ positive definite matrix and $B$ is an $n \times n$ nonnegative definite matrix; then $A \ge B$ if and only if $\lambda_{\max}(A^{-1}B) \le 1$.

2.1. Comparison of the OLS Estimator and the Unbiased Two-Parameter (UTP) Estimator

Now we compare the unbiased two-parameter (UTP) estimator with the OLS estimator in the matrix mean squared error (MMSE) sense.

Theorem 4. The unbiased two-parameter estimator always dominates the OLS estimator in the MMSE sense for $k > 0$ and $0 < d < 1$.

Proof. Since $\hat{\beta}$ and $J$ are uncorrelated, (8) gives $D(\hat{\beta}(k,d,J)) = (S+kI)^{-1}[\sigma^2(S+kdI)S^{-1}(S+kdI) + k(1-d)\sigma^2 S^{-1}(S+kdI)](S+kI)^{-1} = \sigma^2 S^{-1}(S+kdI)(S+kI)^{-1}$, so from the definition of MMSE and the unbiasedness of the estimator, we have $\mathrm{MMSE}(\hat{\beta}(k,d,J)) = \sigma^2 S^{-1}(S+kdI)(S+kI)^{-1}$ (13) and $\mathrm{MMSE}(\hat{\beta}) = \sigma^2 S^{-1}$. (14) Then from (13) and (14), we obtain $\mathrm{MMSE}(\hat{\beta}) - \mathrm{MMSE}(\hat{\beta}(k,d,J)) = \sigma^2 S^{-1}[I - (S+kdI)(S+kI)^{-1}] = \sigma^2 k(1-d)S^{-1}(S+kI)^{-1}$, which is a nonnegative definite matrix for $k > 0$ and $0 < d < 1$.
The proof of Theorem 4 is completed.
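Theorem 4 can also be checked numerically. The sketch below takes the dispersion matrices $\sigma^2 S^{-1}$ for OLS and $\sigma^2 S^{-1}(S+kdI)(S+kI)^{-1}$ for the UTP estimator as given (they are assumptions of this sketch) and verifies that their difference is nonnegative definite and equals $\sigma^2 k(1-d)S^{-1}(S+kI)^{-1}$.

```python
import numpy as np

# Numerical check of Theorem 4: MMSE(OLS) - MMSE(UTP) is nonnegative
# definite for k > 0 and 0 < d < 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
S = X.T @ X
S_inv = np.linalg.inv(S)
sigma2, k, d = 2.0, 3.0, 0.4
I = np.eye(5)

mmse_ols = sigma2 * S_inv
mmse_utp = sigma2 * S_inv @ (S + k * d * I) @ np.linalg.inv(S + k * I)
diff = mmse_ols - mmse_utp
# Symmetrize before computing eigenvalues to absorb round-off
eigvals = np.linalg.eigvalsh((diff + diff.T) / 2.0)
```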

2.2. Comparison of TP Estimator and the Unbiased Two-Parameter (UTP) Estimator

Now we state the following theorem to compare the unbiased two-parameter estimator (UTP) with the TP estimator in the sense of MMSE.

Theorem 5. The TP estimator is superior to the unbiased two-parameter estimator (UTP) in the sense of MMSE if and only if $\frac{k(1-d)}{\sigma^2}\,\beta'S(S+kdI)^{-1}\beta \le 1$. (16) When this condition fails, the bias of the TP estimator outweighs its variance advantage, and the unbiased UTP estimator is to be preferred.

Proof. From the definition of the MMSE, we have $\mathrm{MMSE}(\hat{\beta}(k,d)) = \sigma^2(S+kI)^{-1}(S+kdI)S^{-1}(S+kdI)(S+kI)^{-1} + k^2(1-d)^2(S+kI)^{-1}\beta\beta'(S+kI)^{-1}$. (17) Thus, from (13) and (17), we obtain $\mathrm{MMSE}(\hat{\beta}(k,d,J)) - \mathrm{MMSE}(\hat{\beta}(k,d)) = (S+kI)^{-1}[\sigma^2 k(1-d)(S+kdI)S^{-1} - k^2(1-d)^2\beta\beta'](S+kI)^{-1}$. Since $\sigma^2 k(1-d)(S+kdI)S^{-1} > 0$, using Lemma 2 we obtain that this difference is a nonnegative definite matrix if and only if $\frac{k(1-d)}{\sigma^2}\,\beta'S(S+kdI)^{-1}\beta \le 1$. So we can conclude that the TP estimator is superior to the UTP estimator in the sense of MMSE if and only if (16) holds.

2.3. Comparison of AUTP Estimator and the Unbiased Two-Parameter (UTP) Estimator

Now we state the following theorem to compare the unbiased two-parameter estimator (UTP) with the AUTP estimator proposed by Wu and Yang [9] in the sense of MMSE.

Theorem 6. If $\frac{k(1-d)}{\lambda_p + k} \le \frac{\sqrt{5}-1}{2}$, where $\lambda_p$ is the smallest eigenvalue of $S$, then the unbiased two-parameter estimator (UTP) is superior to the AUTP estimator in the sense of MMSE.

Proof. By (4), we have $b(\hat{\beta}_{\mathrm{AUTP}}) = -k^2(1-d)^2(S+kI)^{-2}\beta$ and $D(\hat{\beta}_{\mathrm{AUTP}}) = \sigma^2 FS^{-1}F$ with $F = I - k^2(1-d)^2(S+kI)^{-2}$. Thus, $\mathrm{MMSE}(\hat{\beta}_{\mathrm{AUTP}}) = \sigma^2 FS^{-1}F + k^4(1-d)^4(S+kI)^{-2}\beta\beta'(S+kI)^{-2}$. Now we consider the following difference: $\mathrm{MMSE}(\hat{\beta}_{\mathrm{AUTP}}) - \mathrm{MMSE}(\hat{\beta}(k,d,J)) = \sigma^2 S^{-1}(F^2 - T) + k^4(1-d)^4(S+kI)^{-2}\beta\beta'(S+kI)^{-2}$, where $T = (S+kI)^{-1}(S+kdI)$. Writing $a_i = k(1-d)/(\lambda_i + k) \in (0,1)$, the $i$th eigenvalue of $F^2 - T$ is $(1-a_i^2)^2 - (1-a_i) = a_i(1-a_i)(1-a_i-a_i^2)$, which is nonnegative whenever $a_i \le (\sqrt{5}-1)/2$. Since $a_i$ is largest for the smallest eigenvalue $\lambda_p$, the condition of the theorem yields $S^{-1}(F^2 - T) \ge 0$, and adding the nonnegative definite bias term preserves nonnegative definiteness. Then, by Lemma 1, the UTP estimator is better than the AUTP estimator.

3. Estimation of the Parameters $k$ and $d$

In this section, we discuss how to estimate the biasing parameters $k$ and $d$.

3.1. The Estimation of the Biasing Parameter $k$

In the definition of the new estimator, the OLS estimator $\hat{\beta}$ is independent of $J$. Then $E(\hat{\beta} - J) = 0$ and $E[(\hat{\beta}-J)'(\hat{\beta}-J)] = \sigma^2\sum_{i=1}^{p}\frac{1}{\lambda_i} + \frac{\sigma^2}{k(1-d)}\sum_{i=1}^{p}\frac{\lambda_i + kd}{\lambda_i}$, (26) which may be rearranged as $E[(1-d)(\hat{\beta}-J)'(\hat{\beta}-J) - \sigma^2\sum_{i=1}^{p}1/\lambda_i] = p\sigma^2/k$. From (26), if $\sigma^2$ is known, for a fixed $d$, we can get a moment-type estimator of $k$ as follows: $\hat{k} = \frac{p\sigma^2}{(1-d)(\hat{\beta}-J)'(\hat{\beta}-J) - \sigma^2\sum_{i=1}^{p}1/\lambda_i}$. (27) When $\sigma^2$ is unknown, we use the following to estimate $\sigma^2$: $\hat{\sigma}^2 = \frac{(y - X\hat{\beta})'(y - X\hat{\beta})}{n - p}$, (28) and then an estimate of $k$ is $\hat{k} = \frac{p\hat{\sigma}^2}{(1-d)(\hat{\beta}-J)'(\hat{\beta}-J) - \hat{\sigma}^2\sum_{i=1}^{p}1/\lambda_i}$, (29) where $\lambda_i$ is the $i$th eigenvalue of $S$.

Note that the estimators of $k$ in (27) and (29) may be negative. When this happens, one may instead use $\hat{k} = \frac{p\sigma^2}{(1-d)(\hat{\beta}-J)'(\hat{\beta}-J)}$. Summing up these results, $\hat{k}$ may be presented as follows.

Case I. Assuming $\sigma^2$ is known, (i) if $(1-d)(\hat{\beta}-J)'(\hat{\beta}-J) - \sigma^2\sum_{i=1}^{p}1/\lambda_i > 0$, then $\hat{k}$ is given by (27); (ii) otherwise $\hat{k} = \frac{p\sigma^2}{(1-d)(\hat{\beta}-J)'(\hat{\beta}-J)}$.

Case II. Assuming $\sigma^2$ is unknown, (i) if $(1-d)(\hat{\beta}-J)'(\hat{\beta}-J) - \hat{\sigma}^2\sum_{i=1}^{p}1/\lambda_i > 0$, then $\hat{k}$ is given by (29); (ii) otherwise $\hat{k} = \frac{p\hat{\sigma}^2}{(1-d)(\hat{\beta}-J)'(\hat{\beta}-J)}$, where $\hat{\sigma}^2 = (y - X\hat{\beta})'(y - X\hat{\beta})/(n-p)$ is an unbiased estimator of $\sigma^2$.
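The two cases above can be implemented as follows (a sketch: the closed-form denominator follows the moment identity (26) as reconstructed here, and the function name is ours):

```python
import numpy as np

def estimate_k(X, y, J, d, sigma2=None):
    # Moment-type estimator of k for fixed d, with the truncation rule
    # applied when the denominator is not positive (an assumption of
    # this sketch).
    n, p = X.shape
    S = X.T @ X
    beta_hat = np.linalg.solve(S, X.T @ y)
    if sigma2 is None:
        resid = y - X @ beta_hat
        sigma2 = resid @ resid / (n - p)      # unbiased estimator of sigma^2
    lam = np.linalg.eigvalsh(S)
    Q = (beta_hat - J) @ (beta_hat - J)       # ||beta_hat - J||^2
    denom = (1.0 - d) * Q - sigma2 * np.sum(1.0 / lam)
    if denom <= 0.0:                          # truncate a negative estimate
        denom = (1.0 - d) * Q
    return p * sigma2 / denom
```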

3.2. The Estimation of the Biasing Parameter $d$

From (26), if $\sigma^2$ is known, for a fixed $k$, an estimate of $d$ is defined as follows: $\hat{d} = 1 - \frac{\sigma^2(\sum_{i=1}^{p}1/\lambda_i + p/k)}{(\hat{\beta}-J)'(\hat{\beta}-J)}$. (34) When $\sigma^2$ is unknown, similarly an estimate of $d$ is $\hat{d} = 1 - \frac{\hat{\sigma}^2(\sum_{i=1}^{p}1/\lambda_i + p/k)}{(\hat{\beta}-J)'(\hat{\beta}-J)}$. (35) Note that the estimators of $d$ in (34) and (35) may be negative. When this happens, one might set $\hat{d} = 0$. However, by Theorem 4, for any $0 < d < 1$ the unbiased two-parameter estimator has smaller MMSE than the OLS estimator $\hat{\beta}$. Thus, define $\hat{d} = \max\{0, \hat{d}\}$. With the above discussion, $\hat{d}$ may be presented as follows.

Case I. Assuming $\sigma^2$ is known, (i) if $1 - \sigma^2(\sum_{i=1}^{p}1/\lambda_i + p/k)/[(\hat{\beta}-J)'(\hat{\beta}-J)] > 0$, then $\hat{d}$ is given by (34); (ii) otherwise $\hat{d} = 0$.

Case II. Assuming $\sigma^2$ is unknown, (i) if $1 - \hat{\sigma}^2(\sum_{i=1}^{p}1/\lambda_i + p/k)/[(\hat{\beta}-J)'(\hat{\beta}-J)] > 0$, then $\hat{d}$ is given by (35); (ii) otherwise $\hat{d} = 0$,

where $\hat{\sigma}^2$ is an unbiased estimator of $\sigma^2$. In applications, other estimates of $\sigma^2$ may also be used.
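The rule for $\hat{d}$ admits an analogous sketch (again, the closed form is a reconstruction from the moment identity (26), and the function name is ours):

```python
import numpy as np

def estimate_d(X, y, J, k, sigma2=None):
    # Moment-type estimator of d for fixed k, truncated at 0 so that the
    # returned value stays in [0, 1) (an assumption of this sketch).
    n, p = X.shape
    S = X.T @ X
    beta_hat = np.linalg.solve(S, X.T @ y)
    if sigma2 is None:
        resid = y - X @ beta_hat
        sigma2 = resid @ resid / (n - p)
    lam = np.linalg.eigvalsh(S)
    Q = (beta_hat - J) @ (beta_hat - J)
    d_hat = 1.0 - sigma2 * (np.sum(1.0 / lam) + p / k) / Q
    return max(0.0, d_hat)                    # d_hat < 1 holds automatically
```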

It is worth pointing out that the proposed $\hat{k}$ and $\hat{d}$ yield an unbiased two-parameter estimator of $\beta$, while the two-parameter estimator itself is biased.

4. A Simulation Study

In this section, we give a simulation study to illustrate the theoretical results. Following McDonald and Galarneau [14], the explanatory variables are generated using the following device: $x_{ij} = (1-\gamma^2)^{1/2} z_{ij} + \gamma z_{i,p+1}$, $i = 1, \dots, n$, $j = 1, \dots, p$, where the $z_{ij}$ are independent standard normal pseudorandom numbers and $\gamma$ is specified so that the correlation between any two explanatory variables is $\gamma^2$.

Observations on the dependent variable are then generated by $y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i$, $\varepsilon_i \sim N(0, \sigma^2)$.

In this paper we consider several sample sizes, correlation levels $\gamma$, and error variances $\sigma^2$. The simulation results are given in Tables 1, 2, 3, 4, 5, 6, 7, and 8. From Tables 1–8, we can conclude that when multicollinearity is severe, our new estimator performs well; this holds across the considered values of $\sigma^2$, $k$, and $d$; and in all cases, our new estimator is better than the OLS estimator. So we can see that our new estimator not only is unbiased but also can overcome multicollinearity. Our estimator is meaningful in practice.
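A minimal version of such a simulation can be sketched as follows. The design values, the coefficient vector, and the distribution of the prior $J$ (drawn centered at the true $\beta$) are illustrative assumptions, and the UTP form of (8) is taken as given.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, gamma, sigma = 50, 4, 0.99, 1.0   # illustrative design values

# McDonald-Galarneau device: correlation between regressors is gamma^2
Z = rng.normal(size=(n, p + 1))
X = np.sqrt(1.0 - gamma**2) * Z[:, :p] + gamma * Z[:, [p]]

beta = np.ones(p) / np.sqrt(p)          # normalized coefficients (assumption)
S = X.T @ X
I = np.eye(p)
k, d = 2.0, 0.5                         # illustrative biasing parameters

mse_ols, mse_utp = 0.0, 0.0
reps = 500
for _ in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    beta_hat = np.linalg.solve(S, X.T @ y)
    J = beta + 0.1 * rng.normal(size=p)  # prior information centered at beta
    beta_utp = np.linalg.solve(S + k * I,
                               X.T @ y + k * d * beta_hat + k * (1.0 - d) * J)
    mse_ols += np.sum((beta_hat - beta) ** 2)
    mse_utp += np.sum((beta_utp - beta) ** 2)
mse_ols /= reps
mse_utp /= reps
```

Under severe multicollinearity ($\gamma = 0.99$) and an informative prior, the Monte Carlo MSE of the UTP estimator comes out clearly below that of OLS, in line with the tables.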

5. Conclusion

In this paper, we introduce an unbiased two-parameter estimator with prior information. We show the superiority of the new estimator over the OLS estimator and give conditions governing its performance relative to the TP estimator and the AUTP estimator in the MMSE sense. Furthermore, the estimators of the biasing parameters are also discussed.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation Project of CQ CSTC (Grant no. cstc2014jcyjA0999), the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grant no. R2013SC12), and the National Natural Science Foundation of China (Grant no. 11201505).