Journal of Applied Mathematics

Volume 2014, Article ID 314875, 10 pages

http://dx.doi.org/10.1155/2014/314875

## Two Kinds of Weighted Biased Estimators in Stochastic Restricted Regression Model

^{1}College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China

^{2}Chongqing College of Electronic Engineering, Chongqing 401331, China

Received 20 January 2014; Accepted 16 May 2014; Published 2 June 2014

Academic Editor: Xinkai Chen

Copyright © 2014 Chaolin Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider two kinds of weighted mixed almost unbiased estimators in a linear stochastic restricted regression model when the prior information and the sample information are not equally important. The superiorities of the two new estimators over their competitors are discussed under the quadratic bias and variance matrix criteria. A real data example and a Monte Carlo study are given to illustrate the theoretical results.

#### 1. Introduction

In a linear regression, the ordinary least squares estimator (LS) is unbiased and has minimum variance among all linear unbiased estimators, and for a long time it was treated as the best estimator. However, the LS can be highly variable when the notorious multicollinearity is present. Hence, biased alternatives to the LS have been recommended in order to obtain a substantial reduction in variance, such as the ordinary ridge regression estimator (ORE) proposed by Hoerl and Kennard [1], the ordinary Liu regression estimator (OLE) proposed by Liu [2], and many modified methods. On the other hand, to reduce the bias of a biased estimator, Kadiyala [3] introduced a class of almost unbiased shrinkage estimators, Singh et al. [4] introduced the almost unbiased generalized ridge estimator by the jackknife procedure, and Akdeniz and Kaçiranlar [5] studied the almost unbiased generalized Liu estimator. Akdeniz and Erol [6] studied bias-corrected versions of the ORE and OLE and discussed the almost unbiased ridge estimator (AURE) and the almost unbiased Liu estimator (AULE). Şiray et al. [7] discussed the *r*-*k* class estimator, and Wu [8] developed the principal component Liu-type estimator in the linear regression model.

An alternative technique to combat the multicollinearity problem is to use prior information about the unknown parameters in addition to the sample information. When stochastic linear restrictions on the unknown parameter vector are assumed to hold, Theil [9] proposed the ordinary mixed estimator (OME). Hubert and Wijekoon [10] proposed the stochastic restricted Liu estimator (SRLE), and Li and Yang [11] introduced the stochastic restricted ridge estimator (SRRE) by grafting the ORE into the mixed estimation procedure. Wu [12] discussed the stochastic restricted *r*-*k* class estimator and the stochastic restricted *r*-*d* class estimator in the linear regression model. When the prior information and the sample information are not equally important, Schaffrin and Toutenburg [13] introduced the method of weighted mixed regression and developed the weighted mixed estimator (WME). Li and Yang [14] grafted the ORE into the weighted mixed estimation procedure and proposed the weighted mixed ridge estimator.

In this paper, when additional stochastic linear restrictions are supposed to hold, we propose the stochastic weighted mixed almost unbiased ridge estimator (SWMAURE) by combining the WME and the AURE, and the stochastic weighted mixed almost unbiased Liu estimator (SWMAULE) by combining the WME and the AULE in a linear regression model. We discuss the performance of the new estimators over other competitive estimators with respect to the quadratic bias (QB) and variance matrix criteria. The SWMAURE and SWMAULE are proved to have smaller quadratic biases than the SRRE and SRLE, respectively, and their variance matrices are competitive. The rest of the paper is organized as follows: in Section 2 we describe the statistical model and propose the two new estimators. Section 3 compares the new estimators with their competitors under the quadratic bias criterion. In Section 4, the superiorities of the proposed estimators over the related estimators are compared in terms of the variance matrix. In Section 5, a real data example and a Monte Carlo simulation are studied to justify the superiorities of the new estimators. Some concluding remarks are given in Section 6.

#### 2. The Proposed Estimator

Consider the linear regression model
$$y = X\beta + \varepsilon, \tag{1}$$
where $y$ is an $n \times 1$ response vector, $X$ is a known $n \times p$ matrix of full column rank, $\beta$ is a $p \times 1$ vector of unknown parameters, and $\varepsilon$ is an $n \times 1$ vector of errors with expectation $E(\varepsilon) = 0$ and covariance matrix $\operatorname{Cov}(\varepsilon) = \sigma^{2}I_{n}$, where $I_{n}$ is the identity matrix of order $n$. The ordinary least squares estimator (LS) of $\beta$ is defined as
$$\hat{\beta} = S^{-1}X'y, \tag{2}$$
where $S = X'X$. In the presence of multicollinearity, the LS can be highly variable. Two well-known biased estimators, the ordinary ridge estimator (ORE) proposed by Hoerl and Kennard [1] and the ordinary Liu estimator (OLE) studied by Liu [2], are defined, respectively, as
$$\hat{\beta}(k) = (S + kI)^{-1}X'y, \qquad \hat{\beta}(d) = (S + I)^{-1}(X'y + d\hat{\beta}), \tag{3}$$
where $k > 0$ and $0 < d < 1$ are scalar biasing constants.
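As a concrete illustration, the three estimators above can be computed directly from the normal-equations quantities $S = X'X$ and $X'y$. The following sketch uses NumPy on simulated collinear data; the paper's own computations were done in R, and all sizes and parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated design with two nearly dependent columns (illustrative values only).
n, p = 50, 4
Z = rng.normal(size=(n, p))
Z[:, 3] = Z[:, 0] + 0.01 * rng.normal(size=n)   # induces multicollinearity
beta = np.array([1.0, 2.0, -1.0, 0.5])
y = Z @ beta + rng.normal(scale=0.5, size=n)

S = Z.T @ Z          # S = X'X
I = np.eye(p)

# Ordinary least squares (LS): S^{-1} X'y
beta_ls = np.linalg.solve(S, Z.T @ y)

# Ordinary ridge estimator (ORE), k > 0: (S + kI)^{-1} X'y
k = 0.1
beta_ore = np.linalg.solve(S + k * I, Z.T @ y)

# Ordinary Liu estimator (OLE), 0 < d < 1: (S + I)^{-1} (X'y + d * beta_ls)
d = 0.5
beta_ole = np.linalg.solve(S + I, Z.T @ y + d * beta_ls)
```

Because the ridge filter contracts every eigendirection of $S$, the ORE always has a smaller Euclidean norm than the LS for $k > 0$.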

Kadiyala [3] introduced an almost unbiased shrinkage estimator which can be more efficient than the LS estimator and has smaller bias than the corresponding biased estimator. Akdeniz and Erol [6] discussed the almost unbiased ridge estimator (AURE) and the almost unbiased Liu estimator (AULE), which are given, respectively, as
$$\hat{\beta}_{\mathrm{AURE}}(k) = \left[I - k^{2}(S + kI)^{-2}\right]\hat{\beta}, \qquad \hat{\beta}_{\mathrm{AULE}}(d) = \left[I - (1-d)^{2}(S + I)^{-2}\right]\hat{\beta}. \tag{5}$$
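The AURE and AULE amount to applying the correction filters $I - k^{2}(S+kI)^{-2}$ and $I - (1-d)^{2}(S+I)^{-2}$ to the LS estimator. A minimal NumPy sketch, again on simulated data with illustrative values of $k$ and $d$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)

S = X.T @ X
I = np.eye(p)
beta_ls = np.linalg.solve(S, X.T @ y)

k, d = 0.2, 0.6

# AURE: [I - k^2 (S + kI)^{-2}] beta_ls
A = np.linalg.inv(S + k * I)
beta_aure = (I - k**2 * A @ A) @ beta_ls

# AULE: [I - (1-d)^2 (S + I)^{-2}] beta_ls
B = np.linalg.inv(S + I)
beta_aule = (I - (1 - d)**2 * B @ B) @ beta_ls
```

As $k \to 0$ (respectively $d \to 1$) the correction filter tends to the identity, so both estimators reduce to the LS.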

In addition to the model (1), let us give some prior information about $\beta$ in the form of a set of independent stochastic linear restrictions as follows:
$$r = R\beta + e, \tag{6}$$
where $R$ is a known $j \times p$ matrix of rank $j$, $e$ is a $j \times 1$ vector of disturbances with expectation $0$ and covariance matrix $\sigma^{2}W$, $W$ is supposed to be a known and positive definite matrix, and the vector $r$ can be interpreted as a random variable with expectation $E(r) = R\beta$. Furthermore, it is also assumed that the random vector $e$ is stochastically independent of $\varepsilon$.

For the restricted model specified by (1) and (6), the ordinary mixed estimator (OME) introduced by Theil [9] is defined as
$$\hat{\beta}_{\mathrm{OME}} = \left(S + R'W^{-1}R\right)^{-1}\left(X'y + R'W^{-1}r\right). \tag{7}$$

Hubert and Wijekoon [10] proposed the stochastic restricted Liu estimator (SRLE) by combining the ordinary mixed estimator and the Liu estimator, which is defined as
$$\hat{\beta}_{\mathrm{SRLE}}(d) = (S + I)^{-1}(S + dI)\,\hat{\beta}_{\mathrm{OME}}. \tag{8}$$

And Li and Yang [11] introduced the stochastic restricted ridge estimator (SRRE) by grafting the ORE into the mixed estimation procedure, which is defined as
$$\hat{\beta}_{\mathrm{SRRE}}(k) = (S + kI)^{-1}S\,\hat{\beta}_{\mathrm{OME}}. \tag{9}$$

In practice, the prior information and the sample information may not be equally important, which motivated the weighted mixed estimator (WME) [13]:
$$\hat{\beta}_{\mathrm{WME}}(\omega) = \left(S + \omega R'W^{-1}R\right)^{-1}\left(X'y + \omega R'W^{-1}r\right), \tag{10}$$
where $0 \le \omega \le 1$ is a nonstochastic and nonnegative scalar weight.
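The WME can be sketched as a single function of the weight $\omega$: setting $\omega = 1$ recovers the OME, while $\omega = 0$ discards the prior information and recovers the LS. The data, restriction matrix, and weight below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, j = 60, 3, 2
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, -1.0])
y = X @ beta + rng.normal(size=n)

# Stochastic restrictions r = R beta + e with Cov(e) = sigma^2 W (illustrative).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0]])
W = np.eye(j)
r = R @ beta + rng.normal(scale=0.1, size=j)

S = X.T @ X
T = R.T @ np.linalg.inv(W) @ R

def wme(omega):
    """Weighted mixed estimator: (S + w R'W^{-1}R)^{-1} (X'y + w R'W^{-1}r)."""
    return np.linalg.solve(S + omega * T,
                           X.T @ y + omega * R.T @ np.linalg.inv(W) @ r)

beta_wme = wme(0.5)   # prior information weighted half as much as the sample
beta_ome = wme(1.0)   # omega = 1 recovers the ordinary mixed estimator (OME)
beta_ls  = wme(0.0)   # omega = 0 ignores the prior and recovers the LS
```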

Now, we are ready to introduce two almost unbiased estimators in the stochastic restricted linear regression model. Combining the WME with the AURE and the AULE, respectively, we propose the stochastic weighted mixed almost unbiased ridge estimator (SWMAURE) and the stochastic weighted mixed almost unbiased Liu estimator (SWMAULE) as follows:
$$\hat{\beta}_{\mathrm{SWMAURE}}(k) = \left[I - k^{2}(S + kI)^{-2}\right]\hat{\beta}_{\mathrm{WME}}, \qquad \hat{\beta}_{\mathrm{SWMAULE}}(d) = \left[I - (1-d)^{2}(S + I)^{-2}\right]\hat{\beta}_{\mathrm{WME}}, \tag{11}$$
where $\hat{\beta}_{\mathrm{WME}}$ is the weighted mixed estimator given in (10).
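Under the definitions above, the two proposed estimators amount to applying the AURE and AULE correction filters to the WME instead of the LS. A hedged NumPy sketch (simulated data; the restriction and parameter values are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, -1.0])
y = X @ beta + rng.normal(size=n)
R = np.array([[1.0, 1.0, 0.0]])          # one stochastic restriction
W = np.eye(1)
r = R @ beta + rng.normal(scale=0.1, size=1)

S, I = X.T @ X, np.eye(p)
T = R.T @ np.linalg.inv(W) @ R

omega, k, d = 0.5, 0.2, 0.6
beta_wme = np.linalg.solve(S + omega * T,
                           X.T @ y + omega * R.T @ np.linalg.inv(W) @ r)

# SWMAURE: apply the AURE filter I - k^2 (S + kI)^{-2} to the WME.
Ak = np.linalg.inv(S + k * I)
beta_swmaure = (I - k**2 * Ak @ Ak) @ beta_wme

# SWMAULE: apply the AULE filter I - (1-d)^2 (S + I)^{-2} to the WME.
Bd = np.linalg.inv(S + I)
beta_swmaule = (I - (1 - d)**2 * Bd @ Bd) @ beta_wme
```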

In fact, note that
$$\hat{\beta}_{\mathrm{WME}} = (\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{y}, \quad \text{with } \tilde{y} = \begin{pmatrix} y \\ \sqrt{\omega}\,W^{-1/2}r \end{pmatrix}, \quad \tilde{X} = \begin{pmatrix} X \\ \sqrt{\omega}\,W^{-1/2}R \end{pmatrix},$$
since $\tilde{X}'\tilde{X} = S + \omega R'W^{-1}R$ and $\tilde{X}'\tilde{y} = X'y + \omega R'W^{-1}r$. Then the WME (10) can be rewritten as the LS estimator of this augmented model.

Therefore, we can also derive the SWMAURE and the SWMAULE as the LS estimators of $\beta$ in the framework of the following augmented models: where and is a random vector of disturbances with , , and . Then the LS of $\beta$ from the augmented models (14) is

It can be seen from the definitions of the SWMAURE and the SWMAULE that they are two general estimators which include the WME, AURE, AULE, OME, and LS as special cases: letting $k \to 0$ (respectively $d \to 1$) recovers the WME; letting $\omega = 0$ recovers the AURE (respectively the AULE); letting $\omega = 0$ together with $k \to 0$ (respectively $d \to 1$) recovers the LS; and letting $\omega = 1$ together with $k \to 0$ (respectively $d \to 1$) recovers the OME.

Now, by some straightforward calculations, we can compute the bias vectors and covariance matrices of the WME, AURE, AULE, SRRE, SRLE, SWMAURE, and SWMAULE as
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{WME}}\right) = 0, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{WME}}\right) = \sigma^{2}(S+\omega T)^{-1}\left(S+\omega^{2} T\right)(S+\omega T)^{-1}, \tag{17, 18}$$
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{AURE}}\right) = -k^{2}(S+kI)^{-2}\beta, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{AURE}}\right) = \sigma^{2}T_{k}S^{-1}T_{k}, \tag{19, 20}$$
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{AULE}}\right) = -(1-d)^{2}(S+I)^{-2}\beta, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{AULE}}\right) = \sigma^{2}F_{d}S^{-1}F_{d}, \tag{21, 22}$$
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{SRRE}}\right) = -k(S+kI)^{-1}\beta, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{SRRE}}\right) = \sigma^{2}(S+kI)^{-1}S(S+T)^{-1}S(S+kI)^{-1}, \tag{23, 24}$$
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{SRLE}}\right) = -(1-d)(S+I)^{-1}\beta, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{SRLE}}\right) = \sigma^{2}(S+I)^{-1}(S+dI)(S+T)^{-1}(S+dI)(S+I)^{-1}, \tag{25, 26}$$
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{SWMAURE}}\right) = -k^{2}(S+kI)^{-2}\beta, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{SWMAURE}}\right) = \sigma^{2}T_{k}(S+\omega T)^{-1}\left(S+\omega^{2}T\right)(S+\omega T)^{-1}T_{k}, \tag{27, 28}$$
$$\operatorname{Bias}\left(\hat{\beta}_{\mathrm{SWMAULE}}\right) = -(1-d)^{2}(S+I)^{-2}\beta, \qquad \operatorname{Cov}\left(\hat{\beta}_{\mathrm{SWMAULE}}\right) = \sigma^{2}F_{d}(S+\omega T)^{-1}\left(S+\omega^{2}T\right)(S+\omega T)^{-1}F_{d}, \tag{29, 30}$$
where $T_{k} = I - k^{2}(S+kI)^{-2}$, $F_{d} = I - (1-d)^{2}(S+I)^{-2}$, and $T = R'W^{-1}R$. In the remaining sections, our primary aim is to study the performance of the new estimators over the related estimators under the quadratic bias (QB) and variance matrix criteria.

#### 3. Quadratic Bias Comparisons of Estimators

In this section, we will focus on quadratic bias comparisons among the AURE, AULE, SRRE, SRLE, SWMAURE, and SWMAULE. Let $\tilde{\beta}$ be some estimator of the parameter vector $\beta$; then the quadratic bias of $\tilde{\beta}$ is defined as $\operatorname{QB}(\tilde{\beta}) = \operatorname{Bias}(\tilde{\beta})'\operatorname{Bias}(\tilde{\beta})$, where $\operatorname{Bias}(\tilde{\beta}) = E(\tilde{\beta}) - \beta$. According to the definition of the quadratic bias, we can easily compute the quadratic biases of the AURE, AULE, SRRE, SRLE, SWMAURE, and SWMAULE as
$$\operatorname{QB}\left(\hat{\beta}_{\mathrm{AURE}}\right) = \beta' k^{4}(S+kI)^{-4}\beta, \tag{31}$$
$$\operatorname{QB}\left(\hat{\beta}_{\mathrm{AULE}}\right) = \beta'(1-d)^{4}(S+I)^{-4}\beta, \tag{32}$$
$$\operatorname{QB}\left(\hat{\beta}_{\mathrm{SRRE}}\right) = \beta' k^{2}(S+kI)^{-2}\beta, \tag{33}$$
$$\operatorname{QB}\left(\hat{\beta}_{\mathrm{SRLE}}\right) = \beta'(1-d)^{2}(S+I)^{-2}\beta, \tag{34}$$
$$\operatorname{QB}\left(\hat{\beta}_{\mathrm{SWMAURE}}\right) = \beta' k^{4}(S+kI)^{-4}\beta, \tag{35}$$
$$\operatorname{QB}\left(\hat{\beta}_{\mathrm{SWMAULE}}\right) = \beta'(1-d)^{4}(S+I)^{-4}\beta. \tag{36}$$
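The quadratic biases can be evaluated numerically from the bias vectors. The bias expressions used below, such as $-k(S+kI)^{-1}\beta$ for the SRRE and $-k^{2}(S+kI)^{-2}\beta$ for the proposed ridge-type estimator, are the standard forms consistent with the comparisons in this section; the design matrix and true $\beta$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
p = 3
X = rng.normal(size=(30, p))
S = X.T @ X
I = np.eye(p)
beta = np.array([1.0, -2.0, 0.5])       # assumed true parameter vector

k, d = 0.3, 0.5
Ak = np.linalg.inv(S + k * I)
Bd = np.linalg.inv(S + I)

def qb(bias):
    # QB(beta_tilde) = Bias' Bias
    return float(bias @ bias)

qb_srre    = qb(-k * Ak @ beta)                 # bias of SRRE: -k (S+kI)^{-1} beta
qb_swmaure = qb(-k**2 * Ak @ Ak @ beta)         # bias of ridge-type proposal
qb_srle    = qb(-(1 - d) * Bd @ beta)           # bias of SRLE
qb_swmaule = qb(-(1 - d)**2 * Bd @ Bd @ beta)   # bias of Liu-type proposal
```

For any $k > 0$, $0 < d < 1$, and $\beta \neq 0$, the two proposed estimators yield strictly smaller quadratic biases, matching Theorems 1 and 2.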

Note that the AURE and the SWMAURE have the same quadratic bias, and the AULE and the SWMAULE also have the same quadratic bias. Thus we only compare the quadratic biases of the SRRE with the SWMAURE and of the SRLE with the SWMAULE, respectively.

##### 3.1. Quadratic Bias Comparison between the SRRE and the SWMAURE

In this subsection, we will focus on the quadratic bias comparison between the SRRE and the SWMAURE (the stochastic weighted mixed almost unbiased ridge estimator). Firstly, we derive the difference of the quadratic biases from (33) and (35) as
$$\Delta_{1} = \operatorname{QB}\left(\hat{\beta}_{\mathrm{SRRE}}\right) - \operatorname{QB}\left(\hat{\beta}_{\mathrm{SWMAURE}}\right) = \beta'\left[k^{2}(S+kI)^{-2} - k^{4}(S+kI)^{-4}\right]\beta.$$

Theorem 1. *The SWMAURE is superior to the SRRE under the quadratic bias criterion, namely, $\operatorname{QB}(\hat{\beta}_{\mathrm{SRRE}}) - \operatorname{QB}(\hat{\beta}_{\mathrm{SWMAURE}}) > 0$ for $k > 0$ and $\beta \neq 0$. That is, the proposed SWMAURE can be seen as a bias-corrected estimator of the SRRE.*

*Proof. *For the difference $\Delta_{1}$, we just consider the matrix $D_{1} = k^{2}(S+kI)^{-2} - k^{4}(S+kI)^{-4}$. Since $S$ is symmetric positive definite, there exists some orthogonal matrix $Q$ such that $Q'SQ = \Lambda = \operatorname{diag}(\lambda_{1}, \ldots, \lambda_{p})$, where $\lambda_{1} \geq \cdots \geq \lambda_{p} > 0$ denote the ordered eigenvalues of the matrix $S$. Therefore we can easily compute $Q'D_{1}Q = \operatorname{diag}(\delta_{1}, \ldots, \delta_{p})$, where
$$\delta_{i} = \frac{k^{2}}{(\lambda_{i}+k)^{2}}\left[1 - \frac{k^{2}}{(\lambda_{i}+k)^{2}}\right], \quad i = 1, \ldots, p.$$
Note that $k > 0$ and $\lambda_{i} > 0$; thus $0 < k/(\lambda_{i}+k) < 1$ and $\delta_{i} > 0$ for $i = 1, \ldots, p$, which means that $D_{1}$ is a positive definite matrix. Therefore, $\Delta_{1} = \beta'D_{1}\beta > 0$ for $\beta \neq 0$.

This completes the proof.

##### 3.2. Quadratic Bias Comparison between the SRLE and the SWMAULE

In this subsection, the comparison between the quadratic bias of the SRLE and that of the SWMAULE (the stochastic weighted mixed almost unbiased Liu estimator) is discussed. We get the difference of the quadratic biases from (34) and (36) as
$$\Delta_{2} = \operatorname{QB}\left(\hat{\beta}_{\mathrm{SRLE}}\right) - \operatorname{QB}\left(\hat{\beta}_{\mathrm{SWMAULE}}\right) = \beta'\left[(1-d)^{2}(S+I)^{-2} - (1-d)^{4}(S+I)^{-4}\right]\beta.$$

Theorem 2. *The SWMAULE is superior to the SRLE under the quadratic bias criterion, namely, $\operatorname{QB}(\hat{\beta}_{\mathrm{SRLE}}) - \operatorname{QB}(\hat{\beta}_{\mathrm{SWMAULE}}) > 0$ for $0 < d < 1$ and $\beta \neq 0$. That is, the proposed SWMAULE can be seen as a bias-corrected estimator of the SRLE.*

*Proof. *For the difference $\Delta_{2}$, we just consider the matrix $D_{2} = (1-d)^{2}(S+I)^{-2} - (1-d)^{4}(S+I)^{-4}$. Note that $Q'D_{2}Q = \operatorname{diag}(\eta_{1}, \ldots, \eta_{p})$, where $Q$ is the orthogonal matrix that diagonalizes $S$ and
$$\eta_{i} = \frac{(1-d)^{2}}{(\lambda_{i}+1)^{2}}\left[1 - \frac{(1-d)^{2}}{(\lambda_{i}+1)^{2}}\right], \quad i = 1, \ldots, p.$$
For $0 < d < 1$, we have $0 < (1-d)/(\lambda_{i}+1) < 1$; thus $\eta_{i} > 0$ for $i = 1, \ldots, p$, which means that $D_{2}$ is positive definite. Therefore, $\Delta_{2} = \beta'D_{2}\beta > 0$ for $\beta \neq 0$.

This completes the proof.

#### 4. Variance Comparisons of Estimators

In this section, we will focus on variance matrix comparisons among the WME, AURE, AULE, SRRE, SRLE, SWMAURE, and SWMAULE. For the sake of convenience, we first list a lemma needed in the following discussions.

Lemma 3. *Let $A$ and $B$ be two $p \times p$ positive definite matrices. Then $A - B \geq 0$ if and only if $\lambda_{1}(BA^{-1}) \leq 1$, where $\lambda_{1}(BA^{-1})$ is the largest eigenvalue of the matrix $BA^{-1}$.*

*Proof. *See Rao et al. [15].

##### 4.1. Variance Comparison between the WME and the SWMAURE

In this subsection, the comparison of the variance matrices between the SWMAURE and the WME is discussed.

Firstly, from (18) and (28), we can compute the difference of the variance matrices between the WME and the SWMAURE as where and are two real symmetric positive definite matrices, and . We can compute where . We get and . Since and have the same nonzero eigenvalues as those of and , respectively, we have and . Therefore, we can get $\operatorname{Cov}(\hat{\beta}_{\mathrm{WME}}) - \operatorname{Cov}(\hat{\beta}_{\mathrm{SWMAURE}}) \geq 0$.

##### 4.2. Variance Comparison between the WME and the SWMAULE

In this subsection, the comparison of the variance matrices between the SWMAULE and the WME is discussed.

From (18) and (30), we can compute the difference of the variance matrices between the WME and the SWMAULE as where . We can compute

For $0 < d < 1$, thus and since and have the same nonzero eigenvalues as those of and , respectively. We get and . Therefore, we can get $\operatorname{Cov}(\hat{\beta}_{\mathrm{WME}}) - \operatorname{Cov}(\hat{\beta}_{\mathrm{SWMAULE}}) \geq 0$.

##### 4.3. Variance Comparison between the AURE and the SWMAURE

In this subsection, the comparison of the variance matrices between the SWMAURE and the AURE is discussed.

From (20) and (28), we can get the difference of the variance matrices between the AURE and the SWMAURE as
$$\operatorname{Cov}\left(\hat{\beta}_{\mathrm{AURE}}\right) - \operatorname{Cov}\left(\hat{\beta}_{\mathrm{SWMAURE}}\right) = \sigma^{2}T_{k}\left[S^{-1} - (S+\omega T)^{-1}\left(S+\omega^{2}T\right)(S+\omega T)^{-1}\right]T_{k},$$
where $T_{k} = I - k^{2}(S+kI)^{-2}$ and $T = R'W^{-1}R$.

We can compute
$$(S+\omega T)S^{-1}(S+\omega T) - \left(S+\omega^{2}T\right) = \omega(2-\omega)T + \omega^{2}TS^{-1}T.$$

For $0 \leq \omega \leq 1$, we get $\omega(2-\omega) \geq 0$. Note that $TS^{-1}T \geq 0$. Thus $S^{-1} - (S+\omega T)^{-1}(S+\omega^{2}T)(S+\omega T)^{-1} \geq 0$. Therefore, we can derive $\operatorname{Cov}(\hat{\beta}_{\mathrm{AURE}}) - \operatorname{Cov}(\hat{\beta}_{\mathrm{SWMAURE}}) \geq 0$.

##### 4.4. Variance Comparison between the AULE and the SWMAULE

In this subsection, the comparison of the variance matrices between the SWMAULE and the AULE is discussed.

It can be seen from (22) and (30) that the difference of the variance matrices between the AULE and the SWMAULE is
$$\operatorname{Cov}\left(\hat{\beta}_{\mathrm{AULE}}\right) - \operatorname{Cov}\left(\hat{\beta}_{\mathrm{SWMAULE}}\right) = \sigma^{2}F_{d}\left[S^{-1} - (S+\omega T)^{-1}\left(S+\omega^{2}T\right)(S+\omega T)^{-1}\right]F_{d},$$
where $F_{d} = I - (1-d)^{2}(S+I)^{-2}$ and $T = R'W^{-1}R$.

Note that $F_{d}$ is symmetric and $S^{-1} - (S+\omega T)^{-1}(S+\omega^{2}T)(S+\omega T)^{-1} \geq 0$ for $0 \leq \omega \leq 1$; therefore, we can easily get that $\operatorname{Cov}(\hat{\beta}_{\mathrm{AULE}}) - \operatorname{Cov}(\hat{\beta}_{\mathrm{SWMAULE}}) \geq 0$.

##### 4.5. Variance Comparison between the SRRE and the SWMAURE

In this subsection, we compare the variance matrices of the SWMAURE and the SRRE.

We can compute the difference of the variance matrices between the SRRE and the SWMAURE from (24) and (28) as where . Note that , and . Therefore, applying Lemma 3, when , we can get $\operatorname{Cov}(\hat{\beta}_{\mathrm{SRRE}}) - \operatorname{Cov}(\hat{\beta}_{\mathrm{SWMAURE}}) \geq 0$.

##### 4.6. Variance Comparison between the SRLE and the SWMAULE

In this subsection, we compare the variance matrices of the SWMAULE and the SRLE.

We can get the difference of the variance matrices between the SRLE and the SWMAULE from (26) and (30) as where . Note that , , and . Therefore, applying Lemma 3, when , we can derive $\operatorname{Cov}(\hat{\beta}_{\mathrm{SRLE}}) - \operatorname{Cov}(\hat{\beta}_{\mathrm{SWMAULE}}) \geq 0$.

#### 5. Numerical Example and Monte Carlo Simulation

In order to illustrate our theoretical results, we first consider in this section a data set originally due to Webster et al. [16]. Since the comparison results depend on the unknown parameters $\beta$ and $\sigma^{2}$, we replace them by their unbiased estimators, namely, the LS estimates. The results here and below are computed with R 2.14.1.

We can easily obtain that the condition number of the design matrix is approximately 208.5, which indicates a moderate multicollinearity among the regressors. The ordinary least squares estimate of $\beta$ and the corresponding estimate of $\sigma^{2}$ are then computed from the data. Consider the following stochastic linear restrictions:

Note that the quadratic bias values of the AURE and the SWMAURE coincide, as do those of the AULE and the SWMAULE, by (31), (32), (35), and (36). Moreover, the quadratic bias values of the SRRE, SWMAURE, SRLE, and SWMAULE do not change when $\omega$ takes different values. Therefore, we just compare the quadratic biases of the SRRE and the SWMAURE, and of the SRLE and the SWMAULE, when $k$ or $d$ takes different values; their quadratic bias values are given in Table 1. For the WME, AURE, SRRE, SWMAURE, AULE, SRLE, and SWMAULE, the estimated MSE values are obtained in Table 2 by replacing all unknown model parameters in the corresponding theoretical MSE expressions by their respective least squares estimators.

It can be seen from Table 1 that the SWMAURE has smaller quadratic bias values than the SRRE in every case, and likewise the quadratic bias of the SWMAULE is smaller than that of the SRLE. From Table 2, we can find that the SWMAURE has smaller MSE values than the AURE in every case. However, when the MSE values of the WME, SRRE, and SWMAURE are compared, no estimator is uniformly superior to the others. In particular, as $\omega$ increases from 0.2 to 0.95, the difference of the MSE values between the SRRE and the SWMAURE remains positive and becomes larger and larger. A similar situation can be found when we compare the WME, AULE, SRLE, and SWMAULE in terms of MSE values.

In order to further illustrate the behavior of our proposed estimators, we perform a Monte Carlo simulation study under different levels of multicollinearity. Following Xu and Yang [17], we generate the explanatory variables by
$$x_{ij} = \left(1-\gamma^{2}\right)^{1/2}z_{ij} + \gamma z_{i,p+1}, \quad i = 1, \ldots, n, \; j = 1, \ldots, p,$$
where the $z_{ij}$ are independent standard normal pseudorandom numbers and $\gamma$ is specified so that the theoretical correlation between any two explanatory variables is $\gamma^{2}$. Observations on the dependent variable are then generated by
$$y_{i} = \beta_{1}x_{i1} + \cdots + \beta_{p}x_{ip} + \varepsilon_{i}, \quad i = 1, \ldots, n,$$
where the $\varepsilon_{i}$ are independent normal pseudorandom numbers with expectation zero and variance $\sigma^{2}$. In this study, we fix the sample size $n$, the error variance $\sigma^{2}$, the true coefficient vector $\beta$, and the stochastic restrictions $r = R\beta + e$. Furthermore, we study the two cases $\gamma = 0.99$ and $\gamma = 0.999$.
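The data-generating scheme of Xu and Yang [17] described above can be sketched as follows, using NumPy rather than the paper's R; the sample size, dimension, and coefficient values are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

# Monte Carlo design: explanatory variables with pairwise theoretical
# correlation gamma^2 (all sizes and values here are illustrative).
rng = np.random.default_rng(5)
n, p = 50, 4
gamma = 0.99

Z = rng.normal(size=(n, p + 1))                       # independent standard normals
X = np.sqrt(1 - gamma**2) * Z[:, :p] + gamma * Z[:, [p]]

beta = np.ones(p)
sigma = 1.0
y = X @ beta + rng.normal(scale=sigma, size=n)

# Empirical pairwise correlations should be close to gamma^2 = 0.9801.
corr = np.corrcoef(X, rowvar=False)
```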

For different values of the biasing parameters, the quadratic bias values of the SRRE, SWMAURE, SRLE, and SWMAULE are reported in Table 3. The MSE values of the WME, AURE, SRRE, SWMAURE, AULE, SRLE, and SWMAULE are reported in Tables 4 and 5 for $\gamma = 0.99$ and $\gamma = 0.999$, respectively.

From the simulation results shown in Tables 3–5, we can find that the quadratic biases and the MSE values of the estimators increase as the multicollinearity increases. The SWMAURE and SWMAULE have smaller quadratic biases than the SRRE and SRLE, respectively, in every case. On the other hand, the value of $\omega$ controls the relative weight given to the sample information and the prior information, and we can see from Tables 4 and 5 that the estimated MSE values of the WME, SWMAURE, and SWMAULE become smaller and smaller as $\omega$ increases. For the parameter settings considered, the SWMAURE has smaller estimated MSE values than the AURE and the SRRE in every case, and the SWMAULE has smaller estimated MSE values than the AULE in every case. Also, the superiority of the SWMAURE over the WME or SRRE, and of the SWMAULE over the WME or SRLE, with respect to the MSE depends on the choice of the parameters $k$, $d$, and $\omega$. More details can be found in Tables 4 and 5.

In the numerical example, the execution times to compute the quadratic bias of the AURE, AULE, SWMAURE, and SWMAULE are 0.003625 s, 0.003415 s, 0.003764 s, and 0.003813 s, respectively. Moreover, the execution times to compute the MSE of these four estimators are 0.00399 s, 0.00400 s, 0.00800 s, and 0.00500 s, respectively. On the other hand, in the Monte Carlo study, the execution times to compute the quadratic bias of the AURE, AULE, SWMAURE, and SWMAULE are 0.00799 s, 0.00700 s, 0.00999 s, and 0.00799 s, respectively, and the execution times to compute the MSE of these four estimators are 0.01100 s, 0.00900 s, 0.01299 s, and 0.01167 s, respectively. All the experiments are implemented in R 2.14.1 on a personal computer (PC) with an AMD Sempron 3100+ processor at 1.81 GHz.

#### 6. Concluding Remarks

In this paper, we propose two stochastic weighted mixed almost unbiased estimators: the stochastic weighted mixed almost unbiased ridge estimator (SWMAURE) and the stochastic weighted mixed almost unbiased Liu estimator (SWMAULE). A detailed discussion is given of the performance of the proposed estimators with respect to the quadratic bias (QB) and variance matrix criteria. The SWMAURE and SWMAULE are proved to have smaller quadratic biases than the SRRE and SRLE, respectively, and their superiorities over the related estimators in terms of the variance matrix are discussed. Finally, a real data example and a Monte Carlo study are performed; the results show that the two proposed estimators can effectively reduce the quadratic bias compared with the SRRE and SRLE, although the new estimators are still biased, which supports our theoretical findings.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This work is supported by the Fundamental Research Funds for the Central Universities, project no. CQDXWL-2013-007.

#### References

1. A. E. Hoerl and R. W. Kennard, "Ridge regression: biased estimation for nonorthogonal problems," *Technometrics*, vol. 12, no. 1, pp. 55–67, 1970.
2. K. J. Liu, "A new class of biased estimate in linear regression," *Communications in Statistics. Theory and Methods*, vol. 22, no. 2, pp. 393–402, 1993.
3. K. Kadiyala, "A class of almost unbiased and efficient estimators of regression coefficients," *Economics Letters*, vol. 16, no. 3-4, pp. 293–296, 1984.
4. B. Singh, Y. P. Chaubey, and T. D. Dwivedi, "An almost unbiased ridge estimator," *Sankhyā B*, vol. 48, no. 3, pp. 342–346, 1986.
5. F. Akdeniz and S. Kaçiranlar, "On the almost unbiased generalized Liu estimator and unbiased estimation of the bias and MSE," *Communications in Statistics. Theory and Methods*, vol. 24, no. 7, pp. 1789–1797, 1995.
6. F. Akdeniz and H. Erol, "Mean squared error matrix comparisons of some biased estimators in linear regression," *Communications in Statistics. Theory and Methods*, vol. 32, no. 12, pp. 2389–2413, 2003.
7. G. Ü. Şiray, S. Kaçiranlar, and S. Sakallıoğlu, "*r*-*k* class estimator in the linear regression model with correlated errors," *Statistical Papers*, vol. 55, no. 2, pp. 393–407, 2014.
8. J. Wu, "On the performance of principal component Liu-type estimator under the mean square error criterion," *Journal of Applied Mathematics*, vol. 2013, Article ID 858794, 7 pages, 2013.
9. H. Theil, "On the use of incomplete prior information in regression analysis," *Journal of the American Statistical Association*, vol. 58, pp. 401–414, 1963.
10. M. H. Hubert and P. Wijekoon, "Improvement of the Liu estimator in linear regression model," *Statistical Papers*, vol. 47, no. 3, pp. 471–479, 2006.
11. Y. Li and H. Yang, "A new stochastic mixed ridge estimator in linear regression model," *Statistical Papers*, vol. 51, no. 2, pp. 315–323, 2010.
12. J. Wu, "On the stochastic restricted *r*-*k* class estimator and stochastic restricted *r*-*d* class estimator in linear regression model," *Journal of Applied Mathematics*, vol. 2014, Article ID 173836, 6 pages, 2014.
13. B. Schaffrin and H. Toutenburg, "Weighted mixed regression," *Zeitschrift für Angewandte Mathematik und Mechanik*, vol. 70, no. 6, pp. T735–T738, 1990.
14. Y. Li and H. Yang, "A new ridge-type estimator in stochastic restricted linear regression," *Statistics*, vol. 45, no. 2, pp. 123–130, 2011.
15. C. R. Rao, H. Toutenburg, Shalabh, and C. Heumann, *Linear Models and Generalizations: Least Squares and Alternatives*, Springer, New York, NY, USA, 2008.
16. J. T. Webster, R. F. Gunst, and R. L. Mason, "Latent root regression analysis," *Technometrics*, vol. 16, pp. 513–522, 1974.
17. J. Xu and H. Yang, "More on the bias and variance comparisons of the restricted almost unbiased estimators," *Communications in Statistics. Theory and Methods*, vol. 40, no. 22, pp. 4053–4064, 2011.