Research Article | Open Access

Jibo Wu, "On the Performance of Principal Component Liu-Type Estimator under the Mean Square Error Criterion", *Journal of Applied Mathematics*, vol. 2013, Article ID 858794, 7 pages, 2013. https://doi.org/10.1155/2013/858794

# On the Performance of Principal Component Liu-Type Estimator under the Mean Square Error Criterion

**Academic Editor:** Renat Zhdanov

#### Abstract

Wu (2013) proposed an estimator, the principal component Liu-type estimator, to overcome multicollinearity. This estimator is a general estimator which includes the ordinary least squares estimator, the principal component regression estimator, the ridge estimator, the Liu estimator, the Liu-type estimator, the $r$–$k$ class estimator, and the $r$–$d$ class estimator. In this paper, firstly we use a new method to propose the principal component Liu-type estimator; then we study the superiority of the new estimator by using the scalar mean squared error criterion. Finally, we give a numerical example to illustrate the theoretical results.

#### 1. Introduction

Consider the multiple linear regression model
$$y = X\beta + \varepsilon, \tag{1}$$
where $y$ is an $n \times 1$ vector of observations, $X$ is an $n \times p$ known matrix of rank $p$, $\beta$ is a $p \times 1$ vector of unknown parameters, and $\varepsilon$ is an $n \times 1$ vector of disturbances with expectation $E(\varepsilon) = 0$ and variance-covariance matrix $\operatorname{Cov}(\varepsilon) = \sigma^2 I_n$.

According to the Gauss-Markov theorem, the classical ordinary least squares estimator (OLSE) is obtained as follows:
$$\hat{\beta} = (X'X)^{-1}X'y.$$
The OLSE has been regarded as the best estimator for a long time. However, when multicollinearity is present and the matrix $X'X$ is ill-conditioned, the OLSE is no longer a good estimator. Many ways have been proposed to improve on the OLSE. One way is to consider biased estimators, such as the principal component regression estimator [1], the ridge estimator [2], the Liu estimator [3], the Liu-type estimator [4], the two-parameter ridge estimator [5], the $r$–$k$ class estimator [6], the $r$–$d$ class estimator [7], and the modified Liu-type estimator based on the $r$–$k$ class estimator [8].
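As a minimal numerical sketch (assuming NumPy; the data here are synthetic and illustrative, not the paper's), the following builds a design matrix with two nearly collinear columns; the large condition number of $X'X$ signals that its inverse, and hence the variance of the OLSE, blows up:

```python
import numpy as np

# Illustrative design (not the paper's dataset): columns 1 and 2 are nearly
# collinear, so X'X is ill-conditioned.
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-4 * rng.normal(size=50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=50)

# OLSE: beta_hat = (X'X)^{-1} X'y, computed via a linear solve.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# A huge condition number of X'X means the OLSE variance is inflated,
# which motivates the biased estimators discussed below.
cond = np.linalg.cond(X.T @ X)
```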

An alternative method to overcome the multicollinearity is to consider the restrictions. Xu and Yang [9] introduced a stochastic restricted Liu estimator; Li and Yang [10] introduced a stochastic restricted ridge estimator.

To overcome multicollinearity, Hoerl and Kennard [2] solve the following problem:
$$\min_{\beta}\left\{(y - X\beta)'(y - X\beta) + k(\beta'\beta - c)\right\},$$
where $k$ is a Lagrangian multiplier and $c$ is a constant, and obtain the ridge estimator (RE):
$$\hat{\beta}(k) = (X'X + kI_p)^{-1}X'y, \quad k > 0.$$
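A minimal sketch of the RE (assuming NumPy; the data below are synthetic):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (RE): beta_hat(k) = (X'X + k I)^{-1} X'y, k >= 0."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(ridge(X, y, 0.0), beta_ols)                       # k = 0 is OLSE
assert np.linalg.norm(ridge(X, y, 10.0)) < np.linalg.norm(beta_ols)  # shrinkage
```

Increasing $k$ shrinks the estimate toward zero, trading a small bias for a reduction in variance.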

Liu [3] introduced the Liu estimator (LE):
$$\hat{\beta}(d) = (X'X + I_p)^{-1}(X'y + d\hat{\beta}), \quad 0 < d < 1,$$
where $\hat{\beta}$ is the OLSE. This estimator can be obtained by solving the following problem:
$$\min_{\beta}\left\{(y - X\beta)'(y - X\beta) + (d\hat{\beta} - \beta)'(d\hat{\beta} - \beta)\right\}.$$
This estimator can also be obtained in the following way. Suppose that the stochastic restriction $d\hat{\beta} = \beta + \varepsilon'$ is satisfied. Then, using the mixed estimation method [11], we can obtain the Liu estimator.
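A minimal sketch of the LE (assuming NumPy; synthetic data), which shrinks the normal-equations right-hand side toward $d$ times the OLSE:

```python
import numpy as np

def liu(X, y, d):
    """Liu estimator (LE): beta_hat(d) = (X'X + I)^{-1}(X'y + d * beta_ols)."""
    p = X.shape[1]
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * beta_ols)

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(liu(X, y, 1.0), beta_ols)  # d = 1 recovers the OLSE
```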

Recently, Huang et al. [4] introduced a Liu-type estimator (LTE) which includes the OLSE, RE, and LE, defined as follows:
$$\hat{\beta}(k, d) = (X'X + kI_p)^{-1}(X'y + d\hat{\beta}), \quad k > 0,$$
where $\hat{\beta}$ is the OLSE. This estimator can be obtained by solving the following problem:
$$\min_{\beta}\left\{(y - X\beta)'(y - X\beta) + k\Bigl(\beta - \tfrac{d}{k}\hat{\beta}\Bigr)'\Bigl(\beta - \tfrac{d}{k}\hat{\beta}\Bigr)\right\}.$$
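A sketch of the LTE in the nesting form stated above (assuming NumPy; synthetic data), checking that the special cases reduce as claimed:

```python
import numpy as np

def liu_type(X, y, k, d):
    """Liu-type estimator: beta_hat(k,d) = (X'X + kI)^{-1}(X'y + d * beta_ols)."""
    p = X.shape[1]
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + d * beta_ols)

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Nesting: d = k gives the OLSE; d = 0 gives the ridge estimator; k = 1 gives the LE.
assert np.allclose(liu_type(X, y, 2.0, 2.0), beta_ols)
assert np.allclose(liu_type(X, y, 3.0, 0.0),
                   np.linalg.solve(X.T @ X + 3.0 * np.eye(3), X.T @ y))
```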

Let us consider the following transformation for the model (1). Write the spectral decomposition of $X'X$ as $X'X = T\Lambda T'$, where $\Lambda_r$ and $\Lambda_{p-r}$ are diagonal matrices such that the main diagonal elements of the matrix $\Lambda_r = \operatorname{diag}(\lambda_1, \ldots, \lambda_r)$ are the $r$ largest eigenvalues of $X'X$, while $\lambda_{r+1}, \ldots, \lambda_p$ are the remaining eigenvalues. The matrix $T = (T_r, T_{p-r})$ is orthogonal, with $T_r$ consisting of its first $r$ columns and $T_{p-r}$ consisting of the remaining $p - r$ columns of the matrix $T$. The PCRE for $\beta$ can be written as
$$\hat{\beta}_r = T_r(T_r'X'XT_r)^{-1}T_r'X'y = T_r\Lambda_r^{-1}T_r'X'y.$$
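A minimal sketch of the PCRE via the eigendecomposition of $X'X$ (assuming NumPy; synthetic data):

```python
import numpy as np

def pcre(X, y, r):
    """PCRE: T_r (T_r' X'X T_r)^{-1} T_r' X'y, keeping the r leading eigenvectors."""
    S = X.T @ X
    _, T = np.linalg.eigh(S)        # eigh returns eigenvalues in ascending order
    Tr = T[:, ::-1][:, :r]          # eigenvectors of the r largest eigenvalues
    return Tr @ np.linalg.solve(Tr.T @ S @ Tr, Tr.T @ X.T @ y)

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(pcre(X, y, 3), beta_ols)  # r = p keeps every component: OLSE
```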

Baye and Parker [6] introduced the application of the ridge approach to improve the PCR estimator, namely, the $r$–$k$ class estimator:
$$\hat{\beta}_r(k) = T_r(T_r'X'XT_r + kI_r)^{-1}T_r'X'y.$$

Alternatively, Kaçıranlar and Sakallıoğlu [7] introduced the $r$–$d$ class estimator, which is the combination of the LE and the PCRE and is defined as follows:
$$\hat{\beta}_r(d) = T_r(T_r'X'XT_r + I_r)^{-1}(T_r'X'y + d\hat{\alpha}_r), \quad \hat{\alpha}_r = \Lambda_r^{-1}T_r'X'y.$$

Wu [12] proposed the principal component Liu-type estimator (PCTTE), which is defined as
$$\hat{\beta}_r(k, d) = T_r(T_r'X'XT_r + kI_r)^{-1}(T_r'X'y + d\hat{\alpha}_r), \quad k > 0.$$

In this paper, firstly we use a new method to propose the principal component Liu-type estimator. Then, we show that, under certain conditions, the PCTTE is superior to the related estimators in the mean square error criterion. Finally, we give a numerical example to illustrate the theoretical results.

#### 2. The Principal Component Liu-Type Estimator

Using the symbols of the transformation above, (1) can be written as follows:
$$y = XTT'\beta + \varepsilon = Z_r\alpha_r + Z_{p-r}\alpha_{p-r} + \varepsilon, \tag{15}$$
where $Z_r = XT_r$, $Z_{p-r} = XT_{p-r}$, $\alpha_r = T_r'\beta$, and $\alpha_{p-r} = T_{p-r}'\beta$. The PCRE can be obtained by omitting $Z_{p-r}\alpha_{p-r}$, so that the model (15) reduces to
$$y = Z_r\alpha_r + \varepsilon. \tag{16}$$
Then, solving the following problem:
$$\min_{\alpha_r}\,(y - Z_r\alpha_r)'(y - Z_r\alpha_r),$$
we obtain
$$\hat{\alpha}_r = (Z_r'Z_r)^{-1}Z_r'y = \Lambda_r^{-1}T_r'X'y.$$
Then, transforming back to the original parameter space, we get the PCRE of the parameter $\beta$: $\hat{\beta}_r = T_r\hat{\alpha}_r$.

Now, we give a method to obtain the principal component Liu-type estimator. Let $d$ be a constant and $k > 0$ a Lagrangian multiplier, and minimize
$$(y - Z_r\alpha_r)'(y - Z_r\alpha_r) + k\Bigl(\alpha_r - \tfrac{d}{k}\hat{\alpha}_r\Bigr)'\Bigl(\alpha_r - \tfrac{d}{k}\hat{\alpha}_r\Bigr),$$
where $\hat{\alpha}_r = \Lambda_r^{-1}T_r'X'y$. Then we get
$$\hat{\alpha}_r(k, d) = (Z_r'Z_r + kI_r)^{-1}(Z_r'y + d\hat{\alpha}_r) = (\Lambda_r + kI_r)^{-1}(T_r'X'y + d\hat{\alpha}_r).$$
After transforming back to the original parameter space, we obtain
$$\hat{\beta}_r(k, d) = T_r(\Lambda_r + kI_r)^{-1}(T_r'X'y + d\hat{\alpha}_r).$$
This estimator can also be obtained by minimizing the analogous objective function in the original parameter space. It is easy to see that the new estimator has the following properties:

1. $\hat{\beta}_r(k, k)$ is the PCRE;
2. $\hat{\beta}_p(k, k)$ is the OLSE;
3. $\hat{\beta}_r(k, 0)$ is the $r$–$k$ class estimator;
4. $\hat{\beta}_p(1, d)$ is the LE;
5. $\hat{\beta}_r(1, d)$ is the $r$–$d$ class estimator;
6. $\hat{\beta}_p(k, 0)$ is the RE;
7. $\hat{\beta}_p(k, d)$ is the LTE.
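The construction above can be sketched numerically (assuming NumPy; synthetic data). The checks confirm two of the nesting properties: with all components retained and $d = k$ the estimator collapses to the OLSE, and with $d = k$ it collapses to the PCRE regardless of $k$:

```python
import numpy as np

def pctte(X, y, r, k, d):
    """PCTTE sketch: T_r (Lambda_r + kI)^{-1} (T_r'X'y + d * alpha_r),
    with alpha_r = Lambda_r^{-1} T_r'X'y the PCR coefficient estimate."""
    S = X.T @ X
    _, T = np.linalg.eigh(S)
    Tr = T[:, ::-1][:, :r]          # eigenvectors of the r largest eigenvalues
    Sr = Tr.T @ S @ Tr              # = Lambda_r
    alpha_r = np.linalg.solve(Sr, Tr.T @ X.T @ y)
    return Tr @ np.linalg.solve(Sr + k * np.eye(r), Tr.T @ X.T @ y + d * alpha_r)

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 3))
y = rng.normal(size=30)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# r = p with d = k gives the OLSE; d = k gives the PCRE for any k
# (the k-dependence cancels).
assert np.allclose(pctte(X, y, 3, 2.0, 2.0), beta_ols)
assert np.allclose(pctte(X, y, 2, 5.0, 5.0), pctte(X, y, 2, 1.0, 1.0))
```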

#### 3. Superiority of the Principal Component Liu-Type Estimator over Some Estimators in the Mean Square Error Criterion

The mean square error (MSE) of an estimator $\tilde{\beta}$ of $\beta$ is defined as
$$\operatorname{MSE}(\tilde{\beta}) = \operatorname{tr}\bigl(D(\tilde{\beta})\bigr) + b'b,$$
where $D(\tilde{\beta})$ is the dispersion matrix and $b = E(\tilde{\beta}) - \beta$ is the bias vector. For two given estimators $\tilde{\beta}_1$ and $\tilde{\beta}_2$ of $\beta$, the estimator $\tilde{\beta}_2$ is said to be superior to $\tilde{\beta}_1$ in the MSE criterion if and only if $\operatorname{MSE}(\tilde{\beta}_2) < \operatorname{MSE}(\tilde{\beta}_1)$.
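For a linear estimator $\tilde{\beta} = Ay$, the scalar MSE splits into a variance trace plus a squared bias; a minimal sketch (assuming NumPy; synthetic data, $\sigma^2 = 1$):

```python
import numpy as np

def scalar_mse(A, X, beta, sigma2=1.0):
    """Scalar MSE of the linear estimator beta_hat = A y under y = X beta + eps,
    Cov(eps) = sigma^2 I: variance term sigma^2 tr(AA') plus squared bias."""
    bias = (A @ X - np.eye(len(beta))) @ beta
    return sigma2 * np.trace(A @ A.T) + bias @ bias

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 3))
beta = np.array([1.0, -1.0, 0.5])

# For the OLSE, A = (X'X)^{-1}X': the bias vanishes and the MSE reduces to
# sigma^2 tr((X'X)^{-1}).
A_ols = np.linalg.solve(X.T @ X, X.T)
assert np.isclose(scalar_mse(A_ols, X, beta), np.trace(np.linalg.inv(X.T @ X)))
```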

##### 3.1. The PCTTE versus the PCRE

Firstly, we compute the dispersion matrix
$$D\bigl(\hat{\beta}_r(k, d)\bigr) = \sigma^2 T_r(\Lambda_r + kI_r)^{-1}(I_r + d\Lambda_r^{-1})\Lambda_r(I_r + d\Lambda_r^{-1})(\Lambda_r + kI_r)^{-1}T_r',$$
where $\hat{\alpha}_r = \Lambda_r^{-1}T_r'X'y$. Then, the mean square error (MSE) of $\hat{\beta}_r(k, d)$ is given as follows:
$$\operatorname{MSE}\bigl(\hat{\beta}_r(k, d)\bigr) = \sigma^2\sum_{i=1}^{r}\frac{(\lambda_i + d)^2}{\lambda_i(\lambda_i + k)^2} + (k - d)^2\sum_{i=1}^{r}\frac{\alpha_i^2}{(\lambda_i + k)^2} + \sum_{i=r+1}^{p}\alpha_i^2. \tag{26}$$

Letting $d = k$ in (26), we obtain the MSE of the PCRE $\hat{\beta}_r$ as follows:
$$\operatorname{MSE}(\hat{\beta}_r) = \sigma^2\sum_{i=1}^{r}\frac{1}{\lambda_i} + \sum_{i=r+1}^{p}\alpha_i^2.$$

Now we consider the following difference:
$$\operatorname{MSE}(\hat{\beta}_r) - \operatorname{MSE}\bigl(\hat{\beta}_r(k, d)\bigr) = \sum_{i=1}^{r}\frac{(k - d)\bigl[\sigma^2(2\lambda_i + k + d) - (k - d)\lambda_i\alpha_i^2\bigr]}{\lambda_i(\lambda_i + k)^2}.$$
If $k > d$, then the $i$th term is nonnegative when $\sigma^2(2\lambda_i + k + d) \ge (k - d)\lambda_i\alpha_i^2$, so the difference is nonnegative when this holds for all $i$. So we have the following theorem.

Theorem 1. *The estimator $\hat{\beta}_r(k, d)$ is superior to the estimator $\hat{\beta}_r$ under the mean square error criterion for $k > d$ if $\sigma^2(2\lambda_i + k + d) \ge (k - d)\lambda_i\alpha_i^2$ for all $i = 1, \ldots, r$.*
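The style of comparison in this section can be sanity-checked numerically (assuming NumPy; synthetic ill-conditioned data, with the true $\beta$ and $\sigma^2 = 1$ taken as known so that exact scalar MSEs can be evaluated). Here a Liu-type shrinkage of the OLSE attains a smaller scalar MSE than the OLSE itself:

```python
import numpy as np

def scalar_mse(A, X, beta, sigma2=1.0):
    """Scalar MSE of beta_hat = A y: sigma^2 tr(AA') + ||(AX - I) beta||^2."""
    bias = (A @ X - np.eye(len(beta))) @ beta
    return sigma2 * np.trace(A @ A.T) + bias @ bias

# Ill-conditioned synthetic design (illustrative, not the paper's data).
rng = np.random.default_rng(7)
z = rng.normal(size=50)
X = np.column_stack([z, z + 0.01 * rng.normal(size=50), rng.normal(size=50)])
beta = np.array([1.0, 1.0, 1.0])
S = X.T @ X

A_ols = np.linalg.solve(S, X.T)                             # OLSE matrix (X'X)^{-1}X'
k, d = 5.0, 0.1
A_lt = np.linalg.solve(S + k * np.eye(3), X.T + d * A_ols)  # Liu-type matrix

# Shrinkage sharply reduces the variance term at the cost of a small bias.
assert scalar_mse(A_lt, X, beta) < scalar_mse(A_ols, X, beta)
```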

##### 3.2. The PCTTE versus the $r$–$k$ Class Estimator

From the definition of the PCTTE, we know that letting $d = 0$ in $\hat{\beta}_r(k, d)$, we obtain the $r$–$k$ class estimator $\hat{\beta}_r(k)$.

Theorem 2. *Let $k > \sigma^2/\alpha_i^2$ for all $i = 1, \ldots, r$. Then, there exists a strictly positive $d_0$ such that $\hat{\beta}_r(k, d)$ is superior to $\hat{\beta}_r(k)$ in the mean square error criterion for $0 < d < d_0$.*

*Proof.* We know that $\operatorname{MSE}(\hat{\beta}_r(k, 0)) = \operatorname{MSE}(\hat{\beta}_r(k))$, so that by continuity it is sufficient to show that $\operatorname{MSE}(\hat{\beta}_r(k, d))$ is decreasing in $d$ in the neighborhood of $d = 0$.

Performing the calculus for fixed $k$, we can see that
$$\frac{\partial}{\partial d}\operatorname{MSE}\bigl(\hat{\beta}_r(k, d)\bigr) = 2\sum_{i=1}^{r}\frac{\sigma^2(\lambda_i + d)/\lambda_i - (k - d)\alpha_i^2}{(\lambda_i + k)^2}.$$
So when $d = 0$ and $k > \sigma^2/\alpha_i^2$ for all $i$, every summand is negative; that is to say, the MSE is decreasing at $d = 0$. The proof of Theorem 2 is completed.

##### 3.3. The PCTTE versus the $r$–$d$ Class Estimator

From the definition of the PCTTE, we know that letting $k = 1$ in $\hat{\beta}_r(k, d)$, we obtain the $r$–$d$ class estimator $\hat{\beta}_r(d)$, whose MSE follows from (26) with $k = 1$.

Now, we discuss the following difference:
$$\operatorname{MSE}\bigl(\hat{\beta}_r(d)\bigr) - \operatorname{MSE}\bigl(\hat{\beta}_r(k, d)\bigr) = \sigma^2\sum_{i=1}^{r}\frac{(\lambda_i + d)^2}{\lambda_i}\left[\frac{1}{(\lambda_i + 1)^2} - \frac{1}{(\lambda_i + k)^2}\right] + \sum_{i=1}^{r}\alpha_i^2\left[\frac{(1 - d)^2}{(\lambda_i + 1)^2} - \frac{(k - d)^2}{(\lambda_i + k)^2}\right]. \tag{31}$$
Then by (31), if we want $\operatorname{MSE}(\hat{\beta}_r(d)) - \operatorname{MSE}(\hat{\beta}_r(k, d)) > 0$, then when $k > 1$, it is easy to know that $(\lambda_i + 1)^2/(\lambda_i + k)^2$ is always less than 1, so the first sum is positive, and the $i$th term of the difference is bigger than 0 if
$$\alpha_i^2 \le \frac{\sigma^2(\lambda_i + d)(2\lambda_i + k + 1)}{\lambda_i\bigl[(k - d)(\lambda_i + 1) + (1 - d)(\lambda_i + k)\bigr]}.$$
Then, we get $\operatorname{MSE}(\hat{\beta}_r(d)) - \operatorname{MSE}(\hat{\beta}_r(k, d)) \ge 0$ for $k > 1$ and $0 < d < 1$ with this condition holding for all $i$.

Thus, we obtain the following theorem.

Theorem 3. *If $k > 1$, $0 < d < 1$, and
$$\alpha_i^2 \le \frac{\sigma^2(\lambda_i + d)(2\lambda_i + k + 1)}{\lambda_i\bigl[(k - d)(\lambda_i + 1) + (1 - d)(\lambda_i + k)\bigr]}$$
*for all $i = 1, \ldots, r$, then the $\hat{\beta}_r(k, d)$ is better than the $\hat{\beta}_r(d)$ in the mean square error sense.*

##### 3.4. The PCTTE versus the OLSE

In this subsection, we will give the comparison of the PCTTE and the OLSE under the mean square error criterion.

Letting $r = p$ and $d = k$ in (26), we obtain the MSE of the OLSE as follows:
$$\operatorname{MSE}(\hat{\beta}) = \sigma^2\sum_{i=1}^{p}\frac{1}{\lambda_i}.$$

In order to compare the PCTTE and the OLSE, now we consider the following difference:
$$\operatorname{MSE}(\hat{\beta}) - \operatorname{MSE}\bigl(\hat{\beta}_r(k, d)\bigr) = \sum_{i=1}^{r}\frac{(k - d)\bigl[\sigma^2(2\lambda_i + k + d) - (k - d)\lambda_i\alpha_i^2\bigr]}{\lambda_i(\lambda_i + k)^2} + \sum_{i=r+1}^{p}\left(\frac{\sigma^2}{\lambda_i} - \alpha_i^2\right).$$
When $d < k$, the first sum is nonnegative provided $\sigma^2(2\lambda_i + k + d) \ge (k - d)\lambda_i\alpha_i^2$ for $i = 1, \ldots, r$, and the second sum is nonnegative if $\alpha_i^2 \le \sigma^2/\lambda_i$ for $i = r + 1, \ldots, p$; if the second sum is negative, the difference can still be positive when the first sum dominates. Thus, we can get the following theorem.

Theorem 4. (1) *If $\alpha_i^2 \le \sigma^2/\lambda_i$ for all $i = r + 1, \ldots, p$, then $\hat{\beta}_r(k, d)$ is superior to $\hat{\beta}$ in the mean square error sense for any $k > d$ satisfying $\sigma^2(2\lambda_i + k + d) \ge (k - d)\lambda_i\alpha_i^2$ for all $i = 1, \ldots, r$.* (2) *If $\alpha_i^2 > \sigma^2/\lambda_i$ for some $i \in \{r + 1, \ldots, p\}$, then $\hat{\beta}_r(k, d)$ is superior to $\hat{\beta}$ in the mean square error sense for $k > d$ when the first sum in the difference above exceeds $\sum_{i=r+1}^{p}(\alpha_i^2 - \sigma^2/\lambda_i)$.*

##### 3.5. The PCTTE versus the LTE

In this subsection, we will compare the PCTTE with the LTE in the mean square error sense.

Note that letting $r = p$ in $\hat{\beta}_r(k, d)$ gives the LTE $\hat{\beta}(k, d)$; then the mean square error of the LTE can be written as
$$\operatorname{MSE}\bigl(\hat{\beta}(k, d)\bigr) = \sigma^2\sum_{i=1}^{p}\frac{(\lambda_i + d)^2}{\lambda_i(\lambda_i + k)^2} + (k - d)^2\sum_{i=1}^{p}\frac{\alpha_i^2}{(\lambda_i + k)^2}.$$
Firstly, we discuss the difference between $\operatorname{MSE}(\hat{\beta}(k, d))$ and $\operatorname{MSE}(\hat{\beta}_r(k, d))$:
$$\operatorname{MSE}\bigl(\hat{\beta}(k, d)\bigr) - \operatorname{MSE}\bigl(\hat{\beta}_r(k, d)\bigr) = \sum_{i=r+1}^{p}\left[\frac{\sigma^2(\lambda_i + d)^2}{\lambda_i(\lambda_i + k)^2} + \frac{(k - d)^2\alpha_i^2}{(\lambda_i + k)^2} - \alpha_i^2\right].$$
So, for $i > r$, the $i$th term is nonnegative if $\alpha_i^2 \le \sigma^2(\lambda_i + d)/\bigl(\lambda_i(\lambda_i + 2k - d)\bigr)$.

If the inequality is reversed for some $i > r$, the corresponding term is negative.

Then we have the following theorem.

Theorem 5. *If $\alpha_i^2 \le \sigma^2(\lambda_i + d)/\bigl(\lambda_i(\lambda_i + 2k - d)\bigr)$ for all $i = r + 1, \ldots, p$, then $\hat{\beta}_r(k, d)$ is superior to the LTE $\hat{\beta}(k, d)$ in the mean square error sense.*

#### 4. Numerical Example

To illustrate our theoretical results, we use a numerical example to investigate the estimators studied above, based on the dataset originally due to Gruber [13] and later considered by Akdeniz and Erol [14]. Firstly, we compute the OLSE of $\beta$. For the OLSE, PCRE, $r$–$k$ class estimator, $r$–$d$ class estimator, Liu-type estimator (LTE), and the new estimator (PCTTE), the estimated mean square error (MSE) values are obtained by replacing all unknown model parameters by their respective least squares estimators in the corresponding expressions.

Firstly, we consider the comparison of the OLSE and the PCTTE. When $d$ is fixed, if the value of $k$ is large, then the new estimator has smaller MSE values than the OLSE; that is to say, the new estimator is better than the OLSE. So we see that the new estimator improves on the OLSE.

From Figure 3, we see that, when $k$ and $d$ are fixed appropriately, the new estimator is better than the PCRE; this agrees with Theorem 1, under whose condition the new estimator is better. For the numerical example, the new estimator is more efficient than the PCRE.

By Figures 1, 2, 3, 4, 5, 6, 7, and 8, we see that when $k > d$, the new estimator is better than the LTE. So in practice, we can choose a bigger $k$ and a smaller $d$.

#### 5. Conclusion

In this paper, we have used a new method to propose the principal component Liu-type estimator. Then, we discussed the superiority of the new estimator over the OLSE, the PCRE, the $r$–$k$ class estimator, the $r$–$d$ class estimator, and the Liu-type estimator in the sense of mean square error. We also gave a method to choose the biasing parameters. Finally, we gave a numerical example to illustrate the performance of the various estimators.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The author is grateful to three anonymous referees for their valuable comments which improved the quality of the paper. This work was supported by the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grants nos. Y2013SC43, R2013SC12), Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant no. KJTD201321), and the National Natural Science Foundation of China (nos. 11201505, 71271227).

#### References

1. W. F. Massy, “Principal components regression in exploratory statistical research,” *Journal of the American Statistical Association*, vol. 60, pp. 234–256, 1965.
2. A. E. Hoerl and R. W. Kennard, “Ridge regression: biased estimation for nonorthogonal problems,” *Technometrics*, vol. 12, no. 1, pp. 55–67, 1970.
3. K. J. Liu, “A new class of biased estimate in linear regression,” *Communications in Statistics*, vol. 22, no. 2, pp. 393–402, 1993.
4. W. H. Huang, J. J. Qi, and N. T. Huang, “Liu-type estimation for a linear regression model with linear restrictions,” *Journal of Systems Science and Mathematical Sciences*, vol. 29, no. 7, pp. 937–946, 2009.
5. S. Toker and S. Kaçıranlar, “On the performance of two parameter ridge estimator under the mean square error criterion,” *Applied Mathematics and Computation*, vol. 219, no. 9, pp. 4718–4728, 2013.
6. M. R. Baye and D. F. Parker, “Combining ridge and principal component regression: a money demand illustration,” *Communications in Statistics*, vol. 13, no. 2, pp. 197–205, 1984.
7. S. Kaçıranlar and S. Sakallıoğlu, “Combining the Liu estimator and the principal component regression estimator,” *Communications in Statistics*, vol. 30, no. 12, pp. 2699–2705, 2001.
8. M. I. Alheety and B. M. G. Kibria, “Modified Liu-type estimator based on ($r$-$k$) class estimator,” *Communications in Statistics*, vol. 42, no. 2, pp. 304–319, 2013.
9. J. Xu and H. Yang, “On the restricted almost unbiased estimators in linear regression,” *Journal of Applied Statistics*, vol. 38, no. 3, pp. 605–617, 2011.
10. Y. Li and H. Yang, “Two kinds of restricted modified estimators in linear regression model,” *Journal of Applied Statistics*, vol. 38, no. 7, pp. 1447–1454, 2011.
11. J. Durbin, “A note on regression when there is extraneous information about one of the coefficients,” *Journal of the American Statistical Association*, vol. 48, pp. 799–808, 1953.
12. J. B. Wu, *Research on the properties of parameter estimation in linear regression model* [Ph.D. thesis], Chongqing University, Chongqing, China, 2013.
13. M. H. J. Gruber, *Improving Efficiency by Shrinkage: The James-Stein and Ridge Regression Estimators*, vol. 156, Marcel Dekker, New York, NY, USA, 1998.
14. F. Akdeniz and H. Erol, “Mean squared error matrix comparisons of some biased estimators in linear regression,” *Communications in Statistics*, vol. 32, no. 12, pp. 2389–2413, 2003.

#### Copyright

Copyright © 2013 Jibo Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.