Abstract

We introduce the weighted mixed almost unbiased ridge estimator (WMAURE) based on the weighted mixed estimator (WME) (Schaffrin and Toutenburg 1990) and the almost unbiased ridge estimator (AURE) (Akdeniz and Erol 2003) in the linear regression model. We discuss superiorities of the new estimator under the quadratic bias (QB) and the mean square error matrix (MSEM) criteria. Additionally, we give a method for obtaining the optimal values of the parameters $k$ and $\omega$. Finally, the theoretical results are illustrated by a real data example and a Monte Carlo study.

1. Introduction

Consider the linear regression model
$y = X\beta + \varepsilon,$ (1)
where $y$ is an $n \times 1$ response vector, $X$ is a known $n \times p$ matrix of full column rank, $\beta$ is a $p \times 1$ vector of unknown parameters, and $\varepsilon$ is an $n \times 1$ vector of errors with expectation $E(\varepsilon) = 0$ and covariance matrix $\mathrm{Cov}(\varepsilon) = \sigma^2 I_n$, where $I_n$ is an identity matrix of order $n$.

It is well known that the ordinary least squares estimator (LS) of $\beta$ is given by $\hat{\beta} = (X'X)^{-1}X'y$, which has been treated as the best estimator for a long time. However, many results have shown that the LS is no longer a good estimator when multicollinearity is present in model (1). To tackle this problem, some suitable biased estimators have been developed, such as the principal component regression estimator (PCR) [1], the ordinary ridge estimator (RE) [2], the $r$-$k$ class estimator [3], the Liu estimator (LE) [4], and the $r$-$d$ class estimator [5]. Kadiyala [6] introduced a class of almost unbiased shrinkage estimators which can be not only almost unbiased but also more efficient than the LS. Singh et al. [7] introduced the almost unbiased generalized ridge estimator by the jackknife procedure, and Akdeniz and Kaçiranlar [8] studied the almost unbiased generalized Liu estimator. By studying bias-corrected versions of the RE and the LE, Akdeniz and Erol [9] discussed the almost unbiased ridge estimator (AURE) and the almost unbiased Liu estimator (AULE).

An alternative technique to tackle multicollinearity is to use prior information about the unknown parameters in addition to the sample information, such as exact or stochastic restrictions on the unknown parameters. When additional stochastic linear restrictions on the unknown parameters are assumed to hold, Durbin [10], Theil and Goldberger [11], and Theil [12] proposed the ordinary mixed estimator (OME). Hubert and Wijekoon [13] proposed the stochastic restricted Liu estimator, and Yang and Xu [14] obtained a new stochastic restricted Liu estimator. By grafting the RE into the mixed estimation procedure, Li and Yang [15] introduced the stochastic restricted ridge estimator. When the prior information and the sample information are not equally important, Schaffrin and Toutenburg [16] studied weighted mixed regression and developed the weighted mixed estimator (WME). Li and Yang [17] grafted the RE into the weighted mixed estimation procedure and proposed the weighted mixed ridge estimator (WMRE).

In this paper, by combining the WME and the AURE, we propose a weighted mixed almost unbiased ridge estimator (WMAURE) for the unknown parameters in a linear regression model when an additional stochastic linear restriction is supposed to hold. Furthermore, we discuss the performance of the new estimator over the LS, WME, AURE, and WMRE with respect to the quadratic bias (QB) and the mean square error matrix (MSEM) criteria.

The rest of the paper is organized as follows. In Section 2, we describe the statistical model and propose the weighted mixed almost unbiased ridge estimator. We compare the new estimator with the weighted mixed ridge estimator and the almost unbiased ridge estimator under the quadratic bias criterion in Section 3. In Section 4, superiorities of the proposed estimator over the relative estimators are considered under the mean square error matrix criterion. In Section 5, the selection of the parameters $k$ and $\omega$ is discussed. Finally, to justify the superiority of the new estimator, we perform a real data example and a Monte Carlo simulation study in Section 6. We give some conclusions in Section 7.

2. The Proposed Estimator

The ordinary ridge estimator proposed by Hoerl and Kennard [2] is defined as
$\hat{\beta}_{\mathrm{RE}}(k) = (S + kI)^{-1}X'y,$
where $S = X'X$ and $k > 0$ is the ridge parameter. Let $T_k = (S + kI)^{-1}S$; we may rewrite $\hat{\beta}_{\mathrm{RE}}(k)$ as
$\hat{\beta}_{\mathrm{RE}}(k) = T_k\hat{\beta}.$

The almost unbiased ridge estimator obtained by Akdeniz and Erol [9] is denoted as
$\hat{\beta}_{\mathrm{AURE}}(k) = \left(I - k^2(S + kI)^{-2}\right)\hat{\beta}.$
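For concreteness, both estimators can be computed directly from the data. The following R sketch uses a small simulated design with induced near-collinearity; the sample size, coefficients, and ridge parameter are illustrative choices, not values from the paper.

## minimal sketch: LS, RE, and AURE on simulated data (all constants illustrative)
set.seed(1)
n <- 50; p <- 4
X <- matrix(rnorm(n * p), n, p)
X[, 4] <- X[, 3] + 0.01 * rnorm(n)          # induce near-collinearity
beta <- c(1, 2, -1, 0.5)
y <- X %*% beta + rnorm(n)

S   <- crossprod(X)                          # S = X'X
bls <- solve(S, crossprod(X, y))             # LS estimator
k   <- 0.5
Sk  <- solve(S + k * diag(p))                # (S + kI)^{-1}
bre   <- Sk %*% crossprod(X, y)              # ridge estimator
baure <- (diag(p) - k^2 * Sk %*% Sk) %*% bls # AURE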

In addition to model (1), let us give some prior information about $\beta$ in the form of a set of $m$ independent stochastic linear restrictions:
$r = R\beta + e,$ (6)
where $R$ is a known $m \times p$ matrix of rank $m$, $e$ is an $m \times 1$ vector of disturbances with expectation $0$ and covariance matrix $\sigma^2W$, $W$ is supposed to be known and positive definite, and the vector $r$ can be interpreted as a random variable with expectation $E(r) = R\beta$. Then, we can derive that (6) does not hold exactly but in the mean. We assume $r$ to be a realized value of the random vector, so that all expectations are conditional on $r$ [18]. We will not separately mention this in the following discussions. Furthermore, it is also supposed that $e$ is stochastically independent of $\varepsilon$.

For the restricted model specified by (1) and (6), the OME introduced by Durbin [10], Theil and Goldberger [11], and Theil [12] is defined as
$\hat{\beta}_{\mathrm{OME}} = (S + R'W^{-1}R)^{-1}(X'y + R'W^{-1}r).$

When the prior information and the sample information are not equally important, Schaffrin and Toutenburg [16] considered the WME, which is denoted as
$\hat{\beta}_{\mathrm{WME}}(\omega) = (S + \omega R'W^{-1}R)^{-1}(X'y + \omega R'W^{-1}r),$ (8)
where $0 \le \omega \le 1$ is a nonstochastic and nonnegative scalar weight.

Note that
$\omega(S + \omega R'W^{-1}R)^{-1}R'W^{-1} = \omega S^{-1}R'(W + \omega RS^{-1}R')^{-1}.$ (9)

Then, the WME (8) can be rewritten as
$\hat{\beta}_{\mathrm{WME}}(\omega) = \hat{\beta} + \omega S^{-1}R'(W + \omega RS^{-1}R')^{-1}(r - R\hat{\beta}).$ (10)
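A minimal R sketch of the WME under an artificial restriction follows; it also checks the equivalence of forms (8) and (10) numerically. The restriction matrix, weight, and data below are assumptions made purely for illustration.

## WME from (8), then verified against the corrected-LS form (10)
set.seed(2)
n <- 50; p <- 4; m <- 2
X <- matrix(rnorm(n * p), n, p)
beta <- c(1, 2, -1, 0.5)
y <- X %*% beta + rnorm(n)
R <- rbind(c(1, 0, 0, 0), c(0, 1, -1, 0))   # illustrative restriction matrix
W <- diag(m)
r <- R %*% beta + rnorm(m)                  # stochastic restriction (6)

S  <- crossprod(X); w <- 0.7
Sw <- S + w * t(R) %*% solve(W, R)          # S + w R'W^{-1}R
bwme <- solve(Sw, crossprod(X, y) + w * t(R) %*% solve(W, r))   # WME (8)

bls <- solve(S, crossprod(X, y))
G   <- w * solve(S, t(R)) %*% solve(W + w * R %*% solve(S, t(R)))
max(abs(bwme - (bls + G %*% (r - R %*% bls))))   # ~ 0, confirming (10)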

Additionally, by combining the WME and the RE, Li and Yang [17] obtained the WMRE, which is defined as
$\hat{\beta}_{\mathrm{WMRE}}(k,\omega) = (S + \omega R'W^{-1}R + kI)^{-1}(X'y + \omega R'W^{-1}r).$

The WMRE also can be rewritten as
$\hat{\beta}_{\mathrm{WMRE}}(k,\omega) = T_{k\omega}\hat{\beta}_{\mathrm{WME}}(\omega),$
where $T_{k\omega} = (S_\omega + kI)^{-1}S_\omega$ and $S_\omega = S + \omega R'W^{-1}R$.

Now, based on the WME [16] and the AURE [9], we can define the following weighted mixed almost unbiased ridge estimator:
$\hat{\beta}_{\mathrm{WMAURE}}(k,\omega) = \left(I - k^2(S_\omega + kI)^{-2}\right)\hat{\beta}_{\mathrm{WME}}(\omega) = \tilde{T}_{k\omega}\hat{\beta}_{\mathrm{WME}}(\omega),$ (14)
which is constructed in the same way as the WMRE in [17].

Using (10), (14) can be rewritten as
$\hat{\beta}_{\mathrm{WMAURE}}(k,\omega) = \tilde{T}_{k\omega}\left[\hat{\beta} + \omega S^{-1}R'(W + \omega RS^{-1}R')^{-1}(r - R\hat{\beta})\right].$

From the definition of $\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)$, it can be seen that $\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)$ is a general estimator, and as special cases of it, the WME and LS can be described as
$\hat{\beta}_{\mathrm{WMAURE}}(0,\omega) = \hat{\beta}_{\mathrm{WME}}(\omega), \qquad \hat{\beta}_{\mathrm{WMAURE}}(0,0) = \hat{\beta},$
and if $\omega = 0$,
$\hat{\beta}_{\mathrm{WMAURE}}(k,0) = \hat{\beta}_{\mathrm{AURE}}(k).$
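The special-case behavior is easy to confirm numerically. The sketch below implements the estimator defined in (14); the design, restriction, and constants are again arbitrary illustrative choices.

## WMAURE as a function of (k, w), with its special cases checked
set.seed(3)
n <- 50; p <- 4; m <- 2
X <- matrix(rnorm(n * p), n, p)
beta <- c(1, 2, -1, 0.5)
y <- X %*% beta + rnorm(n)
R <- rbind(c(1, 0, 0, 0), c(0, 1, -1, 0)); W <- diag(m)
r <- R %*% beta + rnorm(m)

wmaure <- function(k, w) {
  S  <- crossprod(X)
  Sw <- S + w * t(R) %*% solve(W, R)
  bwme <- solve(Sw, crossprod(X, y) + w * t(R) %*% solve(W, r))
  Swk  <- solve(Sw + k * diag(p))
  (diag(p) - k^2 * Swk %*% Swk) %*% bwme   # (I - k^2 (S_w + kI)^{-2}) WME
}
wmaure(0, 0.7)    # equals the WME with w = 0.7
wmaure(0.5, 0)    # equals the AURE with k = 0.5
wmaure(0, 0)      # equals the LS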

It is easy to compute the expectation vectors and covariance matrices of the LS, WME, WMRE, AURE, and WMAURE as
$E(\hat{\beta}) = \beta$, $\mathrm{Cov}(\hat{\beta}) = \sigma^2S^{-1}$;
$E(\hat{\beta}_{\mathrm{WME}}(\omega)) = \beta$, $\mathrm{Cov}(\hat{\beta}_{\mathrm{WME}}(\omega)) = \sigma^2H_\omega$;
$E(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) = T_{k\omega}\beta$, $\mathrm{Cov}(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) = \sigma^2T_{k\omega}H_\omega T_{k\omega}$;
$E(\hat{\beta}_{\mathrm{AURE}}(k)) = \tilde{T}_k\beta$, $\mathrm{Cov}(\hat{\beta}_{\mathrm{AURE}}(k)) = \sigma^2\tilde{T}_kS^{-1}\tilde{T}_k$;
$E(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \tilde{T}_{k\omega}\beta$, $\mathrm{Cov}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2\tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}$, (18)
where $H_\omega = S_\omega^{-1}(S + \omega^2R'W^{-1}R)S_\omega^{-1}$, $\tilde{T}_{k\omega} = I - k^2(S_\omega + kI)^{-2}$, and $\tilde{T}_k = I - k^2(S + kI)^{-2}$.

In the rest of the paper, we study the performance of the new estimator relative to the other estimators under the quadratic bias and the mean square error matrix criteria.

3. Quadratic Bias Comparison of Estimators

In this section, quadratic bias comparisons are performed among the AURE, WMRE, and WMAURE. Let $\hat{\beta}^*$ be an estimator of $\beta$; then the quadratic bias of $\hat{\beta}^*$ is defined as $QB(\hat{\beta}^*) = \mathrm{Bias}(\hat{\beta}^*)'\,\mathrm{Bias}(\hat{\beta}^*)$, where $\mathrm{Bias}(\hat{\beta}^*) = E(\hat{\beta}^*) - \beta$. Based on the definition of quadratic bias, we can easily get the quadratic biases of the AURE, WMRE, and WMAURE:
$QB(\hat{\beta}_{\mathrm{AURE}}(k)) = k^4\beta'(S + kI)^{-4}\beta,$
$QB(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) = k^2\beta'(S_\omega + kI)^{-2}\beta,$
$QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = k^4\beta'(S_\omega + kI)^{-4}\beta.$
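These three quantities are straightforward to evaluate numerically. A short R illustration, using an arbitrary positive definite stand-in for $S_\omega$ and illustrative values of $k$ and $\beta$:

## quadratic biases of AURE, WMRE, WMAURE (illustrative inputs)
qb <- function(bias) drop(crossprod(bias))   # QB = bias'bias
p <- 4; k <- 0.5
S  <- crossprod(matrix(rnorm(200), 50, 4))
Sw <- S + diag(p)                            # stands in for S + w R'W^{-1}R
beta <- c(1, 2, -1, 0.5)
Pk  <- solve(S + k * diag(p)); Pwk <- solve(Sw + k * diag(p))
qb(-k^2 * Pk  %*% Pk  %*% beta)   # QB(AURE)
qb(-k   * Pwk %*% beta)           # QB(WMRE)
qb(-k^2 * Pwk %*% Pwk %*% beta)   # QB(WMAURE), smallest of the three here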

3.1. Quadratic Bias Comparison between the AURE and WMAURE

Here, we focus on the quadratic bias comparison between the AURE and WMAURE. The difference of the quadratic biases can be derived as
$QB(\hat{\beta}_{\mathrm{AURE}}(k)) - QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = k^4\beta'\left[(S + kI)^{-4} - (S_\omega + kI)^{-4}\right]\beta.$

Firstly, we just consider the matrix difference $(S + kI)^{-4} - (S_\omega + kI)^{-4}$. Note that
$QB(\hat{\beta}_{\mathrm{AURE}}(k)) = k^4\beta'(S + kI)^{-4}\beta, \qquad QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = k^4\beta'(S_\omega + kI)^{-4}\beta,$ (21)
and $S_\omega - S = \omega R'W^{-1}R \ge 0$.

It can be seen that $S_\omega + kI \ge S + kI$, namely, $(S_\omega + kI)^{-1} \le (S + kI)^{-1}$. Then we can get
$(S + kI)^{-4} - (S_\omega + kI)^{-4} \ge 0.$ (22)

Thus, we have $QB(\hat{\beta}_{\mathrm{AURE}}(k)) - QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) \ge 0$ by (21) and (22). Therefore, we can derive that $QB(\hat{\beta}_{\mathrm{AURE}}(k)) \ge QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega))$ and the WMAURE outperforms the AURE according to the quadratic bias criterion.

Based on the above analysis, we can derive the following theorem.

Theorem 1. According to the quadratic bias criterion, the WMAURE performs better than the AURE.

3.2. Quadratic Bias Comparison between the WMRE and WMAURE

Similarly, the quadratic bias comparison between the WMRE and WMAURE will be discussed. The difference of the quadratic biases of both estimators can be obtained by
$QB(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) - QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \beta'\left[k^2(S_\omega + kI)^{-2} - k^4(S_\omega + kI)^{-4}\right]\beta.$

We consider the matrix $k^2(S_\omega + kI)^{-2} - k^4(S_\omega + kI)^{-4}$. Note that $\mathrm{Bias}(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) = -k(S_\omega + kI)^{-1}\beta$ and $\mathrm{Bias}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = -k^2(S_\omega + kI)^{-2}\beta$.

For $S_\omega$, there exists an orthogonal matrix $Q$ such that $S_\omega = Q\Lambda_\omega Q'$, where $\Lambda_\omega = \mathrm{diag}(\delta_1, \ldots, \delta_p)$ and $\delta_1 \ge \delta_2 \ge \cdots \ge \delta_p > 0$ denote the ordered eigenvalues of $S_\omega$. Therefore, we can easily compute that $k^2(S_\omega + kI)^{-2} - k^4(S_\omega + kI)^{-4} = Q\,\mathrm{diag}(d_1, \ldots, d_p)\,Q'$, where $d_i = k^2/(\delta_i + k)^2 - k^4/(\delta_i + k)^4$, $i = 1, \ldots, p$. For $k > 0$, $\delta_i > 0$, we obtain $d_i = k^2\delta_i(\delta_i + 2k)/(\delta_i + k)^4 > 0$, which means that $\mathrm{diag}(d_1, \ldots, d_p) > 0$. Thus, we have $k^2(S_\omega + kI)^{-2} - k^4(S_\omega + kI)^{-4} > 0$ and $\beta'[k^2(S_\omega + kI)^{-2} - k^4(S_\omega + kI)^{-4}]\beta \ge 0$. Note that $k^2(S_\omega + kI)^{-2} - k^4(S_\omega + kI)^{-4}$ and $\mathrm{diag}(d_1, \ldots, d_p)$ have the same nonzero eigenvalues, which confirms the positive definiteness.

Therefore, we can derive that $QB(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) \ge QB(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega))$ and the WMAURE performs better than the WMRE under the quadratic bias criterion.

We can get the following theorem.

Theorem 2. According to the quadratic bias criterion, the WMAURE outperforms the WMRE.

4. Mean Square Error Matrix Comparisons of Estimators

In this section, we compare the proposed estimator with the relative estimators under the mean square error matrix (MSEM) criterion.

For the sake of convenience, we list some lemmas needed in the following discussions.

Lemma 3. Let $A$ be a positive definite matrix, namely, $A > 0$; let $b$ be a vector. Then $A - bb' \ge 0$ if and only if $b'A^{-1}b \le 1$.

Proof. See [19].

Lemma 4. Let $\hat{\beta}_j = A_jy$, $j = 1, 2$, be two competing homogeneous linear estimators of $\beta$. Suppose that $D = \mathrm{Cov}(\hat{\beta}_1) - \mathrm{Cov}(\hat{\beta}_2) > 0$. Then $\mathrm{MSEM}(\hat{\beta}_1) - \mathrm{MSEM}(\hat{\beta}_2) \ge 0$ if and only if $b_2'(D + b_1b_1')^{-1}b_2 \le 1$, where $\mathrm{MSEM}(\hat{\beta}_j)$, $b_j$ denote the MSEM and bias vector of $\hat{\beta}_j$, respectively.

Proof. See [20].

Lemma 5. Let two matrices $M > 0$, $N > 0$; then $M - N > 0$ if and only if $\lambda_{\max}(NM^{-1}) < 1$.

Proof. See [18].

Firstly, the MSEM of an estimator $\hat{\beta}^*$ is defined as
$\mathrm{MSEM}(\hat{\beta}^*) = E\left[(\hat{\beta}^* - \beta)(\hat{\beta}^* - \beta)'\right] = \mathrm{Cov}(\hat{\beta}^*) + \mathrm{Bias}(\hat{\beta}^*)\,\mathrm{Bias}(\hat{\beta}^*)'.$

For two given estimators $\hat{\beta}_1$ and $\hat{\beta}_2$, the estimator $\hat{\beta}_2$ is said to be superior to $\hat{\beta}_1$ under the MSEM criterion if and only if
$\mathrm{MSEM}(\hat{\beta}_1) - \mathrm{MSEM}(\hat{\beta}_2) \ge 0.$

The scalar mean square error (MSE) is defined as $\mathrm{MSE}(\hat{\beta}^*) = \mathrm{tr}(\mathrm{MSEM}(\hat{\beta}^*))$. Since it is well known that the MSEM criterion is stronger than the MSE criterion, we only compare the MSEM of the WMAURE with those of the other relative estimators.

Now, from (18), we can easily obtain the MSEMs of the LS, WME, WMRE, AURE, and WMAURE as follows:
$\mathrm{MSEM}(\hat{\beta}) = \sigma^2S^{-1},$ (27)
$\mathrm{MSEM}(\hat{\beta}_{\mathrm{WME}}(\omega)) = \sigma^2H_\omega,$ (28)
$\mathrm{MSEM}(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) = \sigma^2T_{k\omega}H_\omega T_{k\omega} + b_1b_1',$ (29)
$\mathrm{MSEM}(\hat{\beta}_{\mathrm{AURE}}(k)) = \sigma^2\tilde{T}_kS^{-1}\tilde{T}_k + b_2b_2',$ (30)
$\mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2\tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega} + b_3b_3',$ (31)
where $b_1 = -k(S_\omega + kI)^{-1}\beta$, $b_2 = -k^2(S + kI)^{-2}\beta$, and $b_3 = -k^2(S_\omega + kI)^{-2}\beta$.
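Formulas (27)-(31) translate directly into code. As one example, the MSEM of the WMAURE in (31) can be computed as follows; the function arguments are placeholders to be supplied by the user, not quantities from the paper.

## MSEM of the WMAURE per (31): variance part plus bias part
msem_wmaure <- function(S, Rm, W, beta, sigma2, k, w) {
  p  <- nrow(S)
  Sw <- S + w * t(Rm) %*% solve(W, Rm)
  Hw <- solve(Sw) %*% (S + w^2 * t(Rm) %*% solve(W, Rm)) %*% solve(Sw)
  P  <- solve(Sw + k * diag(p))
  Tt <- diag(p) - k^2 * P %*% P              # tilde T_{kw}
  b3 <- -k^2 * P %*% P %*% beta              # bias vector b_3
  sigma2 * Tt %*% Hw %*% t(Tt) + b3 %*% t(b3)
}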

4.1. MSEM Comparison of the WME and WMAURE

We first compare the MSEM values of the WMAURE and WME. From (28) and (31), the difference of the MSEM values between the WME and the WMAURE can be gained by
$\Delta_1 = \mathrm{MSEM}(\hat{\beta}_{\mathrm{WME}}(\omega)) - \mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2\left(H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}\right) - b_3b_3',$
where $b_3 = -k^2(S_\omega + kI)^{-2}\beta$.

Theorem 6. The WMAURE is superior to the WME under the MSEM criterion, namely, $\Delta_1 \ge 0$, if and only if $b_3'\left[\sigma^2(H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega})\right]^{-1}b_3 \le 1$.

Proof. Note that $\tilde{T}_{k\omega} = I - U$ with $U = k^2(S_\omega + kI)^{-2}$. We can easily compute that $H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega} = UH_\omega + H_\omega U - UH_\omega U$, where $H_\omega > 0$ and $0 < \lambda_i(U) < 1$ for $k > 0$, which means $H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega} > 0$. Observing that $\mathrm{Bias}(\hat{\beta}_{\mathrm{WME}}(\omega)) = 0$, we have $\Delta_1 = \sigma^2(H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}) - b_3b_3'$.
Applying Lemma 3, we can get that $\Delta_1 \ge 0$ if and only if $b_3'[\sigma^2(H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega})]^{-1}b_3 \le 1$.
This completes the proof.

4.2. MSEM Comparison of the WMRE and WMAURE

Similarly, we compare the MSEM values of the WMAURE and WMRE. From (29) and (31), the difference of the MSEM values between the WMRE and the WMAURE can be computed by
$\Delta_2 = \mathrm{MSEM}(\hat{\beta}_{\mathrm{WMRE}}(k,\omega)) - \mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2D_2 + b_1b_1' - b_3b_3',$
where $D_2 = T_{k\omega}H_\omega T_{k\omega} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}$.

Theorem 7. The WMAURE is superior to the WMRE under the MSEM criterion, namely, $\Delta_2 \ge 0$, if and only if $b_3'(\sigma^2D_2 + b_1b_1')^{-1}b_3 \le 1$.

Proof. Firstly, we prove $\sigma^2D_2 + b_1b_1' > 0$.
Note that $b_1 = -k(S_\omega + kI)^{-1}\beta$; we can compute $\sigma^2D_2 + b_1b_1' = \sigma^2(T_{k\omega}H_\omega T_{k\omega} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}) + k^2(S_\omega + kI)^{-1}\beta\beta'(S_\omega + kI)^{-1}$, where $T_{k\omega} = I - k(S_\omega + kI)^{-1}$ and $\tilde{T}_{k\omega} = I - k^2(S_\omega + kI)^{-2}$.
Thus, we can get that $\sigma^2D_2 + b_1b_1'$ is symmetric. Observing that $0 < \lambda_i(T_{k\omega}) \le \lambda_i(\tilde{T}_{k\omega}) < 1$ for $k > 0$, we have $\sigma^2D_2 + b_1b_1' > 0$.
Applying Lemma 4, we can get $\Delta_2 \ge 0$ if and only if $b_3'(\sigma^2D_2 + b_1b_1')^{-1}b_3 \le 1$.
This completes the proof.

4.3. MSEM Comparison of the AURE and WMAURE

Now, we compare the MSEM values of the WMAURE and AURE. From (30) and (31), the difference of the MSEM values between the AURE and the WMAURE can be obtained by
$\Delta_3 = \mathrm{MSEM}(\hat{\beta}_{\mathrm{AURE}}(k)) - \mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2D_3 + b_2b_2' - b_3b_3',$ (36)
where $D_3 = \tilde{T}_kS^{-1}\tilde{T}_k - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}$.

Theorem 8. When $\lambda_{\max}\left(\tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}(\tilde{T}_kS^{-1}\tilde{T}_k)^{-1}\right) < 1$, the WMAURE is superior to the AURE in the MSEM sense, namely, $\Delta_3 \ge 0$, if and only if $b_3'(\sigma^2D_3 + b_2b_2')^{-1}b_3 \le 1$.

Proof. Note that $\mathrm{Cov}(\hat{\beta}_{\mathrm{AURE}}(k)) = \sigma^2\tilde{T}_kS^{-1}\tilde{T}_k$ and $\mathrm{Cov}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2\tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}$. When $\lambda_{\max}(\tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}(\tilde{T}_kS^{-1}\tilde{T}_k)^{-1}) < 1$, we can get that $D_3 > 0$ by applying Lemma 5. Thus, from (36) and applying Lemma 4, we have $\Delta_3 \ge 0$ if and only if $b_3'(\sigma^2D_3 + b_2b_2')^{-1}b_3 \le 1$.
The proof is completed.

4.4. MSEM Comparison of the LS and WMAURE

Finally, we compare the MSEM values of the LS and WMAURE. From (27) and (31), the difference of the MSEM values between the LS and the WMAURE can be computed by
$\Delta_4 = \mathrm{MSEM}(\hat{\beta}) - \mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sigma^2\left(S^{-1} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}\right) - b_3b_3',$
where $b_3 = -k^2(S_\omega + kI)^{-2}\beta$ and $H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega} > 0$ according to Section 4.1.

Firstly, using (9), we can compute that
$S^{-1} - H_\omega = \omega S^{-1}R'V^{-1}\left[(2 - \omega)W + \omega RS^{-1}R'\right]V^{-1}RS^{-1} \ge 0,$
where $V = W + \omega RS^{-1}R'$.

Moreover, it can be computed that
$S^{-1} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega} = \left(S^{-1} - H_\omega\right) + \left(H_\omega - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega}\right).$

Therefore, $S^{-1} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega} > 0$. Applying Lemma 3, we can get that $\Delta_4 \ge 0$ if and only if $b_3'\left[\sigma^2(S^{-1} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega})\right]^{-1}b_3 \le 1$.

Based on the above analysis, we can state the following theorem.

Theorem 9. The WMAURE is superior to the LS according to the MSEM criterion, namely, $\Delta_4 \ge 0$, if and only if $b_3'\left[\sigma^2(S^{-1} - \tilde{T}_{k\omega}H_\omega\tilde{T}_{k\omega})\right]^{-1}b_3 \le 1$.
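For a given configuration, the Theorem 9 condition can be checked numerically. The following R sketch does so on an illustrative configuration; all matrices and constants below are assumptions for demonstration only.

## checking b3' [sigma^2 (S^{-1} - T~ H T~)]^{-1} b3 <= 1 numerically
set.seed(4)
p <- 4
S  <- crossprod(matrix(rnorm(200), 50, 4))
Rm <- rbind(c(1, 0, 0, 0), c(0, 1, -1, 0)); W <- diag(2)
beta <- c(1, 2, -1, 0.5); sigma2 <- 1; k <- 0.5; w <- 0.7
Sw <- S + w * t(Rm) %*% solve(W, Rm)
Hw <- solve(Sw) %*% (S + w^2 * t(Rm) %*% solve(W, Rm)) %*% solve(Sw)
P  <- solve(Sw + k * diag(p)); Tt <- diag(p) - k^2 * P %*% P
b3 <- -k^2 * P %*% P %*% beta
Dmat <- sigma2 * (solve(S) - Tt %*% Hw %*% t(Tt))
drop(t(b3) %*% solve(Dmat, b3))   # value <= 1 means WMAURE beats LS in MSEM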

5. Selection of Parameters $k$ and $\omega$

In this section, we give a method about how to choose the parameters $k$ and $\omega$. Firstly, the linear regression model can be transformed to a canonical form by an orthogonal transformation. Let $Q$ be an orthogonal matrix such that $Q'SQ = \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_p)$, where $\lambda_i$ is the $i$th eigenvalue of $S$, and let $Z = XQ$ and $\alpha = Q'\beta$. Then, we get a canonical form of model (1) as
$y = Z\alpha + \varepsilon.$

Note that $Z'Z = \Lambda$ and $\mathrm{MSE}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \mathrm{tr}(\mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)))$. It is supposed that $S$ and $R'W^{-1}R$ are commutative, so that $Q'R'W^{-1}RQ = \mathrm{diag}(c_1, \ldots, c_p)$ with $c_i \ge 0$; then we have
$f(k,\omega) = \mathrm{MSE}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)) = \sum_{i=1}^{p}\left[\sigma^2\left(1 - \frac{k^2}{(\delta_i + k)^2}\right)^2h_i + \frac{k^4\alpha_i^2}{(\delta_i + k)^4}\right],$
where $\delta_i = \lambda_i + \omega c_i$ and $h_i = (\lambda_i + \omega^2c_i)/\delta_i^2$.

Optimal values for $k$ and $\omega$ can be derived by minimizing the function $f(k,\omega)$.

For a fixed value of $\omega$, differentiating $f(k,\omega)$ with respect to $k$ leads to
$\frac{\partial f}{\partial k} = \sum_{i=1}^{p}\frac{4k\delta_i}{(\delta_i + k)^3}\left[\frac{k^2\alpha_i^2}{(\delta_i + k)^2} - \sigma^2h_i\left(1 - \frac{k^2}{(\delta_i + k)^2}\right)\right],$
and equating it to zero. Note that the $i$th bracketed term vanishes when $k^2/(\delta_i + k)^2 = \sigma^2h_i/(\sigma^2h_i + \alpha_i^2)$, and after the unknown parameters $\alpha$ and $\sigma^2$ are replaced by their unbiased estimators $\hat{\alpha} = \Lambda^{-1}Z'y$ and $\hat{\sigma}^2 = (y - Z\hat{\alpha})'(y - Z\hat{\alpha})/(n - p)$, we obtain the optimal estimator of $k$ for a fixed $\omega$ value as the root $\hat{k}(\omega)$ of the resulting equation.

The value of $\omega$ which minimizes the function $f(k,\omega)$ can be found by differentiating $f(k,\omega)$ with respect to $\omega$ when $k$ is fixed and equating it to zero. After the unknown parameters $\alpha$ and $\sigma^2$ are replaced by their unbiased estimators, we get the optimal estimator of $\omega$ for a fixed $k$ value as the root $\hat{\omega}(k)$ of the resulting equation, where $\partial h_i/\partial\omega = -2c_i\lambda_i(1 - \omega)/\delta_i^3$ and $\partial u_i/\partial\omega = -2k^2c_i/(\delta_i + k)^3$ with $u_i = k^2/(\delta_i + k)^2$.
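In practice, one can also bypass the first-order conditions and search the estimated MSE surface directly. The sketch below minimizes a plug-in estimate of $\mathrm{tr}(\mathrm{MSEM}(\hat{\beta}_{\mathrm{WMAURE}}(k,\omega)))$ over a grid; it is an alternative numerical route, not the paper's closed-form procedure, and all data and grid values are illustrative.

## grid search for (k, w) minimizing the plug-in MSE of the WMAURE
set.seed(5)
n <- 50; p <- 4
X <- matrix(rnorm(n * p), n, p); beta <- c(1, 2, -1, 0.5)
y <- X %*% beta + rnorm(n)
Rm <- rbind(c(1, 0, 0, 0), c(0, 1, -1, 0)); W <- diag(2)
r  <- Rm %*% beta + rnorm(2)
S  <- crossprod(X)
bls    <- solve(S, crossprod(X, y))
sigma2 <- drop(crossprod(y - X %*% bls)) / (n - p)   # unbiased sigma^2 estimate

est_mse <- function(k, w) {
  Sw <- S + w * t(Rm) %*% solve(W, Rm)
  Hw <- solve(Sw) %*% (S + w^2 * t(Rm) %*% solve(W, Rm)) %*% solve(Sw)
  P  <- solve(Sw + k * diag(p)); Tt <- diag(p) - k^2 * P %*% P
  b3 <- -k^2 * P %*% P %*% bls
  sum(diag(sigma2 * Tt %*% Hw %*% t(Tt))) + drop(crossprod(b3))
}
grid <- expand.grid(k = seq(0.01, 2, by = 0.01), w = seq(0, 1, by = 0.05))
grid[which.min(mapply(est_mse, grid$k, grid$w)), ]   # minimizing (k, w) pair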

6. Numerical Example and Monte Carlo Simulation

In order to verify our theoretical results, we first conduct an experiment based on a real data set originally due to Woods et al. [21]. In this experiment, we replace the unknown parameters $\beta$ and $\sigma^2$ by their unbiased estimators, following the way in [17]. The results here and below are computed with R 2.14.1.

We can easily obtain that the condition number of $X'X$ is very large, which indicates serious multicollinearity among the regressors. The ordinary least squares estimate of $\beta$ and the corresponding $\hat{\sigma}^2 = (y - X\hat{\beta})'(y - X\hat{\beta})/(n - p)$ are then computed from the data.
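For reference, the multicollinearity diagnostic used here is the ratio of the extreme eigenvalues of $X'X$. Since the original data values are not reproduced above, the short sketch below demonstrates the computation on a simulated design instead.

## condition number of X'X as lambda_max / lambda_min (simulated X)
X <- matrix(rnorm(200), 50, 4)
X[, 4] <- X[, 3] + 0.001 * rnorm(50)          # near-exact collinearity
ev <- eigen(crossprod(X), symmetric = TRUE)$values
max(ev) / min(ev)                             # large value = severe multicollinearity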

Consider the stochastic linear restriction of the form (6), $r = R\beta + e$, used in [17]; the particular values of $r$, $R$, and $W$ are those given there.

For the WMRE, AURE, and WMAURE, quadratic bias values are given in Table 1, and estimated MSE values are reported in Table 2, obtained by replacing all unknown parameters in the corresponding theoretical MSE expressions by their least squares estimates.

It can be seen from Table 1 that the WMAURE has smaller quadratic bias values than the WMRE and AURE in every case, which agrees with our theoretical findings in Section 3. From Table 2, we can see that the MSE values of our proposed estimator are the smallest among the LS, WME, WMRE, AURE, and WMAURE for all considered values of $k$ and $\omega$, which agrees with our theoretical findings in Theorems 6-9.

To further illustrate the behavior of our proposed estimator, we now perform a Monte Carlo simulation study under different levels of multicollinearity. Following the way in [22, 23], we generate the explanatory variables by
$x_{ij} = (1 - \gamma^2)^{1/2}z_{ij} + \gamma z_{i,p+1}, \quad i = 1, \ldots, n, \; j = 1, \ldots, p,$
where the $z_{ij}$ are independent standard normal pseudorandom numbers and $\gamma$ is specified so that the theoretical correlation between any two explanatory variables is given by $\gamma^2$. A dependent variable is generated by
$y_i = \beta_1x_{i1} + \beta_2x_{i2} + \cdots + \beta_px_{ip} + \varepsilon_i, \quad i = 1, \ldots, n,$
where $\varepsilon_i$ is a normal pseudorandom number with mean zero and variance $\sigma^2$. In this study, we fix the sample size $n$, the number of regressors $p$, the variance $\sigma^2$, and the true coefficient vector $\beta$, together with a stochastic restriction of the form $r = R\beta + e$ with given $R$ and $W$. Furthermore, we discuss three cases of $\gamma$, corresponding to three increasing levels of multicollinearity.
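The generation scheme translates directly into R; the particular $n$, $p$, $\gamma$, $\beta$, and $\sigma$ below are illustrative choices, not the paper's settings.

## McDonald-Galarneau-type regressors with controlled pairwise correlation gamma^2
set.seed(6)
n <- 50; p <- 4; gamma <- 0.99; sigma <- 1
Z <- matrix(rnorm(n * (p + 1)), n, p + 1)
X <- sqrt(1 - gamma^2) * Z[, 1:p] + gamma * Z[, p + 1]   # x_ij formula above
beta <- rep(1, p)
y <- X %*% beta + sigma * rnorm(n)
cor(X)[1, 2]    # close to gamma^2 by construction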

For the three different levels of multicollinearity, the MSE values of the LS, WME, AURE, WMRE, and WMAURE are reported in Tables 3, 4, and 5, respectively. From Tables 3-5, we can derive the following results.
(1) With the increase of multicollinearity, the MSE values of the LS, WME, WMRE, AURE, and WMAURE increase, and for all cases the WMAURE has smaller estimated MSE values than the LS, AURE, and WME.
(2) The value of $\omega$ reflects the weight placed on the prior information relative to the sample information; we can see from the three tables that the estimated MSE values of the WME, WMRE, and WMAURE become smaller and smaller as $\omega$ increases. It can be concluded that we obtain a more accurate estimator of the parameter when more reliable prior information is used.

7. Conclusions

In this paper, we propose the WMAURE based on the WME [16] and the AURE [9] and discuss some properties of the new estimator over the relative estimators. In particular, we prove that the WMAURE has smaller quadratic bias than the AURE and the WMRE, and we derive that the proposed estimator is superior to the LS, WME, WMRE, and AURE in the mean square error matrix sense under certain conditions. The optimal values of the parameters $k$ and $\omega$ are obtained. Furthermore, we perform a real data example and a Monte Carlo study to support our theoretical results.

Acknowledgments

The authors would like to thank the editor and the two anonymous referees for their very helpful suggestions and comments, which improved the presentation of the paper. Also, the authors acknowledge the financial support of the National Natural Science Foundation of China, project no. 11171361, and the Ph.D. Programs Foundation of the Ministry of Education of China, project no. 20110191110033.