On the Weighted Mixed Almost Unbiased Ridge Estimator in Stochastic Restricted Linear Regression

Chaolin Liu, Hu Yang, Jibo Wu

Journal of Applied Mathematics, Volume 2013, Article ID 902715, 10 pages. https://doi.org/10.1155/2013/902715

Research Article | Open Access
Academic Editor: Tai-Ping Chang
Received 05 Jan 2013; Revised 22 Mar 2013; Accepted 22 Mar 2013; Published 30 Apr 2013

Abstract

We introduce the weighted mixed almost unbiased ridge estimator (WMAURE) based on the weighted mixed estimator (WME) (Schaffrin and Toutenburg, 1990) and the almost unbiased ridge estimator (AURE) (Akdeniz and Erol, 2003) in the linear regression model. We discuss superiorities of the new estimator under the quadratic bias (QB) and the mean square error matrix (MSEM) criteria. Additionally, we give a method for obtaining the optimal values of the parameters $k$ and $w$. Finally, the theoretical results are illustrated by a real data example and a Monte Carlo study.

1. Introduction

Consider the linear regression model
$$y = X\beta + \varepsilon, \quad (1)$$
where $y$ is an $n \times 1$ response vector, $X$ is a known $n \times p$ matrix of full column rank, $\beta$ is a $p \times 1$ vector of unknown parameters, and $\varepsilon$ is an $n \times 1$ vector of errors with expectation $E(\varepsilon) = 0$ and covariance matrix $\mathrm{Cov}(\varepsilon) = \sigma^2 I_n$, where $I_n$ is an identity matrix of order $n$.

It is well known that the ordinary least squares estimator (LS) of $\beta$ is given by
$$\hat{\beta}_{LS} = (X'X)^{-1}X'y,$$
which has been treated as the best estimator for a long time. However, many results have proved that the LS is no longer a good estimator when multicollinearity is present in model (1). To tackle this problem, some suitable biased estimators have been developed, such as the principal component regression estimator (PCR) [1], the ordinary ridge estimator (RE) [2], the r-k class estimator [3], the Liu estimator (LE) [4], and the r-d class estimator [5]. Kadiyala [6] introduced a class of almost unbiased shrinkage estimators which can be not only almost unbiased but also more efficient than the LS. Singh et al. [7] introduced the almost unbiased generalized ridge estimator by the jackknife procedure, and Akdeniz and Kaçıranlar [8] studied the almost unbiased generalized Liu estimator. By studying bias-corrected estimators of the RE and the LE, Akdeniz and Erol [9] discussed the almost unbiased ridge estimator (AURE) and the almost unbiased Liu estimator (AULE).

An alternative technique to tackle multicollinearity is to use, in addition to the sample information, prior information about the unknown parameters, such as exact or stochastic restrictions. When additional stochastic linear restrictions on the unknown parameters are assumed to hold, Durbin [10], Theil and Goldberger [11], and Theil [12] proposed the ordinary mixed estimator (OME). Hubert and Wijekoon [13] proposed the stochastic restricted Liu estimator, and Yang and Xu [14] obtained a new stochastic restricted Liu estimator. By grafting the RE into the mixed estimation procedure, Li and Yang [15] introduced the stochastic restricted ridge estimator. When the prior information and the sample information are not equally important, Schaffrin and Toutenburg [16] studied weighted mixed regression and developed the weighted mixed estimator (WME). Li and Yang [17] grafted the RE into the weighted mixed estimation procedure and proposed the weighted mixed ridge estimator (WMRE).

In this paper, by combining the WME and the AURE, we propose a weighted mixed almost unbiased ridge estimator (WMAURE) for the unknown parameters in the linear regression model when an additional stochastic linear restriction is supposed to hold. Furthermore, we discuss the performance of the new estimator over the LS, WME, AURE, and WMRE with respect to the quadratic bias (QB) and the mean square error matrix (MSEM) criteria.

The rest of the paper is organized as follows. In Section 2, we describe the statistical model and propose the weighted mixed almost unbiased ridge estimator. We compare the new estimator with the weighted mixed ridge estimator and the almost unbiased ridge estimator under the quadratic bias criterion in Section 3. In Section 4, superiorities of the proposed estimator over the other estimators are considered under the mean square error matrix criterion. In Section 5, the selection of the parameters $k$ and $w$ is discussed. Finally, to justify the superiority of the new estimator, we perform a real data example and a Monte Carlo simulation study in Section 6. We give some conclusions in Section 7.

2. The Proposed Estimator

The ordinary ridge estimator proposed by Hoerl and Kennard [2] is defined as
$$\hat{\beta}(k) = \left(S + kI_p\right)^{-1}X'y,$$
where $S = X'X$, $k > 0$. Let $T_k = (S + kI_p)^{-1}S$; we may rewrite $\hat{\beta}(k)$ as
$$\hat{\beta}(k) = T_k\hat{\beta}_{LS}.$$

The almost unbiased ridge estimator obtained by Akdeniz and Erol [9] is denoted as
$$\hat{\beta}_{AURE}(k) = \left(I - k^2\left(S + kI_p\right)^{-2}\right)\hat{\beta}_{LS}.$$
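For concreteness, the three unrestricted estimators can be computed in a few lines of R (the software used for the numerical work in Section 6). This is a minimal sketch under the assumption that a design matrix X, a response y, and a ridge parameter k are given; the variable names are ours:

```r
# Assumed inputs: design matrix X (n x p, full column rank),
# response vector y, and a ridge parameter k > 0.
S   <- crossprod(X)                        # S = X'X
p   <- ncol(X)
Ip  <- diag(p)
Sk1 <- solve(S + k * Ip)                   # (S + kI)^{-1}
Sk2 <- Sk1 %*% Sk1                         # (S + kI)^{-2}
beta_ls   <- solve(S, crossprod(X, y))     # LS estimator
beta_re   <- Sk1 %*% crossprod(X, y)       # ridge estimator beta(k)
beta_aure <- (Ip - k^2 * Sk2) %*% beta_ls  # almost unbiased ridge estimator
```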

In addition to model (1), let us give some prior information about $\beta$ in the form of a set of $m$ independent stochastic linear restrictions:
$$r = R\beta + e, \quad (6)$$
where $R$ is a known $m \times p$ matrix of rank $m$, $e$ is an $m \times 1$ vector of disturbances with expectation $0$ and covariance matrix $\sigma^2W$, $W$ is supposed to be known and positive definite, and the $m \times 1$ vector $r$ can be interpreted as a random variable with expectation $E(r) = R\beta$. Then, we can derive that (6) does not hold exactly but in the mean. We assume $r$ to be a realized value of the random vector, so that all expectations are conditional on $r$ [18]. We will not mention this separately in the following discussions. Furthermore, it is also supposed that $e$ is stochastically independent of $\varepsilon$.

For the restricted model specified by (1) and (6), the OME introduced by Durbin [10], Theil and Goldberger [11], and Theil [12] is defined as
$$\hat{\beta}_{OME} = \hat{\beta}_{LS} + S^{-1}R'\left(W + RS^{-1}R'\right)^{-1}\left(r - R\hat{\beta}_{LS}\right).$$

When the prior information and the sample information are not equally important, Schaffrin and Toutenburg [16] considered the WME, which is denoted as
$$\hat{\beta}_{WME}(w) = \hat{\beta}_{LS} + wS^{-1}R'\left(W + wRS^{-1}R'\right)^{-1}\left(r - R\hat{\beta}_{LS}\right), \quad (8)$$
where $w$ ($0 \le w \le 1$) is a nonstochastic and nonnegative scalar weight.

Note that
$$\left(S + wR'W^{-1}R\right)^{-1} = S^{-1} - wS^{-1}R'\left(W + wRS^{-1}R'\right)^{-1}RS^{-1}. \quad (9)$$

Then, the WME (8) can be rewritten as
$$\hat{\beta}_{WME}(w) = \left(S + wR'W^{-1}R\right)^{-1}\left(X'y + wR'W^{-1}r\right). \quad (10)$$

Additionally, by combining the WME and the RE, Li and Yang [17] obtained the WMRE, which is defined as
$$\hat{\beta}_{WMRE}(k,w) = \hat{\beta}(k) + wS^{-1}R'\left(W + wRS^{-1}R'\right)^{-1}\left(r - R\hat{\beta}(k)\right).$$

The WMRE also can be rewritten as
$$\hat{\beta}_{WMRE}(k,w) = \left(S + wR'W^{-1}R\right)^{-1}\left(S\hat{\beta}(k) + wR'W^{-1}r\right).$$

Now, based on the WME [16] and the AURE [9], we can define the following weighted mixed almost unbiased ridge estimator, following the approach in [17]:
$$\hat{\beta}_{WMAURE}(k,w) = \hat{\beta}_{AURE}(k) + wS^{-1}R'\left(W + wRS^{-1}R'\right)^{-1}\left(r - R\hat{\beta}_{AURE}(k)\right). \quad (14)$$

Using (9) and (10), (14) can be rewritten as
$$\hat{\beta}_{WMAURE}(k,w) = \left(S + wR'W^{-1}R\right)^{-1}\left(S\hat{\beta}_{AURE}(k) + wR'W^{-1}r\right).$$
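The rewritten forms above translate directly into code. A sketch continuing the previous one, assuming a restriction matrix Rmat (named to avoid clashing with the R language itself), a vector r, a known positive definite W, and a weight w are given:

```r
# Assumed inputs: restriction matrix Rmat (m x p) of full row rank,
# vector r (m x 1), positive definite W (m x m), weight w in [0, 1].
Winv <- solve(W)
Aw   <- solve(S + w * t(Rmat) %*% Winv %*% Rmat)  # A_w = (S + wR'W^{-1}R)^{-1}
beta_wme    <- Aw %*% (crossprod(X, y) + w * t(Rmat) %*% Winv %*% r)  # WME, eq. (10)
beta_wmre   <- Aw %*% (S %*% beta_re   + w * t(Rmat) %*% Winv %*% r)  # WMRE
beta_wmaure <- Aw %*% (S %*% beta_aure + w * t(Rmat) %*% Winv %*% r)  # WMAURE
```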

From the definition of $\hat{\beta}_{WMAURE}(k,w)$, it can be seen that the WMAURE is a general estimator which includes the WME, LS, and AURE as special cases:
$$\hat{\beta}_{WMAURE}(0,w) = \hat{\beta}_{WME}(w), \qquad \hat{\beta}_{WMAURE}(0,0) = \hat{\beta}_{LS},$$
and if $w = 0$,
$$\hat{\beta}_{WMAURE}(k,0) = \hat{\beta}_{AURE}(k).$$

It is easy to compute the expectations and covariance matrices of the LS, WME, WMRE, AURE, and WMAURE as
$$E(\hat{\beta}_{LS}) = \beta, \qquad \mathrm{Cov}(\hat{\beta}_{LS}) = \sigma^2S^{-1},$$
$$E(\hat{\beta}_{WME}(w)) = \beta, \qquad \mathrm{Cov}(\hat{\beta}_{WME}(w)) = \sigma^2A_w\left(S + w^2R'W^{-1}R\right)A_w,$$
$$E(\hat{\beta}_{WMRE}(k,w)) = A_w\left(T_kS + wR'W^{-1}R\right)\beta, \qquad \mathrm{Cov}(\hat{\beta}_{WMRE}(k,w)) = \sigma^2A_w\left(T_kST_k + w^2R'W^{-1}R\right)A_w,$$
$$E(\hat{\beta}_{AURE}(k)) = U_k\beta, \qquad \mathrm{Cov}(\hat{\beta}_{AURE}(k)) = \sigma^2U_kS^{-1}U_k,$$
$$E(\hat{\beta}_{WMAURE}(k,w)) = A_w\left(U_kS + wR'W^{-1}R\right)\beta, \qquad \mathrm{Cov}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w, \quad (18)$$
where $A_w = \left(S + wR'W^{-1}R\right)^{-1}$ and $U_k = I - k^2(S + kI_p)^{-2}$.

In the remaining sections, we study the performance of the new estimator relative to the other estimators under the quadratic bias and the mean square error matrix criteria.

3. Quadratic Bias Comparison of Estimators

In this section, quadratic bias comparisons are performed among the AURE, WMRE, and WMAURE. Let $\tilde{\beta}$ be an estimator of $\beta$; then the quadratic bias of $\tilde{\beta}$ is defined as $QB(\tilde{\beta}) = \mathrm{Bias}(\tilde{\beta})'\mathrm{Bias}(\tilde{\beta})$, where $\mathrm{Bias}(\tilde{\beta}) = E(\tilde{\beta}) - \beta$. Based on the definition of quadratic bias and on (18), we can easily get the quadratic biases of the AURE, WMRE, and WMAURE:
$$QB(\hat{\beta}_{AURE}(k)) = k^4\beta'(S + kI)^{-4}\beta,$$
$$QB(\hat{\beta}_{WMRE}(k,w)) = k^2\beta'(S + kI)^{-1}SA_w^2S(S + kI)^{-1}\beta,$$
$$QB(\hat{\beta}_{WMAURE}(k,w)) = k^4\beta'(S + kI)^{-2}SA_w^2S(S + kI)^{-2}\beta.$$
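For numerical work, the three quadratic biases can be evaluated from the bias vectors implied by (18); a sketch reusing the objects defined above and assuming the true beta is known (as it is in a simulation):

```r
# Bias vectors of the AURE, WMRE, and WMAURE (true beta assumed known).
b_aure   <- -k^2 * Sk2 %*% beta
b_wmre   <- -k   * Aw %*% S %*% Sk1 %*% beta
b_wmaure <- -k^2 * Aw %*% S %*% Sk2 %*% beta
# Quadratic bias QB = Bias' Bias:
qb <- sapply(list(AURE = b_aure, WMRE = b_wmre, WMAURE = b_wmaure),
             function(b) sum(b^2))
```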

3.1. Quadratic Bias Comparison between the AURE and WMAURE

Here, we focus on the quadratic bias comparison between the AURE and the WMAURE. The difference of the quadratic biases can be derived as
$$QB(\hat{\beta}_{AURE}) - QB(\hat{\beta}_{WMAURE}) = k^4\beta'(S + kI)^{-2}\left(I - SA_w^2S\right)(S + kI)^{-2}\beta.$$

Firstly, we just consider $I - SA_w^2S$. Note that $wR'W^{-1}R \ge 0$ and
$$S + wR'W^{-1}R \ge S > 0. \quad (21)$$

It can be seen that $A_w = (S + wR'W^{-1}R)^{-1} \le S^{-1}$, namely, $S^{1/2}A_wS^{1/2} \le I$. Then we can get
$$SA_w^2S \le I. \quad (22)$$

Thus, we have $I - SA_w^2S \ge 0$ by (21) and (22). Therefore, we can derive that $QB(\hat{\beta}_{AURE}) - QB(\hat{\beta}_{WMAURE}) \ge 0$, and the WMAURE outperforms the AURE according to the quadratic bias criterion.

Based on the above analysis, we can derive the following theorem.

Theorem 1. According to the quadratic bias criterion, the WMAURE performs better than the AURE.

3.2. Quadratic Bias Comparison between the WMRE and WMAURE

Similarly, the quadratic bias comparison between the WMRE and the WMAURE will be discussed. The difference of the quadratic biases of both estimators can be obtained as
$$QB(\hat{\beta}_{WMRE}) - QB(\hat{\beta}_{WMAURE}) = k^2\beta'(S + kI)^{-1}\left[SA_w^2S - k^2(S + kI)^{-1}SA_w^2S(S + kI)^{-1}\right](S + kI)^{-1}\beta.$$

We consider $N = SA_w^2S - k^2(S + kI)^{-1}SA_w^2S(S + kI)^{-1}$. Note that $SA_w^2S \ge 0$ and $k^2(S + kI)^{-2} \ge 0$.

For $S$, there exists an orthogonal matrix $Q$ such that $Q'SQ = \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_p)$, where $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p > 0$ denote the ordered eigenvalues of $S$. Therefore, we can easily compute that $Q'k^2(S + kI)^{-2}Q = \mathrm{diag}(d_1, \dots, d_p)$, where $d_i = k^2/(\lambda_i + k)^2$, $i = 1, \dots, p$. For $k > 0$, $\lambda_i > 0$, we obtain $(\lambda_i + k)^2 - k^2 = \lambda_i(\lambda_i + 2k) > 0$, which means that $0 < d_i < 1$. Thus, we have $k^2(S + kI)^{-2} < I$ and $I - k^2(S + kI)^{-2} > 0$. Note that $k^2(S + kI)^{-1}SA_w^2S(S + kI)^{-1}$ and $k^2(S + kI)^{-2}SA_w^2S$ have the same nonzero eigenvalues, respectively, and these do not exceed $\max_i d_i \cdot \lambda_1(SA_w^2S) < \lambda_1(SA_w^2S)$, which means that $N \ge 0$.

Therefore, we can derive that $QB(\hat{\beta}_{WMRE}) - QB(\hat{\beta}_{WMAURE}) \ge 0$, and the WMAURE performs better than the WMRE under the quadratic bias criterion.

We can get the following theorem.

Theorem 2. According to the quadratic bias criterion, the WMAURE outperforms the WMRE.

4. Mean Square Error Matrix Comparisons of Estimators

In this section, we will compare the proposed estimator with the other estimators under the mean square error matrix (MSEM) criterion.

For the sake of convenience, we list some lemmas needed in the following discussions.

Lemma 3. Let $M$ be a positive definite matrix, namely, $M > 0$; let $\alpha$ be a vector; then $M - \alpha\alpha' \ge 0$ if and only if $\alpha'M^{-1}\alpha \le 1$.

Proof. See [19].

Lemma 4. Let $\hat{\beta}_j = A_jy$, $j = 1, 2$, be two competing homogeneous linear estimators of $\beta$. Suppose that $D = \mathrm{Cov}(\hat{\beta}_1) - \mathrm{Cov}(\hat{\beta}_2) > 0$; then $\mathrm{MSEM}(\hat{\beta}_1) - \mathrm{MSEM}(\hat{\beta}_2) \ge 0$ if and only if $b_2'\left(D + b_1b_1'\right)^{-1}b_2 \le 1$, where $\mathrm{MSEM}(\hat{\beta}_j)$, $b_j$ denote the mean square error matrix and the bias vector of $\hat{\beta}_j$, respectively.

Proof. See [20].

Lemma 5. Let two matrices $M > 0$, $N \ge 0$; then $M - N > 0$ if and only if $\lambda_1\left(NM^{-1}\right) < 1$, where $\lambda_1(NM^{-1})$ denotes the largest eigenvalue of $NM^{-1}$.

Proof. See [18].
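Lemma 5 also yields a convenient numerical test; a small R helper of our own, for illustration:

```r
# Check M - N > 0 via the largest eigenvalue of N M^{-1} (Lemma 5);
# M > 0 and N >= 0 are assumed. The eigenvalues are real and nonnegative
# in exact arithmetic; Re() guards against tiny floating-point imaginary parts.
pd_difference <- function(M, N) {
  max(Re(eigen(N %*% solve(M), only.values = TRUE)$values)) < 1
}
```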

Firstly, the MSEM of an estimator $\tilde{\beta}$ is defined as
$$\mathrm{MSEM}(\tilde{\beta}) = E\left[(\tilde{\beta} - \beta)(\tilde{\beta} - \beta)'\right] = \mathrm{Cov}(\tilde{\beta}) + \mathrm{Bias}(\tilde{\beta})\,\mathrm{Bias}(\tilde{\beta})'.$$

For two given estimators $\hat{\beta}_1$ and $\hat{\beta}_2$, the estimator $\hat{\beta}_2$ is said to be superior to $\hat{\beta}_1$ under the MSEM criterion if and only if
$$\mathrm{MSEM}(\hat{\beta}_1) - \mathrm{MSEM}(\hat{\beta}_2) \ge 0.$$

The scalar mean square error (MSE) is defined as $\mathrm{MSE}(\tilde{\beta}) = \mathrm{tr}(\mathrm{MSEM}(\tilde{\beta}))$. It is well known that the MSEM criterion is stronger than the MSE criterion, since MSEM superiority implies MSE superiority; we therefore compare the WMAURE with the other estimators under the MSEM criterion only.

Now, from (18), we can easily obtain the MSEM of the LS, WME, WMRE, AURE, and WMAURE as follows:
$$\mathrm{MSEM}(\hat{\beta}_{LS}) = \sigma^2S^{-1}, \quad (27)$$
$$\mathrm{MSEM}(\hat{\beta}_{WME}(w)) = \sigma^2A_w\left(S + w^2R'W^{-1}R\right)A_w, \quad (28)$$
$$\mathrm{MSEM}(\hat{\beta}_{WMRE}(k,w)) = \sigma^2A_w\left(T_kST_k + w^2R'W^{-1}R\right)A_w + b_1b_1', \quad (29)$$
$$\mathrm{MSEM}(\hat{\beta}_{AURE}(k)) = \sigma^2U_kS^{-1}U_k + b_2b_2', \quad (30)$$
$$\mathrm{MSEM}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w + b_3b_3', \quad (31)$$
where $b_1 = -kA_wS(S + kI)^{-1}\beta$, $b_2 = -k^2(S + kI)^{-2}\beta$, and $b_3 = -k^2A_wS(S + kI)^{-2}\beta$.
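These five matrices translate into R as follows; a sketch that reuses the objects defined earlier and assumes the true beta and error variance sigma2 are known, as in a simulation setting:

```r
# MSEM = Cov + bias bias' for each estimator, eq. (27)-(31); G = R'W^{-1}R.
G  <- t(Rmat) %*% Winv %*% Rmat
Tk <- Sk1 %*% S                            # T_k
Uk <- Ip - k^2 * Sk2                       # U_k
msem <- list(
  LS     = sigma2 * solve(S),
  WME    = sigma2 * Aw %*% (S + w^2 * G) %*% Aw,
  WMRE   = sigma2 * Aw %*% (Tk %*% S %*% Tk + w^2 * G) %*% Aw + tcrossprod(b_wmre),
  AURE   = sigma2 * Uk %*% solve(S) %*% Uk + tcrossprod(b_aure),
  WMAURE = sigma2 * Aw %*% (Uk %*% S %*% Uk + w^2 * G) %*% Aw + tcrossprod(b_wmaure)
)
mse <- sapply(msem, function(M) sum(diag(M)))   # scalar MSE = tr(MSEM)
```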

4.1. MSEM Comparison of the WME and WMAURE

We first compare the MSEM values of the WMAURE and the WME. From (28) and (31), the difference of the MSEM values between the WME and the WMAURE can be gained as
$$\Delta_1 = \mathrm{MSEM}(\hat{\beta}_{WME}(w)) - \mathrm{MSEM}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2A_w\left(S - U_kSU_k\right)A_w - b_3b_3',$$
where $b_3 = -k^2A_wS(S + kI)^{-2}\beta$.

Theorem 6. The WMAURE is superior to the WME under the MSEM criterion, namely, $\Delta_1 \ge 0$, if and only if $b_3'\left[\sigma^2A_w\left(S - U_kSU_k\right)A_w\right]^{-1}b_3 \le 1$.

Proof. Note that $S - U_kSU_k = (I - U_k^2)S$, since $S$ and $U_k$ commute. We can easily compute that $Q'(I - U_k^2)Q = \mathrm{diag}(1 - u_1^2, \dots, 1 - u_p^2)$, where $u_i = 1 - k^2/(\lambda_i + k)^2$ and $0 < u_i < 1$, which means $I - U_k^2 > 0$ and hence $S - U_kSU_k > 0$. Observing that $A_w > 0$, we have $\sigma^2A_w(S - U_kSU_k)A_w > 0$.
Applying Lemma 3, we can get that $\Delta_1 \ge 0$ if and only if $b_3'\left[\sigma^2A_w\left(S - U_kSU_k\right)A_w\right]^{-1}b_3 \le 1$.
This completes the proof.

4.2. MSEM Comparison of the WMRE and WMAURE

Similarly, we compare the MSEM values of the WMAURE and the WMRE. From (29) and (31), the difference of the MSEM values between the WMRE and the WMAURE can be computed as
$$\Delta_2 = \mathrm{MSEM}(\hat{\beta}_{WMRE}(k,w)) - \mathrm{MSEM}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2A_w\left(T_kST_k - U_kSU_k\right)A_w + b_1b_1' - b_3b_3',$$
where $b_1 = -kA_wS(S + kI)^{-1}\beta$ and $b_3 = -k^2A_wS(S + kI)^{-2}\beta$.

Theorem 7. The WMAURE is superior to the WMRE under the MSEM criterion, namely, $\Delta_2 \ge 0$, if and only if $b_3'\left[\sigma^2A_w\left(T_kST_k - U_kSU_k\right)A_w + b_1b_1'\right]^{-1}b_3 \le 1$.

Proof. Firstly, we consider $\sigma^2A_w\left(T_kST_k - U_kSU_k\right)A_w + b_1b_1'$.
Note that $T_k$ and $U_k$ are both functions of $S$; we can compute $Q'(T_kST_k - U_kSU_k)Q = \mathrm{diag}\left(\lambda_1(t_1^2 - u_1^2), \dots, \lambda_p(t_p^2 - u_p^2)\right)$, where $t_i = \lambda_i/(\lambda_i + k)$ and $u_i = 1 - k^2/(\lambda_i + k)^2$.
Thus, $\sigma^2A_w(T_kST_k - U_kSU_k)A_w + b_1b_1'$ is symmetric, and whenever it is positive definite, applying Lemma 4 we can get $\Delta_2 \ge 0$ if and only if $b_3'\left[\sigma^2A_w\left(T_kST_k - U_kSU_k\right)A_w + b_1b_1'\right]^{-1}b_3 \le 1$.
This completes the proof.

4.3. MSEM Comparison of the AURE and WMAURE

Now, we compare the MSEM values of the WMAURE and the AURE. From (30) and (31), the difference of the MSEM values between the AURE and the WMAURE can be obtained as
$$\Delta_3 = \mathrm{MSEM}(\hat{\beta}_{AURE}(k)) - \mathrm{MSEM}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2D + b_2b_2' - b_3b_3', \quad (36)$$
where $D = U_kS^{-1}U_k - A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w$, $b_2 = -k^2(S + kI)^{-2}\beta$, and $b_3 = -k^2A_wS(S + kI)^{-2}\beta$.

Theorem 8. When $\lambda_1\left(A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w\left(U_kS^{-1}U_k\right)^{-1}\right) < 1$, the WMAURE is superior to the AURE in the MSEM sense, namely, $\Delta_3 \ge 0$, if and only if $b_3'\left(\sigma^2D + b_2b_2'\right)^{-1}b_3 \le 1$.

Proof. Note that $U_kS^{-1}U_k > 0$ and $A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w \ge 0$. When $\lambda_1\left(A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w\left(U_kS^{-1}U_k\right)^{-1}\right) < 1$, we can get that $D > 0$ by applying Lemma 5. Thus, from (36) and applying Lemma 4, we have $\Delta_3 \ge 0$ if and only if $b_3'\left(\sigma^2D + b_2b_2'\right)^{-1}b_3 \le 1$.
The proof is completed.

4.4. MSEM Comparison of the LS and WMAURE

Finally, we compare the MSEM values of the LS and the WMAURE. From (27) and (31), the difference of the MSEM values between the LS and the WMAURE can be computed as
$$\Delta_4 = \mathrm{MSEM}(\hat{\beta}_{LS}) - \mathrm{MSEM}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2\left[S^{-1} - A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w\right] - b_3b_3',$$
where $b_3 = -k^2A_wS(S + kI)^{-2}\beta$ and $S - U_kSU_k > 0$ according to Section 4.1.

Firstly, using (9), we can compute that
$$\mathrm{Cov}(\hat{\beta}_{LS}) - \mathrm{Cov}(\hat{\beta}_{WME}(w)) = \sigma^2\left[S^{-1} - A_w\left(S + w^2R'W^{-1}R\right)A_w\right] = \sigma^2A_w\left[w(2 - w)R'W^{-1}R + w^2R'W^{-1}RS^{-1}R'W^{-1}R\right]A_w \ge 0$$
for $0 \le w \le 1$.

Moreover, it can be computed that
$$\mathrm{Cov}(\hat{\beta}_{WME}(w)) - \mathrm{Cov}(\hat{\beta}_{WMAURE}(k,w)) = \sigma^2A_w\left(S - U_kSU_k\right)A_w > 0.$$

Therefore, $\sigma^2\left[S^{-1} - A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w\right] > 0$. Applying Lemma 3, we can get that $\Delta_4 \ge 0$ if and only if $b_3'\left[\sigma^2\left(S^{-1} - A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w\right)\right]^{-1}b_3 \le 1$.

Based on the above analysis, we can state the following theorem.

Theorem 9. The WMAURE is superior to the LS according to the MSEM criterion, namely, $\Delta_4 \ge 0$, if and only if $b_3'\left[\sigma^2\left(S^{-1} - A_w\left(U_kSU_k + w^2R'W^{-1}R\right)A_w\right)\right]^{-1}b_3 \le 1$.

5. Selection of Parameters k and w

In this section, we give a method for choosing the parameters $k$ and $w$. Firstly, the linear regression model can be transformed into a canonical form by an orthogonal transformation. Let $Q$ be an orthogonal matrix such that $Q'SQ = \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_p)$, where $\lambda_i$ is the $i$th eigenvalue of $S$ and $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p > 0$. Then, we get a canonical form of model (1) as
$$y = Z\alpha + \varepsilon,$$
where $Z = XQ$ and $\alpha = Q'\beta$.

Note that $Z'Z = \Lambda$ and $\hat{\alpha}_{LS} = Q'\hat{\beta}_{LS}$. It is supposed that $\Lambda$ and $Q'R'W^{-1}RQ$ are commutative; then we have
$$Q'R'W^{-1}RQ = \mathrm{diag}(g_1, \dots, g_p),$$
where $g_i \ge 0$, $i = 1, \dots, p$.

Optimal values for $k$ and $w$ can be derived by minimizing
$$\mathrm{MSE}\left(\hat{\beta}_{WMAURE}(k,w)\right) = \sum_{i=1}^{p} \frac{\sigma^2\left(\lambda_iu_i^2 + w^2g_i\right) + k^4\lambda_i^2\alpha_i^2/(\lambda_i + k)^4}{(\lambda_i + wg_i)^2},$$
where $u_i = 1 - k^2/(\lambda_i + k)^2$ and $\alpha_i$ denotes the $i$th element of $\alpha$.

For a fixed value of $w$, differentiating $\mathrm{MSE}(\hat{\beta}_{WMAURE}(k,w))$ with respect to $k$ leads to
$$\frac{\partial\,\mathrm{MSE}}{\partial k} = \sum_{i=1}^{p} \frac{4k\lambda_i^2}{(\lambda_i + k)^3(\lambda_i + wg_i)^2}\left[\frac{k^2\lambda_i\alpha_i^2}{(\lambda_i + k)^2} - \sigma^2u_i\right],$$
and equating it to zero. Note that $u_i = \lambda_i(\lambda_i + 2k)/(\lambda_i + k)^2$, so the $i$th summand vanishes when $k^2\alpha_i^2 = \sigma^2(\lambda_i + 2k)$. After the unknown parameters $\alpha_i$ and $\sigma^2$ are replaced by their unbiased estimators $\hat{\alpha} = Q'\hat{\beta}_{LS}$ and $\hat{\sigma}^2 = (y - X\hat{\beta}_{LS})'(y - X\hat{\beta}_{LS})/(n - p)$, we obtain the optimal estimator of $k$ for a fixed value of $w$ as
$$\hat{k}_i = \frac{\hat{\sigma}^2 + \sqrt{\hat{\sigma}^4 + \hat{\sigma}^2\lambda_i\hat{\alpha}_i^2}}{\hat{\alpha}_i^2},$$
from which a single operational value, such as $\hat{k} = \min_i \hat{k}_i$, can be taken.

The value of $w$ which minimizes the function $\mathrm{MSE}(\hat{\beta}_{WMAURE}(k,w))$ can be found by differentiating $\mathrm{MSE}(\hat{\beta}_{WMAURE}(k,w))$ with respect to $w$ when $k$ is fixed and equating it to zero:
$$\frac{\partial\,\mathrm{MSE}}{\partial w} = \sum_{i=1}^{p} \frac{2g_i\left[\sigma^2\lambda_i\left(w - u_i^2\right) - k^4\lambda_i^2\alpha_i^2/(\lambda_i + k)^4\right]}{(\lambda_i + wg_i)^3} = 0.$$
After the unknown parameters $\alpha_i$ and $\sigma^2$ are replaced by their unbiased estimators, we get the optimal estimator of $w$ for a fixed value of $k$ as
$$\hat{w}_i = u_i^2 + \frac{k^4\lambda_i\hat{\alpha}_i^2}{\hat{\sigma}^2(\lambda_i + k)^4},$$
where $\hat{\alpha} = Q'\hat{\beta}_{LS}$ and $\hat{\sigma}^2 = (y - X\hat{\beta}_{LS})'(y - X\hat{\beta}_{LS})/(n - p)$; the $i$th summand vanishes at $\hat{w}_i$, and a single value can be obtained, for example, by truncating $\min_i \hat{w}_i$ into $[0, 1]$.
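The plug-in recipe above can be sketched in R. The coordinate-wise roots follow from the first-order conditions; collapsing them into single operational values (here via minima, with w truncated into [0,1]) is our illustrative choice rather than a rule stated in the paper:

```r
# Canonical quantities and plug-in estimates of k and w.
eig   <- eigen(S, symmetric = TRUE)
Q     <- eig$vectors
lam   <- eig$values
alpha <- drop(t(Q) %*% beta_ls)                        # alpha-hat = Q' beta_LS
s2    <- drop(crossprod(y - X %*% beta_ls)) / (nrow(X) - p)  # sigma^2-hat
# Positive root of k^2 alpha_i^2 - 2 s2 k - s2 lambda_i = 0 (fixed w):
k_i   <- (s2 + sqrt(s2^2 + s2 * lam * alpha^2)) / alpha^2
k_hat <- min(k_i)                                      # illustrative aggregation
# First-order condition in w at k = k_hat:
u_i   <- 1 - k_hat^2 / (lam + k_hat)^2
w_i   <- u_i^2 + k_hat^4 * lam * alpha^2 / (s2 * (lam + k_hat)^4)
w_hat <- min(1, min(w_i))                              # truncate into [0, 1]
```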

6. Numerical Example and Monte Carlo Simulation

In order to verify our theoretical results, we first conduct an experiment based on a real data set originally due to Woods et al. [21]. In this experiment, we replace the unknown parameters $\beta$ and $\sigma^2$ by their unbiased estimators, following the approach in [17]. The results here and below were computed with R 2.14.1.

We can easily obtain that the condition number of $X'X$ is very large, which indicates serious multicollinearity among the regressors. The ordinary least squares estimate of $\beta$ is computed accordingly, together with its estimated MSE value.
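The diagnostic can be reproduced with a short R computation; here the condition number is taken as the ratio of the largest to the smallest eigenvalue of X'X (a singular-value-based definition would differ by a square root):

```r
# Condition number of X'X as a multicollinearity diagnostic;
# large values signal near-collinearity among the columns of X.
lam_S <- eigen(crossprod(X), symmetric = TRUE, only.values = TRUE)$values
cond  <- max(lam_S) / min(lam_S)
```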

Consider the stochastic linear restriction $r = R\beta + e$ used in [17], with $R$, $r$, and $W$ specified as in that paper.

For the WMRE, AURE, and WMAURE, their quadratic bias values are given in Table 1 and their estimated MSE values are obtained in Table 2 by replacing all unknown parameters in the corresponding theoretical MSE expressions by their least squares estimators.



Table 1: Quadratic bias values of the WMRE, AURE, and WMAURE for the different values of k and w considered.

WMRE 16.567 20.217 20.336 1.160 1.413 1.421
AURE 2456.43 3662.01 3706.28 2456.43 3662.01 3706.28
WMAURE 13.166 19.627 19.864 0.922 1.375 1.391

WMRE 0.296 0.360 0.362 0.190 0.231 0.232
AURE 2456.43 3662.01 3706.28 2456.43 3662.01 3706.28
WMAURE 0.235 0.351 0.355 0.151 0.225 0.228



Table 2: Estimated MSE values of the LS, WME, WMRE, AURE, and WMAURE for the different values of k and w considered.

LS 4912.13 4912.13 4912.13 4912.13 4912.13 4912.13
WME 59.771 59.771 59.771 39.278 39.278 39.278
WMRE 50.321 53.666 53.783 38.616 38.848 38.855
AURE 2663.66 3666.43 3709.16 2663.66 3666.43 3709.16
WMAURE 47.718 53.094 53.323 38.434 38.810 38.826

LS 4912.13 4912.13 4912.13 4912.13 4912.13 4912.13
WME 38.639 38.639 38.639 38.620 38.620 38.620
WMRE 38.470 38.528 38.530 38.620 38.549 38.550
AURE 2663.66 3666.43 3709.16 2663.66 3666.43 3709.16
WMAURE 38.423 38.519 38.524 38.482 38.543 38.546

It can be seen from Table 1 that the WMAURE has smaller quadratic bias values than the WMRE and the AURE in every case, which agrees with our theoretical findings in Section 3. From Table 2, we can see that the MSE values of our proposed estimator are the smallest among the LS, WME, WMRE, AURE, and WMAURE in all cases considered, which agrees with our theoretical findings in Theorems 6-9.

To further illustrate the behavior of our proposed estimator, we perform a Monte Carlo simulation study under different levels of multicollinearity. Following the way in [22, 23], we generate the explanatory variables by
$$x_{ij} = \left(1 - \gamma^2\right)^{1/2}z_{ij} + \gamma z_{i,p+1}, \quad i = 1, \dots, n,\ j = 1, \dots, p,$$
where the $z_{ij}$ are independent standard normal pseudo-random numbers and $\gamma$ is specified so that the theoretical correlation between any two explanatory variables is given by $\gamma^2$. A dependent variable is generated by
$$y_i = \beta_1x_{i1} + \beta_2x_{i2} + \dots + \beta_px_{ip} + \varepsilon_i, \quad i = 1, \dots, n,$$
where $\varepsilon_i$ is a normal pseudo-random number with mean zero and variance $\sigma^2$. In this study, we fix $\beta$, $n$, $p$, $\sigma^2$, and the stochastic restriction $r = R\beta + e$, and we discuss three cases of increasing $\gamma$.
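One replicate of this design can be generated as follows; the concrete values of n, p, gamma, sigma, and beta below are hypothetical stand-ins and are not meant to reproduce the paper's exact settings:

```r
# One Monte Carlo replicate of the simulation design (illustrative settings).
set.seed(2013)
n <- 100; p <- 3; gamma <- 0.99; sigma <- 1   # hypothetical values
beta <- rep(1, p)                             # hypothetical coefficient vector
Z <- matrix(rnorm(n * (p + 1)), n, p + 1)     # independent N(0,1) numbers
X <- sqrt(1 - gamma^2) * Z[, 1:p] + gamma * Z[, p + 1]
y <- X %*% beta + rnorm(n, 0, sigma)
round(cor(X), 3)   # off-diagonal entries are close to gamma^2
```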

For the three levels of multicollinearity, the MSE values of the LS, WME, AURE, WMRE, and WMAURE are reported in Tables 3, 4, and 5, respectively. From Tables 3-5, we can derive the following results.
(1) As the multicollinearity increases, the MSE values of the LS, WME, WMRE, AURE, and WMAURE increase. For all cases, the WMAURE has smaller estimated MSE values than the LS, AURE, and WME.
(2) The value of $w$ reflects the relative weight given to the sample information and the prior information; we can see from the three tables that the estimated MSE values of the WME, WMRE, and WMAURE become smaller and smaller as the value of $w$ increases. It can be concluded that a more precise estimator of the parameter is obtained when more weight is placed on the prior information.



Table 3: Estimated MSE values of the LS, WME, WMRE, AURE, and WMAURE at the first (lowest) level of multicollinearity.

LS 0.0898720 0.0898720 0.0898720 0.0898720 0.0898720 0.0898720
WME 0.0889010 0.0889010 0.0889010 0.0867887 0.0867887 0.0867887
WMRE 0.0889032 0.0902973 0.0911375 0.0867896 0.088059 0.0888259
AURE 0.0898720 0.0898718 0.0898717 0.0898720 0.0898718 0.0898717
WMAURE 0.0889010 0.0889008 0.0889007 0.0867887 0.0867885 0.0867884

LS 0.0898720 0.0898720 0.0898720 0.0898720 0.0898720 0.0898720
WME 0.0854822 0.0854822 0.0854822 0.0853378 0.0853378 0.0853378
WMRE 0.0854817 0.0866088 0.0872931 0.0853367 0.0864015 0.0870495
AURE 0.0898720 0.0898718 0.0898717 0.0898720 0.0898718 0.0898717
WMAURE 0.0854822 0.0854821 0.0854820 0.0853378 0.0853376 0.0853375



Table 4: Estimated MSE values of the LS, WME, WMRE, AURE, and WMAURE at the second level of multicollinearity.

LS 0.0987050 0.0987050 0.0987050 0.0987050 0.0987050 0.0987050
WME 0.0975363 0.0975363 0.0975363 0.0950143 0.0950143 0.0950143
WMRE 0.0975389 0.0992148 0.1002248 0.0950152 0.0965273 0.0974417
AURE 0.0987050 0.0987047 0.0987046 0.0987050 0.0987047 0.0987398
WMAURE 0.0975363 0.0975361 0.0975359 0.0950143 0.095014 0.0950139

LS 0.0987050 0.0987050 0.0987050 0.0987050 0.0987050 0.0987050
WME 0.0934738 0.0934738 0.0934738 0.0933053 0.0933053 0.0933053
WMRE 0.0934729 0.0948017 0.0956092 0.0933036 0.0945530 0.0953142
AURE 0.0987050 0.0987047 0.0987398 0.0987050 0.0987047 0.0987398
WMAURE 0.0934738 0.0934736 0.0934735 0.0933053 0.0933050 0.0933049



Table 5: Estimated MSE values of the LS, WME, WMRE, AURE, and WMAURE at the third (highest) level of multicollinearity.

LS 0.1376903 0.1376903 0.1376903 0.1376903 0.1376903 0.1376903
WME 0.1354379 0.1354379 0.1354379 0.1307427 0.1307427 0.1307427
WMRE 0.1354425 0.138655 0.1405906 0.130743 0.1335329 0.1352221
AURE 0.1376903 0.1376896 0.1376892 0.1376903 0.1376896 0.1376892
WMAURE 0.1354379 0.1354372 0.1354368 0.1307426 0.130742 0.1307416

LS 0.1376903 0.1376903 0.1376903 0.1376903 0.1376903 0.1376903
WME 0.1280238 0.1280238 0.1280238 0.127739 0.127739 0.127739
WMRE 0.1280196 0.1303649 0.1317949 0.127733 0.1298949 0.1312179
AURE 0.1376903 0.1376896 0.1376892 0.1376903 0.1376896 0.1376892
WMAURE 0.1280238 0.1280231 0.1280228 0.127739 0.1277384 0.127738