
Discrete Dynamics in Nature and Society

Volume 2012 (2012), Article ID 696927, 17 pages

http://dx.doi.org/10.1155/2012/696927

## Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

School of Mathematics and Statistics, Chongqing University of Technology, Chongqing 400054, China

Received 31 October 2011; Accepted 2 January 2012

Academic Editor: M. De la Sen

Copyright © 2012 Liyun Su et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We extend local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that it avoids testing for heteroscedasticity by improving the traditional two-stage method. Thanks to the nonparametric technique of local polynomial estimation, we do not need to know the form of the heteroscedastic function, and we can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we compare fits across different parameter choices and arrive at an optimal fit. We also verify the asymptotic normality of the parameter estimates through numerical simulations. Finally, the approach is applied to an economic case, which indicates that our method is indeed effective in finite-sample situations.

#### 1. Introduction

Heteroscedasticity in the classical linear regression model means that the variances of the random errors are not the same across different explanatory variables and observations. Heteroscedasticity often occurs in data sets in which there is a wide disparity between the largest and smallest observed values. The larger the disparity between the sizes of observations in a sample, the larger the likelihood that the error terms associated with them will have different variances and therefore be heteroscedastic. That is, we would expect the error term distribution for very large observations to have a large variance, while the error term distribution for small observations has a small variance. Moreover, researchers have observed that heteroscedasticity is usually found in cross-sectional data rather than in time series data [1, 2]. In cross-sectional data we generally deal with members of a population at a given point in time, such as individual consumers or their families; firms; industries; or geographical subdivisions, such as small, medium, or large firms, or low, medium, or high income groups. Such settings are also common in economics, for example the relationship among sales, research and development spending, and profit in a given year.

When there is heteroscedasticity in a linear regression model [3–6], parametric methods can be applied, and heteroscedasticity tests such as the Park test and the White test are available; another heteroscedastic method is shown in [7]. When parametric approaches are applied, the estimates obtained by ordinary least squares (OLS) are still linear and unbiased, but they are inefficient [8–10]. This can lead to an incorrect statistical diagnosis in significance tests of the parameters. Similarly, it unnecessarily enlarges the confidence interval in interval estimation of the parameters. Moreover, the accuracy of predictions made with the fitted regression model may decline. To solve these problems, we can use generalized least squares (GLS) estimation when the covariance matrix of the random errors is known. If it is unknown, we usually use a two-stage least squares estimate: we first estimate the variances of the residual errors, and then the generalized least squares estimator is used to obtain the coefficients of the model from these variance estimates [11, 12]. The traditional approach, however, assumes a particular parametric model for the residual error variances. In this paper, we instead apply local polynomial fitting to the random error variances as the first step, and then use GLS to estimate the coefficients of the model. A problem that cannot be neglected is that, when applying the least squares method or its generalized versions, the existence of a unique estimator requires the inverse of the regressor matrix transposed and multiplied by the regressor matrix to exist. In the generalized versions, because the kernel function is a symmetric probability density function with bounded support, the weight matrix based on the kernel function is positive definite, and generally also symmetric, as in the usual theoretical and practical choices.
In other words, the regressor matrix has to be of full column rank and the weight matrix has to be nonsingular. These issues have been discussed in many theoretical papers and applications; see [13–17]. On the one hand, because of the various nice statistical properties of local polynomial fitting, the estimates obtained with this technique possess the same good statistical properties [12, 18, 19]. On the other hand, we exploit a heteroscedastic regression model rather than an artificial structure of heteroscedasticity. We can then directly obtain the heteroscedastic function by the nonparametric technique, which reveals, from the regression results, the relationship between the variance function of the random errors and the explanatory variables. Thus, it is unnecessary to test the model for heteroscedasticity. In particular, the estimates obtained by local polynomial fitting are more accurate than those obtained by the traditional method. In addition, we study the fit of the variance function as the parameters change and obtain the optimal parameters. Finally, a case from economics is presented to show that our method is indeed effective in finite-sample situations.
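To make the efficiency argument concrete, the following sketch (our own illustrative example; the model, coefficients, and noise law are assumptions, not taken from the paper) simulates a heteroscedastic linear model and compares plain OLS with GLS computed from the oracle variances:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=200):
    # Hypothetical heteroscedastic model: the error scale grows with x.
    x = rng.uniform(1.0, 10.0, n)
    eps = rng.normal(0.0, 0.5 * x)          # Var(eps | x) = (0.5 x)^2
    y = 2.0 + 3.0 * x + eps
    return x, y

def ols(x, y):
    # Ordinary least squares: linear and unbiased, but inefficient here.
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def gls(x, y, sigma2):
    # Weighted least squares with weights 1/sigma^2 (diagonal covariance).
    X = np.column_stack([np.ones_like(x), x])
    w = 1.0 / sigma2
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

x, y = simulate()
beta_ols = ols(x, y)
beta_gls = gls(x, y, (0.5 * x) ** 2)        # oracle variances, for illustration
```

With the error standard deviation growing in x, the GLS slope estimate typically shows a visibly smaller spread over repeated simulations, which is exactly the efficiency loss of OLS described above.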

The rest of this paper is organized as follows. In Section 2 we construct the local polynomial regression: in Section 2.1 we discuss local polynomial fitting, and in Section 2.2 we study parameter selection. Section 3 presents the two-stage method with local polynomial fitting. In Section 4, we run simulations on a given model and study the fits under different parameters. In Section 5, we collect real data and apply local polynomial estimation to the coefficients. We draw conclusions in Section 6.

#### 2. Local Polynomial Regression

Local polynomial regression [20–23] is a widely used nonparametric technique, attractive from both theoretical and practical points of view. The local polynomial method has a smaller mean squared error than the Nadaraya-Watson estimator, which suffers from an undesirable form of the bias, and than the Gasser-Müller estimator, which pays a price in variance when dealing with a random design model. Local polynomial fitting also has other advantages. The method adapts to various types of designs, such as random and fixed designs, and highly clustered and nearly uniform designs. Furthermore, there is an absence of boundary effects: the bias at the boundary automatically stays of the same order as in the interior, without the use of specific boundary kernels. The local polynomial approximation approach is appealing on general scientific grounds: the least squares principle being applied opens the way to a wealth of statistical knowledge and thus to easy generalizations. In this section, we briefly outline the idea of the extension of local polynomial fitting to linear regression.

##### 2.1. Local Polynomial Fitting

Consider the bivariate data $(X_1, Y_1), \ldots, (X_n, Y_n)$, which form an independent and identically distributed sample from a population $(X, Y)$. Of interest is to estimate the regression function $m(x_0) = E(Y \mid X = x_0)$ and its derivatives $m'(x_0), m''(x_0), \ldots, m^{(p)}(x_0)$. To help us understand the estimation methodology, we can regard the data as being generated from the model $Y = m(X) + \sigma(X)\varepsilon$, where $E(\varepsilon) = 0$, $\operatorname{Var}(\varepsilon) = 1$, and $X$ and $\varepsilon$ are independent. This location-scale model assumption is not necessary for our development, but it is adopted to provide intuitive understanding. We always denote the conditional variance of $Y$ given $X = x_0$ by $\sigma^2(x_0)$ and the marginal density of $X$, that is, the design density, by $f(\cdot)$.

Suppose that the $(p+1)$th derivative of $m(x)$ at the point $x_0$ exists. We then approximate the unknown regression function $m(x)$ locally by a polynomial of order $p$. A Taylor expansion gives, for $x$ in a neighborhood of $x_0$,

$$m(x) \approx m(x_0) + m'(x_0)(x - x_0) + \cdots + \frac{m^{(p)}(x_0)}{p!}(x - x_0)^p \equiv \sum_{j=0}^{p} \beta_j (x - x_0)^j. \tag{2.2}$$

This polynomial is fitted locally by a weighted least squares regression problem: minimize

$$\sum_{i=1}^{n} \Bigl\{ Y_i - \sum_{j=0}^{p} \beta_j (X_i - x_0)^j \Bigr\}^2 K_h(X_i - x_0), \tag{2.3}$$

where $h$ is a bandwidth controlling the size of the local neighborhood, and $K_h(\cdot) = K(\cdot/h)/h$ with $K$ a kernel function assigning weights to each datum point. Throughout this paper we assume that $K$ is a symmetric probability density function with bounded support, although this technical assumption can be relaxed significantly [24–26].

Denote by $\hat{\beta}_j$, $j = 0, \ldots, p$, the solution to the least squares problem (2.3). It is clear from the Taylor expansion in (2.2) that $\nu!\,\hat{\beta}_\nu$ is an estimator for $m^{(\nu)}(x_0)$, $\nu = 0, 1, \ldots, p$. To estimate the entire function $m^{(\nu)}(\cdot)$ we solve the above weighted least squares problem for all points $x_0$ in the domain of interest.

It is more convenient to work with matrix notation. Denote by $\mathbf{X}$ the design matrix of problem (2.3):

$$\mathbf{X} = \begin{pmatrix} 1 & (X_1 - x_0) & \cdots & (X_1 - x_0)^p \\ \vdots & \vdots & & \vdots \\ 1 & (X_n - x_0) & \cdots & (X_n - x_0)^p \end{pmatrix},$$

and let $\mathbf{y} = (Y_1, \ldots, Y_n)^T$ and $\boldsymbol{\beta} = (\beta_0, \ldots, \beta_p)^T$. Further, let $\mathbf{W}$ be the $n \times n$ diagonal matrix of weights, $\mathbf{W} = \operatorname{diag}\{K_h(X_i - x_0)\}$. Then the weighted least squares problem (2.3) can be written as minimizing $(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^T \mathbf{W} (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})$ with respect to $\boldsymbol{\beta}$. The solution vector is provided by weighted least squares theory and is given by

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{W} \mathbf{X})^{-1} \mathbf{X}^T \mathbf{W} \mathbf{y}, \tag{2.8}$$

where the regressor matrix $\mathbf{X}$ has to be of full column rank and the weight matrix $\mathbf{W}$ has to be nonsingular. If the inverse in (2.8) does not exist, a ridge regression method can be adopted to solve the problem [27]. Furthermore, we obtain the estimate $\hat{m}(x_0) = \mathbf{e}_1^T \hat{\boldsymbol{\beta}}$, where $\mathbf{e}_1$ is a column vector (of the same size as $\hat{\boldsymbol{\beta}}$) with the first element equal to 1 and the rest equal to zero, that is, $\mathbf{e}_1 = (1, 0, \ldots, 0)^T$.
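The weighted least squares solution (2.8) is straightforward to code. The sketch below (an illustrative implementation with our own function names, not the authors' code) fits a local polynomial of order p at a single point x0 and returns the coefficient vector, whose first entry estimates m(x0):

```python
import numpy as np

def epanechnikov(u):
    # K(u) = 0.75 * (1 - u^2) on [-1, 1], zero outside
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def local_poly_fit(x, y, x0, h, p=1):
    # Solve the weighted least squares problem (2.3) at the point x0.
    d = x - x0
    X = np.vander(d, p + 1, increasing=True)   # columns: 1, (x-x0), ..., (x-x0)^p
    sw = np.sqrt(epanechnikov(d / h))          # square roots of kernel weights
    # Minimizing ||sqrt(W)(y - X beta)||^2 gives beta = (X'WX)^{-1} X'Wy;
    # lstsq is used so a near-singular local window does not crash.
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta

x = np.linspace(0.0, 2.0, 401)
beta = local_poly_fit(x, np.sin(x), x0=1.0, h=0.1, p=1)  # beta[0] ≈ sin(1), beta[1] ≈ cos(1)
```

On noise-free data the intercept recovers the function value and the slope its first derivative, as the Taylor expansion (2.2) suggests.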

Computing the solution vector directly can incur a large computational cost. We can use the recursive least squares method to reduce the computational complexity; it is very powerful, especially in local polynomial fitting problems. Several important issues remain to be discussed: the bandwidth, the order of the local polynomial, and the kernel function. These three problems are presented in Section 2.2.

##### 2.2. Parameters Selections

To implement the local polynomial estimator, one needs to choose the order $p$, the kernel $K$, and the bandwidth $h$. These parameters are of course related to each other.

First of all, the choice of the bandwidth parameter $h$ is considered, which plays a rather crucial role. A bandwidth that is too large under-parametrizes the regression function [28, 29], causing a large modeling bias, while a bandwidth that is too small over-parametrizes the unknown function and results in noisy estimates. The basic idea is to find a bandwidth that minimizes the estimated mean integrated squared error (MISE)

$$\operatorname{MISE}(h) = \int \bigl\{ \operatorname{bias}^2\bigl(\hat{m}(x)\bigr) + \operatorname{Var}\bigl(\hat{m}(x)\bigr) \bigr\}\, dx,$$

where the asymptotic bias and the asymptotic variance are given in Lemma A.1 of the appendix. One can then find an asymptotically optimal constant bandwidth of the form $h_{\mathrm{opt}} = C\, n^{-1/(2p+3)}$, where $C$ is a constant that depends on the kernel function, the order of the local polynomial, the unknown functions $m^{(p+1)}(\cdot)$ and $\sigma^2(\cdot)$, and the density function $f(\cdot)$ of $X$. However, this ideal bandwidth is not directly usable, since it depends on unknown functions.
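Since the ideal bandwidth depends on unknown functions, data-driven selection is used in practice. As a rough, hedged stand-in for the MISE criterion, the following sketch (our own example; the candidate grid and the data-generating model are assumptions) scores candidate bandwidths by leave-one-out cross-validation for a local linear fit:

```python
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def local_linear(x, y, x0, h):
    # Local linear fit at x0; returns the intercept (the estimate of m(x0)).
    d = x - x0
    sw = np.sqrt(epanechnikov(d / h))
    X = np.column_stack([np.ones_like(d), d])
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta[0]

def loo_cv_score(x, y, h):
    # Leave-one-out prediction error: a data-driven proxy for MISE(h).
    n = len(x)
    idx = np.arange(n)
    errs = [(y[i] - local_linear(x[idx != i], y[idx != i], x[i], h)) ** 2
            for i in range(n)]
    return float(np.mean(errs))

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 80)
grid = [0.05, 0.1, 0.2, 0.4]
scores = {h: loo_cv_score(x, y, h) for h in grid}
h_best = min(scores, key=scores.get)
```

The selected h_best trades off the bias of large bandwidths against the variance of small ones, mirroring the bias-variance decomposition of the MISE above.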

Another issue in local polynomial fitting is the choice of the order $p$ of the polynomial. Since the modeling bias is primarily controlled by the bandwidth, however, this issue is less crucial. For a given bandwidth $h$, a large value of $p$ would be expected to reduce the modeling bias, but would cause a large variance and considerable computational cost. It is shown in [30] that there is a general pattern of increasing variability: for estimating $m^{(\nu)}$, there is no increase in variability when passing from an even (i.e., $p - \nu$ even) order fit to an odd order fit, but when passing from an odd order fit to the consecutive even order fit, there is a price to be paid in terms of increased variability. Therefore, even order fits are not recommended. Since the bandwidth is used to control the modeling complexity, we recommend the use of the lowest odd order, that is, $p = \nu + 1$, or occasionally $p = \nu + 3$.

Another question concerns the choice of the kernel function $K$. Since the estimation is based on the local regression (2.3), no negative weight should be used. As shown in [30], the optimal weight function is $K(u) = \frac{3}{4}(1 - u^2)_{+}$, the Epanechnikov kernel, which minimizes the asymptotic mean squared error (MSE) of the resulting local polynomial estimators.
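The constants entering the asymptotic bias and variance are simple kernel moments. As a quick numerical sanity check (ours, not from the paper), the Epanechnikov kernel integrates to one, has second moment 1/5, and has roughness integral ∫K² = 3/5:

```python
import numpy as np

# Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on [-1, 1]
u = np.linspace(-1.0, 1.0, 200001)
K = 0.75 * (1.0 - u ** 2)
du = u[1] - u[0]

# Riemann sums; K vanishes at the endpoints, so the error is negligible.
mass = float(np.sum(K) * du)            # total mass, should be ~1
mu2 = float(np.sum(u ** 2 * K) * du)    # second moment, should be ~1/5
RK = float(np.sum(K ** 2) * du)         # roughness integral, should be ~3/5
```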

#### 3. Two-Stage Method with Local Polynomial Fitting

Let the dependent variable $y$ and the explanatory variables fulfill the following regression model:

$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik} + \varepsilon_i, \quad i = 1, \ldots, n, \tag{3.1}$$

where the $y_i$ are the observations and the $x_{i1}, \ldots, x_{ik}$ are independent variables. Denote $\mathbf{y} = (y_1, \ldots, y_n)^T$, $\boldsymbol{\beta} = (\beta_0, \beta_1, \ldots, \beta_k)^T$, $\boldsymbol{\varepsilon} = (\varepsilon_1, \ldots, \varepsilon_n)^T$, and let $\mathbf{X}$ denote the corresponding $n \times (k+1)$ regressor matrix. Therefore, (3.1) can be abbreviated as

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}. \tag{3.4}$$

Suppose that (1) $E(\boldsymbol{\varepsilon}) = \mathbf{0}$; (2) $\operatorname{Cov}(\boldsymbol{\varepsilon}) = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2) = \boldsymbol{\Sigma}$, where $\sigma_i^2 = \operatorname{Var}(\varepsilon_i)$; (3) the $\sigma_i^2$ are not all equal, that is, there is heteroscedasticity in model (3.4). The GLS estimator for $\boldsymbol{\beta}$ is therefore

$$\hat{\boldsymbol{\beta}}_{\mathrm{GLS}} = (\mathbf{X}^T \boldsymbol{\Sigma}^{-1} \mathbf{X})^{-1} \mathbf{X}^T \boldsymbol{\Sigma}^{-1} \mathbf{y}. \tag{3.5}$$

If the covariance matrix $\boldsymbol{\Sigma}$ is known, the coefficients can be estimated directly. Equation (3.5) is the weighted least squares (WLS) estimator for $\boldsymbol{\beta}$, and it possesses nice properties. However, how to estimate $\boldsymbol{\Sigma}$ is still a problem. Therefore, the so-called two-stage estimation method is used to solve the heteroscedasticity problem. The two-stage method based on local polynomial fitting can be depicted as follows: first, apply local polynomial fitting to obtain the estimates $\hat{\sigma}_i^2$ of the $\sigma_i^2$, that is, $\hat{\boldsymbol{\Sigma}}$ of $\boldsymbol{\Sigma}$; then obtain the estimate of $\boldsymbol{\beta}$ by using (3.5). The estimator is

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^T \hat{\boldsymbol{\Sigma}}^{-1} \mathbf{X})^{-1} \mathbf{X}^T \hat{\boldsymbol{\Sigma}}^{-1} \mathbf{y}.$$

Because $E(\varepsilon_i^2) = \sigma_i^2$, we construct the regression model $\varepsilon_i^2 = \sigma_i^2 + v_i$ in order to estimate $\sigma_i^2$, where $v_i$ is the difference between $\varepsilon_i^2$ and its expectation. Suppose that $\hat{\boldsymbol{\beta}}_{\mathrm{OLS}}$ is the OLS estimator of model (3.4). Although the ordinary least squares estimator is inefficient, it is still consistent. Therefore, the corresponding residuals satisfy $e_i = y_i - \mathbf{x}_i^T \hat{\boldsymbol{\beta}}_{\mathrm{OLS}} \approx \varepsilon_i$. Consequently, we approximately get

$$e_i^2 \approx \sigma_i^2 + v_i. \tag{3.8}$$

This can be taken as a regression model in which the variance function is the regression function and the squared residuals are the dependent variables. To estimate this model, a parametric method is usually adopted in the literature: one supposes $\sigma_i^2 = f(x_i, \boldsymbol{\alpha})$, where the form of $f$ is known and $\boldsymbol{\alpha}$ collects the parameters to be estimated; the forms usually discussed are simple parametric functions of the explanatory variables [31]. However, the discussion of these models requires the analyst to have a good understanding of the background of the practical problem. For example, the variance of an asset return is often taken to be in direct proportion to its past return. Since the variance function must be nonnegative, a nonparametric method is proposed here to fit it. The method can be depicted as follows. A $p$-order local polynomial estimate of the variance function is obtained according to formula (2.2). Using the least squares method on the data within the local window, we estimate the local intercept $\beta_0$ by minimizing

$$\sum_{i=1}^{n} \Bigl\{ e_i^2 - \sum_{j=0}^{p} \beta_j (x_i - x_0)^j \Bigr\}^2 K_h(x_i - x_0)$$

with respect to $\boldsymbol{\beta} = (\beta_0, \ldots, \beta_p)^T$. The solution vector can therefore be written as $\hat{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{W} \mathbf{X})^{-1} \mathbf{X}^T \mathbf{W} \mathbf{e}$, where $\mathbf{X}$ is the local design matrix, $\mathbf{W}$ is the kernel weight matrix, and $\mathbf{e} = (e_1^2, \ldots, e_n^2)^T$. Consequently, the estimated variance function is $\hat{\sigma}^2(x_0) = \hat{\beta}_0$. Finally, we get the two-stage estimate of $\boldsymbol{\beta}$ by substituting the estimate $\hat{\boldsymbol{\Sigma}}$ into (3.5); this estimator has some desirable statistical qualities, see Lemma A.2, Lemma A.3, and Theorem A.4 in the appendix.
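The whole two-stage procedure can be sketched end to end. The code below (an illustrative, hedged implementation; the simulated model, the variance floor, and the bandwidth are our own assumptions, not the authors' settings) runs OLS, fits the squared residuals by a local polynomial, and then applies GLS with the estimated variances:

```python
import numpy as np

rng = np.random.default_rng(2)

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def local_poly(x, y, x0, h, p=1):
    # Local polynomial fit of order p at x0; returns the intercept estimate.
    d = x - x0
    X = np.vander(d, p + 1, increasing=True)
    sw = np.sqrt(epanechnikov(d / h))
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta[0]

def two_stage(x, y, h, p=1):
    X = np.column_stack([np.ones_like(x), x])
    # Stage 1: OLS residuals, then local polynomial fit of the squared residuals.
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e2 = (y - X @ b_ols) ** 2
    sigma2 = np.array([local_poly(x, e2, xi, h, p) for xi in x])
    # Keep the estimated variances positive (the floor is an ad hoc safeguard).
    sigma2 = np.maximum(sigma2, 0.05 * np.median(sigma2))
    # Stage 2: GLS (3.5) with the estimated diagonal covariance.
    w = 1.0 / sigma2
    Xw = X * w[:, None]
    b_gls = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return b_ols, b_gls

x = np.sort(rng.uniform(1.0, 5.0, 300))
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3 * x)   # assumed sigma(x) = 0.3 x
b_ols, b_gls = two_stage(x, y, h=0.5)
```

Note that no parametric form for the variance is assumed anywhere: the local polynomial stage supplies the weights directly from the data.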

#### 4. Simulation and Analysis

In order to discuss the qualities of the two-stage estimator in finite samples, this section presents the following model, on which we make comparisons and study the fits under different parameters. Considering the practical background of applications in economics, we suppose the variance function takes the following form. In addition, we study the fitting effects of the variance under different parameters to obtain the best ones.

Denote the linear model by where are independent variables, are observations. Also, , and are not all equal. Suppose that the variance function of the error term is .

*Step 1. *First, obtain the estimates of the two coefficients in model (4.1) by ordinary least squares. Second, calculate the squared residuals. Third, perform the local polynomial regression based on model (3.8). In this section, it is necessary to discuss how to choose the parameters. Suppose that the bandwidth ranges over a given interval and the kernel function is the Epanechnikov kernel. In addition, the criterion for selecting a bandwidth is minimization of the mean integrated squared error (MISE), which can be computed as follows.

We can obtain the values of MISE under different orders and bandwidths from 10000 replicate calculations with the above equation; see Table 1.

Furthermore, the scatter plot of bandwidth against MISE under different orders can be drawn; see Figure 1. From Figure 1, it can be seen that the MISE attains its minimum at the same bandwidth for each of the three orders considered; that bandwidth is therefore optimal. Further, among the three orders, the MISE at that bandwidth is smallest for one order, which is therefore the optimal order. We thus obtain the optimal bandwidth and order. Figure 2 shows the fitting plot and residual plot for the variance function after 10000 replicates under the optimal parameters.

*Step 2. *Now we substitute the variance estimates obtained in Step 1 into model (3.5), and then we get the GLS estimates of the two coefficients. Figure 3 depicts the histograms and asymptotic distributions of the two estimates, obtained from 10000 replicates with GLS. It is easy to see from Figure 3 that the estimated distributions of the parameters are asymptotically normal. Besides, the OLS estimates of the two coefficients can easily be obtained. The fitted and true values for GLS and for OLS are listed in Table 2, where the relative error is defined in terms of the true and fitted values. From the comparison in Table 2, we can conclude that the parameter estimates by GLS are much better than those by OLS. Furthermore, the curves of the original regression function and of the functions estimated by GLS and OLS are plotted together in order to demonstrate the accuracy of GLS; see Figure 4. It is not difficult to see that the GLS regression curve almost coincides with the original curve, while there is an apparent bias between the OLS regression and the original curve. Consequently, the parameter estimates by GLS are what we require.
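The asymptotic normality seen in Figure 3 can be probed with a small Monte Carlo experiment. The sketch below is our own simplified check, not the paper's simulation: it uses oracle weights rather than the estimated variance function, and an assumed model. It replicates the weighted estimator many times and summarizes the empirical distribution of the slope:

```python
import numpy as np

rng = np.random.default_rng(3)

def wls_slope(n=100):
    # One Monte Carlo replicate: heteroscedastic data, WLS with oracle weights.
    x = rng.uniform(1.0, 5.0, n)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)   # assumed sigma(x) = 0.5 x
    X = np.column_stack([np.ones_like(x), x])
    w = 1.0 / (0.5 * x) ** 2
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)[1]

slopes = np.array([wls_slope() for _ in range(2000)])
center, spread = slopes.mean(), slopes.std()
# A histogram of `slopes` overlaid with a normal density of this center and
# spread mimics the comparison made in Figure 3.
```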

#### 5. Application

As an example of pure cross-sectional data with potential for heteroscedasticity, consider the data given in Table 3, which reports per capita consumption and per capita gross domestic product (GDP) for 31 provinces and municipalities of the People's Republic of China in 2008. Since the cross-sectional data presented in this table are quite heterogeneous, heteroscedasticity is likely in a regression of per capita consumption on per capita GDP.

If we want to understand the relationship between per capita consumption and per capita GDP, the regression function (5.1) is a linear model with two regression coefficients and a vector of random errors. The ordinary least squares (OLS) estimates of the two coefficients can easily be obtained. Then, according to the sample data, the OLS regression plot can easily be drawn; see Figure 5. Not surprisingly, there is a positive relationship between per capita consumption and per capita GDP, although it is not statistically significant at the traditional levels.

To test whether regression (5.1) suffers from heteroscedasticity, we obtain the residuals of the model and plot them against per capita GDP, as shown in Figure 6, in which we can see that the spread of the residuals around the horizontal axis increases with per capita GDP. Therefore, there is heteroscedasticity in regression (5.1). Heteroscedasticity can also be detected through other tests, such as the Park test and the White test. So it can be said that regression (5.1) suffers from heteroscedasticity. Furthermore, the generalized least squares (GLS) estimates can easily be obtained after 10000 replicates with the chosen bandwidth, order, and kernel. Then the regression plot can be drawn; see Figure 7. Comparing Figures 5 and 7, the distribution of points around the regression line in Figure 7 is more uniform than that in Figure 5. Finally, it can be said that a more accurate regression is obtained by GLS than by OLS.

#### 6. Conclusions

In this paper we presented a new method for the estimation of the linear heteroscedastic regression model based on local polynomial estimation, a nonparametric technique. The proposed scheme first adopts local polynomial fitting to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. Our approach avoids testing the linear model for heteroscedasticity. Thanks to the nonparametric technique of local polynomial estimation, the precision of estimation is improved even when the heteroscedastic function is unknown. Furthermore, the effect of the parameters on the fit was studied and the optimal fit was obtained. The asymptotic normality of the parameter estimates was verified by the results of numerical simulations. Finally, the simulation results under different parameters and the local polynomial estimation on real economic data indicate that our approach is effective in finite-sample situations and does not need an assumed form of the heteroscedastic function. The presented algorithm can easily be applied to heteroscedastic regression models in practical problems.

#### Appendix

Suppose that the data form a stationary sequence. Let $\mathcal{F}_a^b$ denote the $\sigma$-algebra of events generated by the random variables with indices between $a$ and $b$, and let $L_2(\mathcal{F}_a^b)$ consist of $\mathcal{F}_a^b$-measurable random variables with finite second moment. Before giving Lemma A.1, we state the following condition.

Condition 1: (i) the kernel $K$ is bounded with bounded support; (ii) the variance function and the design density are continuous at the point of interest; (iii) the sequence is mixing, with mixing coefficients tending to zero as the lag tends to infinity, and with finite moments of sufficient order; (iv) for the mixing process, there exists a sequence of positive integers tending to infinity at a suitable rate relative to $n$ and $h$ such that the mixing coefficients decay fast enough as $n \to \infty$. Then we have the following lemma.

Lemma A.1. *Under Condition 1, if the bandwidth $h \to 0$ with $nh \to \infty$ and the variance function is continuous at the point of interest, then, as $n \to \infty$, the local polynomial estimator is asymptotically normal, with asymptotic bias and variance expressed through the moment matrices of the kernel, the derivative $m^{(p+1)}$, the variance function, and the design density.*

The proof of Lemma A.1 is shown in [23]. An immediate consequence of Lemma A.1 is that the asymptotic bias and the asymptotic variance for the local polynomial estimator are defined as

Then, the mean square error (MSE) at point is

Lemma A.2. *For the model (3.4), $\boldsymbol{\varepsilon}$ is assumed to be a vector following a normal distribution with mean zero and covariance $\boldsymbol{\Sigma} = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$, and $n^{-1}\mathbf{X}^T\boldsymbol{\Sigma}^{-1}\mathbf{X}$ converges to a finite positive definite matrix. Then one has: (1) the GLS estimator is a consistent estimator of $\boldsymbol{\beta}$; (2) the GLS estimator is asymptotically normal with mean $\boldsymbol{\beta}$ and the corresponding covariance matrix.*

The proof is shown in [32].

Lemma A.3. *Suppose $\hat{\sigma}^2(\cdot)$ is a $p$-order local polynomial estimator (LPE) of the variance function $\sigma^2(\cdot)$. Under conditions (i), (ii), and (iii): (1) if $p$ is odd and the variance function is continuously differentiable and bounded, then the estimator attains the stated rate, where the constants depend on the kernel and the design; consistency follows; (2) if $p$ is even and the variance function is continuous, differentiable, and bounded, then the analogous bound holds with the corresponding constants, and consistency again follows.*

The proof is shown in [18].

Theorem A.4. *For the model (3.4), $\boldsymbol{\varepsilon}$ is assumed to be a vector following a normal distribution with mean zero and covariance $\boldsymbol{\Sigma}$, where the relevant limit matrices are positive definite. Suppose in addition that the conditions of Lemma A.3 are satisfied. Then $\hat{\boldsymbol{\beta}}_{\mathrm{LPE}}$ is asymptotically normal with mean $\boldsymbol{\beta}$ and the corresponding covariance matrix.*

*Proof. *By Lemma A.2 and the bibliography [32], in order to prove the above results, we only need to verify that the estimated covariance matrix is asymptotically equivalent to the true one; that is, to verify the corresponding convergence. By Lemma A.3, whether $p$ is odd or even, we have the required rate of convergence for the variance estimator.
Therefore,
The conclusions follow.

#### Acknowledgments

This work was supported by the Chongqing CSTC Foundations of China (CSTC2010BB2310 and CSTC2011jjA40033) and the Chongqing CMEC Foundations of China (KJ080614, KJ100810, and KJ100818).

#### References

1. T. Amemiya, “A note on a heteroscedastic model,” *Journal of Econometrics*, vol. 6, no. 3, pp. 365–370, 1977.
2. P. Chen and D. Suter, “A bilinear approach to the parameter estimation of a general heteroscedastic linear system, with application to conic fitting,” *Journal of Mathematical Imaging and Vision*, vol. 28, no. 3, pp. 191–208, 2007.
3. J. Ma and L. Liu, “Multivariate nonlinear analysis and prediction of Shanghai stock market,” *Discrete Dynamics in Nature and Society*, vol. 2008, Article ID 526734, 8 pages, 2008.
4. H. Liu, Z. Zhang, and Q. Zhao, “The volatility of the index of Shanghai stock market research based on ARCH and its extended forms,” *Discrete Dynamics in Nature and Society*, vol. 2009, Article ID 743685, 9 pages, 2009.
5. H. Hu, “QML estimators in linear regression models with functional coefficient autoregressive processes,” *Mathematical Problems in Engineering*, vol. 2010, Article ID 956907, 30 pages, 2010.
6. C. R. Rao, Y. Wu, and Q. Shao, “An *M*-estimation-based procedure for determining the number of regression models in regression clustering,” *Journal of Applied Mathematics and Decision Sciences*, vol. 2007, Article ID 37475, 15 pages, 2007.
7. Q. X. He and M. Zheng, “Local polynomial regression for heteroscedasticity in the simple linear model,” *Systems Engineering-Theory Methodology Applications*, vol. 12, no. 2, pp. 153–156, 2003.
8. W. Härdle, M. Müller, S. Sperlich, and A. Werwatz, *Nonparametric and Semiparametric Models*, Springer Series in Statistics, Springer, New York, NY, USA, 2004.
9. L. Galtchouk and S. Pergamenshchikov, “Sharp non-asymptotic oracle inequalities for non-parametric heteroscedastic regression models,” *Journal of Nonparametric Statistics*, vol. 21, no. 1, pp. 1–18, 2009.
10. J. Fan, “Local linear regression smoothers and their minimax efficiencies,” *The Annals of Statistics*, vol. 21, no. 1, pp. 196–216, 1993.
11. J. Fan and W. Zhang, “Statistical estimation in varying coefficient models,” *The Annals of Statistics*, vol. 27, no. 5, pp. 1491–1518, 1999.
12. I. Miranda Cabrera, H. L. Baños Díaz, and M. L. Martínez, “Dynamical system and nonlinear regression for estimate host-parasitoid relationship,” *Journal of Applied Mathematics*, vol. 2010, Article ID 851037, 10 pages, 2010.
13. A. Bilbao-Guillerna, M. De la Sen, A. Ibeas, and S. Alonso-Quesada, “Robustly stable multiestimation scheme for adaptive control and identification with model reduction issues,” *Discrete Dynamics in Nature and Society*, vol. 2005, no. 1, pp. 31–67, 2005.
14. T. J. Koo, “Stable model reference adaptive fuzzy control of a class of nonlinear systems,” *IEEE Transactions on Fuzzy Systems*, vol. 9, no. 4, pp. 624–636, 2001.
15. M. De la Sen, “Identification of a class of linear time-varying continuous-time systems,” *Automation and Remote Control*, vol. 63, no. 2, pp. 246–261, 2002.
16. M. Jankovic, “Adaptive nonlinear output feedback tracking with a partial high-gain observer and backstepping,” *IEEE Transactions on Automatic Control*, vol. 42, no. 1, pp. 106–113, 1997.
17. M. Jansson and B. Wahlberg, “On consistency of subspace methods for system identification,” *Automatica*, vol. 34, no. 12, pp. 1507–1519, 1998.
18. J. Fan, “Design-adaptive nonparametric regression,” *Journal of the American Statistical Association*, vol. 87, no. 420, pp. 998–1004, 1992.
19. M. Pandya, K. Bhatt, and P. Andharia, “Bayes estimation of two-phase linear regression model,” *International Journal of Quality, Statistics, and Reliability*, vol. 2011, Article ID 357814, 9 pages, 2011.
20. L.-Y. Su, “Prediction of multivariate chaotic time series with local polynomial fitting,” *Computers & Mathematics with Applications*, vol. 59, no. 2, pp. 737–744, 2010.
21. L. Y. Su, “Multivariate local polynomial regression with application to Shenzhen component index,” *Discrete Dynamics in Nature and Society*, vol. 2011, Article ID 930958, 11 pages, 2011.
22. L. Y. Su, Y. J. Ma, and J. J. Li, “The application of local polynomial estimation in suppressing strong chaotic noise,” *Chinese Physics B*, vol. 21, no. 2, Article ID 020508, 2012.
23. J. Fan and Q. Yao, *Nonlinear Time Series: Nonparametric and Parametric Methods*, Springer Series in Statistics, Springer, New York, NY, USA, 2003.
24. D. Ruppert and M. P. Wand, “Multivariate locally weighted least squares regression,” *The Annals of Statistics*, vol. 22, no. 3, pp. 1346–1370, 1994.
25. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” *IEEE Transactions on Image Processing*, vol. 13, no. 10, pp. 1327–1344, 2004.
26. H. Takeda, S. Farsiu, and P. Milanfar, “Robust kernel regression for restoration and reconstruction of images from sparse noisy data,” in *Proceedings of the IEEE International Conference on Image Processing (ICIP '06)*, pp. 1257–1260, Atlanta, Ga, USA, October 2006.
27. L. Firinguetti, “Ridge regression in the context of a system of seemingly unrelated regression equations,” *Journal of Statistical Computation and Simulation*, vol. 56, no. 2, pp. 145–162, 1997.
28. J. Fan and I. Gijbels, “Data-driven bandwidth selection in local polynomial fitting: variable bandwidth and spatial adaptation,” *Journal of the Royal Statistical Society B*, vol. 57, no. 2, pp. 371–394, 1995.
29. L. Y. Su and F. L. Li, “Deconvolution of defocused image with multivariate local polynomial regression and iterative Wiener filtering in DWT domain,” *Mathematical Problems in Engineering*, vol. 2010, Article ID 605241, 14 pages, 2010.
30. J. Fan and I. Gijbels, *Local Polynomial Modelling and Its Applications*, vol. 66 of *Monographs on Statistics and Applied Probability*, Chapman & Hall, London, UK, 1996.
31. H. C. Rutemiller and D. A. Bowers, “Estimation in a heteroscedastic regression model,” *Journal of the American Statistical Association*, vol. 63, pp. 552–557, 1968.
32. W. H. Greene, *Econometric Analysis*, Macmillan, New York, NY, USA, 1993.