
Discrete Dynamics in Nature and Society

Volume 2012 (2012), Article ID 201678, 11 pages

http://dx.doi.org/10.1155/2012/201678

## Local Polynomial Regression Solution for Partial Differential Equations with Initial and Boundary Values

^{1}School of Mathematics and Statistics, Chongqing University of Technology, Chongqing 400054, China^{2}Institute of Library, Chongqing University of Technology, Chongqing 400054, China

Received 15 April 2012; Accepted 29 July 2012

Academic Editor: Leonid Shaikhet

Copyright © 2012 Liyun Su et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Local polynomial regression (LPR) is applied to solve partial differential equations (PDEs). The usual analytical approaches, such as separation of variables and eigenfunction expansion, rarely yield closed-form solutions, so numerical solutions must be sought. In this paper, two test problems are considered for the numerical illustration of the method, and comparisons are made between the exact solutions and the results of the LPR. The results of applying this method to the PDEs reveal that LPR possesses very high accuracy, adaptability, and efficiency; more importantly, the numerical illustrations indicate that the new method is much more efficient than the B-spline and AGE methods derived for the same purpose.

#### 1. Introduction

There are several effective and convenient numerical methods for partial differential equation problems with initial and boundary values; for example, radial basis functions are used to solve the two-dimensional sine-Gordon equation in [1], a family of second-order methods is used to solve variable coefficient fourth-order parabolic partial differential equations in [2], and a fifth-degree B-spline solution is available for a fourth-order parabolic partial differential equation in [3]. Recently, H. Caglar and N. Caglar have used the local polynomial regression (LPR) method for the numerical solution of fifth-order boundary value problems [4]. Moreover, they managed to solve linear Fredholm and Volterra integral equations by using the LPR method [5]. Consequently, in this paper the LPR method is applied to some PDEs with initial and boundary values, and the numerical results demonstrate that the local polynomial fitting method is more accurate, simple, and efficient. First, we consider the numerical approximation of a simple nonhomogeneous PDE with initial values by means of local polynomial regression:

#### 2. Bivariate Local Polynomial Regression

Bivariate local polynomial regression is an attractive method from both theoretical and practical points of view. The multivariate local polynomial method has a small mean squared error compared with the Nadaraya-Watson estimator, which leads to an undesirable form of the bias, and the Gasser-Müller estimator, which pays a price in variance when dealing with a random design model. Multivariate local polynomial fitting also has other advantages. The method adapts to various types of designs, such as random and fixed designs and highly clustered and nearly uniform designs. Furthermore, there is an absence of boundary effects: the bias at the boundary automatically stays of the same order as in the interior, without the use of specific boundary kernels. The local polynomial approximation approach is appealing on general scientific grounds; the least squares principle on which it is based opens the way to a wealth of statistical knowledge and thus to easy generalizations. All of the above-mentioned advantages can be found in the literature [6–10]. The basic idea of multivariate local polynomial regression is also presented in [11–14]. In this section, we briefly outline the extension of local polynomial fitting to bivariate regression.

Suppose that the state vector at the point is : Our purpose is to obtain the estimation . In this paper, we use the th-order multivariate local polynomial to predict the value at the fixed point . The polynomial function can be described as , where is the order of the expansion:

In the bivariate regression method, the change of on the attractor is assumed to be the same as that of nearby points, ordered by distance. Using pairs of for which the values are already known, the coefficients of are determined by minimizing For this weighted least squares problem, when is invertible, the solution can be described by where and is the . Then we can get the estimation : where . Several important issues, concerning the bandwidth, the order of the multivariate local polynomial function, and the kernel function, remain to be discussed.
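As an illustration of the weighted least-squares step described above, the following is a minimal sketch of a bivariate local quadratic fit at a single point, written in Python with NumPy. All names here are ours, chosen for illustration, and the kernel is used unnormalized since only relative weights matter in the fit:

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov-type weight, (1 - u^2)_+ ; normalization is irrelevant
    # for the weighted least-squares fit below.
    return np.maximum(1.0 - u**2, 0.0)

def local_quadratic_fit(x, y, z, x0, y0, h):
    """Estimate the regression surface at (x0, y0) from samples
    (x_i, y_i, z_i) by a local quadratic (p = 2) weighted LS fit."""
    dx, dy = x - x0, y - y0
    # Design matrix of the bivariate second-order Taylor expansion:
    # 1, dx, dy, dx^2, dx*dy, dy^2  ->  6 columns
    X = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx*dy, dy**2])
    w = epanechnikov(np.sqrt(dx**2 + dy**2) / h)   # spherical weighting
    W = np.diag(w)
    # Solve (X^T W X) beta = X^T W z; beta[0] is the estimate at (x0, y0)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
    return beta[0]
```

Because the fit is exactly quadratic, applying it to data sampled from a quadratic surface recovers the surface value at the fitting point up to rounding error.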

##### 2.1. Parameters Estimations and Selections

Many algorithms exist for choosing the embedding dimension [15, 16]. For a univariate series , a popular method for finding the embedding dimension is the so-called false nearest-neighbor method [17, 18]. Here, we apply this method to the bivariate case.

For the multivariate local polynomial estimator, there are three important issues which have a significant influence on the prediction accuracy and computational complexity. First of all, there is the choice of the bandwidth matrix, which plays a rather crucial role. The bandwidth matrix is taken to be diagonal. For simplicity, it is designed as , where is the 2 × 2 identity matrix. In theory, there exists an optimal bandwidth in the mean squared error sense, such that

However, the optimal bandwidth cannot be solved for directly, so we discuss how to obtain an asymptotically optimal bandwidth. We use a search method to select the bandwidth: as the bandwidth varies from small to large, we compare the resulting values of the objective function and choose the bandwidth that yields its minimum. This bandwidth is taken as the optimal one.

Given , where is the minimum and is the expansion coefficient, we search for the bandwidth that minimizes the objective function over the interval , where the objective function is the mean squared error (MSE).

First, we set and then increase it by means of the expansion coefficient . When the search stops, the bandwidth at which attains its minimum is the optimal bandwidth. can be replaced by , where is the number of predicted points. In this paper, we choose , where , . Compared with other methods, this method is more convenient.

In order to get closer to the ideal optimal bandwidth, we search once again by narrowing the interval on the basis of the previous search. Suppose is the bandwidth that is optimal in the above search. Now the small interval is divided into equal subintervals again. Suppose Among these bandwidths, the one that minimizes is the optimal bandwidth.
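The two-stage search just described — a coarse sweep followed by a refined sweep on the subinterval around the coarse minimizer — can be sketched as follows. This is our own schematic, with illustrative names and grid sizes; the paper's specific expansion-coefficient update is not reproduced here:

```python
import numpy as np

def grid_search_bandwidth(objective, h_min, h_max, n_coarse=20, n_fine=20):
    """Two-stage bandwidth search: a coarse sweep over [h_min, h_max],
    then a refined sweep on the subinterval around the coarse minimizer.
    `objective(h)` should return the MSE-type criterion for bandwidth h."""
    coarse = np.linspace(h_min, h_max, n_coarse)
    h_best = min(coarse, key=objective)          # coarse minimizer
    step = (h_max - h_min) / (n_coarse - 1)
    # Narrow the interval to one coarse step on each side and search again
    fine = np.linspace(max(h_min, h_best - step),
                       min(h_max, h_best + step), n_fine)
    return min(fine, key=objective)
```

In practice `objective` would evaluate the prediction MSE of the local polynomial fit on held-out points for each candidate bandwidth.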

Another issue in multivariate local polynomial fitting is the choice of the order of the polynomial. Since the modeling bias is primarily controlled by the bandwidth, however, this issue is less crucial. For a given bandwidth , a large value of would be expected to reduce the modeling bias, but it would cause a large variance and considerable computational cost. Since the bandwidth is used to control the modeling complexity, and because of the sparsity of local data in multidimensional space, a higher-order polynomial is rarely used; thus, we apply local quadratic regression to fit the model (i.e., ). The third issue is the selection of the kernel function. In this paper, we choose as our kernel function the optimal spherical *Epanechnikov* kernel [6, 7], which minimizes the asymptotic mean squared error (MSE) of the resulting multivariate local polynomial estimators.
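For concreteness, the spherical Epanechnikov kernel on the plane can be written down explicitly. The normalizing constant 2/π below is the standard one that makes the kernel integrate to one over the unit disk; it is stated here as a known fact, not taken from the paper:

```python
import numpy as np

def epanechnikov_2d(u1, u2):
    """Spherical Epanechnikov kernel on R^2:
    K(u) = (2/pi) * (1 - ||u||^2)_+ , supported on the unit disk."""
    r2 = u1**2 + u2**2
    return (2.0 / np.pi) * np.maximum(1.0 - r2, 0.0)
```

The constant follows from integrating c(1 − r²) over the unit disk in polar coordinates: c · 2π(1/2 − 1/4) = 1 gives c = 2/π.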

#### 3. LPR Solutions for PDE

Consider the following nonhomogeneous PDE with boundary values:

Suppose ; our purpose is then to obtain the estimation given pairs of points , where , , . We can then get , which is described by Consequently, the corresponding minimization problem is In order to find an expression for , we first need the elements of .

Since the *p*th-order Taylor expansion of a bivariate function has terms, we obtain expressions (3.5), (3.6), and (3.7):

where is the matrix:

Substituting (3.6) and (3.7) into , we obtain the estimation , where .

#### 4. Numerical Illustrations and Discussions

In this section we present the numerical results obtained by applying the scheme discussed above to the following second-order and fourth-order initial-boundary value problems. All computations are carried out using MATLAB 7.0.

Furthermore, in order to evaluate the accuracy and effectiveness, we use the following error indices, namely, the mean squared prediction error (MSE): and the absolute error
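The two error indices are straightforward to compute; a minimal sketch (our own helper names), taking the numerical and exact solution values on the evaluation points:

```python
import numpy as np

def mse(u_hat, u_exact):
    # Mean squared prediction error over the evaluation points
    u_hat, u_exact = np.asarray(u_hat), np.asarray(u_exact)
    return np.mean((u_hat - u_exact)**2)

def absolute_error(u_hat, u_exact):
    # Pointwise absolute error |u_hat - u_exact| at each evaluation point
    return np.abs(np.asarray(u_hat) - np.asarray(u_exact))
```

These are the quantities reported in Tables 1 and 2 for the two examples below.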

*Example 4.1. *First, we consider the following second-order nonhomogeneous PDE:

The exact solution of problem (4.3) is . We solve Example 4.1 with by choosing and the various values of the parameters presented in Table 1. The errors in the solutions are computed by our method (3.9). In Table 1, . Given , , the magnitude of the MSE is found to lie between 10^−3 and 10^−4. Given , , the magnitude of the MSE lies between 10^−4 and 10^−7. However, setting , , the magnitude of the MSE lies between 10^−5 and 10^−8. We conclude that the MSE decreases as increases, while the value of the order has little influence on the MSE for a large value of . Considering the parameter , the MSEs attain their minimum for when , for when , and for when . Using method (2.12)-(2.13), we find that the optimal bandwidth lies in the interval [0.025, 0.04] for , 30, 50. For , the same situation holds as for . Here, is the 2 × 2 identity matrix. The fit is depicted in Figure 1, produced using MATLAB 7.0.

*Example 4.2. *Next we consider the following fourth-order nonhomogeneous PDE:
with the initial conditions,
and the boundary conditions at and of the form

The exact solution of problems (4.4), (4.5), and (4.6) is . This problem has been solved by several authors [19, 20]. Here, we solve Example 4.2 with by choosing and the various values of the parameters presented in Table 2. The errors in the solutions computed by our method (3.9), by the spline method for three values of in [19], and by the AGE method for two values of its parameters in [20] are presented in Table 2. In Table 2, . Given , the magnitude of the MSE is found to lie between 10^−4 and 10^−6 for , and between 10^−6 and 10^−8 for . We can see that the value of the order has little influence on the MSE. Considering the parameter , the MSEs attain their minimum for when , for when , and for when . As in Example 4.1, using method (2.12)-(2.13) we find that the optimal bandwidth lies in the interval [0.025, 0.03] for and given . Furthermore, we have tabulated the absolute errors (AEs) at for different combinations of the parameters for and , respectively. Given , , 5, the magnitude of the AE at the point lies between 10^−3 and 10^−6, and given , , 5, the magnitude of the AE at the point lies between 10^−5 and 10^−7. Here, is the 2 × 2 identity matrix. The fit is illustrated in Figure 2, produced using MATLAB 7.0.

#### 5. Conclusions

In this paper, the LPR method has been applied to the numerical solution of certain kinds of PDEs. The LPR method has also been exploited to solve fifth-order boundary value problems [4] and Fredholm and Volterra integral equations [5], for which the maximum absolute errors are very small and the calculations are simple and feasible. Compared with the spline method [19] and the AGE method [20], the LPR method converges to the solution with fewer nodes, and its maximum absolute errors are somewhat smaller. Moreover, it is more flexible, since problems can be solved simply by adjusting the parameters . However, the LPR method has the shortcoming of requiring more computation time than the spline method [19] on the same problems; this issue remains to be addressed in future work. In any case, we can conclude that the LPR solution is a powerful tool for the numerical solution of differential equations with initial and boundary values.

#### Acknowledgment

This work was supported by Natural Science Foundation Projects of CQ CSTC of China (CSTC2010BB2310, CSTC2011jjA40033, CSTC2012jjA00037) and Chongqing CMEC Foundations of China (KJ080614, KJ120829).

#### References

- M. Dehghan and A. Shokri, “A numerical method for solution of the two-dimensional sine-Gordon equation using the radial basis functions,” *Mathematics and Computers in Simulation*, vol. 79, no. 3, pp. 700–715, 2008.
- A. Q. M. Khaliq and E. H. Twizell, “A family of second order methods for variable coefficient fourth order parabolic partial differential equations,” *International Journal of Computer Mathematics*, vol. 23, pp. 63–76, 1987.
- H. Caglar and N. Caglar, “Fifth-degree B-spline solution for a fourth-order parabolic partial differential equations,” *Applied Mathematics and Computation*, vol. 201, no. 1-2, pp. 597–603, 2008.
- H. Caglar and N. Caglar, “Solution of fifth order boundary value problems by using local polynomial regression,” *Applied Mathematics and Computation*, vol. 186, no. 2, pp. 952–956, 2007.
- H. Caglar and N. Caglar, “Numerical solution of integral equations by using local polynomial regression,” *Journal of Computational Analysis and Applications*, vol. 10, no. 2, pp. 187–195, 2008.
- J. Fan and I. Gijbels, *Local Polynomial Modelling and Its Applications*, vol. 66, Chapman & Hall, London, UK, 1996.
- J. Fan and Q. Yao, *Nonlinear Time Series*, Springer, New York, NY, USA, 2003.
- J. Fan and I. Gijbels, “Adaptive order polynomial fitting: bandwidth robustification and bias reduction,” *Journal of Computational and Graphical Statistics*, vol. 4, no. 3, pp. 213–227, 1995.
- J. Fan and I. Gijbels, “Data-driven bandwidth selection in local polynomial fitting: variable bandwidth and spatial adaptation,” *Journal of the Royal Statistical Society B*, vol. 57, no. 2, pp. 371–394, 1995.
- J. Fan, N. E. Heckman, and M. P. Wand, “Local polynomial kernel regression for generalized linear models and quasi-likelihood functions,” *Journal of the American Statistical Association*, vol. 90, no. 429, pp. 141–150, 1995.
- L. Su and F. Li, “Deconvolution of defocused image with multivariate local polynomial regression and iterative Wiener filtering in DWT domain,” *Mathematical Problems in Engineering*, vol. 2010, Article ID 605241, 14 pages, 2010.
- L. Su, Y. Zhao, and T. Yan, “Two-stage method based on local polynomial fitting for a linear heteroscedastic regression model and its application in economics,” *Discrete Dynamics in Nature and Society*, vol. 2012, Article ID 696927, 17 pages, 2012.
- L.-Y. Su, “Prediction of multivariate chaotic time series with local polynomial fitting,” *Computers & Mathematics with Applications*, vol. 59, no. 2, pp. 737–744, 2010.
- L. Su, Y. Ma, and J. Li, “Application of local polynomial estimation in suppressing strong chaotic noise,” *Chinese Physics B*, vol. 21, no. 2, Article ID 020508, 2012.
- M. B. Kennel, R. Brown, and H. D. I. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” *Physical Review A*, vol. 45, no. 6, pp. 3403–3411, 1992.
- S. Boccaletti, D. L. Valladares, L. M. Pecora, H. P. Geffert, and T. Carroll, “Reconstructing embedding spaces of coupled dynamical systems from multivariate data,” *Physical Review E*, vol. 65, no. 3, Article ID 035204, pp. 1–4, 2002.
- H. Kantz and T. Schreiber, *Nonlinear Time Series Analysis*, vol. 7, Cambridge University Press, Cambridge, UK, 1997.
- A. M. Fraser and H. L. Swinney, “Independent coordinates for strange attractors from mutual information,” *Physical Review A*, vol. 33, no. 2, pp. 1134–1140, 1986.
- T. Aziz, A. Khan, and J. Rashidinia, “Spline methods for the solution of fourth-order parabolic partial differential equations,” *Applied Mathematics and Computation*, vol. 167, no. 1, pp. 153–166, 2005.
- D. J. Evans and W. S. Yousif, “A note on solving the fourth order parabolic equation by the AGE method,” *International Journal of Computer Mathematics*, vol. 40, pp. 93–97, 1991.