Abstract
Local polynomial regression (LPR) is applied to solve partial differential equations (PDEs). The classical analytical approaches to such problems, separation of variables and eigenfunction expansion, rarely yield closed-form solutions, so numerical solutions must usually be sought. In this paper, two test problems are considered for the numerical illustration of the method, and comparisons are made between the exact solutions and the results of the LPR. The results of applying the method to these PDEs reveal that LPR possesses very high accuracy, adaptability, and efficiency; more importantly, the numerical illustrations indicate that the new method is considerably more efficient than the B-spline and AGE methods derived for the same purpose.
1. Introduction
There are several effective and convenient numerical methods for partial differential equation problems with initial and boundary values; for example, radial basis functions are used to solve the two-dimensional sine-Gordon equation in [1], a family of second-order methods is used to solve variable-coefficient fourth-order parabolic partial differential equations in [2], and a fifth-degree B-spline solution is available for a fourth-order parabolic partial differential equation in [3]. Recently, H. Caglar and N. Caglar [4] used the local polynomial regression (LPR) method for the numerical solution of fifth-order boundary value problems. Moreover, they solved linear Fredholm and Volterra integral equations using the LPR method [5]. In this paper, the LPR method is applied to some PDEs with initial and boundary values, and the numerical results demonstrate that local polynomial fitting is accurate, simple, and efficient. First, we consider the numerical approximation of a simple nonhomogeneous PDE with initial values using local polynomial regression:
2. Bivariate Local Polynomial Regression
Bivariate local polynomial regression is an attractive method from both theoretical and practical points of view. The multivariate local polynomial method has a small mean squared error compared with the Nadaraya-Watson estimator, which leads to an undesirable form of the bias, and the Gasser-Muller estimator, which pays a price in variance when dealing with a random design model. Multivariate local polynomial fitting also has other advantages. The method adapts to various types of designs, such as random and fixed designs and highly clustered and nearly uniform designs. Furthermore, there is an absence of boundary effects: the bias at the boundary automatically stays of the same order as in the interior, without the use of specific boundary kernels. The local polynomial approximation approach is appealing on general scientific grounds; the least squares principle to be applied opens the way to a wealth of statistical knowledge and thus to easy generalizations. All of the above-mentioned assertions and advantages can be found in the literature [6–10]. The basic idea of multivariate local polynomial regression is also presented in [11–14]. In this section, we briefly outline the extension of local polynomial fitting to bivariate regression.
Suppose that the state vector observed at the point $\mathbf{x} = (x_1, x_2)^{T}$ follows the model $Y = m(\mathbf{X}) + \varepsilon$, so that we are given data pairs $\{(\mathbf{X}_i, Y_i)\}_{i=1}^{n}$. Our purpose is to obtain the estimation $\hat{m}(\mathbf{x})$. In this paper, we use the $p$th-order multivariate local polynomial to predict the value at the fixed point $\mathbf{x}$. The polynomial function can be described, for $\mathbf{z}$ near $\mathbf{x}$, as
$$m(\mathbf{z}) \approx \sum_{0 \le |\mathbf{j}| \le p} \frac{1}{\mathbf{j}!}\, D^{\mathbf{j}} m(\mathbf{x})\,(\mathbf{z}-\mathbf{x})^{\mathbf{j}} \equiv \sum_{0 \le |\mathbf{j}| \le p} \beta_{\mathbf{j}}(\mathbf{x})\,(\mathbf{z}-\mathbf{x})^{\mathbf{j}},$$
where $p$ is the order of the expansion, $\mathbf{j} = (j_1, j_2)$ is a multi-index with $|\mathbf{j}| = j_1 + j_2$, and $(\mathbf{z}-\mathbf{x})^{\mathbf{j}} = (z_1-x_1)^{j_1}(z_2-x_2)^{j_2}$.
In the bivariate regression method, the behavior of $m$ at nearby points on the attractor is assumed to be the same as at $\mathbf{x}$, with influence ordered by distance. Using the $n$ pairs $(\mathbf{X}_i, Y_i)$, for which the values are already known, the coefficients $\beta_{\mathbf{j}}$ are determined by minimizing the weighted sum of squares
$$\sum_{i=1}^{n}\Big(Y_i - \sum_{0 \le |\mathbf{j}| \le p} \beta_{\mathbf{j}}\,(\mathbf{X}_i - \mathbf{x})^{\mathbf{j}}\Big)^{2} K_{H}(\mathbf{X}_i - \mathbf{x}).$$
For this weighted least squares problem, when $\mathbf{X}^{T} W \mathbf{X}$ is invertible, the solution can be described by
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{T} W \mathbf{X})^{-1} \mathbf{X}^{T} W \mathbf{y},$$
where $\mathbf{X}$ is the design matrix whose $i$th row contains the monomials $(\mathbf{X}_i - \mathbf{x})^{\mathbf{j}}$, $0 \le |\mathbf{j}| \le p$, $W = \operatorname{diag}\{K_H(\mathbf{X}_1 - \mathbf{x}), \ldots, K_H(\mathbf{X}_n - \mathbf{x})\}$, and $\mathbf{y} = (Y_1, \ldots, Y_n)^{T}$. Then we can get the estimation $\hat{m}(\mathbf{x}) = \mathbf{e}_1^{T}\hat{\boldsymbol{\beta}}$, where $\mathbf{e}_1$ is the first unit vector. There remain several important issues, namely the bandwidth, the order of the multivariate local polynomial, and the kernel function, which have to be discussed.
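As an illustrative sketch (not the authors' code; the sample function, sample size, and bandwidth below are assumptions), the weighted least squares estimator above can be implemented pointwise as a local quadratic ($p = 2$) fit with a spherical Epanechnikov weight:

```python
import numpy as np

def epanechnikov_weight(d):
    """Spherical Epanechnikov weight, proportional to (1 - ||d||^2)_+."""
    s = np.sum(d * d, axis=-1)
    return np.maximum(1.0 - s, 0.0)

def local_quadratic_fit(X, y, x0, h):
    """Estimate m(x0) from samples (X_i, y_i) by a local quadratic
    (p = 2) weighted least squares fit with bandwidth matrix H = h*I_2.
    Returns the intercept of (X^T W X)^{-1} X^T W y."""
    d = (X - x0) / h                       # scaled displacements
    w = epanechnikov_weight(d)             # kernel weights K_H(X_i - x0)
    u, v = d[:, 0], d[:, 1]
    # Design matrix: all monomials of degree <= 2 in (u, v).
    D = np.column_stack([np.ones_like(u), u, v, u * u, u * v, v * v])
    A = D.T @ (w[:, None] * D)             # X^T W X
    b = D.T @ (w * y)                      # X^T W y
    beta = np.linalg.solve(A, b)
    return beta[0]                         # intercept estimates m(x0)

# Hypothetical smooth target sampled at random design points.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * np.exp(-X[:, 1])
est = local_quadratic_fit(X, y, np.array([0.5, 0.5]), h=0.3)
```

On this noiseless sample, `est` should be close to the true value $\sin(\pi/2)e^{-1/2} \approx 0.607$.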
2.1. Parameters Estimations and Selections
There are many algorithms for choosing the embedding dimension [15, 16]. For a univariate series, a popular method for finding the embedding dimension is the so-called false nearest-neighbor method [17, 18]. Here, we apply this method to the bivariate case.
For the multivariate local polynomial estimator, there are three important issues which have a significant influence on the prediction accuracy and computational complexity. First of all, there is the choice of the bandwidth matrix, which plays a rather crucial role. The bandwidth matrix is taken to be diagonal; for simplification, we set $H = hI_2$, where $I_2$ is the identity matrix of order two. In theory, there exists an optimal bandwidth $h_{\mathrm{opt}}$ in the mean squared error sense.
But this optimal bandwidth cannot be solved for directly, so we obtain an asymptotically optimal bandwidth by a search: as the bandwidth varies from small to large, we compare the values of the objective function and choose the smallest bandwidth that attains the minimum value of the objective function. That smallest bandwidth is taken as the optimal bandwidth.
Given a search interval $[h_{\min}, h_{\max}]$, where $h_{\min}$ is the minimum bandwidth, and a coefficient of expansion $c > 1$, we search for the bandwidth that minimizes the objective function over this interval, where the objective function is the mean squared error (MSE). First, we set $h = h_{\min}$ and then repeatedly enlarge it by the coefficient of expansion, so that the candidate bandwidths are $h_{\min}, ch_{\min}, c^{2}h_{\min}, \ldots$. When the search stops, the candidate attaining the minimum of the objective function is taken as the optimal bandwidth; in practice, the MSE is computed over the predicted points. Compared with other methods, this approach is more convenient.
In order to get closer to the ideal optimal bandwidth, we search once again on a narrowed interval, on the basis of the previous searching process. Suppose $h^{*}$ is the bandwidth that was optimal in the above search. The small interval between its neighboring candidates is then divided into equal subintervals, and among these candidate bandwidths, the one that minimizes the objective function is taken as the optimal bandwidth.
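This two-stage, coarse-then-fine search can be sketched as follows (an illustrative implementation; the objective function, interval endpoints, and expansion coefficient below are assumptions):

```python
import numpy as np

def select_bandwidth(objective, h_min, h_max, c=1.2, n_refine=10):
    """Two-stage bandwidth search: a coarse geometric sweep over
    [h_min, h_max] with expansion coefficient c, followed by a fine
    uniform search around the coarse minimizer."""
    # Stage 1: geometric grid h_min, c*h_min, c^2*h_min, ...
    coarse = []
    h = h_min
    while h <= h_max:
        coarse.append(h)
        h *= c
    scores = [objective(hh) for hh in coarse]
    k = int(np.argmin(scores))
    # Stage 2: refine between the neighbours of the coarse minimizer.
    lo = coarse[max(k - 1, 0)]
    hi = coarse[min(k + 1, len(coarse) - 1)]
    fine = np.linspace(lo, hi, n_refine)
    return min(fine, key=objective)

# Stand-in objective with a known minimizer at h = 0.37.
obj = lambda h: (h - 0.37) ** 2
h_opt = select_bandwidth(obj, 0.01, 1.0)
```

In applications, `objective` would evaluate the MSE of the LPR fit for a given bandwidth, which is far more expensive than this toy quadratic.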
Another issue in multivariate local polynomial fitting is the choice of the order $p$ of the polynomial. Since the modeling bias is primarily controlled by the bandwidth, however, this issue is less crucial. For a given bandwidth $h$, a large value of $p$ would be expected to reduce the modeling bias, but would cause a large variance and considerable computational cost. Since the bandwidth is used to control the modeling complexity, and due to the sparsity of local data in multidimensional space, a higher-order polynomial is rarely used; we therefore apply local quadratic regression to fit the model (i.e., $p = 2$). The third issue is the selection of the kernel function. In this paper, we choose as our kernel function the optimal spherical Epanechnikov kernel [6, 7], which minimizes the asymptotic mean squared error (MSE) of the resulting multivariate local polynomial estimators.
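For concreteness, the spherical Epanechnikov kernel in two dimensions can be written as follows (a minimal sketch; the normalizing constant $2/\pi$ makes the kernel a density on the unit disk):

```python
import numpy as np

def spherical_epanechnikov_2d(u):
    """Spherical (radially symmetric) Epanechnikov kernel on R^2:
    K(u) = (2/pi) * (1 - ||u||^2) for ||u|| < 1, and 0 otherwise.
    The constant 2/pi makes K integrate to 1 over the unit disk."""
    s = np.sum(np.atleast_2d(u) ** 2, axis=-1)
    return np.where(s < 1.0, (2.0 / np.pi) * (1.0 - s), 0.0)
```

Note that the kernel vanishes outside the unit ball, so only sample points within distance $h$ of the fitting point (after scaling by the bandwidth matrix $H = hI_2$) receive nonzero weight.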
3. LPR Solutions for PDE
For the following nonhomogeneous PDE with boundary values:
Suppose the unknown solution is $u = u(x, t)$; our purpose is to obtain the estimation $\hat{u}(x, t)$ given pairs of points $\big((x_i, t_j), u_{ij}\big)$, where $i = 1, \ldots, n$ and $j = 1, \ldots, m$. Then, as in Section 2, $\hat{u}$ is described by a local polynomial expansion, and the corresponding minimization function is the weighted least squares criterion of Section 2 applied to the data $\big((x_i, t_j), u_{ij}\big)$. In order to find the expression for $\hat{u}$, we first need to find the elements of the design matrix.
Since the $p$th-order Taylor expansion of a bivariate function has $(p+1)(p+2)/2$ terms, we can obtain expressions (3.5), (3.6), and (3.7):
where $\mathbf{X}$ is the design matrix:
Substituting (3.6) and (3.7) into (3.5), we can now get the estimation $\hat{u}(x, t) = \mathbf{e}_1^{T}(\mathbf{X}^{T} W \mathbf{X})^{-1} \mathbf{X}^{T} W \mathbf{u}$, where $\mathbf{e}_1$ is the first unit vector.
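A hedged sketch of the resulting scheme (the target function, grid, and bandwidth are illustrative assumptions, not the paper's test problems): sample $u$ at grid points $(x_i, t_j)$, then at each evaluation point solve the local weighted least squares problem and keep the fitted intercept:

```python
import numpy as np

def lpr_surface(points, values, eval_pts, h):
    """Pointwise local quadratic estimator u_hat(x, t), evaluated at
    each row of eval_pts, from samples (points, values), with
    bandwidth matrix H = h * I_2 and an Epanechnikov weight."""
    est = np.empty(len(eval_pts))
    for k, z in enumerate(eval_pts):
        d = (points - z) / h
        w = np.maximum(1.0 - np.sum(d * d, axis=1), 0.0)  # kernel weights
        u, v = d[:, 0], d[:, 1]
        D = np.column_stack([np.ones_like(u), u, v, u * u, u * v, v * v])
        A = D.T @ (w[:, None] * D)         # X^T W X
        b = D.T @ (w * values)             # X^T W u
        est[k] = np.linalg.solve(A, b)[0]  # intercept = u_hat at z
    return est

# Illustrative smooth u(x, t), sampled on a coarse 21 x 21 grid.
xg, tg = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
pts = np.column_stack([xg.ravel(), tg.ravel()])
vals = np.exp(-pts[:, 1]) * np.sin(np.pi * pts[:, 0])
query = np.array([[0.3, 0.4], [0.7, 0.2]])
u_hat = lpr_surface(pts, vals, query, h=0.2)
```

The bandwidth must be large enough that the kernel captures more than six grid points around each evaluation point, or the 6-by-6 system becomes singular.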
4. Numerical Illustrations and Discussions
In this section we consider the numerical results obtained by the schemes discussed previously by applying them to the following second-order and fourth-order initial boundary value problems. All computations are carried out using MATLAB 7.0.
Furthermore, in order to evaluate accuracy and effectiveness, we apply the following indices, namely the mean squared prediction error (MSE),
$$\mathrm{MSE} = \frac{1}{N}\sum_{k=1}^{N}\big(\hat{u}_k - u_k\big)^{2},$$
and the absolute error
$$\mathrm{AE} = \big|\hat{u}(x_i, t_j) - u(x_i, t_j)\big|.$$
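These two indices can be computed as follows (a minimal sketch, with $\hat{u}$ and $u$ given on the same set of grid points):

```python
import numpy as np

def mse(u_hat, u_exact):
    """Mean squared prediction error over all grid points."""
    return float(np.mean((np.asarray(u_hat) - np.asarray(u_exact)) ** 2))

def abs_error(u_hat, u_exact):
    """Pointwise absolute error |u_hat - u_exact|."""
    return np.abs(np.asarray(u_hat) - np.asarray(u_exact))
```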
Example 4.1. First, we consider the following second-order nonhomogeneous PDEs:
The exact solution of problem (4.3) is known and is used for comparison. We solve Example 4.1 by choosing the various values of the parameters presented in Table 1, and the errors in the solutions are computed by our method (3.9). For the first parameter setting, the magnitude of the MSE is between $10^{-3}$ and $10^{-4}$; for the second, between $10^{-4}$ and $10^{-7}$; and for the third, between $10^{-5}$ and $10^{-8}$. We conclude that the MSE decreases as the number of nodes increases, while the order has little influence on the MSE when the number of nodes is large. Considering the bandwidth parameter, the MSE attains its minimum at a different bandwidth for each number of nodes, and we find that the optimal bandwidth lies in the interval [0.025, 0.04] using method (2.12)-(2.13) for the node numbers considered, including 30 and 50; the same situation holds for the remaining setting. Here, $H = hI_2$, where $I_2$ is the identity matrix of order two. The fitting surface is depicted in Figure 1, produced in MATLAB 7.0.
Example 4.2. Next we consider the following fourth-order nonhomogeneous PDEs: with the initial conditions, and the boundary conditions at and of the form
The exact solution of problems (4.4), (4.5), and (4.6) is known; this problem has been solved by several authors [19, 20]. Here, we solve Example 4.2 by choosing the various values of the parameters presented in Table 2. The errors in the solutions are computed by our method (3.9); the results of the spline method for three choices of its parameter [19] and of the AGE method for two choices of its parameters [20] are also presented in Table 2. The magnitude of the MSE is between $10^{-4}$ and $10^{-6}$ for one order and between $10^{-6}$ and $10^{-8}$ for the other; thus the order has only a small influence on the MSE. Considering the bandwidth parameter, the MSE attains its minimum at a different bandwidth for each setting, and, similarly to Example 4.1, the optimal bandwidth lies in the interval [0.025, 0.03] using method (2.12)-(2.13). Furthermore, we have tabulated the absolute errors (AEs) at selected points for different combinations of the parameters: in one setting the magnitude of the AE is between $10^{-3}$ and $10^{-6}$, and in the other it is between $10^{-5}$ and $10^{-7}$. Here, $H = hI_2$, where $I_2$ is the identity matrix of order two. The fitting surface is illustrated in Figure 2, produced in MATLAB 7.0.
5. Conclusions
In this paper, the LPR method has been applied to the numerical solution of several kinds of PDEs. The LPR method has also been used to solve fifth-order boundary value problems [4] and Fredholm and Volterra integral equations [5], where the maximum absolute errors are very small and the calculation process is simple and feasible. Compared with the spline method [19] and the AGE method [20], the LPR method converges to the solution with fewer nodes, and its maximum absolute errors are somewhat smaller. Moreover, it is more flexible, since problems can be resolved simply by adjusting the parameters. The LPR method does have a shortcoming: it needs more computation time than the spline method [19] on the same problems, which we intend to address in future work. In any case, we can conclude that the LPR solution is a powerful tool for the numerical solution of differential equations with initial and boundary values.
Acknowledgment
This work was supported by Natural Science Foundation Projects of CQ CSTC of China (CSTC2010BB2310, CSTC2011jjA40033, CSTC2012jjA00037) and Chongqing CMEC Foundations of China (KJ080614, KJ120829).