Abstract

Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is expressed as a linear combination of the basis functions and is independent of the particular choice of basis. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and the external grid points is proposed; in contrast to the traditional interval wavelet collocation method, both kinds of grid points are chosen dynamically here. The numerical experiments show that our method is better than the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions.

1. Introduction

The sparse representation of functions via a linear combination of a small number of basis functions has recently received a lot of attention in several mathematical fields such as approximation theory as well as signal and image processing [1]. The advantage of the sparse grid approach is that it can be extended to nonsmooth solutions by adaptive refinement methods; that is, it can capture the steep waves appearing in the solution of the PDEs. In fact, the boundary conditions can also be viewed as nonsmooth parts of the solution, especially Neumann boundary conditions. Furthermore, the approach can be generalized from piecewise linear to high-order polynomials, and more sophisticated basis functions such as interpolets, prewavelets, or wavelets can be used in a straightforward way [2]. In practice, the standard piecewise linear multiscale basis in one dimension, that is, the Faber-Schauder basis, can be viewed as a scaling function in wavelet analysis. As an interpolation operator, such a basis function acts as the Dirac delta function when operating on itself and its derivatives [3]. So, interpolating wavelets such as the Shannon wavelet, the Shannon-Gabor wavelet, the Haar wavelet, and the autocorrelation function of the Daubechies scaling function can be taken directly as the basis functions in the construction of the sparse grid approach.

The Faber-Schauder and Haar scaling functions lack smoothness, so the derivatives of the function to be approximated cannot be represented exactly by them. The autocorrelation function of the Daubechies scaling function has been widely used in various numerical methods for PDEs such as the wavelet collocation method and the sparse grid method. The Daubechies scaling functions possess almost all the desirable numerical properties, such as orthogonality, smoothness, and compact support, which help to improve numerical accuracy and efficiency. However, the autocorrelation function of the Daubechies scaling function loses the orthogonality. In addition, the Daubechies scaling function has no exact analytical expression, which introduces additional error into the approximate solution produced by Daubechies wavelet numerical methods.

Cattani studied the properties of the Shannon wavelet function [4], which is orthogonal, continuous, and differentiable. It also has the advantage over the Hermite DAF of being an interpolating function, producing matrix equations that have the potential to be relatively sparse. In addition, the second-order approximation of a C2-function based on Shannon wavelet functions has been given [5]; the approximation is compared with the wavelet reconstruction formula, and the approximation error is computed explicitly [6]. The advantages of the Shannon wavelet have been illustrated in solving PDEs in engineering [7]; it avoids shortcomings of the Daubechies wavelet such as its lack of the interpolation property. Furthermore, Cattani was the first to study fractional calculus problems with the Shannon wavelet. A perceived disadvantage of the Shannon scaling function is that it decays to zero quite slowly, so a large number of nodal values contribute significantly when calculating the derivatives of the function to be approximated. It is for that reason that Hoffman et al. constructed the Shannon-Gabor wavelet [8] using a Gaussian window function, which in some ways improves the approximation of a Dirac delta function compared with the Shannon wavelet. However, the presence of the Gaussian window destroys the orthogonality possessed by the Shannon wavelet. In order to test the multilevel interpolation operator constructed in this paper, the Faber-Schauder and Shannon scaling functions and the autocorrelation function of the Daubechies scaling function are each taken as the basis employed in the multilevel interpolator to discretize the PDEs in the experiments.

There are many ways to solve the system of nonlinear ODEs obtained from the discretization of the nonlinear PDEs by the multilevel interpolator. Compared with the finite difference method, the retained grid points are sparse and the dimensionality of the system of ODEs is smaller. This helps to improve the efficiency, but small changes in the condition number and in the smoothness of the function to be approximated can destroy the accuracy of a numerical solution obtained by a traditional difference scheme. The variational iteration method proposed by Inokuti et al. in 1978 [9], which has been developed by He [10, 11] and widely used in various fields [12–14], gives the solution of nonlinear problems as an infinite series that usually converges rapidly to an accurate solution. By means of the precise integration method (PIM), the VIM has been extended to solve systems of nonlinear ODEs by Mei and Zhang [15]. In fact, both PIM and VIM are analytical methods, so the impact of the form of the ODE system on the choice of the sparse grid points can be neglected.

The dynamic choice of the inner grid points depends on the smoothness and the gradient of the solution at each point, while the choice of the external grid points depends on the boundary conditions. A good choice of the external grid points can restrict the boundary effect effectively. Most existing schemes are based on an extension of the solution function, such as the interval wavelet in the wavelet collocation method [16] and the Lagrange multiplier in sparse grid approaches [17]. In most cases, the smoothness and the gradient near the boundary vary dynamically, for example under Neumann boundary conditions, and the extension method based on the Lagrange multiplier is not suited to changing the external grid points dynamically.

In our approach we want to achieve several goals so that the solution is sparse and is a good approximation. The first is to construct a multilevel interpolation operator with which the adaptive sparse grid approach can be reduced to a linear combination of interpolation operators. The operator should be independent of the basis functions, so that different basis functions can be used in the interpolation operator for different problems. The second goal is to construct an adaptive sparse grid approach by combining the multilevel interpolation operator and the VIM. The last goal is to construct a dynamic choice scheme for the external grid points, so that both the inner and the external grid points evolve dynamically with the solution, especially for PDEs with Neumann boundary conditions.

2. Multilevel Interpolator on Sparse Grids

2.1. Interpolating Multiresolution Analysis

Let us start with the interpolating multiresolution analysis [18] that is necessary for a detailed discussion of sparse grids for the purposes of interpolation or approximation. Let $\phi$ be any interpolating basis function, such as the Shannon or Faber-Schauder scaling function or the autocorrelation function of the Daubechies scaling function. This mother of all basis functions can be used to generate an arbitrary $\phi_{j,k}$ by dilation and translation; that is, $\phi_{j,k}(x) = \phi(2^{j}x - k)$, $j, k \in \mathbb{Z}$. It is easy to check that $\phi_{j,k}(x_{j,n}) = \delta_{k,n}$ at the dyadic points $x_{j,n} = n/2^{j}$. Introducing the spaces $V_{j} = \operatorname{span}\{\phi_{j,k}, k \in \mathbb{Z}\}$, for convenience of notation we use the index $j$ to denote the level of resolution and the index $k$ to denote the location in physical space. The sequence $\{V_{j}\}$ is a multiresolution analysis, which is an increasing sequence of closed subspaces of $L^{2}(\mathbb{R})$. We call such a structure an interpolating multiresolution analysis due to the fact that the function $\phi$ verifies what we call the interpolation property, that is, $\phi(k) = \delta_{0,k}$. We may then define an interpolation operator $(I_{j}f)(x) = \sum_{k} f(x_{j,k})\,\phi_{j,k}(x)$. It is obvious that $\{\phi_{j,k}\}_{k}$ is just the nodal point basis of the finite-dimensional space $V_{j}$. Additionally, we introduce the hierarchical increments $W_{j}$, where $V_{j+1} = V_{j} \oplus W_{j}$.

Let $\psi_{j,k} = \phi_{j+1,2k+1}$; we may remark that the function $\psi_{j,k}$ verifies the interpolation property on the odd points of level $j+1$, and it is obvious that $W_{j} = \operatorname{span}\{\psi_{j,k}, k \in \mathbb{Z}\}$. Such a multiresolution analysis has been extensively investigated in [9]. According to this theory, any function $f$ can be represented approximately as

$$f(x) \approx \sum_{k} f(x_{j_{0},k})\,\phi_{j_{0},k}(x) + \sum_{j=j_{0}}^{J-1}\sum_{k} \alpha_{j,k}\,\psi_{j,k}(x). \tag{7}$$

The coefficients $f(x_{j_{0},k})$ and $\alpha_{j,k}$ are defined as the nodal values on the coarsest level and as

$$\alpha_{j,k} = f(x_{j+1,2k+1}) - (I_{j}f)(x_{j+1,2k+1}),$$

respectively. This shows that the coefficient $\alpha_{j,k}$ measures the lack of approximation of $f$ by $I_{j}f$ [19].
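To make the hierarchical construction concrete, the following minimal sketch (Python with a hat/Faber-Schauder basis on [0, 1]; all identifiers are illustrative, not the paper's notation) computes the level-$j_{0}$ nodal coefficients and the hierarchical surpluses $\alpha_{j,k}$, i.e. the interpolation error at the new dyadic points of each finer level.

```python
import numpy as np

def hat(x):
    """Faber-Schauder (hat) mother function: 1 at 0, 0 outside (-1, 1)."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def interpolate(coeffs, j, x):
    """Evaluate sum_k c_k * hat(2^j x - k) at the points x."""
    k = np.arange(2**j + 1)
    return hat(2**j * x[:, None] - k[None, :]) @ coeffs

def hierarchical_surpluses(f, j0, J):
    """Nodal values on level j0 plus surpluses alpha_{j,k} for j0 <= j < J."""
    c0 = f(np.arange(2**j0 + 1) / 2**j0)            # coarse nodal coefficients
    alphas, coeffs = [], c0.copy()
    for j in range(j0, J):
        new_pts = (2 * np.arange(2**j) + 1) / 2**(j + 1)      # odd points of level j+1
        alphas.append(f(new_pts) - interpolate(coeffs, j, new_pts))  # lack of approximation
        coeffs = f(np.arange(2**(j + 1) + 1) / 2**(j + 1))    # nodal values on level j+1
    return c0, alphas

if __name__ == "__main__":
    f = lambda x: np.sin(np.pi * x) ** 2
    c0, alphas = hierarchical_surpluses(f, j0=2, J=6)
    for lvl, a in enumerate(alphas, start=3):       # first surpluses live on level j0+1 = 3
        print(f"level {lvl}: max |alpha| = {np.abs(a).max():.2e}")  # decays with level
```

For a smooth function the surpluses decay rapidly with the level, which is exactly what makes a sparse selection of coefficients possible.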

2.2. Multilevel Interpolation Operator on Sparse Grids

Equation (7) is an approximate representation of the function $f$, which is not unique since the set of functions involved is not linearly independent. In this section, we try to determine the sparsest representation, that is, a representation with a maximal number of vanishing coefficients among the $\alpha_{j,k}$. The conventional scheme in signal processing, acquiring the entire signal and then compressing it, was questioned by Donoho and Elad [20]; this technique uses tremendous resources to acquire often very large signals, only to throw information away during compression. A popular remedy is the compressed sensing technique proposed by Donoho [21]. In contrast, we try to achieve the same goals by constructing a multilevel interpolation operator that combines the interpolating multiresolution analysis described above with wavelet transform theory [22].

Let us start with the definition of the interpolation operator given in (9), in which the interpolating basis plays the role of the interpolation function. According to wavelet transform theory, the function $u(x, t)$ can be expressed approximately as in (10), where the interpolation wavelet transform coefficients are given by (11) and $R$ is the restriction operator defined in (12). Supposing (13) and substituting it into (11) yields (14)-(15). Substituting the restriction operator (12) and the wavelet transform coefficients (13) into (10), the approximate expression of the solution is obtained as (16). According to the definition of the interpolation operator (9), it is easy to obtain the expression of the multilevel interpolation operator (17), and the corresponding $n$th-order derivative of the interpolation operator is given in (18). Substituting (17) and (18) into the nonlinear PDEs changes them into a system of nonlinear ODEs, whose approximate analytical solution can be obtained with the variational iteration method (VIM).
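In the adaptive setting, the restriction of the wavelet transform coefficients is typically realized by keeping only those collocation points whose hierarchical coefficient exceeds a threshold. A minimal, self-contained sketch of that idea (illustrative Python for the hat basis; the threshold name `eps` and the helpers are our own, not the paper's notation) is given below.

```python
import numpy as np

def sparse_points(f, j0, J, eps):
    """Dynamically retained collocation points: keep an odd point of level j+1
    only if its hierarchical surplus |f(mid) - interpolant(mid)| >= eps."""
    kept = [(j0, k) for k in range(2**j0 + 1)]          # keep the full coarse level
    for j in range(j0, J):
        grid = np.arange(2**j + 1) / 2**j               # level-j collocation points
        vals = f(grid)
        mids = (grid[:-1] + grid[1:]) / 2               # odd points of level j+1
        surplus = f(mids) - (vals[:-1] + vals[1:]) / 2  # hat-basis interpolation error
        for k in np.flatnonzero(np.abs(surplus) >= eps):
            kept.append((j + 1, 2 * k + 1))
    return kept

if __name__ == "__main__":
    f = lambda x: np.tanh(50 * (x - 0.5))               # steep front at x = 0.5
    pts = sparse_points(f, j0=3, J=10, eps=1e-3)
    print(f"retained {len(pts)} of {2**10 + 1} points")  # points cluster near the front
```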

3. Coupling Technique of VIM and Sparse Grid Method for Nonlinear PDEs

As mentioned above, the multilevel interpolation operator is independent of the basis functions; that is, any basis function with the interpolation property can be employed in (17) directly. But a basis function that lacks the required order of derivative cannot be employed in (18) directly. In this section, we consider parabolic PDEs with second-order spatial derivatives of the form (19), posed on a definition domain in the $(x, t)$ plane.

Therefore, two cases will be discussed in detail in the following. One is that the basis function employed in (7) possesses a second-order derivative; the other addresses the Faber-Schauder scaling function, which does not.

3.1. Basis Function with C2 Continuity

Substituting (16) into (19), it is easy to obtain nonlinear matrix differential equations of the form (20), where $\mathcal{L}$ is a linear operator, $\mathcal{N}$ a nonlinear operator, $g$ an inhomogeneous term, $V$ an $n$-dimensional unknown vector, and the dot stands for differentiation with respect to the time variable $t$. For convenience, (20) can be rewritten as
$$\dot{V} = HV + f(V, t), \tag{21}$$
where $H$ is a given constant matrix and $f(V, t)$ is an $n$-dimensional nonlinear external force vector.

According to the VIM, we can write down a correction functional as follows:
$$V_{n+1}(t) = V_{n}(t) + \int_{0}^{t} \lambda(\tau)\left[\dot{V}_{n}(\tau) - HV_{n}(\tau) - f(\tilde{V}_{n}(\tau), \tau)\right] d\tau, \tag{22}$$
where $\lambda$ is a general Lagrange vector multiplier [23] which can be identified optimally via variational theory. The subscript $n$ denotes the $n$th approximation, and $\tilde{V}_{n}$ is considered as a restricted variation [24–27]; that is, $\delta\tilde{V}_{n} = 0$.

Using the VIM, the stationary conditions of (22) can be obtained as follows:
$$\lambda'(\tau) + \lambda(\tau)H = 0, \qquad I + \lambda(\tau)\big|_{\tau = t} = 0.$$
The Lagrange vector multiplier can therefore be readily identified as $\lambda(\tau) = -e^{H(t-\tau)}$.

As a result, we obtain the following iteration formula:
$$V_{n+1}(t) = V_{n}(t) - \int_{0}^{t} e^{H(t-\tau)}\left[\dot{V}_{n}(\tau) - HV_{n}(\tau) - f(V_{n}(\tau), \tau)\right] d\tau. \tag{25}$$

According to the VIM, we can start with an arbitrary initial approximation that satisfies the initial condition. So we take the exact analytic solution of the linear system $\dot{V} = HV$ as the initial approximation; that is,
$$V_{0}(t) = e^{Ht}V(0), \tag{26}$$
where $V(0)$ is the given initial value vector.

Substituting (26) into (25) and simplifying, we obtain (27). According to the theory of matrices, the analytical expression of the external force is now required, but it is not always available unless $f$ is approximated by a constant vector $r$ over the integration interval; that is, $f(V, \tau) \approx r$. The integration term is then
$$\int_{0}^{t} e^{H(t-\tau)}\, r\, d\tau = H^{-1}\left(e^{Ht} - I\right) r, \tag{29}$$
where the exponential matrix $e^{Ht}$ can be calculated accurately by the PIM and $I$ is the unit matrix. Substituting (29) into (27), we obtain the variational iteration formula of the matrix differential equation, which can be solved exactly by means of the precise integration method (PIM) [28].
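The iteration above can be sketched in a few lines of code. The following illustrative Python (all identifiers are our own) computes the exponential matrix by a Taylor expansion on a subinterval $\Delta t/2^{N}$ followed by $N$ doublings of its incremental part, as in the precise integration method, and then performs one VIM correction per step under the constant-force assumption used in (29); it is a sketch under these assumptions, not the paper's exact implementation.

```python
import numpy as np

def pim_expm(H, dt, N=20):
    """Precise integration of exp(H*dt): Taylor expansion on dt/2^N, then N
    doublings of the incremental part Ta = exp(H*tau) - I to limit round-off."""
    n = H.shape[0]
    A = H * (dt / 2**N)
    Ta = A + A @ A @ (np.eye(n) / 2 + A / 6 + A @ A / 24)   # exp(A) - I, 4th order
    for _ in range(N):
        Ta = 2 * Ta + Ta @ Ta                               # exp(2A) - I = 2Ta + Ta^2
    return np.eye(n) + Ta

def vim_pim_march(H, f, v0, dt, steps):
    """One VIM correction per step, starting from the exact linear solution
    V_0(t) = exp(H t) V(0) and freezing the nonlinear force at r = f(V, t)
    over each step, so the force integral equals H^{-1}(T - I) r."""
    T = pim_expm(H, dt)
    G = np.linalg.solve(H, T - np.eye(H.shape[0]))          # H^{-1}(T - I)
    v, t, out = v0.copy(), 0.0, [v0.copy()]
    for _ in range(steps):
        r = f(v, t)                                         # restricted (frozen) nonlinear term
        v = T @ v + G @ r                                   # VIM iterate V_1 at t + dt
        t += dt
        out.append(v.copy())
    return np.array(out)

if __name__ == "__main__":
    # toy system dV/dt = H V + f(V): a damped oscillator with a mild cubic nonlinearity
    H = np.array([[0.0, 1.0], [-4.0, -0.2]])
    f = lambda v, t: np.array([0.0, -0.1 * v[0] ** 3])
    sol = vim_pim_march(H, f, np.array([1.0, 0.0]), dt=0.01, steps=500)
    print("final state:", sol[-1])
```

Note that the matrices `T` and `G` depend only on `H` and the step size, so they are computed once and reused for every step.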

3.2. Basis Function with C1 Continuity

The Faber-Schauder scaling function is the typical basis of this kind. For convenience in constructing the variational equation, the time variable $t$ is discretized as $t_{i} = i\tau$, where $\tau$ is the time step and $i = 0, 1, 2, \ldots$. Then $\partial u/\partial t$ can be approximated by the difference quotient $(u^{i+1} - u^{i})/\tau$. Substituting this approximation into (19), we obtain, at each time level, a boundary value problem of elliptic type. Using the virtual displacement principle, the corresponding variational equation (32) is obtained on the Sobolev space $H^{1}$.

According to the interpolation wavelet transform theory, the variables $u^{i+1}$ and $u^{i}$ can be approximated as in (16), and their first-order derivatives follow from the interpolation operator, as in (35)-(36). Substituting (35)-(36) into (32), the sparse method for the parabolic PDEs based on the Faber-Schauder scaling function is obtained. The resulting system of equations can be solved exactly by means of the precise integration method (PIM).
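As an illustration of this weak-form route for a basis that lacks pointwise second derivatives, the sketch below (Python, piecewise-linear hat elements on a uniform grid) discretizes the model problem $u_t = u_{xx}$ by a backward time difference and the virtual-displacement (Galerkin) equation, which requires only first derivatives of the basis. The model equation, the uniform grid, and the Dirichlet ends are assumptions for the sketch, not the paper's exact system (32).

```python
import numpy as np

def hat_fem_matrices(n, h):
    """Mass and stiffness matrices for piecewise-linear (hat) elements on a
    uniform grid; only first derivatives of the basis are required."""
    M = np.zeros((n, n))
    K = np.zeros((n, n))
    for e in range(n - 1):                            # loop over elements [x_e, x_{e+1}]
        Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        idx = [e, e + 1]
        M[np.ix_(idx, idx)] += Me
        K[np.ix_(idx, idx)] += Ke
    return M, K

def backward_euler_heat(u0, h, tau, steps):
    """Weak form of u_t = u_xx with the difference quotient (u^{i+1}-u^i)/tau:
    (M + tau*K) u^{i+1} = M u^i, with homogeneous Dirichlet ends."""
    n = len(u0)
    M, K = hat_fem_matrices(n, h)
    A = M + tau * K
    A[0, :] = A[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = 1.0                         # pin the boundary values
    u = u0.copy()
    for _ in range(steps):
        b = M @ u
        b[0] = b[-1] = 0.0
        u = np.linalg.solve(A, b)
    return u

if __name__ == "__main__":
    n = 65
    x = np.linspace(0.0, 1.0, n)
    u = backward_euler_heat(np.sin(np.pi * x), x[1] - x[0], tau=1e-3, steps=200)
    print("peak value after marching:", u.max())      # decays roughly like exp(-pi^2 t)
```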

4. Dynamic Choice Scheme on the External Grid Points

Combining the multilevel interpolation operator with a threshold scheme, it is easy to obtain the sparse inner grid points dynamically: the adaptive method captures the steep gradients appearing in the solution, so the inner grid points concentrate adaptively around the points of larger gradient. PDEs in engineering are usually defined on a finite domain, so the boundary conditions can change the smoothness of the solution near the boundary. As a result, more grid points near the boundary contribute to the solution, which increases the amount of calculation. A reasonable choice of the external grid points can reduce the boundary effect and improve the precision of the solution. In this section, we give a dynamic choice scheme for the external grid points, which is derived from the concept of the interval interpolation wavelet but differs from it.

4.1. Construction of the Interval Interpolation Wavelet

In general, the interpolation basis functions defined on an interval can be represented as in (37) and (38), where $L$ is the number of external collocation points, the number of discrete points inside the definition domain is determined by the level of resolution $j$, and $[a, b]$ denotes the definition domain of the approximated function.

Equations (37) and (38) show that the interval wavelet is derived from the domain extension. The supplementary discrete points in the extended domain are called external points. The value of the approximated function at the external points can be obtained by Lagrange extrapolation method. Using the interval wavelet to approximate a function, the boundary effect can be left in the supplementary domain; that is, the boundary effect is eliminated in the definition domain.

According to (37) and (38), the interval wavelet approximant of a function $f$ can be expressed in terms of the values $f(x_{k})$ at the discrete points $x_{k}$. At the external points, these values are calculated by an extrapolation method, so the interval wavelet approximant of $f$ can be rewritten in the form (43), in which the values at the left and the right external points are obtained by Lagrange extrapolation using the internal collocation points near the corresponding boundary. So, the interval wavelet's influence on the boundary effect can be attributed to Lagrange extrapolation. It should be pointed out that we are not concerned with the reliability of the extrapolation itself; its only function is to enlarge the definition domain of the given function so that the boundary effect does not occur inside the original domain. Therefore, we can discuss the choice of $L$ by means of the Lagrange interpolation and extrapolation error remainder
$$R_{L}(x) = \frac{f^{(L+1)}(\xi)}{(L+1)!}\prod_{i=0}^{L}(x - x_{i}). \tag{44}$$
Equation (44) indicates that the approximation error is related both to the smoothness and to the gradient of the original function near the boundary. Setting different $L$ can satisfy different error tolerance requirements.
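A small sketch of the external-point construction described above is given below (Python; the value of $L$, the uniform node layout, and the function names are illustrative assumptions). The values at the $L$ points appended outside each end of the domain are Lagrange-extrapolated from the $L + 1$ interior collocation points nearest that boundary.

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange polynomial through the nodes (xs, ys) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = np.prod([(x - xj) / (xi - xj) for j, xj in enumerate(xs) if j != i])
        total += yi * w
    return total

def extend_with_external_points(x, u, L):
    """Append L external points on each side of the uniform grid x and fill their
    values by Lagrange extrapolation from the L+1 nearest interior points."""
    h = x[1] - x[0]
    x_left = x[0] - h * np.arange(L, 0, -1)
    x_right = x[-1] + h * np.arange(1, L + 1)
    u_left = [lagrange_eval(x[:L + 1], u[:L + 1], xe) for xe in x_left]
    u_right = [lagrange_eval(x[-(L + 1):], u[-(L + 1):], xe) for xe in x_right]
    return np.concatenate([x_left, x, x_right]), np.concatenate([u_left, u, u_right])

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 17)
    u = np.cos(np.pi * x)                 # du/dx = 0 at both ends (Neumann-like profile)
    xe, ue = extend_with_external_points(x, u, L=3)
    print(xe[:4], ue[:4])                 # extrapolated values to the left of x = 0
```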

4.2. Dynamic Choice Scheme of External Points in Sparse Grids Approach

This scheme consists of two steps. First, a Newton interpolation operator is employed instead of the traditional Lagrange interpolation. Second, both the error tolerance and the condition number are used as termination criteria for the dynamic choice of the external grid points. We discuss these in detail in this section.

In order to construct the dynamic choice scheme for the external grid points, Newton interpolation is introduced instead of the traditional Lagrange interpolation. It is well known that Newton interpolation is equivalent to Lagrange interpolation, but the Lagrange algorithm lacks the inheritance property that is the key feature of Newton interpolation. The advantage of the Newton method is that, when a point is added, the previously computed terms do not need to be recalculated; only one more term has to be appended. This reduces the number of arithmetic operations, especially multiplications.

The expression of the Newton interpolation can be written as
$$N_{n}(x) = f[x_{0}] + f[x_{0}, x_{1}](x - x_{0}) + \cdots + f[x_{0}, x_{1}, \ldots, x_{n}]\prod_{i=0}^{n-1}(x - x_{i}), \tag{45}$$
where $f[x_{0}, \ldots, x_{k}]$ denotes the $k$th-order divided difference. Substituting the Newton interpolation instead of the Lagrange interpolation into (43), the approximant can be rewritten in terms of these divided-difference coefficients.

It is well known that Newton interpolation is equivalent to Lagrange interpolation, and the corresponding error estimate can be expressed through the divided differences as
$$R_{n}(x) = f[x_{0}, x_{1}, \ldots, x_{n}, x]\prod_{i=0}^{n}(x - x_{i}).$$
The simplest criterion to terminate the dynamic choice of $L$ is $|R_{n}| < \varepsilon$, where $\varepsilon$ is the absolute error tolerance. Obviously, it is difficult to define an $\varepsilon$ that meets the precision requirement of all approximated curves. In fact, the divided difference can be used directly as the criterion; that is, the newest divided difference is compared with $\varepsilon$. As mentioned above, once a curve with lower-order smoothness is approximated by a higher-order polynomial, the error becomes larger instead of smaller. In fact, even if $L$ were allowed to grow without bound, the required precision could not be reached; only the computational complexity would increase. To avoid this, we design the termination procedure for the dynamic choice of $L$ as follows.
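A sketch of the divided-difference part of such a procedure is given below (illustrative Python; the tolerance, the stencil growing from the left boundary, and the helper names are our own, and the condition-number test of the next paragraph is handled separately). Thanks to the divided-difference table, adding one more external point only appends one Newton term, and the newest divided difference itself serves as the stopping criterion.

```python
import numpy as np

def newton_coeffs(xs, ys):
    """Divided-difference coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    xs = np.asarray(xs, dtype=float)
    c = np.array(ys, dtype=float)
    for k in range(1, len(xs)):
        c[k:] = (c[k:] - c[k - 1:-1]) / (xs[k:] - xs[:-k])
    return c

def choose_L(x, u, eps, L_max):
    """Grow the extrapolation stencil near the left boundary one point at a time;
    stop when the newest divided difference (the added Newton term) drops below
    eps, or when L_max is reached."""
    for L in range(1, L_max + 1):
        c = newton_coeffs(x[:L + 1], u[:L + 1])
        if abs(c[-1]) < eps:          # divided-difference error criterion
            return L
    return L_max

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 33)
    u = np.exp(-5 * x)
    print("chosen L =", choose_L(x, u, eps=1e-6, L_max=8))
```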

In the field of numerical analysis, the condition number of a function with respect to an argument measures how much the output value of the function can change for a small change in the input argument. It is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. There is no doubt that the choice of $L$ changes the condition number of the system of algebraic equations discretized by the wavelet interpolation operator or the finite difference method. Therefore, the choice of $L$ should take the condition number into account. In fact, if the condition number is $\kappa = 10^{k}$, then up to $k$ digits of accuracy may be lost on top of what would be lost to the numerical method through the loss of precision in the arithmetic. According to this general rule of thumb, $L$ should be chosen so that the condition number of the discretized system still leaves enough accurate digits for the required precision.
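In code, this rule of thumb might be checked as follows (a minimal sketch; `A` stands for whatever system matrix the current extension produces, and the digit budget is an assumed parameter, not a value from the paper):

```python
import numpy as np

def acceptable_extension(A, digits_available=15, digits_required=6):
    """Accept the current number of external points only if the condition number
    kappa = 10^k leaves enough accurate digits: k <= available - required."""
    kappa = np.linalg.cond(A)
    return np.log10(kappa) <= digits_available - digits_required

if __name__ == "__main__":
    A = np.array([[2.0, 1.0], [1.0, 2.0001]])
    print(np.linalg.cond(A), acceptable_extension(A))
```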

The computational complexity of the interpolation calculation is not proportional to the number of added points; it is mainly determined by the evaluation of the basis functions and of their derivatives. Obviously, according to (9), each additional extension point adds only one more term to the operator, but the cost of adaptively increasing the number of collocation points depends on the basis function. For bases with compact support, such as the Haar, Faber-Schauder, and Daubechies wavelets, the value of $L$ cannot be arbitrarily large. For the Haar scaling function, which has no smoothness, $L$ can be taken as 0 at most, since no extension is needed. For the Faber-Schauder wavelet, $L$ can be taken as 1 at most. For the Daubechies wavelets, $L$ can take different values according to the order of the vanishing moments, but it must be finite. For wavelets without compact support, such as the Shannon wavelet, $L$ can take its value dynamically. The computational cost of adding points is thus mainly determined by the basis function itself.

5. Numerical Experiments

5.1. Dynamic Choice of the Sparse Grid Points

In order to test the adaptability of the sparse grid approach proposed in this paper, the Faber-Schauder scaling function, the Shannon scaling function, and the autocorrelation function of the Daubechies scaling function are taken as the basis, respectively. The Faber-Schauder scaling function possesses only a first-order derivative, while the autocorrelation function of the Daubechies scaling function possesses a second derivative, so both of the discretization schemes in Section 3 will be tested.

Example 1. Burgers equation with Dirichlet boundary conditions.
As a test problem for the numerical algorithm described in the previous section, we consider the Burgers equation
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \frac{1}{Re}\frac{\partial^{2} u}{\partial x^{2}},$$
with prescribed initial and Dirichlet boundary conditions, where $t$ represents the time and $Re$ denotes the Reynolds number. As $t$ increases, the solution develops into a saw-tooth wave at the origin, where the gradient reaches its maximum value. Therefore, the performance of a numerical method is often judged by its ability to resolve the large gradient region that develops in the solution, which is shown in Figure 1.
Using a difference quotient to approximate the time derivative $\partial u/\partial t$, the Burgers equation becomes, at each time level, an elliptic boundary value problem. According to the virtual displacement principle, the variational form of the Burgers equation can be derived and then solved by means of (35)-(36).
In the experiments, fixed values of the Reynolds number $Re$ and of the time step $\tau$ are used.
The numerical results shown in Figure 2 are obtained by the finite difference method. When the number of evenly spaced discrete points is taken as 512, the Gibbs phenomenon appears where a steep slope exists in the solution (Figure 2(b)). Increasing the number of discrete points suppresses the Gibbs phenomenon (Figure 2(a)).
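For reference, a minimal explicit finite-difference solver of the Burgers equation $u_t + u u_x = (1/Re)\,u_{xx}$, of the kind used to produce a fully resolved comparison solution, might look as follows. This is an illustrative Python sketch: the initial profile $-\sin(\pi x)$ on $[-1, 1]$, the grid sizes, and the time step are assumptions, not necessarily the settings behind Figure 2.

```python
import numpy as np

def burgers_fd(Re=100.0, nx=513, dt=1e-4, t_end=1.0):
    """Explicit FTCS scheme for u_t + u u_x = (1/Re) u_xx with homogeneous
    Dirichlet boundaries; every interior point is kept (no sparsification)."""
    x = np.linspace(-1.0, 1.0, nx)
    h = x[1] - x[0]
    u = -np.sin(np.pi * x)                           # assumed initial profile
    nu = 1.0 / Re
    for _ in range(int(round(t_end / dt))):
        ux = (u[2:] - u[:-2]) / (2 * h)              # central first derivative
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2  # central second derivative
        u[1:-1] += dt * (-u[1:-1] * ux + nu * uxx)
        u[0] = u[-1] = 0.0                           # Dirichlet boundary values
    return x, u

if __name__ == "__main__":
    x, u = burgers_fd()
    i = np.argmax(np.abs(np.diff(u)))                # steepest segment (near x = 0)
    print(f"steepest gradient near x = {x[i]:.3f}")
```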
Figure 3 illustrates the performance of the sparse solution method on this example by comparison of the sparse solution and the true solution produced using a standard fully resolved method (finite difference method).
As time goes on, the gradient of the solution at the origin becomes larger and larger. Whether the Faber-Schauder scaling function or the autocorrelation function of the Daubechies scaling function is taken as the basis, the sparse method captures the steep slope appearing in the solution effectively; that is, more and more grid points concentrate around the origin. At the moment when the gradient at the origin reaches its maximum, of the 1024 coefficients used in the true solution only 64 are retained with the Faber-Schauder basis and 152 with the Daubechies basis (about 6.25% and 14.84%, respectively). After that, the gradient of the solution at the origin becomes smaller and smaller as time increases, and the number of sparse grid points decreases accordingly. This is illustrated in Figure 3. The adaptability of the proposed sparse method helps to improve both the efficiency and the accuracy of the algorithm.

Besides, we also notice from Table 1 that the condition number of the discretized Burgers equation varies with the Reynolds number, the level of resolution, and the time step. In fact, the condition number is closely related to the sparse grid points. When the Reynolds number and the level of resolution are small, no steep gradient appears in the solution and the grid points are sparse; in this case, the condition number is small and does not noticeably degrade the numerical precision. On the contrary, when the Reynolds number, the level of resolution, and the time step are larger, the steep slope appearing in the solution and the error brought in by the larger time step adaptively bring more grid points into the algorithm, and the condition number consequently becomes larger. As mentioned in Section 4.2, a large condition number can greatly decrease the calculation precision. Table 1 shows that the condition number increases rapidly with these parameters, which is also illustrated in Figure 4.

Figure 4 illustrates that the external grid points change dynamically as the solution develops. At the earlier time (Figure 4(a)), the solution is smooth and the condition number is small, so the approach can take more external grid points dynamically to improve the precision. At the later time (Figure 4(b)), a steep slope appears in the solution and the condition number increases; in this case, adding external grid points can no longer improve the precision. In fact, this explains, to some extent, why we construct the dynamic choice scheme for the external grid points.

5.2. Comparison between the Dynamic Choice Scheme and the Wavelet Collocation Method

Example 2. Consider the heat equation (55) with initial and Neumann boundary conditions, where $t$ denotes the time parameter.
Let the interpolation operator and its corresponding derivative operator be as defined in Section 2.2; according to the classical collocation approach, the approximating formulation of a function can be written as (57). Substituting (57) into (55) leads to a system of nonlinear ordinary differential equations (58), whose vector expression is (59). The corresponding Neumann boundary condition can be expressed as (60). Equations (59)-(60) can be solved by the VIM and PIM. In the following, we take the heat equation as an example to illustrate the effectiveness of the algorithm proposed in this paper; the Shannon scaling function is employed as the basis function. The exact analytical solution of (55) is known in closed form, and the solution possesses derivatives of all orders everywhere in the definition domain.
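The collocation system described above can be assembled very compactly when the Shannon (sinc) basis is used on a uniform grid, because the second derivative of the interpolant at the grid points has a closed form (the Colbert-Miller expression used below). The following is an illustrative Python sketch under the simplifying assumption of the linear model $u_t = u_{xx}$ with a Gaussian initial profile; the Neumann boundary rows of (60) and the paper's nonlinear term are not reproduced here. The Runge-Kutta march is included only as the comparison integrator discussed later.

```python
import numpy as np

def sinc_second_derivative(n, h):
    """Second-derivative collocation matrix for the Shannon (sinc) basis on a
    uniform grid with spacing h: D[j,j] = -pi^2/(3 h^2) and, for j != k,
    D[j,k] = -2 (-1)^(j-k) / ((j-k)^2 h^2)."""
    j = np.arange(n)
    diff = j[:, None] - j[None, :]
    D = np.where(diff == 0, -np.pi**2 / 3.0,
                 -2.0 * (-1.0) ** diff / np.where(diff == 0, 1, diff**2))
    return D / h**2

def heat_rk4(u0, D, dt, steps):
    """Classical RK4 march of the semi-discrete system du/dt = D u."""
    u = u0.copy()
    for _ in range(steps):
        k1 = D @ u
        k2 = D @ (u + 0.5 * dt * k1)
        k3 = D @ (u + 0.5 * dt * k2)
        k4 = D @ (u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

if __name__ == "__main__":
    n, length = 65, 2.0
    x = np.linspace(-length / 2, length / 2, n)
    h = x[1] - x[0]
    D = sinc_second_derivative(n, h)
    u0 = np.exp(-20 * x**2)                 # assumed initial profile
    u = heat_rk4(u0, D, dt=1e-5, steps=2000)
    print("max u after marching:", u.max())
```

The same matrix `D` can instead be fed to a VIM-PIM march of the kind sketched in Section 3.1, which is the comparison reported in Figure 8 and Table 4.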

(1) Comparison between the Dynamic Choice Scheme and the Static Interval Wavelet. With the level of resolution and the time step fixed, the computational error curve of the dynamic sparse grid approach is shown in Figure 5. The maximum of the absolute error is 0.0471, which occurs near the right boundary; this shows that a larger gradient of the solution causes a larger error. The dynamically chosen values of $L$ and the iteration counts at the same time values are shown in Table 2. The value of $L$ varies between 2, 3, and 4, and the iteration count at $L = 4$ is much higher than at $L$ equal to 2 or 3. So, we take a fixed $L$ in the static interval wavelet PIM to solve the heat equation with the same parameters as in the dynamic scheme. The numerical solution is shown in Figure 6; obviously, the error is so large that the algorithm is invalid. There are many reasons that can lead to this result, such as the smoothness of the solution and the nonlinear term in the PDEs. For a smaller time step, the error curve is shown in Figure 7, and the dynamic $L$ and the iteration counts at the same time values are shown in Table 3. As the time step decreases, the influence of the nonlinear term on the PDEs becomes smaller and smaller, and the largest errors of the dynamic and the static interval wavelet PIM become comparable. This shows that the construction of the dynamic grid approach is necessary for nonlinear PDEs with Neumann boundary conditions.

(2) Comparison between the VIM and PIM and the Runge-Kutta Method for Time-Domain Integration. The numerical solution and error curves obtained with the VIM and PIM and with the Runge-Kutta method are shown in Figure 8. It is obvious that the calculation precision of the VIM and PIM (Figure 5) is better than that of the Runge-Kutta method. It should be pointed out that the Runge-Kutta method is not sensitive to the time step (Table 4) compared with the VIM and PIM. One of the most important reasons is that the nonlinear term in the PDEs is integrated in an explicit format in the VIM and PIM, whereas an implicit format is employed in the Runge-Kutta method.

6. Conclusions

The multilevel interpolation operator constructed in this paper is independent of the basis. Although the Faber-Schauder scaling function has no second-order derivative, it can still be employed as the basis in the multiscale interpolation operator to solve the Burgers equation while retaining only the important nodes. The reduced dynamics created by the sparse projection can dynamically capture the true phenomena exhibited by the solution; this sparse projection amounts to a shrinkage of the coefficients of the updated solution at every time step. Compared with the finite difference method, the retained coefficients are less than 10% in the sparse solution of the Burgers equation.

The dynamic sparse grid approach, constructed by combining the multiscale interpolation operator and the variational iteration method, is able to choose both the internal and the external grid points dynamically, based on the gradient and the smoothness of the solution, the condition number of the discretized system, and the error tolerance. This property is well suited to PDEs with Neumann boundary conditions, and it can eliminate the boundary effect efficiently. With regard to the accuracy and the time complexity of the solution in comparison with those obtained with other algorithms, the dynamic sparse grid approach constructed in this paper is more reasonable. The numerical experiments illustrate that it is necessary to construct the dynamic sparse grid approach for nonlinear PDEs with Neumann boundary conditions as well as with Dirichlet boundary conditions.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 41171337) and the National High Technology Research and Development Program of China (no. 2012BAD35B02).