Research Article  Open Access
Approximation Solution of Fractional Partial Differential Equations by Neural Networks
Abstract
Neural networks with the radial basis function method are used to solve a class of initial-boundary value problems for fractional partial differential equations with variable coefficients on a finite domain. The case where a left-handed or right-handed fractional spatial derivative may be present in the partial differential equation is treated. Convergence of this method is discussed in the paper. A numerical example using the neural network RBF method for a two-sided fractional PDE is also presented and compared with other methods.
1. Introduction
In this paper, I will use a neural network method to solve the fractional partial differential equation (FPDE) of the form

\[
\frac{\partial u(x,t)}{\partial t} = c_{+}(x,t)\,\frac{\partial^{\alpha} u(x,t)}{\partial_{+} x^{\alpha}} + c_{-}(x,t)\,\frac{\partial^{\alpha} u(x,t)}{\partial_{-} x^{\alpha}} + s(x,t)
\tag{1.1}
\]

on a finite domain x_L < x < x_R, 0 ≤ t ≤ T. Here, I consider the case 1 < α ≤ 2, where the parameter α is the fractional order (fractor) of the spatial derivative. The function s(x,t) is a source/sink term [1]. The functions c₊(x,t) ≥ 0 and c₋(x,t) ≥ 0 may be interpreted as transport-related coefficients. We also assume an initial condition u(x,0) = F(x) for x_L < x < x_R and zero Dirichlet boundary conditions. For the case 1 < α < 2, the addition of a classical advective term on the right-hand side of (1.1) does not impact the analysis performed in this paper and has been omitted to simplify the notation.
The left-handed and right-handed fractional derivatives in (1.1) are the Riemann-Liouville fractional derivatives of order α [5], defined by

\[
\frac{\partial^{\alpha} u(x,t)}{\partial_{+} x^{\alpha}} = \frac{1}{\Gamma(n-\alpha)}\,\frac{\partial^{n}}{\partial x^{n}} \int_{x_L}^{x} \frac{u(\xi,t)}{(x-\xi)^{\alpha+1-n}}\,d\xi,
\tag{1.2}
\]
\[
\frac{\partial^{\alpha} u(x,t)}{\partial_{-} x^{\alpha}} = \frac{(-1)^{n}}{\Gamma(n-\alpha)}\,\frac{\partial^{n}}{\partial x^{n}} \int_{x}^{x_R} \frac{u(\xi,t)}{(\xi-x)^{\alpha+1-n}}\,d\xi,
\tag{1.3}
\]

where n is an integer such that n − 1 ≤ α < n. If α = n is an integer, then the above definitions give the standard integer derivatives, that is,

\[
\frac{\partial^{n} u}{\partial_{+} x^{n}} = \frac{\partial^{n} u}{\partial x^{n}}, \qquad \frac{\partial^{n} u}{\partial_{-} x^{n}} = (-1)^{n}\,\frac{\partial^{n} u}{\partial x^{n}}.
\]
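Although this paper approximates (1.1) with neural networks, the left-handed Riemann-Liouville derivative admits a standard numerical approximation, the Grünwald-Letnikov sum, which is useful for checking computed solutions. The sketch below assumes a uniform grid, zero extension of u to the left of the domain, and illustrative function names; it is not the method used in the paper's experiments.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
    # computed by the recurrence g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k.
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def left_rl_derivative(u, h, alpha):
    # Left-handed Riemann-Liouville derivative of grid samples u (zero
    # extension to the left of the domain), via the Grunwald-Letnikov sum
    #   D^alpha u(x_i) ~ h**(-alpha) * sum_{k=0}^{i} g_k * u[i - k].
    n = len(u)
    g = gl_weights(alpha, n)
    d = np.empty(n)
    for i in range(n):
        d[i] = np.dot(g[: i + 1], u[i::-1]) / h**alpha
    return d
```

For α = 1 the weights reduce to (1, −1, 0, …), so the sum collapses to the backward difference quotient, matching the integer-order case of the definitions above.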
When α = 2, setting c(x,t) = c₊(x,t) + c₋(x,t), (1.1) becomes the following classical parabolic PDE:

\[
\frac{\partial u(x,t)}{\partial t} = c(x,t)\,\frac{\partial^{2} u(x,t)}{\partial x^{2}} + s(x,t).
\tag{1.4}
\]

Similarly, when α = 1, setting c(x,t) = c₊(x,t) − c₋(x,t), (1.1) reduces to the following classical hyperbolic PDE:

\[
\frac{\partial u(x,t)}{\partial t} = c(x,t)\,\frac{\partial u(x,t)}{\partial x} + s(x,t).
\tag{1.5}
\]

The case 1 < α < 2 represents a superdiffusive process, where particles diffuse faster than the classical model (1.4) predicts. For some applications to physics and hydrology, see [2–4].
I also note that the left-handed fractional derivative of u at a point x depends on all function values to the left of x, that is, this derivative is a weighted average of such function values. Similarly, the right-handed fractional derivative of u at a point x depends on all function values to the right of this point. In general, left-handed and right-handed derivatives are not equal unless α is an even integer, in which case these derivatives become localized and equal. When α is an odd integer, these derivatives become localized and opposite in sign. For more details on fractional derivative concepts and definitions, see [1, 3, 5, 6]. Reference [7] provides a more detailed treatment of the right-handed fractional derivatives as well as a substantial treatment of the left-handed fractional derivatives.
Published papers on the numerical solution of fractional partial differential equations are scarce. A different method for solving the fractional partial differential equation (1.1) is pursued in the recent paper [4]: the authors transform this partial differential equation into a system of ordinary differential equations (method of lines), which is then solved using backward differentiation formulas. Another very recent paper [8] develops a finite element method for a two-point boundary value problem, and [1] finds the numerical solution of (1.1) by a finite difference method.
2. Multilayer Neural Networks
The Rumelhart-Hinton-Williams multilayer network [9] that we consider here is a feedforward-type network with connections between adjoining layers only. Such networks generally have hidden layers between the input and output layers. Each layer consists of computational units. The input-output relationship of each unit is represented by inputs x_i, output y, connection weights w_i, threshold θ, and a differentiable function φ as follows:

\[
y = \varphi\Bigl(\sum_{i} w_i x_i - \theta\Bigr).
\tag{2.1}
\]

The learning rule of this network is known as the backpropagation algorithm [9], which uses a gradient descent method to modify weights and thresholds so that the error between the desired output and the output signal of the network is minimized. I generally use a bounded and monotone increasing differentiable function, called a radial basis function, for each unit output function.
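The unit input-output relation and the resulting three-layer network can be sketched as follows. This is a minimal illustration, assuming a sigmoid unit output function and the hypothetical names `unit_output` and `network_output`; it is not the trained network of the later sections.

```python
import numpy as np

def sigmoid(z):
    # A bounded, monotone increasing, differentiable unit output function.
    return 1.0 / (1.0 + np.exp(-z))

def unit_output(x, w, theta):
    # Input-output relation of one computational unit: the weighted sum of
    # the inputs, shifted by the threshold theta, passed through sigmoid.
    return sigmoid(np.dot(w, x) - theta)

def network_output(x, W, theta, c):
    # Three-layer feedforward network: linear input layer, one hidden layer
    # of sigmoid units (rows of W, thresholds theta), and a linear output
    # unit with weights c.
    hidden = sigmoid(W @ x - theta)
    return float(c @ hidden)
```

The backpropagation rule then adjusts W, theta, and c by gradient descent on the squared error between desired and actual outputs.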
If a multilayer network has n input units and m output units, then the input-output relationship defines a continuous mapping from n-dimensional Euclidean space to m-dimensional Euclidean space. I call this mapping the input-output mapping of the network. I study the problem of network capability from the point of view of input-output mappings. It has been observed that, for the study of mappings defined by multilayer networks, it is sufficient to consider three-layer networks whose output functions for the input and output layers are linear.
3. Approximate Realization of Continuous Mappings by Neural Networks
Reference [10] considers the possibility of representing continuous mappings by neural networks whose output functions in the hidden layers are sigmoid functions, for example, φ(x) = 1/(1 + e^{−x}). It is simply noted here that general continuous mappings cannot be exactly represented by Rumelhart-Hinton-Williams networks. For example, if a real analytic output function such as the sigmoid function is used, then the input-output mapping of the network is analytic and generally cannot represent all continuous mappings.
Let points of n-dimensional Euclidean space R^n be denoted by x = (x_1, …, x_n), and let the norm of x be defined by ‖x‖ = (Σ_{i=1}^{n} x_i²)^{1/2}.
Definition 3.1 (see [11]). Let X be a linear space. A function is called a radial basis function if it can be represented in the form , where , , and , .
In this paper, I study the approximation of the solution by a representation model of neural networks, through analyzing previous theoretical studies. We also present a study of the convergence of the approximate solution, where φ is a radial basis function, and give some theorems establishing the conditions under which the approximate solution of (1.1) obtained by the neural network method converges.
Theorem 3.2 (Irie-Miyake [6]). Let , that is, let be absolutely integrable, and let . Let be integrable, and let and be the Fourier transforms of and , respectively. If , then
Theorem 3.3. Let φ be a radial basis function: a nonconstant, bounded, and monotone increasing continuous function. Let K be a compact subset of R^n and f a real-valued continuous function on K. Then, for an arbitrary ε > 0, there exist an integer N and real constants c_i, θ_i, w_ij (i = 1, …, N; j = 1, …, n) such that

\[
\tilde{f}(x_1,\ldots,x_n) = \sum_{i=1}^{N} c_i\,\varphi\Bigl(\sum_{j=1}^{n} w_{ij} x_j - \theta_i\Bigr)
\]

satisfies max_{x∈K} |f(x) − \tilde{f}(x)| < ε. In other words, for an arbitrary ε > 0, there exists a three-layer network whose output function for the hidden layer is φ, whose output functions for the input and output layers are linear, and which has an input-output function \tilde{f} such that max_{x∈K} |f(x) − \tilde{f}(x)| < ε.
Proof. First
Since f is a continuous function on a compact subset of R^n, it can be extended to a continuous function on R^n with compact support.
If I apply a mollifier to this extension, the result is a C^∞ function with compact support; furthermore, the mollified functions converge to f uniformly on R^n. Therefore, we may suppose that f is a C^∞ function with compact support for the purpose of proving Theorem 3.3. By the Paley-Wiener theorem [10], the Fourier transform F of f is real analytic and, for any integer k, there exists a constant C_k such that

\[
|F(w)| \le C_k \,(1 + \lVert w \rVert)^{-k}.
\]

In particular, F is absolutely integrable.
I define , , and as follows:
where is defined by , .
The essential part of the proof of Irie-Miyake's integral formula [1] is the equality , and this is derived from
In our discussion, using the estimate of the Fourier transform above, it is easy to prove that the convergence is uniform on . Therefore, uniformly on . I can state that for any , there exists such that
Second
I will approximate by finite integrals on . For , fix which satisfies (3.5). For , set
I will show that, for , I can take so that
Using the following equation:
the fact that and the compactness of , we can take so that
Therefore,
Third
From (3.5) and (3.7), I can say that for any , there exists such that
can be approximated by the finite integral uniformly on . The integral of can be replaced by the real part and is continuous on , hence can be approximated by the Riemann sum uniformly on . Since
the Riemann sum can be represented by a three-layer network. Therefore, f can be represented approximately by a three-layer network.
Theorem 3.4. Let be a linear space. The set of radial basis functions denoted by is fundamental in .
Proof. Let denote the linear span of the set of radial basis functions; I will prove that it is an algebra in , for and (where is the dual space of ) [11].
Let be the zero functional in ; then for all . Hence, contains the constants.
Now, if and , then there exists such that ;
hence separates points of .
Now, I will prove that each basic neighborhood of a fixed point in intersects . Let be a compact set such that , and let a basic neighborhood of corresponding to have the form . Now restrict and all members of to ; then is still an algebra containing constants and separating points of . Hence, by the Stone-Weierstrass theorem, is dense in , and consequently intersects , as required. Therefore, is fundamental in .
4. Numerical Methods
Consider a feedforward network with an input layer, a single hidden layer, and an output layer consisting of a single unit. I have purposely chosen a single output unit to simplify the exposition without loss of generality.
The network is designed to perform a nonlinear mapping from the input space to the hidden space, followed by a linear mapping from the hidden space to the output space.
Given a set of N different points and a corresponding set of N real numbers , we find a function F that satisfies the interpolation conditions
For strict interpolation as specified here, the interpolating surface (i.e., the function F) is constrained to pass through all the training data points. Starting from initial , , , we will use the backpropagation algorithm [11] to find the best weights , and . Inserting the interpolation conditions of (1.1) into (2.1), we obtain the following set of simultaneous linear equations for the unknown coefficients of the expansion, where , .
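The simultaneous linear equations of strict interpolation can be written as Φw = d, where Φ has entries φ(‖x_i − x_j‖). A minimal sketch, assuming a Gaussian basis function and distinct one-dimensional training points (the paper does not fix these choices, so they are illustrative):

```python
import numpy as np

def gaussian_phi(r, eps=1.0):
    # Gaussian radial basis function (an assumed choice of phi).
    return np.exp(-(eps * r) ** 2)

def rbf_interpolation_weights(X, d, eps=1.0):
    # Strict interpolation: solve Phi @ w = d, where
    # Phi[i, j] = phi(|x_i - x_j|) for distinct 1-D training points X.
    Phi = gaussian_phi(np.abs(X[:, None] - X[None, :]), eps)
    return np.linalg.solve(Phi, d)

def rbf_evaluate(x, X, w, eps=1.0):
    # Evaluate the interpolant F(x) = sum_j w_j * phi(|x - x_j|).
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return gaussian_phi(np.abs(x[:, None] - X[None, :]), eps) @ w
```

For the Gaussian basis the interpolation matrix Φ is positive definite when the training points are distinct, so the linear system has a unique solution and the interpolant passes through every training point exactly.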
Compute the error , and .
Then calculate the weight updates, where is the training rate:
After that, the best weights , , are found. Substituting them into (2.1), the approximate solution is obtained.
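The weight-update step above is plain gradient descent on the squared error. For simplicity, the sketch below adjusts only the output weights c, for which the error surface is quadratic; updating the hidden-layer weights and thresholds follows the same backpropagation pattern via the chain rule. The learning rate eta and iteration count are illustrative, not values from the paper.

```python
import numpy as np

def train_output_weights(H, d, eta=0.1, epochs=1000):
    # Gradient descent on E = 0.5 * ||d - H @ c||^2, where H holds the
    # hidden-unit activations for each training point (one row per point)
    # and d holds the desired outputs.
    c = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = d - H @ c          # per-pattern error d_k - y_k
        c += eta * (H.T @ err)   # c <- c - eta * dE/dc
    return c
```

When H is well conditioned and eta is small enough, the iteration converges to the least-squares weights; in the strict-interpolation setting (square, nonsingular H) these are the exact solution of the linear system.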
The above method works as follows: in the absence of an exact solution, values of the independent variables are substituted into the boundary conditions in order to obtain exact values, and those values are used in the training phase of the backpropagation neural network. This approach is shown to yield good weights for (2.1). The radial basis function of Theorems 3.3 and 3.4 was used in order to obtain convergent training of the backpropagation neural network.
5. Numerical Example
The following two-sided fractional partial differential equation was considered on a finite domain, with the coefficient functions and , the forcing function , initial condition , and boundary conditions . This fractional PDE has the exact solution , which can be verified by applying the fractional formulas (see [1]).
Table 1 shows the training data given by values of the boundary conditions. Table 2 shows the comparison between the exact solution and the approximated solution at test points , . Table 3 compares the maximum error of the approximate solution by the artificial neural network method with that of the finite difference numerical method [1].



6. Conclusion
Referring back to Tables 1–3, it is clear that the approximation of the fractional partial differential equation (FPDE) using a neural network with RBF yields a good approximate solution. The convergence of this method is given as well. In addition, the discussed method is able to solve fractional partial differential equations with more than two variables, where many other methods clearly fail. The suggested method provides a general approximate solution over the whole interval domain that depends on the boundary conditions.
References
[1] M. M. Meerschaert and C. Tadjeran, “Finite difference approximations for two-sided space-fractional partial differential equations,” pp. 563–573, 2004.
[2] A. S. Chaves, “A fractional diffusion equation to describe Lévy flights,” Physics Letters A, vol. 239, no. 1-2, pp. 13–16, 1998.
[3] D. A. Benson, S. W. Wheatcraft, and M. M. Meerschaert, “The fractional-order governing equation of Lévy motion,” Water Resources Research, vol. 36, no. 6, pp. 1413–1423, 2000.
[4] F. Liu, V. Anh, and I. Turner, “Numerical solution of the fractional advection-dispersion equation,” 2002, http://academic.research.microsoft.com/Publication/3471879.
[5] K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, A Wiley-Interscience Publication, John Wiley & Sons, New York, 1993.
[6] R. Gorenflo, F. Mainardi, E. Scalas, and M. Raberto, “Fractional calculus and continuous-time finance. III. The diffusion limit,” in Mathematical Finance, Trends Math., pp. 171–180, Birkhäuser, Basel, Switzerland, 2001.
[7] S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach Science Publishers, Yverdon, Switzerland, 1993.
[8] G. J. Fix and J. P. Roop, “Least squares finite-element solution of a fractional order two-point boundary value problem,” Computers & Mathematics with Applications, vol. 48, no. 7-8, pp. 1017–1033, 2004.
[9] S. Haykin, Neural Networks, Prentice-Hall, 2006.
[10] K.-I. Funahashi, “On the approximate realization of continuous mappings by neural networks,” Neural Networks, vol. 2, no. 3, pp. 183–192, 1989.
[11] A. Al-Marashi and K. Al-Wagih, “Approximation solution of boundary values of partial differential equations by using neural networks,” Thamar University Journal, no. 6, pp. 121–136, 2007.
Copyright
Copyright © 2012 Adel A. S. Almarashi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.