Abstract

A neural network method with radial basis functions is used to solve a class of initial-boundary value problems for fractional partial differential equations with variable coefficients on a finite domain. We consider the case where a left-handed or right-handed fractional spatial derivative may be present in the partial differential equation. Convergence of the method is discussed in the paper. A numerical example applying the RBF neural network method to a two-sided fractional PDE is also presented and compared with other methods.

1. Introduction

In this paper, I use a neural network method to solve the fractional partial differential equation (FPDE) of the form
$$\frac{\partial u(x,t)}{\partial t} = c_{+}(x,t)\,\frac{\partial^{\alpha} u(x,t)}{\partial_{+}x^{\alpha}} + c_{-}(x,t)\,\frac{\partial^{\alpha} u(x,t)}{\partial_{-}x^{\alpha}} + s(x,t) \tag{1.1}$$
on a finite domain $x_{L} < x < x_{R}$, $0 \le t \le T$. Here, I consider the case $1 < \alpha \le 2$, where the parameter $\alpha$ is the fractional order of the spatial derivative. The function $s(x,t)$ is a source/sink term [1]. The functions $c_{+}(x,t) \ge 0$ and $c_{-}(x,t) \ge 0$ may be interpreted as transport-related coefficients. We also assume an initial condition $u(x,0) = F(x)$ for $x_{L} < x < x_{R}$ and zero Dirichlet boundary conditions. For the case $1 < \alpha < 2$, the addition of a classical advective term $-v(x,t)\,\partial u(x,t)/\partial x$ on the right-hand side of (1.1) does not affect the analysis performed in this paper and has been omitted to simplify the notation.

The left-handed and right-handed fractional derivatives in (1.1) are the Riemann-Liouville fractional derivatives of order $\alpha$ [5], defined by
$$\frac{\partial^{\alpha} u(x,t)}{\partial_{+}x^{\alpha}} = \frac{1}{\Gamma(n-\alpha)}\,\frac{\partial^{n}}{\partial x^{n}} \int_{x_{L}}^{x} \frac{u(\xi,t)}{(x-\xi)^{\alpha-n+1}}\,d\xi, \tag{1.2}$$
$$\frac{\partial^{\alpha} u(x,t)}{\partial_{-}x^{\alpha}} = \frac{(-1)^{n}}{\Gamma(n-\alpha)}\,\frac{\partial^{n}}{\partial x^{n}} \int_{x}^{x_{R}} \frac{u(\xi,t)}{(\xi-x)^{\alpha-n+1}}\,d\xi, \tag{1.3}$$
where $n$ is an integer such that $n-1 < \alpha \le n$. If $\alpha = m$ is an integer, then the above definitions give the standard integer-order derivatives, that is,
$$\frac{\partial^{m} u(x,t)}{\partial_{+}x^{m}} = \frac{\partial^{m} u(x,t)}{\partial x^{m}}, \qquad \frac{\partial^{m} u(x,t)}{\partial_{-}x^{m}} = (-1)^{m}\,\frac{\partial^{m} u(x,t)}{\partial x^{m}}.$$
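The definitions (1.2)-(1.3) can also be evaluated numerically. The sketch below is not part of the original paper; it approximates the left-handed Riemann-Liouville derivative with the Grünwald-Letnikov sum, a standard discretization of (1.2). The helper name, step size, and test function are illustrative assumptions.

```python
import numpy as np
from math import gamma

def left_rl_derivative_gl(f, x, x_left, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the left-handed Riemann-Liouville
    derivative of order alpha of f at x, using values of f on [x_left, x]."""
    N = int(round((x - x_left) / h))
    w = np.empty(N + 1)
    w[0] = 1.0
    for k in range(1, N + 1):                 # weights (-1)^k * binom(alpha, k), built recursively
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    vals = f(x - h * np.arange(N + 1))        # samples of f to the left of x
    return np.dot(w, vals) / h**alpha

# sanity check on f(x) = x^2 with x_left = 0: the exact value is
# Gamma(3) / Gamma(3 - alpha) * x^(2 - alpha)
alpha, x = 1.8, 1.0
print(left_rl_derivative_gl(lambda s: s**2, x, 0.0, alpha))
print(gamma(3) / gamma(3 - alpha) * x**(2 - alpha))
```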

When $\alpha = 2$ and setting $c(x,t) = c_{+}(x,t) + c_{-}(x,t)$, (1.1) becomes the following classical parabolic PDE:
$$\frac{\partial u(x,t)}{\partial t} = c(x,t)\,\frac{\partial^{2} u(x,t)}{\partial x^{2}} + s(x,t). \tag{1.4}$$

Similarly, when $\alpha = 1$ and setting $c(x,t) = c_{+}(x,t) - c_{-}(x,t)$, (1.1) reduces to the following classical hyperbolic PDE:
$$\frac{\partial u(x,t)}{\partial t} = c(x,t)\,\frac{\partial u(x,t)}{\partial x} + s(x,t). \tag{1.5}$$

The case $1 < \alpha < 2$ represents a superdiffusive process, in which particles spread faster than the classical model (1.4) predicts. For some applications to physics and hydrology, see [2–4].

I also note that the left-handed fractional derivative of $u$ at a point $x$ depends on all function values to the left of $x$; that is, this derivative is a weighted average of such function values. Similarly, the right-handed fractional derivative of $u$ at a point $x$ depends on all function values to the right of this point. In general, the left-handed and right-handed derivatives are not equal unless $\alpha$ is an even integer, in which case these derivatives become localized and equal. When $\alpha$ is an odd integer, these derivatives become localized and opposite in sign. For more details on fractional derivative concepts and definitions, see [1, 3, 5, 6]. Reference [7] provides a more detailed treatment of the right-handed fractional derivative as well as a substantial treatment of the left-handed fractional derivative.

Published papers on the numerical solution of fractional partial differential equations are scarce. A different method for solving the fractional partial differential equation (1.1) is pursued in the recent paper [4]: the authors transform this partial differential equation into a system of ordinary differential equations (method of lines), which is then solved using backward differentiation formulas. Another very recent paper, [8], develops a finite element method for a two-point boundary value problem, and [1] finds the numerical solution of (1.1) by a finite difference method.

2. Multilayer Neural Networks

The Rumelhart-Hinton-Williams multilayer network [9], which we consider here, is a feed-forward network with connections between adjoining layers only. Such networks generally have hidden layers between the input and output layers. Each layer consists of computational units. The input-output relationship of each unit is represented by inputs $x_{i}$, output $y$, connection weights $w_{i}$, threshold $\theta$, and a differentiable function $h$ as follows:
$$y = h\!\left(\sum_{i} w_{i} x_{i} - \theta\right). \tag{2.1}$$
The learning rule of this network is known as the backpropagation algorithm [9], a gradient descent method that modifies the weights and thresholds so that the error between the desired output and the output signal of the network is minimized. I generally use a bounded, monotonically increasing, differentiable function, called here a radial basis function, as the output function of each unit.
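The unit relationship (2.1) and the resulting three-layer feed-forward mapping can be written out directly. The short sketch below is not from the paper; the activation, layer sizes, and sample input are assumptions chosen only to illustrate the computation.

```python
import numpy as np

# output of a single computational unit: y = h(sum_i w_i * x_i - theta), cf. (2.1)
def unit_output(x, w, theta, h):
    return h(np.dot(w, x) - theta)

# a bounded, monotonically increasing, differentiable output function (assumed)
sigma = lambda s: 1.0 / (1.0 + np.exp(-s))

# forward pass of a three-layer feed-forward network with linear input/output units
def network_output(x, W_hidden, thetas, c):
    hidden = sigma(W_hidden @ x - thetas)     # one hidden unit per row of W_hidden
    return np.dot(c, hidden)                  # linear output unit

x = np.array([0.3, 0.7])                      # e.g., an (x, t) input point
W_hidden = np.random.randn(5, 2)              # hidden-layer weights
thetas = np.random.randn(5)                   # hidden-layer thresholds
c = np.random.randn(5)                        # hidden-to-output weights
print(network_output(x, W_hidden, thetas, c))
```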

If a multilayer network has $n$ input units and $m$ output units, then the input-output relationship defines a continuous mapping from $n$-dimensional Euclidean space to $m$-dimensional Euclidean space. I call this mapping the input-output mapping of the network. I study the problem of network capability from the point of view of input-output mappings. It is observed that, for the study of mappings defined by multilayer networks, it is sufficient to consider networks with a single hidden layer whose output functions for the input and output layers are linear.

3. Approximate Realization of Continuous Mappings by Neural Networks

Reference [10] considers the possibility of representing continuous mappings by neural networks whose output functions in the hidden layers are sigmoid functions, for example, $\sigma(s) = 1/(1+e^{-s})$. It is simply noted here that general continuous mappings cannot be exactly represented by Rumelhart-Hinton-Williams networks. For example, if a real analytic output function such as the sigmoid function is used, then the input-output mapping of the network is analytic and generally cannot exactly represent all continuous mappings.

Let points of $n$-dimensional Euclidean space $\mathbb{R}^{n}$ be denoted by $x = (x_{1}, \ldots, x_{n})$, and let the norm of $x$ be defined by $\|x\| = \left(\sum_{i=1}^{n} x_{i}^{2}\right)^{1/2}$.

Definition 3.1 (see [11]). Let $X$ be a linear space with norm $\|\cdot\|$. A function $\varphi : X \to \mathbb{R}$ is called a radial basis function if it can be represented in the form $\varphi(x) = g(\|x - c\|)$, where $g : [0, \infty) \to \mathbb{R}$ and $c \in X$ is a fixed center.
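Definition 3.1 only requires that the value of $\varphi$ depend on the distance from $x$ to a center $c$. As a concrete illustration (not prescribed by the paper), a Gaussian profile $g$ is a common choice:

```python
import numpy as np

def gaussian_rbf(x, c, beta=1.0):
    """One common radial basis function: g(r) = exp(-beta * r^2), with r = ||x - c||."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(c, float))
    return np.exp(-beta * r**2)

# the value depends only on the distance from x to the center c
print(gaussian_rbf([0.5, 0.2], [0.0, 0.0]))
print(gaussian_rbf([-0.5, -0.2], [0.0, 0.0]))   # same distance to c, same value
```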
In this paper, I study the approximation of the solution of (1.1) by the input-output mapping of a neural network, which is a representation model built on previous theoretical studies. We also present a study of the convergence of the approximate solution, where the hidden-layer output function is a radial basis function.
We also give some theorems establishing the conditions under which the approximate solution of (1.1) obtained by the neural network method converges.

Theorem 3.2 (Irie-Miyake, [6]). Let $\psi \in L^{1}(\mathbb{R})$, that is, let $\psi$ be absolutely integrable, and let $f \in L^{2}(\mathbb{R}^{n})$. Let $\Psi(\omega)$ and $F(w_{1}, \ldots, w_{n})$ be the Fourier transforms of $\psi$ and $f$, respectively. If $\Psi(1) \neq 0$, then
$$f(x_{1}, \ldots, x_{n}) = \frac{1}{(2\pi)^{n}\,\Psi(1)} \int_{\mathbb{R}^{n+1}} \psi\!\left(\sum_{i=1}^{n} w_{i} x_{i} - w_{0}\right) F(w_{1}, \ldots, w_{n})\, e^{i w_{0}}\, dw_{0}\, dw_{1} \cdots dw_{n}.$$

Theorem 3.3. Let $\varphi$ be a nonconstant, bounded, monotonically increasing, continuous radial basis function. Let $K$ be a compact subset of $\mathbb{R}^{n}$ and $f$ a real-valued continuous function on $K$. Then for an arbitrary $\varepsilon > 0$, there exist an integer $N$ and real constants $c_{j}$, $\theta_{j}$, $w_{ij}$ ($i = 1, \ldots, n$, $j = 1, \ldots, N$) such that
$$\tilde{f}(x_{1}, \ldots, x_{n}) = \sum_{j=1}^{N} c_{j}\, \varphi\!\left(\sum_{i=1}^{n} w_{ij} x_{i} - \theta_{j}\right)$$
satisfies $\max_{x \in K} |f(x) - \tilde{f}(x)| < \varepsilon$. In other words, for an arbitrary $\varepsilon > 0$, there exists a three-layer network whose output function for the hidden layer is $\varphi$, whose output functions for the input and output layers are linear, and which has an input-output function $\tilde{f}$ such that $\max_{x \in K} |f(x) - \tilde{f}(x)| < \varepsilon$.
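To make the form appearing in Theorem 3.3 concrete, the sketch below (not part of the paper) fixes random hidden weights $w_{j}$ and thresholds $\theta_{j}$ and fits the output coefficients $c_{j}$ by least squares so that $\sum_{j} c_{j}\varphi(w_{j}x - \theta_{j})$ approximates a continuous function on the compact set $K = [0,1]$. The target function, number of hidden units, and sampling are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = lambda s: 1.0 / (1.0 + np.exp(-s))       # bounded, monotone increasing, continuous

f = lambda x: np.sin(2 * np.pi * x)            # an illustrative continuous target on K = [0, 1]
x = np.linspace(0.0, 1.0, 200)                 # sample points in K

N = 25                                         # number of hidden units
w = rng.uniform(-20.0, 20.0, size=N)           # fixed hidden weights
theta = rng.uniform(-20.0, 20.0, size=N)       # fixed hidden thresholds

H = phi(np.outer(x, w) - theta)                # hidden-layer outputs, shape (200, N)
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)   # linear output layer fitted by least squares

f_tilde = H @ c
print("max |f - f_tilde| on the sample of K:", np.max(np.abs(f(x) - f_tilde)))
```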

Proof. First
Since $f$ is a continuous function on a compact subset $K$ of $\mathbb{R}^{n}$, $f$ can be extended to a continuous function on $\mathbb{R}^{n}$ with compact support.
If I operate the mollifier $\rho_{\delta}$ on $f$, the result $f_{\delta} = f * \rho_{\delta}$ is a $C^{\infty}$-function with compact support. Furthermore, $f_{\delta} \to f$ uniformly on $\mathbb{R}^{n}$ as $\delta \to 0$. Therefore, we may suppose that $f$ is a $C^{\infty}$-function with compact support for proving Theorem 3.3. By the Paley-Wiener theorem [10], the Fourier transform $F$ of $f$ is real analytic and, for any integer $N$, there exists a constant $C_{N}$ such that $|F(w)| \le C_{N}(1 + \|w\|)^{-N}$. In particular, $F \in L^{1}(\mathbb{R}^{n})$.
I define $\psi$, $\Psi$, and $F$ as follows: $\psi(x) = \varphi(x + a) - \varphi(x - a)$ for a fixed $a > 0$, and $\Psi$ and $F$ are the Fourier transforms of $\psi$ and $f$, respectively. Since $\varphi$ is bounded and monotone increasing, $\psi$ is absolutely integrable, and $a$ can be chosen so that $\Psi(1) \neq 0$.
The essential part of the proof of Irie-Miyake's integral formula [6] is the equality between $f$ and its integral representation in Theorem 3.2, and this is derived in our discussion as well. Using the estimate of $F$ above, it is easy to prove that the corresponding integrals converge uniformly on $K$; therefore, the integral representation converges to $f$ uniformly on $K$. I can state that for any $\varepsilon > 0$, there exists $A > 0$ such that the truncation estimate (3.5) holds.

Second
I will approximate the integral representation of $f$ by finite integrals over a bounded region. For the given $\varepsilon > 0$, fix $A$ which satisfies (3.5). I will show that the truncation parameters can be chosen so that the bound (3.7) holds. Using the integral formula above, the boundedness of $\psi$, and the compactness of $K$, we can take the truncation parameters so that the remaining error is smaller than $\varepsilon$. Therefore, (3.7) holds.

Third
From (3.5) and (3.7), I can say that for any $\varepsilon > 0$, $f$ can be approximated uniformly on $K$ by a finite integral. The integral can be replaced by its real part, and the integrand is continuous on the compact domain of integration; hence the finite integral can be approximated by a Riemann sum uniformly on $K$. Since the Riemann sum is a finite linear combination of terms of the form $\varphi\!\left(\sum_{i} w_{i} x_{i} - \theta\right)$, it can be represented by a three-layer network. Therefore, $f$ can be represented approximately by a three-layer network.

Theorem 3.4. Let $X$ be a linear space. The set of radial basis functions on $X$, denoted by $S$, is fundamental in the space of continuous functions on $X$; that is, its linear span is dense.

Proof. Let $A$ denote the linear span of the set $S$ of radial basis functions; I will prove that $A$ is an algebra in the space of continuous functions on $X$, using the representation of Definition 3.1 (here $X^{*}$ denotes the dual space of $X$) [11]. Taking the corresponding functional to be the zero functional in $X^{*}$ yields a function that is constant on all of $X$; hence $A$ contains the constants.
Now, if $x \neq y$ in $X$, then there exists a radial basis function $\varphi$ such that $\varphi(x) \neq \varphi(y)$; hence $A$ separates the points of $X$.
Now, I will prove that each basic neighborhood of a fixed continuous function $f$ intersects $A$. Let $K$ be a compact set such that the basic neighborhood of $f$ corresponding to $K$ and $\varepsilon > 0$ has the form $\{h : \sup_{x \in K} |h(x) - f(x)| < \varepsilon\}$. Now restrict $f$ and all members of $A$ to $K$; then $A|_{K}$ is still an algebra containing the constants and separating the points of $K$. Hence, by the Stone-Weierstrass theorem, $A|_{K}$ is dense in $C(K)$, and consequently $A$ intersects the neighborhood, as required. Therefore, the set of radial basis functions is fundamental, as required.

4. Numerical Methods

Consider a feed-forward network with an input layer, a single hidden layer, and an output layer consisting of a single unit. I have purposely chosen a single output unit to simplify the exposition without loss of generality.

The network is designed to perform a nonlinear mapping from the input space to the hidden space, followed by a linear mapping from the hidden space to the output space.

Given a set of $N$ distinct points $\{x_{i} \in \mathbb{R}^{n} : i = 1, \ldots, N\}$ and a corresponding set of $N$ real numbers $\{d_{i} \in \mathbb{R} : i = 1, \ldots, N\}$, we find a function $F : \mathbb{R}^{n} \to \mathbb{R}$ that satisfies the interpolation conditions $F(x_{i}) = d_{i}$, $i = 1, \ldots, N$.

For strict interpolation as specified here, the interpolating surface (i.e., the function $F$) is constrained to pass through all the training data points. Starting from initial weights $w_{i}$, thresholds $\theta_{i}$, and coefficients $c_{i}$, we use the backpropagation algorithm [11] to find the best weights. Inserting the interpolation conditions for (1.1) into (2.1), we obtain the following set of simultaneous linear equations for the unknown coefficients of the expansion: $\sum_{j=1}^{N} \varphi_{ij}\, c_{j} = d_{i}$, where $\varphi_{ij} = \varphi(\|x_{i} - x_{j}\|)$, $i, j = 1, \ldots, N$.
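For the strict-interpolation step, the simultaneous linear equations above can be assembled and solved directly. The sketch below is not the paper's algorithm; the Gaussian profile, the helper name, and the toy data are assumptions used only to show the structure of the system.

```python
import numpy as np

def rbf_interpolation_coefficients(points, d, g):
    """Solve  sum_j g(||x_i - x_j||) c_j = d_i  for the expansion coefficients c."""
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    Phi = g(dists)                              # interpolation matrix, phi_ij = g(||x_i - x_j||)
    return np.linalg.solve(Phi, np.asarray(d, dtype=float))

g = lambda r: np.exp(-r**2)                     # assumed Gaussian profile
points = [[0.0, 0.0], [0.5, 0.1], [1.0, 0.3]]   # e.g., (x, t) training points
d = [0.0, 0.2, 0.0]                             # target values at those points
print(rbf_interpolation_coefficients(points, d, g))
```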

Compute the error $e_{i} = d_{i} - F(x_{i})$ at each training point and the total error $E = \tfrac{1}{2}\sum_{i=1}^{N} e_{i}^{2}$.

Calculate the weight corrections by gradient descent, $\Delta w = -\eta\,\partial E/\partial w$, where $\eta$ is the training rate.

After that, the best weights $c_{i}$, $w_{i}$, and $\theta_{i}$ are found. Substituting them into (2.1), the approximate solution is obtained.
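The steps above (initialize the weights, compute the error, update with the training rate $\eta$, and substitute the trained weights into (2.1)) can be combined into one loop. The following is a minimal gradient-descent sketch consistent with those steps, not the paper's exact implementation; the architecture sizes, loss, learning rate, and toy training data are assumptions.

```python
import numpy as np

def train_rbf_network(X, d, n_hidden=10, eta=0.05, n_epochs=2000, seed=0):
    """Fit y = sum_j c_j * sigma(w_j . x - theta_j) to targets d by gradient
    descent on the mean squared error (a hypothetical helper)."""
    rng = np.random.default_rng(seed)
    X, d = np.asarray(X, float), np.asarray(d, float)
    W = rng.normal(size=(n_hidden, X.shape[1]))    # hidden weights
    theta = rng.normal(size=n_hidden)              # hidden thresholds
    c = rng.normal(size=n_hidden)                  # hidden-to-output weights
    sigma = lambda s: 1.0 / (1.0 + np.exp(-s))     # bounded, monotone increasing
    n = len(d)
    for _ in range(n_epochs):
        h = sigma(X @ W.T - theta)                 # hidden outputs
        e = h @ c - d                              # per-sample error
        delta = (e[:, None] * c) * h * (1.0 - h)   # error backpropagated to the hidden layer
        c -= eta * (h.T @ e) / n                   # gradient-descent updates
        W -= eta * (delta.T @ X) / n
        theta -= eta * (-delta.sum(axis=0)) / n
    return W, theta, c

# training points (x, t) and targets taken from initial/boundary data (illustrative only)
X_train = [[0.0, 0.0], [0.25, 0.0], [0.5, 0.0], [0.75, 0.0], [1.0, 0.0], [0.0, 0.5], [1.0, 0.5]]
u_train = [0.0, 0.1, 0.2, 0.1, 0.0, 0.0, 0.0]
W, theta, c = train_rbf_network(X_train, u_train)

# the trained weights substituted into (2.1) give the approximate solution at any (x, t)
sigma = lambda s: 1.0 / (1.0 + np.exp(-s))
u_approx = lambda x, t: sigma(np.array([x, t]) @ W.T - theta) @ c
print(u_approx(0.5, 0.25))
```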

The above method is intended for this situation: in the absence of the exact solution, values of the independent variables are substituted into the initial and boundary conditions in order to obtain exact target values. Those values are then used in the training phase of the backpropagation neural network considered here. This approach is shown to yield good weights for (2.1). The radial basis function of Theorems 3.3 and 3.4 is used in order to obtain convergent training of the backpropagation neural network.

5. Numerical Example

The following two-sided fractional partial differential equation was considered on a finite domain, with given coefficient functions $c_{+}(x,t)$ and $c_{-}(x,t)$, forcing function $s(x,t)$, initial condition, and boundary conditions. This fractional PDE has a known exact solution, which can be verified by applying the fractional derivative formulas (see [1]).

Table 1 shows the training data obtained from the values of the initial and boundary conditions. Table 2 shows the comparison between the exact solution and the approximate solution at the test points. Table 3 compares the maximum error of the approximate solution obtained by the artificial neural network method with that of the finite difference method [1].

6. Conclusion

Referring back to Tables 1–3, it is clear that the approximation of the fractional partial differential equation (FPDE) using a neural network with RBF yields a good approximate solution. The convergence of this method was established as well. In addition, the discussed method is able to solve fractional partial differential equations with more than two variables, where many other methods clearly fail. The suggested method provides a general approximate solution on the interval domain and depends on the boundary conditions.