Advances in Mathematical Physics

Advances in Mathematical Physics / 2015 / Article

Research Article | Open Access

Volume 2015 |Article ID 439526 | 12 pages | https://doi.org/10.1155/2015/439526

A Numerical Method for Solving Fractional Differential Equations by Using Neural Network

Academic Editor: Fawang Liu
Received05 Feb 2015
Revised27 Apr 2015
Accepted27 Apr 2015
Published04 Oct 2015

Abstract

We present a new method for solving the fractional differential equations of initial value problems by using neural networks which are constructed from cosine basis functions with adjustable parameters. By training the neural networks repeatedly the numerical solutions for the fractional differential equations were obtained. Moreover, the technique is still applicable for the coupled differential equations of fractional order. The computer graphics and numerical solutions show that the proposed method is very effective.

1. Introduction

Recently, fractional differential equations have gained considerable importance due to their frequent appearance applications in fluid flow, rheology, dynamical processes in self-similar and porous structures, diffusive transport akin to diffusion, electrical networks, probability and statistics, control theory of dynamical systems, viscoelasticity, electrochemistry of corrosion, chemical physics, optics and signal processing [17], and so on. These applications in interdisciplinary sciences motivate us to try to find out the analytic or numerical solutions for the fractional differential equations. But for most ones it is difficult to find out or even have exact solutions. Thus, necessarily, the numerical techniques are applied to the fractional differential equations.

Now, many effective methods for solving fractional differential equations have been presented, such as nonlinear functional analysis method including monotone iterative technique [8, 9], topological degree theory [10], and fixed point theorems [1113]. Also, numerical solutions are obtained by the following methods: random walk [2], matrix approach [14], the Adomian decomposition method and variational iteration method [15], HAM [1619], homotopy perturbation method (HPM) [20], and so forth. Not long ago, in [21], Raja et al. by applying Particle Swarm Optimization (PSO) algorithm along with feedforward ANN obtained the numerical solutions for fractional differential equations. But the convergence of the algorithm has not been proven, and this method is only applied to the single fractional differential equations. In this paper, we construct two different neural networks based on cosine functions and obtain the conditions of algorithm convergence.

The first neural network (NU) is applied to linear and nonlinear fractional differential equations of the form with initial condition as follows:where is the Caputo fractional derivatives of order .

The second neural network (NU) is applied to the fractional coupled differential equations of the form with initial conditions as follows: where is the Caputo fractional derivatives of order . The solutions for the above two problems are written as cosine basis functions, whose parameters can be adjusted to minimize an appropriate error function. So we need to compute the gradient of the error with respect to the network parameters. By adjusting the parameters repeatedly, we obtain the numerical solutions when the error values are less than the required accuracy or the training times reach maximum.

2. Definitions and Lemma

Definition 1 (see [22]). The Riemann-Liouville fractional integral of order , of a function , is defined as

Definition 2 (see [22]). The Riemann-Liouville and Caputo fractional derivatives of order , are given by where .

Definition 3 (see [22]). The classical Mittag-Leffler function is defined by The generalized Mittag-Leffler function is defined by

Definition 4. The functions , () are defined by Obviously, Euler’s equations have the following forms:

Lemma 5. If and are defined as in Definition 4, then

Proof. The beta function was defined by , and we have the following equation: Then according to the definition of Caputo fractional derivatives, we have Then (11) holds. Similarly, we obtain (12). In particular, when , , we have

3. Illustration of the Method and Application

3.1. The First Neural Network

To describe the method, we consider (1) with initial condition . The th trial solution satisfying the initial condition is written as where represents the number of neurons and are unknown weights of the network determined in training procedures to reduce the error function: where represents the number of sample points, is Euclidean norm, and where ; then we can adjust the weights by the following equation: where

3.2. Convergence of the Algorithm

Theorem A. Let represent learning rate, let represent the number of sample points, and let represent the number of neurons: ,  , . Suppose , on the interval for . (From Figure 10, we see that the function is bounded when .) Then the neural network is convergent on the interval when

Proof. Let , and then we denote by where and . Then according to (17), we have where . Then we have Noting , then we get Define Lyapunov function ; we have Suppose and then in accordance with (25) that yields whereThus, where is Frobenius matrix norm, defined by . Since , in order to make this neural network converge, we have which yields Hence, In accordance with , we obtain By calculating (25), we get Finally, we have

3.3. Example
3.3.1. Example  1

We first consider the following linear fractional differential equation: with condition . The exact solution is . This equation also can be solved by the following methods: Genetic Algorithm (GA) [21], Grünwald-Letnikov classical numerical technique (GL) [23], and Particle Swarm Optimization (PSO) algorithm [23]. We set the parameters , , and and train the neural network 4500 times, and the weights of the network for Example 1 are given in Table 1. Figure 1 shows that sample points are on the exact solution curve after training is completed. Then we check whether the other points also match well with the exact solution (see Figure 2). From Figure 3 we see the error values decrease rapidly. Tables 2(a) and 2(b) show the numerical solutions and accuracy for Example 1 by the different methods. In this paper, all numerical experiments are done by using Lenovo T400, Intel Core 2 Duo CPU P8700, 2.53 GHz, and Matlab version R2010b. The neural networks with cosine basis functions have taken about 850 s, but the other algorithms mentioned above need to run about 2,240 s.


Example Example Example

485763106665822699418880541540705101
4080021929140680
168013723715
189642045290329539476096
5534022218950482137228152901
018204240273

(a)

Numerical solution Accuracy
GLPSOGANUGLPSOGANU

0.010.0101
0.040.04010.04040.03960.0407
0.090.09010.09070.0917
0.160.16010.16040.15960.1621
0.250.25010.24960.2505
0.360.36020.35830.35730.3571
0.49 0.49020.48690.4853
0.64 0.64020.63620.63520.6397
0.81 0.81020.80690.8186
1 0.10010.10000.10040.1003

(b)

Numerical solution Accuracy
GLPSONUGLPSONU

0.010.0107 0.0103 0.0092
0.040.04130.04140.0377
0.090.09180.09280.0875
0.160.16220.16360.1592
0.250.25270.25380.2511
0.360.36310.36310.3609
0.49 0.49340.49180.4884
0.64 0.64380.64020.6373
0.81 0.81410.80910.8106
1 1.00440.99911.0020

3.3.2. Example  2

We secondly consider the following linear fractional differential equation: with condition . The exact solution is . We set the parameters , , and and train the neural network 1000 times, and the weights of the network for Example 2 are given in Table 1. Figures 4, 5, and 6 show that the neural network is still applicable when . Table 3 shows the exact solution, approximate solution, and accuracy for Example 2.


Numerical solution Accuracy


3.3.3. Example  3

We thirdly consider the following nonlinear fractional differential equation: with condition . The exact solution is . We set the parameters , , and and train the neural network 1000 times, and the weights of the network for Example 2 are given in Table 1. Table 4 shows the exact solution, approximate solution, and accuracy for Example 3.


Numerical solution Accuracy

0.10.00310.00220.00550.0066
0.20.01780.01330.02340.0266
0.30.04920.04260.05660.0603
0.40.10110.09720.10750.1093
0.50.17670.17730.17830.1772
0.60.27880.27970.27330.2711
0.70.40990.40550.40100.4009
0.80.57240.56430.57120.5738
0.90.76840.76700.78320.7847
111.00641.01051.0056

3.4. The Second Neural Network

To describe the method, we consider (3) with initial conditions and . The th trial solutions for the problem are written as where represents the number of sample points and and are unknown weights of the network determined in training procedures to reduce error function: where