Abstract

We present a new method for solving initial value problems for fractional differential equations using neural networks constructed from cosine basis functions with adjustable parameters. By training the neural networks repeatedly, numerical solutions of the fractional differential equations are obtained. Moreover, the technique is also applicable to coupled differential equations of fractional order. The computer graphics and numerical solutions show that the proposed method is very effective.

1. Introduction

Recently, fractional differential equations have gained considerable importance due to their frequent appearance in applications: fluid flow, rheology, dynamical processes in self-similar and porous structures, diffusive transport akin to diffusion, electrical networks, probability and statistics, control theory of dynamical systems, viscoelasticity, electrochemistry of corrosion, chemical physics, optics and signal processing [17], and so on. These applications in the interdisciplinary sciences motivate us to seek analytic or numerical solutions of fractional differential equations. However, for most of these equations it is difficult to find exact solutions, and some may not possess exact solutions at all. Numerical techniques must therefore be applied to fractional differential equations.

Many effective methods for solving fractional differential equations have now been presented, including nonlinear functional-analytic methods such as the monotone iterative technique [8, 9], topological degree theory [10], and fixed point theorems [11–13]. Numerical solutions have also been obtained by the following methods: random walk [2], the matrix approach [14], the Adomian decomposition method and the variational iteration method [15], the homotopy analysis method (HAM) [16–19], the homotopy perturbation method (HPM) [20], and so forth. More recently, in [21], Raja et al. obtained numerical solutions of fractional differential equations by applying a Particle Swarm Optimization (PSO) algorithm together with a feedforward artificial neural network (ANN). However, the convergence of that algorithm was not proven, and the method was applied only to single fractional differential equations. In this paper, we construct two different neural networks based on cosine functions and obtain conditions for the convergence of the algorithms.

The first neural network is applied to linear and nonlinear fractional differential equations of the form
$$ {}^{C}D^{\alpha} y(t) = f(t, y(t)), \quad t \in [0, T], \tag{1} $$
with initial condition
$$ y(0) = y_0, \tag{2} $$
where ${}^{C}D^{\alpha}$ is the Caputo fractional derivative of order $\alpha$, $0 < \alpha \le 1$.

The second neural network is applied to fractional coupled differential equations of the form
$$ {}^{C}D^{\alpha_1} x(t) = f_1(t, x(t), y(t)), \qquad {}^{C}D^{\alpha_2} y(t) = f_2(t, x(t), y(t)), \quad t \in [0, T], \tag{3} $$
with initial conditions
$$ x(0) = x_0, \qquad y(0) = y_0, \tag{4} $$
where ${}^{C}D^{\alpha_i}$ is the Caputo fractional derivative of order $\alpha_i$, $0 < \alpha_i \le 1$. The solutions of the above two problems are written as expansions in cosine basis functions whose parameters can be adjusted to minimize an appropriate error function, so we need to compute the gradient of the error with respect to the network parameters. By adjusting the parameters repeatedly, we obtain the numerical solutions once the error value is less than the required accuracy or the number of training iterations reaches its maximum.
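To fix notation for what follows, the block below sketches the generic scheme in our own notation; in particular, the trial form $1 - \cos(kt)$ is an assumption chosen only because it satisfies the initial condition automatically, and is not necessarily the exact form used in the paper.

```latex
% Hedged sketch of the generic scheme (our notation):
% cosine-expansion trial solution, squared-residual error, gradient update.
\hat{y}(t) = y_0 + \sum_{k=1}^{M} w_k \bigl(1 - \cos(kt)\bigr), \qquad \hat{y}(0) = y_0,
\\[4pt]
E(w) = \frac{1}{2} \sum_{i=1}^{m}
  \Bigl( {}^{C}D^{\alpha}\hat{y}(t_i) - f\bigl(t_i, \hat{y}(t_i)\bigr) \Bigr)^{2},
\qquad
w_k \leftarrow w_k - \eta\, \frac{\partial E}{\partial w_k}.
```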

2. Definitions and Lemma

Definition 1 (see [22]). The Riemann-Liouville fractional integral of order $\alpha > 0$ of a function $f$ is defined as
$$ I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} f(s)\, \mathrm{d}s, \quad t > 0. $$
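As a quick sanity check on Definition 1, the following snippet (ours, with hypothetical names; not code from the paper) verifies numerically that the Riemann-Liouville integral reproduces the known power rule $I^{\alpha} t^{p} = \frac{\Gamma(p+1)}{\Gamma(p+\alpha+1)} t^{p+\alpha}$.

```python
# Numerical check of the Riemann-Liouville integral (Definition 1) against the
# power rule I^alpha t^p = Gamma(p+1)/Gamma(p+alpha+1) * t^(p+alpha).
# Illustrative sketch; not code from the paper.
from scipy.integrate import quad
from scipy.special import gamma

def rl_integral(f, t, alpha):
    """I^alpha f(t) = 1/Gamma(alpha) * integral_0^t (t-s)^(alpha-1) f(s) ds."""
    val, _ = quad(lambda s: (t - s) ** (alpha - 1) * f(s), 0.0, t)
    return val / gamma(alpha)

alpha, p, t = 0.5, 2.0, 1.3
numeric = rl_integral(lambda s: s ** p, t, alpha)
exact = gamma(p + 1) / gamma(p + alpha + 1) * t ** (p + alpha)
print(numeric, exact)  # the two values agree to quadrature accuracy
```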

Definition 2 (see [22]). The Riemann-Liouville and Caputo fractional derivatives of order $\alpha$, $n-1 < \alpha \le n$, are given by
$$ D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}} \int_{0}^{t} (t-s)^{n-\alpha-1} f(s)\, \mathrm{d}s, \qquad {}^{C}D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-s)^{n-\alpha-1} f^{(n)}(s)\, \mathrm{d}s, $$
where $n \in \mathbb{N}$.
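In the same spirit, here is a numerical check (ours, not from the paper) of the Caputo case of Definition 2 against the power rule ${}^{C}D^{\alpha} t^{p} = \frac{\Gamma(p+1)}{\Gamma(p-\alpha+1)} t^{p-\alpha}$, which is used repeatedly below.

```python
# Numerical check of the Caputo derivative (Definition 2, case 0 < alpha < 1)
# against the power rule C-D^alpha t^p = Gamma(p+1)/Gamma(p-alpha+1) * t^(p-alpha).
# Illustrative sketch; not code from the paper.
from scipy.integrate import quad
from scipy.special import gamma

def caputo(df, t, alpha):
    """C-D^alpha f(t) = 1/Gamma(1-alpha) * integral_0^t (t-s)^(-alpha) f'(s) ds."""
    val, _ = quad(lambda s: (t - s) ** (-alpha) * df(s), 0.0, t)
    return val / gamma(1.0 - alpha)

alpha, p, t = 0.5, 2.0, 1.3
numeric = caputo(lambda s: p * s ** (p - 1), t, alpha)  # f(s) = s^p
exact = gamma(p + 1) / gamma(p - alpha + 1) * t ** (p - alpha)
print(numeric, exact)  # agree to quadrature accuracy
```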

Definition 3 (see [22]). The classical Mittag-Leffler function is defined by
$$ E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}, \quad \alpha > 0. $$
The generalized Mittag-Leffler function is defined by
$$ E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \quad \alpha, \beta > 0. $$
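Definition 3 is easy to exercise numerically by truncating the series and checking the classical special case $E_{1}(z) = e^{z}$ (an illustrative snippet of ours, not from the paper):

```python
# Truncated-series evaluation of the generalized Mittag-Leffler function
# E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).
# Sanity check: E_{1,1}(z) = exp(z). Illustrative sketch; not from the paper.
import math
from scipy.special import gamma

def mittag_leffler(z, alpha, beta=1.0, terms=100):
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))

print(mittag_leffler(1.5, 1.0), math.exp(1.5))  # both are about 4.4817
```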

Definition 4. The functions $\cos(\lambda t)$ and $\sin(\lambda t)$ ($\lambda > 0$) are defined by the power series
$$ \cos(\lambda t) = \sum_{k=0}^{\infty} \frac{(-1)^{k} (\lambda t)^{2k}}{\Gamma(2k+1)}, \qquad \sin(\lambda t) = \sum_{k=0}^{\infty} \frac{(-1)^{k} (\lambda t)^{2k+1}}{\Gamma(2k+2)}. $$
Obviously, Euler's equations have the following forms:
$$ \cos(\lambda t) = \frac{e^{\mathrm{i}\lambda t} + e^{-\mathrm{i}\lambda t}}{2}, \qquad \sin(\lambda t) = \frac{e^{\mathrm{i}\lambda t} - e^{-\mathrm{i}\lambda t}}{2\mathrm{i}}. $$

Lemma 5. If $\cos(\lambda t)$ and $\sin(\lambda t)$ are defined as in Definition 4, then, for $0 < \alpha \le 1$,
$$ {}^{C}D^{\alpha} \cos(\lambda t) = -\lambda^{2} t^{2-\alpha}\, E_{2,\,3-\alpha}(-\lambda^{2} t^{2}), \tag{11} $$
$$ {}^{C}D^{\alpha} \sin(\lambda t) = \lambda\, t^{1-\alpha}\, E_{2,\,2-\alpha}(-\lambda^{2} t^{2}). \tag{12} $$

Proof. The beta function is defined by $B(p,q) = \int_{0}^{1} s^{p-1} (1-s)^{q-1}\, \mathrm{d}s = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}$, and, with the substitution $s = tu$, we have the following equation:
$$ \int_{0}^{t} (t-s)^{-\alpha} s^{p-1}\, \mathrm{d}s = t^{p-\alpha} B(p, 1-\alpha) = \frac{\Gamma(p)\Gamma(1-\alpha)}{\Gamma(p-\alpha+1)}\, t^{p-\alpha}. $$
Then, according to the definition of the Caputo fractional derivative, applying this identity term by term to the power series of $\cos(\lambda t)$ gives
$$ {}^{C}D^{\alpha} \cos(\lambda t) = \sum_{k=1}^{\infty} \frac{(-1)^{k} \lambda^{2k}\, t^{2k-\alpha}}{\Gamma(2k-\alpha+1)} = -\lambda^{2} t^{2-\alpha}\, E_{2,\,3-\alpha}(-\lambda^{2} t^{2}). $$
Then (11) holds. Similarly, we obtain (12). In particular, when $\alpha = 1$, we have ${}^{C}D^{1}\cos(\lambda t) = -\lambda \sin(\lambda t)$ and ${}^{C}D^{1}\sin(\lambda t) = \lambda \cos(\lambda t)$.
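Because (11) is central to evaluating the network's Caputo derivatives in closed form, a numerical cross-check is reassuring. The snippet below (ours, not code from the paper) compares the quadrature form of ${}^{C}D^{\alpha}\cos(\lambda t)$ with the Mittag-Leffler closed form in (11):

```python
# Cross-check of (11):
#   C-D^alpha cos(lambda t) = -lambda^2 t^(2-alpha) E_{2,3-alpha}(-lambda^2 t^2).
# Our verification snippet; not code from the paper.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_cos(lam, t, alpha):            # quadrature form, 0 < alpha < 1
    integrand = lambda s: (t - s) ** (-alpha) * (-lam * np.sin(lam * s))
    val, _ = quad(integrand, 0.0, t)
    return val / gamma(1.0 - alpha)

def ml(z, a, b, terms=80):                # truncated Mittag-Leffler series
    return sum(z ** k / gamma(a * k + b) for k in range(terms))

lam, t, alpha = 3.0, 0.8, 0.6
closed = -lam ** 2 * t ** (2 - alpha) * ml(-(lam * t) ** 2, 2.0, 3.0 - alpha)
print(caputo_cos(lam, t, alpha), closed)  # the two values agree
```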

3. Illustration of the Method and Application

3.1. The First Neural Network

To describe the method, we consider (1) with initial condition (2). The trial solution satisfying the initial condition is written as a cosine expansion with $M$ terms, where $M$ represents the number of neurons and the coefficients $w_k$ are the unknown weights of the network. The weights are determined in the training procedure so as to reduce the error function formed from the residual of (1) at the $m$ sample points, measured in the Euclidean norm; they are adjusted by the gradient-descent update (17) with learning rate $\eta$.
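Since the trial solution, error function, and update rule are given above only in reconstructed form, the following minimal Python sketch shows one plausible realization of the scheme on a manufactured test problem with exact solution $y(t) = t^{2}$; the trial form, the test problem, the step-size choice, and all names are our assumptions, not the paper's.

```python
# Minimal sketch (ours, not the paper's code) of the cosine-basis network for
# one Caputo fractional IVP:
#   C-D^alpha y(t) = g(t) - y(t) + t^2,  y(0) = 0,  exact solution y(t) = t^2,
# with g(t) = Gamma(3)/Gamma(3-alpha) * t^(2-alpha).  Assumed trial form:
#   y_hat(t) = y0 + sum_k w_k (1 - cos(k t)),  so y_hat(0) = y0 automatically.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, y0, M = 0.5, 0.0, 5                 # order, initial value, neurons
ts = np.linspace(0.1, 1.0, 10)             # sample (training) points

def caputo_basis(k, t):
    """C-D^alpha of phi_k(t) = 1 - cos(k t), via quadrature (phi_k'(s) = k sin(k s))."""
    val, _ = quad(lambda s: (t - s) ** (-alpha) * k * np.sin(k * s), 0.0, t)
    return val / gamma(1.0 - alpha)

Phi = np.array([[1.0 - np.cos(k * t) for k in range(1, M + 1)] for t in ts])
DPhi = np.array([[caputo_basis(k, t) for k in range(1, M + 1)] for t in ts])

# For this test problem the FDE residual is linear in the weights:
#   r(w) = DPhi @ w - (g - y_hat + ts^2) = (DPhi + Phi) @ w - (g + ts^2).
J = DPhi + Phi
b = gamma(3.0) / gamma(3.0 - alpha) * ts ** (2.0 - alpha) + ts ** 2

eta = 1.0 / np.linalg.norm(J, 2) ** 2      # step size kept stable, cf. Theorem A
w = np.zeros(M)
for _ in range(20000):                     # repeated training, as in the paper
    w -= eta * (J.T @ (J @ w - b))         # gradient of E(w) = ||r(w)||^2 / 2

print(np.max(np.abs(y0 + Phi @ w - ts ** 2)))  # deviation from exact solution
```

For this linear test problem the error function is convex, so gradient descent with a step below the spectral bound provably approaches the least-squares fit; this is the same kind of learning-rate restriction that Theorem A formalizes.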

3.2. Convergence of the Algorithm

Theorem A. Let $\eta$ denote the learning rate, let $m$ denote the number of sample points, and let $M$ denote the number of neurons. Suppose the basis functions and their Caputo derivatives are bounded on the interval $[0, T]$. (From Figure 10, we see that the relevant function is bounded there.) Then the neural network is convergent on the interval $[0, T]$ when the learning rate $\eta$ satisfies the bound given at the end of the proof.

Proof. Denote by $e(n)$ the error vector at the $n$th training step. According to the weight-update rule (17), $e(n+1)$ is obtained from $e(n)$ by a linear transformation. Define the Lyapunov function $V(n) = \frac{1}{2}\|e(n)\|^{2}$; a direct computation of $\Delta V(n) = V(n+1) - V(n)$ yields a quadratic form in $e(n)$ whose coefficients are controlled by the Frobenius matrix norm, defined by $\|A\|_{F} = \bigl(\sum_{i,j} a_{ij}^{2}\bigr)^{1/2}$, of the Jacobian of the residual with respect to the weights. In order to make the neural network converge we require $\Delta V(n) < 0$, which yields the stated upper bound on the learning rate $\eta$, and the convergence of the network follows.
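The following block sketches, in our own notation, the standard shape such a Lyapunov argument takes, with $e(n)$ the error vector and $J(n)$ the Jacobian of the residual with respect to the weights; it is a hedged reconstruction consistent with the Frobenius-norm bound invoked above, not the paper's own display.

```latex
% Generic shape of the Lyapunov convergence argument (our reconstruction).
e(n+1) \approx \bigl(I - \eta\, J(n) J(n)^{\mathsf{T}}\bigr)\, e(n),
\qquad
V(n) = \tfrac{1}{2}\, \|e(n)\|^{2},
\\[4pt]
\Delta V(n) = V(n+1) - V(n) < 0
\quad \text{whenever} \quad
0 < \eta < \frac{2}{\|J(n)\|_{F}^{2}},
\qquad
\|J\|_{F} = \Bigl( \sum_{i,j} J_{ij}^{2} \Bigr)^{1/2}.
```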

3.3. Examples
3.3.1. Example 1

We first consider a linear fractional differential equation with a given initial condition; its exact solution is known in closed form. This equation can also be solved by the following methods: the Genetic Algorithm (GA) [21], the classical Grünwald-Letnikov numerical technique (GL) [23], and the Particle Swarm Optimization (PSO) algorithm [23]. We set the parameters and train the neural network 4500 times; the weights of the network for Example 1 are given in Table 1. Figure 1 shows that the sample points lie on the exact solution curve after training is completed. We then check that the other points also match the exact solution well (see Figure 2). From Figure 3 we see that the error values decrease rapidly. Tables 2(a) and 2(b) show the numerical solutions and accuracy for Example 1 obtained by the different methods. In this paper, all numerical experiments are performed on a Lenovo T400 (Intel Core 2 Duo CPU P8700, 2.53 GHz) with Matlab R2010b. The neural network with cosine basis functions took about 850 s, while the other algorithms mentioned above needed about 2,240 s.

3.3.2. Example 2

Next, we consider a second linear fractional differential equation with a given initial condition; its exact solution is again known in closed form. We set the parameters and train the neural network 1000 times; the weights of the network for Example 2 are given in Table 1. Figures 4, 5, and 6 show that the neural network remains applicable in this setting. Table 3 shows the exact solution, the approximate solution, and the accuracy for Example 2.

3.3.3. Example 3

Thirdly, we consider a nonlinear fractional differential equation with a given initial condition; its exact solution is known in closed form. We set the parameters and train the neural network 1000 times; the weights of the network for Example 3 are given in Table 1. Table 4 shows the exact solution, the approximate solution, and the accuracy for Example 3.

3.4. The Second Neural Network

To describe the method, we consider (3) with initial conditions (4). The two trial solutions for the problem are written as cosine expansions with $M$ terms each, where $M$ represents the number of neurons and $v_k$ and $w_k$ are the unknown weights of the network. The weights are determined in the training procedure so as to reduce the error function formed from the two residuals of (3) at the $m$ sample points; we then adjust the weights $v_k$ and $w_k$ simultaneously by two gradient-descent updates with learning rate $\eta$.
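In the same hedged spirit as the sketch in Section 3.1, here is a self-contained realization of the second network on a manufactured coupled problem with exact pair $x(t) = t$, $y(t) = t^{2}$; the trial forms, the test problem, and all names are our assumptions, not the paper's.

```python
# Sketch (ours, not the paper's code) of the second network on a manufactured
# coupled test problem with exact solutions x(t) = t, y(t) = t^2:
#   C-D^alpha x = g1(t) - y,   C-D^alpha y = g2(t) + x,   x(0) = y(0) = 0,
# where g1, g2 are chosen so that the exact pair solves the system.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, M = 0.5, 5
ts = np.linspace(0.1, 1.0, 10)

def caputo_basis(k, t):                  # C-D^alpha of 1 - cos(k t)
    val, _ = quad(lambda s: (t - s) ** (-alpha) * k * np.sin(k * s), 0.0, t)
    return val / gamma(1.0 - alpha)

Phi = np.array([[1.0 - np.cos(k * t) for k in range(1, M + 1)] for t in ts])
DPhi = np.array([[caputo_basis(k, t) for k in range(1, M + 1)] for t in ts])

g1 = gamma(2.0) / gamma(1.5) * ts ** 0.5 + ts ** 2   # forces x = t exactly
g2 = 2.0 / gamma(2.5) * ts ** 1.5 - ts               # forces y = t^2 exactly

# Stacked residual is linear in the joint weight vector [v; w]:
#   rx = DPhi v + Phi w - g1,   ry = -Phi v + DPhi w - g2.
J = np.block([[DPhi, Phi], [-Phi, DPhi]])
b = np.concatenate([g1, g2])

eta = 1.0 / np.linalg.norm(J, 2) ** 2    # stable step size, cf. Theorem B
vw = np.zeros(2 * M)
for _ in range(20000):
    vw -= eta * (J.T @ (J @ vw - b))     # joint gradient step on both weight sets

v, w = vw[:M], vw[M:]
print(np.max(np.abs(Phi @ v - ts)), np.max(np.abs(Phi @ w - ts ** 2)))
```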

3.5. Convergence of the Algorithm

Theorem B. Let $\eta$ denote the learning rate, let $m$ denote the number of sample points, and let $M$ denote the number of neurons. Suppose the basis functions, their Caputo derivatives, and the partial derivatives of $f_1$ and $f_2$ are bounded on the interval $[0, T]$. Then the neural network is convergent on the interval $[0, T]$ when the learning rate $\eta$ satisfies the bound given at the end of the proof.

Proof. Denote by $e_1(n)$ and $e_2(n)$ the two error vectors at the $n$th training step. Define the Lyapunov function $V(n) = \frac{1}{2}\bigl(\|e_1(n)\|^{2} + \|e_2(n)\|^{2}\bigr)$; then, similarly to the proof of Theorem A, a direct computation of $\Delta V(n)$ shows that $\Delta V(n) < 0$ whenever the learning rate $\eta$ satisfies the stated bound, and the convergence of the network follows. This completes the proof.

3.6. Examples
3.6.1. Example 4

We first consider a system of linear coupled fractional differential equations with given initial conditions; the exact solutions are known in closed form. We set the parameters and train the neural network 2000 times; the weights of the network for Example 4 are given in Table 5. Figures 7 and 8 show that the sample points and the checkpoints are in good agreement with the exact solutions of the problem. Figure 9 shows that the error of the numerical solutions decreases rapidly within the first 50 training iterations. Table 6 shows the exact solutions, the approximate solutions, and the accuracy for Example 4.

3.6.2. Example 5

Next, we consider a system of nonlinear fractional coupled differential equations with given initial conditions; the exact solutions are known in closed form. We set the parameters and train the network as before. The numerical solutions in Table 7 show that this network can also be applied to nonlinear fractional coupled differential equations, although more time is needed to train the network.

4. Conclusion

In this paper, using neural networks, we obtained numerical solutions for single fractional differential equations and for systems of coupled differential equations of fractional order. The computer graphics demonstrate that the numerical results are in good agreement with the exact solutions. In (1), a suitable choice of the right-hand side transforms the problem into a fractional Riccati equation (Example 3 in this paper). In (3), suitable choices of $f_1$ and $f_2$ transform the problem into a fractional-order Lotka-Volterra predator-prey system; we will consider this problem in another paper. The neural network is a powerful and effective method for the above two problems, and it should also be able to solve fractional partial differential equations.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the referees for their many constructive comments and suggestions to improve the paper. This work was partly supported by the Special Funds of the National Natural Science Foundation of China under Grant no. 11247310, the Foundation for Distinguished Young Talents in Higher Education of Guangdong under Grant no. 2012LYM0096, and the Fund of Hanshan Normal University under Grant nos. LY201302 and LF201403.