Special Issue: Recent Advances in Function Spaces and Its Applications in Fractional Differential Equations (2021)
Research Article | Open Access
Feng Gao, Yumin Dong, Chunmei Chi, "Solving Fractional Differential Equations by Using Triangle Neural Network", Journal of Function Spaces, vol. 2021, Article ID 5589905, 7 pages, 2021. https://doi.org/10.1155/2021/5589905
Solving Fractional Differential Equations by Using Triangle Neural Network
In this paper, numerical methods for solving fractional differential equations (FDEs) by using a triangle neural network are proposed. The fractional derivatives are of Caputo type. The fractional derivative of the triangle neural network is analyzed first. Then, based on the technique of minimizing the loss function of the neural network, the proposed methods reduce the fractional differential equation to a gradient descent problem or a quadratic optimization problem. By carrying out the gradient descent process or the quadratic optimization process, the numerical solution to the FDE is obtained. The efficiency and accuracy of the presented methods are demonstrated by numerical examples, which show that this approach is easy to implement and accurate when applied to many types of FDEs.
Fractional differential equations (FDEs) have become a hot topic in many scientific fields, such as control theory of dynamical systems, fluid flow, modelling in rheology, dynamic processes in self-similar porous structures, anomalous diffusive transport, electrical networks, and probability and statistics [1–9]. Problems in science and engineering often require the solutions of various fractional differential equations, but exact solutions are difficult to find in most cases, so numerical methods must be used.
In the literature, some numerical methods for solving FDEs have been proposed, including nonlinear functional analysis methods such as the monotone iterative technique [10], topological degree theory [11], and fixed point theorems [12]. In addition, other authors have proposed numerical methods such as random walks [13], the Adomian decomposition and variational iteration methods [14], and the homotopy perturbation method [15–17].
In recent years, some scholars have tried to use neural networks to solve differential equations [18–20]. Lagaris et al. [21] proposed an artificial neural network method for solving initial and boundary value problems. In their work, a trial solution is written as the sum of two parts: the first part satisfies the initial or boundary conditions and contains no adjustable parameters, while the second part is constructed so as not to affect those conditions. The neural network is then trained to satisfy the differential equation at selected points. The drawback of this method is that the first part of the trial solution can be difficult to construct, and the method cannot be applied to fractional partial differential equations.
Piscopo et al. [22] also introduced a method to find numerical solutions of many types of differential equations. Their method does not depend on a trial solution and is therefore more flexible in many cases; it can be used to solve many types of ODEs and PDEs. These two neural network techniques motivate us to develop further neural network methods for FDEs, but computing the fractional derivative of the neural network is a difficult problem.
To overcome this difficulty, in this work, we use a triangle base neural network as the approximating function and propose an alternative method called the triangle neural network method. This paper is organized as follows. In Section 2, we study the fractional derivative of the triangle base neural network and present the numerical method for solving many types of FDEs. In Section 3, we demonstrate the efficiency of the proposed method with numerical examples. Section 4 concludes.
2. Fractional Derivative of Triangle Neural Network and Numerical Algorithm
2.1. Ordinary Fractional Differential Equation
We consider a triangle base neural network (see Figure 1) to approximate the solution of the fractional initial value problem (1) and the fractional boundary value problem (2). The network output is a weighted sum of triangle base functions, which serve as the activation functions of the neurons in the hidden layer; the number of hidden neurons is an integer parameter of the network.
Collecting the weights into a weight matrix and the triangle base functions into an activation matrix, the triangle base neural network can be written as the product of the two.
When this neural network is taken as the numerical solution of problem (1), the loss function is the sum of the squared residuals of the equation, evaluated at a set of training points, together with the squared residual of the initial condition. For problem (2), the loss function is defined analogously with the boundary conditions. We have two ways to minimize the loss function and obtain the corresponding numerical solution: the gradient descent algorithm and the optimization process. For both, we need to compute the fractional derivative of the triangle neural network. For this purpose, we have the following theorems.
Theorem 1. For a given fractional order and triangle base function, the Caputo fractional derivative of the base function admits a closed-form expression.

Applying this expression to each base function yields the corresponding terms of the derivative of the network.
We also have the following.
Theorem 2. A corresponding closed-form expression holds for the Caputo fractional derivative of the remaining triangle base functions.

Combining Theorems 1 and 2, we can compute the Caputo fractional derivative of the whole triangle neural network.
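The closed-form derivatives in Theorems 1 and 2 rest on standard Caputo calculus. As a point of reference, the classical Caputo power rule, D^α t^k = Γ(k+1)/Γ(k+1−α) · t^(k−α) for k ≥ 1, with the derivative of a constant equal to zero, can be checked numerically; the sketch below (function name hypothetical, not from the paper) uses SciPy's gamma function.

```python
from scipy.special import gamma

def caputo_power(k, alpha, t):
    """Caputo fractional derivative of t**k for 0 < alpha <= 1.

    Classical power rule:
        D^alpha t^k = Gamma(k+1) / Gamma(k+1-alpha) * t**(k-alpha),  k >= 1,
    and the Caputo derivative of a constant (k = 0) is zero.
    """
    if k == 0:
        return 0.0
    return gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)

# For alpha = 1 the rule reduces to the ordinary derivative:
# d/dt t^2 at t = 3 gives 6.
print(caputo_power(2, 1.0, 3.0))
```

For example, with α = 1/2 and k = 1 the rule gives D^(1/2) t = 2√(t/π), the familiar half-derivative of t.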
Thus, we obtain an explicit form of the loss function for problem (1).
To carry out the gradient descent process, we need the partial derivatives of the loss function with respect to each weight; by the theorems above, these are available in closed form.
So, we can see that obtaining the numerical solution of (1) is equivalent to finding the weights that minimize the loss function. There are usually two ways to do this: the gradient descent method and the optimization process.
The gradient descent method updates the weights along the negative gradient of the loss function, scaled by a step size. If the equation is a linear function of the network output, the initial value problem can also be reduced to a quadratic optimization problem in the weights.
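Since the update formulas did not survive typesetting here, a minimal sketch of this gradient descent step for a generic basis expansion may help; all names (`phi0`, `dphi`, etc.) are hypothetical, and the Caputo derivatives of the basis are assumed to be supplied in closed form as in the theorems above.

```python
import numpy as np

def loss_and_grad(c, phi0, dphi, f, ts, u0=0.0):
    """Loss L(c) = mean_i (c.dphi(t_i) - f(t_i))^2 + (c.phi0 - u0)^2 for the
    expansion u(t) = sum_j c[j] phi_j(t), plus its gradient in the weights c.
    dphi(t) returns the vector of (fractional) derivatives of the basis at t."""
    L, g = 0.0, np.zeros_like(c)
    for t in ts:
        r = c @ dphi(t) - f(t)              # equation residual at t
        L += r * r / len(ts)
        g += 2.0 * r * dphi(t) / len(ts)
    r0 = c @ phi0 - u0                      # initial-condition residual
    return L + r0 * r0, g + 2.0 * r0 * phi0

def gradient_descent(c, phi0, dphi, f, ts, lr=0.05, steps=3000):
    """Update the weights along the negative gradient, scaled by the step size lr."""
    for _ in range(steps):
        _, g = loss_and_grad(c, phi0, dphi, f, ts)
        c = c - lr * g
    return c
```

For instance, with a polynomial basis 1, t, t² and the ordinary derivative (α = 1), the problem u′ = 2t, u(0) = 0 recovers the weights of u(t) = t².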
That is, minimizing the loss becomes a quadratic optimization problem. For the fractional boundary value problem (2), the numerical solution is obtained from a similar optimization process, with the boundary conditions imposed as constraints.
So, there are two ways to solve the problem: through the gradient descent method or through the optimization technique.
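In the linear case, the quadratic optimization can be carried out as an ordinary linear least-squares solve. The sketch below (names hypothetical, not from the paper) builds the design matrix from the closed-form fractional derivatives of the basis and appends the initial condition as a weighted extra row rather than a hard constraint.

```python
import numpy as np

def solve_linear_fde(dphi, phi, f, ts, u0=0.0, w=10.0):
    """Least-squares weights c for a linear FDE D^alpha u = f, u(0) = u0,
    with u(t) = sum_j c[j] phi_j(t). dphi(t) returns the closed-form
    (fractional) derivatives of the basis functions at t."""
    A = np.array([dphi(t) for t in ts])     # residual rows at training points
    b = np.array([f(t) for t in ts])
    A = np.vstack([A, w * phi(0.0)])        # weighted initial-condition row
    b = np.append(b, w * u0)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```

When the residual can be driven exactly to zero, as in the polynomial toy case above, the weighting factor w does not affect the solution; otherwise it trades off equation accuracy against the initial condition.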
2.2. Fractional Partial Differential Equation
We next consider fractional partial differential equation problems (22) and (23) and use a triangle base neural network (see Figure 2) to approximate their solutions. The network output is a weighted sum over the hidden neurons: the import layer carries weights and the hidden layer carries bias parameters, which together feed the activation function of each neuron in the hidden layer, and the export layer carries the weights that combine the activations into the output.
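A generic single-hidden-layer forward pass with the two inputs x and t illustrates the roles of the three parameter groups; the logistic sigmoid is used here only as a placeholder activation, and all names are hypothetical.

```python
import numpy as np

def forward(x, t, W, b, v):
    """u(x, t) = sum_k v[k] * sigma(W[k, 0] * x + W[k, 1] * t + b[k]):
    W are the import-layer weights, b the hidden-layer bias parameters,
    and v the export-layer weights (in the paper's terminology)."""
    z = W @ np.array([x, t]) + b            # pre-activations of the hidden layer
    return float(v @ (1.0 / (1.0 + np.exp(-z))))
```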
The loss function for problem (23) can be given in the same way.
We use the gradient descent algorithm to train the neural network. In fact, we can train the network layer by layer: first the export layer weights, then the bias parameters of the hidden layer, and finally the import layer weights.
3. Numerical Experiment
3.1. Numerical Test 1
Consider example 1, a fractional initial value problem whose exact solution is known in closed form. We use the optimization method and the gradient descent method for the respective parameter settings. The computational results are listed in Tables 1 and 2.
3.2. Numerical Test 2
Consider example 2, a fractional boundary value problem whose exact solution is known in closed form.
For the first choice of parameters, the computational error is shown in Figure 3; for the second choice, it is shown in Figure 4.
As in example 1, the solution becomes more accurate as the number of base functions is increased. For the boundary value problem, two constraints are imposed when the optimization process is used.
3.3. Numerical Test 3
Consider example 3, a fractional initial value problem whose exact solution is known in closed form.
We use the gradient descent method to solve this problem, and the computational error is shown in Figure 5. In the training process, we set a stopping criterion for the computation; if the criterion cannot be met, training stops after a prescribed maximum number of iterations.
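The tolerance and iteration cap are not specified here; a minimal training loop with both stopping rules, sketched for a generic loss and gradient, might look as follows (all names and values are hypothetical).

```python
import numpy as np

def train(grad, loss, c0, lr=0.1, tol=1e-12, max_iter=10000):
    """Gradient descent with two stopping rules: stop early once the loss
    falls below tol, otherwise stop after max_iter training steps."""
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(max_iter):
        if loss(c) < tol:                   # stopping criterion achieved
            break
        c = c - lr * grad(c)                # one training step
    return c

# Toy quadratic: minimize (c - 3)^2, whose gradient is 2(c - 3).
c = train(lambda c: 2.0 * (c - 3.0),
          lambda c: float((c[0] - 3.0) ** 2),
          np.array([0.0]))
```

Here the toy loss reaches the tolerance long before the iteration cap, so the loop exits via the first rule; the cap guards the cases where it does not.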
3.4. Numerical Test 4
Consider example 4, a fractional partial differential equation whose exact solution is known in closed form.
We use the gradient descent method to solve this problem, and the computational error is shown in Figure 6.
The neural network method is a promising approach for solving fractional differential equations; the difficulty lies in calculating the fractional derivatives of the network involved. In this paper, we propose numerical methods for solving fractional differential equations, including initial value problems, boundary value problems, and partial FDEs, by using the triangle base neural network and the gradient descent method. All fractional derivatives in this work are of Caputo type. We first analyze the fractional derivative of the triangle base neural network. Then, based on the loss function, the proposed methods reduce the fractional differential equation to a gradient descent process or a quadratic optimization problem. By carrying out the gradient descent process or the quadratic optimization process, we obtain the numerical solutions. Numerical tests show that this approach is easy to implement and accurate when applied to many types of FDEs.
Data Availability
All the data supporting this study are available within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
All authors contributed equally and significantly to this paper. All authors read and approved the final manuscript.
Acknowledgments
This research is supported by the National Natural Science Foundation of China (No. 61772295).
- K. S. Miller, An Introduction to Fractional Calculus and Fractional Differential Equations, J. Wiley and Sons, New York, 1993.
- K. Oldham and J. Spanier, The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order, Academic Press, USA, 1974.
- A. Kilbas, H. Srivastava, and J. Trujillo, Theory and Applications of Fractional Differential Equations, Math. Studies, North-Holland, New York, 2006.
- I. Podlubny, Fractional Differential Equations, Academic Press, USA, 1999.
- M. D. Ortigueira and J. A. Tenreiro Machado, “What is a fractional derivative?” Journal of Computational Physics, vol. 293, pp. 4–13, 2015.
- R. Metzler and J. Klafter, “Boundary value problems for fractional diffusion equations,” Physica A, vol. 278, no. 1-2, pp. 107–125, 2000.
- Z. Odibat and S. Momani, “Numerical methods for nonlinear partial differential equations of fractional order,” Applied Mathematical Modelling, vol. 32, no. 1, pp. 28–39, 2008.
- Q. Yang, F. Liu, and I. Turner, “Numerical methods for fractional partial differential equations with Riesz space fractional derivatives,” Applied Mathematical Modelling, vol. 34, no. 1, pp. 200–218, 2010.
- Z. Liu and J. Liang, “A class of boundary value problems for first-order impulsive integro-differential equations with deviating arguments,” Journal of Computational and Applied Mathematics, vol. 237, no. 1, pp. 477–486, 2013.
- Y. Cui, Q. Sun, and X. Su, “Monotone iterative technique for nonlinear boundary value problems of fractional order,” Advances in Difference Equations, vol. 2017, no. 1, Article ID 248, 2017.
- Z. Liu, N. V. Loi, and V. Obukhovskii, “Existence and global bifurcation of periodic solutions to a class of differential variational inequalities,” International Journal of Bifurcation and Chaos, vol. 23, no. 7, article 1350125, 2013.
- H. Qu and X. Liu, “Existence of non-negative solutions for a fractional m-point boundary value problem at resonance,” Boundary Value Problems, vol. 2013, no. 1, Article ID 127, 2013.
- R. Metzler and J. Klafter, “The random walk's guide to anomalous diffusion: a fractional dynamics approach,” Physics Reports, vol. 339, no. 1, pp. 1–77, 2000.
- I. Podlubny, A. Chechkin, T. Skovranek, Y. Chen, and B. M. Vinagre Jara, “Matrix approach to discrete fractional calculus II: partial fractional differential equations,” Journal of Computational Physics, vol. 228, no. 8, pp. 3137–3153, 2009.
- Z. Odibat, S. Momani, and H. Xu, “A reliable algorithm of homotopy analysis method for solving nonlinear fractional differential equations,” Applied Mathematical Modelling, vol. 34, no. 3, pp. 593–600, 2010.
- S. Das and P. K. Gupta, “Homotopy analysis method for solving fractional hyperbolic partial differential equations,” International Journal of Computer Mathematics, vol. 88, no. 3, pp. 578–588, 2011.
- A. Elsaid, “Homotopy analysis method for solving a class of fractional partial differential equations,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 9, pp. 3655–3664, 2011.
- P. Kadam, G. Datkhile, and V. A. Vyawahare, “Artificial neural network approximation of fractional-order derivative operators: analysis and DSP implementation,” in Fractional Calculus and Fractional Differential Equations, Birkhäuser, Singapore, 2019.
- A. A. S. Almarashi, “Approximation solution of fractional partial differential equations by neural networks,” Advances in Numerical Analysis, vol. 2012, Article ID 912810, 10 pages, 2012.
- H. Qu and X. Liu, “A numerical method for solving fractional differential equations by using neural network,” Advances in Mathematical Physics, vol. 2015, Article ID 439526, 12 pages, 2015.
- I. E. Lagaris, A. Likas, and D. I. Fotiadis, “Artificial neural networks for solving ordinary and partial differential equations,” IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 987–1000, 1998.
- M. L. Piscopo, M. Spannowsky, and P. Waite, “Solving differential equations with neural networks: applications to the calculation of cosmological phase transitions,” Physical Review D, vol. 100, no. 1, article 016002, 2019.
Copyright © 2021 Feng Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.