Abstract

In this paper, with the aid of the Python scientific computing environment and based on deep neural networks (DNN), automatic differentiation (AD), and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization algorithm, we study the modified Korteweg-de Vries (mkdv) equation to obtain numerical solutions. Comparing the predicted solution with the exact solution, the resulting prediction error reaches . The method used in this paper demonstrates the powerful mathematical and physical ability of deep learning to flexibly simulate the physical dynamics described by differential equations and also opens a way toward understanding more physical phenomena.

1. Introduction

In recent years, nonlinear phenomena have been widely studied in fields such as mathematics, physics, chemistry, biology, finance, and engineering technology. A large number of mathematical models of scientific and engineering problems reduce to determining solutions of ordinary differential equations (ODEs) and partial differential equations (PDEs). These problems are complex and computationally demanding, and except for a few special types of differential equations that can be solved by analytical methods, analytical expressions are extremely difficult to obtain in most cases. Therefore, research on numerical methods for PDEs has become a popular mainstream direction. Numerical solution has attracted the attention of researchers and constitutes a large part of scientific and engineering computation.

Numerical methods for PDEs can be classified, according to the discretization employed, into grid-based methods and mesh-free methods. Because of the difficulties in constructing numerical schemes and generating meshes, grid-based methods are subject to many restrictions in practice. When high-precision and high-resolution solutions are required, inexperienced computational mathematicians will have difficulty because the construction of the numerical scheme is very complicated.

Artificial neural networks (ANN), which are simplified models of the biological nervous system, represent a technology with various applications in mathematical modeling, text recognition, voice recognition, learning and memory, pattern recognition, signal processing, automatic control, decision-making assistance, time-series analysis, etc. [1]. ANNs were applied to solve ordinary and partial differential equations as early as more than 20 years ago. As is well known, solving differential equations by neural networks can be regarded as a mesh-free numerical method. Because of the importance of differential equations, many methods have been developed in the literature for solving them [2]. Rosenblatt introduced the first model of supervised learning based on a single-layer neural network with a single neuron [3]. McFall studied boundary value problems with arbitrary irregular boundaries by an artificial neural network method in 2006 [4]. Mall and Chakraverty solved ordinary differential equations with a Legendre neural network in 2016 [5].

However, because of the limitations of computing methods and computing resources at that time, this technology did not receive enough attention. With the development of deep learning in recent years, Professor Karniadakis from the Department of Applied Mathematics at Brown University and his collaborators reexamined the technology and developed a deep learning algorithmic framework on this basis. It was named “physics-informed neural networks (PINN)” and was first used to solve forward and inverse problems of partial differential equations. This has triggered a great deal of follow-up research and has gradually become a research hotspot in the emerging interdisciplinary field of scientific machine learning (SciML). From the point of view of function approximation theory, a neural network can be regarded as a universal nonlinear function approximator, while modeling a partial differential equation also amounts to seeking a nonlinear function that satisfies constraint conditions; the two tasks have much in common. Thanks to the AD technology widely used in deep neural networks, the differential constraints of the equation are incorporated into the design of the loss function, yielding a neural network constrained by the physical model; this is the most basic design idea of PINN.

Both PINN’s network structure and loss function need to be tailored to the form of the differential equation, which distinguishes it from work in computational physics that directly applies off-the-shelf machine learning algorithms. Unlike the classical supervised learning task, the PINN loss function contains, in addition to the supervised data term, regularization terms arising from the differential equation and the initial and boundary conditions. These regularization terms differ from problem to problem and must be tailored to achieve an optimal design. Traditional numerical solutions of differential equations are obtained by finite difference, finite element, and other numerical methods, but these methods require explicit initial conditions and are sensitive to the boundary region; whenever the conditions change slightly, the solution must be recomputed, which makes real-time calculation and prediction difficult. PINN overcomes the sensitivity of traditional numerical simulation methods to the domain and to the initial and boundary conditions. Raissi et al. introduced the physics-informed neural network data-driven solution and presented their developments in the context of two main classes of problems, data-driven solution and data-driven discovery of partial differential equations, in 2017 [6]. Raissi et al. used multistep neural networks to study nonlinear dynamical systems in 2018 [7]. Raissi and Karniadakis studied the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time-dependent linear fractional equations by machine learning [8]. Liu et al. solved differential equations with neural networks in 2019 [9]. Han et al. solved high-dimensional partial differential equations by using deep learning in 2018 [10].

In the present study, we take advantage of fast-developing machine learning and use the PINN method proposed by Raissi et al. [11] to study the mkdv equation. AD and the L-BFGS [12] optimization algorithm are used to minimize the loss function. First, we introduce the main ideas of the algorithm. Second, we use the method to study two kinds of initial conditions of the mkdv equation, and the predicted solitary waves are shown for the first time in this paper. We also report the relative $L_2$-norm error between the predicted and exact solutions for different numbers of initial and boundary training data $N_u$ and different numbers of collocation points $N_f$. The three-dimensional diagrams and projected images of the exact and predicted solutions of the mkdv equation with different initial conditions are shown in Figures 1–4. Finally, we conclude the paper. From the results obtained in the experiments, some novel and important developments in the search for analytical solitary wave solutions of PDEs were investigated. The results of this manuscript may well complement the existing literature, such as: the extended and modified direct algebraic method, extended mapping method, and Seadawy techniques used to find solutions of some nonlinear partial differential equations, for example, dispersive solitary wave solutions of the Kadomtsev-Petviashvili-Burgers dynamical equations [13]; the elliptic function, bright and dark soliton, and solitary wave solutions of the higher-order NLSE [14]; abundant lump solutions and interaction phenomena of the ()-dimensional generalized Kadomtsev-Petviashvili equation [15]; the bidirectional propagation of small-amplitude long capillary gravity waves on the surface of shallow water [16]; dispersive traveling wave solutions of the equal-width and modified equal-width equations [17]; periodic solitary wave solutions of the ()-dimensional variable-coefficient Caudrey-Dodd-Gibbon-Kotera-Sawada equation [18]; rational solutions and lump solutions of the generalized ()-dimensional shallow water-like equation [19]; new solitary wave solutions of the coupled Maccari’s system [20]; and lump solutions of a ()-dimensional fourth-order nonlinear PDE possessing a Hirota bilinear form [21]. Therefore, this study is of significance for the later study of soliton solutions.

2. Main Ideas of the Algorithm

2.1. Illustration of the Algorithm

Deep learning is a new field in machine learning research. Its motivation lies in establishing and simulating neural networks that analyze and learn like the human brain; it mimics the mechanism of the human brain to interpret data. The concept of deep learning comes from research on artificial neural networks, and a multilayer perceptron with multiple hidden layers is one kind of deep learning structure. The structures of a simple neural network and a deep neural network are shown in Figure 5. In this paper, the network is trained in a supervised manner, which means the multilayer perceptron needs a teacher to tell it what the desired output should be. Deep learning forms more abstract high-level representations (attribute categories or features) by combining low-level features, in order to discover distributed feature representations of data. Deep learning uses a hierarchical structure similar to a neural network: the system consists of a multilayer network with an input layer, hidden layers, and an output layer. Only nodes in adjacent layers are connected; there are no connections within a layer, and each layer can be regarded as a logistic regression model. Deep learning allows computers to construct complex concepts from simpler concepts, with powerful capability and flexibility. DNNs have shown great potential in approximating high-dimensional functions compared with traditional approximations based on Lagrangian interpolation or spectral methods [22].

Typical examples of deep learning models are feedforward deep networks or multilayer perceptrons. A multilayer perceptron is a mathematical function that maps an input to an output value, and this function is composed of many simpler functions. Each layer of a fully connected DNN can be expressed as follows:
$$\mathbf{a}^{(l+1)} = \sigma\left(W^{(l)}\mathbf{a}^{(l)} + \mathbf{b}^{(l)}\right),$$
where $\mathbf{a}^{(l)}$ is the input vector of layer $l$ and $\sigma$ is the activation function (we choose the hyperbolic tangent function as the activation function),
$$\sigma(z) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}},$$
$W^{(l)}$ is the weight matrix from layer $l$ to layer $l+1$, whose entry $W^{(l)}_{ij}$ is the weight between the $i$-th input and the $j$-th neuron of the hidden layer, and $\mathbf{b}^{(l)}$ is the bias vector. The network output can be considered an approximate solution $u(t,x)$ of a PDE. The final approximate solution is obtained by adjusting the parameters (weights and biases) to minimize the error between the approximate solution and the exact solution.
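
To make the layer-wise construction concrete, the following minimal NumPy sketch implements the per-layer map and the forward pass of the approximating network; the layer widths and the initialization scheme are illustrative and not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(layers):
    """Xavier-type initialization for a fully connected network (illustrative widths)."""
    params = []
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), size=(n_in, n_out))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, t, x):
    """Per-layer map a_{l+1} = tanh(W_l a_l + b_l); a linear output layer gives u(t, x)."""
    a = np.stack([t, x], axis=-1)          # network input (t, x)
    for W, b in params[:-1]:
        a = np.tanh(a @ W + b)             # hidden layers with tanh activation
    W, b = params[-1]
    return (a @ W + b).squeeze(-1)         # approximate solution u(t, x)

params = init_params([2, 30, 30, 30, 1])   # input (t, x) -> three hidden layers -> u
u_pred = forward(params, np.linspace(0.0, 1.0, 5), np.linspace(-1.0, 1.0, 5))
```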

It was previously proven by Jones [23] and by Carroll and Dickinson [24] that a fully connected neural network can approximate any continuous function defined on a finite domain. In this paper, we introduce the form and construction of the solution of a PDE using the physics-informed neural network method. The schematic of the physics-informed neural network for the mkdv equation is shown in Figure 6. Consider the general form of the PDE as follows:
$$u_t + \mathcal{N}[u] = 0, \quad x \in \Omega, \; t \in [0, T],$$
where $\mathcal{N}[u]$ is a nonlinear function of time $t$, space $x$, the solution $u$, and its derivatives, and the subscripts denote partial differentiation with respect to either time $t$ or space $x$. For example, $u_{xx}$ is the second derivative of $u$ with respect to $x$.

2.2. Details of the Algorithm

According to Equation (1), let us define the residual $f(t,x)$ as follows:
$$f := u_t + \mathcal{N}[u].$$

The objective function for training can be defined as [25]
$$MSE = MSE_u + MSE_f,$$
where the mean squared errors $MSE_u$ and $MSE_f$ are defined, respectively, as
$$MSE_u = \frac{1}{N_u}\sum_{i=1}^{N_u}\left|u\left(t_u^i, x_u^i\right) - u^i\right|^2$$
and
$$MSE_f = \frac{1}{N_f}\sum_{i=1}^{N_f}\left|f\left(t_f^i, x_f^i\right)\right|^2.$$

The DNN training is thus performed by minimizing this mean squared error on the network outputs.

The weights and biases shared between the neural networks $u(t,x)$ and $f(t,x)$ can be learned by minimizing this mean squared error loss. Here, $\{t_f^i, x_f^i\}_{i=1}^{N_f}$ are the collocation points in the domain, $N_u$ is the number of sampling points on the initial and boundary region, $N_f$ is the number of sampling points in the interior of the region, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N_u}$ are the initial and boundary training data on $u(t,x)$, and $u(t,x)$ is the predicted solution.
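
The following PyTorch sketch illustrates how automatic differentiation builds the physics-informed loss $MSE = MSE_u + MSE_f$. It is not the authors' code; the network width is illustrative, and the residual anticipates the mkdv equation of Section 3 in the commonly used normalization $u_t + 6u^2u_x + u_{xxx} = 0$ (the actual coefficients should follow the equation as written in [26]).

```python
import torch

torch.manual_seed(0)

# Illustrative fully connected network u_theta(t, x) with tanh activations.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 30), torch.nn.Tanh(),
    torch.nn.Linear(30, 30), torch.nn.Tanh(),
    torch.nn.Linear(30, 1),
)

def u_net(t, x):
    return net(torch.stack([t, x], dim=-1)).squeeze(-1)

def grad(outputs, inputs):
    """First derivative of outputs with respect to inputs via automatic differentiation."""
    return torch.autograd.grad(outputs, inputs,
                               grad_outputs=torch.ones_like(outputs),
                               create_graph=True)[0]

def f_net(t, x):
    """PDE residual f := u_t + 6 u^2 u_x + u_xxx (assumed normalization), obtained purely by AD."""
    u = u_net(t, x)
    u_t = grad(u, t)
    u_x = grad(u, x)
    u_xx = grad(u_x, x)
    u_xxx = grad(u_xx, x)
    return u_t + 6.0 * u**2 * u_x + u_xxx

def pinn_loss(t_u, x_u, u_data, t_f, x_f):
    mse_u = torch.mean((u_net(t_u, x_u) - u_data) ** 2)   # data term on initial/boundary points
    mse_f = torch.mean(f_net(t_f, x_f) ** 2)              # physics term on collocation points
    return mse_u + mse_f

# Toy usage with a handful of random data and collocation points.
t_u = torch.rand(10, requires_grad=True); x_u = torch.rand(10, requires_grad=True)
u_data = torch.zeros(10)
t_f = torch.rand(50, requires_grad=True); x_f = torch.rand(50, requires_grad=True)
loss = pinn_loss(t_u, x_u, u_data, t_f, x_f)
```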

3. Example for Modified Korteweg-de Vries Equation

The modified Korteweg-de Vries (mkdv) equation may be written as [26]
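
In a commonly encountered normalization, given here only for orientation since the sign and scaling of the cubic nonlinear term depend on the convention adopted in [26], the mkdv equation takes the form
$$u_t \pm 6u^2u_x + u_{xxx} = 0,$$
which differs from the classical KdV equation in that the nonlinear term is cubic rather than quadratic in $u$.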

If , . We obtained the training and test data using conventional spectral methods; specifically, we used the Chebfun package [27] with a spectral Fourier discretization with 256 modes and a fourth-order explicit Runge-Kutta temporal integrator with time-step size .

The collocation points used for training are formed from two parts of data points: one part uses the Latin hypercube sampling strategy to generate 10000 data points, and the other part uses random sampling to generate 456 data points. Training points are randomly extracted from the initial and boundary data, and we learn the latent solution $u(t,x)$ by using the L-BFGS algorithm to optimize the parameters to minimize the error function, Equation (7). The predicted solution, shown in Figure 7, is obtained by an 11-layer deep neural network in which each hidden layer contains 30 neurons. The relative $L_2$-norm error for this case is . The code runs on a personal laptop with an Intel® Core™ i5, 2.50 GHz, and the running time is 930.9392 seconds. Judging from the physical propagation diagrams of the exact solution, a soliton solution obtained by the Chebfun package, and the predicted solution at the bottom of Figure 7, the waveform of the single soliton does not change over time. The exact dynamics and learned dynamics of $u(t,x)$ are shown in Figure 8. We fix the number of training points $N_u$ and the number of collocation points $N_f$; under this condition, we study the influence of different numbers of layers and neurons on the relative $L_2$-norm error. The relative $L_2$-norm error tends to decrease with increasing numbers of layers and neurons, as shown in Table 1. We also study the effect of different numbers of training points $N_u$ and collocation points $N_f$ on the relative $L_2$-norm error for a DNN architecture with 9 layers and 20 neurons per hidden layer, as shown in Table 2. The three-dimensional diagrams and projected images of the exact and predicted solutions of the mkdv equation with this initial state are shown in Figures 1 and 2.
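
As an illustration of the sampling strategy and error metric just described, the following sketch generates 10000 Latin hypercube collocation points with SciPy and computes the relative $L_2$-norm error between a predicted and a reference solution; the computational domain bounds are assumed for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Assumed computational domain (illustrative only).
t_min, t_max, x_min, x_max = 0.0, 1.0, -20.0, 20.0

# Latin hypercube sampling of 10000 collocation points (t, x).
sampler = qmc.LatinHypercube(d=2, seed=0)
pts = qmc.scale(sampler.random(n=10000), [t_min, x_min], [t_max, x_max])
t_f, x_f = pts[:, 0], pts[:, 1]

def relative_l2_error(u_pred, u_exact):
    """Relative L2-norm error between the predicted and reference solutions."""
    return np.linalg.norm(u_pred - u_exact, 2) / np.linalg.norm(u_exact, 2)
```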

To further study the performance of the algorithm in approximating the exact solutions of the mkdv equation, we change the initial condition as follows [28]:

We obtained the training and test data using the same spectral approach, namely, the Chebfun package with a spectral Fourier discretization with 256 modes and a fourth-order explicit Runge-Kutta temporal integrator with time-step size . The data points used for training are again divided into two parts: one part uses the Latin hypercube sampling strategy to generate 10000 data points, and the other part uses random sampling to generate 456 data points. Training points are randomly extracted from the initial and boundary data, and we learn the latent solution $u(t,x)$ by using the L-BFGS algorithm to optimize the parameters to minimize the error function, Equation (7). The predicted solution, shown in Figure 9, is obtained by an 11-layer deep neural network in which each hidden layer contains 15 neurons. The running time of the code is 439.9942 seconds. The relative $L_2$-norm error for this case is . Judging from the physical propagation diagrams of the exact solution, a soliton solution obtained by the Chebfun package, and the predicted solution in Figure 9, the waveform of the single soliton does not change over time. The exact dynamics and learned dynamics of $u(t,x)$ are shown in Figure 10. We fix the number of training points $N_u$ and the number of collocation points $N_f$; under this condition, we study the influence of different numbers of layers and neurons on the relative $L_2$-norm error. The relative $L_2$-norm error tends to decrease with increasing numbers of layers and neurons, as shown in Table 3. We also study the effect of different numbers of training points $N_u$ and collocation points $N_f$ on the relative $L_2$-norm error for a DNN architecture with 9 layers and 15 neurons per hidden layer, as shown in Table 4. The three-dimensional diagrams and projected images of the exact and predicted solutions of the mkdv equation with this initial state are shown in Figures 3 and 4.
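
Continuing the PyTorch sketch of Section 2.2, a minimal example of the L-BFGS optimization step is given below. The optimizer hyperparameters are illustrative and not the authors' exact configuration, which relied on the L-BFGS algorithm of [12].

```python
import torch

# Reuses `net` and `pinn_loss` (and the toy tensors t_u, x_u, u_data, t_f, x_f)
# from the loss-function sketch above.
optimizer = torch.optim.LBFGS(net.parameters(), lr=1.0, max_iter=5000,
                              tolerance_grad=1e-8, history_size=50)

def closure():
    optimizer.zero_grad()
    loss = pinn_loss(t_u, x_u, u_data, t_f, x_f)   # MSE_u + MSE_f
    loss.backward()
    return loss

# A single step() call lets L-BFGS iterate internally, repeatedly evaluating the closure.
optimizer.step(closure)
```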

4. Conclusions

With the increase in data volume, the improvement of computing power, and the emergence of new machine learning algorithms (deep learning), artificial intelligence has become a field with many practical applications and active research topics. Deep learning is one of the paths toward artificial intelligence: it is a type of machine learning, a technology that enables computer systems to improve from experience and data.

In this paper, we briefly describe the details of the DNN algorithm. Figures 5–10 show the basic structures of a simple neural network and a deep neural network, the schematic of the physics-informed neural network, and comparison diagrams of the exact and predicted dynamical systems of the mkdv equation. Tables 1 and 3 show the relative $L_2$-norm error between the predicted and exact solutions of $u(t,x)$ for different numbers of hidden layers and different numbers of neurons per layer. Tables 2 and 4 show the relative $L_2$-norm error between the predicted and exact solutions for different numbers of training points $N_u$ and collocation points $N_f$. Tables 1–4 illustrate that the relative $L_2$-norm error tends to decrease with increasing numbers of layers and neurons. This method demonstrates the strong mathematical and physical ability of deep learning to simulate the physical dynamic state represented by differential equations and also opens a way for us to understand more physical phenomena.

Data Availability

The data in the manuscript can be generated by MATLAB software. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (11571008, 11661060 and 12061054) and Natural Science Foundation of Inner Mongolia Autonomous Region of China (2018LH01013).