Abstract

In this study, an effective method based on a least-squares residual power series method (LSRPSM) is proposed to solve time-fractional differential equations. The least-squares residual power series method combines the residual power series method with the least-squares method. The fractional derivatives are taken in the Caputo sense. Firstly, an analytical solution is obtained using the classic residual power series method. Secondly, the concept of the fractional Wronskian is introduced and applied to verify the linear independence of the functions. Thirdly, a linear combination of the first few terms, containing unknown coefficients, is used as an approximate solution. Finally, the least-squares method is applied to determine the unknown coefficients. The least-squares residual power series method yields approximate solutions with fewer expansion terms than the classic residual power series method. The examples are illustrated with data and figures, and they show that the new method converges faster than the classic residual power series method.

1. Introduction

Fractional calculus has been a popular topic for the past few years. Leibniz and L'Hôpital were the first to raise the idea of fractional derivatives. Fractional differential equations are applied in many different fields, such as control science and engineering [1] and computer science and technology [2]. Many researchers have studied different theories of fractional differential equations, and there are many approximate analytical methods for solving them, such as the homotopy analysis transform method [3], the ()-expansion method [4], the homotopy perturbation method [5], and the variational iteration method [6].

In recent years, the fractional residual power series method has been proposed for solving fractional differential equations and has been used to find analytical solutions for several classes of them. Many scholars have devoted themselves to this field. In [7], the time-fractional foam drainage equation was solved by the residual power series method. Wang and Chen [8] showed that the residual power series method can be applied to the time-fractional Whitham–Broer–Kaup equations. The fractional variant of the -dimensional Biswas–Milovic equation [9] was solved by the residual power series method. Alquran et al. [10] solved the time-fractional Phi-4 equation by the residual power series method. A system of multipantograph delay differential equations was treated with the residual power series method by Komashynska et al. [11]. Approximate solutions of the time-fractional Sharma–Tasso–Olver equation [12] were obtained by the residual power series method. In [13], the approach was applied to find exact solutions of fractional-order time-dependent Schrödinger equations. The residual power series method has also been used for many other problems, such as the gas dynamic equation [14], the fractional initial Emden–Fowler equation [15], the generalized Burgers–Fisher equation [16], and the nonlinear time-space-fractional Benney–Lin equation [17].

Recently, some existing methods have been modified with the least-squares method so that the approximate solutions achieve higher accuracy. Kumar and Koundal [18] proposed an approach in which systems of nonlinear fractional partial differential equations are solved by the generalized least-squares homotopy perturbation method. Approximate analytical solutions of nonlinear differential equations were obtained by the least-squares homotopy perturbation method [19], and linear and nonlinear fractional partial differential equations were solved by the same method [20].

In this paper, the least-squares method is combined with the residual power series method, which is called the least-squares residual power series method (LSRPSM). Compared with the classic residual power series method, a more accurate approximate solution with fewer expansion terms can be obtained by the new method.

The rest of this paper is organized as follows. In Section 2, basic definitions concerning the Caputo sense and the fractional partial Wronskian are introduced. In Section 3, the least-squares residual power series method is presented together with the necessary definitions. Numerical results and discussions are presented with graphics and tables in Section 4. Finally, conclusions are drawn in Section 5.

2. Basic Definitions

In this section, the definition of the Caputo fractional derivative is introduced systematically. This section also presents the definition of the fractional partial Wronskian.

Definition 1 (see [21–23]). The Riemann–Liouville fractional integral operator of order $\alpha > 0$ is defined as
$$J^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\,\mathrm{d}\tau,\quad t>0,$$
where $J^{\alpha}f(t)$ is the convolution product of $t^{\alpha-1}/\Gamma(\alpha)$ and $f(t)$.
For the Riemann–Liouville fractional integral, we have
(1) $J^{\alpha}J^{\beta}f(t)=J^{\alpha+\beta}f(t)$, $\alpha,\beta\ge 0$;
(2) $J^{\alpha}(\lambda f(t)+\mu g(t))=\lambda J^{\alpha}f(t)+\mu J^{\alpha}g(t)$,
where λ and μ are real constants.

Definition 2 (see [24, 25]). Let $f$ be a function and $n$ be the smallest integer greater than or equal to α ($n-1<\alpha\le n$). The Caputo fractional derivative is defined by
$$D^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}(t-\tau)^{n-\alpha-1}f^{(n)}(\tau)\,\mathrm{d}\tau,\quad t>0.$$
For the Caputo derivative, we have
(1) $D^{\alpha}c=0$;
(2) $D^{\alpha}t^{p}=\dfrac{\Gamma(p+1)}{\Gamma(p-\alpha+1)}t^{p-\alpha}$, $p>n-1$;
(3) $D^{\alpha}(\lambda f(t)+\mu g(t))=\lambda D^{\alpha}f(t)+\mu D^{\alpha}g(t)$;
(4) $D^{\alpha}J^{\alpha}f(t)=f(t)$;
(5) $J^{\alpha}D^{\alpha}f(t)=f(t)-\sum_{k=0}^{n-1}f^{(k)}(0^{+})\dfrac{t^{k}}{k!}$,
where λ, μ, and c are real constants.
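Property (2), the Caputo power rule, is the workhorse of the residual power series computations in the rest of the paper. The following minimal Python sketch (an illustration, not part of the original paper) evaluates it numerically:

```python
from math import gamma, sqrt, pi

def caputo_power(p, alpha, t):
    """Caputo derivative of t**p of order alpha, by the power rule:
    D^alpha t^p = Gamma(p+1) / Gamma(p-alpha+1) * t^(p-alpha)."""
    return gamma(p + 1) / gamma(p - alpha + 1) * t ** (p - alpha)

# For alpha = 1 the rule reduces to the classical derivative p*t^(p-1):
print(caputo_power(2, 1.0, 3.0))  # 6.0, i.e. d/dt t^2 evaluated at t = 3
# Half-order derivative of t: D^(1/2) t = 2*sqrt(t/pi)
print(abs(caputo_power(1, 0.5, 1.0) - 2 / sqrt(pi)) < 1e-12)  # True
```

The classical first derivative is recovered as the special case α = 1, which is a quick sanity check on any implementation of the rule.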

Definition 3 (see [18]). Let $u_{1},u_{2},\ldots,u_{n}$ be $n$ functions of the variables $x$ and $t$ defined on a domain $D$; then, the fractional partial Wronskian of $u_{1},u_{2},\ldots,u_{n}$ is
$$W^{\alpha}(u_{1},\ldots,u_{n})=\begin{vmatrix} u_{1} & u_{2} & \cdots & u_{n}\\ D_{t}^{\alpha}u_{1} & D_{t}^{\alpha}u_{2} & \cdots & D_{t}^{\alpha}u_{n}\\ \vdots & \vdots & \ddots & \vdots\\ D_{t}^{(n-1)\alpha}u_{1} & D_{t}^{(n-1)\alpha}u_{2} & \cdots & D_{t}^{(n-1)\alpha}u_{n} \end{vmatrix},$$
where $D_{t}^{k\alpha}=D_{t}^{\alpha}D_{t}^{\alpha}\cdots D_{t}^{\alpha}$ ($k$ times) and $0<\alpha\le 1$, for $k=1,2,\ldots,n-1$.

Theorem 1 (see [18]). If the fractional partial Wronskian of n functions is nonzero at least at one point of the domain $D$, then the functions are said to be linearly independent.
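As a concrete check of Theorem 1, the hedged sketch below (a hypothetical Python illustration using the Caputo power rule, not code from the paper) evaluates a 2×2 fractional Wronskian for the pair {t^α, t^(2α)} at a single point; a nonzero determinant certifies linear independence:

```python
from math import gamma

def caputo_pow(p, alpha, t):
    # Caputo power rule; the Caputo derivative of a constant (p == 0) is zero.
    if p == 0:
        return 0.0
    return gamma(p + 1) / gamma(p - alpha + 1) * t ** (p - alpha)

def fractional_wronskian_2(alpha, t):
    """2x2 fractional Wronskian of {t^alpha, t^(2*alpha)} at a point t:
    row one holds the functions, row two their alpha-order Caputo derivatives."""
    exponents = (alpha, 2 * alpha)
    col0 = [t ** p for p in exponents]                    # the functions
    col1 = [caputo_pow(p, alpha, t) for p in exponents]   # D_t^alpha of each
    return col0[0] * col1[1] - col0[1] * col1[0]

# Nonzero at t = 1 for alpha = 0.5, so the pair is linearly independent there:
print(fractional_wronskian_2(0.5, 1.0) != 0.0)  # True
```

Since Theorem 1 only requires nonvanishing at one point, a single evaluation like this already suffices to establish independence.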

3. Direct Method of Least-Squares Residual Power Series Method (LSRPSM)

In this section, the general procedure of the least-squares residual power series method is proposed. Based on the classic residual power series method, the method of combining residual power series with the least-squares method is used for the time-fractional differential equations.

3.1. Classic Residual Power Series Method

We consider the following time-fractional differential equation:
$$D_{t}^{\alpha}u(x,t)=L[u(x,t)]+N[u(x,t)],\quad 0<\alpha\le 1,$$
with
$$u(x,0)=I(x),$$
where $L$ is a linear operator, $N$ is a nonlinear operator, $u(x,t)$ is an unknown function, and $I$ is an initial condition.

Following the classic residual power series method [26, 27], the algorithm can be proposed by

In order to obtain an approximate solution of (6), $u(x,t)$ is written as a fractional power series whose $i$th term is $f_{i}(x)t^{i\alpha}/\Gamma(1+i\alpha)$. Then, the truncated series is defined by
$$u_{k}(x,t)=\sum_{i=0}^{k}f_{i}(x)\frac{t^{i\alpha}}{\Gamma(1+i\alpha)}.$$

With the initial condition $f_{0}(x)=u(x,0)=I(x)$, we define the $i$th residual function as follows:
$$\mathrm{Res}_{i}(x,t)=D_{t}^{\alpha}u_{i}(x,t)-L[u_{i}(x,t)]-N[u_{i}(x,t)].$$

In order to obtain $f_{i}(x)$, we solve
$$D_{t}^{(i-1)\alpha}\mathrm{Res}_{i}(x,t)\big|_{t=0}=0,$$
where $D_{t}^{(i-1)\alpha}=D_{t}^{\alpha}D_{t}^{\alpha}\cdots D_{t}^{\alpha}$ ($(i-1)$ times).

Here, the classic residual power series method gives the -order approximate solutions, where

Theorem 2 (see [28]) (convergence theorem). Suppose that , for , where and can be differentiated with respect to t on . Then,whereand r is the radius of convergence.

Moreover, there exists a value ξ, where so that the error term has the form

According to Theorem 2, we can obtain, so
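To make the recursion of Section 3.1 concrete, the hedged sketch below (a hypothetical scalar example, not one of the paper's equations) applies the power series ansatz to the test problem D^α u = u with u(0) = 1; equating coefficients of t^(kα) in the residual yields the truncated Mittag-Leffler series, which reduces to the Taylor series of e^t when α = 1:

```python
from math import gamma, exp

def rpsm_series(alpha, t, n_terms=12):
    """Truncated RPSM solution of the test problem D^alpha u = u, u(0) = 1.
    Setting each sequential derivative of the residual to zero at t = 0 gives
    the coefficients c_k = 1 / Gamma(k*alpha + 1), i.e. the truncated
    Mittag-Leffler series E_alpha(t^alpha)."""
    return sum(t ** (k * alpha) / gamma(k * alpha + 1) for k in range(n_terms))

# For alpha = 1 the series is the Taylor expansion of exp(t):
print(abs(rpsm_series(1.0, 0.5) - exp(0.5)) < 1e-9)  # True
```

This integer-order limit is a standard consistency check: any correct RPSM implementation must reproduce the classical solution when α = 1.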

3.2. Least-Squares Residual Power Series Method

The procedure of the least-squares residual power series method is presented in this section, together with some definitions we require.

Let the remainder for differential equation (4) be, with the condition, where is the approximate solution of equation (4).

Remark 1. If , then converges to the solution of equation (4).

Definition 4. We say that is the ε-approximate residual power series solution of equation (4) on the domain if, and if (18) is also satisfied by .

Definition 5. We say that is the weak ε-approximate residual power series solution of equation (4) on the domain if, and if (18) is also satisfied by .
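Definitions 4 and 5 can be tested numerically. The hedged sketch below (a hypothetical residual on a hypothetical unit-square domain, purely illustrative) estimates the double integral of R² by a midpoint Riemann sum and checks the weak ε-condition:

```python
def residual_l2_squared(R, n=100):
    """Midpoint-rule estimate of the integral of R(x, t)**2 over [0,1]x[0,1],
    the quantity bounded by epsilon**2 in the weak epsilon-approximate
    condition of Definition 5."""
    h = 1.0 / n
    nodes = [(i + 0.5) * h for i in range(n)]
    return sum(R(x, t) ** 2 for x in nodes for t in nodes) * h * h

# Hypothetical residual R(x, t) = 1e-3 * x * t; its squared L2 norm is
# 1e-6 / 9, comfortably below epsilon**2 for epsilon = 1e-3:
I = residual_l2_squared(lambda x, t: 1e-3 * x * t)
print(I <= 1e-6)  # True
```

The pointwise bound of Definition 4 implies the integral bound of Definition 5 on a bounded domain, which is why the ε-approximate solutions are also weak ε-approximate solutions.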
We propose the following steps for the least-squares residual power series method.

Step 1. We use the classic residual power series method. The form of can be written as, and the ith residual function is as follows: Then, we look for the solution of by, where .
Here, the classic residual power series method gives the -order approximate solutions, where can be calculated by (11).

Step 2. The linear independence of the functions can be verified by, where , .
Let , where and the elements of are linearly independent in the vector space of continuous functions defined on R.

Remark 2. If we cannot find a point at which the value of is nonzero, the set is linearly dependent, so in this case we must use the classic residual power series method.

Step 3. Assume as the approximate solution of equation (4). Substituting this approximate solution into (17), we obtain

Step 4. We attach to the following functional: Here, we calculate the constants .
We compute the values of as those which minimize (29), and then recover the values of as functions of by using the initial conditions.
So, we can obtain the value of by solving (29): From (27)–(30), we can get

Theorem 3. The values of from (30) satisfy the property:

Proof. Based on the way that is computed, the following inequality holds: Also, from (31) we have Then, according to the convergence of the residual power series solution from (16), we can get The ε-approximate residual power series solutions are also weak solutions of equation (4).

4. Illustrative Examples

In this section, some examples are solved by the least-squares residual power series method (LSRPSM). Using the new method, we compute only the first few iterations by the fractional residual power series method; the remaining terms can be ignored. Then, the least-squares method is applied, and the unknown coefficients are obtained. The approximate solutions are calculated by Maple on Windows 7 (64 bit). We analyse the approximate solutions with tables and graphics.

Example 1. Consider the following time-fractional Fornberg–Whitham equation: with the initial condition and the exact solution when given by Using the classic residual power series method, we can obtain the following equations [29]: Then, When , . Hence, the functions are linearly independent.
So, we can obtain the approximate solution, which can be written as From (36), we can get the residual function with the initial condition Using the given condition (43) in (41), we get . So, (41) can be written as So, we can obtain by substituting (44) into (42). Then, the functional J will be We compute the functional J of (45) and obtain two algebraic equations, and then we determine the unknown coefficients of (46) when : In Figure 1, the exact solutions and the approximate solutions obtained by the least-squares residual power series method are presented as three-dimensional graphics. Figure 1(a) presents the exact solution, and Figure 1(b) presents the approximate solution when . There is little difference between Figures 1(a) and 1(b), so the approximate solution is accurate when the value of α approaches 1.
We present the absolute errors between the exact solutions and the approximate solutions obtained by the new method. The absolute error can be written as As shown in Table 1, we present the absolute errors for different values of x and t when . For each item in the table, the superscript is the name of the method, and the subscript is the number of terms expanded under that method. The least-squares residual power series method (LSRPSM) with when is compared with the classic residual power series method (RPSM) with when , as shown in Table 1.
From Table 1, we can see that the absolute errors between the approximate solutions obtained by the least-squares residual power series method and the exact solutions are within an acceptable range for different x and t. The magnitudes of the absolute errors range from to . The smaller the value of t with fixed x, the larger the absolute errors; meanwhile, the smaller the value of x with fixed t, the smaller the absolute errors. Comparing the classic method with our new method, we find that the new method has higher accuracy.
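The absolute errors reported in the tables are computed pointwise as |u_exact − u_approx|. A small generic helper (hypothetical Python with placeholder solution functions, not the paper's actual solutions) that tabulates this quantity on a grid:

```python
def abs_error_grid(u_exact, u_approx, xs, ts):
    """Pointwise absolute error |u_exact - u_approx| on a grid of (x, t)."""
    return [[abs(u_exact(x, t) - u_approx(x, t)) for t in ts] for x in xs]

# Placeholder functions standing in for the exact and LSRPSM solutions:
exact = lambda x, t: x * t
approx = lambda x, t: x * t + 1e-6 * t     # hypothetical approximation
errors = abs_error_grid(exact, approx, [0.25, 0.5], [0.2, 0.4, 0.6])
print(max(max(row) for row in errors))  # largest error on the grid
```

Reporting the maximum over the grid, as well as the entries themselves, makes the comparison between methods in the tables reproducible.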
Using the same method, the linearly independent functions can be verified when . Then, we can obtain the unknown coefficients , respectively, by using the least-squares method.
Figure 2 shows the influence of different α on the approximate solutions when . Figure 2(a) presents approximate solutions for , Figure 2(b) presents approximate solutions when , and Figures 2(c) and 2(d) show the approximate solutions when and .
From Figure 2, the larger the value of α, the smoother the surface. As the parameter α increases, the graphics get closer and closer to the graph of the exact solution.
For any , the exact value of is 0. The distinction between the approximate solutions and the exact solutions can be shown by the values of . So, we can use the values of to represent the deviation between the approximate solution and the exact solution.
Using the same method, the linearly independent functions can be verified when . Then, the unknown coefficients can be solved, respectively, by using the least-squares method. We compare the results of the least-squares residual power series method (LSRPSM), the residual power series method (RPSM), and the variational iteration method (VIM) [30]. The approximate solutions of VIM can be written as Table 2 presents the approximate solutions and the values of when .

Example 2. Consider the following time-fractional BBM-Burger equation: with the initial condition and the exact solution when given by Using the classic residual power series method, we can obtain the following equations [31]: Then, When , . Hence, the functions are linearly independent.
So, we can obtain the approximate solution, which can be written as From (50), we can get the residual function with the initial condition Using the given condition (57) in (55), we get . So, (55) can be written as So, we can obtain by substituting (58) into (56). Then, the functional J will be We compute the functional J of (59) and obtain two algebraic equations, and then we determine the unknown coefficients of (60) when : In Figure 3, we present the exact solutions and the approximate solutions as three-dimensional graphics. Figure 3(a) presents the exact solution, and Figure 3(b) presents the approximate solution when . From Figure 3, we can see that Figure 3(b) is similar to Figure 3(a), so the approximate solution is accurate when the value of α approaches 1.
As shown in Table 3, we present the absolute errors in (48) for different values of x and t when . The least-squares residual power series method (LSRPSM) and the q-homotopy analysis method (q-HAM) [32] with when are compared with the classic residual power series method with when , as shown in Table 3. The approximate solutions of the q-HAM can be written as, where, and the values of the parameters are .
From Table 3, we can see that the magnitudes of the absolute errors range from to . The absolute errors of the new method for different values of x and t are within an acceptable range. For a fixed value of t, the absolute error is smallest when , and the absolute error increases as the absolute value of x increases. Compared with the classic residual power series method and the q-homotopy analysis method, the least-squares residual power series method is more accurate.
Using the same method, the linearly independent functions can be verified when . Then, we can obtain the unknown coefficients , respectively, by using the least-squares method.
Figure 4 shows the influence of different α on the approximate solutions when . Figures 4(a)–4(c) present approximate solutions for , , and , respectively.
From Figure 4, we can conclude that the larger the value of α, the smoother the surface and the closer it is to the graph of the exact solution.
Using the same method, the linearly independent functions can be verified when . The unknown coefficients can be obtained by using the least-squares method. Table 4 shows the approximate solutions and the values of when .

Example 3. Consider the following fractional biological population equation: with the initial condition and the exact solution when given by Using the classic residual power series method, we can obtain the following equations [33]: Then, When , . Hence, the functions are linearly independent.
So, we can obtain the approximate solution, which can be written as From (64), we can obtain the residual function with the initial condition Using the given condition (71) in (69), we get . So, (69) can be written as So, we can obtain by substituting (72) into (70). Then, the functional J will be We compute the functional J of (73) and obtain two algebraic equations, and then we determine the unknown coefficients of (74) when : In Figure 5, we present the exact solutions and the approximate solutions as three-dimensional graphics. Figure 5(a) presents the exact solution, and Figure 5(b) presents the approximate solution; the condition of Figure 5 is . From Figure 5, we can conclude that the three-dimensional graphics of the approximate solutions are similar to those of the exact solutions.
As shown in Table 5, we present the absolute errors in (49) for different values of x and t when . The least-squares residual power series method (LSRPSM) with when is compared with the classic residual power series method and the homotopy perturbation method (HPM) [34] with when , as shown in Table 5. The approximate solutions of HPM for can be written as From Table 5, we can conclude that the absolute errors for the different values of x and y are within an acceptable range. Given the same number of terms, we compare the residual power series method and the homotopy perturbation method with the least-squares residual power series method, and the new method is more accurate.
Using the least-squares residual power series method, the linearly independent functions can be verified when . Then, we can obtain the unknown coefficients , respectively, by using the least-squares method.
Figure 6 shows the influence of different α on the approximate solutions when . Figure 6(a) presents approximate solutions for , Figure 6(b) presents approximate solutions for , Figure 6(c) presents approximate solutions for , and Figure 6(d) presents approximate solutions for .
From Figure 6, the larger the value of α, the smoother the surface and the closer it is to the graph of the exact solution.
Using the least-squares residual power series method, the linearly independent functions can be verified when . The unknown coefficients can be obtained by using the least-squares method. Table 6 shows the approximate solutions and the values of when .

5. Conclusion

In this paper, we discuss the approximate solutions obtained by the least-squares residual power series method. This method is an improvement on the classic residual power series method: we combine the least-squares method with the residual power series method and obtain more accurate approximate solutions with fewer expansion terms. The approximate solutions are presented with data and graphics, and the results show that they have small errors. In summary, this new technique is effective and accurate for finding approximate solutions of time-fractional differential equations.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (grant nos. 11701446, 11601420, and 11401469), the Natural Science Foundation of Shaanxi Province (2018JM1055), the New Star Team of Xi'an University of Posts and Telecommunications, and the Construction of Special Funds for Key Disciplines in Shaanxi Universities.