Research Article  Open Access
Oluwaseun Olumide Okundalaye, Wan Ainun Mior Othman, Nallasamy Kumaresan, "Optimal Homotopy Asymptotic Method-Least Square for Solving Nonlinear Fractional-Order Gradient-Based Dynamic System from an Optimization Problem", Advances in Mathematical Physics, vol. 2020, Article ID 8049397, 15 pages, 2020. https://doi.org/10.1155/2020/8049397
Optimal Homotopy Asymptotic Method-Least Square for Solving Nonlinear Fractional-Order Gradient-Based Dynamic System from an Optimization Problem
Abstract
In this paper, we consider an approximate analytical method, the optimal homotopy asymptotic method-least square (OHAM-LS), to obtain a solution of the nonlinear fractional-order gradient-based dynamic system (FOGBDS) generated from nonlinear programming (NLP) optimization problems. The problem is formulated as a class of nonlinear fractional differential equations (FDEs), and the solutions of the equations, modelled with a conformable fractional derivative (CFD) of the steepest descent approach, are considered to find the minimizing point of the problem. The formulation extends the integer-order solution of optimization problems to an arbitrary-order solution. We exhibit that OHAM-LS enables us to determine the convergence domain of the series solution obtained by initiating the convergence-control parameters. Three illustrative examples are included to show the effectiveness and importance of the proposed techniques.
1. Introduction
Consider a nonlinear programming constrained optimization problem (NLPCOP) of the form

$$\min_{x\in\mathbb{R}^n} f(x) \quad \text{subject to}\quad h_i(x)=0,\ i=1,\dots,m,\qquad g_j(x)\le 0,\ j=1,\dots,l, \tag{1}$$

where $f$, $h_i$, and $g_j$ are continuously differentiable functions. Let $S$ be the feasible set of Equation (1), and we assume that $S$ is not empty. The general idea of obtaining an approximate analytical solution to Equation (1) is to transform it into an unconstrained nonlinear programming problem by any suitable technique, such as the augmented Lagrange method, barrier method, or penalty method [1, 2]; it can then be solved by any unconstrained optimization numerical method, such as the steepest descent method, conjugate gradient method, or Newton method. In optimization, the penalty method is the most efficient way to transform a constrained optimization problem into an unconstrained one [3–5]. An efficient penalty function for the equality and inequality problem in Equation (1) is given below:

$$P(x,\mu)=f(x)+\mu\, p(x), \tag{2}$$

$$p(x)=\sum_{i=1}^{m} h_i(x)^2+\sum_{j=1}^{l}\bigl[\max\{0,\,g_j(x)\}\bigr]^2, \tag{3}$$

where $\mu>0$. It can be seen that, under some conditions, the solutions of Equation (1) are solutions of the unconstrained problem below [6]:

$$\min_{x\in\mathbb{R}^n} P(x,\mu), \tag{4}$$

where $\mu$ is an auxiliary penalty variable. The corollary connecting the minimizer of the constrained problem in Equation (1) and the unconstrained problem in Equation (4) is given in [7]. The gradient descent method, as a standard optimization algorithm, has been widely applied in many engineering applications, such as machine learning and image processing [8–10]. Through diverse research and studies, it is established that the gradient method is one of the most reliable and efficient ways to find the optimal solution of optimization problems [11]. Nowadays, one of the critical points of the gradient method is how to improve its performance further. As an important area of mathematics, fractional calculus is believed to be an excellent tool to enhance the classical gradient descent method, mainly because of its special long-memory characteristics and nonlocality [12–14].
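The quadratic penalty construction above can be sketched numerically. The sketch below is illustrative only: the toy objective, the constraint, the penalty weight $\mu$, and the descent step size are invented for the example and are not taken from the paper.

```python
import numpy as np

def quadratic_penalty(f, h_list, g_list, mu):
    """Build P(x, mu) = f(x) + mu * (sum h_i(x)^2 + sum max(0, g_j(x))^2)."""
    def P(x):
        eq = sum(h(x) ** 2 for h in h_list)
        ineq = sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + mu * (eq + ineq)
    return P

# Toy problem: min x1^2 + x2^2  subject to  x1 + x2 - 1 = 0
# (constrained minimizer (0.5, 0.5); the penalized minimizer approaches it as mu grows).
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0
mu = 100.0
P = quadratic_penalty(f, [h], [], mu)

# Crude steepest descent on the penalized objective P(x, mu).
x = np.array([0.0, 0.0])
for _ in range(2000):
    c = h(x)
    grad = np.array([2.0 * x[0] + 2.0 * mu * c, 2.0 * x[1] + 2.0 * mu * c])
    x = x - 1e-3 * grad
```

For this penalty weight, the penalized minimizer is $x_1=x_2=\mu/(1+2\mu)\approx 0.4975$, close to the constrained minimizer $(0.5, 0.5)$.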
In the past decade, several methods have been considered for solving unconstrained nonlinear optimization in the form of an ordinary differential equation (ODE) dynamic system, of which the gradient-based method is one of the approaches. The technique transforms the nonlinear optimization problem into an ODE dynamic system with some optimality conditions, to obtain optimal solutions to the optimization problem. The gradient-based method was first proposed by [15], was developed by [16, 17], and was later extended to solve differential nonlinear programming problems [18]. However, the studies of nonlinear fractional-order gradient-based dynamic systems are still at an infant stage and are considered further in this paper.
Arbitrary-order ODEs, which are generalizations of integer-order ODEs, are mostly used to model problems in the applied sciences. Several numerical methods have been used to solve linear and nonlinear problems of FDEs, such as the Adomian decomposition method (ADM) [19], the variational iteration method (VIM) [20], the homotopy perturbation method for solving the fractional Zakharov-Kuznetsov equation [21], a numerical method for FDEs [22], and multivariate Padé approximation (MPA) [23]. The usefulness of arbitrary-order derivatives started receiving tremendous attention from researchers in the fields of applied science and engineering in the last two decades, where some authors in the area of optimization focused on developing approximate analytical methods for different types of nonlinear constrained optimization problems in the form of IVPs of nonlinear FDE systems, including multistage ADM for NLP [24], a fractional dynamics trajectory approach [25], the convergence of HAM and its application [26], the fractional steepest descent approach [27], the optimal solution of a fractional gradient system [28], the gradient descent direction in the Caputo derivative sense for BP neural networks [29], fractional-order gradient methods [30], and a conformable fractional gradient-based system [31]. In 2008, Marinca and Herisanu [32] introduced a method called OHAM to solve nonlinear problems, later extended by Azimi et al. [33] for strongly nonlinear differential equations (NLDEs). This powerful tool called OHAM has not been applied in the area of FOGBDS, which motivates this work.
So, in this paper, we show that the steady-state solutions of the proposed system can be approximated analytically to the expected exact optimal solution of the nonlinear programming constrained optimization problem by OHAM-LS as $t \to \infty$. The significant contributions are summarized as follows: (1) the reason why OHAM-LS is preferable to the methods used in [25, 31] to solve FOGBDS is given; (2) the reason why some existing approximate analytical methods cannot guarantee the convergence of the series solution is discussed; (3) with previous approximate analytical methods of solving FOGBDS, accurate optimal values of the convergence-control parameters had been difficult to achieve, which is easily addressed with least-squares optimization techniques; (4) OHAM-LS, with guaranteed convergence ability, is proposed in the conformable fractional derivative sense to solve FOGBDS, and the faster convergence of the proposed method compared with the fourth-order Runge-Kutta method is also shown.
We arrange the paper as follows: a brief introduction to fractional calculus and the OHAM-LS derivation is given in Section 2. Section 3 is devoted to the problem formulation of OHAM-LS with FOGBDS and the key contributions. In Section 4, we solve some NLP constrained optimization problems to show the effectiveness of the proposed method. The results obtained from OHAM-LS are plotted in several figures, with numerical-method comparisons to confirm the validity and ability of the method to solve the problem. The last section contains the conclusions.
2. Preliminaries
2.1. Fractional Calculus
The most common arbitrary-order derivatives in the literature are the Riemann-Liouville and the Caputo fractional derivatives. The arbitrary-order definitions are generally used for mathematical modelling in many areas, especially when the classical-order derivative operator fails or an additional memory effect is required. However, the limitation of these two definitions is that they do not provide some of the features that the classical derivative provides, such as the chain rule, quotient rule, product rule, and derivative of a constant. Recently, Khalil et al. [34] characterized a new fractional derivative operator, the conformable fractional derivative, which extends the usual limit definition of the derivative, to overcome these deficiencies. Besides these advantages, the conformable fractional derivative does not show the memory effect, which is inherent in the other classical fractional derivatives.
Definition 1. Let $f:[0,\infty)\to\mathbb{R}$ be a given function. The $\alpha$-order CFD of $f$ is given by

$$T_\alpha(f)(t) = \lim_{\varepsilon\to 0} \frac{f\!\left(t + \varepsilon t^{1-\alpha}\right) - f(t)}{\varepsilon}, \qquad t > 0,\ \alpha \in (0, 1],$$

and

$$T_\alpha(f)(0) = \lim_{t\to 0^{+}} T_\alpha(f)(t).$$
This new definition preserves many properties of the classical derivative; refer to [34, 35]. Some features that we will adopt are as follows:
Theorem 2. Let $f$ and $g$ be $\alpha$-differentiable at a point $t > 0$; if $f$ is a differentiable function, then $T_\alpha(f\circ g)(t) = f'(g(t))\, T_\alpha(g)(t)$.
Definition 3. $I_\alpha^a(f)(t) = \int_a^t \frac{f(x)}{x^{1-\alpha}}\, dx$, where the integral is the regular Riemann improper integral, and $\alpha \in (0, 1]$.
Theorem 4. Let $f$ be any continuous function in the domain of $I_\alpha$; then, $T_\alpha\!\left(I_\alpha^a(f)\right)(t) = f(t)$ for $t \geq a$.
Theorem 5. Let $f:(a,b)\to\mathbb{R}$ be differentiable and $0 < \alpha \leq 1$. Then, for all $t > a$, we have $I_\alpha^a\!\left(T_\alpha(f)\right)(t) = f(t) - f(a)$.
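The conformable derivative defined above can be checked numerically against the known power rule $T_\alpha(t^p) = p\,t^{p-\alpha}$; the sample point, fractional order, and tolerance below are arbitrary choices for the sketch.

```python
def conformable_diff(f, t, alpha, eps=1e-8):
    """Approximate T_alpha(f)(t) = lim_{e->0} (f(t + e*t^(1-alpha)) - f(t)) / e."""
    return (f(t + eps * t ** (1.0 - alpha)) - f(t)) / eps

# Check the power rule T_alpha(t^p) = p * t^(p - alpha) at one sample point.
alpha, p, t0 = 0.5, 3.0, 2.0
numeric = conformable_diff(lambda s: s ** p, t0, alpha)
exact = p * t0 ** (p - alpha)
```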
2.2. The Elementary Concepts of OHAM-LS
We start from the fundamental principle of OHAM as described in [36–38]. Consider the IVP

$$L(u(t)) + g(t) + N(u(t)) = 0, \qquad B\!\left(u, \frac{du}{dt}\right) = 0, \tag{6}$$

where $L$ is a linear operator, $N$ is a nonlinear operator, $t$ is an independent variable, $u(t)$ is an unknown function, $\Omega$ is the problem domain, and $g(t)$ is a known function. According to OHAM, one can construct a homotopy map $\phi(t, q):\Omega\times[0,1]\to\mathbb{R}$ which satisfies

$$(1-q)\left[L(\phi(t,q)) + g(t)\right] = H(q)\left[L(\phi(t,q)) + g(t) + N(\phi(t,q))\right], \tag{8}$$

where $q\in[0,1]$ is an embedding parameter, $H(q)$ is a nonzero auxiliary function for $q\neq 0$, $H(0)=0$, and $\phi(t,q)$ is an unknown function. Obviously, when $q=0$ and $q=1$, it holds that $\phi(t,0)=u_0(t)$ and $\phi(t,1)=u(t)$, respectively. Thus, as $q$ varies from $0$ to $1$, the solution approaches from $u_0(t)$ to $u(t)$, where $u_0(t)$ is the initial guess that satisfies the linear operator, obtained from Equation (8) for $q=0$ as

$$L(u_0(t)) + g(t) = 0, \qquad B\!\left(u_0, \frac{du_0}{dt}\right) = 0. \tag{9}$$
$H(q)$ is chosen in the form

$$H(q) = qC_1 + q^2C_2 + q^3C_3 + \cdots, \tag{10}$$

where the convergence-control parameters $C_1, C_2, \dots$ will be determined in the last part of this work. We consider the solution of Equation (8) in the series form

$$\phi(t,q) = u_0(t) + \sum_{k\geq 1} u_k(t; C_1,\dots,C_k)\, q^k. \tag{11}$$
Now, substituting Equation (11) in Equation (8) and equating the coefficients of like powers of $q$, we obtain the governing equation of $u_0(t)$ in a linear form, given in Equation (9). The first- and second-order problems are given by

$$L(u_1(t)) = C_1 N_0(u_0(t)), \qquad B\!\left(u_1, \frac{du_1}{dt}\right) = 0, \tag{12}$$

$$L(u_2(t) - u_1(t)) = C_2 N_0(u_0(t)) + C_1\left[L(u_1(t)) + N_1(u_0(t), u_1(t))\right], \qquad B\!\left(u_2, \frac{du_2}{dt}\right) = 0, \tag{13}$$

and the general governing equations for $u_k(t)$, $k = 2, 3, \dots$, are given by

$$L(u_k(t) - u_{k-1}(t)) = C_k N_0(u_0(t)) + \sum_{i=1}^{k-1} C_i \left[L(u_{k-i}(t)) + N_{k-i}(u_0(t), \dots, u_{k-i}(t))\right], \qquad B\!\left(u_k, \frac{du_k}{dt}\right) = 0, \tag{14}$$

where $N_m(u_0, \dots, u_m)$ is the coefficient of $q^m$, obtained by expanding $N(\phi(t,q))$ in a series with respect to the embedding parameter $q$,

$$N(\phi(t,q)) = N_0(u_0(t)) + \sum_{m\geq 1} N_m(u_0(t), \dots, u_m(t))\, q^m, \tag{15}$$

where $\phi(t,q)$ is obtained from Equation (11). It should be noted that $u_k$ for $k \geq 0$ is governed by the linear Equations (9), (12), and (14) with linear initial conditions that come from the original problem, which can be easily solved.
It has been shown that the convergence of the series in Equation (16) depends upon the convergence-control parameters $C_1, C_2, \dots$. If it is convergent at $q = 1$, we have

$$u(t) = u_0(t) + \sum_{k \geq 1} u_k(t; C_1, \dots, C_k).$$
The result of the $m$th-order approximation is given as

$$\tilde{u}(t; C_1, \dots, C_m) = u_0(t) + \sum_{k=1}^{m} u_k(t; C_1, \dots, C_k). \tag{18}$$
Substituting Equation (18) in Equation (6), we get the following expression for the residual:

$$R(t; C_1, \dots, C_m) = L(\tilde{u}(t)) + g(t) + N(\tilde{u}(t)). \tag{19}$$
If $R = 0$, then $\tilde{u}(t)$ is the exact solution. Usually, such a case does not arise for nonlinear problems. Several methods [39, 40] can be used to find the optimal values of the convergence-control parameters $C_i$, such as the least-squares method, the collocation method, the Ritz method, and Galerkin's method. By applying the least-squares method, we minimize the functional

$$J(C_1, \dots, C_m) = \int_a^b R^2(t; C_1, \dots, C_m)\, dt, \tag{20}$$

so that the optimal $C_i$ satisfy

$$\frac{\partial J}{\partial C_1} = \cdots = \frac{\partial J}{\partial C_m} = 0, \tag{21}$$

where the values $a$ and $b$ depend on the given problem.
With these known $C_i$, the approximate solution (of order $m$) is well determined.
The correctness of the method is measured by (1) the error norm $E_2 = \bigl(\sum_j |u(t_j) - \tilde{u}(t_j)|^2\bigr)^{1/2}$ and (2) the error norm $E_\infty = \max_j |u(t_j) - \tilde{u}(t_j)|$.
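The least-squares determination of a convergence-control parameter can be illustrated on a toy problem. The one-term ansatz, the interval $[0, b]$, and the parameter grid below are invented for this sketch (a direct search stands in for solving $\partial J/\partial C = 0$ exactly):

```python
import numpy as np

# Toy ODE u'(t) + u(t) = 0, u(0) = 1, with the one-parameter ansatz u(t) = 1 + C*t.
# The residual is R(t; C) = u'(t) + u(t) = C + 1 + C*t, and we minimize the
# functional J(C) = integral_0^b R(t; C)^2 dt over the parameter C.
b = 0.5
t = np.linspace(0.0, b, 2001)
step = t[1] - t[0]

def J(C):
    R = C + 1.0 + C * t
    # trapezoidal rule for the residual functional
    return step * (np.sum(R ** 2) - 0.5 * R[0] ** 2 - 0.5 * R[-1] ** 2)

# Direct search over a parameter grid.
grid = np.linspace(-2.0, 0.0, 4001)
C_opt = grid[np.argmin([J(C) for C in grid])]
```

For this toy functional the stationarity condition gives $C^\ast = -\int_0^b (1+t)\,dt \big/ \int_0^b (1+t)^2\,dt \approx -0.789$, which the grid search recovers.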
The OHAM-LS is based on the hybridization of OHAM with the least-squares optimization technique. The OHAM enables us to determine the convergence domain of the series solution, and the least-squares method allows us to obtain the optimal values of the convergence-control parameters $C_i$.
Remark 6. OHAM-LS is preferable because VIM, HPM, and HAM are special cases of it, as proved in [41–44].
Remark 7. The existing approximate analytical methods for FOGBDS cannot guarantee convergence, mainly because they possess no criteria for establishing the convergence of the series solution, Equations (20) and (21).
3. Construction of OHAM-LS with FOGBDS Generated by NLPCOPs
We begin by considering an NLP constrained problem of the form

$$\min f(x) \quad \text{subject to}\quad h_i(x) = 0, \qquad g_j(x) \leq 0, \tag{24}$$

where $f$ is the objective function, $h_i$ are equality constraint functions, $g_j$ are inequality constraint functions, and all are continuously differentiable. One of the main ideas of solving an unconstrained NLP is to search for the next point by choosing a proper search direction and step size, as in the Newton direction [45], the trust-region algorithm for unconstrained optimization [46], the descent method [47], the conjugate gradient method [48], the three-term conjugate gradient method [49], the subspace method for nonlinear optimization [50], the hybrid method for convex NLP [51], the CCM for optimization problems and applications [52], and the descent-direction stochastic approximation for optimization problems [53]. Other approaches have also been studied. In this paper, we obtain the minimizing point of the problem by solving a certain initial-value system of FDEs. This kind of FOGBDS was first proposed by Evirgen and Özdemir [24].
Using the penalty functions, Equations (2) and (3), for Equation (24) with $\mu > 0$, the conformable FOGBDS model can be constructed as

$$\frac{d^{\alpha}x(t)}{dt^{\alpha}} = -\nabla_x P(x(t), \mu), \qquad \alpha \in (0, 1], \tag{25}$$

subject to the initial conditions

$$x(t_0) = x_0, \tag{26}$$

where $\nabla_x P$ is the gradient vector of Equation (25) with respect to $x$ and $d^{\alpha}x(t)/dt^{\alpha}$ is the CFD of $x(t)$.
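Because $T_\alpha(x)(t) = t^{1-\alpha} x'(t)$ for differentiable $x$, a system of this form can also be integrated as the ordinary ODE $x'(t) = -t^{\alpha - 1}\nabla_x P(x)$. The sketch below does this with forward Euler for a toy one-dimensional objective; all numerical choices (objective, order, step size, horizon) are illustrative and not from the paper.

```python
# Conformable gradient flow for the toy objective P(x) = (x - 2)^2.
# With T_alpha(x)(t) = t^(1-alpha) * x'(t), the system d^alpha x / dt^alpha = -P'(x)
# becomes the ordinary ODE x'(t) = -t^(alpha - 1) * P'(x), solved by forward Euler.
alpha = 0.8
grad_P = lambda x: 2.0 * (x - 2.0)

t, x, dt = 0.01, 0.0, 1e-3   # start at t > 0 to avoid the t^(alpha - 1) singularity
while t < 10.0:
    x -= dt * t ** (alpha - 1.0) * grad_P(x)
    t += dt
# x approaches the minimizer x* = 2, the equilibrium point of the system
```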
Note that a point $x^{\ast}$ is called an equilibrium point of Equation (25) if it makes the right-hand side of Equation (25) vanish. We reformulate the fractional dynamic system, Equation (25), as
We used OHAM-LS to obtain the solution of the system in Equation (27) by constructing the following homotopy, where $q \in [0, 1]$ and $H(q) \neq 0$ for $q \neq 0$. If $q = 0$, Equation (28) becomes a linear system, and when $q = 1$, the homotopy Equation (28) becomes the original system, Equation (30), subject to the initial conditions,
The correction functional for the system of conformable fractional nonlinear differential equations, Equation (30), according to OHAM-LS, can be constructed as
Thus, as $q$ varies from $0$ to $1$, the solution approaches from $x_0(t)$ to $x(t)$, where $x_0(t)$ is the initial guess that satisfies the linear operator, obtained from Equation (32) for $q = 0$ as
$H(q)$ is chosen in the form $H(q) = qC_1 + q^2C_2 + \cdots$, where the $C_i$ can be determined later. We get an approximate solution by expanding $\phi(t, q)$ in a Taylor series with respect to $q$; we have
Now, using Equation (35) in Equation (32) and equating the coefficients of like powers of $q$, we obtain the governing equation of $x_0(t)$ in a linear form, given in Equation (33). The first- and second-order problems are given by, and the general governing equations for $x_k(t)$ are given by, where $N_m$ is the coefficient of $q^m$, obtained by expanding
in a series with respect to $q$.
It has been shown that the convergence of the series in Equation (38) depends upon the convergence-control parameters $C_i$. If it is convergent at $q = 1$, one has
The solution of Equation (30) is determined approximately in the form,
Substituting Equation (40) in Equation (30), we get the following expression for the residual error
If $R = 0$, then $\tilde{x}(t)$ is the exact solution. Usually, such a case does not arise for nonlinear problems. Using the least-squares method as below minimizes the functional, where the values of $a$ and $b$ depend on the given problem.
With these known $C_i$, the analytical approximate solution (of order $m$) is well determined.
The steps for the optimal homotopy asymptotic method-least square (OHAM-LS) are as follows:
Step 1. We transform the nonlinear constrained optimization problem to the unconstrained optimization problem by a penalty method.
Step 2. We find the gradient of the unconstrained optimization problem, with given initial conditions.
Step 3. We choose the linear and nonlinear operators for OHAM-LS.
Step 4. We construct homotopy for the conformable fractional nonlinear differential equation which includes embedding parameter, auxiliary function, and the unknown function.
Step 5. We substitute the series solution into the governing equation and equate the residual to zero for an exact solution. Usually, such a case does not arise in nonlinear problems.
Step 6. We find the optimal values for the $C_i$ by using the least-squares optimization method, to obtain a good analytical approximate solution.
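Steps 1 and 2 can be illustrated end-to-end on a small constrained problem. Here, the resulting conformable gradient system is integrated by a plain numerical scheme (not by OHAM-LS) only to show the steady state approaching the constrained minimizer; all problem data are invented for the sketch.

```python
import numpy as np

# Step 1: penalize  min x1^2 + x2^2  s.t.  x1 + x2 = 1  into
#   P(x, mu) = x1^2 + x2^2 + mu * (x1 + x2 - 1)^2.
mu, alpha = 100.0, 0.9

def grad_P(x):
    c = x[0] + x[1] - 1.0
    return np.array([2.0 * x[0] + 2.0 * mu * c, 2.0 * x[1] + 2.0 * mu * c])

# Step 2: integrate the conformable gradient system, rewritten as the
# ordinary ODE x'(t) = -t^(alpha - 1) * grad P(x), by forward Euler.
t, dt = 0.01, 1e-4
x = np.array([0.0, 0.0])
while t < 5.0:
    x = x - dt * t ** (alpha - 1.0) * grad_P(x)
    t += dt
# The steady state approaches the constrained minimizer (0.5, 0.5) as mu grows.
```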
3.1. Convergence Analysis of OHAM-LS with FOGBDS
Theorem 8. As long as the series converges, where $x_k(t)$ is governed by Equation (40) under the definitions in Equations (37) and (38), it must be the solution of Equations (25) and (26).
Proof. If we assume that the series, with terms $x_k(t)$, converges to $x(t)$, then, from Equation (37), we can write the following. So, using the above gives the next relation. From Equation (48), we have the result.
4. Numerical Examples and Results
In this section, three examples are presented to illustrate the efficiency of the new method for solving NLPCOPs. The calculations are performed using Maple 2018 software on an HP ENVY 13 laptop (Intel Core i7, 8th Gen, 16 GB RAM).
Example 1. Consider the NLPCOP test problem from Schittkowski [54] (No. 216), whose exact solution is not known, but whose expected optimal solution is . First, we transform the constrained problem into an unconstrained problem by the quadratic penalty function for ; then, we have, where , and , so that the nonlinear FOGBDS can be given as, where . By using OHAM-LS with auxiliary penalty variable , the terms of the OHAM-LS solutions for fractional order are acquired by using the concept of homotopy. According to Equation (6), we choose the linear and nonlinear operators in the following forms:
We can construct the following homotopy where
Substituting Equations (56)–(58) into Equations (54) and (55) and equating the coefficients of the same powers of $q$ results in the following set of linear FDEs.
Applying the operator to both sides of Equations (59)–(64) with the initial conditions given in Equation (5.6), we obtain
Adding up the solution components in Equations (65)–(70), the second-order approximate solutions obtained by OHAM-LS at , for , are
For the calculation of and in and given in Equations (71) and (72), we apply the procedure mentioned in Equations (19)–(21); we obtain, for , and, for ,
Substituting these optimal values into Equations (71) and (72) gives
Table 1 shows the results at different values of for Example 1. Table 2 shows the comparisons and the absolute errors between OHAM-LS and RK4 at different values of . Figure 1 shows the analytical approximate solutions obtained by OHAM-LS for and, together with RK4, at .
Example 2. Consider the NLPCOP test problem from Schittkowski [54] (No. 320).


This is a practical problem, and the exact solution is not known, but the expected optimal solution is , . First, the quadratic penalty function is used to get the unconstrained optimization problem as follows: where and , so that the nonlinear FOGBDS can be given as
By using OHAM-LS with , the terms of the OHAM-LS solutions for fractional order are acquired by using the concept of homotopy. According to Equation (6), we choose the linear and nonlinear operators in the following forms: