Research Article | Open Access


Hala A. Omar, "An Integrated Genetic Algorithm and Homotopy Analysis Method to Solve Nonlinear Equation Systems", Mathematical Problems in Engineering, vol. 2021, Article ID 5589322, 14 pages, 2021. https://doi.org/10.1155/2021/5589322

# An Integrated Genetic Algorithm and Homotopy Analysis Method to Solve Nonlinear Equation Systems

Revised: 01 Apr 2021
Accepted: 02 Jun 2021
Published: 15 Jun 2021

#### Abstract

Solving nonlinear equation systems for engineering applications is one of the broadest and most essential numerical studies. Several methods and combinations have been developed to solve such problems, either by finding their roots mathematically or by formalizing them as an optimization task with a predetermined objective function. This paper proposes a new algorithm for solving square and nonsquare nonlinear systems, combining the genetic algorithm (GA) and the homotopy analysis method (HAM). First, the GA is applied to search for the solution. If the solution is found, the algorithm terminates at this stage. Otherwise, the HAM is initiated based on the initial guess and linear operator computed in the GA stage. Moreover, the GA is utilized to calculate the optimum value of the convergence control parameter (h) algebraically, without plotting the h-curves or identifying the valid region. Four test functions are examined in this paper to verify the proposed algorithm’s accuracy and efficiency. The results are compared to the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE) method. The results corroborate the superiority of the proposed algorithm in solving nonlinear equation systems efficiently.

#### 1. Introduction

Various applications are modeled via systems of ordinary, partial, or even fractional differential equations. Solving such systems may be a challenge for mathematicians. To this end, several iterative methods have been proposed for solving nonlinear equation systems, as summarized in the literature. The homotopy analysis method (HAM) is a well-known semianalytic approach for approximating series solutions of nonlinear equations. The HAM characteristics, including the initial approximation, auxiliary linear operator, and auxiliary function, provide a powerful tool for nonlinear equation systems. Two types of convergence are required in the HAM: the convergence of the calculated series solution at a finite value of the variable x in the nonlinear problem’s domain, and the convergence of this value to the nonlinear equation’s solution. The convergence region and the convergence rate of the HAM’s series solutions depend on a convergence control parameter (h, or c0) introduced by Liao. After selecting the initial approximation, auxiliary linear operator, and auxiliary function, the HAM provides a one-parameter family of solutions. According to this parameter’s value, a member of this family is selected as an approximate solution of the nonlinear equation.

Consequently, the convergence region and the convergence rate of the series solutions calculated by the HAM depend on the convergence control parameter. As proposed by Liao, the convergence control parameter is obtained by plotting the h-curves. The valid region is the horizontal segment obtained by plotting the curves of specific quantities versus h. Acceptable values of the convergence control parameter can then be determined from this valid region by trial and error. However, the h-curve approach cannot give the optimal value of h. Several techniques were therefore developed to accelerate the convergence of the series solution.

In [21, 22], an optimal homotopy asymptotic method (OHAM) is suggested as a HAM alternative. An auxiliary function is assumed based on a set of unknown constants, and these constants are determined by applying the condition on the residuals. As the approximation order increases, solving the resulting nonlinear equations becomes more complex and CPU-time-consuming. A one-step optimal HAM was then proposed for estimating the convergence control parameter to overcome this disadvantage. The main difference between the two approaches is that only one algebraic equation is solved at each order of approximation in the one-step optimal HAM, which reduces the CPU time and makes it easier to apply with symbolic computation codes. However, this approach alone does not guarantee convergence; hence, a multiple-step optimal approach must be applied, followed by the one-step optimal procedure. In [24, 25], an approach based on adding a second auxiliary parameter to the zero-order deformation equation is proposed to further enhance the convergence characteristics. The additional parameter yields a new dimension in the convergence region, allowing auxiliary operators to be selected more freely and generalizing the homotopy analysis method to a broader range of nonlinear problems. However, the higher-order derivatives involved are computationally intensive. As a result, the approach is recommended only for low-order homotopy iteration schemes.

Another way to determine the convergence control parameter’s optimal value is to minimize the squared residual error. This residual is discretized by numerical rules like the trapezoidal rule, as in [24, 29], or by using Chebyshev points. The latter approach determines the convergence control parameter’s optimal value by minimizing the norm of the discrete residual function at each HAM approximation order and could find an acceptable approximation for a class of nonlinear differential equations. A further scheme, based on stochastic arithmetic (SA) and called the CESTAC method, was proposed to find the optimal convergence control parameter; the CADNA library is applied to implement the CESTAC method. Although this approach obtains the optimal convergence control parameter and the optimal number of iterations, plotting the h-curves is still needed because the parameter’s proper initial value must be chosen from the convergence region. Yet another way to calculate the convergence parameter is to minimize the residual as the objective function of a formulated optimization problem; the solution is then the value of the convergence parameter that drives the residual to almost zero. This optimization problem is classified as a type of general equilibrium problem, so traditional methods can be applied to obtain the solution. However, they have limitations because of the derivative calculations and the influence of the initial guess. These previous modifications provided enhanced approaches to calculate the convergence control parameter.
However, they still have severe limitations, including the following:
(i) Plotting the h-curves is a necessity.
(ii) The method hardly converges without being combined with other schemes.
(iii) The calculation of higher-order derivatives is computationally intensive.
(iv) The approaches that decrease the symbolic calculations are only suitable for low-dimensional systems.
(v) The absence of rules for defining the required number of terms in the HAM series increases the computational time and slows down the convergence.

The other type of modification depends on combining the HAM with other conventional schemes for solving nonlinear systems. These combinations aim to improve the traditional methods’ performance and to avoid (if possible) exploring the h-curves to find the optimal value of the convergence control parameter h. A Newton homotopy differential equation (NHDE) was proposed to find the solution of nonlinear systems initiated with a poor initial approximation. The procedure depends on constructing the Newton homotopy equation of the nonlinear system, which is then differentiated to create a linear ordinary differential equation in the unknowns. Finally, the obtained equation is solved by the Euler method to get the desired solution’s values. Although this algorithm does not need to search for the optimal value of h, it depends on calculating partial derivatives. In large systems, calculating the partial derivatives imposes a high burden and is sensitive to singularities. Another combination, of the Newton–Raphson scheme and the HAM, was proposed to construct a more efficient numerical algorithm named the Newton homotopy analysis method (NHAM). In this method, the value of h is determined by the Newton–Raphson scheme. However, this method was applied to nonlinear equations of only one variable and was not generalized to nonlinear systems.

In a further development, a combination of the Newton method and the HAM (NHAM) was considered for solving algebraic and transcendental equations. It aimed to accelerate the HAM’s convergence by beginning the calculations with the HAM to get a new initial point and then applying the Newton method from that point. If the Newton method does not converge after some iterations, the HAM is invoked again, initiated with the Newton solution. Although this method accelerated the convergence of the HAM and enabled it to start from an arbitrary initial guess, it depends on defining an approximate value of h, which requires plotting h-curves or choosing h before the calculations start. Choosing the convergence parameter at the beginning of the problem may lead to computing unneeded HAM terms, degrading the convergence rate. Moreover, it depends on the Jacobian matrix, which is seriously affected by singularities. These modified methodologies have their limitations, summarized below:
(i) Plotting h-curves is still needed.
(ii) The calculation of partial derivatives increases the computational burden.
(iii) Problems inherited from the traditional methods, such as Jacobian singularity and stiffness, hinder the convergence.
(iv) The undetermined number of HAM series terms affects the computation time.

Genetic algorithms (GAs) are search-based algorithms built on the concepts of natural selection and genetics. Holland developed them as a subset of evolutionary algorithms. The algorithm begins by creating a pool, or population, of possible solutions to the given problem. These solutions then undergo recombination and mutation (as in natural genetics), producing new children, and the process is repeated over many generations. GAs have been used to solve many applications and optimization problems. Genetic algorithms are randomized, but they perform much better than random local search. GAs have various advantages that make them popular and suitable for many applications. They do not require any derivative information. They exhibit massive parallelism that helps explore the global search space, making them more efficient than traditional methods in this respect. They also optimize both continuous and discrete functions with single or multiple objectives and provide a list of “good” solutions rather than a single solution. They are useful when the search space is enormous and many parameters are involved. Several hybridization techniques and combinations have been developed based on GAs. These hybridization techniques aim to exploit the GA’s advantages by accelerating the hybrid method, enhancing its initial conditions, reducing its computational burden, or avoiding gradient-based derivative calculations.
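To make the selection–recombination–mutation cycle concrete, the following is a minimal GA sketch in Python (the toy two-equation system, parameter values, and operator choices are illustrative assumptions, not the paper’s setup), minimizing a residual as in the root-finding formulation discussed here:

```python
import random

def residual(x):
    # Hypothetical toy system: f1 = x1^2 - x2, f2 = x2 - 1 (roots at x1 = +/-1, x2 = 1)
    f1 = x[0] * x[0] - x[1]
    f2 = x[1] - 1.0
    return (f1 * f1 + f2 * f2) ** 0.5

def ga_minimize(fitness, n_vars=2, pop_size=50, generations=200,
                bounds=(-5.0, 5.0), mut_rate=0.1, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    # initial population of random candidate solutions
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)              # rank the population (lower residual is fitter)
        parents = pop[:pop_size // 2]      # truncation selection with elitism
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            w = rng.random()
            # arithmetic crossover: a random convex combination of two parents
            child = [w * ai + (1.0 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < mut_rate:    # mutation keeps diversity in the pool
                j = rng.randrange(n_vars)
                child[j] += rng.gauss(0.0, 0.3)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga_minimize(residual)
```

Here truncation selection with elitism plays the role of natural selection; production GA implementations add tournament selection, encoding choices, and migration between subpopulations.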

This work aims to derive an improved algorithm combining the benefits of the previous HAM modifications. It also seeks to use optimization methods on the residual to determine the required number of series terms. Finally, it benefits from the parallelism and global convergence of the genetic algorithm. To this end, a hybrid algorithm combining the genetic algorithm (GA) and the HAM is proposed to solve nonlinear equation systems. The proposed algorithm (GAHAM) first tries to determine the target solution through the GA without involving the HAM. If the GA fails to converge to the solution on its own, the HAM phase is utilized for this task; the GA then enhances the HAM’s performance by providing it with an improved initial guess. Moreover, the GA is used within the HAM calculations to determine the optimum value of the convergence control parameter without plotting the h-curves. The contributions of this work can be summarized as follows:
(i) The proposed combination of GA and HAM successfully solves nonlinear equation systems with less computational time and effort.
(ii) If the GA fails to converge by itself, it initiates the HAM with an improved initial guess, improving the HAM procedure’s performance and decreasing the number of required HAM terms.
(iii) The GA is used in the HAM calculations to determine the convergence parameter, which dispenses with the plotting of h-curves and decreases the computational burden.
(iv) As all calculations in this algorithm are done algebraically, they can easily be repeated in high-dimensional or slowly convergent problems.

The main benefits of the proposed algorithm are as follows:
(1) If the genetic algorithm succeeds in calculating the solution, the HAM’s symbolic calculations and the gradient calculations of the traditional methods are not needed, reducing the computational effort and time.
(2) If the genetic algorithm fails to detect the solution, it introduces an enhanced initial guess and linear operator to the HAM, yielding faster convergence and fewer symbolic computations.
(3) If the HAM is needed, the genetic algorithm is applied to calculate the convergence control parameter’s optimum value without plotting the h-curves, decreasing the computation time and accelerating the convergence to the required solution.

The paper is organized as follows. Section 2 illustrates the arithmetic criteria, the recurrence relation of the HAM, and other details of the proposed algorithm with a flowchart. Section 3 presents the results of the proposed algorithm for four test functions, compared with the results of the Newton homotopy differential equation (NHDE) and the Newton homotopy analysis method (NHAM). Section 4 presents the analysis of the results and concluding remarks. Finally, Section 5 summarizes the conclusions.

#### 2. The Proposed Algorithm

Figure 1 illustrates the flowchart of the proposed algorithm. After choosing a random initial guess, the GA is applied first, resulting in a new point. If this point’s residual value is less than the specified tolerance, it is the solution of the investigated system, and the algorithm terminates without involving the HAM. Unfortunately, this is not guaranteed for all problems. Otherwise, the point resulting from the GA stage is utilized as an improved initial guess for the HAM in the second stage, where the HAM uses a predefined number of solution-series terms. The choice of this number is discussed in the concluding remarks. In stage 3, the HAM computations are completed, while the GA is applied to calculate an optimum value of the convergence control parameter without plotting h-curves. A new point is obtained, and its residual is compared to the desired tolerance. If the terminating condition is satisfied, the process stops, having used the GA and the HAM sequentially. If the solution is not reached, the algorithm is repeated from the first stage, with the initial population depending on the point resulting from the HAM procedure.

The described computational stages are defined as follows.

##### 2.1. Stage 1: Improving the Initial Guess

The system of nonlinear equations is defined as

$$F(X) = 0,$$

where $F(X) = [f_1(X), f_2(X), \ldots, f_m(X)]^T$ and $X = [x_1, x_2, \ldots, x_n]^T$, for $m$ nonlinear equations in $n$ unknowns (variables). The residual is measured by the norm $\|F(X)\|$: at an initial random point $X^0 = [x_1^0, \ldots, x_n^0]^T$, the residual is $R_0 = \|F(X^0)\|$.

Assume that the enhanced point is $X^1 = X^0 + dX$, with $dX = [dx_1, \ldots, dx_n]^T$. As a result of improving the initial guess, it is required that $R_1 = \|F(X^1)\| < R_0$.

Figure 2 shows an assumed relation between the initial and improved residuals at the two points $(x_i^0, R_0)$ and $(x_i^1, R_1)$; every variable in the system has components of these two points. According to Figures 2(a) and 2(b), if a line segment joins the pair $(x_i^0, R_0)$ and $(x_i^1, R_1)$, there is a proposed angle $\theta_i$ between this line segment and the vertical line through $x_i^0$. The residual may decrease with a decrease in the variable, as shown in Figure 2(a), or with an increase in the variable, as shown in Figure 2(b). Then, the change in the variable can be calculated as

$$dx_i = (R_1 - R_0)\tan(\theta_i). \quad (5)$$

Because $R_1$ is not known in advance, the assumption that the first stage reaches the solution is adopted for simplicity of calculation; as a result, $R_1 = 0$. Thus, (5) can be rewritten for each variable component $dx_i$ as

$$dx_i = -R_0\tan(\theta_i). \quad (6)$$

Then, the improved initial point is calculated as

$$x_i^1 = x_i^0 + dx_i, \quad i = 1, \ldots, n.$$

It is noticeable that the number of angles equals the number of unknowns in the nonlinear system.

As illustrated in (6), the variable change depends on the tan function, whose value can be any real number in $(-\infty, \infty)$. As a result, the values of the tan function cannot be entered directly into the GA as a search domain, unlike the values of the angles $\theta_i$. The angles’ values therefore represent the GA chromosomes, and their domain is defined as a bounded interval

$$\theta_i \in \left(\frac{\pi}{2}, \frac{3\pi}{2}\right) \quad \text{for } i = 1, \ldots, n.$$

This proposed domain of the angle ensures that the new initial point has a residual less than that of the initial random guess: the angle lies in the third or fourth quadrant relative to the point $(x_i^0, R_0)$, which guarantees that the new residual $R_1$ is less than the initial residual $R_0$. An additional parameter $\alpha$, with a bounded domain, is proposed to adjust the step size and accelerate the calculations; it multiplies the tan function of the angle. As a result, the total number of unknowns for an $n$-unknown system is $n + 1$.

After constructing the chromosomes based on these intervals, the values of $dx_i$ are calculated, and the improved initial point is formed. The optimization problem of the GA stage is formulated as minimizing the residual $\|F(X^0 + dX)\|$ over the angles $\theta_1, \ldots, \theta_n$ and the step-size parameter $\alpha$.
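As a sketch of this stage-1 encoding, the snippet below decodes a chromosome of angles plus the step parameter α into an improved point using a step of the form dx_i = −α·R0·tan(θ_i); the toy system and the hand-picked angle values are illustrative assumptions standing in for the GA search:

```python
import math

def residual(x):
    # Hypothetical 2-equation system: f1 = x1 + x2 - 3, f2 = x1*x2 - 2
    return math.hypot(x[0] + x[1] - 3.0, x[0] * x[1] - 2.0)

def decode(chromosome, x0, r0):
    # chromosome = [theta_1, ..., theta_n, alpha]; each angle maps to a
    # variable step through the tan function, scaled by the initial residual
    *thetas, alpha = chromosome
    return [xi - alpha * r0 * math.tan(t) for xi, t in zip(x0, thetas)]

x0 = [4.0, 4.0]
r0 = residual(x0)

# Angles chosen by hand so that the steps point toward the root (1, 2);
# in the algorithm these values would be evolved by the GA over the angle domain.
chromosome = [math.atan(3.0 / r0), math.atan(2.0 / r0), 1.0]
x1 = decode(chromosome, x0, r0)
```

Because tan maps a bounded angle interval onto all real numbers, the GA can search a finite chromosome domain while the induced steps dx_i remain unbounded.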

After the GA stage’s calculations are complete, the resulting point is tested for optimality. If the residual is less than the predetermined tolerance, the algorithm stops after this first stage, and the obtained solution is confirmed. Otherwise, the resulting point is passed to the HAM stage as an enhanced initial guess.

##### 2.2. Stage 2: The HAM Procedure

Based on the initial guess realized in the GA stage, the HAM is initiated. In its representation, $X^1$ plays the role of the initial guess, and $q \in [0, 1]$ is the homotopy parameter, or embedding parameter.

The following homotopy function is considered in order to solve (1).

Then, (12) is written in terms of the linear operator L and the convergence control parameter h.

Considering (16) with a nonsingular matrix B as the linear operator, the zeroth-order deformation equation is calculated as

$$(1 - q)\,B\,[X(q) - X^1] = q\,h\,F(X(q)).$$

The matrix B is calculated from the results of the first stage, as follows.

The enhanced quantities involved are calculated from the results of the GA stage. For a nonsquare matrix $A$ with full column rank, the product $A^T A$ is invertible according to [1, 2], and the pseudoinverse of $A$ can be obtained from the formula

$$A^{+} = (A^T A)^{-1} A^T.$$
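For a full-column-rank matrix, the pseudoinverse formula $(A^T A)^{-1} A^T$ can be checked numerically against NumPy’s built-in routine (the matrix values here are arbitrary):

```python
import numpy as np

# A tall (m > n) matrix with full column rank, as arises in nonsquare
# systems with more equations than unknowns (the values are arbitrary)
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Moore-Penrose pseudoinverse via the explicit formula (A^T A)^{-1} A^T
A_pinv = np.linalg.inv(A.T @ A) @ A.T
```

For rank-deficient matrices the explicit formula breaks down (A^T A is singular), which is why general-purpose routines compute the pseudoinverse through the SVD instead.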

The matrix B is then calculated accordingly.

Then, (17) is written as

The differentiation of (21) w.r.t. q will result in

After substitution with and , the resulting equation is

As F is a function of the vector X, which in turn is a function of the embedding parameter q, the derivative is computed as

From (23), the term is calculated as

The differentiation of (22) with respect to q results in (26). Equation (26) is rewritten as (27), and substitution into (27) results in the following.

Using (23), (28) is rewritten; by multiplying both sides by the appropriate matrix, the required term is calculated. Differentiating (26) again with respect to q and substituting into (30) yields the following.

Using (13), (32) is written as follows.

Then, (33) is rewritten and, by multiplying both sides by the appropriate matrix, the corresponding term is calculated as follows.

By repeating the same procedure, the rth-order deformation equation is obtained. This can be used to determine the solution series of algebraic equation systems. In general, the series term is calculated as follows:

According to (36), the solution series obtained at the end of this stage is a function of the convergence control parameter (h). The last step is applying the GA to determine the optimum value of h and, therefore, the final solution.
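To illustrate how such a series is built in practice, the sketch below constructs a five-term HAM series for the scalar equation x² − 2 = 0 by solving the deformation coefficients order by order from a zeroth-order deformation of the form (1 − q)·B·(x(q) − x0) = q·h·f(x(q)); the scalar problem, the initial guess, and the choice B = f'(x0) are illustrative assumptions, not the paper’s test functions:

```python
import sympy as sp

q, h, x = sp.symbols('q h x')
f = x**2 - 2                       # solve f(x) = 0; the exact root is sqrt(2)
x0 = sp.Rational(3, 2)             # initial guess (would come from the GA stage)
B = f.diff(x).subs(x, x0)          # linear operator L = B*(.), here B = f'(x0) = 3

# x(q) = x0 + c1*q + ... + c5*q^5, with the zeroth-order deformation equation
# (1 - q) * B * (x(q) - x0) = q * h * f(x(q)) holding identically in q
cs = sp.symbols('c1:6')
xq = x0 + sum(c * q**(i + 1) for i, c in enumerate(cs))
G = sp.expand((1 - q) * B * (xq - x0) - q * h * f.subs(x, xq))

sol = {}
for k, ck in enumerate(cs, start=1):
    # the q^k coefficient of G is linear in c_k and involves only c_1..c_k,
    # so the coefficients can be solved sequentially as expressions in h
    sol[ck] = sp.solve(G.coeff(q, k).subs(sol), ck)[0]

series = x0 + sum(sol[c] for c in cs)   # x(q = 1): a five-term series in h
approx = float(series.subs(h, -1))      # h = -1 reproduces a Newton-like step
```

With this choice of B, the one-term truncation at h = −1 is exactly a Newton step from x0, and the remaining terms refine it further toward √2.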

##### 2.3. Stage 3: Calculating the Optimal Value of Convergence Control Parameter and Final Solution

In this stage, the residual function, which is a function of the convergence control parameter (h), serves as the GA’s objective function. The domain of h must be defined to start the GA optimization process. As illustrated earlier, past contributions depended on plotting the h-curves, with the resulting horizontal segment giving the desired search domain. This proposed algorithm, in contrast, has the advantage of not needing to draw these curves. As reported in [4, 5], h may take any real value; accordingly, the convergence control parameter’s search domain is defined through the bounded domain of an auxiliary angle ∅ in (37), to avoid an indefinite search interval.

According to (37), the search for the optimum value of h over the indefinite domain of real numbers is carried out through a search over the definite ∅ domain. The optimization problem at this stage is formulated in the GA as minimizing the residual with respect to ∅.

At the end of this stage, the convergence control parameter’s optimum value is obtained, and a new solution is therefore calculated. Hence, stages 2 and 3 together represent the HAM procedure based on the enhanced initial guess. If the resulting solution of the HAM procedure is optimal, the algorithm terminates and the solution is displayed. Otherwise, it is passed to the first stage as a new initial guess, and the algorithm restarts until the desired tolerance is reached or no further improvement in the residual function is achieved. Then, the final solution of the nonlinear equations is declared.
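A sketch of the definite-domain search idea: map a bounded angle ∅ to an unbounded h via h = tan(∅) (the tan mapping is an assumption, since (37) did not survive extraction, and a plain grid scan stands in for the GA), then minimize a residual expressed as a function of h; the one-term toy series below is hypothetical:

```python
import math

def residual_of_h(h_val):
    # Toy one-term HAM series for x^2 = 2 with x0 = 1.5 and B = 3:
    # x(h) = x0 + h*f(x0)/B = 1.5 + h/12, so the residual vanishes near h ~ -1.03
    x_h = 1.5 + h_val / 12.0
    return abs(x_h * x_h - 2.0)

def best_h(samples=20001):
    # Scan the unbounded h-axis through a bounded angle phi, with h = tan(phi),
    # so a definite-domain search (a grid here; the GA in the algorithm)
    # effectively covers all real values of h
    best_r, best = float('inf'), 0.0
    for i in range(1, samples):            # open interval: skip the endpoints
        phi = -math.pi / 2.0 + math.pi * i / samples
        h_val = math.tan(phi)
        r = residual_of_h(h_val)
        if r < best_r:
            best_r, best = r, h_val
    return best_r, best

r_opt, h_opt = best_h()
```

The grid is dense near h = 0 and sparse for large |h|, which suits the typical situation where the useful convergence control values lie in a moderate range.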

It is worth mentioning that, with this sequence of steps, the number of terms calculated with the HAM is fixed before the HAM stage starts. In the proposed algorithm, it is selected as five; the reason for this choice is explained in the test functions section. For further illustration of the procedure, Algorithm 1 shows the pseudocode of the proposed algorithm.

Initialize:
  (i) the size of the population (N)
  (ii) the maximum number of generations (G)
  (iii) the initial random guess with its corresponding residual function F
  (iv) the (n + 1) angles with their respective domains, where n is the number of system variables
  (v) m, the number of HAM series terms
  (vi) the angle ∅ with its domain
  (vii) the desired tolerance for stopping, Tol
While the stop criterion is not satisfied, do
  Stage 1:
    For It = 1 : G do
      For i = 1 : N do
        Calculate the new individual
        Calculate its fitness function
      End for
    End for
  If the residual ≤ Tol then
    Display the solution
  Else
    Stage 2: construct the HAM series
    Stage 3:
      For It = 1 : G do
        For i = 1 : N do
          Calculate the new individual and its fitness
        End for
      End for
    If the residual ≤ Tol then
      Display the solution and the convergence control parameter’s optimum value
    Else
      Go to stage 1
End while
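A runnable skeleton of Algorithm 1’s control flow in Python; the stage bodies are deliberately crude stand-ins (a random-perturbation search for the GA stage and a guarded damped step for the HAM stages), and the test system and parameters are hypothetical:

```python
import math
import random

TOL = 1e-6

def residual(x):
    # Hypothetical test system: f1 = x1 + x2 - 3, f2 = x1*x2 - 2
    return math.hypot(x[0] + x[1] - 3.0, x[0] * x[1] - 2.0)

def stage1_ga(x, rng, trials=200):
    # Stand-in for the GA stage: keep the best random perturbation of the
    # current point, with the perturbation scale tied to the current residual
    best = x
    for _ in range(trials):
        scale = 0.5 * max(residual(best), 1e-12)
        cand = [xi + rng.gauss(0.0, scale) for xi in best]
        if residual(cand) < residual(best):
            best = cand
    return best

def stages2_3_ham(x):
    # Stand-in for stages 2-3 (HAM series + GA-chosen h): a damped correction
    # step, guarded so the residual never increases
    f1 = x[0] + x[1] - 3.0
    f2 = x[0] * x[1] - 2.0
    cand = [x[0] - 0.3 * f1, x[1] - 0.3 * f2]
    return cand if residual(cand) < residual(x) else x

def gaham(x, max_outer=50, seed=0):
    rng = random.Random(seed)
    for _ in range(max_outer):
        x = stage1_ga(x, rng)          # stage 1: improve the initial guess
        if residual(x) < TOL:
            return x                   # solved without invoking the HAM
        x = stages2_3_ham(x)           # stages 2-3: HAM correction
        if residual(x) < TOL:
            return x
    return x                           # best point found within the budget

x_final = gaham([4.0, 4.0])
```

The point of the skeleton is only the three-stage loop: try the cheap global stage first, fall back to the series-based correction, and feed the result back as the next starting population.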

#### 3. Test Functions and Result Analysis

Four test functions were utilized to investigate the performance of the proposed algorithm. The results are compared to the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE). For generality, both square and nonsquare systems are considered as test functions; in the nonsquare cases, the matrices are calculated using the pseudoinverse method. The numbers of variables and equations are n and m, respectively. All these test functions are described in detail in [34, 41–44].

The proposed algorithm is coded in MATLAB 2020a and run on a computer with an Intel® Core™ i7 CPU at 3.2 GHz and 16 GB RAM. For the computational studies, a population size of 50 and 50 generations are used; the crossover fraction is 0.8, the migration interval is 20, and the migration fraction is 0.2.

##### 3.1. Selected Test Function 1 (Beale Function) 

The first test function is represented as follows.

The exact solution of this system is known. The comparison starts with the NHAM. Three different cases of initial guesses are proposed, and the results are summarized in Table 1. In both the NHAM and the GAHAM, the number of calculated terms is five. This number was selected carefully: it achieved the goal of improving or reaching the solution in the tested cases while limiting the time-consuming symbolic calculations.
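For reference, the standard three-term Beale system (stated here as an assumption, since the equation image did not survive extraction) has m = 3 equations in n = 2 unknowns and vanishes at the root (3, 0.5):

```python
def beale_residuals(x1, x2):
    # Standard three-term Beale system (m = 3 equations, n = 2 unknowns);
    # every term vanishes simultaneously at the root (3, 0.5)
    return [
        1.5 - x1 * (1.0 - x2),
        2.25 - x1 * (1.0 - x2 ** 2),
        2.625 - x1 * (1.0 - x2 ** 3),
    ]

r = beale_residuals(3.0, 0.5)
```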

Case 1. Initial guess: (7.0611, 6.0747)

| Method | Result | HAM iterations | Other iterations |
| --- | --- | --- | --- |
| NHAM | Converging | 1 | Newton: 56 |
| GAHAM | Converging | 2 | GA: 1 |
| NHDE | Diverging | None | 500 |

Case 2. Initial guess: (0.127, 0.9134)

| Method | Result | HAM iterations | Other iterations |
| --- | --- | --- | --- |
| NHAM | Converging | 1 | Newton: 77 |
| GAHAM | Converging | None | GA: 1 |
| NHDE | Converging | None | 22 |

Case 3. Initial guess: —

| Method | Result | HAM iterations | Other iterations |
| --- | --- | --- | --- |
| NHAM | Converging | 2 | Newton: 100 + 11 |
| GAHAM | Converging | None | GA: 1 |
| NHDE | Converging | None | 105 |

According to Table 1, the NHAM in case 1 converged to the exact solution after one iteration of the HAM followed by 56 iterations of Newton’s method. The completion of calculations required the plotting of the h-curve of X1 to identify the valid region and, therefore, the convergence parameter optimum value. Figure 3 shows the h-curve and the corresponding valid region using the NHAM. The NHDE approach diverges despite increasing the number of iterations to 500. In the GAHAM, the solution is obtained after two iterations of the HAM and one iteration of the GA stage. The HAM iteration includes the last two stages of the proposed algorithm. All calculations proceeded algebraically without drawing the corresponding h-curve, which led to the acceleration of computations and accurate results.

For case 2, the NHAM used one HAM iteration followed by 77 Newton iterations, and the NHDE needed 22 iterations to converge. In the GAHAM, however, no HAM iterations were required: convergence occurred after a single pass through the first (GA) stage.

For case 3, the NHAM converges after two HAM iterations and 111 Newton iterations. It should be mentioned that, after the first HAM iteration, 100 Newton iterations were performed without convergence, which triggered the second HAM iteration; after that iteration and a further 11 Newton iterations, the final solution was reached. Furthermore, calculating the convergence parameter from the h-curves was time-consuming.

The NHAM’s sequence prevents moving to the Newton step until the convergence control parameter calculations are complete, which affects the speed of convergence and increases the computational burden. Figure 4 shows the h-curves of the HAM iterations used to detect the valid region of the convergence control parameter. The NHDE converges after 105 iterations, whereas the GAHAM converges after one iteration of the GA stage with no HAM iterations needed. The GAHAM thus has the minimum number of iterations. Besides, it converges whatever the initial guess is, without plotting h-curves as in the NHAM and without the Jacobian required by the NHDE, thereby avoiding the singularity and divergence problems that may appear with the NHDE method.

##### 3.2. Selected Test Function 2 (Kowalik and Osborne Function) 

Selected test function 2 is represented as follows.

The exact solution of this test function is reported in the literature. The results are summarized in Table 2, and Figures 5(a) and 5(b) illustrate the variation of the residual versus iterations for each method.

Test function 2

| Method | Result | HAM iterations | Other iterations |
| --- | --- | --- | --- |
| NHAM | Almost converging | 1 | Newton: 302 |
| NHDE | Converging | None | 495 |
| GAHAM | Converging | 3 | GA: None |

Figure 5(a) shows the results of the three methods over the whole number of iterations, while Figure 5(b) details the convergence during the first five iterations to show the difference in the calculated residuals of the NHAM and the GAHAM. The GAHAM converged at the third iteration, whereas the NHAM required many iterations merely to get near the solution; hence, the NHAM needs a stopping condition for the case of no further improvement in the solution. The NHDE found the exact solution after 495 iterations, which means that the GAHAM was faster than the other two methods and more accurate than the NHAM.

##### 3.3. Selected Test Function 3

Selected test function 3 is represented as follows.

The exact solution reported in the literature results only under certain conditions and a particular initial guess. By contrast, when the NHAM was applied to this function with a random initial guess and a five-term HAM series in each HAM iteration, it did not converge; the Newton method used 100 iterations, and the HAM used 15 iterations within the NHAM. Table 3 summarizes the results of the three methods. Figure 6(a) shows the residuals of the three methods over up to 12 iterations, where the GAHAM curve cannot be distinguished because of the NHAM’s high residual. With a compressed residual scale, Figure 6(b) shows the faster decrease of the residual in the GAHAM and its ability to reach the exact solution faster than the NHDE method.

Test function 3

| Method | Result | HAM iterations | Other iterations |
| --- | --- | --- | --- |
| NHAM | Diverging | 15 | Newton: 100 |
| NHDE | Converging | None | 11 |
| GAHAM | Converging | 4 | GA: 4 |
##### 3.4. Selected Test Function 4 (Penalty Function II) 

Selected test function 4 is represented as

The exact solution of this function is known.

As shown in Table 4, the NHAM and the NHDE diverge. The NHAM was applied for five HAM iterations, with up to 100 Newton iterations following each one; however, Newton’s iterative process was interrupted in each stage after ten consecutive divergent iterations, so the total number of Newton iterations was 61. The NHDE also diverged even when the number of iterations was increased to 600. The exact solution was detected by the GAHAM after nine iterations of the three stages. Figure 7(a) shows the residuals of the three methods during all iterations, and Figure 7(b) shows the results during the first ten iterations to give more detail on the methods’ performance. It shows that the GAHAM decreases the residual continuously during the iterations until the exact solution is reached.

Test function 4

| Method | Result | HAM iterations | Other iterations |
| --- | --- | --- | --- |
| NHAM | Diverging | 5 | Newton: 61 |
| NHDE | Diverging | None | 600 |
| GAHAM | Converging | 9 | GA: None |

#### 4. Analysis of the Results and Concluding Remarks

Comparing the proposed GAHAM's overall results with those of the NHAM and NHDE methods leads to the following remarks.

1. The proposed algorithm combines the parallelism and global search ability of the GA with the convergence of the HAM to find the solution of nonlinear equation systems.
2. Unlike Newton's method and other local algorithms, the algorithm does not require derivative calculations to improve the initial guess, which simplifies the computations and allows convergence even in singular systems (test function 3). Moreover, calculations based on the Jacobian matrix are time-consuming and computationally expensive in large systems unless the Jacobian is approximated, which may in turn affect convergence.
3. In the NHAM and NHDE methods, the residual does not decrease monotonically during the calculations; it may increase, slowing convergence and raising the number of required iterations. In contrast, the GAHAM decreases the residual continuously, ensuring sustained improvement of the solution and speeding up convergence.
4. The main differences between the HAM calculations in the proposed algorithm and other HAM modifications are as follows:
   (i) The number of series terms can be specified before the calculations, enabling control of the symbolic computations. This number varies with the system's complexity and nonlinearity.
   (ii) Drawing the h-curve to find the optimum value of the convergence control parameter is not needed, because the proposed search domain covers all real numbers, which contains the valid region.
5. The specified criterion and domain of angles in the GA stage allow convergence to the solution. As the test functions show, in some problems the GA stage can find the system's solution without involving the HAM stage at all.
6. In extensive systems and fast-convergence cases, the proposed algorithm may not need the additional step-size parameter (α).
7. The advantages of this approach can be summarized as follows:
   (i) It exploits the genetic algorithm's parallelism to find the solution without derivatives, initial-guess considerations, or symbolic HAM calculations (when possible).
   (ii) If the HAM is required, the proposed approach supplies an initial guess as close as possible to the optimum solution, reducing the symbolic HAM calculations and saving computational time and effort.
   (iii) The proposed approach calculates the convergence parameter of the HAM without plotting h-curves, which accelerates the computations and makes them easy to repeat.
   (iv) The ability to repeat the calculations enables computing the minimum number of HAM terms required to reach the optimum solution, which accelerates convergence and reduces the required computational time and effort.
8. Every algorithm or approach has its own limitations; those of the proposed GAHAM algorithm are as follows:
   (i) It depends on fixing the number of HAM terms before the calculations start. There is no definite rule for this, so it is chosen according to the residual yielded by the GA stage: if that residual is low, few terms are expected to be needed. In the present study, at most five terms were required.
   (ii) The effect of the GA parameters on the convergence of the algorithm has not been studied here and should be investigated in future work.
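The staged flow described in these remarks can be sketched as a single driver. The sketch below is illustrative only: plain random sampling stands in for the full GA stage, and a damped fixed-point step x ← x + h·F(x) stands in for the truncated HAM series, with h playing the role of the convergence-control parameter (fixed here for simplicity); `solve_gaham` and all other names are hypothetical:

```python
import random

def solve_gaham(F, bounds, tol=1e-8, ga_iters=2000, seed=0):
    """Sketch of the staged flow: (1) global GA-style search; if it
    already meets the tolerance, stop; otherwise (2) refine the GA
    result with a HAM-like series step whose damping h plays the role
    of the convergence-control parameter."""
    rng = random.Random(seed)
    resid = lambda x: max(abs(f) for f in F(x))

    # Stage 1: global search (plain random sampling stands in for the GA).
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    for _ in range(ga_iters):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        if resid(cand) < resid(best):
            best = cand
    if resid(best) < tol:       # the GA stage alone solved the system
        return best

    # Stage 2: refinement from the GA initial guess.  The damped
    # fixed-point step x <- x + h*F(x) stands in for the truncated
    # HAM series; h is the convergence-control parameter.
    h, x = -0.25, best
    for _ in range(500):
        x = [xi + h * fi for xi, fi in zip(x, F(x))]
        if resid(x) < tol:
            break
    return x

# Usage: a one-dimensional system with root sqrt(2).
root = solve_gaham(lambda x: [x[0]**2 - 2.0], [(0.0, 3.0)])
```

Note how the remark on monotone residual decrease is reflected in the structure: the refinement loop only terminates once the residual criterion is met, and the global stage always returns its best (lowest-residual) candidate.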

#### 5. Conclusion

This paper proposes a new algorithm (GAHAM) that combines the HAM and the GA for solving nonlinear equation systems. The algorithm computes the solution in three sequential stages. First, the GA is used to find either the system's solution or the best initial guess for the HAM. If the first stage fails to reach the target solution, the second stage is performed using the HAM, based on a specified number of series terms and the linear operator matrix obtained from the improved GA-stage solution. In the third stage, the HAM's convergence control parameter is calculated algebraically by the GA, minimizing the residual over a predefined search domain without plotting the h-curves. The algorithm's performance was investigated on four selected test functions with square and nonsquare systems and compared with the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE) method. The results demonstrated that the proposed algorithm obtained the solution even in singular systems where the compared methods failed. The algorithm benefits from the parallelism of the genetic algorithm to calculate the required solution; if the GA stage fails to find that solution, it accelerates the HAM calculations by supplying an enhanced initial guess and linear operator and by calculating the convergence parameter without plotting h-curves. All results verified the speed and efficiency of the proposed algorithm in solving nonlinear equation systems.
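The third stage, choosing the convergence-control parameter h by minimizing the residual over a predefined domain rather than reading it off an h-curve, can be illustrated as follows. As before, random sampling stands in for the GA and the damped step x ← x + h·F(x) stands in for the truncated HAM series; `best_h` and its parameters are hypothetical names, not the paper's implementation:

```python
import random

def best_h(F, x0, h_domain=(-2.0, 0.0), samples=200, steps=20, seed=1):
    """Choose the convergence-control parameter h by directly minimizing
    the residual over a search domain, with no h-curve plotting."""
    rng = random.Random(seed)
    resid = lambda x: max(abs(f) for f in F(x))

    def run(h):
        # Residual after a fixed number of series-like steps with this h.
        x = list(x0)
        for _ in range(steps):
            x = [xi + h * fi for xi, fi in zip(x, F(x))]
            if any(abs(xi) > 1e6 for xi in x):  # diverging run: reject this h
                return float("inf")
        return resid(x)

    candidates = [rng.uniform(*h_domain) for _ in range(samples)]
    return min(candidates, key=run)

# Usage: for F(x) = x^2 - 2 from x0 = 1, the selected h lies near
# -1/(2*sqrt(2)), the value that makes the iteration locally fastest.
h_opt = best_h(lambda x: [x[0]**2 - 2.0], [1.0])
```

The key design point mirrors the paper's claim: because the selection is a direct residual minimization over a domain, it is cheap to repeat, e.g. once per choice of the number of series terms.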

#### Data Availability

All data related to the work are included in the paper and its relevant references. Clarification or data requirements that support the findings of this study are available from the author upon request.

#### Conflicts of Interest

The author declares no conflicts of interest.
