Abstract

Many engineering and scientific models are based on nonlinear systems of equations (NSEs), and their effective solution is critical for development in these domains. NSEs can be modeled as an optimization problem, so this paper proposes an optimization method for solving NSEs called the chaotic enhanced genetic algorithm (CEGA). CEGA is a chaotic noise-based genetic algorithm (GA) with improved performance. It is configured to use a new mechanism, chaotic noise, to overcome common drawbacks of optimization methods such as lack of diversity among solutions, imbalance between exploitation and exploration, and slow convergence to the best solution. The goal of the chaotic noise is to reduce the number of repeated solutions and of iterations, thereby speeding up the convergence rate. The chaotic logistic map is used to generate the noise, since it has been adopted by numerous researchers and has proven effective in increasing the quality of solutions and providing the best performance. CEGA is tested using many well-known NSEs, and its results are compared with those of the original GA to demonstrate the importance of the modifications introduced in CEGA. Promising results were obtained: CEGA's average percentage of improvement was about 75.99%, indicating that it is quite effective in solving NSEs. Finally, comparing CEGA's results with previous studies, statistical analysis using the Friedman and Wilcoxon tests demonstrated its superiority and ability to solve this kind of problem.

1. Introduction

Many models in engineering and science are based on nonlinear systems of equations (NSEs), and their solution is critical for development in these fields. NSEs appear directly in some applications, and indirectly when practical models are transformed into NSEs [1]. Finding a robust and effective solution for NSEs can be a difficult task in theory.

The bisection technique, Muller’s method, false-position method, Levenberg–Marquardt algorithm, Broyden method, steepest descent methods, branch and prune approach, Halley’s method, Newton/damped Newton methods, and Secant method have traditionally been used to solve NSEs [2]. Secant and Newton are the methods of choice for solving NSEs in general. Some techniques, on the other hand, turn the NSEs into an optimization problem [3], which is subsequently solved using the augmented Lagrangian method [4]. These approaches are time-consuming, may diverge, are inefficient when solving a set of nonlinear equations, require a tedious process to calculate partial derivatives to build the Jacobian matrix, and are sensitive to initial conditions [5].

Because of these constraints, researchers have used evolutionary algorithms (EAs) to solve NSEs. EAs are a class of metaheuristics often used to address optimization problems that are too difficult to solve using traditional methods. EAs such as the genetic algorithm (GA) [6–8], particle swarm optimization (PSO) [9, 10], artificial bee colony (ABC) [11], cuckoo search algorithm (CSA) [12], and firefly algorithm (FA) [13] have been used to solve NSEs. In [6], Chang proposed a real-coded GA for solving nonlinear systems. In [7], Grosan and Abraham offered a novel GA-based approach for dealing with complex NSEs by recasting the problem as a multiobjective optimization problem. In [8], an efficient GA with symmetric and harmonic individuals was used to solve NSEs. Mo et al. in [9] presented a conjugate direction PSO for addressing NSEs, which merges the conjugate direction method (CDM) into PSO to enhance it and enable fast optimization of high-dimensional problems; by reducing high-dimensional function optimization to a low-dimensional problem, CDM helps PSO avoid local minima. Jaberipour et al. suggested a new version of PSO for solving NSEs, based on a novel way of updating each particle's location and velocity [10]; they changed the particle update to tackle the drawbacks of the classic PSO approach, such as trapping in local minima and slow convergence. Also, Jia and He presented a hybrid ABC technique for solving NSEs in [11], which combined the ABC and PSO algorithms; the hybrid algorithm corrects the problem of sinking into a premature or local optimum by integrating the benefits of both strategies. Furthermore, in [12], Zhou and Li proposed an upgraded CSA to handle NSEs. They employed a novel encoding strategy that ensures the provided solution is feasible without requiring the cuckoo's evolution to be altered.
Finally, in [13], Ariyaratne et al. introduced an enhanced FA that solves NSEs as an optimization problem, with several advantages: it eliminates the need for initial assumptions, differentiation, or even function continuity, and it can provide many root estimates at the same time.

The genetic algorithm (GA), based on natural selection, genetics, and evolution, was presented in 1975 [14] and described in 1989 [15] as a competent global strategy for tackling optimization problems. GA is well suited to solving optimization problems and continues to attract academic interest. According to the literature, GA has commonly been used to solve NSEs. Mangla et al. [16] highlight flaws in existing approaches (bisection, regula falsi, Newton–Raphson, secant, Muller, and so on) and justify the GA's application to NSEs, while an approach for ordering NSEs so that they can be solved by the fixed-point method was proposed in [17], with the equations' arrangement determined by a GA that works with a population of possible resolution procedures for the system. In addition, in [18], Ji et al. presented an optimization approach based on clustering evolution for obtaining an optimal piecewise linear approximation of a set of nonlinear functions; the technique balances approximation precision against simplicity and refines the approximation with the fewest possible segments. In [19], a GA technique to solve NSEs for a variety of applications is presented, in which the roots of NSEs were approximated by tuning the population size, degree of mutation, crossover rate, and coefficient size. A method for solving nonlinear equations using GA was also given in [20]. Furthermore, in [21], evolutionary algorithms were used to solve NSEs after turning them into an unconstrained optimization problem through some basic mathematical relations. Finally, in [22], a new intelligent computational strategy for solving nonlinear equations, based mainly on variants of GAs, was proposed.
When working with complex and massive systems, however, GA has some downsides: it can be extremely slow, and the increased number of iterations or long search time makes it hard to identify the global optimal solution.

From this motivation, this study offers an algorithm that addresses one of the most significant drawbacks of GA and of all EAs: the repetition of solutions during the optimization process, which wastes time. The proposed optimization algorithm is called the chaotic enhanced genetic algorithm (CEGA). Chaos is a mathematical concept that has been shown to improve the performance of numerous optimization algorithms; it has received a great deal of attention and has been applied in a range of domains, including optimization [23]. The proposed CEGA combines GA with chaotic noise. The chaotic noise is applied when solutions are repeated during the GA optimization process, changing the positions of the solutions chaotically. This combination aims to enhance GA by overcoming drawbacks such as lack of diversity among solutions, imbalance between exploitation and exploration, and slow convergence to the best solution.

The major contributions of this paper include the following:
(1) Proposing a new methodology called the chaotic enhanced genetic algorithm (CEGA) to solve NSEs by combining GA with chaotic noise
(2) Providing sufficient diversity of solutions and preventing wasted time during the optimization process by overcoming the repetition of solutions
(3) Ensuring improvement in every iteration through chaotic noise and fast convergence to the best solutions
(4) Testing CEGA on many well-known NSEs
(5) Using statistical tests to determine the relevance of the CEGA findings
(6) Showing that CEGA is competitive with and better than other optimization algorithms

The following is how the paper is structured. Section 2 discusses nonlinear systems of equations. The proposed technique is detailed in Section 3. The numerical findings and discussions are shown in Sections 4 and 5, respectively. Section 6 concludes with observations and conclusions.

2. Nonlinear System of Equations

The mathematical definition of a nonlinear system of equations (NSEs) is

F(x) = (f_1(x), f_2(x), ..., f_n(x))^T = 0, with x = (x_1, x_2, ..., x_n)^T ∈ Ω ⊆ R^n,

where x is a vector of n components in a subset Ω of R^n, and f_1, ..., f_n are the nonlinear functions that map the n-dimensional space's vector to the real line. Some of the functions may be nonlinear, while others are linear. Finding a solution for NSEs entails finding a point x* at which each of the functions above equals zero [24].

Definition 1. If the functions satisfy f_i(x*) = 0 for all i = 1, 2, ..., n, then the solution x* is called the optimal solution of the NSEs.
Many approaches [25–27] transform the NSEs into an unconstrained optimization problem by summing the left-hand sides of all equations under the absolute value function:

min F(x) = |f_1(x)| + |f_2(x)| + ... + |f_n(x)|,   (2)

where F(x) denotes the objective function. If all of the nonlinear equations are equal to zero (f_i(x) = 0 for all i), the objective function in (2) attains its global minimum.
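To make this transformation concrete, here is a minimal Python sketch (the paper's own implementation is in MATLAB and not shown); the two-equation toy system and all names below are hypothetical, chosen only to illustrate summing absolute residuals into a single objective:

```python
# Illustrative sketch (not the authors' code): recasting an NSE as an
# unconstrained minimization problem. The example system is hypothetical.
def residuals(x):
    # A toy 2-equation nonlinear system: f1(x) = x0^2 - x1, f2(x) = x0 + x1 - 2
    return [x[0] ** 2 - x[1], x[0] + x[1] - 2.0]

def objective(x):
    # F(x) = sum of absolute residuals; F(x*) = 0 exactly at a root.
    return sum(abs(f) for f in residuals(x))

# At the exact root x = (1, 1), both equations vanish and F = 0.
print(objective([1.0, 1.0]))  # 0.0
```

Any point driving the objective to zero is simultaneously a root of every equation, which is why a global minimizer of F solves the NSE.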

3. The Proposed Methodology

This section provides an overview of GA and chaos theory. The suggested CEGA is then presented in detail.

3.1. Genetic Algorithm

In 1975 and 1989, respectively, Holland and Goldberg proposed and developed the genetic algorithm (GA) as an optimization technique [14, 15]. GA begins with a collection of chromosomes (solutions). Then, using the GA operators (selection, crossover, and mutation), a new set of chromosomes (solutions) is generated. The newly generated chromosomes tend to be of higher quality than the preceding generation. These steps are repeated until the termination conditions are met, and the best chromosome (solution) of the final generation is returned as the final solution. Figure 1 depicts the generic GA's pseudocode.
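The generic loop just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's Figure 1 pseudocode: a simple arithmetic crossover and Gaussian mutation stand in for the specific operators CEGA uses, and the one-variable objective F is a hypothetical toy problem (a root of x² − 4 = 0 on [0, 5]):

```python
import random

# Minimal GA sketch: selection, crossover, mutation, elitism (illustrative
# only; operators and objective are simplified stand-ins, not CEGA's).
random.seed(0)

def F(x):  # toy objective: distance of x^2 from 4; minimum at x = 2
    return abs(x[0] ** 2 - 4.0)

def tournament(pop, k=2):
    # Pick k individuals at random; the fittest becomes a parent.
    return min(random.sample(pop, k), key=F)

def ga(n_pop=20, n_iter=100, lo=0.0, hi=5.0, pc=0.8, pm=0.02):
    pop = [[random.uniform(lo, hi)] for _ in range(n_pop)]
    best = min(pop, key=F)
    for _ in range(n_iter):
        children = []
        while len(children) < n_pop - 1:       # leave one slot for the elite
            p1, p2 = tournament(pop), tournament(pop)
            child = [(a + b) / 2 if random.random() < pc else a
                     for a, b in zip(p1, p2)]  # simple arithmetic crossover
            if random.random() < pm:           # rare small random shift
                child = [min(hi, max(lo, g + random.gauss(0, 0.1)))
                         for g in child]
            children.append(child)
        pop = children + [best]                # elitist strategy
        best = min(pop, key=F)
    return best

sol = ga()
print(round(sol[0], 4))
```

With elitism, the best objective value never worsens between generations, which mirrors the monotone convergence curves discussed in Section 5.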

3.2. Chaos Theory

Chaos theory is concerned with the behavior of systems that obey deterministic laws yet appear random and unpredictable. Many areas of the optimization sciences have benefited from the mathematics of chaos theory. Chaos optimization algorithms have received a lot of attention as a novel approach to global optimization because the inherent characteristics of chaotic maps can help optimization algorithms escape from local solutions and speed convergence toward the global solution. To increase solution quality, many researchers have advocated integrating chaos theory with optimization algorithms [28–31]. Chaotic maps are maps (evolution functions) that display chaotic behavior and typically take the form of iterated functions. Many well-known chaotic maps may be found in the literature, including the sinusoidal, Chebyshev, singer, tent, sine, circle, Gauss, and logistic maps.
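The logistic map used later in CEGA is a one-line iterated function; the short Python sketch below (illustrative, not the paper's code) shows its two properties that matter here, boundedness in (0, 1) and sensitivity to initial conditions:

```python
# Illustrative sketch: the chaotic logistic map x_{t+1} = mu * x_t * (1 - x_t).
# With mu = 4 and a seed in (0, 1), iterates stay in [0, 1] while behaving
# chaotically (nearby seeds produce diverging trajectories).
def logistic_sequence(x0, n, mu=4.0):
    xs = [x0]
    for _ in range(n):
        xs.append(mu * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_sequence(0.7, 20)
b = logistic_sequence(0.7000001, 20)  # tiny perturbation of the seed
print(abs(a[-1] - b[-1]))  # the two trajectories no longer agree
```

This combination of bounded, dense coverage and unpredictability is what lets chaotic numbers act as structured noise for escaping local minima.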

3.3. Chaotic Enhanced Genetic Algorithm

In this subsection, the proposed chaotic enhanced genetic algorithm (CEGA), an integration of GA and chaos theory, is described. CEGA is configured to use chaotic noise to overcome limitations that can appear during optimization by GA, such as lack of diversity among solutions, imbalance between exploitation and exploration, and slow convergence to the best solution. CEGA operates in two phases: in the first, the genetic algorithm is run as a global optimization method to solve the NSEs; if the best solution is repeated during the GA optimization process, chaotic noise is employed as the second phase. The chaotic noise aims to maintain sufficient diversity of solutions while avoiding wasted time during the optimization process by overcoming the repetition of the best solution and reducing the number of iterations. The suggested algorithm proceeds as follows:

Step 1: initialization
(i) Individuals of the population (in n dimensions) are created at random positions in the search domain, and the iteration counter is set to one.
(ii) The fitness function is evaluated for each individual.
(iii) The best individual is assigned to the best position.

Step 2: evolution by GA
(i) Ranking [32]: individuals are ranked based on their fitness values, and a vector containing the corresponding fitness values is returned, allowing the selection process to compute survival probabilities.
(ii) Tournament selection (TS) [33]: several individuals are chosen at random from the population, and the best of them is chosen as a parent. This process is repeated as many times as necessary to choose the parents.
(iii) BLX-α crossover operator [34]: two parent candidate solutions with n design variables each are chosen with crossover probability Pc. The BLX-α operator creates the k-th component of a new offspring W as a uniform random scalar in the range [min(y1_k, y2_k) − αI_k, max(y1_k, y2_k) + αI_k], where I_k = |y1_k − y2_k| is the distance between the parent candidates and α is a user-defined parameter. The efficacy of BLX-α comes from its capacity to search in a space that is not always bounded by the parents. Furthermore, because the search space depends on the distance between the parents, the GA is self-adaptive. The parameter α must be chosen carefully since it quantitatively specifies the search domain; the value used in this investigation is based on the findings of Herrera et al. [35].
(iv) Real-valued mutation [36]: randomly generated values are added to the variables of each new offspring with a low probability Pm, as x_k' = x_k + s · r · a, where s ∈ {−1, +1} is chosen uniformly at random, r is the mutation range (standard: 10%), a = 2^{−u·m} with u ∈ [0, 1] uniform at random, and m is the mutation precision.
(v) Elitist strategy: the best individuals of the current generation are added directly to the new generation.
(vi) Evaluation: each individual is evaluated to find the new best position.
(vii) Updating: if the new best position is worse than or equal to the previous best position, go to Step 3. Otherwise, continue by updating the best position to the best individual position discovered so far.
(viii) Termination criteria: the algorithm terminates when the maximum number of iterations is reached or when the population converges. Convergence happens when the positions of all individuals in the population are identical. Finally, output the optimal solution as the best individual position.

Step 3: chaotic noise
(i) Chaotic noise: chaotic noise is applied if the best solution is repeated during the GA optimization process. It aims to maintain sufficient diversity of solutions while avoiding wasted time during the optimization process by overcoming the repetition of the best solution and reducing the number of iterations. In this step, the population at generation t is perturbed by chaotic noise driven by a chaotic random number c_t generated by the logistic map c_{t+1} = μ c_t (1 − c_t), typically with μ = 4. The logistic map, according to the results in [37], improves the quality of the solutions and provides the best performance.
(ii) Evaluation: each individual in the perturbed population is evaluated to find the new best position.
(iii) Updating: if the new best position is better than the previous best position, update the best position to the best individual's position found so far and go to Step 2. Otherwise, repeat Step 3.
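Two of the ingredients above can be sketched in Python for concreteness. The BLX-α crossover below follows the standard operator from the literature; the chaotic perturbation is a hypothetical realization, since the exact noise formula is not reproduced in this text. Both functions, their names, and the bounds-blending update are illustrative assumptions, not the authors' code:

```python
import random

# Illustrative sketches of two CEGA ingredients. blx_alpha follows the
# standard BLX-alpha crossover; chaotic_perturb is a HYPOTHETICAL noise
# update (the paper's exact formula is not shown here): it moves each gene
# partway toward a logistic-map point inside that gene's bounds.
random.seed(1)

def blx_alpha(p1, p2, alpha=0.5):
    # Offspring gene w_k is uniform in
    # [min(y1k, y2k) - alpha*I_k, max(y1k, y2k) + alpha*I_k], I_k = |y1k - y2k|.
    child = []
    for a, b in zip(p1, p2):
        lo, hi, i = min(a, b), max(a, b), abs(a - b)
        child.append(random.uniform(lo - alpha * i, hi + alpha * i))
    return child

def chaotic_perturb(pop, bounds, c):
    # Perturb every individual using a fresh logistic-map value per gene.
    new_pop = []
    for ind in pop:
        genes = []
        for g, (lo, hi) in zip(ind, bounds):
            c = 4.0 * c * (1.0 - c)               # logistic map update
            target = lo + c * (hi - lo)           # chaotic point in bounds
            genes.append(g + 0.5 * c * (target - g))  # hypothetical blend
        new_pop.append(genes)
    return new_pop, c

child = blx_alpha([1.0, 2.0], [3.0, 1.0], alpha=0.5)
print(child)  # each gene lies inside its alpha-extended parent interval
```

Note how the BLX-α interval widens with the distance between parents, which is the self-adaptive property the text describes.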

Figure 2 depicts the suggested algorithm’s pseudocode.

4. Numerical Results

Four systems of nonlinear equations are solved to assess the suggested method. These four test systems are common challenges that have been explored by other researchers and are known as benchmarks. The proposed algorithm is coded in MATLAB R2012b and run on a PC with an Intel(R) Core(TM) i7-6600U CPU @ 2.60 GHz, 16 GB RAM, and the Windows 10 operating system. The results are compared to those obtained by the original GA to demonstrate the benefits of the suggested modifications and their impact on achieving an optimal solution.

For the computational studies, the population size is 20, the generation gap (GGAP) is 0.9, the crossover probability Pc is 0.8, and the mutation probability Pm is 0.02. The termination criterion for CEGA is defined as |F_t − F*| ≤ ε, where F* is the optimum value of the objective function (0 in all nonlinear system cases), F_t is the calculated objective function value at iteration t, and ε is a small tolerance. It should be noted that the maximum number of iterations for both algorithms (original GA and CEGA) is the same, and all results are recorded from the first run. Furthermore, when one of them meets the termination requirement, the computations stop and the number of used iterations is reported. Finally, to statistically evaluate CEGA against the other algorithms, the Friedman test and the Wilcoxon signed-rank test are applied.
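The stopping rule can be written as a two-line check. The tolerance eps below is a hypothetical choice (the paper's exact value is not shown); the two sample values are best-F results quoted in the discussion in Section 5:

```python
# Illustrative sketch of the termination criterion: stop when the objective
# value at iteration t is within a small tolerance eps of the known optimum
# F* (F* = 0 for all benchmark systems). eps = 1e-8 is a hypothetical choice.
def terminated(f_t, f_star=0.0, eps=1e-8):
    return abs(f_t - f_star) <= eps

print(terminated(3.0855e-14))  # True: well below the tolerance
print(terminated(1.8034e-05))  # False: not yet converged
```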

4.1. Benchmark 1: Experiment Test

This benchmark problem can be described as [7]

This benchmark has been solved by many algorithms, such as Newton's method, the secant method, the evolutionary algorithm approach (EAA) [7], genetic algorithms (GAs) [21], and the hybridization of the grasshopper optimization algorithm with the genetic algorithm (hybrid-GOA-GA) [38]. Table 1 compares the best function value F obtained by these algorithms, the original GA, and the proposed CEGA. The convergence curves of the best F achieved so far using the original GA and CEGA are shown in Figure 3.

4.2. Benchmark 2: Arithmetic Application

This benchmark problem can be described as [7]

This benchmark has been solved by many algorithms, such as the EAA [7], GAs [21], and hybrid-GOA-GA [38]. Table 2 compares the best function value F obtained by these algorithms, the original GA, and the proposed CEGA, while the convergence curves of the best F achieved so far using the original GA and CEGA are shown in Figure 4.

4.3. Benchmark 3: Combustion Application

This benchmark problem can be described as [7]

This benchmark has been solved by many algorithms, such as the EAA [7], GAs [21], and hybrid-GOA-GA [38]. Table 3 compares the best function value F obtained by these algorithms, the original GA, and the proposed CEGA, while Figure 5 shows the convergence curves of the best F obtained so far by the original GA and CEGA.

4.4. Benchmark 4: Neurophysiology Application

This benchmark problem can be described as [7]

This benchmark has been solved by many algorithms, such as the EAA [7], GAs [21], and hybrid-GOA-GA [38]. Table 4 compares the best function value F obtained by these algorithms, the original GA, and the proposed CEGA, while Figure 6 shows the convergence curves of the best F obtained so far by the original GA and CEGA.

5. Discussions

Tables 1–4 show the results of all algorithms for the four benchmark problems in terms of the best-obtained solution and the number of iterations. For the 1st benchmark problem (experiment test), hybrid-GOA-GA [38] surpassed the other algorithms by reaching the lowest value of F(z), 1.7904E − 06, but in 300 iterations, while the proposed CEGA obtained a solution very close to that of hybrid-GOA-GA, 2.7227E − 06, in only 11 iterations. For the 2nd benchmark problem (arithmetic application), the proposed CEGA outperformed the rest of the algorithms by obtaining the lowest value of F(z), 3.0855E − 14, in 272 iterations, while GAs [21] got an acceptable solution, 1.2674E − 09, in the fewest iterations, 10. For the 3rd benchmark problem (combustion application), hybrid-GOA-GA [38] outperformed the rest of the algorithms by obtaining the lowest value of F(z), 1.2499E − 09, in 300 iterations; CEGA got an acceptable solution, 4.5300E − 09, in an acceptable number of iterations, 183, while GAs [21] obtained a reasonable solution, 1.8034E − 05, in the fewest iterations, 70. Finally, for the 4th benchmark problem (neurophysiology application), the proposed CEGA outperformed the rest of the algorithms by obtaining the lowest value of F(z), 1.0693E − 11, in 87 iterations, while GAs [21] got an acceptable solution, 5.2127E − 11, in the fewest iterations, 20.

On the other hand, the original GA's convergence curves contain several flat portions, reflecting periods of no improvement in the objective function owing to entrapment in a local minimum, as seen in Figures 3–6. For CEGA, it is clear that the chaotic noise succeeded in continually improving the objective function rather than repeating solutions or spending iterations that did not enhance it. The following percentage relationship (IMP%) is used to indicate the improvement of the proposed CEGA over the original GA.

As indicated in Table 5, CEGA improved all results significantly, by 75.99% on average. So, we can say that chaotic noise guides GA out of local minima and enhances the search results, reducing the number of iterations, and consequently the time, by preventing iterations that bring no improvement toward the best solution.
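For illustration, a relative-improvement percentage of this kind can be computed as below. The formula is a standard relative-improvement measure and the input numbers are hypothetical placeholders, not values from Table 5:

```python
# Illustrative sketch: relative improvement of CEGA over the original GA,
# as a percentage of the GA's result. A standard measure, shown here with
# HYPOTHETICAL numbers (e.g., iteration counts of 200 vs. 50).
def imp_percent(ga_value, cega_value):
    return (ga_value - cega_value) / ga_value * 100.0

print(imp_percent(200, 50))  # 75.0
```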

The EAA [7], GAs [21], hybrid-GOA-GA [38], the original GA, and the proposed CEGA all solved the 4 benchmark problems. Therefore, a statistical evaluation of CEGA against these algorithms is performed, according to the best function value F(z), by applying the Friedman test [39] and the Wilcoxon signed-rank test [40]. The Friedman test compares the algorithms' average ranks and produces the Friedman statistic, where a smaller rank indicates better performance, while the Wilcoxon signed-rank test is used to show the significant differences between CEGA and the other algorithms.

The Friedman test results are shown in Table 6, which shows that the Asymp. Sig. (p value) is smaller than 0.05, indicating that there are significant differences among the outcomes obtained by the algorithms. Furthermore, with the lowest mean rank, the suggested CEGA algorithm outperforms the other algorithms.

Table 7, on the other hand, displays the results of the Wilcoxon signed-rank test, where R+ is the sum of positive ranks and R− is the sum of negative ranks. Table 7 demonstrates that CEGA achieves higher R+ than R− values in 3 cases and equal values in 1 case, indicating that it outperforms the other algorithms. From Table 7, we can infer that the proposed CEGA is a significant algorithm and better than the other algorithms.

6. Conclusions

In this paper, a chaotic enhanced genetic algorithm (CEGA) for solving nonlinear systems of equations (NSEs) is proposed, combining the genetic algorithm (GA) with chaos theory. CEGA was designed using a new mechanism, chaotic noise, to address the shortcomings of the original GA, such as a lack of solution variety, an imbalance between exploitation and exploration, repetition of the best solution throughout the optimization process, and sluggish convergence to the optimal solution. NSEs are first transformed into an unconstrained optimization problem, which is then solved using CEGA.

Four benchmark problems were considered: the experiment test, arithmetic application, combustion application, and neurophysiology application. The results obtained by CEGA and the original GA showed that CEGA converges faster and finds the optimal solution in fewer iterations than the original GA, with an average improvement percentage of about 75.99%. The convergence curves showed how the original GA wastes time trapped in local minima, while CEGA, by using the chaotic noise, escapes such entrapment and moves the optimization process to a new, better search space. In addition, comparing CEGA's results with other studies shows that CEGA is competitive and the best performer. Furthermore, statistical analysis by the Friedman and Wilcoxon tests showed the significance of the CEGA findings, where it got the lowest mean rank and achieved higher R+ than R− values.

In our future work, we will concentrate on three directions: (i) implementing more modifications to CEGA and assessing their impact on optimization results, (ii) applying CEGA to solve optimization problems in different fields, and (iii) using other metaheuristic algorithms to solve this kind of problem, such as particle swarm optimization [41], ant colony optimization [42], the artificial bee colony (ABC) algorithm [43], krill herd [44], monarch butterfly optimization (MBO) [45], the earthworm optimization algorithm (EWA) [46], elephant herding optimization (EHO) [47], the moth search (MS) algorithm [48], the slime mould algorithm (SMA) [49], hunger games search (HGS) [50], the Runge–Kutta optimizer (RUN) [51], the colony predation algorithm (CPA) [52], and Harris hawks optimization (HHO) [53].

Data Availability

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Authors’ Contributions

All authors contributed equally to this article.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through the project number (IF-PSAU-2021/01/18396).