Journal of Applied Mathematics, Volume 2013, Article ID 757391, 18 pages. http://dx.doi.org/10.1155/2013/757391
Research Article

## A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems

1College of Information Science and Engineering, Guangxi University for Nationalities, Nanning, Guangxi 530006, China
2Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis, Nanning, Guangxi 530006, China

Received 4 September 2013; Revised 2 November 2013; Accepted 9 November 2013

Copyright © 2013 Yongquan Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Traditional numerical methods for solving nonlinear equation systems are sensitive to the initial value and have difficulty reaching high accuracy. This paper presents a hybrid algorithm that combines the population diversity of the invasive weed optimization (IWO) algorithm with the heuristic global search of the differential evolution (DE) algorithm. In the iterative process, the global exploration ability of IWO provides an effective search area for DE; at the same time, the heuristic search ability of DE provides reliable guidance for IWO. Tests on several typical nonlinear equation systems and a circle packing problem show that the differential evolution invasive weed optimization (DEIWO) algorithm achieves higher accuracy and faster convergence, and is an efficient and feasible algorithm for solving nonlinear systems of equations.

#### 1. Introduction

Systems of nonlinear equations arise in many domains of practical importance such as engineering, mechanics, medicine, chemistry, and robotics. Solving such a system involves finding all the solutions (in some situations more than one solution exists) of the equations contained in the system. Algorithms for solving nonlinear equation systems converge more slowly and less reliably than those for linear equations, especially for nonconvex nonlinear systems. Traditional methods include the Newton-Raphson method, the quasi-Newton method, and the homotopy method. The Newton-Raphson method is the classical approach, but it is sensitive to the initial iteration value; moreover, it requires a large amount of computation, and the derivatives it needs are sometimes difficult to calculate. The quasi-Newton method was developed to avoid the difficulty of computing the Jacobian matrix and has become one of the most effective methods for solving nonlinear equations and optimization problems; however, its stability is poor, and its iterations sometimes perform badly. The basic idea of the homotopy method is to start from easily solved equations, gradually transition to the original equations, and so obtain the solution of the problem. In recent years, with the rapid development of computational intelligence, such techniques have also been used to solve nonlinear equations, including genetic algorithms [1–3], particle swarm optimization [4], differential evolution [5], the artificial fish-swarm algorithm [6], the artificial bee colony algorithm [7], the harmony search algorithm [8], and the probabilistic-driven search (PDS) algorithm [9]. These swarm intelligence algorithms have several advantages when adopted for searching for solutions of systems of nonlinear equations: they do not require a "good" initial point to perform the search, and the search space can be bounded by lower and upper values for each decision variable.
Additionally, no continuity or differentiability of the objective function is required. The main disadvantage of swarm intelligence algorithms in this sort of application is their relatively poor accuracy, caused by the coarse granularity of the search they perform. This can, of course, be improved either by running the algorithm for a larger number of iterations (at a higher computational cost) or by postprocessing the solution it produces with a traditional numerical optimization technique.

In 2006, a novel stochastic optimization model, the invasive weed optimization (IWO) algorithm [10], was proposed by Mehrabian and Lucas. It is inspired by a common phenomenon in agriculture, the colonization of invasive weeds, and mimics the natural behavior of weeds in colonizing and occupying suitable places for growth and reproduction. The algorithm is robust, adaptive, and stochastic, yet simple and effective, with strong global search ability. So far, it has been applied in many engineering fields.

In this paper, a new hybrid algorithm based on the population diversity of IWO and the heuristic search of differential evolution (DE) is presented to solve nonlinear systems of equations. The population diversity enhances the global search ability of the algorithm, while the heuristic method improves its local exploitation capacity; as a result, the hybrid algorithm achieves higher optimization accuracy and faster convergence. The performance of the proposed approach is evaluated on several well-known benchmark problems from kinematics, chemistry, combustion, and medicine. Numerical results reveal the efficiency of the proposed approach and its flexibility in solving large-scale systems of equations.

#### 2. Descriptions of Nonlinear Equation Systems

The general form of a nonlinear equation system can be described as follows:

$$f_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, m, \tag{1}$$

where $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ and $f_1, \ldots, f_m$ are nonlinear functions. For convenience of calculation, the following fitness error function is used:

$$E(x) = \sum_{i=1}^{m} f_i^2(x). \tag{2}$$

Because there are many equality constraints, the system of equations usually has no point satisfying $f_i(x) = 0$ exactly for every $i = 1, \ldots, m$. Thus, we seek an approximate solution of the simultaneous equations such that $|f_i(x)| \le \varepsilon$, $i = 1, \ldots, m$, where $\varepsilon$ is an arbitrarily small positive number. The smaller the fitness error $E(x)$ is, the higher the solution quality; the fitness error of a theoretical solution is 0. Nonlinear equation system problems can thus be formulated as the function optimization problem

$$\min_{x \in \mathbb{R}^n} E(x). \tag{3}$$

In this way, the optimal value of (3) corresponds to a solution of (1).
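As a concrete illustration, the sum-of-squares fitness error described above can be computed in a few lines. This is a minimal Python sketch (the paper's experiments ran in MATLAB); the two-equation system below is a hypothetical example, not one of the paper's benchmarks.

```python
import numpy as np

def fitness_error(f_list, x):
    """Sum-of-squares fitness error E(x) = sum_i f_i(x)^2.

    f_list : list of callables, each mapping a candidate x to f_i(x)
    x      : candidate solution (1-D array)
    """
    return sum(f(x) ** 2 for f in f_list)

# Hypothetical 2-equation system: x^2 + y^2 = 1 and x - y = 0
system = [
    lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
    lambda x: x[0] - x[1],
]
root = np.array([np.sqrt(0.5), np.sqrt(0.5)])
print(fitness_error(system, root))  # near 0 at an (almost) exact root
```

Any candidate with a smaller `fitness_error` is a better approximate solution, which is exactly the ordering the optimization in this section minimizes.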

#### 3. Algorithm of Invasive Weed Optimization

In the basic IWO, weeds represent the feasible solutions of a problem, and the population is the set of all weeds. A finite number of weeds are initially dispersed over the search area. Every weed produces new weeds depending on its fitness. The generated weeds are randomly distributed over the search space using normally distributed random numbers with mean zero. This process continues until the maximum number of weeds is reached. Only the weeds with better fitness survive and produce seeds; the others are eliminated. The process continues until the maximum number of iterations is reached, by which point the weed with the best fitness is hopefully closest to the optimal solution. The process is described in detail as follows.

Step 1 (initialize a population). A population of initial solutions is dispersed over the $D$-dimensional search space at random positions.

Step 2 (reproduction). The better a weed's fitness is (i.e., the smaller its fitness error), the more seeds it produces. Each weed produces seeds according to

$$S = \left\lfloor s_{\min} + \frac{f_{\max} - f}{f_{\max} - f_{\min}}\,(s_{\max} - s_{\min}) \right\rfloor,$$

where $f$ is the current weed's fitness error, $f_{\max}$ and $f_{\min}$, respectively, are the worst and the best fitness errors in the current population, and $s_{\max}$ and $s_{\min}$, respectively, are the maximum and the minimum number of seeds a weed may produce.
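The seed-count rule of Step 2 can be sketched in Python under the minimization convention used here (smaller fitness error is better); the function name and parameter names are illustrative, not the paper's notation.

```python
import math

def num_seeds(fit, fit_min, fit_max, s_min, s_max):
    """Linear map from a weed's fitness error to its seed count (Step 2).

    The best weed (fit == fit_min) gets s_max seeds,
    the worst (fit == fit_max) gets s_min.
    """
    if fit_max == fit_min:  # degenerate colony: all weeds tie, give each s_max
        return s_max
    ratio = (fit_max - fit) / (fit_max - fit_min)
    return int(math.floor(s_min + ratio * (s_max - s_min)))

print(num_seeds(fit=0.1, fit_min=0.1, fit_max=5.0, s_min=1, s_max=5))  # 5
print(num_seeds(fit=5.0, fit_min=0.1, fit_max=5.0, s_min=1, s_max=5))  # 1
```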

Step 3 (spatial dispersal). The generated seeds are randomly distributed over the $D$-dimensional search space using normally distributed random numbers with mean zero but varying variance. This ensures that seeds are distributed randomly yet remain near the parent plant. The standard deviation $\sigma$ of the random function is reduced from a previously defined initial value $\sigma_{\text{init}}$ to a final value $\sigma_{\text{final}}$ over the generations. In simulations, the following nonlinear schedule has shown satisfactory performance:

$$\sigma_{\text{iter}} = \left(\frac{\text{iter}_{\max} - \text{iter}}{\text{iter}_{\max}}\right)^{n} \left(\sigma_{\text{init}} - \sigma_{\text{final}}\right) + \sigma_{\text{final}},$$

where $\text{iter}_{\max}$ is the maximum number of iterations, $\sigma_{\text{iter}}$ is the standard deviation at the present time step, and $n$ is the nonlinear modulation index. Generally, $n$ is set to 3.
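The decreasing standard-deviation schedule of Step 3 can be sketched as follows (an illustrative Python helper; parameter names are ours, with the modulation index defaulting to the commonly suggested value n = 3).

```python
def sigma_at(iteration, iter_max, sigma_init, sigma_final, n=3):
    """Nonlinearly decreasing dispersal standard deviation (Step 3).

    sigma shrinks from sigma_init toward sigma_final as iteration -> iter_max;
    n is the nonlinear modulation index.
    """
    frac = (iter_max - iteration) / iter_max
    return sigma_final + (frac ** n) * (sigma_init - sigma_final)

print(sigma_at(0, 500, 3.0, 0.001))    # ~3.0 at the start: wide exploration
print(sigma_at(500, 500, 3.0, 0.001))  # 0.001 at the end: fine local search
```

Early iterations thus explore broadly, while late iterations concentrate seeds tightly around good parents.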

Step 4 (competitive exclusion). After some iterations, the number of weeds in a colony reaches its maximum ($P_{\max}$) through fast reproduction. At this point, each weed is still allowed to produce seeds, and the produced seeds spread over the search area. When all seeds have found their positions, they are ranked together with their parents as one colony of weeds. Weeds with lower fitness are then eliminated to bring the colony back to the maximum allowable population. In this way, weeds and seeds are ranked together, and only the fitter ones survive and replicate. This population control mechanism is also applied to their offspring until the end of a given run, realizing competitive exclusion.

#### 4. Differential Evolution Invasive Weed Optimization

Reviewing IWO: some initial weeds are dispersed over the search space randomly and then produce new individuals (seeds). The better plants are selected from the population consisting of weeds and seeds, and the process continues until the maximum number of plants is reached. In order to speed up the optimization process, differential evolution (DE) [11] is combined with IWO so that every weed in each iteration can move towards the best individual of the current iteration. In this way, the algorithm, designated DEIWO, not only ensures individual diversity through IWO but also improves optimization accuracy and speed through DE (see Pseudocode 1).

Pseudocode 1: The pseudocode for DEIWO algorithm.

##### 4.1. Differential Evolution

Differential evolution (DE), introduced by Storn and Price in 1997, is one of the most prominent new generation EAs for solving real-valued optimization problems. Using only a few control parameters, DE offers exclusive advantages, such as a simple and easy-to-understand concept, ease of implementation, powerful search capability, and robustness. The main procedure of DE includes mutation, crossover, and selection. These operators are based on natural evolution principles in order to keep the population diversity, as well as to avoid premature convergence. In this way, mutation and crossover operators are used to generate new vectors. Then, the selection operator determines which vectors will survive in the next generation. This procedure is repeated until a stopping condition is reached. The basic principle of DE is given as follows.

Suppose the objective function is defined over the problem space $\left[x_j^{\min}, x_j^{\max}\right]$, $j = 1, 2, \ldots, D$. For each parent vector $x_i^G$, a mutant vector is generated according to

$$v_i^{G+1} = x_{\text{best}}^{G} + F \left( x_{r_1}^{G} - x_{r_2}^{G} \right),$$

where the indices $r_1 \ne r_2 \ne i$ select mutually exclusive individuals and $x_{\text{best}}^{G}$ is the best individual of the current iteration. The scaling factor $F$ is a positive constant which controls the amplification of the difference vector, $F \in (0, 2]$. After mutation, the trial vector is generated from the parent and mutant vectors as follows:

$$u_{ij}^{G+1} = \begin{cases} v_{ij}^{G+1}, & \text{if } \operatorname{rand}_j(0,1) \le CR \text{ or } j = j_{\text{rand}}, \\ x_{ij}^{G}, & \text{otherwise}, \end{cases}$$

where $CR \in [0, 1]$ is a real-valued crossover rate constant and $j_{\text{rand}} \in \{1, \ldots, D\}$ is a random integer.

The individual of the next generation is generated as follows:

$$x_i^{G+1} = \begin{cases} u_i^{G+1}, & \text{if } f\left(u_i^{G+1}\right) \le f\left(x_i^{G}\right), \\ x_i^{G}, & \text{otherwise}, \end{cases}$$

where $f(\cdot)$ denotes the fitness function.
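Putting mutation, crossover, and selection together, one DE/best/1/bin generation can be sketched in Python. This is an illustrative implementation, not the paper's MATLAB code; the sphere function is a stand-in objective used only to exercise the step.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_best_1_step(pop, fitness, F=0.5, CR=0.9):
    """One generation of DE/best/1/bin (mutation, crossover, selection).

    pop     : (NP, D) array of parent vectors
    fitness : callable mapping a D-vector to a scalar to minimize
    """
    NP, D = pop.shape
    best = pop[np.argmin([fitness(x) for x in pop])].copy()
    new_pop = pop.copy()
    for i in range(NP):
        # mutation: v = x_best + F * (x_r1 - x_r2), with r1 != r2 != i
        r1, r2 = rng.choice([j for j in range(NP) if j != i], size=2, replace=False)
        v = best + F * (pop[r1] - pop[r2])
        # binomial crossover with one guaranteed mutant component j_rand
        j_rand = rng.integers(D)
        mask = rng.random(D) < CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # greedy selection: the trial replaces the parent only if not worse
        if fitness(u) <= fitness(pop[i]):
            new_pop[i] = u
    return new_pop

# Minimize the sphere function; the population contracts toward the origin
sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5.0, 5.0, size=(20, 3))
for _ in range(50):
    pop = de_best_1_step(pop, sphere)
print(min(sphere(x) for x in pop))  # a small value near 0
```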

##### 4.2. The Hybrid Strategy of DE and IWO

The hybrid strategy of DE and IWO can be summarized as follows. Every initial weed produces new seeds depending on its fitness, and the generated seeds are randomly distributed over the search space using normally distributed random numbers. This process continues until the maximum number of weeds is reached; only the weeds with better fitness survive and produce seeds, while the others are eliminated. Thus IWO ensures the diversity of individuals, and even worse individuals have the opportunity to reproduce. Meanwhile, the individuals of DE move toward the best individual, using the distance and direction information obtained from the current population, so DE improves the convergence speed and search precision. Combining DE and IWO therefore enriches the search behavior of the optimization process and yields high-quality solutions.
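The hybrid loop just described, IWO reproduction and dispersal followed by a DE/best/1 move and competitive exclusion, can be sketched end to end. This is a simplified Python illustration under assumed parameter values (not the paper's exact procedure or settings), applied to a small hypothetical two-equation system via its sum-of-squares fitness.

```python
import numpy as np

rng = np.random.default_rng(1)

def deiwo(fitness, D, bounds, pop_init=5, pop_max=20, s_min=1, s_max=5,
          sigma_init=3.0, sigma_final=0.001, n=3, F=0.5, CR=0.9, iters=200):
    """Minimal DEIWO sketch; all parameter names/values here are illustrative."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_init, D))
    for t in range(iters):
        fits = np.array([fitness(x) for x in pop])
        f_min, f_max = fits.min(), fits.max()
        # IWO Step 3: nonlinearly decaying dispersal standard deviation
        sigma = sigma_final + ((iters - t) / iters) ** n * (sigma_init - sigma_final)
        # IWO Step 2: fitter weeds (smaller error) scatter more seeds
        seeds = []
        for x, f in zip(pop, fits):
            ratio = 1.0 if f_max == f_min else (f_max - f) / (f_max - f_min)
            for _ in range(int(s_min + ratio * (s_max - s_min))):
                seeds.append(np.clip(x + rng.normal(0.0, sigma, D), lo, hi))
        pop = np.vstack([pop] + seeds)
        # DE move: every plant tries a step toward the current best
        fits = np.array([fitness(x) for x in pop])
        best = pop[fits.argmin()].copy()
        for i in range(len(pop)):
            r1, r2 = rng.choice(len(pop), size=2, replace=False)
            v = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
            j_rand = rng.integers(D)
            mask = rng.random(D) < CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            if fitness(u) <= fits[i]:
                pop[i] = u
        # IWO Step 4: competitive exclusion keeps only the best pop_max plants
        fits = np.array([fitness(x) for x in pop])
        pop = pop[np.argsort(fits)[:pop_max]]
    return pop[0], fitness(pop[0])

# Hypothetical system x^2 + y^2 = 1, x - y = 0 via sum-of-squares fitness
fit = lambda x: (x[0] ** 2 + x[1] ** 2 - 1.0) ** 2 + (x[0] - x[1]) ** 2
x_best, err = deiwo(fit, D=2, bounds=(-2.0, 2.0))
print(x_best, err)  # x_best near ±(0.707, 0.707), err near 0
```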

#### 5. Experiments and Results

This section reports several experiments and comparisons using the proposed approach. Some well-known applications are also considered in the subsequent section.

##### 5.1. Testing Platform

The experimental testing platform was as follows. Processor: Intel Core i3-370 CPU at 2.40 GHz; memory: 4 GB; operating system: Windows 7; software: MATLAB 7.6.

##### 5.2. Testing Nonlinear Equation Systems

In order to test the performance of DEIWO in solving nonlinear equation systems, 8 nonlinear equation systems from the literature are used, and the results are compared with those of [12]. Table 2 shows the experiment parameters; the search areas and numbers of iterations are shown in Table 3. Tables 4 to 11 give the results for the 8 equation systems.

Example 1. Consider the following nonlinear system:
The results obtained by applying Newton, Secant, Broyden, and Effati methods; evolutionary approach (EA); Probabilistic-driven search; IWO; and the proposed DEIWO method are presented in Table 4.

Example 2. Consider the following nonlinear system:
The results obtained by Effati, evolutionary approach (EA), probabilistic-driven search, IWO, and the proposed DEIWO method are given in Table 5.

Example 3 (interval arithmetic benchmark). We consider one benchmark problem proposed from interval arithmetic [13]. The benchmark consists of the following system of equations:
Parameters used by the DEIWO approach are listed in Tables 2 and 3. The results obtained by evolutionary approach (EA) and the proposed DEIWO method are given in Table 6.

Example 4 (neurophysiology application). We consider the example proposed in [14], which consists of the following equations: where the constants can be randomly chosen. In our experiments, we used the same values as the literature [12]. In [15], this problem is used to show the limitations of Newton's method, for which the running time increases exponentially with the size of the initial intervals. The parameter values used by DEIWO are given in Tables 2 and 3. Some of the solutions obtained by our approach, together with the corresponding objective function values, are presented in Table 7.

Example 5 (chemical equilibrium application). We consider the chemical equilibrium system as given by the following [16]: where The parameters used by the DEIWO approach are presented in Tables 2 and 3. Some of the solutions obtained by the DEIWO approach for the chemical equilibrium application are depicted in Table 8.

Example 6 (kinematic application). We consider the kinematic application kin2 as introduced in [17], which describes the inverse position problem for a six-revolute-joint problem in mechanics. The equations describe a denser constraint system and are given as follows: The coefficients, , , , are given in Table 1. The parameters used by the DEIWO approach are presented in Tables 2 and 3. Some of the solutions obtained by the DEIWO approach for the kinematic example kin2 are presented in Table 9.

Table 1: Coefficients for the kinematic application.
Table 2: Parameters values used by the DEIWO.
Table 3: Benchmarks used in the experiments.
Table 4: Comparison of results for Example 1.
Table 5: Comparison of results for Example 2.
Table 6: Comparison of results for interval arithmetic benchmark.
Table 7: Comparison of results for neurophysiology application benchmark.
Table 8: Comparison of results for chemical equilibrium application benchmark.
Table 9: Comparison of results for kinematic application benchmark.

Example 7 (combustion application). We consider the combustion problem for a temperature of 3000°C as proposed in [18]. The problem is described by the following sparse system of equations: The parameters used by the DEIWO approach are presented in Tables 2 and 3. Some of the solutions obtained by the DEIWO approach are presented in Table 10.

Table 10: Comparison of results for combustion application benchmark.
Table 11: Comparison of results for economics modeling application benchmark.

Example 8 (economics modeling application). The following modeling problem is considered as difficult and can be scaled up to arbitrary dimensions [18]. The problem is given by the following system of equations:
The constant can be randomly chosen. We considered the value 0 for the constants in our experiments and the case of 20 equations as in literature [12].
The parameters used by the DEIWO approach are presented in Tables 2 and 3. Some of the solutions obtained by the DEIWO approach are presented in Table 11.
Table 2 gives the parameter values used by DEIWO.

##### 5.3. Experimental Results and Discussion

For Example 1, Table 4 shows that the error of DEIWO is very small. With 500 iterations, the error of DEIWO is on the order of 1E−020: 16 orders of magnitude more accurate than the traditional numerical methods, 14 orders better than EA, and 12 orders better than IWO. Moreover, all the function values are computed to very high accuracy simultaneously, so the solutions obtained are approximately equal to the theoretical values. Figures 1 and 2 show that DEIWO converges faster than IWO and reaches higher precision. For Example 2, Table 5 shows that the error of DEIWO is on the order of 1E−024: 20 orders of magnitude better than the traditional methods, 18 orders better than EA, and 16 orders better than IWO; the error is almost 0 after 500 iterations and reaches 0 at 800 iterations. Figures 3 and 4 show that IWO easily falls into local optima, while DEIWO can effectively escape them and largely converges within 200 generations. All of this indicates that DEIWO is more effective than both the traditional numerical methods and IWO.

Figure 1: Results of 500 iterations.
Figure 2: Results of 800 iterations.
Figure 3: Results of 500 iterations.
Figure 4: Results of 800 iterations.

For the interval arithmetic benchmark, the errors of all solutions obtained by DEIWO are on the order of 1E−005. Moreover, the 8 given solutions are stable and exhibit no fluctuation. Thus, DEIWO can effectively solve higher-dimensional equation systems.

For the neurophysiology application benchmark, 12 solutions are given in Table 7. The solution errors of DEIWO are between 1E−006 and 1E−007, whereas EA achieves accuracies of only 1E−001 to 1E−003. This also indicates that DEIWO is more effective than EA.

Table 8 shows 14 solutions of DEIWO and EA for the chemical equilibrium application. The errors of DEIWO are all on the order of 1E−004, while those of EA are on the order of 1E−001.

For the kinematic application benchmark, the errors of EA are very large (e.g., the error of Solution 6 is 3.8), while the errors of DEIWO are between 1E−004 and 1E−005. Solutions for the combustion application are given in Table 10. It is obvious that DEIWO can effectively solve these nonlinear equation systems, and its error is much smaller than that of EA.

The number of equations in the economics modeling application is variable; in our experiment it is 20. As can be seen from Table 11, even with 20 equations, DEIWO achieves a fitness error of 1E−006, and even 1E−007, about five orders of magnitude more accurate than the solutions of EA. At the same time, the precision of the function values obtained by DEIWO is between 1E−004 and 1E−006; no function values are too large or too small.

The running time required for our algorithm to converge is presented in Table 12. It is measured in seconds on the platform described in Section 5.1 (Intel Core i3-370 CPU at 2.40 GHz, 4 GB memory, Windows 7, MATLAB 7.6). The number of function evaluations required by DEIWO for all the examples is shown in Table 13.

Table 12: Comparison of results for CPU time required by the DEIWO for all the examples.
Table 13: Comparison of the number of function evaluations required by the DEIWO for all the examples.
##### 5.4. A Practical Example for Geometric Constraint

Geometric constraint problems are generally divided into three categories: well-constrained, underconstrained, and overconstrained [19]. They are usually solved by divide and conquer: the problem is decomposed into a number of small subproblems, which are then solved with numerical methods. This paper uses the example in [20]: a circle packing problem. First, the geometric constraints are transformed into equations; then the equations are transformed into an optimization problem; finally, the optimization problem is solved. In this way, we need not require the number of variables to equal the number of equations, so the problem can be solved effectively.

The single circle packing problem model is shown in Figure 5: one vertex of the triangle lies at the origin of coordinates, one side lies along the positive axis, and the lengths of the three sides are given. The triangle has an inscribed circle with a given center and radius. The geometric constraint problem can be transformed and simplified to obtain a system of equations; solving these equations yields the coordinate values of the points. The remaining parameters are the same as in Table 2, and the unknown quantities, their ranges, and the input parameters are as specified above. For the first set of parameters, the fitness value obtained by DEIWO is one order of magnitude more precise than that of PSO. For the second set, the accuracy is the same. For the third and fourth sets of parameters, we can see that the constraints conflict. The results of solving the geometric constraint are shown in Table 14.

Table 14: Results of solving geometric constraint.
Figure 5: Single circle packing problem model.

The DEIWO algorithm is very efficient for solving equation systems: it is able to escape local optima and obtain globally optimal solutions.

#### 6. Conclusions

In this paper, based on the IWO and DE algorithms, we present a new hybrid DEIWO algorithm for solving nonlinear equation systems. The proposed DEIWO approach proves very efficient for such systems. We first compared our approach on some simple systems of only two equations that were recently used to analyze the performance of a newly proposed method; the results obtained by DEIWO are very promising. In the optimization process, useful information obtained by DE from the population is used to guide the evolution direction of the weeds. Experiments show that DEIWO is effective, and the proposed method can be extended to higher-dimensional systems, although with increased computational complexity. In a similar manner, inequality systems and systems of differential equations can also be solved, which is part of our future research work.

#### Acknowledgments

This work is supported by the National Science Foundation of China under Grant no. 61165015, the Key Project of Guangxi Science Foundation under Grant no. 2012GXNSFDA053028, the Key Project of Guangxi High School Science Foundation under Grant no. 20121ZD008, the Open Research Fund Program of the Key Lab of Intelligent Perception and Image Understanding of the Ministry of Education of China under Grant no. IPIU01201100, and the Innovation Project of Guangxi Graduate Education under Grant no. YCSZ2012063.

#### References

1. C. L. Karr, B. Weck, and L. M. Freeman, “Solutions to systems of nonlinear equations via a genetic algorithm,” Engineering Applications of Artificial Intelligence, vol. 11, no. 3, pp. 369–375, 1998.
2. G. P. Rangaiah, “Evaluation of genetic algorithms and simulated annealing for phase equilibrium and stability problems,” Fluid Phase Equilibria, vol. 187-188, pp. 83–109, 2001.
3. N. Chakraborti and P. K. Jha, “Pb-S-O vapor system re-evaluated using genetic algorithms,” Journal of Phase Equilibria and Diffusion, vol. 25, no. 5, pp. 421–426, 2004.
4. J. Zhang and X. Wang, “Particle swarm optimization for solving nonlinear equation and system,” Computer Engineering and Applications, vol. 7, pp. 56–58, 2006.
5. Z. Wang, “Solving nonlinear systems of equations based on differential evolution,” Computer Engineering and Applications, vol. 46, no. 4, pp. 54–55, 2010.
6. D. Wang and Y. Zhou, “Artificial fish-swarm algorithm for solving nonlinear equations,” Application Research of Computers, vol. 24, no. 6, pp. 242–244, 2007.
7. J. Zhang, “Artificial bee colony algorithm for solving nonlinear equation and system,” http://www.cnki.net/kcms/detail/11.2127.TP.20111114.0947.046.html.
8. C. Li, H. Lai, and J. Zhou, “Solution of nonlinear equations based on maximum entropy harmony search algorithm,” Computer Engineering, vol. 37, no. 20, pp. 189–190, 2011.
9. T. Nguyen Huu and H. Tran Van, “A new probabilistic algorithm for solving nonlinear equations systems,” Journal of Science, vol. 30, pp. 1–14, 2011.
10. A. R. Mehrabian and C. Lucas, “A novel numerical optimization algorithm inspired from weed colonization,” Ecological Informatics, vol. 1, no. 4, pp. 355–366, 2006.
11. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
12. C. Grosan and A. Abraham, “A new approach for solving nonlinear equations systems,” IEEE Transactions on Systems, Man, and Cybernetics A, vol. 38, no. 3, pp. 698–714, 2008.
13. R. E. Moore, Methods and Applications of Interval Analysis, vol. 2 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1979.
14. J. Verschelde, P. Verlinden, and R. Cools, “Homotopies exploiting newton polytopes for solving sparse polynomial systems,” SIAM Journal on Numerical Analysis, vol. 31, no. 3, pp. 915–930, 1994.
15. P. van Hentenryck, D. Mcallester, and D. Kapur, “Solving polynomial systems using a branch and prune approach,” SIAM Journal on Numerical Analysis, vol. 34, no. 2, pp. 797–827, 1997.
16. K. Meintjes and A. P. Morgan, “Chemical equilibrium system as numerical test problems,” ACM Transactions on Mathematical Software, vol. 16, no. 2, pp. 143–151, 1990.
17. A. Morgan and A. Sommese, “Computing all solutions to polynomial systems using homotopy continuation,” Applied Mathematics and Computation, vol. 24, no. 2, pp. 115–138, 1987.
18. A. P. Morgan, Solving Polynomial Systems Using Continuation for Scientific and Engineering Problems, Prentice-Hall, Englewood Cliffs, NJ, USA, 1987.
19. X. Gao and K. Jiang, “Survey on geometric constraint solving,” Journal of Computer Aided Design & Computer Graphics, vol. 16, no. 4, pp. 385–396, 2004.
20. S. Wang and Z. Wang, “Study of the application of PSO algorithms for nonlinear problems,” Journal of Huazhong University of Science and Technology, vol. 33, no. 12, pp. 4–7, 2005.