Abstract

Solving systems of nonlinear equations is one of the most difficult problems in numerical computation, and it arises in a diverse range of engineering applications. For most numerical methods, such as Newton's method, the convergence and performance characteristics can be highly sensitive to the initial guess of the solution. However, it is very difficult to select a reasonable initial guess for most systems of nonlinear equations, and the computational efficiency of such methods is often not high enough. Aiming at these problems, an improved particle swarm optimization algorithm (imPSO) is proposed, which overcomes the problem of selecting a reasonable initial guess and improves computational efficiency. The convergence and performance characteristics of this method are demonstrated on some standard systems. The results show that the improved PSO has reliable convergence probability, a high convergence rate, and high solution precision, and is a successful approach for solving systems of nonlinear equations.

1. Introduction

Solving systems of nonlinear equations is one of the most important problems in numerical computation, especially in a diverse range of engineering applications. Many applied problems can be reduced to solving systems of nonlinear equations, which is one of the most basic problems in mathematics, and the task has applications in many scientific fields [1-7]. Great efforts have therefore been made, and many constructive theories and algorithms have been proposed to solve systems of nonlinear equations [8-11]. However, some problems remain. For most traditional numerical methods, such as Newton's method, the convergence and performance characteristics can be highly sensitive to the initial guess of the solution, yet it is very difficult to select a reasonable initial guess for most nonlinear systems; the algorithm may fail, or the results may be improper, if the initial guess is unreasonable. Many combinations of traditional numerical methods and intelligent algorithms have been applied to solve systems of nonlinear equations [12, 13]; these overcome the problem of selecting a reasonable initial guess, but they are too complicated or too expensive when many systems of nonlinear equations must be solved. Many improved intelligent algorithms, such as the particle swarm algorithm and the genetic algorithm, have also been proposed to solve systems of nonlinear equations. Though they overcome the problem of selecting a reasonable initial guess, they lack sophisticated local search capabilities, which may lead to convergence stagnation.

Here an improved particle swarm optimization algorithm (imPSO) is put forward, which overcomes the dependence on a reasonable initial guess of the solution and improves computational efficiency.

A system of nonlinear equations can be expressed as
\[
f_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, n, \tag{1}
\]
where $x_1, x_2, \ldots, x_n$ are the variables.

Set the value of $F(X)$ to
\[
F(X) = \sum_{i=1}^{n} f_i^2(x_1, x_2, \ldots, x_n). \tag{2}
\]

Then the problem of solving the nonlinear equations is transformed into the problem of seeking a vector $X = (x_1, x_2, \ldots, x_n)$ that minimizes the value of $F(X)$, whose best value is zero; that is, it becomes an optimization problem. The imPSO is then employed to solve this optimization problem.
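As a concrete illustration, here is a minimal Python sketch of this transformation; the two residual functions and the helper name make_fitness are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the transformation (1)-(2): the residuals f_i are
# squared and summed, so F(X) >= 0 and F(X) = 0 exactly at a solution of
# the system. The two example equations below are hypothetical.
def make_fitness(residuals):
    """Build F(X) = sum_i f_i(X)^2 from a list of residual functions."""
    def fitness(x):
        return sum(f(x) ** 2 for f in residuals)
    return fitness

# Illustrative system: x0^2 + x1 - 3 = 0 and x0 + x1^2 - 5 = 0,
# which has a root at (1, 2).
F = make_fitness([
    lambda x: x[0] ** 2 + x[1] - 3,
    lambda x: x[0] + x[1] ** 2 - 5,
])
print(F((1.0, 2.0)))  # 0.0 at the root, positive everywhere else
```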

Most solutions of nonlinear equations in engineering lie within a limited range, according to which we can initialize the initial guesses. The local search capabilities are improved by adapting the parameters of the particle swarm algorithm. And unnecessary iterations are cancelled once the value of $F(X)$ meets the standardized fitness value (zero or a value close to zero), which improves computational efficiency.

2. Particle Swarm Algorithm

The particle swarm optimization algorithm (PSO), originating from the study of birds seeking food, is a kind of intelligent optimization algorithm proposed by Eberhart and Kennedy in 1995 [17]; later, in order to promote exploration in the early optimization stages, the inertia weight was introduced into PSO [18]. Owing to its simple structure, PSO has developed rapidly and has plenty of modified forms.

A modified PSO put forward by Shi can be expressed as
\[
v_i^{k+1} = \omega v_i^k + c_1 r_1 \left(pbest_i^k - x_i^k\right) + c_2 r_2 \left(gbest^k - x_i^k\right), \tag{3}
\]
\[
x_i^{k+1} = x_i^k + v_i^{k+1}. \tag{4}
\]

Equation (3) updates the velocity and (4) updates the position; $v$ is the velocity and $x$ is the position; $\omega$ represents the inertia weight; $i$ indexes the $i$th particle and $k$ the $k$th generation; $c_1$ and $c_2$ denote weighting factors called acceleration coefficients; $r_1$ and $r_2$ are random variables uniformly distributed within $[0, 1]$; $pbest_i^k$ denotes the position of the $i$th particle's personal best fitness and $gbest^k$ denotes the position of the global best fitness in the $k$th generation; the initial velocity and position of each particle are random variables generated from the standard normal distribution.
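The following minimal Python sketch illustrates the update rules (3) and (4) on a stand-in sphere objective; the swarm size, bounds, and coefficient values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

# A minimal sketch of Shi's modified PSO, equations (3)-(4): velocities mix
# an inertia term, a cognitive pull toward each particle's personal best,
# and a social pull toward the global best. The objective is a stand-in.
rng = np.random.default_rng(0)

def objective(x):
    return float(np.sum(x ** 2))   # sphere function; minimum 0 at the origin

n_particles, dim, w, c1, c2 = 30, 2, 0.7, 2.0, 2.0
x = rng.uniform(-10.0, 10.0, (n_particles, dim))   # initial positions
v = rng.standard_normal((n_particles, dim))        # initial velocities
pbest = x.copy()
pbest_val = np.array([objective(p) for p in x])
g = pbest[pbest_val.argmin()].copy()               # global best position

for k in range(200):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # eq. (3)
    x = x + v                                               # eq. (4)
    vals = np.array([objective(p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    g = pbest[pbest_val.argmin()].copy()

print(g, objective(g))   # should approach the origin
```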

3. Improved Particle Swarm Algorithm

3.1. The Selection of Inertia Weight

Most intelligent optimization algorithms require a large search scope in the earlier optimization stages to avoid falling into a local optimum, and a fast convergence speed in the later stages to reach the optimal value quickly. The inertia weight is one of the most important factors affecting the search scope and convergence speed of PSO: the search scope is large when the inertia weight is large, and the convergence speed is fast when the inertia weight is small. So it is very important to select the inertia weight $\omega$ well.

In order to meet these demands, the inertia weight can be expressed by the following function:
\[
\omega^k = \omega_0 + a\left(\frac{F(gbest^k)}{G}\right)^{b} + c\, e^{-d \sigma^k}, \tag{5}
\]
where the inertia weight varies with the global best fitness $F(gbest^k)$ and the spread $\sigma^k$ of the swarm. $\omega_0$ is a parameter, a number between 0.8 and 1. $a$, $b$, $c$, and $d$ are parameters selected according to the nonlinear equations: generally $a$ is a number between 1 and 1.5, $b$ is a number between 0.6 and 1.2, $c$ is a number between 0.05 and 0.2, and $d$ is a number between 1 and 2.5. $F(gbest^k)$ is the global best fitness of the $k$th generation, and $\sigma^k$ is the standard deviation of the positions of all the $k$th generation particles.

During the process of optimization, the inertia weight becomes smaller and smaller as $F(gbest^k)$ becomes smaller and smaller, according to the second part of (5). Since solving the nonlinear equations is to make (2) equal to zero (or a value close to zero), the lower bound of $F(gbest^k)$ is zero. But if $F(gbest^k)$ is too big, the inertia weight is too big to converge, so $F(gbest^k)$ in (5) has an upper bound $G$: if $F(gbest^k)$ is bigger than $G$, it is replaced by $G$ in (5).

If the $k$th generation particles are scattered, the algorithm has a large search scope. However, if the $k$th generation particles are too centralized, the algorithm has a small search scope and may be trapped in a local optimum. The standard deviation $\sigma^k$ reflects the distribution of the particles. According to the third part of (5), the inertia weight becomes bigger and bigger as $\sigma^k$ becomes smaller and smaller. $\sigma^k$ may finally become zero, so the upper bound of the inertia weight is $\omega_0 + a + c$ and its lower bound is $\omega_0$; thus the inertia weight can meet the demands.
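A small Python sketch of this adaptive weight is given below. The exact closed form of (5) is reconstructed here only up to its qualitative behavior (a constant part, a part that shrinks with the global best fitness, capped at $G$, and a part that grows as the swarm clusters), so the formula in the code is an assumption.

```python
import math

# Sketch of the adaptive inertia weight (5), assuming the reconstructed
# three-part form above. g_fit is the global best fitness F(gbest^k) and
# sigma is the standard deviation of the current particle positions.
def inertia_weight(g_fit, sigma, w0=0.9, a=1.2, b=0.8, c=0.1, d=1.5, G=100.0):
    part2 = a * (min(g_fit, G) / G) ** b   # shrinks as the best fitness -> 0
    part3 = c * math.exp(-d * sigma)       # grows as the particles cluster
    return w0 + part2 + part3              # bounded within [w0, w0 + a + c]

# sigma can be taken from an (n_particles, dim) position array x as
# sigma = float(np.mean(np.std(x, axis=0))).
print(inertia_weight(g_fit=50.0, sigma=2.0))
```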

3.2. Dynamic Conditions of Stopping Iterating

For most intelligent optimization algorithms, there must be enough iterations to guarantee obtaining the best value. However, with PSO the best value may be obtained within a few iterations, after which many further iterations are useless, leading to low computational efficiency. Besides, if PSO is trapped in a local optimum, the result is useless iterations and a wrong answer. These problems should be detected in time and resolved.

Aiming at these problems, a comprehensive plan is proposed.

Assign zero (or a value close to zero) to the standardized fitness value ($sfv$); the iterations are then cancelled once the fitness value is less than or equal to $sfv$, that is, once $F(X)$ essentially equals zero. At this point the solutions of the nonlinear equations have been obtained and the useless iterations are avoided. That is,
\[
F(gbest^k) \le sfv. \tag{6}
\]

For the problem of being trapped in a local optimum: if the difference between $F(gbest^k)$ and $F(gbest^{k-L})$ and the difference between $gbest^k$ and $gbest^{k-L}$ both equal zero, PSO is considered to be trapped in a local optimum. That is,
\[
F(gbest^k) - F(gbest^{k-L}) = 0, \qquad gbest^k - gbest^{k-L} = 0, \tag{7}
\]
where $F(gbest^k)$ is the $k$th generation best fitness value and $F(gbest^{k-L})$ is the best fitness value $L$ iterations before; $gbest^k$ is the $k$th generation best position, and $gbest^{k-L}$ is the best position $L$ iterations before. $L$ is a parameter generally between 50 and 250.

Equation (6) is the criterion by which the optimal value is found. Equations (7) are the criteria by which PSO is judged to be trapped in a local optimum; if (7) holds, PSO restarts from the beginning.
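A minimal Python sketch of the two dynamic stopping tests might look as follows; the helper names converged and trapped and the representation of the search history are assumptions.

```python
import numpy as np

# Sketch of the dynamic stopping tests (6)-(7). history is a list of
# (best_fitness, best_position) pairs, one per generation; sfv and L play
# the roles of the standardized fitness value and the look-back window.
def converged(g_fit, sfv=1e-15):
    return g_fit <= sfv                          # eq. (6): solution found

def trapped(history, L=100):
    if len(history) <= L:
        return False
    old_fit, old_pos = history[-1 - L]
    new_fit, new_pos = history[-1]
    # eq. (7): neither the best fitness nor the best position has changed
    # over the last L generations.
    return new_fit == old_fit and np.array_equal(new_pos, old_pos)
```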

3.3. The Standardized Number of Restarting Times

The standardized number of restarting times $N$ is calculated according to reliability theory.

The probability of successfully obtaining the optimal value in a single PSO run is $p$, which can be estimated through thousands of runs and is generally between 0.1 and 1. So the probability of obtaining the optimal value within the first $N$ runs (that is, before the $(N+1)$th restart) can be expressed as
\[
P_N = 1 - (1 - p)^N. \tag{8}
\]

If $(1 - p)^N$ is sufficiently small, $P_N$ will be close enough to 1 that we can believe the probability of obtaining the optimal value equals 1. The standardized number of restarting times $N$ can be calculated through
\[
N = \left\lceil \frac{\ln(1 - P_N)}{\ln(1 - p)} \right\rceil. \tag{9}
\]
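For instance, under the assumption that (9) takes the form above, the standardized number of restarts can be computed as in this short sketch; the probability values are illustrative.

```python
import math

# Sketch of (8)-(9): with single-run success probability p, the chance of
# at least one success in N independent restarts is P_N = 1 - (1 - p)**N,
# so N follows by solving for a target P_N. Numbers below are illustrative.
def restarts_needed(p, target=0.999):
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

print(restarts_needed(0.1))   # 66 restarts suffice for p = 0.1, P_N = 0.999
```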

3.4. The Steps of the Improved PSO

Step 1. Set $n$ equal to 1.

Step 2. Judge whether $n$ is less than $N$. If $n$ is less than $N$, the algorithm goes to Step 3. If $n$ is not less than $N$, the algorithm puts out "no results."

Step 3. Initialize the velocities $v$ and positions $x$ randomly, and calculate $pbest$ and $gbest$.

Step 4. Judge whether $F(gbest)$ is less than $sfv$. If $F(gbest)$ is less than $sfv$, the algorithm ends. If $F(gbest)$ is not less than $sfv$, the algorithm goes to Step 5.

Step 5. If $F(gbest)$ is bigger than $G$, it is replaced by $G$ in (5); if it is smaller than or equal to $G$, it is used as-is in (5). Update the inertia weight according to (5), and then calculate the new $v$, $x$, $pbest$, and $gbest$. Judge whether $F(gbest)$ is less than $sfv$. If it is, the algorithm ends; if not, the algorithm goes to Step 6.

Step 6. Judge whether $k$ is less than $M$ (the largest number of iterations in the $n$th run). If $k$ is not less than $M$, the algorithm goes to Step 8. If $k$ is less than $M$, the algorithm goes to Step 7.

Step 7. Judge whether the algorithm is trapped in a local optimum according to (7). If it is, the algorithm goes to Step 8. If it is not, the algorithm goes to Step 5.

Step 8. Set $n = n + 1$. The algorithm then goes to Step 2.

The imPSO is thus formed; its steps are shown in Figure 1.
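Putting the pieces together, a condensed Python sketch of Steps 1 through 8 might look as follows. It reuses the hypothetical helpers inertia_weight, converged, and trapped sketched earlier; the swarm size, acceleration coefficients, and default bounds are illustrative assumptions.

```python
import numpy as np

# Condensed sketch of Steps 1-8 of the imPSO: restart up to N times
# (Steps 1, 2, 8), run an inertia-weighted PSO (Steps 3-5), stop early on
# convergence (eq. (6)), and restart when the iteration budget M is spent
# or the swarm is trapped in a local optimum (eq. (7), Steps 6-7).
def impso(F, dim, bounds, n_particles=30, N=10, M=10000, sfv=1e-15, L=100,
          c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    for n in range(1, N + 1):                        # Steps 1, 2, 8
        x = rng.uniform(lo, hi, (n_particles, dim))  # Step 3: random start
        v = rng.standard_normal((n_particles, dim))
        pbest = x.copy()
        pbest_val = np.array([F(p) for p in x])
        i = pbest_val.argmin()
        g, g_val = pbest[i].copy(), pbest_val[i]
        if converged(g_val, sfv):                    # Step 4: eq. (6)
            return g, g_val, n
        history = [(g_val, g.copy())]
        for k in range(M):                           # Step 6: budget M
            sigma = float(np.mean(np.std(x, axis=0)))
            w = inertia_weight(g_val, sigma)         # Step 5: eq. (5)
            r1 = rng.random((n_particles, 1))
            r2 = rng.random((n_particles, 1))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            vals = np.array([F(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            i = pbest_val.argmin()
            g, g_val = pbest[i].copy(), pbest_val[i]
            if converged(g_val, sfv):                # eq. (6): done
                return g, g_val, n
            history.append((g_val, g.copy()))
            if trapped(history, L):                  # Step 7: eq. (7)
                break                                # Step 8: restart
    return None                                      # Step 2: "no results"
```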

4. Experiments and Results

In this section, benchmark functions are employed to investigate the performance of the imPSO.

Test 1 (Freudenstein-Roth function). Consider
\[
f(x_1, x_2) = \left(-13 + x_1 + ((5 - x_2)x_2 - 2)x_2\right)^2 + \left(-29 + x_1 + ((x_2 + 1)x_2 - 14)x_2\right)^2. \tag{10}
\]

The minimum value of the Freudenstein-Roth function is zero when $(x_1, x_2)$ is located at the point $(5, 4)$.
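Written as a fitness function for the sketch above, the Freudenstein-Roth objective of (10) is straightforward; the call to the hypothetical impso helper is illustrative.

```python
# The Freudenstein-Roth objective of (10); its global minimum is 0 at (5, 4).
def freudenstein_roth(x):
    t1 = -13 + x[0] + ((5 - x[1]) * x[1] - 2) * x[1]
    t2 = -29 + x[0] + ((x[1] + 1) * x[1] - 14) * x[1]
    return t1 ** 2 + t2 ** 2

# e.g. result = impso(freudenstein_roth, dim=2, bounds=(-10, 10))
```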

The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is $(5, 4)$, and the calculated minimum value of the Freudenstein-Roth function is zero. The number of iterations is 93, and $n$ is 1. The convergence history of test 1 is shown in Figure 2, and the variation of the inertia weight $\omega$ is shown in Figure 3.

The number of iterations of the standard PSO is 474; its convergence history is shown in Figure 4.

Test 2 (Rosenbrock function). Consider
\[
f(x_1, x_2) = 100\left(x_2 - x_1^2\right)^2 + \left(1 - x_1\right)^2. \tag{11}
\]

The Rosenbrock function is a nonconvex pathological function with a long, narrow, curved valley, so it may require a large number of iterations to obtain the best solution. The minimum value of the Rosenbrock function is zero when $(x_1, x_2)$ is located at the point $(1, 1)$.

The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is $(1, 1)$, and the calculated minimum value of the Rosenbrock function is zero. The number of iterations is 75, and $n$ is 1. The convergence history of test 2 is shown in Figure 5, and the variation of the inertia weight $\omega$ is shown in Figure 6.

The number of iterations of the standard PSO is 435; its convergence history is shown in Figure 7.

Test 3 (Schaffer function). Consider
\[
f(x_1, x_2) = 0.5 - \frac{\sin^2\sqrt{x_1^2 + x_2^2} - 0.5}{\left(1 + 0.001\left(x_1^2 + x_2^2\right)\right)^2}. \tag{12}
\]

The Schaffer function has the global maximum value 1 when $(x_1, x_2)$ is located at the point $(0, 0)$. However, the Schaffer function has many local maxima of 0.9903 at points around $(0, 0)$, so it is very hard to reach the global maximum. Here the Schaffer function is transformed into a minimization problem through the following equation:
\[
g(x_1, x_2) = 1 - f(x_1, x_2). \tag{13}
\]

So (13) has the minimum value zero at the point $(0, 0)$.
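The transformation can be sketched in Python as follows; the flip $g = 1 - f$ matches (13) as reconstructed above and is an assumption consistent with the stated minimum of zero.

```python
import math

# The Schaffer function (12), a maximization problem with maximum 1 at the
# origin, flipped into a minimization problem with minimum 0 there.
def schaffer(x):
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 - (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def schaffer_min(x):
    return 1.0 - schaffer(x)   # eq. (13): minimum 0 at (0, 0)

print(schaffer_min((0.0, 0.0)))  # 0.0
```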

The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is $(0, 0)$, and the calculated minimum value of the transformed Schaffer function is zero. The number of iterations is 28, and $n$ is 1. The convergence history of test 3 is shown in Figure 8, and the variation of the inertia weight $\omega$ is shown in Figure 9.

The number of iterations of the standard PSO is 150; its convergence history is shown in Figure 10.

Test 4 (Powell quartic function). Consider
\[
f(x) = \left(x_1 + 10x_2\right)^2 + 5\left(x_3 - x_4\right)^2 + \left(x_2 - 2x_3\right)^4 + 10\left(x_1 - x_4\right)^4. \tag{14}
\]

The Powell quartic function is singular at the minimum point, and it is very hard to obtain the minimum value [14]. The minimum value zero is obtained at the point $(0, 0, 0, 0)$.

The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is $(0, 0, 0, 0)$, and the calculated minimum value of the Powell quartic function is zero. The number of iterations is 1973, and $n$ is 1. The convergence history of test 4 is shown in Figure 11, and the variation of the inertia weight $\omega$ is shown in Figure 12.

The number of iterations of the standard PSO is 10000, at which point the minimum value has still not reached zero; its convergence history is shown in Figure 13.

The number of iterations of PSO in [14] is more than 4000.

Test 5 (Ackley function). Consider
\[
f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos\left(2\pi x_i\right)\right) + 20 + e. \tag{15}
\]

The dimension here is 4. The Ackley function is a multimodal function with one global optimum and many local optima, and its variables are independent of one another. It is very easy for an algorithm to be trapped in a local optimum and fail to jump out to find the global optimum. The minimum value is zero at the point $(0, 0, 0, 0)$.

The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is $(0, 0, 0, 0)$, and the calculated minimum value of the Ackley function is zero. The number of iterations is 109, and $n$ is 1. The convergence history of test 5 is shown in Figure 14, and the variation of the inertia weight $\omega$ is shown in Figure 15.

The number of iterations of the standard PSO is 9500, at which point the minimum value has still not reached zero; its convergence history is shown in Figure 16.

Test 6 (Griewank function). Consider
\[
f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1. \tag{16}
\]

The dimension here is 6. The Griewank function is a multimodal function with one global optimum and many local optima, and its variables are interdependent. It is very easy for an algorithm to be trapped in a local optimum and fail to jump out to find the global optimum. The minimum value is zero at the point $(0, 0, \ldots, 0)$.

The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is $(0, 0, 0, 0, 0, 0)$, and the calculated minimum value of the Griewank function is zero. The number of iterations is 49, and $n$ is 1. The convergence history of test 6 is shown in Figure 17, and the variation of the inertia weight $\omega$ is shown in Figure 18.

The number of iterations of the standard PSO is 9500, at which point the minimum value has still not reached zero; its convergence history is shown in Figure 19.

Table 1 shows the results of each test over 10000 runs; the value of $M$ is 100. The table lists the total iteration times of each test, from which the mean iteration times can be calculated. The values of $n$ indicate that, provided $N$ is big enough, the systems of nonlinear equations can be solved successfully.

5. Case Study

In order to demonstrate the efficiency of the improved particle swarm optimization algorithm for solving systems of nonlinear equations, some standard systems of nonlinear equations are employed.

Case 1 (see [19]). Consider the system of nonlinear equations given in [19]. The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is (−0.290514555507251, 1.084215081491351), and the number of iterations is 69, while the number of iterations in [14] is more than 100. $n$ is 1. The convergence history of Case 1 is shown in Figure 20, and the variation of the inertia weight $\omega$ is shown in Figure 21. The result is more accurate than the result in [19].

Case 2 (see [20]). Consider the system of nonlinear equations given in [20]. The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is (0.175598924177659, 0.824401075822341, 1.000000000000000), and the number of iterations is 147. $n$ is 1. The convergence history of Case 2 is shown in Figure 22, and the variation of the inertia weight $\omega$ is shown in Figure 23. The result is more accurate than the result in [14], which is (0.17559892417766, 0.82440107582234, 1).

Case 3 (see [20]). Consider the system of nonlinear equations given in [20]. The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is (0.500000000000000, −0.000000000141655, −0.523598775601840), and the number of iterations is 111, while the number of iterations in [14] is more than 150. $n$ is 1. The convergence history of Case 3 is shown in Figure 24, and the variation of the inertia weight $\omega$ is shown in Figure 25. Table 2 shows the results.

Case 4 (see [13]). Consider the system of nonlinear equations given in [13]. The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is (4.00000000000000, 3.00000000000000, 1.00000000000000), and the number of iterations is 152, while the number of iterations in [14, 16] is more than 200. $n$ is 3. The convergence history of Case 4 is shown in Figure 26, and the variation of the inertia weight $\omega$ is shown in Figure 27.

Case 5 (see [21]). Consider the system of nonlinear equations given in [21]. The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result is (12.256519599348696, 22.894938623626285, 2.789817919538154) or (−12.256519599348696, −22.894938623626285, −2.789817919538154), and the number of iterations is 148, while the number of iterations in [14] is more than 200. $n$ is 4. The convergence history of Case 5 is shown in Figure 28, and the variation of the inertia weight $\omega$ is shown in Figure 29.

Case 6 (neurophysiology application). Consider
\[
x_1^2 + x_3^2 = 1, \qquad x_2^2 + x_4^2 = 1,
\]
\[
x_5 x_3^3 + x_6 x_4^3 = 0, \qquad x_5 x_1^3 + x_6 x_2^3 = 0,
\]
\[
x_5 x_1 x_3^2 + x_6 x_2 x_4^2 = 0, \qquad x_5 x_1^2 x_3 + x_6 x_2^2 x_4 = 0.
\]
The parameters $\omega_0$, $a$, $b$, $c$, and $d$ are set within the ranges given in Section 3.1. The calculated result with imPSO is (−0.994926998030712, 0.539699980991932, −0.100599545672903, 0.841857428854381, 0, 0), and the number of iterations is 2161. $n$ is 1. The convergence history of Case 6 is shown in Figure 30, and the variation of the inertia weight $\omega$ is shown in Figure 31. The best results of different methods and the results of imPSO are shown in Table 3.
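Folding this system into the fitness (2) gives the following Python sketch; the reconstruction of the system itself (with all right-hand constants taken as zero) is an assumption based on the standard neurophysiology benchmark, and the reported solution drives every residual to (numerical) zero.

```python
# The neurophysiology system of Case 6, as reconstructed above, folded into
# the fitness F(X) of (2): each equation becomes a squared residual.
def neuro_fitness(x):
    x1, x2, x3, x4, x5, x6 = x
    residuals = [
        x1**2 + x3**2 - 1,
        x2**2 + x4**2 - 1,
        x5 * x3**3 + x6 * x4**3,
        x5 * x1**3 + x6 * x2**3,
        x5 * x1 * x3**2 + x6 * x2 * x4**2,
        x5 * x1**2 * x3 + x6 * x2**2 * x4,
    ]
    return sum(r ** 2 for r in residuals)

# e.g. result = impso(neuro_fitness, dim=6, bounds=(-1, 1))
```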

Table 4 shows the results of Case 1 over 10000 runs under different parameter settings. The value of $M$ is 100, and the remaining parameters are held fixed while those under study are varied.

According to the comparisons in Table 4, the total number of iterations becomes smaller when one parameter is bigger, and the probability of convergence becomes bigger as well; the total number of iterations becomes smaller when another parameter is smaller, and the probability of convergence becomes bigger; and the total number of iterations becomes smaller when a third parameter is smaller, but the probability of convergence becomes smaller. Thus Table 4 shows that a smaller value of this last parameter reduces the total iterations but also reduces the probability of convergence.

6. Conclusions

In this paper, a method for solving systems of nonlinear equations is proposed: the system is converted into an optimization problem, and an improved particle swarm algorithm is employed to solve it. This method overcomes the dependence on a reasonable initial guess of the solution. Some standard systems of nonlinear equations are then presented to demonstrate the convergence probability, convergence rate, and solution precision of the improved particle swarm algorithm in finding the best solution.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The project is supported by the Research Project of the Shanxi Scholarship Council of China (2014-065) and the Shanxi Province Natural Science Fund Project (2014011024-5).