Mathematical Problems in Engineering

Volume 2015, Article ID 727218, 13 pages

http://dx.doi.org/10.1155/2015/727218

## Research on Solving Systems of Nonlinear Equations Based on Improved PSO

^{1}College of Materials Science and Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China
^{2}College of Mechanical Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China

Received 5 May 2015; Accepted 15 July 2015

Academic Editor: Valery Sbitnev

Copyright © 2015 Yugui Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Solving systems of nonlinear equations is perhaps one of the most difficult problems in all of numerical computation, especially in a diverse range of engineering applications. For most numerical methods, such as Newton's method, the convergence and performance characteristics can be highly sensitive to the initial guess of the solution. However, it is very difficult to select a reasonable initial guess for most systems of nonlinear equations, and the computational efficiency of such methods is often not high enough. Aiming at these problems, an improved particle swarm optimization algorithm (imPSO) is proposed, which can overcome the difficulty of selecting a reasonable initial guess and improve computational efficiency. The convergence and performance characteristics of this method are demonstrated on some standard systems. The results show that the improved PSO for solving systems of nonlinear equations has reliable convergence probability, a high convergence rate, and high solution precision, and is a successful approach to solving systems of nonlinear equations.

#### 1. Introduction

Solving systems of nonlinear equations is one of the most important problems in numerical computation, especially in a diverse range of engineering applications. Many applied problems can be reduced to solving systems of nonlinear equations, which is one of the most basic problems in mathematics, and the task has applications in many scientific fields [1–7]. Great efforts have therefore been made, and many constructive theories and algorithms have been proposed to solve systems of nonlinear equations [8–11]. However, some problems remain. For most traditional numerical methods, such as Newton's method, the convergence and performance characteristics can be highly sensitive to the initial guess of the solution, yet it is very difficult to select a reasonable initial guess for most nonlinear systems. The algorithm may fail, or the results may be improper, if the initial guess is unreasonable. Many combinations of traditional numerical methods and intelligent algorithms have been applied to solve systems of nonlinear equations [12, 13]; these can overcome the problem of selecting a reasonable initial guess, but they are too complicated or expensive to calculate when there are many systems of nonlinear equations to solve. Many improved intelligent algorithms, such as the particle swarm algorithm and the genetic algorithm, have also been proposed to solve systems of nonlinear equations. Though they overcome the problem of selecting a reasonable initial guess, they lack sophisticated local search capabilities, which may lead to convergence stagnation.

Here an improved particle swarm optimization algorithm (imPSO) is put forward, which can overcome the dependence on reasonable initial guess of the solution and improve the computational efficiency.

A system of nonlinear equations can be expressed as

$$f_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, n, \tag{1}$$

where $x_1, x_2, \ldots, x_n$ are the variables.

Set the value of $F(X)$, with $X = (x_1, x_2, \ldots, x_n)$:

$$F(X) = \sum_{i=1}^{n} \left| f_i(x_1, x_2, \ldots, x_n) \right|. \tag{2}$$

Then the problem of solving the nonlinear equations is transformed into the problem of seeking a vector $X$ that minimizes the value of $F(X)$, whose best value is zero, which becomes an optimization problem. Then the imPSO is employed to solve the optimization problem.
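As a concrete illustration of this transformation, the sketch below builds a fitness $F(X)$ for a hypothetical two-equation system; both the system and the sum-of-absolute-residuals form of the fitness are assumptions made here for illustration only.

```python
import math

# Hypothetical system, used only for illustration:
#   f1(x1, x2) = x1^2 + x2^2 - 1 = 0
#   f2(x1, x2) = x1 - x2 = 0
def f1(x1, x2):
    return x1 ** 2 + x2 ** 2 - 1.0

def f2(x1, x2):
    return x1 - x2

# Fitness F(X): sum of absolute residuals, zero exactly at a root,
# so minimizing F(X) is equivalent to solving the system.
def fitness(x):
    x1, x2 = x
    return abs(f1(x1, x2)) + abs(f2(x1, x2))

# One exact root of this illustrative system is (1/sqrt(2), 1/sqrt(2)).
root = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
```

Any minimizer driving this fitness to (numerically) zero has therefore solved the system.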

Most solutions of the nonlinear equations in engineering have a limited span, according to which we can initialize the initial guess. The sophisticated local search capabilities are improved by changing the parameters in the particle swarm algorithm. And the unnecessary iterations will be cancelled if the value of $F(X)$ meets the preset standard (a value equal to, or close to, zero), which improves computational efficiency.

#### 2. Particle Swarm Algorithm

Particle swarm optimization (PSO), originating from the study of birds seeking food, is a kind of intelligent optimization algorithm that was proposed by Eberhart and Kennedy in 1995 [17]; later, in order to promote exploration in the early optimization stages, the inertia weight was introduced into PSO [18]. Owing to its simple structure, PSO has developed rapidly and has plenty of modified forms.

A modified PSO put forward by Shi can be expressed as

$$v_{id}^{k+1} = w v_{id}^{k} + c_1 r_1 \left( p_{id}^{k} - x_{id}^{k} \right) + c_2 r_2 \left( p_{gd}^{k} - x_{id}^{k} \right), \tag{3}$$

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}. \tag{4}$$

Equation (3) is used to update the velocity and (4) is used to update the new position; $v$ is the velocity and $x$ is the position; $w$ represents the inertia weight; $i$ indexes the $i$th particle, $d$ the $d$th dimension, and $k$ the $k$th generation; $c_1$ and $c_2$ denote weighting factors called acceleration coefficients; $r_1$ and $r_2$ are random variables uniformly distributed within $[0, 1]$; $p_{id}^{k}$ denotes the position of the $i$th particle's personal best fitness and $p_{gd}^{k}$ denotes the position of the global best fitness; the initial velocity and position of each particle are random variables generated from the standard normal distribution.
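As a sketch, one generation of the updates (3) and (4) might be implemented as follows; the function name and its parameter defaults are illustrative choices, not the authors' code.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One generation of the PSO updates (3) and (4).

    positions, velocities, pbest: lists of coordinate lists (one per particle);
    gbest: coordinate list of the global best position.
    """
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()  # uniform on [0, 1)
            # (3): inertia term + cognitive term + social term
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            # (4): move the particle along its new velocity
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Note that a particle sitting exactly at both its personal best and the global best, with zero velocity, does not move: all three terms of (3) vanish.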

#### 3. Improved Particle Swarm Algorithm

##### 3.1. The Selection of Inertia Weight

Most intelligent optimization algorithms require a large search scope in the earlier optimization stages, to avoid falling into a local optimal value, and a fast convergence speed in the later optimization stages, to reach the optimal value quickly. The inertia weight is one of the most important factors affecting the search scope and convergence speed of PSO: the search scope is large when the inertia weight is big, and the convergence speed is fast when the inertia weight is small. So it is very important to select the inertia weight $w$.

In order to meet these demands, the inertia weight can be expressed by the following function:

$$w = w_0 + a \min\left( F(G_k), c \right) + b e^{-d \sigma_k}, \tag{5}$$

where the inertia weight varies with the global best fitness $F(G_k)$ and the standard deviation $\sigma_k$. $w_0$ is a parameter, which is a number between 0.8 and 1. $a$, $b$, $c$, and $d$ are parameters selected according to the nonlinear equations. Generally $a$ is a number between 1 and 1.5; $b$ is a number between 0.6 and 1.2; $c$ is a number between 0.05 and 0.2; $d$ is a number between 1 and 2.5. $F(G_k)$ is the $k$th generation global best fitness, where $G_k$ is the global best position, and $\sigma_k$ is the standard deviation of the fitness values of all the $k$th generation particles.

During the process of optimization, the inertia weight becomes smaller and smaller as $F(G_k)$ becomes smaller and smaller, according to the second part of (5). Since solving the nonlinear equations is to make (2) equal to zero (or a value close to zero), the lower bound of $F(G_k)$ is zero. But if $F(G_k)$ is too big, the inertia weight is too big to converge, so $F(G_k)$ in (5) has an upper bound $c$. If $F(G_k)$ is bigger than $c$, $F(G_k)$ is replaced by $c$ in (5).

If the $k$th generation particles are scattered, the algorithm will have a large search scope. However, if the $k$th generation particles are too centralized, the algorithm will have a small search scope and may be lost into a local optimal value. The standard deviation $\sigma_k$ reflects the distribution of the particles. According to the third part of (5), the inertia weight becomes bigger and bigger as $\sigma_k$ becomes smaller and smaller. $\sigma_k$ may become zero at last, so the upper bound of the inertia weight is $w_0 + ac + b$, and the lower bound of the inertia weight is $w_0$; thus the inertia weight can meet the demands.
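A minimal sketch of this inertia-weight rule, assuming the additive form $w = w_0 + a\,\min(F(G_k), c) + b\,e^{-d\sigma_k}$ for (5); the parameter defaults are illustrative picks from the ranges stated in the text.

```python
import math

def inertia_weight(gbest_fitness, sigma, w0=0.9, a=1.2, b=0.9, c=0.1, d=2.0):
    """Inertia weight of (5): w = w0 + a*min(F(G_k), c) + b*exp(-d*sigma_k).

    gbest_fitness: global best fitness F(G_k) of the current generation;
    sigma: standard deviation sigma_k of the generation's fitness values.
    """
    clipped = min(gbest_fitness, c)  # F(G_k) is capped at its upper bound c
    return w0 + a * clipped + b * math.exp(-d * sigma)
```

With these defaults the weight stays within the bounds discussed above: it approaches $w_0 = 0.9$ when the fitness is near zero and the particles are scattered, and it reaches $w_0 + ac + b = 1.92$ when the fitness is capped and the particles have collapsed to a point.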

##### 3.2. Dynamic Conditions of Stopping Iterating

For most intelligent optimization algorithms, there must be enough iterations to guarantee getting the best value. However, the best value may be obtained after only a few iterations of PSO, and then the remaining iterations will be useless, which leads to low computational efficiency. Besides, if PSO is lost into a local optimal value, further iterations may be useless and the results wrong. These problems should be detected and solved in a timely manner.

Aiming at these problems, a comprehensive plan is proposed.

Assign zero (or a value close to zero) to the standardized fitness value ($\mathrm{sfv}$); the iterations will then be cancelled if the fitness value $F(G_k)$ is less than or equal to $\mathrm{sfv}$, that is, if $F(G_k)$ is regarded as zero. At this point the solutions of the nonlinear equations are obtained and the useless iterations are avoided. That is,

$$F(G_k) \le \mathrm{sfv}. \tag{6}$$

For the problem of being lost into a local optimal value, if the difference between $F(G_k)$ and $F(G_{k-m})$ and the difference between $G_k$ and $G_{k-m}$ both equal zero, PSO is considered as being lost into a local optimal value. That is,

$$F(G_k) - F(G_{k-m}) = 0, \qquad G_k - G_{k-m} = 0, \tag{7}$$

where $F(G_k)$ is the $k$th best fitness value and $F(G_{k-m})$ is the best fitness value $m$ iterations before; $G_k$ is the $k$th best position and $G_{k-m}$ is the best position $m$ iterations before. $m$ is a parameter, generally between 50 and 250.

Equation (6) is the standard by which the optimal values are found. Equations (7) are the standards by which PSO is judged to be lost into a local optimal value. If (7) hold, PSO will restart from the beginning.
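The two dynamic stopping rules can be sketched as small predicates; the function names and the list-of-tuples history representation are illustrative assumptions.

```python
def should_stop(gbest_fitness, sfv=1e-10):
    """Condition (6): stop iterating once F(G_k) <= sfv."""
    return gbest_fitness <= sfv

def is_stagnant(history, m=100):
    """Conditions (7): restart when neither the best fitness nor the best
    position has changed over the last m generations.

    history: list of (best_fitness, best_position_tuple) per generation,
    newest entry last.
    """
    if len(history) <= m:
        return False
    f_now, g_now = history[-1]
    f_then, g_then = history[-1 - m]
    return f_now == f_then and g_now == g_then
```

The first predicate ends the whole run successfully; the second only triggers a restart from random initial positions.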

##### 3.3. The Standardized Number of Restarting Times

The standardized number of restarting times is calculated according to the reliability theory.

The probability of succeeding in getting the optimal value in a single PSO run is $p$, which can be calculated through thousands of runs and is generally between 0.1 and 1. So the probability $P_s$ of succeeding in getting the optimal value before the $(s+1)$th restarting can be expressed as

$$P_s = 1 - (1 - p)^{s}. \tag{8}$$

If $(1 - p)^{s}$ is sufficiently small, $P_s$ will be large enough that we can believe the probability of getting the optimal value equals 1. The standardized number of restarting times $s$ can then be calculated through

$$s \ge \frac{\ln(1 - P_s)}{\ln(1 - p)}. \tag{9}$$
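Rearranging (8) into (9) gives the restart count directly; in the sketch below, the target probability of 0.999 is an illustrative assumption.

```python
import math

def restart_times(p, target=0.999):
    """Smallest s with 1 - (1 - p)**s >= target, i.e. (9):
    s >= ln(1 - target) / ln(1 - p).

    p: success probability of a single PSO run (0 < p < 1).
    """
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))
```

For example, a single-run success probability of $p = 0.5$ needs $s = 10$ restarts to guarantee success with probability at least 0.999, since $1 - 0.5^{10} \approx 0.99902$.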

##### 3.4. The Steps of the Improved PSO

*Step 1. *Set the restart counter $t$ equal to 1.

*Step 2. *Judge whether $t$ is less than the standardized number of restarting times $s$. If $t$ is less than $s$, the algorithm goes to Step 3. If $t$ is not less than $s$, the algorithm puts out "no results."

*Step 3. *Initialize the positions $x$ and velocities $v$ randomly, and calculate the fitness values and the global best fitness $F(G_k)$.

*Step 4. *Judge whether $F(G_k)$ is less than $\mathrm{sfv}$. If $F(G_k)$ is less than $\mathrm{sfv}$, the algorithm will end. If $F(G_k)$ is not less than $\mathrm{sfv}$, the algorithm goes to Step 5.

*Step 5. *If $F(G_k)$ is bigger than $c$, $F(G_k)$ is replaced by $c$ in (5); if $F(G_k)$ is smaller than or equal to $c$, $F(G_k)$ itself is used in (5). Update the inertia weight according to (5), and then calculate the new $v$, $x$, fitness values, and $F(G_k)$. Judge whether $F(G_k)$ is less than $\mathrm{sfv}$. If $F(G_k)$ is less than $\mathrm{sfv}$, the algorithm will end. If $F(G_k)$ is not less than $\mathrm{sfv}$, the algorithm goes to Step 6.

*Step 6. *Judge whether the iteration count $k$ is less than the biggest number of iterations in the $t$th run. If $k$ is not less than the biggest number of iterations, the algorithm goes to Step 8. If $k$ is less than the biggest number of iterations, the algorithm goes to Step 7.

*Step 7. *Judge whether the algorithm is lost into a local optimal value according to (7). If it is, the algorithm goes to Step 8. If it is not, the algorithm goes to Step 5.

*Step 8. *Set $t = t + 1$. And then the algorithm goes to Step 2.

And then the imPSO is formed, whose steps are shown in Figure 1.
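Putting the pieces together, Steps 1–8 can be sketched as a single routine. This is a minimal, self-contained illustration, assuming a sum-of-absolute-residuals fitness, the additive inertia-weight form $w = w_0 + a\,\min(F(G_k), c) + b\,e^{-d\sigma_k}$, and illustrative parameter defaults drawn from the ranges given in the text; it is not the authors' implementation.

```python
import math
import random

def im_pso(fitness, dim, lo, hi, swarm=30, max_iter=500, restarts=10,
           sfv=1e-10, m=100, w0=0.9, a=1.2, b=0.9, c=0.1, d=2.0,
           c1=2.0, c2=2.0, seed=None):
    """Sketch of Steps 1-8 of the improved PSO.

    Returns a position with fitness <= sfv, or None when all restarts fail.
    """
    rng = random.Random(seed)
    for t in range(restarts):                               # Steps 1, 2, 8
        x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
        v = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(swarm)]
        fits = [fitness(p) for p in x]                      # Step 3
        pbest, pfit = [p[:] for p in x], fits[:]
        g = min(range(swarm), key=lambda i: pfit[i])
        gbest, gfit = pbest[g][:], pfit[g]
        history = []
        for k in range(max_iter):                           # Steps 4-7
            if gfit <= sfv:
                return gbest                                # condition (6)
            mean = sum(fits) / swarm
            sigma = math.sqrt(sum((f - mean) ** 2 for f in fits) / swarm)
            w = w0 + a * min(gfit, c) + b * math.exp(-d * sigma)  # (5)
            for i in range(swarm):
                for j in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    v[i][j] = (w * v[i][j]
                               + c1 * r1 * (pbest[i][j] - x[i][j])
                               + c2 * r2 * (gbest[j] - x[i][j]))  # (3)
                    x[i][j] += v[i][j]                            # (4)
                fits[i] = fitness(x[i])
                if fits[i] < pfit[i]:
                    pfit[i], pbest[i] = fits[i], x[i][:]
                    if fits[i] < gfit:
                        gfit, gbest = fits[i], x[i][:]
            history.append((gfit, tuple(gbest)))
            if len(history) > m and history[-1] == history[-1 - m]:
                break                           # conditions (7): restart
    return None                                 # Step 2: "no results"
```

A call such as `im_pso(lambda p: abs(p[0]**2 + p[1]**2 - 1) + abs(p[0] - p[1]), dim=2, lo=-2.0, hi=2.0)` would apply the routine to the kind of two-equation system used as an example earlier; whether a given run reaches the strict threshold `sfv` depends on the parameter choices.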