Research Article  Open Access
Zhigang Lian, Songhua Wang, Yangquan Chen, "A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations", Mathematical Problems in Engineering, vol. 2020, Article ID 6284583, 9 pages, 2020. https://doi.org/10.1155/2020/6284583
A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations
Abstract
Many people use traditional methods such as the quasi-Newton method and the Gauss–Newton-based BFGS method to solve nonlinear equations. In this paper, we present an improved particle swarm optimization algorithm to solve nonlinear equations. The novel algorithm introduces the historical and local optimum information of particles to update a particle's velocity. Five sets of typical nonlinear equations are employed to test the quality and reliability of the novel algorithm's search compared with the PSO algorithm. Numerical results show that the proposed method is effective for the given test problems. The new algorithm can be used as a new tool to solve nonlinear equations, continuous function optimization problems, combinatorial optimization problems, etc. The global convergence of the given method is established.
1. Introduction
Many practical problems in engineering technology, information security, and other fields can be solved by nonlinear equations [1, 2]. Because of the complexity of nonlinear equations, it is difficult to solve them, especially high-dimensional nonlinear equations. Newton's method and its improved forms are extensively used at present, but the Newton–Raphson method has limitations [3]. Its convergence and performance characteristics can be highly sensitive to the initial guess of the solution supplied to the method, and it is difficult to select a good initial guess for most systems of nonlinear equations. Many researchers have put forward various solutions for different kinds of nonlinear equations. The Jacobian-free Newton–Krylov method is widely used in solving nonlinear equations arising in many applications; however, an effective preconditioner is required for each iteration, and determining one may be hard or expensive. Xu and Coleman [4] propose an efficient two-sided bicoloring method to determine the lower triangular half of the sparse Jacobian matrix via automatic differentiation. With this lower triangular matrix, an effective preconditioner is constructed to accelerate the convergence of the Newton–Krylov method. Paper [5] introduces ANTIGONE, algorithms for continuous/integer global optimization of nonlinear equations, a general mixed-integer nonlinear global optimization framework. Yuan and Zhang [6, 7] presented a three-term Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations. Yan [8] introduced a new unified two-parameter wave model, connecting integrable local and nonlocal vector nonlinear Schrödinger equations. Fan and Lu [9] present a modified trust region algorithm for nonlinear equations with the trust region radii converging to zero. The trust region algorithm preserves global convergence in the same way as traditional trust region algorithms.
Moreover, it converges nearly q-cubically under the local error bound condition, which is weaker than nonsingularity of the Jacobian at a solution. Paper [10] proposed and analyzed novel energy-preserving algorithms for solving the nonlinear Hamiltonian wave equation equipped with homogeneous Neumann boundary conditions. Paper [11] proposed a norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations without involving any information on the gradient or Jacobian matrix, by using some approximate substitutions. Yuan and Hu [12] proposed a new three-term conjugate gradient algorithm under the Yuan–Wei–Lu line search technique to solve large-scale unconstrained optimization problems and nonlinear equations. A numerical method is used to solve systems of equations in paper [13].
Evolutionary algorithms are also used to solve systems of equations, as in [14, 15]. This paper studies how to solve nonlinear equations using the particle swarm optimization (PSO) algorithm. The PSO algorithm was inspired by Shi and Eberhart [16], who used a social analogy of swarm behavior in populations of natural organisms, and it has been extensively developed. Zhang et al. [17] presented a hybrid clustering algorithm based on PSO with dynamic crossover. Chauhan et al. [18] developed novel inertia weight strategies for particle swarm optimization, proposing three new nonlinear strategies for selecting the inertia weight, which plays a significant role in a particle's foraging behavior. Pan et al. [19] proposed an improved consensus protocol on the basis of the velocity and position equations of the canonical PSO algorithm and transformed the dynamical PSO system into a new linear discrete-time system including random variables. The boundary of the consensus region is given to better select the parameters in the PSO algorithm. PSO has been successfully applied in many areas: function optimization, scheduling, fuzzy system control, and others. The core concept of PSO is changing the velocity of each particle and accelerating it toward its previous best position (pbest) and the global best position (gbest). Each particle modifies its current position and velocity according to the distance between its current position and pbest and the distance between its current position and gbest. Suppose the search space is N-dimensional and the swarm is made up of m particles. Each particle is represented as X_i = (x_{i1}, x_{i2}, …, x_{iN}). The velocity of the ith particle is denoted as V_i = (v_{i1}, v_{i2}, …, v_{iN}). The best previous position of the ith particle is represented as P_i (pbest). The global best position of all particles is denoted as P_g (gbest).
The velocity and the position are updated according to the following iterative equations:

V_i(k+1) = w·V_i(k) + c1·r1·(P_i − X_i(k)) + c2·r2·(P_g − X_i(k)), (1)
X_i(k+1) = X_i(k) + V_i(k+1), (2)

in which
(i) w is the inertia weight,
(ii) c1 and c2 are acceleration coefficients related to pbest and gbest,
(iii) r1 and r2 are random numbers uniformly distributed as U(0, 1).
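As an illustration, the canonical update (1)-(2) can be sketched in Python; the parameter values w, c1, and c2 below are common defaults from the PSO literature, not values taken from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO iteration, equations (1)-(2): update each
    particle's velocity toward pbest and gbest, then move the particle."""
    r1 = rng.random(x.shape)  # U(0, 1) weights for the cognitive (pbest) term
    r2 = rng.random(x.shape)  # U(0, 1) weights for the social (gbest) term
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

Here `x` and `v` are m×N arrays holding the positions and velocities of all m particles, so one call advances the whole swarm by one generation.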
José García-Nieto and Enrique Alba used a parallel PSO for gene selection in high-dimensional microarray datasets [20]. Lin et al. [21] proposed a novel method for training the parameters of an adaptive network-based fuzzy inference system (ANFIS) that emphasizes the use of gradient descent (GD) methods.
Haddou and Maheux [22] developed some new smoothing techniques to solve general nonlinear complementarity problems. Litvinov et al. [23] carried out a survey of universal algorithms for solving the matrix Bellman equations over semirings, especially tropical and idempotent semirings. Chen et al. [24] developed a fast Fourier–Galerkin method for solving the nonlinear integral equation reformulated from a class of nonlinear boundary value problems. Wang and Zhang [25] presented a new family of two-step iterative methods to solve nonlinear equations. This paper presents an improved PSO algorithm to optimize complex nonlinear equations.
2. A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations
2.1. The Iterative Formula of the VCLBPSO Algorithms
Although the PSO algorithm can converge very quickly toward the nearest optimal solution for many optimization problems, it experiences difficulties in reaching the global optimal solution [26]. The reason is that the diversity of the swarm decreases as the swarm approaches the nearest optimal solution. Significant efforts have been devoted to improving the efficiency of the algorithm and enhancing the diversity of the population. Gobbi et al. [27] studied a local approximation-based multiobjective optimization algorithm with applications.
In this article, we work to increase the diversity of population information. In the new algorithm, when the current run is completed, the best position found by the population is used in the position updates of the next run. Relative to the next round of optimization, the current optimal particle is a historical local best, distinct from the next round's global optimum. We call the improved PSO algorithm the velocity-combined local best particle swarm optimization algorithm (VCLBPSO). The historical local information increases the diversity of the reference information used by particles, which is expected to help overcome the problem of premature convergence.
A swarm of m particles moves through a D-dimensional search space. A particle is defined by its current position X_i and velocity V_i. Each particle remembers its best location information P_i, at which it has found the best value of the optimization function. Based on the different combinations of the historical local best information with the updated velocity, the VCLBPSO algorithm can be extended in four ways: (1) The iterative formula of the basic VCLBPSO, in which equation (3) takes exactly the same form as equation (1) and its coefficients have the same meaning as in equation (1). The local best solution searched in the previous run enters the velocity update through a constant coefficient that balances the previous run's searched local best against the optimal information searched in the current run. Here, each particle updates its position mainly through its previous position and the velocity, which is updated with the previous run's searched local best and the current run's searched optimal information. (2) The random combination way, called the velocity randomly combined local best particle swarm optimization algorithm (VRCLBPSO for short), whose iterative formula has the same meaning as equation (3) except that the combination coefficient in equation (7) is a random number. (3) The entirely velocity randomly combined local best way (EVRCLBPSO for short), whose coefficients have the same meaning as in equation (3) and whose random combination coefficient is an m×D matrix. If the combination coefficient is instead a 1×n-dimensional vector, the algorithm is called NVRCLBPSO for short, and its iterative formula is similar to that of EVRCLBPSO.
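Since the published formulas (3)-(12) do not survive in this excerpt, the following sketch only illustrates the idea behind the three combination ways; the function name `combined_velocity`, the coefficient `lam`, and the exact forms of the random weights are our assumptions, not the paper's definitions:

```python
import numpy as np

rng = np.random.default_rng(1)

def combined_velocity(v, x, pbest, gbest, local_best, mode="basic",
                      w=0.7, c1=1.5, c2=1.5, lam=0.5):
    """Hedged sketch of the velocity-combination variants of Section 2.1:
    a canonical PSO velocity plus a term pulling toward the previous run's
    local best, weighted constantly (VCLBPSO), by one random scalar
    (VRCLBPSO), or by an elementwise random matrix (EVRCLBPSO)."""
    m, n = x.shape
    r1, r2 = rng.random((m, n)), rng.random((m, n))
    base = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if mode == "basic":          # constant coefficient lam (assumed form)
        extra = lam * (local_best - x)
    elif mode == "random":       # one random scalar per update
        extra = rng.random() * (local_best - x)
    else:                        # an m x n random matrix, elementwise
        extra = rng.random((m, n)) * (local_best - x)
    return base + extra
```

The three branches differ only in how strongly, and how randomly, the historical local best perturbs each velocity component.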
2.2. The Method of Nonlinear Equations Converting to Function Optimization
To solve nonlinear equations conveniently, they are transformed into a function optimization problem as follows.
Suppose a system of nonlinear equations is made up of n functions involving n unknown variables, described as follows:

f_i(x_1, x_2, …, x_n) = 0,  i = 1, 2, …, n, (15)

where f_1, …, f_n are nonlinear functions and x = (x_1, x_2, …, x_n)^T is the unknown vector. In accordance with the algorithm principle, a fitness function is constructed as follows:

F(x) = Σ_{i=1}^{n} f_i(x)^2. (16)
The problem of solving the system of nonlinear equations (15) is thus transformed into the problem of minimizing the fitness function (16). When the fitness function attains the optimal value 0, every equation residual is also 0, and the nonlinear equations are solved exactly. When the fitness function is minimized by the improved PSO algorithm to a value near 0, an approximate solution of the equations is obtained.
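As a concrete illustration of the conversion (16), consider a small toy system of our own (it is not one of the paper's five test problems):

```python
import numpy as np

def fitness(x):
    """Sum of squared residuals, as in (16), for the illustrative system
    x0^2 + x1 - 2 = 0 and x0 - x1 = 0, whose root x0 = x1 = 1 has fitness 0."""
    f1 = x[0] ** 2 + x[1] - 2.0
    f2 = x[0] - x[1]
    return f1 ** 2 + f2 ** 2
```

Minimizing this fitness to 0 recovers a root of the system; any positive value measures how far x is from satisfying all equations simultaneously.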
2.3. The Pseudocode of VCLBPSO Algorithm for Nonlinear Equations
(i) FUNCTION VCLBPSOE()
(ii) FOR optimization time from 1 to t (t usually is equal to 12)
(iii) Randomly generate the original population and velocities
(iv) Compute the fitness of all particles using (16)
(v) Record the best fitness of all particles
(vi) WHILE the current number of iterations is less than the end iteration
(vii) Update the particle velocity using (3) of the VCLBPSO algorithm
(viii) Modify the value of the velocity using the update formula of the VCLBPSO algorithm (4), the VRCLBPSO algorithm (8), or the NVRCLBPSO algorithm (12)
(ix) Update the particle position using (5)
(x) Compare the optimal fitness of the two generations and record the better one as the individual historical best
(xi) Record the best particle in the population as this run's global best
(xii) END
(xiii) Using (6), update the historical local best; if this is the first run, the VCLBPSO algorithm is the same as the basic PSO (1)-(2)
(xiv) END
(xv) Output the position of the best particle and obtain the optimal fitness value
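The pseudocode above can be sketched as a runnable loop. Everything below is an assumption standing in for the paper's exact formulas (3)-(12): the function name `vclbpso`, the coefficient `c3` weighting the historical-local-best term, and the parameter defaults.

```python
import numpy as np

def vclbpso(fitness, dim, pop=30, runs=3, iters=200, w=0.7, c1=1.5, c2=1.5,
            c3=0.5, bounds=(-5.0, 5.0), seed=0):
    """Sketch of the VCLBPSO loop: several independent runs; from the second
    run on, the previous run's best position (the historical local best)
    contributes an extra velocity term weighted by c3 (assumed form)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    hist_best = None                      # historical local best from the previous run
    best_x, best_f = None, np.inf
    for run in range(runs):
        x = rng.uniform(lo, hi, (pop, dim))
        v = rng.uniform(-1.0, 1.0, (pop, dim))
        pbest = x.copy()
        pf = np.apply_along_axis(fitness, 1, x)
        g = np.argmin(pf)
        gbest, gf = pbest[g].copy(), pf[g]
        for _ in range(iters):
            r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            if hist_best is not None:     # the first run reduces to basic PSO
                v += c3 * rng.random((pop, dim)) * (hist_best - x)
            x = x + v
            f = np.apply_along_axis(fitness, 1, x)
            improved = f < pf             # individual historical best update
            pbest[improved], pf[improved] = x[improved], f[improved]
            g = np.argmin(pf)
            if pf[g] < gf:                # this run's global best
                gbest, gf = pbest[g].copy(), pf[g]
        hist_best = gbest.copy()          # becomes the next run's local best
        if gf < best_f:
            best_x, best_f = gbest.copy(), gf
    return best_x, best_f
```

On a smooth low-dimensional fitness, such as the sphere function, this loop reliably drives the fitness toward 0, mirroring how the pseudocode converts a root-finding task into repeated swarm minimization.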
3. Experimental Approach
The application of PSO to nonlinear equations has been investigated by some scholars, but it is limited. Brits et al. introduced the concept of a shrinking particle neighborhood in PSO to optimize systems of unconstrained equations [28]. Ouyang et al. proposed a hybrid particle swarm optimization algorithm to solve systems of nonlinear equations [29]. Mo and Liu introduced the conjugate direction method into particle swarm optimization to improve PSO for solving systems of nonlinear equations [30]. The methods they proposed are mainly used to optimize low-dimensional equations. In this paper, the improved PSO algorithm will mainly be used to optimize mid-dimensional and high-dimensional nonlinear equations, besides low-dimensional ones.
Five nonlinear equations are used to test the effect of the improved PSO algorithm.
Equations 1. The theoretical optimal solutions are as follows:
Equations 2. The theoretical optimal solution is as follows:
Equations 3. The theoretical optimal solution is as follows:
Equations 4. The discretized two-point boundary value problem is similar to the problem in [6], where A is the n×n tridiagonal matrix given below.
Equations 5. The unconstrained optimization problem is as follows, with the Engval function defined below.
4. Experimental Results and Discussion
To test the effect of the improved algorithm on nonlinear equations, the shared parameters of the different algorithms are set exactly the same. Each problem to be optimized is run 32 times by each method. In the following, we compare the effectiveness of the improved algorithms and the ordinary PSO on nonlinear equations of different dimensions.
The experimental results are listed in Table 1, together with the relevant parameters. In Table 1, Dim denotes the problem's dimension, and Ps and Genn indicate the population size and the terminating generation of the algorithm, respectively.

Table 1 confirms that the improved algorithms obtain better results across the different dimensions. Table 1 shows that introducing historical local best information can increase the diversity of a particle's flight reference information. This keeps the particle search from falling into a local optimal solution too early, and the accuracy of convergence is improved. The improved PSO obtains smaller minimum, average, and standard deviation values. In general, the VRCLBPSO and NVRCLBPSO are efficient for low-dimensional problems, and the NVRCLBPSO search is more stable. The EVRCLBPSO is the best for high-dimensional problems. Their convergence curves and the distributions of the best values over the 32 runs are shown in Appendix A, in which d is the dimension of the problems.
From Table 1 and Figure 1 of Appendix A, we find that the convergence rate of the improved method on each problem is clearly faster than that of the ordinary PSO. Also, for each problem with higher dimension, the EVRCLBPSO algorithm has better convergence speed and stronger search ability. When optimizing high-dimensional nonlinear equation problems, PSO easily falls into a local optimum, while the method presented in this paper successfully jumps out of the local optimal solution. Numerical results show that the proposed method is effective for the given nonlinear equation problems, and the global convergence of the given method is established.
5. Conclusions and Perspectives
So far, research on PSO applied to optimizing equations is very limited, especially for high-dimensional nonlinear equations. We propose an improved PSO algorithm for solving nonlinear equations in this paper. Experimental results compared with the basic PSO algorithm show that the improved method achieves better results in both convergence speed and accuracy; particularly for high-dimensional nonlinear equations, the effect is very obvious. The application scenarios of this improved algorithm are relatively widespread, and it can be applied to solve unconstrained optimization problems, constrained optimization problems, equation problems, engineering practice problems, etc.
Solving nonlinear equations is an important problem, and its optimization is very meaningful and valuable for many practical problems. It will be a valuable direction to apply the PSO algorithm and its variants to optimizing nonlinear equations, especially high-dimensional ones.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors thank Dr. Tomas Oppenheim and Dr. Zhuo Li, School of Engineering, University of California, for their comprehensive revision and some discussion of value. This work was supported by the Shanghai Natural Science Fund (14ZR1417300) and the Guangxi Natural Science Fund (2020GXNSFAA159069).
References
H.-J. Peng, Q. Gao, H.-W. Zhang, Z.-G. Wu, and W.-X. Zhong, “Parametric variational solution of linear-quadratic optimal control problems with control inequality constraints,” Applied Mathematics and Mechanics, vol. 35, no. 9, pp. 1079–1098, 2014.
N. H. Tuan, L. D. Thang, D. D. Trong, and V. A. Khoa, “Approximation of mild solutions of the linear and nonlinear elliptic equations,” Inverse Problems in Science and Engineering, vol. 23, no. 7, pp. 1237–1266, 2015.
N. Bianchini, S. Fanelli, and M. Gori, “Optimal algorithms for well-conditioned nonlinear systems of equations,” IEEE Transactions on Computers, vol. 50, no. 7, pp. 689–698, 2001.
W. Xu and T. F. Coleman, “Solving nonlinear equations with the Newton-Krylov method based on automatic differentiation,” Optimization Methods and Software, vol. 29, no. 1, pp. 88–101, 2014.
R. Misener and C. A. Floudas, “ANTIGONE: algorithms for coNTinuous/integer global optimization of nonlinear equations,” Journal of Global Optimization, vol. 59, no. 2-3, pp. 503–526, 2014.
G. Yuan and M. Zhang, “A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
G. Yuan, T. Li, and W. Hu, “A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems,” Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.
Z. Yan, “Integrable PT-symmetric local and nonlocal vector nonlinear Schrödinger equations: a unified two-parameter model,” Applied Mathematics Letters, vol. 47, pp. 61–68, 2015.
J. Fan and N. Lu, “On the modified trust region algorithm for nonlinear equations,” Optimization Methods and Software, vol. 30, no. 3, pp. 478–491, 2015.
C. Liu, X. Wu, and W. Shi, “New energy-preserving algorithms for nonlinear Hamiltonian wave equation equipped with Neumann boundary conditions,” Applied Mathematics and Computation, vol. 339, pp. 588–606, 2018.
J. K. Liu and Y. M. Feng, “A norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations,” Journal of Computational and Applied Mathematics, vol. 344, pp. 89–99, 2018.
G. Yuan and W. Hu, “A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations,” Journal of Inequalities and Applications, vol. 2018, no. 1, pp. 1–19, 2018.
K. Anada, T. Ishiwata, and T. Ushijima, “A numerical method of estimating blow-up rates for nonlinear evolution equations by using rescaling algorithm,” Japan Journal of Industrial and Applied Mathematics, vol. 35, no. 1, pp. 33–47, 2018.
A. Ghodousian and A. Babalhavaeji, “An efficient genetic algorithm for solving nonlinear optimization problems defined with fuzzy relational equations and max-Lukasiewicz composition,” Applied Soft Computing, vol. 69, pp. 475–492, 2018.
A. Ullah, S. A. Malik, and K. S. Alimgeer, “Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations,” PLoS One, vol. 13, no. 1, pp. 1–18, 2018.
Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), pp. 1945–1950, Washington, DC, USA, July 1999.
J. Zhang, Y. Wang, and J. Feng, “A hybrid clustering algorithm based on PSO with dynamic crossover,” Soft Computing, vol. 18, no. 5, pp. 961–979, 2014.
P. Chauhan, K. Deep, and M. Pant, “Novel inertia weight strategies for particle swarm optimization,” Memetic Computing, vol. 5, no. 3, pp. 229–251, 2013.
F. Pan, Q. Zhang, J. Liu, W. Li, and Q. Gao, “Consensus analysis for a class of stochastic PSO algorithm,” Applied Soft Computing, vol. 23, pp. 567–578, 2014.
J. García-Nieto and E. Alba, “Parallel multi-swarm optimizer for gene selection in DNA microarrays,” Applied Intelligence, vol. 37, no. 2, pp. 255–266, 2012.
X. Lin, J. Sun, V. Palade, W. Fang, X. Wu, and W. Xu, “Training ANFIS parameters with a quantum-behaved particle swarm optimization algorithm,” Lecture Notes in Computer Science, vol. 7331, pp. 148–155, 2012.
M. Haddou and P. Maheux, “Smoothing methods for nonlinear complementarity problems,” Journal of Optimization Theory and Applications, vol. 160, no. 3, pp. 711–729, 2014.
G. L. Litvinov, A. Y. Rodionov, S. N. Sergeev, and A. N. Sobolevski, “Universal algorithms for solving the matrix Bellman equations over semirings,” Soft Computing, vol. 17, no. 10, pp. 1767–1785, 2013.
X. Chen, R. Wang, and Y. Xu, “Fast Fourier-Galerkin methods for nonlinear boundary integral equations,” Journal of Scientific Computing, vol. 56, no. 3, pp. 494–514, 2013.
X. Wang and T. Zhang, “A new family of Newton-type iterative methods with and without memory for solving nonlinear equations,” Calcolo, vol. 51, no. 1, pp. 1–15, 2014.
P. Chou, “High-dimension optimization problems using specified particle swarm optimization,” Lecture Notes in Computer Science, vol. 7331, pp. 164–172, 2012.
M. Gobbi, P. Guarneri, L. Scala, and L. Scotti, “A local approximation based multiobjective optimization algorithm with applications,” Optimization and Engineering, vol. 15, no. 3, pp. 619–641, 2014.
R. Brits, A. P. Engelbrecht, and F. van den Bergh, “Solving systems of unconstrained equations using particle swarm optimization,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 3, 2002.
A. Ouyang, Y. Zhou, and Q. Luo, “Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations,” in Proceedings of the 2009 IEEE International Conference on Granular Computing, pp. 17–19, Nanchang, China, August 2009.
Y. Mo, H. Liu, and Q. Wang, “Conjugate direction particle swarm optimization solving systems of nonlinear equations,” Computers & Mathematics with Applications, vol. 57, no. 11-12, pp. 1877–1882, 2009.
Copyright
Copyright © 2020 Zhigang Lian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.