Special Issue: Machine Learning and its Applications in Image Restoration
Research Article | Open Access
Zhigang Lian, Songhua Wang, Yangquan Chen, "A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations", Mathematical Problems in Engineering, vol. 2020, Article ID 6284583, 9 pages, 2020. https://doi.org/10.1155/2020/6284583
A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations
Traditional methods such as the quasi-Newton method and the Gauss–Newton-based BFGS method are widely used to solve nonlinear equations. In this paper, we present an improved particle swarm optimization algorithm for solving nonlinear equations. The novel algorithm introduces the historical local optimum information of particles into the update of a particle's velocity. Five sets of typical nonlinear equations are employed to test the quality and reliability of the novel algorithm's search, compared with the basic PSO algorithm. Numerical results show that the proposed method is effective for the given test problems. The new algorithm can serve as a new tool for solving nonlinear equations, continuous function optimization, combinatorial optimization problems, and so on. The global convergence of the given method is established.
Many practical problems in engineering technology, information security, and other fields can be solved via nonlinear equations [1, 2]. Because of the complexity of nonlinear equations, they are difficult to solve, especially high-dimensional nonlinear equations. Newton's method and its improved forms are widely used at present, but the Newton–Raphson method has limitations [3]: its convergence and performance characteristics can be highly sensitive to the initial guess supplied to the method, and it is difficult to select a good initial guess for most systems of nonlinear equations. Many researchers have put forward various solution methods for different classes of nonlinear equations. The Jacobian-free Newton–Krylov method is widely used in solving the nonlinear equations arising in many applications; however, an effective preconditioner is required for each iteration, and determining one may be hard or expensive. Xu and Coleman [4] proposed an efficient two-sided bicoloring method to determine the lower triangular half of the sparse Jacobian matrix via automatic differentiation; with this lower triangular matrix, an effective preconditioner is constructed to accelerate the convergence of the Newton–Krylov method. Misener and Floudas [5] introduced ANTIGONE (Algorithms for coNTinuous/Integer Global Optimization of Nonlinear Equations), a general mixed-integer nonlinear global optimization framework. Yuan and Zhang [6, 7] presented a three-term Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations. Yan [8] introduced a new unified two-parameter wave model connecting integrable local and nonlocal vector nonlinear Schrödinger equations. Fan and Lu [9] presented a modified trust region algorithm for nonlinear equations with the trust region radii converging to zero. The algorithm preserves the global convergence of traditional trust region algorithms.
Moreover, it converges nearly q-cubically under the local error bound condition, which is weaker than nonsingularity of the Jacobian at a solution. Liu et al. [10] proposed and analyzed novel energy-preserving algorithms for solving the nonlinear Hamiltonian wave equation equipped with homogeneous Neumann boundary conditions. Liu and Feng [11] proposed a norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations that uses approximate substitutions instead of any gradient or Jacobian information. Yuan and Hu [12] proposed a new three-term conjugate gradient algorithm under the Yuan–Wei–Lu line search technique to solve large-scale unconstrained optimization problems and nonlinear equations. A rescaling-based numerical method for nonlinear evolution equations is studied in [13].
Evolutionary algorithms are also used to solve systems of equations [14, 15]. This paper studies how to solve nonlinear equations using the particle swarm optimization (PSO) algorithm. The PSO algorithm was inspired by a social analogy of swarm behavior in populations of natural organisms [16], and it has been extensively developed. Zhang et al. [17] presented a hybrid clustering algorithm based on PSO with dynamic crossover. Chauhan et al. [18] developed novel inertia weight strategies for particle swarm optimization, proposing three new nonlinear strategies for selecting the inertia weight, which plays a significant role in a particle's foraging behavior. Pan et al. [19] proposed an improved consensus protocol on the basis of the velocity and position equations of the canonical PSO algorithm and transformed the dynamical PSO system into a new linear discrete-time system including random variables; the boundary of the consensus region is given to better select the parameters of the PSO algorithm. PSO has been successfully applied in many areas: function optimization, scheduling, fuzzy system control, and others. The core concept of PSO is changing the velocity of each particle, accelerating it toward its previous best position (pbest) and the global best position (gbest). Each particle modifies its current position and velocity according to the distance between its current position and pbest and the distance between its current position and gbest. Suppose the search space is N-dimensional and the swarm is made up of m particles. Each particle is represented as X_i = (x_i1, x_i2, …, x_iN). The velocity of the particle is denoted as V_i = (v_i1, v_i2, …, v_iN). The best previous position of the i-th particle is represented as pbest_i, and the global best position of all particles is denoted as gbest.
The velocity and the position are updated according to the following iterative equations:

V_i(t + 1) = w·V_i(t) + c1·r1·(pbest_i − X_i(t)) + c2·r2·(gbest − X_i(t)), (1)

X_i(t + 1) = X_i(t) + V_i(t + 1), (2)

in which
(i) w is the inertia weight
(ii) c1 and c2 are acceleration coefficients related to pbest and gbest
(iii) r1 and r2 are random numbers with uniform distribution U(0, 1)
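As a concrete illustration, the canonical velocity and position updates above can be written as a short routine. This is a minimal sketch; the parameter values shown are common defaults from the PSO literature, not values prescribed by this paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One canonical PSO update for a single particle.

    x, v, pbest, gbest are lists of floats of equal length; returns
    the new position and velocity per equations (1)-(2).
    """
    new_v = [
        w * vi
        + c1 * random.random() * (pb - xi)   # attraction toward pbest
        + c2 * random.random() * (gb - xi)   # attraction toward gbest
        for vi, xi, pb, gb in zip(v, x, pbest, gbest)
    ]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

If a particle sits exactly at both pbest and gbest with zero velocity, the update leaves it in place, which matches equations (1)-(2).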
García-Nieto and Alba [20] used a parallel PSO for gene selection on high-dimensional microarray datasets. Lin et al. [21] proposed a novel method for training the parameters of an adaptive network-based fuzzy inference system (ANFIS) with a quantum-behaved PSO in place of traditional gradient descent (GD) methods.
Haddou and Maheux [22] developed new smoothing techniques to solve general nonlinear complementarity problems. Litvinov et al. [23] carried out a survey on universal algorithms for solving the matrix Bellman equations over semirings, especially tropical and idempotent semirings. Chen et al. [24] developed a fast Fourier–Galerkin method for solving the nonlinear integral equation reformulated from a class of nonlinear boundary value problems. Wang and Zhang [25] presented a new family of two-step iterative methods for solving nonlinear equations. This paper presents an improved PSO algorithm to optimize complex nonlinear equations.
2. A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations
2.1. The Iterative Formula of VCLBPSO Algorithms
Although the PSO algorithm can converge very quickly toward the nearest optimal solution for many optimization problems, PSO experiences difficulties in reaching the global optimal solution [26]: the diversity of the swarm decreases as the swarm approaches the nearest optimal solution. Significant efforts have been devoted to improving the efficiency of the algorithm and enhancing the diversity of the population. Gobbi et al. [27] studied a local approximation-based multiobjective optimization algorithm with applications.
In this article, we work to increase the diversity of population information. In the new algorithm, when the current run is completed, the best position found by the population is used in the position update of the next run. Relative to the next run's optimization, the current optimal particle is a historical local best, distinct from the next run's global optimum. We call the improved PSO algorithm the velocity-combined local best particle swarm optimization (VCLBPSO) algorithm. The historical local information increases the diversity of the reference information used by particles, which is expected to help overcome the problem of premature convergence.
A swarm of m particles moves through a D-dimensional search space. A particle is defined by its current position and velocity. Each particle remembers its best location, in which it has found the best solution of the optimization function. Based on the different combinations between the historical local best information and the updated velocity, the VCLBPSO algorithm can be extended in four ways:
(1) Basic VCLBPSO. Its velocity update equation (3) is exactly the same as equation (1), and the symbols in it have the same meaning as in equation (1). The historical local best is the local best solution found in the previous run, and a constant coefficient balances the previous run's local best against the optimal information searched in the current run when updating the velocity. Each particle then updates its position through its previous position and this combined velocity.
(2) The random combination variant, called the velocity randomly combined local best particle swarm optimization (VRCLBPSO) algorithm. Its iterative formula is the same as above, except that in equation (7) the combination coefficient is a random number; the other symbols have the same meaning as in equation (3).
(3) The entirely velocity randomly combined local best variant (EVRCLBPSO). In its iterative formula, the combination coefficient is an m × D random matrix; the other symbols have the same meaning as in equation (3). If the coefficient is instead a 1 × D random vector, the algorithm is denoted NVRCLBPSO, and its iterative formula is similar to that of EVRCLBPSO.
2.2. The Method of Converting Nonlinear Equations to Function Optimization
To solve nonlinear equations more easily, they are transformed into a function optimization problem as follows.
Suppose a system of nonlinear equations is made up of n functions involving n unknown variables, described as follows:

f_i(x) = 0, i = 1, 2, …, n, (15)

where the f_i are nonlinear functions and x is the unknown vector. In accordance with the algorithm principle, a fitness function is constructed as follows:

φ(x) = f_1(x)^2 + f_2(x)^2 + … + f_n(x)^2. (16)
The problem of solving the system of nonlinear equations (15) is thus transformed into the problem of minimizing the fitness function (16). By construction, when the fitness function attains the optimal value 0, each equation residual is also 0 and the nonlinear equations are solved. When the fitness function is optimized by the improved PSO algorithm and a near-optimal value is obtained, the system is also nearly solved.
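This transformation can be illustrated with a minimal Python sketch, assuming the common sum-of-squared-residuals form of the fitness function; the 2×2 example system below is hypothetical, chosen only to demonstrate the construction:

```python
def make_fitness(funcs):
    """Turn a system f_i(x) = 0 into a scalar fitness: the sum of
    squared residuals, which is 0 exactly at a root of the system."""
    def fitness(x):
        return sum(f(x) ** 2 for f in funcs)
    return fitness

# Hypothetical example system: x0 + x1 - 3 = 0 and x0 - x1 - 1 = 0,
# whose unique root is x = (2, 1).
system = [lambda x: x[0] + x[1] - 3,
          lambda x: x[0] - x[1] - 1]
fit = make_fitness(system)
```

At the root the fitness is exactly 0, so any optimizer driving the fitness toward 0 is simultaneously solving the system.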
2.3. The Pseudocode of VCLBPSO Algorithm for Nonlinear Equations
(i) FUNCTION VCLBPSOE()
(ii) FOR optimization run from 1 to t (t is usually set as in (12))
(iii) Randomly generate the original population and velocities
(iv) Compute the fitness of all particles using (16)
(v) Record the best fitness of all particles
(vi) WHILE the current number of iterations is less than the final iteration
(vii) Update the particle velocity using (3) of the VCLBPSO algorithm
(viii) Modify the velocity using the update formula of the VCLBPSO algorithm (4), the VRCLBPSO algorithm (8), or the NVRCLBPSO algorithm (12)
(ix) Update the particle position using (5)
(x) Compare the fitness of the two generations and record the better one as the individual historical best
(xi) Record the best particle as this run's global best in the population
(xii) END
(xiii) Update the historical local best using (6); if this is the first run, the VCLBPSO algorithm is the same as the basic PSO (1)-(2)
(xiv) END
(xv) Output the position of the best particle and the optimal fitness value
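The pseudocode above can be sketched in Python. This is an interpretation, not the authors' exact method: the combination formulas (3)-(14) are not reproduced in this text, so the sketch assumes a convex combination (weight `lam`) of the canonical velocity update and an attraction toward the previous run's best position; the function name and all parameter defaults are ours:

```python
import random

def vclbpso(fitness, dim, bounds, pop=30, iters=200, runs=3,
            w=0.729, c1=1.49445, c2=1.49445, lam=0.5, seed=0):
    """Sketch of the VCLBPSO loop: `runs` successive PSO runs, where every
    run after the first blends the previous run's best position (the
    'historical local best') into the velocity update."""
    rng = random.Random(seed)
    lo, hi = bounds
    hist_best = None                      # historical local best of last run
    overall_best, overall_fit = None, float("inf")
    for _ in range(runs):
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
        vs = [[0.0] * dim for _ in range(pop)]
        pbest = [x[:] for x in xs]
        pfit = [fitness(x) for x in xs]
        g = min(range(pop), key=lambda i: pfit[i])
        gbest, gfit = pbest[g][:], pfit[g]
        for _ in range(iters):
            for i in range(pop):
                for d in range(dim):
                    v = (w * vs[i][d]
                         + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                         + c2 * rng.random() * (gbest[d] - xs[i][d]))
                    if hist_best is not None:
                        # assumed combination with the historical local best
                        v = lam * v + (1 - lam) * (hist_best[d] - xs[i][d])
                    vs[i][d] = v
                    xs[i][d] = min(max(xs[i][d] + v, lo), hi)
                f = fitness(xs[i])
                if f < pfit[i]:
                    pbest[i], pfit[i] = xs[i][:], f
                    if f < gfit:
                        gbest, gfit = xs[i][:], f
        hist_best = gbest[:]              # carried into the next run
        if gfit < overall_fit:
            overall_best, overall_fit = gbest[:], gfit
    return overall_best, overall_fit
```

On a simple 2-D sphere fitness, the sketch drives the fitness close to zero within a few runs, mirroring the first-run-equals-basic-PSO behavior of step (xiii).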
3. Experimental Approach
The application of PSO to nonlinear equations has been investigated by some scholars, but it remains limited. Brits et al. [28] introduced the concept of a shrinking particle neighborhood in PSO to optimize systems of unconstrained equations. Ouyang et al. [29] proposed a hybrid particle swarm optimization algorithm to solve systems of nonlinear equations. Mo et al. [30] introduced the conjugate direction method into particle swarm optimization to improve PSO for solving systems of nonlinear equations. These methods are mainly used to optimize low-dimensional equations. In this paper, the improved PSO algorithm is mainly used to optimize mid-dimensional and high-dimensional nonlinear equations, in addition to low-dimensional ones.
Five nonlinear equations are used to test the effect of the improved PSO algorithm.
Equations 1. The theoretical optimal solution is , or :
Equations 2. The theoretical optimal solution is :
Equations 3. The theoretical optimal solution is :
Equations 4. The discretized two-point boundary value problem is similar to the problem in : where A is the n×n tridiagonal matrix given by , and with .
Equations 5. The unconstrained optimization problem is as follows: with the Engval function defined by , where with .
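The Engval function referenced above is not fully reproduced in this text; as an assumption, the sketch below uses the commonly cited Engval I form from the CUTE test collection, a sum of terms (x_{i-1}² + x_i²)² − 4·x_{i-1} + 3 over consecutive coordinate pairs:

```python
def engval1(x):
    """Assumed Engval I test function (CUTE collection form):
    sum over i = 2..n of (x_{i-1}^2 + x_i^2)^2 - 4*x_{i-1} + 3."""
    n = len(x)
    return sum((x[i - 1] ** 2 + x[i] ** 2) ** 2 - 4 * x[i - 1] + 3
               for i in range(1, n))
```

Because each coordinate pair contributes one term, the function scales naturally to the high dimensions tested in this paper.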
4. Experimental Results and Discussion
To test the effect of the improved algorithms on nonlinear equations, the parameters of the different algorithms are set exactly the same. Each problem is optimized 32 times by each method. In the following, we compare the effectiveness of the improved algorithms and the ordinary PSO on nonlinear equations of different dimensions.
The experimental results and the relevant parameters are listed in Table 1, in which Dim denotes the problem's dimension, and Ps and Genn indicate the population size and the algorithm's termination generation, respectively.
Table 1 confirms that the improved algorithms obtain better results across the different dimensions. It shows that introducing historical local best information increases the diversity of a particle's flight reference information; this keeps the particles from falling into a local optimal solution too early, and the accuracy of convergence is improved. The improved PSO obtains smaller minimum, average, and standard deviation values. In general, VCLBPSO, VRCLBPSO, and NVRCLBPSO are efficient for low-dimensional problems, with the NVRCLBPSO search being the most stable, and EVRCLBPSO is the best for high-dimensional problems. Their convergence and the distributions of the best values over the 32 runs are shown in Appendix A, in which d is the dimension of the problems.
From Table 1 and Figure 1 of Appendix A, we find that the convergence rate of the improved method on each problem is clearly faster than that of the ordinary PSO. Moreover, for each problem of higher dimension, the EVRCLBPSO algorithm has better convergence speed and stronger search ability. When optimizing high-dimensional nonlinear equation problems, PSO easily falls into local optima, while the method presented in this paper successfully jumps out of local optimal solutions. Numerical results show that the proposed method is effective for the tested nonlinear equation problems, and the global convergence of the given method is established.
5. Conclusions and Perspectives
So far, research on applying PSO to optimizing equations is very limited, especially for high-dimensional nonlinear equations. We propose an improved PSO algorithm for solving nonlinear equations in this paper. Experimental comparisons with the basic PSO algorithm show that the improved method achieves better results in both convergence speed and accuracy; the improvement is especially obvious for high-dimensional nonlinear equations. The application scenarios of the improved algorithm are relatively widespread: it can be applied to unconstrained optimization problems, constrained optimization problems, equation problems, engineering practice problems, and so on.
Solving nonlinear equations is an important problem, and optimizing them is meaningful and valuable for many practical applications. Applying the PSO algorithm and its variants to optimizing nonlinear equations, especially high-dimensional ones, will be a valuable research direction.
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
The authors thank Dr. Tomas Oppenheim and Dr. Zhuo Li, School of Engineering, University of California, for their comprehensive revision and some discussion of value. This work was supported by the Shanghai Natural Science Fund (14ZR1417300) and the Guangxi Natural Science Fund (2020GXNSFAA159069).
1. H.-J. Peng, Q. Gao, H.-W. Zhang, Z.-G. Wu, and W.-X. Zhong, “Parametric variational solution of linear-quadratic optimal control problems with control inequality constraints,” Applied Mathematics and Mechanics, vol. 35, no. 9, pp. 1079–1098, 2014.
2. N. H. Tuan, L. D. Thang, D. D. Trong, and V. A. Khoa, “Approximation of mild solutions of the linear and nonlinear elliptic equations,” Inverse Problems in Science and Engineering, vol. 23, no. 7, pp. 1237–1266, 2015.
3. N. Bianchini, S. Fanelli, and M. Gori, “Optimal algorithms for well-conditioned nonlinear systems of equations,” IEEE Transactions on Computers, vol. 50, no. 7, pp. 689–698, 2001.
4. W. Xu and T. F. Coleman, “Solving nonlinear equations with the Newton-Krylov method based on automatic differentiation,” Optimization Methods and Software, vol. 29, no. 1, pp. 88–101, 2014.
5. R. Misener and C. A. Floudas, “ANTIGONE: algorithms for coNTinuous/integer global optimization of nonlinear equations,” Journal of Global Optimization, vol. 59, no. 2-3, pp. 503–526, 2014.
6. G. Yuan and M. Zhang, “A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
7. G. Yuan, T. Li, and W. Hu, “A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems,” Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.
8. Z. Yan, “Integrable PT-symmetric local and nonlocal vector nonlinear Schrödinger equations: a unified two-parameter model,” Applied Mathematics Letters, vol. 47, pp. 61–68, 2015.
9. J. Fan and N. Lu, “On the modified trust region algorithm for nonlinear equations,” Optimization Methods and Software, vol. 30, no. 3, pp. 478–491, 2015.
10. C. Liu, X. Wu, and W. Shi, “New energy-preserving algorithms for nonlinear Hamiltonian wave equation equipped with Neumann boundary conditions,” Applied Mathematics and Computation, vol. 339, pp. 588–606, 2018.
11. J. K. Liu and Y. M. Feng, “A norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations,” Journal of Computational and Applied Mathematics, vol. 344, pp. 89–99, 2018.
12. G. Yuan and W. Hu, “A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations,” Journal of Inequalities and Applications, vol. 2018, no. 1, pp. 1–19, 2018.
13. K. Anada, T. Ishiwata, and T. Ushijima, “A numerical method of estimating blow-up rates for nonlinear evolution equations by using rescaling algorithm,” Japan Journal of Industrial and Applied Mathematics, vol. 35, no. 1, pp. 33–47, 2018.
14. A. Ghodousian and A. Babalhavaeji, “An efficient genetic algorithm for solving nonlinear optimization problems defined with fuzzy relational equations and max-Lukasiewicz composition,” Applied Soft Computing, vol. 69, pp. 475–492, 2018.
15. A. Ullah, S. A. Malik, and K. S. Alimgeer, “Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations,” PLoS One, vol. 13, no. 1, pp. 1–18, 2018.
16. Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), pp. 1945–1950, Washington, DC, USA, July 1999.
17. J. Zhang, Y. Wang, and J. Feng, “A hybrid clustering algorithm based on PSO with dynamic crossover,” Soft Computing, vol. 18, no. 5, pp. 961–979, 2014.
18. P. Chauhan, K. Deep, and M. Pant, “Novel inertia weight strategies for particle swarm optimization,” Memetic Computing, vol. 5, no. 3, pp. 229–251, 2013.
19. F. Pan, Q. Zhang, J. Liu, W. Li, and Q. Gao, “Consensus analysis for a class of stochastic PSO algorithm,” Applied Soft Computing, vol. 23, pp. 567–578, 2014.
20. J. García-Nieto and E. Alba, “Parallel multi-swarm optimizer for gene selection in DNA microarrays,” Applied Intelligence, vol. 37, no. 2, pp. 255–266, 2012.
21. X. Lin, J. Sun, V. Palade, W. Fang, X. Wu, and W. Xu, “Training ANFIS parameters with a quantum-behaved particle swarm optimization algorithm,” Lecture Notes in Computer Science, vol. 7331, pp. 148–155, 2012.
22. M. Haddou and P. Maheux, “Smoothing methods for nonlinear complementarity problems,” Journal of Optimization Theory and Applications, vol. 160, no. 3, pp. 711–729, 2014.
23. G. L. Litvinov, A. Y. Rodionov, S. N. Sergeev, and A. N. Sobolevski, “Universal algorithms for solving the matrix Bellman equations over semirings,” Soft Computing, vol. 17, no. 10, pp. 1767–1785, 2013.
24. X. Chen, R. Wang, and Y. Xu, “Fast Fourier-Galerkin methods for nonlinear boundary integral equations,” Journal of Scientific Computing, vol. 56, no. 3, pp. 494–514, 2013.
25. X. Wang and T. Zhang, “A new family of Newton-type iterative methods with and without memory for solving nonlinear equations,” Calcolo, vol. 51, no. 1, pp. 1–15, 2014.
26. P. Chou, “High-dimension optimization problems using specified particle swarm optimization,” Lecture Notes in Computer Science, vol. 7331, pp. 164–172, 2012.
27. M. Gobbi, P. Guarneri, L. Scala, and L. Scotti, “A local approximation based multi-objective optimization algorithm with applications,” Optimization and Engineering, vol. 15, no. 3, pp. 619–641, 2014.
28. R. Brits, A. P. Engelbrecht, and F. van den Bergh, “Solving systems of unconstrained equations using particle swarm optimization,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 3, 2002.
29. A. Ouyang, Y. Zhou, and Q. Luo, “Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations,” in Proceedings of the 2009 IEEE International Conference on Granular Computing, pp. 17–19, Nanchang, China, August 2009.
30. Y. Mo, H. Liu, and Q. Wang, “Conjugate direction particle swarm optimization solving systems of nonlinear equations,” Computers & Mathematics with Applications, vol. 57, no. 11-12, pp. 1877–1882, 2009.
Copyright © 2020 Zhigang Lian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.