Abstract

Traditional methods such as the quasi-Newton method and Gauss–Newton-based BFGS are often used to solve nonlinear equations. In this paper, we present an improved particle swarm optimization algorithm to solve nonlinear equations. The novel algorithm introduces the historical and local optimum information of particles into the update of a particle's velocity. Five sets of typical nonlinear equations are employed to test the search quality and reliability of the novel algorithm in comparison with the standard PSO algorithm. Numerical results show that the proposed method is effective for the given test problems. The new algorithm can serve as a new tool for solving nonlinear equations, continuous function optimization problems, combinatorial optimization problems, and so on. The global convergence of the given method is established.

1. Introduction

Many practical problems in engineering technology, information security, and other fields can be solved via nonlinear equations [1, 2]. Because of the complexity of nonlinear equations, they are difficult to solve, especially high-dimensional nonlinear equations. Newton's method and its improved forms are extensively used at present, but the Newton–Raphson method has limitations [3]: its convergence and performance characteristics can be highly sensitive to the initial guess supplied to the method, and it is difficult to select a good initial guess for most systems of nonlinear equations. Many researchers have put forward various solution methods for different classes of nonlinear equations. The Jacobian-free Newton–Krylov method is widely used in solving nonlinear equations arising in many applications; however, an effective preconditioner is required at each iteration, and determining one may be hard or expensive. Xu and Coleman [4] proposed an efficient two-sided bicoloring method to determine the lower triangular half of the sparse Jacobian matrix via automatic differentiation. With this lower triangular matrix, an effective preconditioner is constructed to accelerate the convergence of the Newton–Krylov method. Paper [5] introduced ANTIGONE (Algorithms for coNTinuous/Integer Global Optimization of Nonlinear Equations), a general mixed-integer nonlinear global optimization framework. Yuan and Zhang [6, 7] presented a three-term Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations. Yan [8] introduced a new unified two-parameter wave model, connecting integrable local and nonlocal vector nonlinear Schrödinger equations. Fan and Lu [9] presented a modified trust region algorithm for nonlinear equations with the trust region radii converging to zero. The algorithm preserves global convergence in the same way as traditional trust region algorithms; moreover, it converges nearly q-cubically under the local error bound condition, which is weaker than nonsingularity of the Jacobian at a solution. Paper [10] proposed and analyzed novel energy-preserving algorithms for solving the nonlinear Hamiltonian wave equation equipped with homogeneous Neumann boundary conditions. Paper [11] proposed a norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations without involving any information on the gradient or Jacobian matrix, by using some approximate substitutions. Yuan and Hu [12] proposed a new three-term conjugate gradient algorithm under the Yuan–Wei–Lu line search technique to solve large-scale unconstrained optimization problems and nonlinear equations. A numerical method is used to solve systems of equations in paper [13].

Evolutionary algorithms are also used to solve systems of equations, as in [14, 15]. This paper studies how to solve nonlinear equations using the particle swarm optimization (PSO) algorithm. The PSO algorithm was introduced by Shi and Eberhart [16], drawing on a social analogy of swarm behavior in populations of natural organisms, and it has since been extensively developed. Zhang et al. [17] presented a hybrid clustering algorithm based on PSO using dynamic crossover. Chauhan et al. [18] developed a novel inertia weight strategy for particle swarm optimization, proposing three new nonlinear strategies for selecting the inertia weight, which plays a significant role in particles' foraging behavior. Pan et al. [19] proposed an improved consensus protocol based on the velocity and position equations of the canonical PSO algorithm and transformed the dynamical PSO system into a new linear discrete-time system with random variables; the boundary of the consensus region is given to better select the parameters of the PSO algorithm. PSO has been successfully applied in many areas: function optimization, scheduling, fuzzy system control, and others. The core concept of PSO is changing the velocity of each particle, accelerating it toward its previous best position (pbest) and the global best position (gbest). Each particle modifies its current position and velocity according to the distance between its current position and pbest and the distance between its current position and gbest. Suppose the search space is N-dimensional and the swarm is made up of m particles. Each particle is represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iN})$. The velocity of the i-th particle is denoted as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iN})$. The best previous position of the i-th particle is represented as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iN})$, and the global best position of all particles is denoted as $P_g = (p_{g1}, p_{g2}, \ldots, p_{gN})$. The velocity and the position are updated according to the following iterative equations:

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \bigl(p_{id} - x_{id}^{k}\bigr) + c_2 r_2 \bigl(p_{gd} - x_{id}^{k}\bigr), \qquad (1)$$

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}, \qquad (2)$$

in which

(i) $\omega$ is the inertia weight,
(ii) $c_1$ and $c_2$ are acceleration coefficients related to pbest and gbest,
(iii) $r_1$ and $r_2$ are random numbers with uniform distribution U(0, 1).
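For concreteness, the following is a minimal NumPy sketch of the update in equations (1) and (2); the default values for $\omega$, $c_1$, and $c_2$ are common choices from the PSO literature, not parameters taken from this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, omega=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One canonical PSO update, equations (1)-(2).

    x, v   : (m, N) arrays of current positions and velocities
    pbest  : (m, N) array of each particle's best-known position
    gbest  : (N,) array, the best position found by the whole swarm
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # r1, r2 ~ U(0, 1), drawn per particle and dimension
    r2 = rng.random(x.shape)
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # equation (1)
    x = x + v                                                       # equation (2)
    return x, v
```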

José García-Nieto and Enrique Alba used a parallel PSO for gene selection of high-dimensional microarray datasets [20]. Lin et al. [21] proposed a novel method for training the parameters of an adaptive network-based fuzzy inference system (ANFIS) that emphasizes the use of gradient descent (GD) methods.

Haddou and Maheu [22] developed some new smoothing techniques to solve general nonlinear complementarity problems. Litvinov et al. [23] carried out a survey of universal algorithms for solving the matrix Bellman equations over semirings, especially tropical and idempotent semirings. Chen et al. [24] developed a fast Fourier–Galerkin method for solving the nonlinear integral equation reformulated from a class of nonlinear boundary value problems. Wang and Zhang [25] presented a new family of two-step iterative methods to solve nonlinear equations. This paper presents an improved PSO algorithm in order to optimize complex nonlinear equations.

2. A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations

2.1. The Iterative Formula of VCLBPSO Algorithms

Although the PSO algorithm can converge very quickly toward the nearest optimal solution for many optimization problems, it experiences difficulties in reaching the global optimal solution [26]. This is because the diversity of the swarm decreases as the swarm approaches the nearest optimal solution. Significant efforts have been devoted to improving the efficiency of the algorithm and enhancing the diversity of the population. Gobbi et al. [27] studied a local approximation-based multiobjective optimization algorithm with applications.

In this article, we work to increase the diversity of the population's information. In the new algorithm, when the current run is completed, the best position found by the population is used in the position updates of the next run. Relative to the next round of optimization, the current optimal particle is a historical local best, distinct from the next round's global optimum. We call the improved PSO algorithm the velocity-combined local best particle swarm optimization algorithm (VCLBPSO). The historical local information increases the diversity of the reference information used by the particles, which is expected to help overcome the problem of premature convergence.

A swarm of m particles moves through a D-dimensional search space. A particle is defined by its current position $X_i$ and velocity $V_i$. Each particle remembers its best location information $P_i$, at which it has found the best value of the optimization function $f$. Based on different ways of combining the historical local best information with the updated velocity, the VCLBPSO algorithm can be extended in four ways:

(1) The iterative formula of the basic VCLBPSO starts from

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \bigl(p_{id} - x_{id}^{k}\bigr) + c_2 r_2 \bigl(p_{gd} - x_{id}^{k}\bigr), \qquad (3)$$

where equations (3) and (1) are exactly the same, and $\omega$, $c_1$, $c_2$, $r_1$, and $r_2$ have the same meaning as in equation (1). $L^t$ denotes the local best solution found for the optimization problem in the previous t-th run, and the constant coefficient $\lambda$ in the velocity-combination formula (4) balances the local best of the previous t-th run against the optimal information of the current run when updating the velocity. Each particle then updates its position mainly through its position one generation ago and the combined velocity:

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}. \qquad (5)$$

After each run, the historical local best $L^t$ is updated by formula (6).

(2) The random combination, called the velocity randomly combined local best particle swarm optimization algorithm (VRCLBPSO for short), has an iterative formula (equations (7)-(8)) whose symbols have the same meaning as in equation (3), except that in equation (7) the constant coefficient $\lambda$ is replaced by a random number.

(3) The entirely velocity randomly combined local best variant (EVRCLBPSO for short) has an iterative formula whose symbols have the same meaning as in equation (3), with the combination coefficient now an m×D random matrix. If the coefficient is instead a 1×n-dimensional random vector, the algorithm is called NVRCLBPSO for short, and its iterative formula (equation (12)) is similar to that of EVRCLBPSO.
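As an illustration of the velocity-combination step, the sketch below blends the canonical velocity of equation (3) with an attraction toward the previous run's local best $L^t$, weighted by a coefficient λ as in the basic VCLBPSO. The convex-combination form and the attraction term are assumptions made for illustration, not the paper's exact formulas (4)-(12).

```python
import numpy as np

def combine_with_local_best(v_canonical, x, local_best, lam=0.5, rng=None):
    """Blend the canonical PSO velocity (eq. (3)) with a pull toward the
    previous run's local best L^t. This convex combination is an assumed
    reading of the velocity-combination step, not the exact eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    pull = rng.random(x.shape) * (local_best - x)   # attraction toward L^t (assumed form)
    return lam * v_canonical + (1.0 - lam) * pull

# The variants differ only in how lam is drawn (also assumed forms):
#   VRCLBPSO : lam = rng.random()                  # a fresh random scalar per update
#   EVRCLBPSO: lam = rng.random(x.shape)           # an m-by-D random matrix
#   NVRCLBPSO: lam = rng.random((1, x.shape[1]))   # a 1-by-D random vector, broadcast
```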

2.2. The Method of Converting Nonlinear Equations to Function Optimization

To solve nonlinear equations conveniently, the mathematical transformation of nonlinear equations into a function optimization problem is given as follows.

Suppose a system of nonlinear equations is made up of n functions involving n unknown variables, described as follows:

$$f_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, n, \qquad (15)$$

where $f_1, f_2, \ldots, f_n$ is a system of nonlinear functions and $x = (x_1, x_2, \ldots, x_n)^T$ is its unknown vector. In accordance with the algorithm principle, a fitness function is constructed as follows:

$$\min \; \mathrm{fitness}(x) = \sum_{i=1}^{n} f_i^2(x). \qquad (16)$$

The problem of solving the system of nonlinear equations (15) is thus transformed into the optimization problem of minimizing the fitness function (16). By construction, when the fitness function attains the optimal value 0, every $f_i(x)$ is also 0, and the nonlinear equations are solved. When the fitness function is optimized by the improved PSO algorithm and reaches a value near the optimum, the system (15) is also nearly solved.
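As a small worked example of this conversion, the snippet below builds the fitness of (16) as a sum of squared residuals for a made-up two-equation system (the unit circle intersected with the line $x_1 = x_2$); the example system is purely illustrative and is not one of the paper's test problems.

```python
import numpy as np

def residuals(x):
    """Residuals f_i(x) of a made-up two-equation system: the unit circle
    intersected with the line x1 = x2 (not one of the paper's test problems)."""
    return np.array([
        x[0] ** 2 + x[1] ** 2 - 1.0,   # f1(x) = 0
        x[0] - x[1],                   # f2(x) = 0
    ])

def fitness(x):
    """Equation (16): the sum of squared residuals, zero exactly at a root."""
    r = residuals(x)
    return float(np.dot(r, r))

# fitness(np.array([0.5 ** 0.5, 0.5 ** 0.5])) evaluates to (numerically) 0.
```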

2.3. The Pseudocode of VCLBPSO Algorithm for Nonlinear Equations

(i) FUNCTION VCLBPSOE()
(ii) FOR optimization time from 1 to t (t usually is equal to 12)
(iii) Randomly generate the initial population and velocities
(iv) Compute the fitness of all particles using (16)
(v) Record the best fitness of all particles
(vi) WHILE the current number of iterations is less than the final iteration
(vii) Update the particle velocity using (3) of the VCLBPSO algorithm
(viii) Modify the velocity using the update formula of the VCLBPSO algorithm (4), the VRCLBPSO algorithm (8), or the NVRCLBPSO algorithm (12)
(ix) Update the particle position using (5)
(x) Compare the fitness of the two generations and record the better one as the individual historical best
(xi) Record the best particle as this run's global best in the population
(xii) END
(xiii) Using (6), update the historical local best $L^t$; if this is the first run, the VCLBPSO algorithm behaves the same as the basic PSO (1)-(2)
(xiv) END
(xv) Output the position of the best particle and obtain the optimal fitness value
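The following is a self-contained Python sketch of the pseudocode above. The step where the historical local best enters the velocity update (step (viii)) uses the same assumed blending form as earlier; the parameter values, bounds, and position clipping are illustrative choices rather than specifications from the paper.

```python
import numpy as np

def vclbpso_solve(fitness, dim, runs=12, pop=30, iters=200,
                  lam=0.5, bounds=(-5.0, 5.0), seed=0):
    """A sketch of the VCLBPSOE pseudocode. How the historical local best
    enters the velocity update is an assumption, not the exact (4)-(12)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    local_best = None                       # L^t: best position of the previous run
    best_x, best_f = None, np.inf
    for run in range(runs):                                    # (ii)
        x = rng.uniform(lo, hi, (pop, dim))                    # (iii)
        v = rng.uniform(-1.0, 1.0, (pop, dim))
        f = np.apply_along_axis(fitness, 1, x)                 # (iv)
        pbest, pbest_f = x.copy(), f.copy()                    # (v)
        g = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):                                 # (vi)
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = (0.729 * v + 1.49445 * r1 * (pbest - x)
                 + 1.49445 * r2 * (g - x))                     # (vii): eq. (3)
            if local_best is not None:                         # (viii): assumed form
                v = lam * v + (1 - lam) * rng.random(x.shape) * (local_best - x)
            x = np.clip(x + v, lo, hi)                         # (ix): eq. (5)
            f = np.apply_along_axis(fitness, 1, x)
            improved = f < pbest_f                             # (x)
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[np.argmin(pbest_f)].copy()               # (xi)
        local_best = g.copy()                                  # (xiii): update L^t
        if pbest_f.min() < best_f:
            best_f, best_x = pbest_f.min(), g.copy()
    return best_x, best_f                                      # (xv)
```

With the residual-based fitness from Section 2.2, a call such as `vclbpso_solve(fitness, dim=2)` returns an approximate root of the toy system together with its sum of squared residuals.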

3. Experimental Approach

The application of PSO to nonlinear equations has been investigated by some scholars, but such research is limited. Brits et al. introduced the concept of a shrinking particle neighborhood in PSO to optimize systems of unconstrained equations [28]. Ouyang et al. proposed a hybrid particle swarm optimization algorithm to solve systems of nonlinear equations [29]. Mo et al. introduced the conjugate direction method into particle swarm optimization in order to improve PSO for solving systems of nonlinear equations [30]. The methods they proposed are mainly used to optimize low-dimensional equations. Mo and Liu used conjugate direction particle swarm optimization to solve systems of nonlinear equations. In this paper, the improved PSO algorithm is mainly used to optimize mid-dimensional and high-dimensional nonlinear equations, in addition to low-dimensional ones.

Five systems of nonlinear equations are used to test the effect of the improved PSO algorithm.

Equations 1. The theoretical optimal solution is , or :

Equations 2. The theoretical optimal solution is :

Equations 3. The theoretical optimal solution is :

Equations 4. The discretized two-point boundary value problem is similar to the problem in [6]:

where A is the n×n tridiagonal matrix given by

and with .

Equations 5. The unconstrained optimization problem is as follows:

with the Engval function defined by

where with .

4. Experimental Results and Discussion

To test the effect of the improved algorithm on nonlinear equations, the shared parameters of the different algorithms are set to exactly the same values. Each problem is optimized 32 times by each method. In the following, we compare the effectiveness of the improved algorithms and the ordinary PSO on nonlinear equations of different dimensions.

The experimental results are listed in Table 1, together with the relevant parameters. In Table 1, Dim denotes the problem's dimension, and Ps and Genn indicate the population size and the algorithm's terminating generation, respectively.

Table 1 confirms that the improved algorithms obtain better results across different dimensions. Table 1 also shows that introducing historical local best information can increase the diversity of a particle's flight reference information. This keeps the particles' search from falling into a local optimal solution too early, and the accuracy of convergence is improved. The improved PSO obtains smaller minimum, average, and standard deviation values. In general, the VRCLBPSO and NVRCLBPSO algorithms are efficient for low-dimensional problems, and the NVRCLBPSO search is more stable; the EVRCLBPSO algorithm is the best for high-dimensional problems. Their convergence and the distribution of the best values over the 32 runs are shown in Appendix A, in which d is the dimension of the problems.

From Table 1 and Figure 1 of Appendix A, we find that the convergence rate of the improved method on each problem is clearly faster than that of the ordinary PSO. Moreover, on each problem of higher dimension, the EVRCLBPSO algorithm has better convergence speed and stronger search ability. When optimizing high-dimensional nonlinear equation problems, PSO easily falls into a local optimum, while the method presented in this paper successfully jumps out of local optimal solutions. Numerical results show that the proposed method is effective for the given nonlinear equation problems, and the global convergence of the given method is established.

5. Conclusions and Perspectives

So far, research on applying PSO to optimizing equations is very limited, especially for high-dimensional nonlinear equations. In this paper, we propose an improved PSO algorithm for solving nonlinear equations. Experimental comparison with the basic PSO algorithm shows that the improved method achieves better results in both convergence speed and accuracy; for high-dimensional nonlinear equations in particular, the effect is very marked. The application scenarios of the improved algorithm are relatively widespread: it can be applied to unconstrained optimization problems, constrained optimization problems, systems of equations, engineering practice problems, etc.

Solving nonlinear equations is an important problem, and optimizing them is meaningful and valuable for many practical applications. Applying the PSO algorithm and its variants to optimizing nonlinear equations, especially high-dimensional ones, will be a valuable research direction.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors thank Dr. Tomas Oppenheim and Dr. Zhuo Li, School of Engineering, University of California, for their comprehensive revision and some discussion of value. This work was supported by the Shanghai Natural Science Fund (14ZR1417300) and the Guangxi Natural Science Fund (2020GXNSFAA159069).